% BEGIN LICENSE BLOCK
% Version: CMPL 1.1
%
% The contents of this file are subject to the Cisco-style Mozilla Public
% License Version 1.1 (the "License"); you may not use this file except
% in compliance with the License.  You may obtain a copy of the License
% at www.eclipse-clp.org/license.
% 
% Software distributed under the License is distributed on an "AS IS"
% basis, WITHOUT WARRANTY OF ANY KIND, either express or implied.  See
% the License for the specific language governing rights and limitations
% under the License. 
% 
% The Original Code is  The ECLiPSe Constraint Logic Programming System. 
% The Initial Developer of the Original Code is  Cisco Systems, Inc. 
% Portions created by the Initial Developer are
% Copyright (C) 2006 Cisco Systems, Inc.  All Rights Reserved.
% 
% Contributor(s): Helmut Simonis, Parc Technologies
% 
% END LICENSE BLOCK

\documentclass[a4paper,12pt]{report}
\usepackage{hevea}
\usepackage{epsfig}
\usepackage{alltt}
\usepackage{makeidx}
%\usepackage{html}
\usepackage{tocbibind}
\usepackage{hyperref}
\usepackage{../texinputs/eclipse}
\input{../texinputs/sepiachiphtml}

%
% \techReport
%
\usepackage{pstricks}
\newcommand{\techReport}[3]{
\thispagestyle{empty}
\begin{center}\fbox{\large TECHNICAL REPORT IC-PARC-#1}\end{center}
\vspace{120pt}
\begin{center}
\huge #2
\vspace{24pt} \\
\large #3
\end{center}
\vspace{\fill}
\newpage
\title{#2}
}


\makeindex
\begin{document}
\techReport{03-2}{Developing Applications with ECLiPSe}{H. Simonis\\
email: Helmut.Simonis@crosscoreop.com\\
}

%\title{Developing Applications with ECLiPSe}%
%\author{H. Simonis\\
%Parc Technologies Limited\\
%8th Floor, The Tower Building\\
%11 York Road\\
%London SE1 7NX\\
%email: Helmut.Simonis@parc-technologies.com\\
%}%
%\maketitle
\begin{abstract}
In this tutorial we provide a methodology for developing large-scale
applications with the ECLiPSe system. This methodology follows a top-down
approach from the original problem specification to detailed implementation
aspects of the resulting code. We concentrate on general design and
programming aspects, and largely ignore specific problem-solving
techniques, which can be found in other tutorials. This report
is intended for programmers developing applications with ECLiPSe,
as well as for those tasked with the maintenance of such programs.

This tutorial is based on an earlier version restricted to internal Parc Technologies use, and contains numerous corrections and improvements suggested by the ECLiPSe team. 
\end{abstract}
\pagestyle{headings}
\tableofcontents
\newpage
\chapter{Introduction}
\label{introduction}

This tutorial is an introduction to the design, development, testing and maintenance of large-scale applications with the ECLiPSe system. It follows a top-down methodology of converting an initial high-level specification into executable code of high quality. We assume that fundamental decisions on selecting the right tool and the right problem solver have already been taken, so that we are committed to using ECLiPSe and one of its problem-solving libraries. We are mainly interested in the engineering aspects of the program development, not in its research and development content.

At the same time we present a number of programming concepts, standard templates that can be used for many ECLiPSe programming tasks. Knowledge of these concepts not only helps the programmer to understand existing code more rapidly, but should also lead to standard solutions whenever a new problem has to be solved.

The tutorial uses excerpts from one of the applications developed by Parc Technologies to highlight some of the design choices, and also to provide code examples taken from real-life components. This application is centered on IP network analysis, so devices like routers and interfaces occur throughout the example code. A detailed understanding of the application domain is not required to understand and re-use the examples.

\section{Intended audience}
The tutorial is aimed at programmers learning how to develop applications with ECLiPSe, and tries to codify a 'best practice'\index{best practice} for developers. It assumes a basic familiarity with the concepts of constraint and logic programming. It is not a reference manual, so many concepts and language elements are introduced only very briefly. The material also includes some general topics from software engineering. Knowledge of general OO-related methodologies may be helpful to put this material into context.

The tutorial should benefit not only the designers and developers of ECLiPSe applications, but also those who maintain existing systems. It explains a number of typical design patterns in ECLiPSe programs, and also discusses how to isolate problems and how to make small, incremental changes to a design.

\section{Related work}
Most of the existing textbooks on logic programming carefully describe all language features, but do not explain how to use these features to achieve particular objectives, or how to develop complete systems that can be maintained and extended easily. 
 
The methodology developed in the CHIC-II project \index{CHIC-II} addresses a much wider methodology question (http://www.icparc.ic.ac.uk/chic2/methodology/)\index{methodology}. It not only covers problems like how to select a problem-solving technology to fit a particular problem, but also discusses project-related issues like team composition\index{team composition}, rapid prototyping\index{rapid prototyping} and the development of problem specifications. On the other hand, it does not discuss the details of developing an ECLiPSe application once the main design choices have been taken.

The book ``The Craft of Prolog'' by Richard O'Keefe is an invaluable source for writing efficient Prolog programs. Many of the techniques and tricks presented there are also applicable to writing efficient ECLiPSe programs. The underlying maxim ``elegance is not optional'' summarizes the spirit of declarative programming. As the book is based on a standard Edinburgh-style Prolog, it does not handle issues of module structure and in-line documentation, nor other ECLiPSe-specific features.

The ECLiPSe documentation contains most of the information provided in this tutorial, but presents it in a very different way. Invariably, it describes all features of the system, many of which are only required for quite specific (if important) tasks like developing new constraint engines inside the system. It can be difficult to find which parts of the documentation contain important hints for solving a particular problem. On the other hand, it will be useful to look up each feature in the user manual and/or the reference manual as it occurs in the tutorial.

The software engineering techniques used in this tutorial are mainly adaptations of OO development methodologies. This field is far too wide to give specific references here, but the excellent Object Technology Series (http://www.awl.com/cseng/otseries) is a good starting point.

\section{Structure of tutorial}
The tutorial follows a top-down methodology for the design of an application. Chapter~\ref{highleveldesign} discusses general issues of modular design and self-documenting code for ECLiPSe programs. The next chapter on data structures compares different ways of representing data internally and externally, and presents a canonical multi-representation format which allows effective access to the data in all parts of an application. Chapter~\ref{gettingittowork} shows how to convert a high-level specification into an executable program early in the development stage. The bulk of the tutorial is contained in chapter~\ref{programmingconcepts}, where we present a number of different programming concepts which can be used as development templates to solve particular problems. This is followed by chapter~\ref{inputoutput} on input/output, a particularly important aspect of developing extensible programs. The last two chapters deal with debugging (chapter~\ref{ifitdoesntwork}) and testing (chapter~\ref{correctnessandperformance}).

There are three appendices to the tutorial. The first one summarizes style rules that good ECLiPSe programs should satisfy. Most of the rules are explained in the main tutorial text; they are collected here for easy reference. Appendix~\ref{layoutrules} gives some rules for how ECLiPSe programs should be indented. The last section (appendix~\ref{corepredicates}) lists core ECLiPSe predicates which should cover most programming needs, and with which all ECLiPSe programmers should be familiar.

\chapter{High Level Design}
\label{highleveldesign}

In this chapter we discuss some high-level design decisions which should be
clarified before starting the actual development of any code.

\section{Application structure}
We first review the overall application structure\index{application structure} found in systems developed at Parc Technologies (at least those using ECLiPSe for part of their development). We distinguish between two application types: one a full application with user interface, database, reporting, etc., and the other a much smaller system, typically reading input from files and writing results back to files while performing a single, batch-type operation.
\begin{figure}[htbp]
\begin{center}
\begin{toimage}
\setlength{\unitlength}{0.5cm}
\begin{picture}(19,9)
\put(0,0){\framebox(10,2){ECLiPSe Application}}
\put(0,4){\framebox(10,2){ECLiPSe/Java Interface}}
\put(0,7){\framebox(10,2){Java Application}}
\put(12,7){\makebox(6,2){User}}
\put(15,8){\oval(6,2)}
\put(4,4){\vector(0,-1){2}}
\put(6,2){\vector(0,1){2}}
\put(4,7){\vector(0,-1){1}}
\put(6,6){\vector(0,1){1}}
\put(10,8){\vector(1,0){2}}
\put(12,8){\vector(-1,0){2}}
\put(3.8,2.5){\makebox(0,0)[br]{Queries}}
\put(6.2,2.5){\makebox(0,0)[bl]{Variable bindings}}
\end{picture}
\end{toimage}\imageflush
\end{center}
\caption{Full application}
\label{fullapplication}
\end{figure}

\begin{figure}[htbp]
\begin{center}
\begin{toimage}
\setlength{\unitlength}{0.5cm}
\begin{picture}(11,7)
\put(0,0){\framebox(10,2){ECLiPSe Application}}
\put(0,4){\makebox(10,2){Data Files}}
\put(5,5){\oval(10,2)}
\put(4,4){\vector(0,-1){2}}
\put(6,2){\vector(0,1){2}}
\put(3.8,2.8){\makebox(0,0)[br]{read input}}
\put(6.2,2.8){\makebox(0,0)[bl]{write results}}
\end{picture}
\end{toimage}\imageflush
\end{center}
\caption{Batch type application}
\label{batchtypeapplication}
\end{figure}
Examples of the first type (see figure~\ref{fullapplication}) are Parc Technologies applications (http://www.parc-technologies.com) like AirPlanner\index{AirPlanner} and RiskWise\index{RiskWise}\footnote{In the following we use a number of examples from the RiskWise application. It is a network analysis tool for IP networks, which uses a constraint solver to determine traffic patterns in the network. If this doesn't make any sense to you, relax. An understanding of the networking application domain is not required to follow this tutorial.}, where everything except the problem solver is developed in Java\index{Java} or related tools. The interface between the main application and the problem solver written in ECLiPSe is via a Java-ECLiPSe interface. In this interface, the main application poses queries for the ECLiPSe solver, passing data and arguments into ECLiPSe. The problem solver then runs the query and returns results as variable bindings in the given query. The Java side only knows about these queries, their data format and the expected results. The internals of the solver, i.e.\ how the queries are resolved, are completely hidden. This defines a clean interface between the application parts, as long as the queries are well defined and documented. Once that design is frozen, the developers of the different parts can continue development independently of each other, using stubs or dummy routines to simulate the other application parts. 

The NDI-Mapper\index{NDI mapper} in RiskWise is an example of the second application type (see figure~\ref{batchtypeapplication}). The application reads some data files (defined in a clear specification), performs some operation on the data and produces results in another set of data files. The top-level query typically just states where the input data should be found and where the results should be written. This batch command \index{batch processing}then internally calls more detailed routines to load the data, etc.

\section{Queries}
For each required function of the interface, we should define a specific query\index{query}. The query consists of three parts. The first is the predicate name, which obviously should bear a relation to the intended function. The second part consists of the input arguments\index{input arguments}, which are used to pass information from the outside to the problem solver. The structure of these arguments should be as simple as possible, easy to generate and to analyze. The third part consists of the output arguments\index{output arguments}, which are used to pass information from the problem solver back to the calling interface. When the query is called, these arguments will be free variables, which are instantiated inside the solver to some result data structure.

The critical issue in defining the queries lies in identifying which data are required and/or produced, without building an actual implementation of the system. Another issue is the format of the data, which should allow a simple and efficient implementation without extra overhead for data manipulation. It is most important to get this interface right from the start, as any change will create re-work and problems integrating different versions of the software.
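To make this concrete, a query for a hypothetical analysis function could take the following shape. All names here are invented for illustration and are not part of the RiskWise API; the first two arguments are inputs, the last two are free variables to be bound by the solver.
\begin{verbatim}
% Hypothetical query: Nodes and Lines are input arguments,
% Flows and Summary are output arguments bound by the solver
?- analyse_flows([node(a), node(b)],   % input: node data
                 [line(a, b, 100)],    % input: line data
                 Flows,                % output: flow results
                 Summary).             % output: summary term
\end{verbatim}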

\section{API}
The API \index{API}(application programmer's interface) definition should be finished and signed off by the designers of the main application and of the problem solver before any code is written\footnote{This doesn't mean that the interface can't evolve over time, but at each point in time there should be a current, valid definition of the API accepted by all concerned.}. The syntax of each query (predicate) should be defined completely, stating the argument structure and types for both inputs and outputs. A mode declaration should be used to indicate input and output arguments.

An important part of the API definition is the identification of constraints\index{constraints} attached to the data. Constraints in this context are conditions that the data must satisfy in order to obtain meaningful results. These constraints ensure that the data set is complete, minimal and consistent. 
\begin{description}
\item[complete] All data required for the application should be present, with proper links between different records. An example from RiskWise: if an interface of a router is mentioned, then the router to which the interface belongs should be mentioned as well.

\item[minimal] The data set should not contain any unneeded information, like duplicated records or references to objects that are not used in the system. 

\item[consistent] The data should be consistent overall, so that different records can be used without verifying their correctness every time. This can increase processing speed dramatically, and simplifies the code inside the application.
\end{description}
Any constraints identified should be added to the specification of a data record in a numbered list, so that they can be identified easily. The example below shows the constraints attached to the\index{backbone\_line} 
\begin{verbatim}backbone_line(Node1, Interface1, Node2, Interface2)
\end{verbatim}
structure in RiskWise.

\begin{enumerate}
\item Each pair (Node, Interface) occurs at most once in the entries. 
\item There can be no entry starting and ending in the same node. 
\item Each node Node1 must have a node/2 fact. 
\item Each node Node2 must have a node/2 fact. 
\item The interface Interface1 cannot be 'local'. 
\item The interface Interface2 cannot be 'local'. 
\item Each pair (Node1, Interface1) must have a bandwidth/5 fact. 
\item Each pair (Node2, Interface2) must have a bandwidth/5 fact. 
\item There can be no backbone line between two net nodes. 
\item There is at most one direct connection between two nodes; each line is uniquely described by the pair (Node1, Node2). 
\item If there is a connection (Node1, Node2) between two nodes, then we cannot have the inverse connection (Node2, Node1) as well. 
\end{enumerate}

For major application interfaces\index{application interfaces}, in particular those linking the Java\index{Java} and the ECLiPSe side, it is recommended to write specific code that checks the data for compliance with these constraints. This simplifies the integration task, either by pointing out which data do not satisfy the constraints and are therefore invalid, or by guaranteeing that the data passed are correct, so that any problem must be on the receiving side of the interface.
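Such a check is typically a predicate that scans the data and reports every violation. The following sketch (the predicate layout is invented, based on the backbone\_line constraints above) checks constraint 3, i.e.\ that each node Node1 has a node/2 fact; the failure-driven loop prints all violations and then succeeds.
\begin{verbatim}
% Sketch only: check constraint 3 of backbone_line/4
check_backbone_nodes :-
    (
        backbone_line(Node1, _, _, _),
        \+ node(Node1, _),
        printf("constraint 3 violated: no node fact for %w%n",
               [Node1]),
        fail
    ;
        true
    ).
\end{verbatim}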

At this point it is also a good idea to write some dummy batch tests which create the proper interface structures to test the queries. These dummy tests must satisfy the constraints on the data, but do not need to contain useful data. They will be used to exercise the interface and the data constraint checks for coverage with consistent data while integration testing is not yet possible.
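A dummy test of this kind can simply build a minimal, constraint-satisfying data set and run the query on it. The following sketch uses invented names; {\it solver\_query/3} stands for whatever query the API actually defines.
\begin{verbatim}
% Sketch of a dummy batch test with minimal consistent data
dummy_test :-
    Nodes = [node(r1, router), node(r2, router)],
    Lines = [line(r1, r2, 100)],
    solver_query(Nodes, Lines, Result),
    printf("result: %w%n", [Result]).
\end{verbatim}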

\section{High level structure}
Once the external API is clearly defined, we can start looking at the next level of internal structure. This will depend on the intended purpose of the system, but we can identify some typical structures that are found in many applications. Here, we present two typical designs, one for solving combinatorial problems, the other for transforming data.

\subsection{Classical LSCO structure}
This structure is typical of large-scale combinatorial optimization\index{large scale combinatorial optimization} (LSCO)\index{LSCO} problems. The flow analysis part of RiskWise follows this structure. It consists of five parts, where each performs one particular task. 
\begin{figure}[htbp]
\begin{center}
\begin{toimage}
\setlength{\unitlength}{0.5cm}
\begin{picture}(11,15)
\put(0,0){\framebox(10,2){output results}}
\put(0,3){\framebox(10,2){find solution}}
\put(0,6){\framebox(10,2){create constraints}}
\put(0,9){\framebox(10,2){create variables}}
\put(0,12){\framebox(10,2){prepare data}}
\put(5,3){\vector(0,-1){1}}
\put(5,6){\vector(0,-1){1}}
\put(5,9){\vector(0,-1){1}}
\put(5,12){\vector(0,-1){1}}
\end{picture}
\end{toimage}\imageflush
\end{center}
\caption{LSCO Application structure}
\label{lscoapplicationstructure}
\end{figure}
\begin{description}
\item[prepare data] In this module we read the data and prepare the data structures that are required for the problem solver. These data structures should group information that belongs together in one record, and must allow fast access to the data, so that the following application parts can access it easily.
\item[create variables] This is followed by the variable generation, where we create all variables that will be used in the solver. These variables are simply placed in slots already provided in the data structures.
\item[create constraints] Once the variables are in place, we can create the constraints between them. This generation should be done in such a way that we can disable or exchange constraints easily.
\item[find solution] Having stated all constraints, we can then find a solution (or the optimal solution) to the problem. This will instantiate the problem variables that we created in step 2.
\item[output results] We then have to take the solution and produce the output results, either as data structures, or by creating some result files.
\end{description}
It may not be immediately obvious, but it is very important to clearly separate the different phases. For example, it may sometimes seem faster to generate some variables together with the constraints that are set up between them, as this saves at least one scan of the data structure. But at the same time it makes it much more difficult to disable such a constraint or to rearrange the constraints in a different order.

Exceptions to this rule are auxiliary variables which are only introduced to express some constraint and which are not part of the problem variables proper, and constraints that are introduced inside the search process in order to find a solution.
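The five phases of this structure are typically chained in a single top-level predicate, one call per phase. This is only a sketch with invented names, not code from RiskWise.
\begin{verbatim}
% Sketch: one call per LSCO phase
solve_problem(InputFile, OutputFile) :-
    prepare_data(InputFile, Data),
    create_variables(Data),
    create_constraints(Data),
    find_solution(Data),
    output_results(Data, OutputFile).
\end{verbatim}
Keeping the phases as separate calls makes it easy to disable or replace one phase without touching the others.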

\subsection{Data transformation structure}
The second high-level design is a data transformation\index{data transformation} structure, used for example in the NDI-Mapper\index{NDI mapper} application. It consists of three parts.
\begin{figure}[htbp]
\begin{center}
\begin{toimage}
\setlength{\unitlength}{0.5cm}
\begin{picture}(11,9)
\put(0,0){\framebox(10,2){output results}}
\put(0,3){\framebox(10,2){transform}}
\put(0,6){\framebox(10,2){read input}}
\put(5,3){\vector(0,-1){1}}
\put(5,6){\vector(0,-1){1}}
\end{picture}
\end{toimage}\imageflush
\end{center}
\caption{Data transformation structure}
\label{datatransformationstructure}
\end{figure}
\begin{description}
\item[read input] In the first module, we read all data into data structures.
\item[transform] We then use these data structures to create new, transformed data in the transformation part of the application.
\item[output results] The last module consists of an output routine, which creates the result data files.
\end{description}
The main limitation of this approach is that all data must be loaded into the application (in main memory)\index{main memory}. This limits the size of the data sets that can be handled to several megabytes. On the other hand, all information can be made available in an efficient way using lookup hash tables or arrays, so that the implementation of the transformation is quite simple.
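As with the LSCO structure, the three modules are chained in a single top-level predicate; the names in this sketch are invented for illustration.
\begin{verbatim}
% Sketch: data transformation top level
transform_files(InputFile, OutputFile) :-
    read_input(InputFile, Data),
    transform(Data, Results),
    output_results(Results, OutputFile).
\end{verbatim}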

\subsection{Data flow considerations}
Once we have decided on a top-level structure, we must consider the input/output arguments of each part. We have to decide which data must be fed into which modules, where new data structures are created and which other modules require them. For each piece of information we must identify its source and its possible sinks. Designing these data flows is an iterative process, assigning functionality to modules and making sure that the required information is available. The design aim should be to minimize the amount of information that must be passed across module boundaries, and to arrange functions in the correct data flow order, so that all information is produced before it is required in another module.

It can be helpful to draw a data flow graph\index{data flow graph} of the system as a directed graph, showing the main components as nodes and the links between sources and sinks of information. A graph of this form should not contain any cycles, and if many links exist between two nodes, we may have to reconsider the split of functionality between them. 

\section{Modules}
We now have an idea of the overall structure of our application, and can turn this into the top-level code structure. We use the module concept of ECLiPSe to clearly separate the components and to define fixed interfaces between them. Each component of our top-level structure should define a module in one file. We place a {\it module}\index{module} directive at the beginning of the file to indicate the module name. 

The following examples are taken from the file {\it flow\_prepare\_data.ecl}\index{flow\_prepare\_data}.
\begin{verbatim}
:-module(flow_prepare_data).
\end{verbatim}

\subsection{Making predicates available}
The {\it export}\index{export} directive makes a predicate definition available to users outside the module. All other predicates can only be used inside the module, not from outside. A module effectively hides all non-exported predicates in a separate name space\index{name space}, so that we do not have to worry about using the same name in two different modules. 

In our example, the module exports a single predicate {\it prepare\_data/12}\index{prepare\_data/12}.
\begin{verbatim}
:-export(prepare_data/12).
\end{verbatim}

\subsection{Using packaged functionality}
If we want to use a predicate exported by another module, we have to use the {\it use\_module}\index{use\_module} directive. It makes all exported predicates of a module available in the current module, and ensures that the module is loaded into the system. We can use the same module in more than one place; the directive makes sure it is loaded only once. The {\it use\_module} directive assumes that the module is defined in a file in the current directory which has the same name as the module. 

The {\it flow\_prepare\_data} module uses predicates from a variety of sources.
\begin{verbatim}
:-use_module(data_topology).
:-use_module(data_peering_policy).
:-use_module(data_traffic).
:-use_module(data_routing).
:-use_module(data_group).
:-use_module(data_general).

:-use_module(flow_statistics).
:-use_module(flow_structures).
:-use_module(flow_utility).
:-use_module(report).
:-use_module(report_utility).
:-use_module(utility).
\end{verbatim}

If we want to use predicates defined in some library, we use the {\it lib}\index{library}\index{lib} directive. It loads the library from the library directory in the ECLiPSe installation, unless we have changed the library path setting to include other directories in the search path.
Libraries in ECLiPSe are modules like all others; they are just stored in a different place in the file system.

In our example, a single library called ``hash'' is loaded.
\begin{verbatim}
:-lib(hash).
\end{verbatim}

\section{Naming schemes}
We now introduce a naming scheme\index{naming scheme} that should be used to define identifiers for the various entities in an ECLiPSe program. A consistent naming scheme makes it easier to maintain and modify the program, and also makes sure that contributions from different programmers can be merged without extra work. The scheme presented here is the one used for RiskWise.

\subsection{Modules and files}
Each module\index{module name}\index{name, module} must have a unique name of the form [a-z][a-z\_]+ . There is a one-to-one correspondence between modules and program files.

All program files in an application should be placed in a single directory, with each file named after the module it contains. The extension for ECLiPSe programs is {\it .ecl}.

\subsection{Predicates}
Predicate names\index{predicate name}\index{name, predicate} are of the form [a-z][a-z\_]*[0-9]* . Underscores are used to separate words. Digits should only be used at the end of the name. Words should be English. The top-level query of an application should be named {\it top}\index{top}, and should execute a default query with some valid data set. Although predicates in different modules can use the same name, this can be confusing and should be avoided. You should also avoid using the same predicate name with different arities. Although legal, this can create problems when modifying code.
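For example, a {\it top/0} entry point might look as follows, where {\it solve/2} and the file name are invented placeholders for the application's real entry point and default data set.
\begin{verbatim}
% Sketch: default query for interactive testing
top :-
    solve("data/small_test.dat", Result),
    writeln(Result).
\end{verbatim}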

\subsection{Variables}
Variable names \index{variable name}\index{name, variable}are of the form [A-Z\_][a-zA-Z]*[0-9]* . Separate words start with capital letters. Digits should only be used at the end. Words should be English. If a variable occurs only once in a clause, its name should start with an underscore\index{underscore}. This marks it as a {\it singleton variable}\index{singleton variable}, and stops compiler warnings about the single occurrence of a variable. If a variable occurs more than once in a clause, its name should not start with an underscore.
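For example, in the (invented) clause below, NodeList occurs twice and is named normally, while the first argument occurs only once and therefore starts with an underscore.
\begin{verbatim}
% _Type is a singleton, NodeList occurs twice
count_nodes(node_group(_Type, NodeList), Count) :-
    length(NodeList, Count).
\end{verbatim}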

\section{Documentation}
A program that is not documented is not usable. It is very hard to deduce the intention of a program piece just from the implementation, but even a few sentences of explanation can simplify this task dramatically. On the other hand, it is imperative that the documentation and the implementation are consistent. In ECLiPSe we use in-line {\it comment} \index{comment directive}directives to integrate the documentation with the program code. This makes it much easier to keep the documentation up to date than with a separate description file.  

\subsection{Comment directive}
The {\it comment} directives are placed in the source code together with other parts of the program. The ECLiPSe system contains some library predicates which extract the comment directives and build a set of interrelated HTML \index{HTML}pages for the documentation. The same system is used to generate the reference manual \index{reference manual}documentation of the library predicates. The {\it comment} directives come in two flavours. One is a comment for a module\index{module comment}\index{comment, module}, which gives an overview of what the module is for. As an example we again use the file {\it flow\_prepare\_data.ecl}.
{\small
\begin{verbatim}
:-comment(summary,"This file contains the data preparation for 
the flow analysis.").
:-comment(desc,html("This file contains the data preparation for 
the flow analysis.
")).
:-comment(author,"R. Rodosek, H. Simonis").
:-comment(copyright,"Parc Technologies Ltd").
:-comment(date,"$Date: 2006/09/23 01:48:40 $").
\end{verbatim}
}
The other form of the {\it comment} directive is the predicate comment\index{predicate comment}\index{comment, predicate}, which describes a particular predicate.
{\small
\begin{verbatim}
:-comment(prepare_data/12,[
summary:"creates the data structures for the flow analysis
",
amode:prepare_data(+,+,+,-,-,-,-,-,-,+,-,-),
args:[
"Dir":"directory for report output",
"Type":"the type of report to be generated",
"Summary":"a summary term",
"Nodes":"a nodes data structure",
"Interfaces":"an interfaces data structure",
"Lines":"a lines data structure",
"Groups":"a groups data structure",
"CMatrix":"a matrix (indexed by nodes) of contributions to 
traffic",
"FMatrix":"a matrix (indexed by nodes) of flow variables",
"ScaleFactor":"the scale factor applied to the traffic data",
"Sizes":"the sizes of min, small, large, max packets",
"PMatrix":"The Pi Matrix containing the next hop information, 
indexed by node keys"
],
desc:html("
This routine creates the data structures for the flow analysis. 
...
"),
see_also:[hop/3]
]).

\end{verbatim}
}
The exact form of the different fields is described in the documentation of the {\it directives} \index{directives section}section of the ECLiPSe built-in predicate documentation in the ECLiPSe reference manual.

\subsection{Documenting all interfaces}
Many predicates in a module only perform very simple tasks which are immediately obvious from the implementation. It would be overkill to document all these predicates. We work with the rule that all module interfaces must be documented, as well as all predicates which either have a complex implementation or which are expected to be customized by the user.

For predicates without a comment directive, we should give a one-line description in a normal ECLiPSe comment in the source code. 
386
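Such a one-line comment might look as follows; the predicate and its description are a hypothetical example:
{\small
\begin{verbatim}
% find_node(+Key,+Nodes,-Node): look up a node entry by its integer key
find_node(Key,Nodes,Node):-
        ...
\end{verbatim}
}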
\subsection{Mode declaration}
We also use mode declarations \index{mode declaration}to document the calling pattern \index{call pattern}of predicates. This uses four symbols:
\begin{description}
\item[+] for arguments which are {\it instantiated}\index{instantiated}, i.e. are not free variables, but which may contain variables
\item[++] for arguments that are {\it ground}\index{ground}, i.e. do not contain variables
\item[-] for arguments which are free variables
\item[?] for an unknown mode, where we either do not know the mode or do not care whether the argument is instantiated or not
\end{description}
While the compiler uses mode declarations for code optimization, we use them mainly to document the calling pattern. They allow us to isolate a predicate definition and to understand its purpose without checking all callers\footnote{Note that the compiler currently does not perform a mode check, i.e. it does not generate compile/runtime errors if a predicate is called with a wrong mode.}.

To continue with our example module {\it flow\_prepare\_data}, the one exported predicate has the following mode declaration\footnote{We can see that we have not followed the rule to place all input arguments before the output arguments.}.
\begin{verbatim}
:-mode prepare_data(+,+,+,-,-,-,-,-,-,+,-,-).
\end{verbatim}
\subsection{Documentation query}
If a system contains many modules, it can be helpful to provide a query which automatically generates the documentation for all files. In RiskWise, there is a module {\it document} \index{document}with an entry point {\it document/0} which creates the complete documentation tree. It uses the predicates {\it icompile/1} and {\it ecis\_to\_htmls/4} from the {\it document} library to extract the documentation information from the source files and to build the required HTML files. Whenever we add a new module to the source of the application, we have to add its name to the {\it components} list.
\begin{verbatim}
document:-
        components(L),
        (foreach(Module,L) do
           icompile(Module)
        ),
        getcwd(Dir),
        ecis_to_htmls([Dir],'HTML Doc',[],'ApplicationName').


components([
        module1,
        module2,
        ...
        modulen]).
\end{verbatim}
\chapter{Data Structures}
\label{datastructures}

In this chapter we discuss the choice of data structures for the different application parts. Next to the top-level design, this is the most important aspect that must be specified correctly right from the beginning of a project. The wrong choice of a data structure may mean significant re-work later on to repair some deficiency, while a good data structure design can simplify the coding process and lead to a very efficient implementation.

\section{External data representation}
The first question is how the data \index{external data}will be fed to the application. We can distinguish five alternatives.

\begin{description}
\item[arguments] In the first alternative, all data are passed in arguments to the query. Multiple items of the same type will usually be represented as lists, with structures to hold the attributes of the different objects. This form has the advantage that each query can be run with a completely new data set without changing the database or creating a new set of files. But debugging data in this form can be more difficult, as there is no direct way to look up a data item. This method also requires work on the Java side to build all the data structures before a call to the ECLiPSe solver. A similar effort is required to develop testing code written in ECLiPSe which exercises the interface.

\item[data files] The second alternative is to use data files in a fixed format. The ECLiPSe program then has to read these files and build the internal data structures at the same time. Depending on the format, this may require parsing the input with definite clause grammars (DCG) \index{DCG}\index{definite clause grammar}(see section \ref{howtousedcgs}), adding to the development effort\footnote{ECLiPSe 5.4 contains a freeware XML (http://www.xml.org) parser which handles most of the detail of parsing XML files. This makes XML a very exciting new possibility as a data exchange format for ECLiPSe. The parser is described in the ``Third Party Library'' section of the reference manual.}. But as the files can be read and written easily, it is quite simple to create test data sets and to analyze problems by hand. The design of the fixed format may require some extra effort if we want to use the full character set for atoms and strings. A proper quoting mechanism may be required in order to distinguish, say, a comma separator from a comma contained inside a data field.

\item[prolog terms] The third alternative is to use data files as before, but to format them as valid Prolog terms \index{Prolog term}that can be read directly with the ECLiPSe term I/O predicates. This avoids the overhead of writing parsers in ECLiPSe, but may be difficult for the calling side of the application, unless that is also written in ECLiPSe. Note that we again may face quoting problems, in particular for single and double quotes.

\item[EXDR] ECLiPSe also provides a binary data format called EXDR that can be used to exchange information. This can be generated and parsed quite easily in ECLiPSe and in Java, and often allows significant space savings. In addition, problems with quoting are avoided. A disadvantage is that EXDR files are not directly readable by humans, and so may require extra effort during debugging.

\item[facts] The last alternative is to store the data as facts \index{fact}in the application. They can then be accessed from any part of the ECLiPSe code quite easily. Testing the code is simple by compiling some data files into the system. The Java interface can also store facts into the database quite easily. But changing the data for a new query can be rather complex, and may require recompiling some data modules.

\end{description}

We should note that instead of using files we can also build queues between the ECLiPSe and the Java parts of the application, avoiding the need for file system space.

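To illustrate the Prolog term alternative above, the following sketch reads a whole file of terms with the standard term I/O predicates {\it open/3}, {\it read/2} and {\it close/1}; the predicate names are our own:
\begin{verbatim}
% read all Prolog terms in File into a list
read_term_file(File,Terms):-
        open(File,read,S),
        read_term_list(S,Terms),
        close(S).

% read/2 returns the atom end_of_file at the end of the stream
read_term_list(Stream,Terms):-
        read(Stream,Term),
        (Term == end_of_file ->
            Terms = []
        ;
            Terms = [Term|Rest],
            read_term_list(Stream,Rest)
        ).
\end{verbatim}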
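For the EXDR alternative, ECLiPSe provides the predicates {\it write\_exdr/2} and {\it read\_exdr/2}. A minimal sketch (predicate names are our own; the terms written must of course lie within the EXDR-representable subset):
\begin{verbatim}
% write a term to File in EXDR format
save_exdr(File,Term):-
        open(File,write,S),
        write_exdr(S,Term),
        close(S).

% read it back
load_exdr(File,Term):-
        open(File,read,S),
        read_exdr(S,Term),
        close(S).
\end{verbatim}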
Which of these methods should be used? This depends on the application. Passing data as arguments is clearly the cleanest way, but requires significant work on the interface and on code for testing. Using data files in fixed formats is simple if the format is defined correctly, but its use of the file system can cause problems when multiple queries should be run concurrently on the same machine. Using Prolog terms in data files has the same disadvantage, but is very simple to use if different ECLiPSe systems exchange data. EXDR files are the safest form in which to store data, but also the least intuitive. Using queues instead of files avoids problems with multiple instances running at the same time, but requires some form of logging to allow debugging. Using facts is a valid alternative if most of the data do not change from one query to the next, but requires extra work to reclaim memory after each change. The following table summarizes the advantages and disadvantages of each method.

\begin{table}[htbp]
\begin{tabular}{|l|r|r|r|r|r|}
\hline
Property & Argument & Data file & Terms & Facts & EXDR\\
\hline
Multiple runs & ++ & + & + & - & +\\
\hline
Debugging & - & + & + & ++ & -\\
\hline
Test generation effort & - & + & + & + & -\\
\hline
Java I/O effort & - & + & - & + & +\\
\hline
ECLiPSe I/O effort & ++ & + & ++ & ++ & ++\\
\hline
Memory & ++ & - & - & - - & -\\
\hline
Development effort & + & - & + & + & -\\
\hline
\end{tabular}
\caption{\label{Data representation}Data representation}
\end{table}

\section{Internal data representation}
The next question is how the data should be represented inside \index{internal data}the application. For this purpose we will have to introduce data structures\index{data structures} which allow rapid access to information, which deal with multiple data sets in different parts of the application, and to which we can add information in a structured way. It should be clear that the built-in fact data base cannot be used for this purpose. Instead, we have to pass the information via arguments of the predicates. In the following sections, we will discuss how the data should be structured to simplify access and coding.

Note that all our data structures use {\it single assignment}\index{single assignment}, i.e. there is no destructive assignment\index{destructive assignment} in the language\footnote{Destructive assignment in the hash library is hidden from the user.}. Instead of removing\index{remove data}\index{change data}\index{delete data} or changing elements of a structure, we always make a near-copy with some information removed or changed.

\subsection{Named structures}
The principal data representation feature of ECLiPSe is the named structure\index{named structure}\index{structure, named}. These are terms where each argument is linked to an argument name. We can access one or more of the arguments with the {\it with}\index{with} operator. Named structures are very similar to structures in other languages; the arguments of the structure correspond to attributes of the entity represented. Different attributes can have different types, so that we can store diverse information in a named structure.

In order to use a structure, it must be defined with a {\it struct}\index{struct} definition. We can define a structure either {\it local}\index{local struct} to a module or {\it export} \index{export struct}the definition so that the same structure can be used in other modules which import the definition. As part of a systematic design we normally create a module which contains nothing but exported structure definitions. This module is then imported with a {\it use\_module}\index{use\_module} directive in all other modules of the application which use the structures. If a structure is used in one module only, we should define it as a local structure in that module.

We also use comment directives to document the named structures, just as we do for exported predicates. For each attribute name, we define the data type of the attribute. Normally, these will be atomic data types \index{atomic data types}(integer, real, atom, string), but that is not required. An attribute can hold any data type that we can pass as an argument to a predicate.

As an example of a named structure we use a small part of the RiskWise module {\it flow\_structures}\index{flow\_structures}. 
\begin{verbatim}
:-comment(struct(group),[
summary:"
this data structure describes the group object
",
fields:[
"name":"atom, name of the group",
"type":"atom, one of pop, interconnection, vpn or tariff_class",
"index":"integer, unique index of the group",
"list":"list of interface indices belonging to the group",
"nodes":"list of nodes which contain interfaces of that group"
]
]).
:-export struct(group(name,
                      type,
                      index,
                      list,
                      nodes)).
\end{verbatim}

\subsection{Placeholder variables}
\index{placeholder variables}If we do not specify a fixed attribute value when the named structure is created, then its value will be a free variable which can be bound later on. This is useful for two main purposes. On the one hand, we can define attributes of a structure which will hold the constraint variables\index{constraint variables} of a problem; on the other hand, we can leave some attributes initially unassigned so that we can fill these slots with the results of a computation later on.

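Using the {\it group} structure from the previous section, the following sketch creates a group whose {\it list} and {\it nodes} attributes are left as placeholders, and fills the {\it list} slot later by unification; the predicate names are our own:
\begin{verbatim}
make_group(Name,Type,Index,Group):-
        Group = group with [name:Name,
                            type:Type,
                            index:Index].
% the list and nodes attributes remain free variables

% later, bind the placeholder to the computed interface list
fill_group_list(Group,List):-
        Group = group with [list:List].
\end{verbatim}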
\subsection{Nested structures vs. key lookup}
\index{structure, nested}\index{nested structure}A very common data representation problem is how to access information about one structure from another structure, for example in RiskWise how to access the information about a router from an interface of the router. There are two main alternatives. The first is to insert the data of the first entity (the router) directly into the representation of the second entity (the interface) as an additional attribute; the second is to store a key which can be used to look up the entity. Although the first method has the advantage of avoiding the extra lookup, we do not recommend this approach. If we have recursive references between objects (in our example above, if the router also contains links to all its interfaces), then this direct representation becomes an infinite data structure, which causes problems for printing and debugging. If we use the second approach, we obviously need a way to find the entity belonging to a particular key without too much overhead. The choice of the key depends on the representation of our overall data structure, which we will discuss in the next sections.

\subsection{Lists}
A natural way to represent a collection of items of the same type is to use lists\index{list}. They are very convenient for handling an arbitrary number of items by iterating on successive heads of the list, until the empty list is reached. Unfortunately, finding a particular item in a list is a very expensive operation, as we have to scan the list sequentially.

We should never use a list when we can use a structure\index{structure} instead. If we know that a collection will always have the same number of items (say 3), it is much better to use a structure with that number of arguments than to use a list. 

\subsection{Hash tables}
Hash tables\index{hash tables} are a very useful alternative to lists if we sometimes want to look up items rather than iterate over all of them. They are defined in the library {\it hash}.
We can add items one by one, without an a priori limit on the number of items.
As keys \index{key}we can use numbers, atoms or arbitrary terms, but atoms are the most common keys in a hash table. This is very useful when converting input data, since the external data representation will often use names (atoms) to identify objects.

While it is possible to iterate over all items of a hash table, this is not as simple as iterating over a list or an array.

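A minimal sketch of this use of the {\it hash} library, with hypothetical items keyed by name:
\begin{verbatim}
:-lib(hash).

% build a hash table mapping names to items from a list of Name-Item pairs
build_table(Pairs,Table):-
        hash_create(Table),
        (foreach(Name-Item,Pairs),
         param(Table) do
            hash_add(Table,Name,Item)
        ).

% look up an item by its name; fails if the key is unknown
find_item(Table,Name,Item):-
        hash_get(Table,Name,Item).
\end{verbatim}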
\subsection{Vectors and arrays}
Vectors\index{vector} are another way to represent a collection of items. Each item is associated with an integer key in the range from 1 to $N$, where $N$ is the size of the vector. Accessing individual entries by index is very fast, and iterating over all entries is nearly as simple as for lists. The main drawbacks of a vector representation are that we have to know the total number of items beforehand, when we first create the vector, and that the keys must be consecutive integers in the range from 1 to $N$.

Multi-dimensional arrays \index{array}are simply nested vectors; they are created with the {\it dim} \index{dim}predicate for a given dimension and size. Access to an element is with the {\it subscript}\index{subscript} predicate (see section \ref{iterationonarray} for an example).

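A short interactive sketch of creating and accessing an array with {\it dim} and {\it subscript}:
\begin{verbatim}
?- dim(Matrix,[3,4]),            % create a 3x4 array of free variables
   subscript(Matrix,[2,3],X),    % X is the element in row 2, column 3
   X = 42,
   dim(Matrix,Sizes).            % dim/2 also queries the size of an array
\end{verbatim}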
\subsection{Multi-representation structures}
Each of the alternative representations given above has advantages and disadvantages. To obtain a very flexible representation, we can choose a multi-representation structure\index{multi representation}, in which a collection of items is represented as a list, a hash table and an array at the same time. The list representation can be used for a very simple iteration over all items, the hash table is used in the initial data input phase to find items with a given name, and the array of items is used in the core routines of the solver for the fastest access by an integer index. 

The memory requirements of this multi-representation scheme are quite low. The storage holding the items themselves is shared between all representations; we only need the additional space for the list, hash table and array structures.

In RiskWise\index{RiskWise}, we use the multi-representation scheme for most data structures, with special access predicates like {\it find\_interface/3}\index{find\_interface/3} to access items with either numeric indices or atom keys. References from one entity to another are by integer key, e.g. each interface structure contains as an attribute the integer key value of the node (router) to which it belongs.

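The construction of such a structure can be sketched as follows, assuming hypothetical {\it item} and {\it items} structures and the {\it hash} library:
\begin{verbatim}
:-lib(hash).
:-local struct(item(name,value)).
:-local struct(items(list,hash,array)).

make_items(List,Items):-
        Items = items with [list:List,hash:Hash,array:Array],
        length(List,N),
        dim(Array,[N]),
        hash_create(Hash),
        (foreach(Item,List),
         foreacharg(Item,Array),    % the array shares the items of the list
         param(Hash) do
            Item = item with [name:Name],
            hash_add(Hash,Name,Item)
        ).
\end{verbatim}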
\chapter{Getting it to Work}
\label{gettingittowork}

Once the initial design is finished and the top-level structure of the program has been defined, we should convert this specification into workable code as quickly as possible. Problems with parameter passing, missing information or a wrong sequence in the data flow can be detected much more easily this way. We propose to use stubs and dummy code to obtain an (incomplete) implementation directly from the specification.

\section{Stubs}
\index{stubs}Each query has been defined in the form of a predicate, with input and output parameters. Regardless of the actual function of the query, it is easy to generate a predicate which syntactically behaves like the finished program. It reads the input parameters and creates output terms which satisfy the specification. Initially, it does not matter if the output parameters are not consistent with the input values, as long as their form is correct.

If a top-level query has already been refined into smaller components, we can immediately write the body of the top-level predicate calling the individual components in the right order and with the correct parameters. Adding stub definitions of these components again leads to an executable program.

Whenever we finish development of one of the components, we can immediately replace the stub code with the working implementation. Provided that the inputs are sufficiently simple, we get a simulated version of our application that we can convert piece by piece into the real application.

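For the {\it prepare\_data/12} predicate documented earlier, a stub might look as follows; all output values here are meaningless placeholders whose only purpose is to have the right form:
\begin{verbatim}
:-mode prepare_data(+,+,+,-,-,-,-,-,-,+,-,-).
prepare_data(_Dir,_Type,_Summary,Nodes,Interfaces,Lines,Groups,
             CMatrix,FMatrix,_ScaleFactor,Sizes,PMatrix):-
        Nodes = [],
        Interfaces = [],
        Lines = [],
        Groups = [],
        dim(CMatrix,[1,1]),
        dim(FMatrix,[1,1]),
        Sizes = sizes(0,0,0,0),    % dummy packet sizes
        dim(PMatrix,[1,1]).
\end{verbatim}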
\section{Argument checking}
\index{argument checking}It is a good idea, at least for the top-level queries, to verify all parameters systematically. In the specification, we have defined various constraints that the input data must satisfy. Most of these constraints can be translated without too much work into checks that verify them. A separate module for error checking can handle this work and leave the application core to rely on the correctness of the data.

In RiskWise, the module {\it error\_checking}\index{error\_checking} performs these checks, using a simple language which turns data constraints into executable rules.

\section{Early testing}
Experience has shown that the testing\index{testing} and tuning\index{tuning} of an application are by far the most time consuming activities in the development of a LSCO system. It is very important that we prepare test data sets as early as possible, together with some test routines which exercise our API queries with these data sets.

If we use different test sets \index{test data}on our stub implementation right from the start, then we can detect problems early on during the development of individual components.

\section{Line Coverage}
Another tool that is very useful at this stage of development is the line coverage \index{line coverage}profiler \index{profiler}in the {\it coverage}\index{coverage} library. Running this tool on the stub implementation, we can check that each piece of code is exercised by our test sets.

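A typical coverage run might look as follows; the module name and the test entry point are assumptions for illustration:
\begin{verbatim}
:-lib(coverage).

cover:-
        coverage:ccompile(flow_prepare_data),  % compile with coverage counters
        test_all,                              % run the test queries
        coverage:result(flow_prepare_data).    % generate the coverage report
\end{verbatim}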
\section{Heeding warnings}
When we load our first implementation into the ECLiPSe system, it is quite possible that we will see a number of error\index{error message} and warning messages\index{warning messages}. Errors will usually be caused by simple syntax problems, by forgetting to define some predicate or by not importing a module where it is required. These errors are typically easy to fix once we understand which part of the program is responsible.

It is tempting to ignore warnings in order to get the code running as quickly as possible. That would be a big mistake. We should eliminate all warnings about singleton variables\index{singleton variable} and missing predicate \index{missing predicate}definitions before continuing. Not only will this lead to the detection of problems in the code at this point, we will also immediately see if new warnings are introduced when we change some part of the program. 

\section{Keep it working}
As a general rule, once we have created a working stub system, we should always keep the program working by making changes in small increments and testing after each change. This way we know which part of the program was modified last and which is therefore most likely to cause a problem. 


\chapter{Programming Concepts}
\label{programmingconcepts}

\section{Overview}
In this chapter we present typical programming concepts\index{programming concept} in ECLiPSe, with example uses from the RiskWise application. These programming concepts each perform one particular operation in an efficient way, and show how such tasks should be programmed in ECLiPSe. They can be adapted to specific tasks by adding additional parameters, changing calls inside the concepts or passing different data structures. 

The presentation of the concepts follows the same pattern: we first describe the concept in general terms and then present the parameters required. This is followed by one or several implementations of the concept in ECLiPSe, and some larger examples from the RiskWise code.

\pagebreak
\section{Alternatives}
\paragraph{Description} This concept is used to choose between alternative \index{alternative}actions based on some data structure. For each alternative, a guard $q_{i}$ is specified. The guard \index{guard}is a test which succeeds if the condition for selecting that alternative is met. The action \index{action}$r_{i}$ is executed when the guard succeeds. In order to choose only the right alternative, and not to leave any unwanted choicepoints in the execution, we must eliminate the remaining alternatives once a guard succeeds. For this we use a cut (!) \index{cut}after each guard but the last. We can leave out the cut after the last guard, as there are no choices left at this point.
\paragraph{Parameters}
\begin{description}
\item[X] a data structure
\end{description}
\paragraph{Schema}
\begin{verbatim}
:-mode alternatives(+).
alternatives(X):-
        q1(X),
        !,
        r1(X).
alternatives(X):-
        q2(X),
        !,
        r2(X).
alternatives(X):-
        qn(X),
        rn(X).
\end{verbatim}
\paragraph{Comments}
Very often, other parameters must be passed either to the guards or to the actions. 

The errors introduced by leaving out a cut which commits to a choice are very hard to debug, and may only show up after a long execution. It is much better to always cut after each guard.

When adding new parameters it is important to ensure that they are added to all clauses of the predicate. If a parameter is not used in some clause, then it should be added as a singleton variable.
If we miss an argument in one of the clauses in the middle, the compiler will create an error message about {\it non consecutive clauses}\index{non consecutive clauses}. But if we miss an argument in either the first or the last clause, the compiler will just treat this as another predicate definition with the same name, but a different arity. Errors of this form are very hard to spot.
\paragraph{Example}
\index{interface\_type/3}
\begin{verbatim}
:-mode interface_type(+,+,-).
interface_type(_Node,local,local):-
        !.
interface_type(Node,_Interface,backbone_net):-
        node(Node,net),
        !.
interface_type(Node,Interface,backbone):-
        backbone_line(Node,Interface,_,_),
        !.
interface_type(Node,Interface,backbone):-
        backbone_line(_,_,Node,Interface),
        !.
interface_type(Node,Interface,interconnection):-
        group(interconnection,_,Node,Interface),
        !.
interface_type(_Node,_Interface,customer).
\end{verbatim}
Here we branch on information passed in the first two arguments, and return a result in the last argument. The last clause is a default rule, saying that the interface type is {\it customer} if none of the other rules applied.

Some programmers prefer to make the output unification explicit, like so:
\begin{verbatim}
:-mode interface_type(+,+,-).
interface_type(_Node,local,Result):-
        !,
        Result = local.
interface_type(Node,_Interface,Result):-
        node(Node,net),
        !,
        Result = backbone_net.
interface_type(Node,Interface,Result):-
        backbone_line(Node,Interface,_,_),
        !,
        Result = backbone.
interface_type(Node,Interface,Result):-
        backbone_line(_,_,Node,Interface),
        !,
        Result = backbone.
interface_type(Node,Interface,Result):-
        group(interconnection,_,Node,Interface),
        !,
        Result = interconnection.
interface_type(_Node,_Interface,Result):-
        Result = customer.
\end{verbatim}
This has advantages if the predicate may be called with the last argument instantiated.
\pagebreak
\section{Iteration on lists}
\paragraph{Description}
\index{iteration, list}\index{list}This concept is used to perform some action on each element of a list. There are two implementations given here. The first uses the {\it do}\index{do loop}\index{loop} loop of ECLiPSe, the second uses recursion\index{recursion} to achieve the same purpose. In the {\it do} loop, the {\it foreach}\index{foreach} keyword describes an action for each element of a list. The first argument (here $X$) is a formal parameter\index{formal parameter} of the loop. At each iteration, it will be bound to one element of the list. The second argument is the list over which we iterate.

It is a matter of style whether to use the first or second variant. For simple iterations, the {\it do} loop is usually more elegant. We can also often use it {\it inline}\index{inline}, and avoid introducing a new predicate name just to perform some iteration.
\paragraph{Parameters}
\begin{description}
\item[L] a list
\end{description}
\paragraph{Schema}
\begin{verbatim}
/* version 1 */

:-mode iteration(+).
iteration(L):-
        (foreach(X,L) do
            q(X)
        ).

/* version 2 */

:-mode iteration(+).
iteration([]).
iteration([H|T]):-
        q(H),
        iteration(T).

\end{verbatim}
\paragraph{Comments}
If we want to scan several lists in parallel, we can use multiple {\it foreach} statements\index{foreach, multiple} in the same {\it do} loop. The following code fragment calls predicate {\it q} for the first elements of lists $L$ and $K$, then for the second elements, etc.
\begin{verbatim}
:-mode iteration(+,+).
iteration(L,K):-
        (foreach(X,L),
         foreach(Y,K) do
            q(X,Y)
        ).
\end{verbatim}
This requires that the lists are of the same length, otherwise the {\it do} loop will fail. 

Note that we can put as many parallel operations into a {\it do} loop as we want; they are all executed inside one big loop. We can of course also nest {\it do} loops, so that one loop is executed inside another.

The {\it foreach} operator can also be used to create a list in a {\it do} loop. This is shown in the transformation concept.

Very often, we have to pass additional parameters into the {\it do} loop. We do this with the {\it param}\index{param} keyword, which lists all variables from outside the loop that we want to use inside the loop. A variable which is not mentioned as a {\it param} argument is unbound inside the loop. Normally, this will create a warning about a singleton variable\index{singleton variable} inside the {\it do} loop. The following code fragment shows the use of {\it param} to pass variables $A$, $B$ and $C$ to the call of predicate {\it q}.
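As a small example of creating a list with a second {\it foreach} (the predicate is our own):
\begin{verbatim}
% square each element of a list, collecting the results in a new list
squares(L,Squares):-
        (foreach(X,L),
         foreach(Y,Squares) do
            Y is X*X
        ).
\end{verbatim}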
\begin{verbatim}
:-mode iteration(+,+,+,+).
iteration(L,A,B,C):-
        (foreach(X,L),
         param(A,B,C) do
            q(X,A,B,C)
        ).
\end{verbatim}
\paragraph{Example}
\index{set\_group\_of\_interfaces/2}
\begin{verbatim}
% set the group fields inside the interfaces for each interface
:-mode set_group_of_interfaces(+,+).
set_group_of_interfaces(L,Interfaces):-
        (foreach(group with [type:Type,
                             name:Name,
                             interface:I],L),
         param(Interfaces) do
            find_interface(I,Interfaces,Interface),
            set_group_of_interface(Type,Name,Interface)
        ).
\end{verbatim}
Here we use the information that each member of the list $L$ is a term {\it group/4} \index{group term}to replace the formal parameter with a term structure where we access individual fields directly. Also note that the body of the loop may contain more than one predicate call.
\pagebreak
\section{Iteration on terms}
\paragraph{Description}
\index{iteration, term}We can iterate not only over all elements of a list, as in the previous concept, but also over all arguments of a term. Obviously, this only makes sense if all arguments of the term are of a similar type, i.e. the term is used as a {\it vector}\index{vector}. The {\it foreacharg}\index{foreacharg} keyword of the {\it do} loop iterates over each argument of a term.
\paragraph{Parameters}
\begin{description}
\item[T] a term
\end{description}
\paragraph{Schema}
\begin{verbatim}
:-mode iteration(+).
iteration(T):-
        (foreacharg(X,T) do
            q(X)
        ).
\end{verbatim}
\paragraph{Comments}
We can use multiple {\it foreacharg} keywords to scan multiple vectors at the same time, but we cannot use {\it foreacharg} to create terms\index{create terms} (we do not know the functor of the term). If we want to create a new term, we have to generate it with the right functor and arity before the {\it do} loop. The following code segment performs the vector addition\index{vector addition} $\vec{C} = \vec{A}+ \vec{B}$.
\index{vector\_add/3}
\begin{verbatim}
:-mode vector_add(+,+,-).
vector_add(A,B,C):-
        functor(A,F,N),
        functor(C,F,N),
        (foreacharg(X,A),
         foreacharg(Y,B),
         foreacharg(Z,C) do
           Z is X + Y
        ).
\end{verbatim}
If the terms $A$ and $B$ do not have the same number of arguments, the predicate will fail.
\pagebreak
\paragraph{Example}
\index{interface\_mib\_add/3}
\begin{verbatim}
:-mode interface_mib_add(+,+,-).
interface_mib_add(A,B,C):-
        C = interface_mib with [],
        (foreacharg(A1,A),
         foreacharg(B1,B),
         foreacharg(C1,C) do
            C1 is A1 + B1
        ).
\end{verbatim}
This predicate adds vectors with the functor {\it interface\_mib} and returns such a vector.
\pagebreak
\section{Iteration on array}
\label{iterationonarray}
\paragraph{Description}
\index{iteration, array}The next concept is iteration on an array \index{array}structure. We often want to perform some action on each element of a two-dimensional array. 

Again, we present two implementations. The first uses nested {\it foreacharg}\index{foreacharg, nested} {\it do} loops to perform some operation $q$ on each element of an array. The second uses nested {\it for}\index{for}\index{loop} loops to iterate over all index combinations $I$ and $J$. This second variant is more complex, and should be used only if we require the index values $I$ and $J$ as well as the matrix element $X$. 
\paragraph{Parameters}
\begin{description}
\item[Matrix] a matrix
\end{description}
794\begin{verbatim}
795/* version 1 */
796
797:-mode iteration(+).
798iteration(Matrix):-
799        (foreacharg(Line,Matrix) do
800            (foreacharg(X,Line) do
801                q(X)
802            )
803        ).
804
805/* version 2 */
806
807:-mode iteration(+).
808iteration(Matrix):-
809        dim(Matrix,[N,M]),
810        (for(I,1,N),
811         param(M,Matrix) do
812            (for(J,1,M),
813             param(I,Matrix) do
814                subscript(Matrix,[I,J],X),
815                q(X,I,J)
816            )
817        ).
818\end{verbatim}
819\paragraph{Comments}
820The {\it dim}\index{dim} predicate can not only be used to create arrays, but also to find the size of an existing array. 
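For instance, the following minimal sketch first creates a $2\times 3$ array and then queries its dimensions with the same predicate:
\begin{verbatim}
?- dim(M,[2,3]),     % create a 2x3 array of free variables
   dim(M,D).         % query its dimensions: D = [2, 3]
\end{verbatim}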

Note the strange way in which the parameters $M$, $I$ and $Matrix$ are passed through the nested {\it for} loops with {\it param} arguments. But if we did not do this, the variable $Matrix$ outside the {\it do} loop and the one inside would be unrelated.
\paragraph{Example}
The example calls the predicate {\it fill\_empty/3} for each index combination of entries in a matrix $PMatrix$.
\index{fill\_rest\_with\_empty/2}
\begin{verbatim}
:-mode fill_rest_with_empty(+,+).
fill_rest_with_empty(N,PMatrix):-
        (for(I,1,N),
         param(PMatrix,N) do
            (for(J,1,N),
             param(PMatrix,I) do
                fill_empty(PMatrix,I,J)
            )
        ).
\end{verbatim}

\pagebreak
\section{Transformation}
\paragraph{Description}
This next concept is used to perform some transformation\index{transformation} on each element of a list and to create a list of the transformed elements. At the end, both lists will have the same length, and the elements match, i.e. the first element of the second list is the transformed first element of the first list. 

This concept uses the {\it foreach}\index{foreach} keyword in two different ways. The first is used to scan an existing list $L$; the second is used to construct a list\index{list, construction} $K$ as the result of the operation.
\paragraph{Parameters}
\begin{description}
\item[L] a list
\item[K] a free variable, will be bound to a list
\end{description}
\paragraph{Schema}
\begin{verbatim}
:-mode transformation(+,-).
transformation(L,K):-
        (foreach(X,L),
         foreach(Y,K) do
            q(X,Y)
        ).
\end{verbatim}
\paragraph{Comments}
In the code above we cannot see that list $L$ is an input and list $K$ is an output. This can only be deduced from the calling pattern or from the mode declaration\index{mode declaration}. 

\pagebreak
\paragraph{Example}
The example takes a list of {\it router\_mib\_data} terms and builds a list of temporary {\it t/2} terms where the second argument consists of {\it router\_mib} terms.
\index{convert\_to\_router\_mib/3}
\begin{verbatim}
:-mode convert_to_router_mib(+,-,-).
convert_to_router_mib(L,K,Router):-
        (foreach(router_mib_data with 
                 [router:Router,
                  time:Time,
                  tcp_segm_in:A,
                  tcp_segm_out:B,
                  udp_datagram_in:C,
                  udp_datagram_out:D],L),
         foreach(t(Time,router_mib with 
                   [tcp_segm_in:A,
                   tcp_segm_out:B,
                   udp_datagram_in:C,
                   udp_datagram_out:D]),K),
         param(Router) do
            true
         ).
\end{verbatim}
In this example the transformation is completely handled by matching arguments in the {\it foreach} statements. We use the predicate {\it true} \index{true}for an empty loop body\index{loop body, empty}.

Figuring out what is happening with the variable {\it Router} is left as an exercise for the advanced reader.
\pagebreak
\section{Filter}
\paragraph{Description}
The filter\index{filter} concept extracts from a list of elements those that satisfy some condition {\it q} and returns a list of these elements.

We present three implementations, one using recursion, the others using a {\it do} loop with the {\it fromto}\index{fromto} keyword.
\paragraph{Parameters}
\begin{description}
\item[L] a list
\item[K] a free variable, will be bound to a list
\end{description}
\paragraph{Schema}
\begin{verbatim}
/* version 1 */

:-mode filter(+,-).
filter([],[]).
filter([A|A1],[A|B1]):-
        q(A),
        !,
        filter(A1,B1).
filter([_A|A1],B1):-
        filter(A1,B1).

/* version 2 */

:-mode filter(+,-).
filter(L,K):-
        (foreach(X,L),
         fromto([],In,Out,K) do
            q(X,In,Out)
        ).

q(X,L,[X|L]):-
        q(X),
        !.
q(_,L,L).
\end{verbatim}
\pagebreak
\begin{verbatim}
/* version 3 */

:-mode filter(+,-).
filter(L,K):-
        (foreach(X,L),
         fromto(K,In,Out,[]) do
            q(X,In,Out)
        ).

q(X,[X|L],L):-
        q(X),
        !.
q(_,L,L).
\end{verbatim}
\paragraph{Comments}
The difference between versions 2 and 3 lies in the order of the elements in the result list. Version 2 produces the elements in the inverse order of version 1, whereas version 3 produces them in the same order as version 1. This shows that the {\it fromto} statement can be used to build lists forwards or backwards. Please note that the predicate {\it q/3} is also different in variants 2 and 3.

The cuts (!) \index{cut}in the program clauses are very important, as they remove the possibility that a selected element is not included in the filtered list. If we remove the cuts, then the {\it filter} predicate has an exponential number of ``solutions''. Only the first solution will be correct; on backtracking, we will decide to reject elements which satisfy the test criterion, and we will explore all combinations until we reach the empty list as the last ``solution''.
\pagebreak
\paragraph{Example}
The following program is used to extract interfaces related to customers (types customer, selected and remaining) as a list of {\it customer/3} terms, group them by node and perform some action on each group.
\index{selected\_min\_max/2}
\index{selected\_customer/3}
\begin{verbatim}
:-mode selected_min_max(+,+).
selected_min_max(Type,Interfaces):-
        Interfaces = interfaces with list:List,
        (foreach(Interface,List),
         fromto([],In,Out,Customers) do
            selected_customer(Interface,In,Out)
        ),
        sort(0,=<,Customers,Sorted),
        customers_by_node(Sorted,Grouped),
        selected_together(Type,Grouped,Interfaces).

selected_customer(interface with [type:Type,
                                  index:I,
                                  node_index:Node],
                  In,
                  [customer with [node:Node,
                                  index:I,
                                  type:Type]|In]):-
        memberchk(Type,[customer,selected,remaining]),
        !.
% all other types: backbone,backbone_net,interconnection,local
selected_customer(_,In,In).
\end{verbatim}

\pagebreak
\section{Combine}
\paragraph{Description}
This concept takes a list, combines\index{combine} consecutive elements according to some criterion and returns a list of the combined elements.

The typical use of this concept will first sort the input list so that elements that can be combined are consecutive in the list.
\paragraph{Parameters}
\begin{description}
\item[L] a list
\item[Res] a free variable, will be bound to a list
\end{description}
\paragraph{Schema}
\begin{verbatim}
:-mode combine(+,-).
combine([],[]).
combine([A,B|R],Res):-
        can_combine(A,B,C),
        !,
        combine([C|R],Res).
combine([A|A1],[A|Res]):-
        combine(A1,Res).
\end{verbatim}
\paragraph{Comments}
It is important to note that the recursive call in the second clause continues with the combined element $C$, since it may be combined with more elements of the rest of the list $R$.

The cut in the second clause ensures that elements that can be combined are always combined, and that we do not leave a choice point in the execution.

The simplest use of the concept is the removal of duplicate entries\index{duplicate entries} in a sorted list.
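For instance, duplicate removal falls out of the schema if the combination step succeeds only on identical elements. A sketch (the predicate name {\it remove\_duplicates} is ours):
\begin{verbatim}
remove_duplicates([],[]).
remove_duplicates([A,B|R],Res):-
        A == B,
        !,
        remove_duplicates([A|R],Res).
remove_duplicates([A|A1],[A|Res]):-
        remove_duplicates(A1,Res).
\end{verbatim}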

\pagebreak
\paragraph{Example}
\index{combine\_traffic/2}
\index{try\_to\_combine/3}
\begin{verbatim}
:-mode combine_traffic(+,-).
combine_traffic([],[]).
combine_traffic([A,B|R],L):-
        try_to_combine(A,B,C),
        !,
        combine_traffic([C|R],L).
combine_traffic([A|R],[A|S]):-
        combine_traffic(R,S).

try_to_combine(interface_traffic_sample(Time,Router,Interface,
                                        X1,X2,X3,X4,X5,
                                        X6,X7,X8,X9,X10),
        interface_traffic_sample(Time,Router,Interface,
                                 Y1,Y2,Y3,Y4,Y5,
                                 Y6,Y7,Y8,Y9,Y10),
        interface_traffic_sample(Time,Router,Interface,
                                 Z1,Z2,Z3,Z4,Z5,
                                 Z6,Z7,Z8,Z9,Z10)):-
        Z1 is X1+Y1,
        Z2 is X2+Y2,
        Z3 is X3+Y3,
        Z4 is X4+Y4,
        Z5 is X5+Y5,
        Z6 is X6+Y6,
        Z7 is X7+Y7,
        Z8 is X8+Y8,
        Z9 is X9+Y9,
        Z10 is X10+Y10.
\end{verbatim}
Here we combine traffic samples for the same interface and time point by adding the sample values $X_{1} \ldots X_{10}$ and $Y_{1} \ldots Y_{10}$. The predicate {\it try\_to\_combine} will only succeed if the two input arguments have the same time stamp, router and interface, but it will fail if the arguments differ on these fields. 

Also note that we do not use named structures in this example. This is justified as any extension of the structure would probably entail a change of the program anyway.
\pagebreak
\section{Minimum}
\paragraph{Description}
\index{minimum}This concept selects the smallest element of a list according to some comparison operator {\it better}.
\paragraph{Parameters}
\begin{description}
\item[L] a list
\item[V] a free variable, will be bound to an entry of $L$
\end{description}
\paragraph{Schema}
\begin{verbatim}
:-mode minimum(+,-).
minimum([H|T],V):-
        (foreach(X,T),
         fromto(H,In,Out,V) do
            minimum_step(X,In,Out)
        ).

minimum_step(X,Old,X):-
        better(X,Old),
        !.
minimum_step(_X,Old,Old).
\end{verbatim}
\paragraph{Comments}
This implementation of minimum fails if the input list has no elements. This means that somewhere else in the program we have to handle the case where the input list is empty. This seems to be the clearest definition of minimum: an empty list does not have a smallest element.

If several elements of the list have the same minimal value, only the first one is returned.
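As a concrete instance, taking {\it better} to be numerical less-than yields the usual numeric minimum (a minimal sketch):
\begin{verbatim}
better(X,Y):-
        X < Y.

% minimum([3,1,2,1],V) then binds V to the first
% of the two minimal elements, i.e. V = 1.
\end{verbatim}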
\pagebreak
\section{Best and rest}
\paragraph{Description}
\index{best and rest}This concept is an extension of the minimum concept. It not only returns the best element in the input list, but also the rest of the original list without the best element. This rest can then be used, for example, to select another element, and so on.
\paragraph{Parameters}
\begin{description}
\item[L] a list
\item[Best] a free variable, will be bound to an entry of $L$
\item[Rest] a free variable, will be bound to a list of the entries of $L$ without $Best$
\end{description}
\paragraph{Schema}
\begin{verbatim}
:-mode best_and_rest(+,-,-).
best_and_rest([H|T],Best,Rest):-
        (foreach(X,T),
         fromto(H,In1,Out1,Best),
         fromto([],In2,Out2,Rest) do
            best(X,In1,Out1,In2,Out2)
        ).

best(X,Y,X,L,[Y|L]):-
        better(X,Y),
        !.
best(X,Y,Y,L,[X|L]).
\end{verbatim}
\paragraph{Comments}
The predicate fails if the input list is empty. We must handle that case somewhere else in the program.

If several elements of the list have the same best value, only the first one is selected. 

The order of elements in {\it Rest} may be quite different from the order in the input list. If that is not acceptable, we must use a different implementation. A rather clever one is given below:
\pagebreak
\begin{verbatim}
best_and_rest([First|Xs],Best,Rest):-
        (foreach(X,Xs),
         fromto(First,BestSoFar,NextBest,Best),
         fromto(Start,Rest1,Rest2,[]),
         fromto(Start,Head1,Head2,Gap),
         fromto(Rest,Tail1,Tail2,Gap) do
            (better(X,BestSoFar) ->
                NextBest = X,
                Tail1 = [BestSoFar|Head1],
                Tail2 = Rest1,
                Head2 = Rest2
            ;
                NextBest = BestSoFar,
                Tail2 = Tail1,
                Head2 = Head1,
                Rest1 = [X|Rest2]
            )
        ).
\end{verbatim}
\pagebreak
\section{Sum}
\paragraph{Description}
The sum \index{sum}concept returns the sum of values which have been extracted from a list of data structures. It uses a {\it foreach} to scan each element of the list and a {\it fromto} to accumulate the total sum.
\paragraph{Parameters}
\begin{description}
\item[L] a list
\item[Sum] a free variable, will be bound to a value
\end{description}
\paragraph{Schema}
\begin{verbatim}
:-mode sum(+,-).
sum(L,Sum):-
        (foreach(X,L),
         fromto(0,In,Out,Sum) do
            q(X,V),
            Out is In+V
        ).
\end{verbatim}
\paragraph{Comments}
The initial value\index{initial value} for the sum accumulator here is 0. We could use another initial value; this can be useful if we want to obtain the total over several summations. 
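A sketch of such a chained summation, which passes the total of the first list in as the initial value for the second (the predicate name {\it sum\_two} is ours; {\it q/2} is the extraction predicate from the schema above):
\begin{verbatim}
sum_two(L1,L2,Total):-
        sum(L1,Sum1),
        (foreach(X,L2),
         fromto(Sum1,In,Out,Total) do
            q(X,V),
            Out is In+V
        ).
\end{verbatim}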
\pagebreak
\paragraph{Example}
The program counts how many entries in the {\it interface\_mib\_data} list refer to active interfaces (octet count non-zero).
\index{non\_zero\_measurements/2}
\begin{verbatim}
:-mode non_zero_measurements(+,-).
non_zero_measurements(L,N):-
        (foreach(X,L),
         fromto(0,In,Out,N) do
            non_zero_measurement(X,In,Out)
        ).

non_zero_measurement(interface_mib_data with [octet_in:A,
                                              octet_out:B],
                     In,Out):-
        A+B > 0,
        !,
        Out is In+1.
non_zero_measurement(_X,In,In).
\end{verbatim}
\pagebreak
\section{Merge}
\paragraph{Description}
The merge \index{merge}concept is used when we want to match corresponding entries in two lists. We sort both lists on the same key, and then scan them in parallel, always discarding the entry with the smallest key first.

We can use this concept to combine information from the two lists, to find differences between lists \index{difference}quickly, or to look up \index{lookup}information from the second list for all elements of the first list.
\paragraph{Parameters}
\begin{description}
\item[L] a list
\item[K] a list
\end{description}
\paragraph{Schema}
\begin{verbatim}
:-mode merge(+,+).
merge(L,K):-
        sort_on_key(L,L1),
        sort_on_key(K,K1),
        merge_lp(L1,K1).

merge_lp([],_):-
        !.
merge_lp([_|_],[]):-
        !.
merge_lp([A|A1],[B|B1]):-
        merge_compare(A,B,Op),
        merge_cont(Op,A,A1,B,B1).

merge_cont(<,_A,A1,B,B1):-
        merge_lp(A1,[B|B1]).
merge_cont(=,A,A1,B,B1):-
        merge_op(A,B),
        merge_lp(A1,[B|B1]).
merge_cont(>,A,A1,_B,B1):-
        merge_lp([A|A1],B1).
\end{verbatim}
\paragraph{Comments}
The cuts in {\it merge\_lp} are used to remove choice points\index{choice point} left by the compiler\footnote{As ECLiPSe only uses indexing on a single argument, the compiler cannot recognize that the clause patterns are exclusive.}.

The schema looks quite complex, but its performance is nearly always significantly better than a simple lookup in the second list.
\pagebreak
\paragraph{Example}
The example takes data from two different sources and merges it. The first argument is a list of {\it interface\_topology} terms, the second a list of {\it ndi\_interface} structures. For matching $Node-Interface$ pairs, we copy information from the first structure into the second. If the $Node-Interface$ pairs do not match, then we don't do anything.

Also note the use of {\it compare/3} to obtain a lexicographical ordering of $Node-Interface$ pairs.

\index{insert\_topology/2}
\index{compare\_index\_interface/3}
\index{insert\_topology\_op/5}
\begin{verbatim}
:-mode insert_topology(+,+).
insert_topology([],_):-
        !.
insert_topology([_|_],[]):-
        !.
insert_topology([A|A1],[B|B1]):-
        compare_index_interface(A,B,Op),
        insert_topology_op(Op,A,B,A1,B1).

compare_index_interface(interface_topology(_,Router1,
                                           Index1,_,_),
                  ndi_interface with [router:Router2,
                                      index:Index2],Op):-
        compare(Op,Router1-Index1,Router2-Index2).

insert_topology_op(<,_A,B,A1,B1):-
        insert_topology(A1,[B|B1]).
insert_topology_op(=,A,B,A1,B1):-
        insert_one_topology(A,B),
        insert_topology(A1,B1).
insert_topology_op(>,A,_B,A1,B1):-
        insert_topology([A|A1],B1).

insert_one_topology(interface_topology(_,_,_,Ip,Mask),
                    ndi_interface with [ip_address:Ip,
                                    subnet_mask:Mask,
                                    subnet:Subnet]):-
        subnet(Ip,Mask,Subnet).
\end{verbatim}

\pagebreak
\section{Group}
\paragraph{Description}
\index{group}This concept takes a sorted list of items and creates a list of lists, where items with the same key are put in the same sub-list. This works even for the empty input list.

The second argument of {\it group\_lp} serves as an accumulator\index{accumulator} to collect items with the same key. As long as the next item uses the same key, it is put into this accumulator (2nd clause). If the remaining list is empty (1st clause) or it starts with an element of a different key (3rd clause), the accumulated list is put into the output list.
\paragraph{Parameters}
\begin{description}
\item[L] a list
\item[K] a free variable, will be bound to a list
\end{description}
\paragraph{Schema}
\begin{verbatim}
:-mode group(+,-).
group([],[]).
group([H|T],K):-
        group_lp(T,[H],K).

group_lp([],L,[L]).
group_lp([H|T],[A|A1],K):-
        same_group(H,A),
        !,
        group_lp(T,[H,A|A1],K).
group_lp([H|T],L,[L|K]):-
        group_lp(T,[H],K).
\end{verbatim}
\paragraph{Comments}
The order of items in the resulting sub-lists is the reverse of their order in the initial list.

The order of the sub-lists in the result is the same as the order of the keys in the original list.

If the initial list is not sorted by the same key that is used in {\it same\_group}, then this program does not work at all.
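If the original order inside each group matters, we can reverse each sub-list afterwards. A sketch, assuming the {\it lists} library for {\it reverse/2} (the predicate name {\it group\_ordered} is ours):
\begin{verbatim}
:-lib(lists).

group_ordered(L,K):-
        group(L,K1),
        (foreach(G,K1),
         foreach(R,K) do
            reverse(G,R)
        ).
\end{verbatim}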
\paragraph{Example}
The following program takes a list of terms and groups them according to some argument number $N$. It returns a list of {\it group/2} terms, where the first argument is the common key in the group, and the second argument is a list of all terms which share that key.
\index{group/3}
\begin{verbatim}
:-mode group(+,+,-).
group([],_,[]):-
        !.
group(Terms,N,Grouped):-
        sort(N,=<,Terms,[H|T]),
        arg(N,H,Group),
        group1(T,Group,[H],N,Grouped).

group1([],Group,L,_,[group(Group,L)]).
group1([H|T],Group,L,N,Grouped):-
        arg(N,H,Group),
        !,
        group1(T,Group,[H|L],N,Grouped).
group1([H|T],Old,L,N,[group(Old,L)|Grouped]):-
        arg(N,H,Group),
        group1(T,Group,[H],N,Grouped).
\end{verbatim}

\pagebreak
\section{Lookup}
\paragraph{Description}
The lookup \index{lookup}concept is used to convert data stored in the local database into a list of terms that can be manipulated in the program. The most natural template (first argument of {\it findall})\index{findall} is to use the same term as for the facts\index{facts} in the database.
\paragraph{Parameters}
\begin{description}
\item[Res] a free variable, will be bound to a list
\end{description}
\paragraph{Schema}
\begin{verbatim}
:-mode lookup(-).
lookup(Res):-
        findall(q(X),q(X),Res).
\end{verbatim}
\paragraph{Comments}
{\it findall} exemplifies a typical {\it meta-predicate}\index{meta predicate}: the second argument is actually a goal that will be executed. There are a number of other predicates of this type, and this feature can be extremely useful in writing interpreters, emulators etc., which treat data as program parts.

The {\it findall} predicate is significantly faster than {\it bagof}\index{bagof} or {\it setof}\index{setof}. Their use is not recommended. 
\paragraph{Example}
\index{gather\_hops/1}
\begin{verbatim}
% find all hops routing information
:-mode gather_hops(-).
gather_hops(Hops):-
        findall(hop(A,B,C),hop(A,B,C),Hops).
\end{verbatim}

\pagebreak
\section{Fill matrix}
\paragraph{Description}
\index{fill matrix}
\index{matrix, fill}
This concept takes a list of entries with indices $I$ and $J$ and a value $V$, and puts each value into a matrix $M$ at position $M_{i,j}$.
\paragraph{Parameters}
\begin{description}
\item[L] a list of entries
\item[Matrix] a matrix
\end{description}
\paragraph{Schema}
\begin{verbatim}
:-mode fill_matrix(+,+).
fill_matrix(L,Matrix):-
        (foreach(entry with [i:I,j:J,value:V],L),
         param(Matrix) do
            subscript(Matrix,[I,J],V)
        ).
\end{verbatim}
\paragraph{Comments}
The program may fail if two entries in the list refer to the same index pair $I$ and $J$, as the program would then try to insert two values at the same position.

It is not required that the input list contains all index combinations; we can use the {\it iteration on array} concept to fill any unset elements with a default value.
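Such a default fill can be sketched with nested {\it foreacharg} loops, testing each element with {\it var/1} (the predicate name {\it fill\_default} is ours):
\begin{verbatim}
fill_default(Matrix,Default):-
        (foreacharg(Line,Matrix),
         param(Default) do
            (foreacharg(X,Line),
             param(Default) do
                (var(X) -> X = Default ; true)
            )
        ).
\end{verbatim}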
\paragraph{Example}
The example fills an array $PMatrix$ with values from a list of {\it hop/3} terms. We also convert the node names in the {\it hop} term into node indices for lookup in the $PMatrix$ matrix.
\index{fill\_with\_hops/3}
\begin{verbatim}
:-mode fill_with_hops(+,+,+).
fill_with_hops([],_,_).
fill_with_hops([hop(Source,Dest,List)|R],Nodes,PMatrix):-
        find_node_index(Source,Nodes,S),
        find_node_index(Dest,Nodes,D),
        find_node_indices(List,Nodes,L),
        length(L,N), 
        subscript(PMatrix,[S,D],pi_entry with [list:L,
                                               length:N]),
        fill_with_hops(R,Nodes,PMatrix).
\end{verbatim}

\pagebreak
\section{Cartesian}
\paragraph{Description}
\index{cartesian}This concept takes two input lists $L$ and $K$ and creates a list of pairs $Res$, in which each combination of elements of the first and the second list occurs exactly once.

The result is a list of terms {\it pair(X, Y)}, where $X$ is an element of list $L$ and $Y$ is an element of list $K$.

The implementation uses nested {\it foreach} {\it do} loops to create each combination of elements once. The {\it fromto} accumulators are used to collect the result list.
\paragraph{Parameters}
\begin{description}
\item[L] a list
\item[K] a list
\item[Res] a free variable, will be bound to a list
\end{description}
\paragraph{Schema}
\begin{verbatim}
:-mode cartesian(+,+,-).
cartesian(L,K,Res):-
        (foreach(X,L),
         fromto([],In,Out,Res),
         param(K) do
            (foreach(Y,K),
             fromto(In,In1,[pair(X,Y)|In1],Out),
             param(X) do
                true
            )
        ).
\end{verbatim}
\paragraph{Comments}
Note the use of an empty body ({\it true}) \index{true}in the innermost loop. All calculations are done by parameter passing alone.

If we want to create the elements in the same order as the elements in the input list, we have to exchange input and output arguments of the {\it fromto} statements, like so:
\pagebreak
\begin{verbatim}
:-mode cartesian(+,+,-).
cartesian(L,K,Res):-
        (foreach(X,L),
         fromto(Res,In,Out,[]),
         param(K) do
            (foreach(Y,K),
             fromto(In,[pair(X,Y)|In1],In1,Out),
             param(X) do
                true
            )
        ).
\end{verbatim}
\paragraph{Example}

The example builds all pairs of source and sink nodes for flows and creates contribution structures from them. An additional accumulator $NoPath$ is used to collect cases where there is no route between the nodes.
\index{create\_contribution/5}
\begin{verbatim}
:-mode create_contribution(+,+,+,-,-).
create_contribution(FromList,ToList,
                    PMatrix,Contribution,NoPath):-
        (foreach(From,FromList),
         fromto([],In1,Out1,Contribution),
         fromto([],NP1,NP2,NoPath),
         param(ToList,PMatrix) do
            (foreach(To,ToList),
             fromto(In1,In,Out,Out1),
             fromto(NP1,NoPath1,NoPath2,NP2),
             param(From,PMatrix) do
                contribution(From,To,From,To,1,PMatrix,
                             In,Out,NoPath1,NoPath2)
            )
        ).
\end{verbatim}

\pagebreak
\section{Ordered pairs}
\paragraph{Description}
This concept creates ordered pairs\index{ordered pairs}\index{pairs, ordered} of entries from a list. Each combination where the first element occurs in the input list before the second element is created exactly once.

The result is a list of terms {\it pair(X, Y)} where $X$ and $Y$ are elements of the input list $L$.
\paragraph{Parameters}
\begin{description}
\item[L] a list
\item[K] a free variable, will be bound to a list
\end{description}
\paragraph{Schema}
\begin{verbatim}
:-mode ordered_pairs(+,-).
ordered_pairs(L,K):-
        ordered_pairs_lp(L,[],K).

ordered_pairs_lp([],L,L).
ordered_pairs_lp([H|T],In,Out):-
        ordered_pairs_lp2(H,T,In,In1),
        ordered_pairs_lp(T,In1,Out).

ordered_pairs_lp2(_H,[],L,L).
ordered_pairs_lp2(H,[A|A1],In,Out):-
        ordered_pairs_lp2(H,A1,[pair(H,A)|In],Out).
\end{verbatim}
\pagebreak
\paragraph{Comments}
The second and third arguments of {\it ordered\_pairs\_lp} and the third and fourth arguments of {\it ordered\_pairs\_lp2} serve as accumulators to collect the results.

This concept can also be implemented with nested {\it do} loops. The recursive version seems more natural.
\begin{verbatim}
ordered_pairs(L,K):-
        (fromto(L,[El|Rest],Rest,[_]),
         fromto(K,TPairs,RPairs,[]) do
            (foreach(R,Rest),
             param(El),
             fromto(TPairs,[pair(El,R)|Pairs],Pairs,RPairs) do
                true
            )
        ).
\end{verbatim}
1471\chapter{Input/Output}
1472\label{inputoutput}
1473
1474In this chapter we will discuss input and output with the ECLiPSe system. We will first discuss how to read data into an ECLiPSe program, then discuss different output methods. From this we extract some rules about good and bad data formats that may be useful when defining a data exchange format between different applications. At the end we show how to use a simple report generator library in RiskWise, which converts data lists into HTML reports.
1475
1476\section{Reading input data into data structures}
1477\label{ReadingInput}
1478The easiest way to read data \index{read data}into ECLiPSe programs is to use the Prolog term format for the data. Each term is terminated by a fullstop\index{fullstop}, which is a dot (.) followed by some white space. The following code reads terms from a file until the end of file is encountered and returns the terms in a list.
1479\begin{verbatim}
1480:-mode read_data(++,-).
1481read_data(File,Result):-
1482        open(File,read,S),
1483        read(S,X),
1484        read_data_lp(S,X,Result),
1485        close(S).
1486
1487read_data_lp(_S,end_of_file,[]):-
1488        !.
1489read_data_lp(S,X,[X|R]):-
1490        read(S,Y),
1491        read_data_lp(S,Y,R).
1492\end{verbatim}
1493This method is very easy to use if both source and sink of the data are ECLiPSe programs. Unfortunately, data provided by other applications will normally not be in the Prolog term format. For them we will have to use some other techniques\footnote{We should at this point again mention the possibilities of the EXDR format which can be easily read into ECLiPSe, and which is usually simpler to generate in other languages than the canonical Prolog format.}.
1494
1495\section{How to use DCGs}
1496\label{howtousedcgs}
1497
1498In this section we describe the use of tokenizers and DCG \index{DCG}\index{definite clause grammar}(definite clause grammar) to produce a very flexible input system. The input routine of the NDI mapper\index{NDI mapper} of RiskWise is completely implemented with this method, and we use some of the code in the examples below.
1499
1500In this approach the data input is split into two parts, a tokenizer and a parser. The tokenizer read the input and splits it into tokens. Each token corresponds to one field in a data item. The parser is used to recognize the structure of the data and to group all data belonging to one item together.
1501
1502Using these techniques to read data files is a bit of overkill, they are much more powerful and are used for example to read ECLiPSe terms themselves. But, given the right grammar, this process is very fast and extremely easy to modify and extend.
1503
1504The top level routine {\it read\_file} \index{read\_file}opens the file, calls the tokenizer, closes the file and starts the parser. We assume here that at the end of the parsing the complete input stream has been read (third argument in predicate {\it phrase}\index{phrase}). Normally, we would check the unread part and produce an error message.
\begin{verbatim}
:-mode read_file(++,-).
read_file(File,Term):-
        open(File,'read',S),
        tokenizer(S,1,L),
        close(S),
        phrase(file(Term),L,[]).
\end{verbatim}
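As a sketch of such error handling (the predicate name {\it read\_file\_checked} is hypothetical), we can capture the unread tokens in the third argument of {\it phrase} and report the first of them:
\begin{verbatim}
:-mode read_file_checked(++,-).
read_file_checked(File,Term):-
        open(File,'read',S),
        tokenizer(S,1,L),
        close(S),
        phrase(file(Term),L,Rest),
        ( Rest = [] ->
            true
        ;
            Rest = [First|_],
            printf(error,"parsing stopped at token %w%n",[First]),
            fail
        ).
\end{verbatim}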

The tokenizer\index{tokenizer} is a bit complicated, since our NDI data format explicitly mentions end-of-line\index{end of line} markers, and does not distinguish between lower and upper case spelling. Otherwise, we might be able to use the built-in tokenizer of ECLiPSe (predicate {\it read\_token}\index{read\_token}).

The tokenizer reads one line of the input at a time and returns it as a string. After each line, we insert an {\it end\_of\_line(N)} token into the output, with $N$ the current line number. This can be used for meaningful error messages if the parsing fails (not shown here). We then split the input line into white space separated strings, eliminate any empty strings and return the rest as our tokens.

The output of the tokenizer will be a list of strings intermixed with end-of-line markers.
\index{read\_string/4}
\begin{verbatim}
:-mode tokenizer(++,++,-).
tokenizer(S,N,L):-
        read_string(S,'end_of_line',_,Line),
        !,
        open(string(Line),read,StringStream),
        tokenizer_string(S,N,StringStream,L).
tokenizer(_S,_N,[]).

tokenizer_string(S,N,StringStream,[H|T]):-
        non_empty_string(StringStream,H),
        !,
        tokenizer_string(S,N,StringStream,T).
tokenizer_string(S,N,StringStream,[end_of_line(N)|L]):-
        close(StringStream),
        N1 is N+1,
        tokenizer(S,N1,L).

non_empty_string(Stream,Token):-
        read_string(Stream, " \t\r\n", _, Token1),
        (Token1 = "" ->
            non_empty_string(Stream,Token)
        ;
            Token = Token1
        ).
\end{verbatim}
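To illustrate the behaviour of this tokenizer on a (hypothetical) input, a file containing the two lines
\begin{verbatim}
RouterTrafficSample
#
\end{verbatim}
would be turned into the token list
\begin{verbatim}
["RouterTrafficSample", end_of_line(1), "#", end_of_line(2)]
\end{verbatim}
where each string token is one white space separated field, and the end-of-line markers carry the line numbers.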
We now show an example of grammar rules which define one data file of the NDI mapper, the RouterTrafficSample\index{RouterTrafficSample} file. The grammar for the file format is shown below:

\begin{verbatim}
file              := <file_type_line>
                     <header_break>
                     [<data_line>]*
file_type_line    := RouterTrafficSample <newline>
header_break      := # <newline>
data_line         := <timestamp>
                     <router_name>
                     <tcp_segments_in>
                     <tcp_segments_out>
                     <udp_datagrams_in>
                     <udp_datagrams_out>
                     <newline>
timestamp         := <timestamp_ms>
router_name       := <name_string>
tcp_segments_in   := integer
tcp_segments_out  := integer
udp_datagrams_in  := integer
udp_datagrams_out := integer
\end{verbatim}

This grammar translates directly into a DCG representation. The start symbol\index{start symbol} of the grammar is {\it file(X)}; the argument $X$ will be bound to the parse tree\index{parse tree} for the grammar. Each rule uses the symbol \verb|-->| to define a rule head on the left side and its body on the right. All rules for this particular data file replace one non-terminal symbol\index{non terminal symbol}\index{symbol, non terminal} with one or more non-terminal symbols. The argument in the rules is used to put the parse tree together. For this data file, the parse tree will be a term {\it router\_traffic\_sample(L)} with $L$ a list of terms {\it router\_traffic\_sample(A,B,C,D,E,F)}, where the arguments are simple types (atoms and integers).

\begin{verbatim}
file(X) --> router_traffic_sample(X).

router_traffic_sample(router_traffic_sample(L)) -->
        file_type_line("RouterTrafficSample"),
        header_break,
        router_traffic_sample_data_lines(L).

router_traffic_sample_data_lines([H|T]) -->
        router_traffic_sample_data_line(H),
        !,
        router_traffic_sample_data_lines(T).
router_traffic_sample_data_lines([]) --> [].

router_traffic_sample_data_line(
            router_traffic_sample(A,B,C,D,E,F)) -->
        time_stamp(A),
        router_name(B),
        tcp_segments_in(C),
        tcp_segments_out(D),
        udp_datagrams_in(E),
        udp_datagrams_out(F),
        new_line.

tcp_segments_in(X) --> integer(X).

tcp_segments_out(X) --> integer(X).

udp_datagrams_in(X) --> integer(X).

udp_datagrams_out(X) --> integer(X).
\end{verbatim}
Note the cut in the definition of {\it router\_traffic\_sample\_data\_lines}, which is used to commit to the rule when a complete data line has been read. If a format error occurs in the file, then we will stop reading at this point, and the remaining part of the input will be returned by {\it phrase}\index{phrase}.

The following rules define the basic symbols of the grammar. Terminal symbols\index{terminal symbol}\index{symbol, terminal} are placed in square brackets, while additional Prolog code is added with braces\index{braces}. The {\it time\_stamp} rule for example reads one token $X$. It first checks that $X$ is a string, then converts it to a number $N$, and then uses a library predicate {\it eclipse\_date}\index{eclipse\_date} to convert $N$ into a date representation {\it Date}, which is returned as the parse result.
\begin{verbatim}
file_type_line(X) --> ndi_string(X), new_line.

header_break -->
        ["#"],
        new_line.

router_name(X) --> name_string(X).

time_stamp(Date) -->
        [X],
        {string(X),
         number_string(N,X),
         eclipse_date(N,Date)
        }.

integer(N) --> [X],{string(X),number_string(N,X),integer(N)}.

name_string(A) --> ndi_string(X),{atom_string(A,X)}.

ndi_string(X) --> [X],{string(X)}.

new_line --> [end_of_line(_)].
\end{verbatim}
These primitives are reused for all files, so that adding the code to read a new file format basically just requires some rules defining the format.

\section{Creating output data files}
In this section we discuss how to generate output data files\index{output files}. We present three methods which implement different output formats.

\subsection{Creating Prolog data}
We first look at a special case where we want to create a file which can be read back with the input routine shown in section \ref{ReadingInput}. The predicate {\it output\_data} writes each item in a list of terms on one line of the output file, each line terminated by a dot (.). The predicate {\it writeq}\index{writeq} ensures that atoms are quoted, operator definitions\index{operator definition} are handled correctly, etc.
\begin{verbatim}
:-mode output_data(++,+).
output_data(File,L):-
        open(File,'write',S),
        (foreach(X,L),
         param(S) do
            writeq(S,X),writeln(S,'.')
        ),
        close(S).
\end{verbatim}
It is not possible to write unbound constrained variables to a file and load them back later; they will not be re-created with their previous state and constraints. In general, it is a good idea to restrict the data format to ground terms\index{ground terms}, i.e. terms that do not contain any variables.
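A simple safeguard (the wrapper name {\it output\_data\_checked} is our own) is to test for groundness with the built-in {\it ground/1} before writing:
\begin{verbatim}
output_data_checked(File,L):-
        ( ground(L) ->
            output_data(File,L)
        ;
            writeln(error,"trying to write non-ground data"),
            abort
        ).
\end{verbatim}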

\subsection{Simple tabular format}
Generating data in Prolog format is easy if the receiver of the data is another ECLiPSe program, but may be inconvenient if the receiver is written in another language. In that case a tabular format\index{tabular format} that can be read with routines like {\it scanf}\index{scanf} is easier to handle. The following program segment shows how this is done. For each item in a list we extract the relevant arguments, and write them to the output file separated by white space.
\begin{verbatim}
:-mode output_data(++,+).
output_data(File,L):-
        open(File,'write',S),
        (foreach(X,L),
         param(S) do
            output_item(S,X)
        ),
        close(S).

output_item(S,data_item with [attr1:A1,attr2:A2]):-
        write(S,A1),
        write(S,' '),
        write(S,A2),
        nl(S).
\end{verbatim}
We use the predicate {\it write}\index{write} to print out the individual fields. As this predicate handles arbitrary argument types, this is quite simple, but it does not give us much control over the format of the fields.

\subsection{Using {\it printf}}
Instead, we can use the predicate {\it printf}\index{printf}, which behaves much like the C-library routine. For each argument we must specify the argument type and an optional format\index{format}. If we make a mistake in the format specification, a run-time error will result.
\begin{verbatim}
:-mode output_data(++,+).
output_data(File,L):-
        open(File,'write',S),
        (foreach(X,L),
         param(S) do
            output_item(S,X)
        ),
        close(S).

output_item(S,data_item with [attr1:A1,attr2:A2]):-
        printf(S,"%s %d\n",[A1,A2]).
\end{verbatim}
We can use the same schema for creating tab\index{tab separated files} or comma separated files\index{comma separated files}, which provides a simple interface to spreadsheets\index{spreadsheet} and data base readers\index{data base}.
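For example, a comma separated variant of the item output might look like this (a sketch, reusing the {\it data\_item} structure from above; the \verb|%w| directive prints any term as with {\it write}):
\begin{verbatim}
output_item(S,data_item with [attr1:A1,attr2:A2]):-
        printf(S,"%w,%w\n",[A1,A2]).
\end{verbatim}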

\section{Good and bad data formats}
When defining the data format for an input or output file, we should choose a representation which suits the ECLiPSe application. Table \ref{InputFormats} shows good and bad formats\index{data format}. Prolog terms are very easy to read and to write; simple tabular forms are easy to write, but more complex to read. Comma separated files need a special tokenizer which separates fields by comma characters. The most complex input format is given by a fixed column format\index{fixed column format}\index{format, fixed column}, for example generated by FORTRAN\index{FORTRAN} applications. We should avoid such data formats as input if possible, since they require significant development effort.
\begin{table}[htbp]
\begin{tabular}{|l|r|r|}
\hline
Format & Input & Output \\ \hline
Prolog terms & ++ & ++ \\ \hline
EXDR & ++ & ++ \\ \hline
White space separated & + & ++ \\ \hline
Comma separated & - & ++ \\ \hline
Fixed columns & - - & + \\ \hline
\end{tabular}
\caption{\label{InputFormats}Good and bad input formats}
\end{table}

\section{Report generation}
There is another output format that can be generated quite easily. RiskWise uses a {\it report}\index{report} library, which presents lists of items as HTML\index{HTML} tables in hyperlinked files. This format is very useful to print some data in a human readable form, as it allows some navigation\index{navigation} across files and sorting of tables by different columns. Figure \ref{HTMLReport} shows an example from the RiskWise application.

\begin{figure}[thbp]
\psfig{figure=htmlreport.eps,width=13.7cm}
\caption{\label{HTMLReport}HTML Report}
\end{figure}

%\section{XML/HTML (?)}

%\chapter{Constraints (?)}
%\section{Co-routining}
%\section{Variables}
%\section{Constraints}
%\section{Finding solutions}

\chapter{If it doesn't Work}
\label{ifitdoesntwork}

Suppose we have followed our methodology, and just made some modification of our previously running program. Unfortunately, after the change the program stops working. What do we do next to find the problem and to correct it? We first have to understand what types of errors\index{error} can occur in an ECLiPSe program.

\section{Understanding failure}
We can distinguish five types of failure\index{failure}, each with its own set of causes and possible remedies.

\subsection{Run-time errors}
Run-time errors\index{run time error}\index{error, run time} occur if we call a built-in predicate with a wrong argument pattern. This will usually either be a type mismatch\index{type mismatch}, i.e. using a number where an atom is expected, or an instantiation problem, i.e. passing a variable where a ground value was expected or vice versa. In this case the ECLiPSe system prints out the offending call together with an error message\index{error message} indicating the problem. If we are lucky, we can identify the problem immediately; otherwise we may have to look up the documentation of the built-in to understand the problem.

In the following example {\it bug0}, we have called the predicate {\it open/3}\index{open/3} with an incorrect first argument.
\pagebreak
\begin{alltt}
:-export(bug0/0).
bug0:-
        open(1,read,S), % wrong
        read(S,X),
        writeln(X),
        close(S).
\end{alltt}

When we run the query {\it bug0}, the ECLiPSe system prints out the message:
\index{type error}
\begin{verbatim}
type error in open(1, read, S)
\end{verbatim}
In general, run-time errors are quite simple to locate and to fix. But the system does not indicate which particular call of a built-in is to blame. We may need to use the tracer to find out which of dozens of calls to the predicate {\it is/2}, for example, may have caused a particular problem. There are several things that may help to avoid this tedious tracing.
\begin{itemize}
\item If the call to the predicate contains some variable name, we may be able to locate the problem by searching for that name in the source file(s).
\item Well-placed logging messages may also be helpful; they indicate which program part is responsible.
\item If we have only applied a small change to a previously working program, then it is likely that the error is located close to the change we have made. Testing the program after each change will speed up development.
\end{itemize}

\subsection{Environment limits}
\index{environment limit}These errors occur when we hit some limit of the run-time system, typically exceeding available memory\index{memory}. An example is the run-away recursion in the program {\it bug1} below:

\begin{alltt}
:-export(bug1/0).
bug1:-
        lp(X),  % wrong
        writeln(X).

lp([_H|T]):-
        lp(T).
lp([]).
\end{alltt}

After some seconds, the system returns an error message of the form:
\begin{footnotesize}
\index{Overflow}
\begin{verbatim}
*** Overflow of the local/control stack!
You can use the "-l kBytes" (LOCALSIZE) option to have a larger stack.
Peak sizes were: local stack 13184 kbytes, control stack 117888 kbytes
\end{verbatim}
\end{footnotesize}
Typical error sources are passing free variables into a recursion, so that the terminating clause is never selected, or the use of an infinite data structure.
\subsection{Failure}
Probably the most common error symptom is a failure\index{failure} of the application. Instead of producing an answer, the system returns 'no'\index{no}. This is caused by:
\begin{itemize}
\item Calling a user-defined predicate \index{user defined predicate}with the wrong calling pattern. If none of the rules of a predicate matches the calling pattern, then the predicate will fail. Of course, quite often this is intentional (tests for some condition, for example). It becomes a problem if the calling predicate does not handle the failure properly. We should know for each predicate that we define if it can fail, and make sure that any calling predicate handles that situation.
\item Calling a built-in predicate with arguments such that it fails unexpectedly. This is much less likely, but some built-in predicates can fail if the wrong arguments are passed.
\item Wrong control structure. A common problem is to miss the alternative branch in an {\it if-then-else construct}\index{if then else}. If the condition part fails, then the whole call fails. We must always add an {\it else} \index{else}part, perhaps with an empty statement {\it true}\index{true}.
\end{itemize}
The best way of finding failures is by code inspection, helped by logging messages which indicate the general area of the failure. If this turns out to be too complex, we may have to use the tracer.
\subsection{Wrong answer}
More rare is the situation where a ``wrong'' answer \index{wrong answer}is returned. This means that the program computes a result which we do not accept as an answer. The cause of the problem typically lies in a mismatch between the intended behaviour of the program (what we think it is doing) and the actual behaviour.

In a constraint problem, we then have to identify which constraint(s) should eliminate this answer and why that did not happen. There are two typical scenarios.

\begin{itemize}
\item The simpler one is caused by missing a constraint altogether, or by misinterpreting the meaning of a constraint that we did include.
\item The more complex scenario is a situation where the constraint did not trigger, and therefore was not activated, somewhere in the program execution.
\end{itemize}
We can often distinguish these problems by re-stating the constraint a second time after the wrong answer has been found.

If the constraint still accepts the solution, then we can assume that it simply does not exclude this solution. We will have to reconsider the declarative definition of the constraint to reject this wrong answer.

If the program fails when the constraint is re-stated, then we have a problem with the dynamic execution of the constraints in the constraint solver. That normally is a much more difficult problem to solve, and may involve the constraint engine itself.
\subsection{Missing answer}
Probably the most complex problem is a missing answer\index{missing answer}, i.e. the program produces correct answers, but not all of them. This assumes that we know the missing answer, or that we know how many answers there should be to a particular problem and we get a different count when we try to generate them all. In the first case we can try to instantiate our problem variables with the missing answer before stating the constraints and then check where this solution is rejected.

This type of problem often occurs when we develop our own constraints and have made a mistake in one of the propagation rules\index{propagation rules}. It should not occur if we use a pre-defined constraint solver.

\section{Checking program points}
A very simple debugging technique is the logging \index{logging}of certain predicate calls as they occur in the program.
\begin{verbatim}
:-export(bug2/0).

bug2:-
        ...
        generate_term(Term),
        analyze_term(Term,Result),
        ...
\end{verbatim}
Suppose we assume that the predicate {\it analyze\_term} is responsible for a problem. We can then simply add a {\it writeln} call before and/or after the predicate call to print out the suspected predicate. The output before that call should show all input arguments; the output after the call should also show the output results.
\begin{verbatim}
:-export(bug2/0).

bug2:-
        ...
        generate_term(Term),
        writeln(analyze_term(Term,Result)),
        analyze_term(Term,Result),
        writeln(analyze_term(Term,Result)),
        ...
\end{verbatim}
Used sparingly for suspected problems, this technique can avoid the use of the debugger and at the same time give a clear indication of the problem. The ECLiPSe system will normally restrict the output of a term to a certain complexity, so that long lists are not printed completely. This can be influenced by the {\it print\_depth}\index{print\_depth} flag of {\it set\_flag}\index{set\_flag}.

Such messages can be written to the {\it log\_output} stream (or a user-defined stream), which can later be discarded by
\begin{verbatim}
set_stream(log_output,null)
\end{verbatim}
or which can be directed to a log file. This can be useful in ``live'' code, in order to capture information about problems in failure cases.
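Redirecting the messages to a log file can be sketched as follows (the predicate name {\it open\_log} is our own; {\it set\_stream/2} redefines what the stream alias refers to):
\begin{verbatim}
open_log(File):-
        open(File,append,S),
        set_stream(log_output,S).
\end{verbatim}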
\section{Debugger}
Ideally, we can figure out all problems without running the application in the debugger\index{debugger}. But at some point we may need to understand what the system is doing in more detail. We can then start the ECLiPSe tracer\index{tracer} tool from the {\it Tools} menu. Figure \ref{Tracer} shows a typical output. In the top window we see the current stack of predicate calls, in the bottom we see a history of the program execution.
\begin{figure}[thbp]
\psfig{figure=tracer.eps,width=13.7cm}
\caption{\label{Tracer}ECLiPSe Tracer}
\end{figure}

\subsection{Tracing}
The three buttons {\it Creep}\index{creep}, {\it Skip}\index{skip}, {\it Leap}\index{leap} show the main commands of the tracer. The creep operation single-steps to the next statement in the program. The skip operation executes the current query and stops again when that query either succeeds or fails. The leap operation continues execution until it reaches a spy point\index{spy point}, a predicate call for which we have indicated some interest before.

A normal debugging session consists of a sequence of leap, skip and creep operations to reach an interesting part of the program, and then a more detailed step-by-step analysis of what is happening. It is pointless and very time-consuming to single-step through a part of the program that we have already checked, but we have to be careful not to skip over the interesting part of the program.

\subsection{Jumping to a program point}
In a large program, it may be difficult to leap directly to the interesting part of the program. But we may have to repeat this operation several times, if we repeatedly leap/skip over an interesting statement. We can use the invocation number of a statement to jump\index{jump} to this exact place in the execution again. The invocation number\index{invocation number} is printed in parentheses at the beginning of each trace line. By re-starting the debugger, copying this number into the text field to the right of the button {\it To Invoc:}\index{To Invoc} and then pressing this button, we can directly jump to this location.

Unfortunately, jumping to an invocation number can be quite slow in a large program. The debugger has to scan each statement to check its invocation number. It can be faster (but more tedious) to use skip and leap commands to reach a program point\footnote{Experiment! Your mileage may vary!}.

\chapter{Correctness and Performance}
\label{correctnessandperformance}

Design and implementation aim at producing a working and maintainable program with minimal effort. But how do we check that a program is actually working?

We typically define what a correct program should do only in an informal way, stating constraints that should be satisfied, data formats that should be accepted, etc. Proving correctness\index{correctness} would require a much more formal approach, where we first agree on a specification in a formal language and then prove that the implementation satisfies that specification. But even that does not protect us from a misunderstanding in defining the spec itself, which may not reflect the wishes of the user.

Lacking this formal approach, the best hope of checking correctness of a program lies in exercising it with different types of tests.

Related to the question of correctness is the issue of performance\index{performance}. The program should not only produce correct results, but must produce them within a limited time. The correct result which is available after five years of computation is (most of the time) as useless as the wrong result obtained in five seconds.

\section{Testing}
Testing\index{testing} is often one of the activities that gets dropped when delivery deadlines get closer. This is a sad mistake, as any problem that is not recognized immediately will take much more effort to find and correct later on. In an ideal world, tests for each component are defined before implementation and cover each level of the design. In the real world we must find the right compromise between spending time on defining tests in the beginning and time spent on developing the application core.

A test may have different objectives:
\begin{itemize}
\item The weakest form is a test that is run to check if a program works with the test data, i.e. produces some answer without generating an error. We have suggested this form of test above to check an (incomplete) early implementation.
\item A more complex test can be used to exercise all (or nearly all) of the program code. The test data must contain enough variation to force all alternative parts in the program. This can be achieved either by accumulating a large number of tests or by designing particular tests to handle special cases.
\item Another test form is {\it regression}\index{regression testing} testing. This form of testing is used to evaluate results of a program run after each modification of the code. Results of previous runs are used to check the current results. This does not check for correctness, but rather for the same answer before and after a change.
\item {\it Correctness}\index{correctness} of a program can only be checked if we have obtained the expected results of a test in an independent way, either by hand or by a trusted program, or simply by re-using benchmark sets from a third party.
\item We may also be using some tests to do {\it performance testing}\index{performance testing}, i.e. to check that the program finds the solution within given limits on execution time and/or system resources. Clearly, this only makes sense if we also check the result for correctness.
\end{itemize}

It is important to realize the limitations of the tests that we perform. If we have never checked that the solutions produced by a regression test are correct, then they will most likely {\it not} be correct. We only know that they are still the same solutions that the program has found before.

Unfortunately, testing of combinatorial problem\index{combinatorial problems} solvers is even more complex than testing of ``normal'' programs. Often, a given problem has more than one solution, and it is not clear in which order the answers will be produced. Providing one solution may not be enough, while there may be millions of solutions in total.

\section{Profiling}
The line profiling\footnote{The profiling facility is now available as one of the ECLiPSe libraries, in the library {\it coverage}. The actual output looks slightly different.}\index{line profiling}\index{profiling} tool that we already mentioned above can be used to check the coverage\index{coverage} of the program with some query. We can easily find lines that are not executed at all, indicating the need for another test case. If we cannot construct a test which executes a program segment, then this indicates some problem.

We can use the same profile to find program parts which are executed very often; this can provide hints for optimization. Normally it is better not just to concentrate on those parts that are called very often, but also on those which are calling these predicates.

\begin{figure}[thbp]
\psfig{figure=profile.eps,width=13.7cm}
\caption{\label{Profiling}Profiling Example}
\end{figure}

Figure \ref{Profiling} shows the output of the profiling tool. Each line is marked with the number of times it is executed (the first number, in green) and the number of times we backtrack over this line (the second number, in red). In the example shown, there are two branches of if-then-else constructs which have not been selected by the test example.
\section{Reviewing}
A standard technique to check for consistency in the development is a reviewing\index{reviewing} process. Each module of an application goes through a review process, where persons not connected with the development check the deliverables (source code and documentation) for completeness and consistency. This review process serves multiple purposes:
\begin{itemize}
\item It forces the developer to finish a version of the program with a certain polish.
\item It helps to find inconsistencies or missing explanations in the documentation.
\item It encourages ``best practice''\index{best practice} in the ECLiPSe application development, bringing together experts from different application teams.
\item It helps spread knowledge about applications and their sub-systems, so that re-use opportunities are recognized earlier.
\end{itemize}
On the other hand, a number of problems are normally not recognized by a review:
\begin{itemize}
\item The review checks one version of an application at a given time point. It does not guarantee that changes and modifications after the review are performed to the same standard.
\item A successful code review does not imply that the application code is correct. Reviewers might sometimes note suspect code, but a review cannot replace proper testing.
\item If nobody actually checks the code, then the whole process becomes useless overhead. This means that resources must be properly allocated to the review; it is not a task that reviewers can undertake in their spare time.
\item Comments and change requests in the review must be recorded and acted on. A formal review comment form may be used; alternatively, we might work with detailed and complete minutes.
\end{itemize}

\section{Issues to check for}
\subsection{Unwanted choicepoints}
It is important to remove all unwanted choicepoints\index{choicepoint}\index{unwanted choicepoint} in an application, since they are a common source of errors. In addition, a choicepoint requires a significant amount of memory, so that leaving unwanted choicepoints is also a performance problem.

For most predicates, in particular those following one of the programming concepts in chapter \ref{programmingconcepts}, it is quite simple to avoid unwanted choicepoints. Other predicates may need more effort. We can use the ECLiPSe debugger to detect if there are any unwanted choicepoints. In the trace output for the {\it EXIT} \index{EXIT}port ECLiPSe will print a \verb+*+ if the predicate leaves a choicepoint. We can easily check a complete query by skipping over its execution and checking the exit port. If a choicepoint is indicated, we can re-run the query to locate the missed choicepoint.
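As a small illustration (not from RiskWise), the first version below leaves a choicepoint when called with a one-element list, since the head of the second clause also matches; the cut in the second version removes it:
\begin{verbatim}
% leaves a choicepoint on exit for last_element([a],X)
last_element([X],X).
last_element([_|T],X):-
        last_element(T,X).

% deterministic version
last_element_det([X],X):-
        !.
last_element_det([_|T],X):-
        last_element_det(T,X).
\end{verbatim}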

\subsection{Open streams}
A common error is to open some file without closing it afterwards. This typically happens if it is opened in one part of the program and closed in another part. Normally everything works fine, but under some exceptional circumstances the second part is not executed, leaving the file open. This can consume system resources quite quickly, leading to follow-on problems. It is a good idea to verify that every call to {\it open/3} is followed by a {\it close/1}, even if some other part of the program unexpectedly fails.
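One way to sketch this (assuming ISO-style {\it catch/3} and {\it throw/1}; {\it process/1} is a placeholder for the actual work on the stream) is to close the stream on success, failure and exceptions alike:
\begin{verbatim}
process_file(File):-
        open(File,read,S),
        ( catch(process(S),Tag,(close(S),throw(Tag))) ->
            close(S)
        ;
            close(S),
            fail
        ).
\end{verbatim}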

\subsection{Modified global state}
We have already stated that changing global state should be used as a last resort, not as a common practice. But if the program modifies dynamic predicates, creates global variables or uses {\it record}, then we must be very careful to restore the state properly, i.e. remove dynamic predicates after use, reset global variables etc.

\subsection{Delayed goals}
Normally, a solver should not leave delayed goals after it is finished. For some solvers, we can enforce this by instantiating all solver variables in a solution; others require more complex actions. If a query returns with delayed goals, this should be treated as an error that needs to be investigated.
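For a finite domain solver, for example, instantiating all variables with a search routine avoids leftover delayed goals. A minimal sketch using {\it lib(ic)} (the predicate \verb+solve/1+ is invented for illustration):

\begin{verbatim}
:- lib(ic).

% all variables are labeled before returning, so no
% constraint remains behind as a delayed goal
solve([X, Y]) :-
        [X, Y] :: 1..10,
        X #< Y,
        labeling([X, Y]).
\end{verbatim}

If the call to \verb+labeling/1+ were omitted, the query would succeed with the constraint \verb+X #< Y+ still pending as a delayed goal.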
\appendix
\chapter{Style Guide}
\label{styleguide}

\section{Style rules}
\begin{enumerate}
\item   There is one directory containing all code and its documentation (using sub-directories).
\item  Filenames\index{file name} are of form [a-z][a-z\_]+ with extension .ecl .
\item  One file per module, one module per file.
\item  Each module is documented with comment directives.
\item  All required interfaces are defined in separate spec files which are included in the source with a {\it comment include} directive. This helps to separate specification and implementation code.
\item  The actual data of the problem is loaded dynamically from the Java interface; for stand-alone testing data files from the data directory are included in the correct modules.
\item  The file name is equal to the module name.
\item  Predicate names\index{predicate name} are of form [a-z][a-z\_]*[0-9]* . Underscores are used to separate words. Digits should only be used at the end of the name. Words should be English.
\item  Variable names\index{variable name} are of form [A-Z\_][a-zA-Z]*[0-9]* . Separate words start with capital letters. Words should be English. The plural should be used for lists and other collections. Digits should only be used at the end to distinguish different versions of the same conceptual thing.
\item  The code should not contain singleton variables\index{singleton}, unless their names start with \_. The final program may not generate singleton warnings.
\item  Each exported predicate is documented with a comment directive\index{comment directive}.
\item  Clauses for a predicate must be consecutive.
\item  Base clauses should be stated before recursive cases.
\item  Input arguments should be placed before output arguments.
\item  Predicates which are not exported should be documented with a single line comment. It is possible to use comment directives instead.
\item  The sequence of predicates in a file is top-down with a (possibly empty) utility section at the end.
\item  All structures are defined in one file (e.g. flow\_structures.ecl) and are documented with comment directives.
\item  Terms should not be used; instead use named structures\index{named structure}.
\item  When possible, use do loops instead of recursion.
\item  When possible, use separate predicates instead of disjunction\index{disjunction} or if-then-else.
\item  There should be no nested if-then-else\index{if then else} construct in the code.
\item  All input data should be converted into structures at the beginning of the program; there should be no direct access to the data afterwards. 
\item  All numeric constants\index{numeric constants} should be parametrized via facts. There should be no numeric values (other than 0 and 1) in rules.
\item The final code should not use failure-loops\index{failure loop}; they are acceptable for debugging or testing purposes.
\item  Cuts (!) \index{cut} should be inserted only to eliminate clearly defined choice points.
\item  The final code may not contain open choice points, except for alternative solutions that still can be explored. This is verified with the tracer tool in the debugger.
\item  Customizable data facts should always be at the end of a file; their use is deprecated.
\item  The predicate member/2\index{member/2} should only be used where backtracking is required; otherwise use memberchk/2\index{memberchk/2} to avoid hidden choice points.
\item  The final code may not contain dead code\index{dead code} except in the file/module unsupported.ecl. This file should contain all program pieces which are kept for information/debugging, but which are not part of the deliverable.
\item  The test set(s) should exercise 100 percent of the final code. Conformity is checked with the line coverage profiler.
\item  Explicit unification (=/2) should be replaced with unification inside terms where possible.
\item  There is a top-level file (document.ecl) which can be used to generate all on-line documentation automatically.
\item  Don't use ','/2 to make tuples.
\item  Don't use lists to make tuples.
\item  Avoid append/3 where possible, use accumulators instead.
\end{enumerate}
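Several of these rules (named structures, do loops instead of recursion, single line comments) can be seen together in a small sketch; the structure and predicate are invented for illustration:

\begin{verbatim}
:- local struct(task(name, start, duration)).

% total_duration(+Tasks, -Total): sum the durations of a list
% of task structures, using a do loop with an accumulator
% instead of explicit recursion
total_duration(Tasks, Total) :-
        (foreach(Task, Tasks),
         fromto(0, In, Out, Total) do
            Task = task with [duration:D],
            Out is In + D
        ).
\end{verbatim}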

\section{Module structure}
The general form of a module is:

\begin{enumerate}
\item  module definition
\item  module comment or inclusion of a spec file
\item  exported/reexported predicates
\item  used modules
\item  used libraries
\item  local variable definitions 
\item  other global operations and settings
\item  predicate definitions
\end{enumerate}
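Following this order, a minimal module skeleton (all names are invented for illustration) looks like:

\begin{verbatim}
:- module(flow_report).

:- comment(summary, "Reporting component of the flow application").

:- export(report/2).

:- use_module(flow_structures).
:- lib(hash).

:- local variable(report_count).

% predicate definitions follow
report(Input, Output) :-
        Output = Input.         % placeholder body
\end{verbatim}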

\section{Predicate definition}
The general form of a predicate definition is:

\begin{enumerate}
\item  predicate comment directive
\item  mode declaration
\item  predicate body
\end{enumerate}
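A small invented example in this form:

\begin{verbatim}
:- comment(double_all/2, [
        summary: "double every element of a list of numbers",
        amode: double_all(+, -)
   ]).
:- mode double_all(+, -).

double_all(List, Doubled) :-
        (foreach(X, List),
         foreach(Y, Doubled) do
            Y is 2 * X
        ).
\end{verbatim}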

\chapter{Layout Rules}
\label{layoutrules}
In formulating these layout rules, we have made certain arbitrary choices between different sensible formatting variants. While these choices work well for us, you may want to adopt a slightly different style. As long as it is applied consistently (and agreed within a team), you should follow your own preferences.
\section{General guidelines}
Code should be indented consistently.  Tabs should be every 8 spaces.
Indentation should be one tab per level of indentation\index{indentation}, except for
highly indented code which may be indented at 4 spaces per level of
indentation after the first.

Each line should not extend beyond 79 characters, unless this is necessary
due to the use of long string literals.  If a statement or predicate call
is too long, continue it on the next line indented two levels deeper.
If the statement or call extends over more than two lines, then make sure
the subsequent lines are indented to the same depth as the second line.
For example:

\begin{verbatim}
Here is A_really_long_statement_that_does_not_fit +
           On_one_line + In_fact_it_doesnt_even_fit +
           On_two_lines.
\end{verbatim}
Don't put more than one statement or call on a line.

Put spaces after commas in comma-separated lists.  Put spaces either
side of infix operators.  In both cases, this makes them easier to read,
particularly in expressions containing long names or names with underscores.
(There are some exceptions, such as module qualification\index{module qualification} \verb^foo:bar^ and
functor/arity pairings \verb^foo/3^.  But not unification \verb^X = Y^ or list consing
\verb^[Head | Tail]^.)

Make judicious use of single blank lines in clause bodies to separate logical
components.


\section{Predicates and clauses}

Keep all clauses of a predicate\index{predicate}\index{clause} in one place.  Leave at least one
blank line between the clauses of one predicate and the next (particularly
if they have similar names), since otherwise they may at first glance
appear to all belong to the same predicate (of course, this never happens
because there's a comment before every predicate, right?).  Conversely,
it is best not to leave blank lines between the clauses of a single predicate
(particularly if done inconsistently), since otherwise at first glance
they may appear to be from different predicates.

Clause heads should be flush against the left margin.  As well as making
them easier to pick out visually, this makes it easier to grep for the
definitions of predicates (as opposed to their invocations).  The head/body
separator `:-' should follow the head on the same line.  The body of the
clause should then commence on the next line, indented one tab stop.

\begin{verbatim}
non_overlap(Start1, Dur1, Start2, _Dur2):-
        Start1 + Dur1 #=< Start2.
non_overlap(Start1, _Dur1, Start2, Dur2):-
        Start1 #>= Start2 + Dur2.
\end{verbatim}

\section{If-then-elses}

If-then-elses\index{if then else, formatting} should always be parenthesised.  Always include the else part, even if you don't think
it's required.  Always put semicolons at the start of a new line, aligned
with the opening and closing parentheses.  Example:

\begin{verbatim}
( test1 ->
       goal1
;
       goal2
),
\end{verbatim}
\section{Disjunctions}

Disjunctions\index{disjunction, formatting} should be formatted in a similar way.  Example:

\begin{verbatim}
(
        goal1
;
        goal2
).
\end{verbatim}

\section{Do loops}

Do loops\index{do loop, formatting} should also always be parenthesised.  Loop conditions should be
listed one per line, starting after the opening parenthesis
and indented one character.  The `do' keyword should be at the end of the last condition.  The body of the
loop should then follow on the next line, again indented.  Example:

\begin{verbatim}
(for(J, I + 1, N),
 param(A, I) do
        Diff is J - I,
        A[I] #\= A[J],
        A[I] #\= A[J] + Diff,
        A[I] #\= A[J] - Diff
).
\end{verbatim}

\section{Comments}

Every predicate should have a one line comment\index{comment} documenting what it does,
and all module interfaces should be properly documented with a comment directive\index{comment directive}.

If an individual clause is long, it should be broken into sections, and
each section should have a {\it block comment}\index{block comment} describing what it does; blank
lines should be used to show the separation into sections.  Comments should
precede the code to which they apply, rather than following it.

\begin{alltt}
%
% This is a block comment; it applies to the code in the next
% section (up to the next blank line).
%
        blah,
        blah,
        blahblah,
        blah,
\end{alltt}

If a particular line or two needs explanation, a {\it line} comment\index{line comment}

\begin{alltt}
        % This is a "line" comment; 
        % it applies to the next line or two
        % of code
        blahblah
\end{alltt}

or an {\it inline} comment\index{inline comment}

\begin{alltt}
        blahblah        % This is an "inline" comment
\end{alltt}

should be used.

\chapter{Core Predicates}
\label{corepredicates}

This section lists essential ECLiPSe built-in and library predicates. These predicates are used in most applications, and should be well understood. There are more than 1000 other built-in and library predicates which are required for specific tasks only.
\section{Modules}
\begin{description}
\item[module directive]\index{module directive} define a module
\item[export directive]\index{export directive} make predicates available outside a module
\item[use\_module directive]\index{use\_module directive} make predicates from another module available
\item[lib directive]\index{lib directive} make predicates from a library available
\item[:] call a predicate from a particular module
\end{description}

\section{Predicate definition}
\begin{description}
\item[mode directive]\index{mode directive} define input/output modes for a predicate
\item[comment directive]\index{comment directive} define comments about a module or a predicate
\end{description}

\section{Control}
\begin{description}
\item[,] conjunction\index{conjunction}, and
\item[;] disjunction\index{disjunction}, or
\item[-\gt] implication\index{implication}, if-then-else\index{if then else} together with disjunction
\item[!] cut\index{cut}, remove choices
\item[call/1] \index{call/1}call argument as a predicate call
\item[once/1] \index{once/1}call argument as a predicate call, find one solution only
\item[not/1] \index{not/1}negation\index{negation}, fail if call to argument succeeds
\item[true/0] \index{true/0}always succeed, empty predicate definition
\item[block/3] \index{block/3}define a block in order to catch exceptions
\item[exit\_block/1] \index{exit\_block/1}jump out of a block
\end{description}

\subsection{Do Loops}
\begin{description}
\item[do] \index{do loop}general iteration operator
\item[foreach] \index{foreach}iterate over each element in a list
\item[foreacharg] \index{foreacharg}iterate over each argument of a term
\item[fromto] \index{fromto}accumulator \index{accumulator}argument
\item[for] \index{for}iterate over all integers between two bounds
\item[param] \index{param}declare parameter variables which are passed into the do-loop
\end{description}
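The iterators can be combined freely; a small invented example using {\it foreach}, {\it fromto} and {\it param} together:

\begin{verbatim}
% scale every element of List by Factor, collecting the
% scaled values and summing them in a single loop
scale_and_sum(List, Factor, Scaled, Sum) :-
        (foreach(X, List),
         foreach(Y, Scaled),
         fromto(0, In, Out, Sum),
         param(Factor) do
            Y is X * Factor,
            Out is In + Y
        ).
\end{verbatim}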

\section{I/O}
\begin{description}
\item[exists/1] \index{exists/1}succeeds if a file exists
\item[open/3] \index{open/3}open a file or a string as a stream
\item[close/1] \index{close/1}close a stream
\item[write/2] \index{write/2}write some term to a stream
\item[nl/1] \index{nl/1}write a new-line to a stream
\item[writeln/2] \index{writeln/2}write a term to a stream, followed by a new-line
\item[writeq/2] \index{writeq/2}write a term in canonical form, so that it can be read back
\item[read/2] \index{read/2}read a term from a stream
\item[read\_string/4] \index{read\_string/4}read a string from a stream
\item[concat\_string/2] \index{concat\_string/2}build a string from a list of components
\item[phrase/3] \index{phrase/3}call DCG \index{DCG}analyzer
\item[read\_exdr/2] \index{read\_exdr/2} read a term from a file in EXDR\index{EXDR} format
\item[write\_exdr/2] \index{write\_exdr/2} write a term to a file in EXDR\index{EXDR} format
\item[flush/1] \index{flush/1} flush an output stream
\end{description}

\section{Arrays}
\begin{description}
\item[dim/2] \index{dim/2}define an array\index{array}; can also be used to find size of an array
\item[subscript/3] \index{subscript/3}get an element of an array
\end{description}
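A brief sketch of both predicates (the array name and dimensions are arbitrary):

\begin{verbatim}
?- dim(A, [3, 4]),            % create a 3x4 array A
   subscript(A, [2, 3], X),   % X is the element at position (2,3)
   dim(A, Dims).              % used in reverse: Dims = [3, 4]
\end{verbatim}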

\section{Hash Tables}
\begin{description}
\item[hash\_create/1] \index{hash\_create/1}create a hash table
\item[hash\_add/3] \index{hash\_add/3}add an item to a hash table
\item[hash\_find/3] \index{hash\_find/3}find if an item is in a hash table
\end{description}
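These predicates come from {\it lib(hash)}; a minimal invented example:

\begin{verbatim}
?- lib(hash),
   hash_create(H),
   hash_add(H, name, "eclipse"),
   ( hash_find(H, name, Value) ->
       writeln(found(Value))
   ;
       writeln(not_found)
   ).
\end{verbatim}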

\section{Arithmetic}
\begin{description}
\item[is/2] \index{is/2}evaluate term
\item[=:= /2] evaluate two terms and compare for equality
\item[=\bsl= /2] evaluate two terms and compare for disequality
\item[\gt /2] evaluate two terms and test for greater than
\item[\gt= /2] evaluate two terms and test for greater or equal
\item[\lt /2] evaluate two terms and test for less than
\item[=\lt /2] evaluate two terms and test for less or equal
\end{description}

\section{Terms and structures}
\begin{description}
\item[struct/1] \index{struct/1}define named structure
\item[with] \index{with}access element(s) in named structure
\item[of] \index{of}find argument position in named structure
\item[functor/3] \index{functor/3}define/analyze term for functor and arity
\item[arg/3] \index{arg/3}access argument of term
\item[compare/3]  \index{compare/3}compare two terms for lexicographical order
\item[= /2] two terms can be unified
\item[== /2] two terms are identical
\item[\bsl= /2] two terms cannot be unified
\item[number/1] \index{number/1}argument is a number
\item[integer/1] \index{integer/1}argument is an integer
\item[atom/1] \index{atom/1}argument is an atom
\item[var/1] \index{var/1}argument is a variable
\item[string/1] \index{string/1}argument is a string
\item[real/1] \index{real/1}argument is a float
\item[compound/1] \index{compound/1}argument is a compound term
\end{description}

\section{List Handling}
\begin{description}
\item[sort/2/4] \index{sort/2/4}sort a list
\item[memberchk/2] \index{memberchk/2}check if an item occurs in a list
\item[length/2] \index{length/2}find the length of a list or create a list with given length
\item[findall/3] \index{findall/3}find all solutions to a query and return a list of all answers
\end{description}
\printindex
\end{document}