1\part{Advanced Methods}
2Before continuing, it might be wise to try some of your own examples in
3Isabelle, reinforcing your knowledge of the basic functions.
4
5Look through {\em Isabelle's Object-Logics\/} and try proving some
6simple theorems.  You probably should begin with first-order logic
7(\texttt{FOL} or~\texttt{LK}).  Try working some of the examples provided,
8and others from the literature.  Set theory~(\texttt{ZF}) and
9Constructive Type Theory~(\texttt{CTT}) form a richer world for
10mathematical reasoning and, again, many examples are in the
11literature.  Higher-order logic~(\texttt{HOL}) is Isabelle's most
12elaborate logic.  Its types and functions are identified with those of
13the meta-logic.
14
15Choose a logic that you already understand.  Isabelle is a proof
16tool, not a teaching tool; if you do not know how to do a particular proof
17on paper, then you certainly will not be able to do it on the machine.
18Even experienced users plan large proofs on paper.
19
20We have covered only the bare essentials of Isabelle, but enough to perform
21substantial proofs.  By occasionally dipping into the {\em Reference
22Manual}, you can learn additional tactics, subgoal commands and tacticals.
23
24
25\section{Deriving rules in Isabelle}
26\index{rules!derived}
27A mathematical development goes through a progression of stages.  Each
28stage defines some concepts and derives rules about them.  We shall see how
29to derive rules, perhaps involving definitions, using Isabelle.  The
30following section will explain how to declare types, constants, rules and
31definitions.
32
33
34\subsection{Deriving a rule using tactics and meta-level assumptions} 
35\label{deriving-example}
36\index{examples!of deriving rules}\index{assumptions!of main goal}
37
38The subgoal module supports the derivation of rules, as discussed in
39\S\ref{deriving}.  When the \ttindex{Goal} command is supplied a formula of
40the form $\List{\theta@1; \ldots; \theta@k} \Imp \phi$, there are two
41possibilities:
42\begin{itemize}
43\item If all of the premises $\theta@1$, \ldots, $\theta@k$ are simple
44  formulae{} (they do not involve the meta-connectives $\Forall$ or
45  $\Imp$) then the command sets the goal to be 
46  $\List{\theta@1; \ldots; \theta@k} \Imp \phi$ and returns the empty list.
47\item If one or more premises involves the meta-connectives $\Forall$ or
48  $\Imp$, then the command sets the goal to be $\phi$ and returns a list
49  consisting of the theorems ${\theta@i\;[\theta@i]}$, for $i=1$, \ldots,~$k$.
50  These meta-level assumptions are also recorded internally, allowing
51  \texttt{result} (which is called by \texttt{qed}) to discharge them in the
52  original order.
53\end{itemize}
54Rules that discharge assumptions or introduce eigenvariables have complex
55premises, and the second case applies.  In this section, many of the
56theorems are subject to meta-level assumptions, so we make them visible by by setting the 
57\ttindex{show_hyps} flag:
58\begin{ttbox} 
59set show_hyps;
60{\out val it = true : bool}
61\end{ttbox}
62
63Now, we are ready to derive $\conj$ elimination.  Until now, calling \texttt{Goal} has
64returned an empty list, which we have ignored.  In this example, the list
65contains the two premises of the rule, since one of them involves the $\Imp$
66connective.  We bind them to the \ML\ identifiers \texttt{major} and {\tt
67  minor}:\footnote{Some ML compilers will print a message such as {\em binding
68    not exhaustive}.  This warns that \texttt{Goal} must return a 2-element
69  list.  Otherwise, the pattern-match will fail; ML will raise exception
70  \xdx{Match}.}
71\begin{ttbox}
72val [major,minor] = Goal
73    "[| P&Q;  [| P; Q |] ==> R |] ==> R";
74{\out Level 0}
75{\out R}
76{\out  1. R}
77{\out val major = "P & Q  [P & Q]" : thm}
78{\out val minor = "[| P; Q |] ==> R  [[| P; Q |] ==> R]" : thm}
79\end{ttbox}
80Look at the minor premise, recalling that meta-level assumptions are
81shown in brackets.  Using \texttt{minor}, we reduce $R$ to the subgoals
82$P$ and~$Q$:
83\begin{ttbox}
84by (resolve_tac [minor] 1);
85{\out Level 1}
86{\out R}
87{\out  1. P}
88{\out  2. Q}
89\end{ttbox}
90Deviating from~\S\ref{deriving}, we apply $({\conj}E1)$ forwards from the
91assumption $P\conj Q$ to obtain the theorem~$P\;[P\conj Q]$.
92\begin{ttbox}
93major RS conjunct1;
94{\out val it = "P  [P & Q]" : thm}
95\ttbreak
96by (resolve_tac [major RS conjunct1] 1);
97{\out Level 2}
98{\out R}
99{\out  1. Q}
100\end{ttbox}
101Similarly, we solve the subgoal involving~$Q$.
102\begin{ttbox}
103major RS conjunct2;
104{\out val it = "Q  [P & Q]" : thm}
105by (resolve_tac [major RS conjunct2] 1);
106{\out Level 3}
107{\out R}
108{\out No subgoals!}
109\end{ttbox}
110Calling \ttindex{topthm} returns the current proof state as a theorem.
111Note that it contains assumptions.  Calling \ttindex{qed} discharges
112the assumptions --- both occurrences of $P\conj Q$ are discharged as
113one --- and makes the variables schematic.
114\begin{ttbox}
115topthm();
116{\out val it = "R  [P & Q, P & Q, [| P; Q |] ==> R]" : thm}
117qed "conjE";
118{\out val conjE = "[| ?P & ?Q; [| ?P; ?Q |] ==> ?R |] ==> ?R" : thm}
119\end{ttbox}
120
121
122\subsection{Definitions and derived rules} \label{definitions}
123\index{rules!derived}\index{definitions!and derived rules|(}
124
125Definitions are expressed as meta-level equalities.  Let us define negation
126and the if-and-only-if connective:
127\begin{eqnarray*}
128  \neg \Var{P}          & \equiv & \Var{P}\imp\bot \\
129  \Var{P}\bimp \Var{Q}  & \equiv & 
130                (\Var{P}\imp \Var{Q}) \conj (\Var{Q}\imp \Var{P})
131\end{eqnarray*}
132\index{meta-rewriting}%
133Isabelle permits {\bf meta-level rewriting} using definitions such as
134these.  {\bf Unfolding} replaces every instance
135of $\neg \Var{P}$ by the corresponding instance of ${\Var{P}\imp\bot}$.  For
136example, $\forall x.\neg (P(x)\conj \neg R(x,0))$ unfolds to
137\[ \forall x.(P(x)\conj R(x,0)\imp\bot)\imp\bot.  \]
138{\bf Folding} a definition replaces occurrences of the right-hand side by
139the left.  The occurrences need not be free in the entire formula.
140
141When you define new concepts, you should derive rules asserting their
142abstract properties, and then forget their definitions.  This supports
143modularity: if you later change the definitions without affecting their
144abstract properties, then most of your proofs will carry through without
145change.  Indiscriminate unfolding makes a subgoal grow exponentially,
146becoming unreadable.
147
148Taking this point of view, Isabelle does not unfold definitions
149automatically during proofs.  Rewriting must be explicit and selective.
150Isabelle provides tactics and meta-rules for rewriting, and a version of
151the \texttt{Goal} command that unfolds the conclusion and premises of the rule
152being derived.
153
154For example, the intuitionistic definition of negation given above may seem
155peculiar.  Using Isabelle, we shall derive pleasanter negation rules:
156\[  \infer[({\neg}I)]{\neg P}{\infer*{\bot}{[P]}}   \qquad
157    \infer[({\neg}E)]{Q}{\neg P & P}  \]
158This requires proving the following meta-formulae:
159$$ (P\Imp\bot)    \Imp \neg P   \eqno(\neg I) $$
160$$ \List{\neg P; P} \Imp Q.       \eqno(\neg E) $$
161
162
163\subsection{Deriving the $\neg$ introduction rule}
164To derive $(\neg I)$, we may call \texttt{Goal} with the appropriate formula.
165Again, the rule's premises involve a meta-connective, and \texttt{Goal}
166returns one-element list.  We bind this list to the \ML\ identifier \texttt{prems}.
167\begin{ttbox}
168val prems = Goal "(P ==> False) ==> ~P";
169{\out Level 0}
170{\out ~P}
171{\out  1. ~P}
172{\out val prems = ["P ==> False  [P ==> False]"] : thm list}
173\end{ttbox}
174Calling \ttindex{rewrite_goals_tac} with \tdx{not_def}, which is the
175definition of negation, unfolds that definition in the subgoals.  It leaves
176the main goal alone.
177\begin{ttbox}
178not_def;
179{\out val it = "~?P == ?P --> False" : thm}
180by (rewrite_goals_tac [not_def]);
181{\out Level 1}
182{\out ~P}
183{\out  1. P --> False}
184\end{ttbox}
185Using \tdx{impI} and the premise, we reduce subgoal~1 to a triviality:
186\begin{ttbox}
187by (resolve_tac [impI] 1);
188{\out Level 2}
189{\out ~P}
190{\out  1. P ==> False}
191\ttbreak
192by (resolve_tac prems 1);
193{\out Level 3}
194{\out ~P}
195{\out  1. P ==> P}
196\end{ttbox}
197The rest of the proof is routine.  Note the form of the final result.
198\begin{ttbox}
199by (assume_tac 1);
200{\out Level 4}
201{\out ~P}
202{\out No subgoals!}
203\ttbreak
204qed "notI";
205{\out val notI = "(?P ==> False) ==> ~?P" : thm}
206\end{ttbox}
207\indexbold{*notI theorem}
208
209There is a simpler way of conducting this proof.  The \ttindex{Goalw}
210command starts a backward proof, as does \texttt{Goal}, but it also
211unfolds definitions.  Thus there is no need to call
212\ttindex{rewrite_goals_tac}:
213\begin{ttbox}
214val prems = Goalw [not_def]
215    "(P ==> False) ==> ~P";
216{\out Level 0}
217{\out ~P}
218{\out  1. P --> False}
219{\out val prems = ["P ==> False  [P ==> False]"] : thm list}
220\end{ttbox}
221
222
223\subsection{Deriving the $\neg$ elimination rule}
224Let us derive the rule $(\neg E)$.  The proof follows that of~\texttt{conjE}
225above, with an additional step to unfold negation in the major premise.
226The \texttt{Goalw} command is best for this: it unfolds definitions not only
227in the conclusion but the premises.
228\begin{ttbox}
229Goalw [not_def] "[| ~P;  P |] ==> R";
230{\out Level 0}
231{\out [| ~ P; P |] ==> R}
232{\out  1. [| P --> False; P |] ==> R}
233\end{ttbox}
234As the first step, we apply \tdx{FalseE}:
235\begin{ttbox}
236by (resolve_tac [FalseE] 1);
237{\out Level 1}
238{\out [| ~ P; P |] ==> R}
239{\out  1. [| P --> False; P |] ==> False}
240\end{ttbox}
241%
242Everything follows from falsity.  And we can prove falsity using the
243premises and Modus Ponens:
244\begin{ttbox}
245by (eresolve_tac [mp] 1);
246{\out Level 2}
247{\out [| ~ P; P |] ==> R}
248{\out  1. P ==> P}
249\ttbreak
250by (assume_tac 1);
251{\out Level 3}
252{\out [| ~ P; P |] ==> R}
253{\out No subgoals!}
254\ttbreak
255qed "notE";
256{\out val notE = "[| ~?P; ?P |] ==> ?R" : thm}
257\end{ttbox}
258
259
260\medskip
261\texttt{Goalw} unfolds definitions in the premises even when it has to return
262them as a list.  Another way of unfolding definitions in a theorem is by
263applying the function \ttindex{rewrite_rule}.
264
265\index{definitions!and derived rules|)}
266
267
268\section{Defining theories}\label{sec:defining-theories}
269\index{theories!defining|(}
270
271Isabelle makes no distinction between simple extensions of a logic ---
272like specifying a type~$bool$ with constants~$true$ and~$false$ ---
273and defining an entire logic.  A theory definition has a form like
274\begin{ttbox}
275\(T\) = \(S@1\) + \(\cdots\) + \(S@n\) +
276classes      {\it class declarations}
277default      {\it sort}
278types        {\it type declarations and synonyms}
279arities      {\it type arity declarations}
280consts       {\it constant declarations}
281syntax       {\it syntactic constant declarations}
282translations {\it ast translation rules}
283defs         {\it meta-logical definitions}
284rules        {\it rule declarations}
285end
286ML           {\it ML code}
287\end{ttbox}
288This declares the theory $T$ to extend the existing theories
289$S@1$,~\ldots,~$S@n$.  It may introduce new classes, types, arities
290(of existing types), constants and rules; it can specify the default
291sort for type variables.  A constant declaration can specify an
292associated concrete syntax.  The translations section specifies
293rewrite rules on abstract syntax trees, handling notations and
294abbreviations.  \index{*ML section} The \texttt{ML} section may contain
295code to perform arbitrary syntactic transformations.  The main
296declaration forms are discussed below.  There are some more sections
297not presented here, the full syntax can be found in
298\iflabelundefined{app:TheorySyntax}{an appendix of the {\it Reference
299    Manual}}{App.\ts\ref{app:TheorySyntax}}.  Also note that
300object-logics may add further theory sections, for example
301\texttt{typedef}, \texttt{datatype} in HOL.
302
303All the declaration parts can be omitted or repeated and may appear in
304any order, except that the {\ML} section must be last (after the {\tt
305  end} keyword).  In the simplest case, $T$ is just the union of
306$S@1$,~\ldots,~$S@n$.  New theories always extend one or more other
307theories, inheriting their types, constants, syntax, etc.  The theory
308\thydx{Pure} contains nothing but Isabelle's meta-logic.  The variant
309\thydx{CPure} offers the more usual higher-order function application
310syntax $t\,u@1\ldots\,u@n$ instead of $t(u@1,\ldots,u@n)$ in Pure.
311
312Each theory definition must reside in a separate file, whose name is
313the theory's with {\tt.thy} appended.  Calling
314\ttindexbold{use_thy}~{\tt"{\it T\/}"} reads the definition from {\it
315  T}{\tt.thy}, writes a corresponding file of {\ML} code {\tt.{\it
316    T}.thy.ML}, reads the latter file, and deletes it if no errors
317occurred.  This declares the {\ML} structure~$T$, which contains a
318component \texttt{thy} denoting the new theory, a component for each
319rule, and everything declared in {\it ML code}.
320
321Errors may arise during the translation to {\ML} (say, a misspelled
322keyword) or during creation of the new theory (say, a type error in a
323rule).  But if all goes well, \texttt{use_thy} will finally read the file
324{\it T}{\tt.ML} (if it exists).  This file typically contains proofs
325that refer to the components of~$T$.  The structure is automatically
326opened, so its components may be referred to by unqualified names,
327e.g.\ just \texttt{thy} instead of $T$\texttt{.thy}.
328
329\ttindexbold{use_thy} automatically loads a theory's parents before
330loading the theory itself.  When a theory file is modified, many
331theories may have to be reloaded.  Isabelle records the modification
332times and dependencies of theory files.  See
333\iflabelundefined{sec:reloading-theories}{the {\em Reference Manual\/}}%
334                 {\S\ref{sec:reloading-theories}}
335for more details.
336
337
338\subsection{Declaring constants, definitions and rules}
339\indexbold{constants!declaring}\index{rules!declaring}
340
341Most theories simply declare constants, definitions and rules.  The {\bf
342  constant declaration part} has the form
343\begin{ttbox}
344consts  \(c@1\) :: \(\tau@1\)
345        \vdots
346        \(c@n\) :: \(\tau@n\)
347\end{ttbox}
348where $c@1$, \ldots, $c@n$ are constants and $\tau@1$, \ldots, $\tau@n$ are
349types.  The types must be enclosed in quotation marks if they contain
350user-declared infix type constructors like \texttt{*}.  Each
351constant must be enclosed in quotation marks unless it is a valid
352identifier.  To declare $c@1$, \ldots, $c@n$ as constants of type $\tau$,
353the $n$ declarations may be abbreviated to a single line:
354\begin{ttbox}
355        \(c@1\), \ldots, \(c@n\) :: \(\tau\)
356\end{ttbox}
357The {\bf rule declaration part} has the form
358\begin{ttbox}
359rules   \(id@1\) "\(rule@1\)"
360        \vdots
361        \(id@n\) "\(rule@n\)"
362\end{ttbox}
363where $id@1$, \ldots, $id@n$ are \ML{} identifiers and $rule@1$, \ldots,
364$rule@n$ are expressions of type~$prop$.  Each rule {\em must\/} be
365enclosed in quotation marks.  Rules are simply axioms; they are 
366called \emph{rules} because they are mainly used to specify the inference
367rules when defining a new logic.
368
369\indexbold{definitions} The {\bf definition part} is similar, but with
370the keyword \texttt{defs} instead of \texttt{rules}.  {\bf Definitions} are
371rules of the form $s \equiv t$, and should serve only as
372abbreviations.  The simplest form of a definition is $f \equiv t$,
373where $f$ is a constant.  Also allowed are $\eta$-equivalent forms of
374this, where the arguments of~$f$ appear applied on the left-hand side
375of the equation instead of abstracted on the right-hand side.
376
377Isabelle checks for common errors in definitions, such as extra
378variables on the right-hand side and cyclic dependencies, that could
379least to inconsistency.  It is still essential to take care:
380theorems proved on the basis of incorrect definitions are useless,
381your system can be consistent and yet still wrong.
382
383\index{examples!of theories} This example theory extends first-order
384logic by declaring and defining two constants, {\em nand} and {\em
385  xor}:
386\begin{ttbox}
387Gate = FOL +
388consts  nand,xor :: [o,o] => o
389defs    nand_def "nand(P,Q) == ~(P & Q)"
390        xor_def  "xor(P,Q)  == P & ~Q | ~P & Q"
391end
392\end{ttbox}
393
394Declaring and defining constants can be combined:
395\begin{ttbox}
396Gate = FOL +
397constdefs  nand :: [o,o] => o
398           "nand(P,Q) == ~(P & Q)"
399           xor  :: [o,o] => o
400           "xor(P,Q)  == P & ~Q | ~P & Q"
401end
402\end{ttbox}
403\texttt{constdefs} generates the names \texttt{nand_def} and \texttt{xor_def}
404automatically, which is why it is restricted to alphanumeric identifiers.  In
405general it has the form
406\begin{ttbox}
407constdefs  \(id@1\) :: \(\tau@1\)
408           "\(id@1 \equiv \dots\)"
409           \vdots
410           \(id@n\) :: \(\tau@n\)
411           "\(id@n \equiv \dots\)"
412\end{ttbox}
413
414
415\begin{warn}
416A common mistake when writing definitions is to introduce extra free variables
417on the right-hand side as in the following fictitious definition:
418\begin{ttbox}
419defs  prime_def "prime(p) == (m divides p) --> (m=1 | m=p)"
420\end{ttbox}
421Isabelle rejects this ``definition'' because of the extra \texttt{m} on the
422right-hand side, which would introduce an inconsistency.  What you should have
423written is
424\begin{ttbox}
425defs  prime_def "prime(p) == ALL m. (m divides p) --> (m=1 | m=p)"
426\end{ttbox}
427\end{warn}
428
429\subsection{Declaring type constructors}
430\indexbold{types!declaring}\indexbold{arities!declaring}
431%
432Types are composed of type variables and {\bf type constructors}.  Each
433type constructor takes a fixed number of arguments.  They are declared
434with an \ML-like syntax.  If $list$ takes one type argument, $tree$ takes
435two arguments and $nat$ takes no arguments, then these type constructors
436can be declared by
437\begin{ttbox}
438types 'a list
439      ('a,'b) tree
440      nat
441\end{ttbox}
442
443The {\bf type declaration part} has the general form
444\begin{ttbox}
445types   \(tids@1\) \(id@1\)
446        \vdots
447        \(tids@n\) \(id@n\)
448\end{ttbox}
449where $id@1$, \ldots, $id@n$ are identifiers and $tids@1$, \ldots, $tids@n$
450are type argument lists as shown in the example above.  It declares each
451$id@i$ as a type constructor with the specified number of argument places.
452
453The {\bf arity declaration part} has the form
454\begin{ttbox}
455arities \(tycon@1\) :: \(arity@1\)
456        \vdots
457        \(tycon@n\) :: \(arity@n\)
458\end{ttbox}
459where $tycon@1$, \ldots, $tycon@n$ are identifiers and $arity@1$, \ldots,
460$arity@n$ are arities.  Arity declarations add arities to existing
461types; they do not declare the types themselves.
462In the simplest case, for an 0-place type constructor, an arity is simply
463the type's class.  Let us declare a type~$bool$ of class $term$, with
464constants $tt$ and~$ff$.  (In first-order logic, booleans are
465distinct from formulae, which have type $o::logic$.)
466\index{examples!of theories}
467\begin{ttbox}
468Bool = FOL +
469types   bool
470arities bool    :: term
471consts  tt,ff   :: bool
472end
473\end{ttbox}
474A $k$-place type constructor may have arities of the form
475$(s@1,\ldots,s@k)c$, where $s@1,\ldots,s@n$ are sorts and $c$ is a class.
476Each sort specifies a type argument; it has the form $\{c@1,\ldots,c@m\}$,
477where $c@1$, \dots,~$c@m$ are classes.  Mostly we deal with singleton
478sorts, and may abbreviate them by dropping the braces.  The arity
479$(term)term$ is short for $(\{term\})term$.  Recall the discussion in
480\S\ref{polymorphic}.
481
482A type constructor may be overloaded (subject to certain conditions) by
483appearing in several arity declarations.  For instance, the function type
484constructor~$fun$ has the arity $(logic,logic)logic$; in higher-order
485logic, it is declared also to have arity $(term,term)term$.
486
487Theory \texttt{List} declares the 1-place type constructor $list$, gives
488it the arity $(term)term$, and declares constants $Nil$ and $Cons$ with
489polymorphic types:%
490\footnote{In the \texttt{consts} part, type variable {\tt'a} has the default
491  sort, which is \texttt{term}.  See the {\em Reference Manual\/}
492\iflabelundefined{sec:ref-defining-theories}{}%
493{(\S\ref{sec:ref-defining-theories})} for more information.}
494\index{examples!of theories}
495\begin{ttbox}
496List = FOL +
497types   'a list
498arities list    :: (term)term
499consts  Nil     :: 'a list
500        Cons    :: ['a, 'a list] => 'a list
501end
502\end{ttbox}
503Multiple arity declarations may be abbreviated to a single line:
504\begin{ttbox}
505arities \(tycon@1\), \ldots, \(tycon@n\) :: \(arity\)
506\end{ttbox}
507
508%\begin{warn}
509%Arity declarations resemble constant declarations, but there are {\it no\/}
510%quotation marks!  Types and rules must be quoted because the theory
511%translator passes them verbatim to the {\ML} output file.
512%\end{warn}
513
514\subsection{Type synonyms}\indexbold{type synonyms}
515Isabelle supports {\bf type synonyms} ({\bf abbreviations}) which are similar
516to those found in \ML.  Such synonyms are defined in the type declaration part
517and are fairly self explanatory:
518\begin{ttbox}
519types gate       = [o,o] => o
520      'a pred    = 'a => o
521      ('a,'b)nuf = 'b => 'a
522\end{ttbox}
523Type declarations and synonyms can be mixed arbitrarily:
524\begin{ttbox}
525types nat
526      'a stream = nat => 'a
527      signal    = nat stream
528      'a list
529\end{ttbox}
530A synonym is merely an abbreviation for some existing type expression.
531Hence synonyms may not be recursive!  Internally all synonyms are
532fully expanded.  As a consequence Isabelle output never contains
533synonyms.  Their main purpose is to improve the readability of theory
534definitions.  Synonyms can be used just like any other type:
535\begin{ttbox}
536consts and,or :: gate
537       negate :: signal => signal
538\end{ttbox}
539
540\subsection{Infix and mixfix operators}
541\index{infixes}\index{examples!of theories}
542
543Infix or mixfix syntax may be attached to constants.  Consider the
544following theory:
545\begin{ttbox}
546Gate2 = FOL +
547consts  "~&"     :: [o,o] => o         (infixl 35)
548        "#"      :: [o,o] => o         (infixl 30)
549defs    nand_def "P ~& Q == ~(P & Q)"    
550        xor_def  "P # Q  == P & ~Q | ~P & Q"
551end
552\end{ttbox}
553The constant declaration part declares two left-associating infix operators
554with their priorities, or precedences; they are $\nand$ of priority~35 and
555$\xor$ of priority~30.  Hence $P \xor Q \xor R$ is parsed as $(P\xor Q)
556\xor R$ and $P \xor Q \nand R$ as $P \xor (Q \nand R)$.  Note the quotation
557marks in \verb|"~&"| and \verb|"#"|.
558
559The constants \hbox{\verb|op ~&|} and \hbox{\verb|op #|} are declared
560automatically, just as in \ML.  Hence you may write propositions like
561\verb|op #(True) == op ~&(True)|, which asserts that the functions $\lambda
562Q.True \xor Q$ and $\lambda Q.True \nand Q$ are identical.
563
564\medskip Infix syntax and constant names may be also specified
565independently.  For example, consider this version of $\nand$:
566\begin{ttbox}
567consts  nand     :: [o,o] => o         (infixl "~&" 35)
568\end{ttbox}
569
570\bigskip\index{mixfix declarations}
571{\bf Mixfix} operators may have arbitrary context-free syntaxes.  Let us
572add a line to the constant declaration part:
573\begin{ttbox}
574        If :: [o,o,o] => o       ("if _ then _ else _")
575\end{ttbox}
576This declares a constant $If$ of type $[o,o,o] \To o$ with concrete syntax {\tt
577  if~$P$ then~$Q$ else~$R$} as well as \texttt{If($P$,$Q$,$R$)}.  Underscores
578denote argument positions.  
579
580The declaration above does not allow the \texttt{if}-\texttt{then}-{\tt
581  else} construct to be printed split across several lines, even if it
582is too long to fit on one line.  Pretty-printing information can be
583added to specify the layout of mixfix operators.  For details, see
584\iflabelundefined{Defining-Logics}%
585    {the {\it Reference Manual}, chapter `Defining Logics'}%
586    {Chap.\ts\ref{Defining-Logics}}.
587
588Mixfix declarations can be annotated with priorities, just like
589infixes.  The example above is just a shorthand for
590\begin{ttbox}
591        If :: [o,o,o] => o       ("if _ then _ else _" [0,0,0] 1000)
592\end{ttbox}
593The numeric components determine priorities.  The list of integers
594defines, for each argument position, the minimal priority an expression
595at that position must have.  The final integer is the priority of the
596construct itself.  In the example above, any argument expression is
597acceptable because priorities are non-negative, and conditionals may
598appear everywhere because 1000 is the highest priority.  On the other
599hand, the declaration
600\begin{ttbox}
601        If :: [o,o,o] => o       ("if _ then _ else _" [100,0,0] 99)
602\end{ttbox}
603defines concrete syntax for a conditional whose first argument cannot have
604the form \texttt{if~$P$ then~$Q$ else~$R$} because it must have a priority
605of at least~100.  We may of course write
606\begin{quote}\tt
607if (if $P$ then $Q$ else $R$) then $S$ else $T$
608\end{quote}
609because expressions in parentheses have maximal priority.  
610
611Binary type constructors, like products and sums, may also be declared as
612infixes.  The type declaration below introduces a type constructor~$*$ with
613infix notation $\alpha*\beta$, together with the mixfix notation
614${<}\_,\_{>}$ for pairs.  We also see a rule declaration part.
615\index{examples!of theories}\index{mixfix declarations}
616\begin{ttbox}
617Prod = FOL +
618types   ('a,'b) "*"                           (infixl 20)
619arities "*"     :: (term,term)term
620consts  fst     :: "'a * 'b => 'a"
621        snd     :: "'a * 'b => 'b"
622        Pair    :: "['a,'b] => 'a * 'b"       ("(1<_,/_>)")
623rules   fst     "fst(<a,b>) = a"
624        snd     "snd(<a,b>) = b"
625end
626\end{ttbox}
627
628\begin{warn}
629  The name of the type constructor is~\texttt{*} and not \texttt{op~*}, as
630  it would be in the case of an infix constant.  Only infix type
631  constructors can have symbolic names like~\texttt{*}.  General mixfix
632  syntax for types may be introduced via appropriate \texttt{syntax}
633  declarations.
634\end{warn}
635
636
637\subsection{Overloading}
638\index{overloading}\index{examples!of theories}
639The {\bf class declaration part} has the form
640\begin{ttbox}
641classes \(id@1\) < \(c@1\)
642        \vdots
643        \(id@n\) < \(c@n\)
644\end{ttbox}
645where $id@1$, \ldots, $id@n$ are identifiers and $c@1$, \ldots, $c@n$ are
646existing classes.  It declares each $id@i$ as a new class, a subclass
647of~$c@i$.  In the general case, an identifier may be declared to be a
648subclass of $k$ existing classes:
649\begin{ttbox}
650        \(id\) < \(c@1\), \ldots, \(c@k\)
651\end{ttbox}
652Type classes allow constants to be overloaded.  As suggested in
653\S\ref{polymorphic}, let us define the class $arith$ of arithmetic
654types with the constants ${+} :: [\alpha,\alpha]\To \alpha$ and $0,1 {::}
655\alpha$, for $\alpha{::}arith$.  We introduce $arith$ as a subclass of
656$term$ and add the three polymorphic constants of this class.
657\index{examples!of theories}\index{constants!overloaded}
658\begin{ttbox}
659Arith = FOL +
660classes arith < term
661consts  "0"     :: 'a::arith                  ("0")
662        "1"     :: 'a::arith                  ("1")
663        "+"     :: ['a::arith,'a] => 'a       (infixl 60)
664end
665\end{ttbox}
666No rules are declared for these constants: we merely introduce their
667names without specifying properties.  On the other hand, classes
668with rules make it possible to prove {\bf generic} theorems.  Such
669theorems hold for all instances, all types in that class.
670
671We can now obtain distinct versions of the constants of $arith$ by
672declaring certain types to be of class $arith$.  For example, let us
673declare the 0-place type constructors $bool$ and $nat$:
674\index{examples!of theories}
675\begin{ttbox}
676BoolNat = Arith +
677types   bool  nat
678arities bool, nat   :: arith
679consts  Suc         :: nat=>nat
680\ttbreak
681rules   add0        "0 + n = n::nat"
682        addS        "Suc(m)+n = Suc(m+n)"
683        nat1        "1 = Suc(0)"
684        or0l        "0 + x = x::bool"
685        or0r        "x + 0 = x::bool"
686        or1l        "1 + x = 1::bool"
687        or1r        "x + 1 = 1::bool"
688end
689\end{ttbox}
690Because $nat$ and $bool$ have class $arith$, we can use $0$, $1$ and $+$ at
691either type.  The type constraints in the axioms are vital.  Without
692constraints, the $x$ in $1+x = 1$ (axiom \texttt{or1l})
693would have type $\alpha{::}arith$
694and the axiom would hold for any type of class $arith$.  This would
695collapse $nat$ to a trivial type:
696\[ Suc(1) = Suc(0+1) = Suc(0)+1 = 1+1 = 1! \]
697
698
699\section{Theory example: the natural numbers}
700
701We shall now work through a small example of formalized mathematics
702demonstrating many of the theory extension features.
703
704
705\subsection{Extending first-order logic with the natural numbers}
706\index{examples!of theories}
707
708Section\ts\ref{sec:logical-syntax} has formalized a first-order logic,
709including a type~$nat$ and the constants $0::nat$ and $Suc::nat\To nat$.
710Let us introduce the Peano axioms for mathematical induction and the
711freeness of $0$ and~$Suc$:\index{axioms!Peano}
712\[ \vcenter{\infer[(induct)]{P[n/x]}{P[0/x] & \infer*{P[Suc(x)/x]}{[P]}}}
713 \qquad \parbox{4.5cm}{provided $x$ is not free in any assumption except~$P$}
714\]
715\[ \infer[(Suc\_inject)]{m=n}{Suc(m)=Suc(n)} \qquad
716   \infer[(Suc\_neq\_0)]{R}{Suc(m)=0}
717\]
718Mathematical induction asserts that $P(n)$ is true, for any $n::nat$,
719provided $P(0)$ holds and that $P(x)$ implies $P(Suc(x))$ for all~$x$.
720Some authors express the induction step as $\forall x. P(x)\imp P(Suc(x))$.
721To avoid making induction require the presence of other connectives, we
722formalize mathematical induction as
723$$ \List{P(0); \Forall x. P(x)\Imp P(Suc(x))} \Imp P(n). \eqno(induct) $$
724
725\noindent
726Similarly, to avoid expressing the other rules using~$\forall$, $\imp$
727and~$\neg$, we take advantage of the meta-logic;\footnote
728{On the other hand, the axioms $Suc(m)=Suc(n) \bimp m=n$
729and $\neg(Suc(m)=0)$ are logically equivalent to those given, and work
730better with Isabelle's simplifier.} 
731$(Suc\_neq\_0)$ is
732an elimination rule for $Suc(m)=0$:
733$$ Suc(m)=Suc(n) \Imp m=n  \eqno(Suc\_inject) $$
734$$ Suc(m)=0      \Imp R    \eqno(Suc\_neq\_0) $$
735
736\noindent
737We shall also define a primitive recursion operator, $rec$.  Traditionally,
738primitive recursion takes a natural number~$a$ and a 2-place function~$f$,
739and obeys the equations
740\begin{eqnarray*}
741  rec(0,a,f)            & = & a \\
742  rec(Suc(m),a,f)       & = & f(m, rec(m,a,f))
743\end{eqnarray*}
744Addition, defined by $m+n \equiv rec(m,n,\lambda x\,y.Suc(y))$,
745should satisfy
746\begin{eqnarray*}
747  0+n      & = & n \\
748  Suc(m)+n & = & Suc(m+n)
749\end{eqnarray*}
750Primitive recursion appears to pose difficulties: first-order logic has no
751function-valued expressions.  We again take advantage of the meta-logic,
752which does have functions.  We also generalise primitive recursion to be
753polymorphic over any type of class~$term$, and declare the addition
754function:
755\begin{eqnarray*}
756  rec   & :: & [nat, \alpha{::}term, [nat,\alpha]\To\alpha] \To\alpha \\
757  +     & :: & [nat,nat]\To nat 
758\end{eqnarray*}
759
760
761\subsection{Declaring the theory to Isabelle}
762\index{examples!of theories}
763Let us create the theory \thydx{Nat} starting from theory~\verb$FOL$,
764which contains only classical logic with no natural numbers.  We declare
765the 0-place type constructor $nat$ and the associated constants.  Note that
766the constant~0 requires a mixfix annotation because~0 is not a legal
767identifier, and could not otherwise be written in terms:
768\begin{ttbox}\index{mixfix declarations}
769Nat = FOL +
770types   nat
771arities nat         :: term
772consts  "0"         :: nat                              ("0")
773        Suc         :: nat=>nat
774        rec         :: [nat, 'a, [nat,'a]=>'a] => 'a
775        "+"         :: [nat, nat] => nat                (infixl 60)
776rules   Suc_inject  "Suc(m)=Suc(n) ==> m=n"
777        Suc_neq_0   "Suc(m)=0      ==> R"
778        induct      "[| P(0);  !!x. P(x) ==> P(Suc(x)) |]  ==> P(n)"
779        rec_0       "rec(0,a,f) = a"
780        rec_Suc     "rec(Suc(m), a, f) = f(m, rec(m,a,f))"
781        add_def     "m+n == rec(m, n, \%x y. Suc(y))"
782end
783\end{ttbox}
784In axiom \texttt{add_def}, recall that \verb|%| stands for~$\lambda$.
785Loading this theory file creates the \ML\ structure \texttt{Nat}, which
786contains the theory and axioms.
787
788\subsection{Proving some recursion equations}
789Theory \texttt{FOL/ex/Nat} contains proofs involving this theory of the
790natural numbers.  As a trivial example, let us derive recursion equations
791for \verb$+$.  Here is the zero case:
792\begin{ttbox}
793Goalw [add_def] "0+n = n";
794{\out Level 0}
795{\out 0 + n = n}
796{\out  1. rec(0,n,\%x y. Suc(y)) = n}
797\ttbreak
798by (resolve_tac [rec_0] 1);
799{\out Level 1}
800{\out 0 + n = n}
801{\out No subgoals!}
802qed "add_0";
803\end{ttbox}
804And here is the successor case:
805\begin{ttbox}
806Goalw [add_def] "Suc(m)+n = Suc(m+n)";
807{\out Level 0}
808{\out Suc(m) + n = Suc(m + n)}
809{\out  1. rec(Suc(m),n,\%x y. Suc(y)) = Suc(rec(m,n,\%x y. Suc(y)))}
810\ttbreak
811by (resolve_tac [rec_Suc] 1);
812{\out Level 1}
813{\out Suc(m) + n = Suc(m + n)}
814{\out No subgoals!}
815qed "add_Suc";
816\end{ttbox}
817The induction rule raises some complications, which are discussed next.
818\index{theories!defining|)}
819
820
821\section{Refinement with explicit instantiation}
822\index{resolution!with instantiation}
823\index{instantiation|(}
824
825In order to employ mathematical induction, we need to refine a subgoal by
826the rule~$(induct)$.  The conclusion of this rule is $\Var{P}(\Var{n})$,
827which is highly ambiguous in higher-order unification.  It matches every
828way that a formula can be regarded as depending on a subterm of type~$nat$.
829To get round this problem, we could make the induction rule conclude
830$\forall n.\Var{P}(n)$ --- but putting a subgoal into this form requires
831refinement by~$(\forall E)$, which is equally hard!
832
833The tactic \texttt{res_inst_tac}, like \texttt{resolve_tac}, refines a subgoal by
834a rule.  But it also accepts explicit instantiations for the rule's
835schematic variables.  
836\begin{description}
837\item[\ttindex{res_inst_tac} {\it insts} {\it thm} {\it i}]
838instantiates the rule {\it thm} with the instantiations {\it insts}, and
839then performs resolution on subgoal~$i$.
840
841\item[\ttindex{eres_inst_tac}] 
842and \ttindex{dres_inst_tac} are similar, but perform elim-resolution
843and destruct-resolution, respectively.
844\end{description}
845The list {\it insts} consists of pairs $[(v@1,e@1), \ldots, (v@n,e@n)]$,
846where $v@1$, \ldots, $v@n$ are names of schematic variables in the rule ---
847with no leading question marks! --- and $e@1$, \ldots, $e@n$ are
848expressions giving their instantiations.  The expressions are type-checked
849in the context of a particular subgoal: free variables receive the same
850types as they have in the subgoal, and parameters may appear.  Type
851variable instantiations may appear in~{\it insts}, but they are seldom
852required: \texttt{res_inst_tac} instantiates type variables automatically
853whenever the type of~$e@i$ is an instance of the type of~$\Var{v@i}$.
854
855\subsection{A simple proof by induction}
856\index{examples!of induction}
857Let us prove that no natural number~$k$ equals its own successor.  To
858use~$(induct)$, we instantiate~$\Var{n}$ to~$k$; Isabelle finds a good
859instantiation for~$\Var{P}$.
860\begin{ttbox}
861Goal "~ (Suc(k) = k)";
862{\out Level 0}
863{\out Suc(k) ~= k}
864{\out  1. Suc(k) ~= k}
865\ttbreak
866by (res_inst_tac [("n","k")] induct 1);
867{\out Level 1}
868{\out Suc(k) ~= k}
869{\out  1. Suc(0) ~= 0}
870{\out  2. !!x. Suc(x) ~= x ==> Suc(Suc(x)) ~= Suc(x)}
871\end{ttbox}
872We should check that Isabelle has correctly applied induction.  Subgoal~1
873is the base case, with $k$ replaced by~0.  Subgoal~2 is the inductive step,
874with $k$ replaced by~$Suc(x)$ and with an induction hypothesis for~$x$.
875The rest of the proof demonstrates~\tdx{notI}, \tdx{notE} and the
876other rules of theory \texttt{Nat}.  The base case holds by~\ttindex{Suc_neq_0}:
877\begin{ttbox}
878by (resolve_tac [notI] 1);
879{\out Level 2}
880{\out Suc(k) ~= k}
881{\out  1. Suc(0) = 0 ==> False}
882{\out  2. !!x. Suc(x) ~= x ==> Suc(Suc(x)) ~= Suc(x)}
883\ttbreak
884by (eresolve_tac [Suc_neq_0] 1);
885{\out Level 3}
886{\out Suc(k) ~= k}
887{\out  1. !!x. Suc(x) ~= x ==> Suc(Suc(x)) ~= Suc(x)}
888\end{ttbox}
889The inductive step holds by the contrapositive of~\ttindex{Suc_inject}.
890Negation rules transform the subgoal into that of proving $Suc(x)=x$ from
891$Suc(Suc(x)) = Suc(x)$:
892\begin{ttbox}
893by (resolve_tac [notI] 1);
894{\out Level 4}
895{\out Suc(k) ~= k}
896{\out  1. !!x. [| Suc(x) ~= x; Suc(Suc(x)) = Suc(x) |] ==> False}
897\ttbreak
898by (eresolve_tac [notE] 1);
899{\out Level 5}
900{\out Suc(k) ~= k}
901{\out  1. !!x. Suc(Suc(x)) = Suc(x) ==> Suc(x) = x}
902\ttbreak
903by (eresolve_tac [Suc_inject] 1);
904{\out Level 6}
905{\out Suc(k) ~= k}
906{\out No subgoals!}
907\end{ttbox}
908
909
910\subsection{An example of ambiguity in \texttt{resolve_tac}}
911\index{examples!of induction}\index{unification!higher-order}
912If you try the example above, you may observe that \texttt{res_inst_tac} is
913not actually needed.  Almost by chance, \ttindex{resolve_tac} finds the right
914instantiation for~$(induct)$ to yield the desired next state.  With more
915complex formulae, our luck fails.  
916\begin{ttbox}
917Goal "(k+m)+n = k+(m+n)";
918{\out Level 0}
919{\out k + m + n = k + (m + n)}
920{\out  1. k + m + n = k + (m + n)}
921\ttbreak
922by (resolve_tac [induct] 1);
923{\out Level 1}
924{\out k + m + n = k + (m + n)}
925{\out  1. k + m + n = 0}
926{\out  2. !!x. k + m + n = x ==> k + m + n = Suc(x)}
927\end{ttbox}
928This proof requires induction on~$k$.  The occurrence of~0 in subgoal~1
929indicates that induction has been applied to the term~$k+(m+n)$; this
930application is sound but will not lead to a proof here.  Fortunately,
931Isabelle can (lazily!) generate all the valid applications of induction.
932The \ttindex{back} command causes backtracking to an alternative outcome of
933the tactic.
934\begin{ttbox}
935back();
936{\out Level 1}
937{\out k + m + n = k + (m + n)}
938{\out  1. k + m + n = k + 0}
939{\out  2. !!x. k + m + n = k + x ==> k + m + n = k + Suc(x)}
940\end{ttbox}
941Now induction has been applied to~$m+n$.  This is equally useless.  Let us
942call \ttindex{back} again.
943\begin{ttbox}
944back();
945{\out Level 1}
946{\out k + m + n = k + (m + n)}
947{\out  1. k + m + 0 = k + (m + 0)}
948{\out  2. !!x. k + m + x = k + (m + x) ==>}
949{\out          k + m + Suc(x) = k + (m + Suc(x))}
950\end{ttbox}
951Now induction has been applied to~$n$.  What is the next alternative?
952\begin{ttbox}
953back();
954{\out Level 1}
955{\out k + m + n = k + (m + n)}
956{\out  1. k + m + n = k + (m + 0)}
957{\out  2. !!x. k + m + n = k + (m + x) ==> k + m + n = k + (m + Suc(x))}
958\end{ttbox}
959Inspecting subgoal~1 reveals that induction has been applied to just the
960second occurrence of~$n$.  This perfectly legitimate induction is useless
961here.  
962
963The main goal admits fourteen different applications of induction.  The
964number is exponential in the size of the formula.
965
966\subsection{Proving that addition is associative}
967Let us invoke the induction rule properly, using~{\tt
968  res_inst_tac}.  At the same time, we shall have a glimpse at Isabelle's
969simplification tactics, which are described in 
970\iflabelundefined{simp-chap}%
971    {the {\em Reference Manual}}{Chap.\ts\ref{simp-chap}}.
972
973\index{simplification}\index{examples!of simplification} 
974
975Isabelle's simplification tactics repeatedly apply equations to a subgoal,
976perhaps proving it.  For efficiency, the rewrite rules must be packaged into a
977{\bf simplification set},\index{simplification sets} or {\bf simpset}.  We
978augment the implicit simpset of FOL with the equations proved in the previous
979section, namely $0+n=n$ and $\texttt{Suc}(m)+n=\texttt{Suc}(m+n)$:
980\begin{ttbox}
981Addsimps [add_0, add_Suc];
982\end{ttbox}
983We state the goal for associativity of addition, and
984use \ttindex{res_inst_tac} to invoke induction on~$k$:
985\begin{ttbox}
986Goal "(k+m)+n = k+(m+n)";
987{\out Level 0}
988{\out k + m + n = k + (m + n)}
989{\out  1. k + m + n = k + (m + n)}
990\ttbreak
991by (res_inst_tac [("n","k")] induct 1);
992{\out Level 1}
993{\out k + m + n = k + (m + n)}
994{\out  1. 0 + m + n = 0 + (m + n)}
995{\out  2. !!x. x + m + n = x + (m + n) ==>}
996{\out          Suc(x) + m + n = Suc(x) + (m + n)}
997\end{ttbox}
998The base case holds easily; both sides reduce to $m+n$.  The
999tactic~\ttindex{Simp_tac} rewrites with respect to the current
1000simplification set, applying the rewrite rules for addition:
1001\begin{ttbox}
1002by (Simp_tac 1);
1003{\out Level 2}
1004{\out k + m + n = k + (m + n)}
1005{\out  1. !!x. x + m + n = x + (m + n) ==>}
1006{\out          Suc(x) + m + n = Suc(x) + (m + n)}
1007\end{ttbox}
1008The inductive step requires rewriting by the equations for addition
1009and with the induction hypothesis, which is also an equation.  The
1010tactic~\ttindex{Asm_simp_tac} rewrites using the implicit
1011simplification set and any useful assumptions:
1012\begin{ttbox}
1013by (Asm_simp_tac 1);
1014{\out Level 3}
1015{\out k + m + n = k + (m + n)}
1016{\out No subgoals!}
1017\end{ttbox}
1018\index{instantiation|)}
1019
1020
1021\section{A Prolog interpreter}
1022\index{Prolog interpreter|bold}
1023To demonstrate the power of tacticals, let us construct a Prolog
1024interpreter and execute programs involving lists.\footnote{To run these
1025examples, see the file \texttt{FOL/ex/Prolog.ML}.} The Prolog program
1026consists of a theory.  We declare a type constructor for lists, with an
1027arity declaration to say that $(\tau)list$ is of class~$term$
1028provided~$\tau$ is:
1029\begin{eqnarray*}
1030  list  & :: & (term)term
1031\end{eqnarray*}
1032We declare four constants: the empty list~$Nil$; the infix list
1033constructor~{:}; the list concatenation predicate~$app$; the list reverse
1034predicate~$rev$.  (In Prolog, functions on lists are expressed as
1035predicates.)
1036\begin{eqnarray*}
1037    Nil         & :: & \alpha list \\
1038    {:}         & :: & [\alpha,\alpha list] \To \alpha list \\
1039    app & :: & [\alpha list,\alpha list,\alpha list] \To o \\
1040    rev & :: & [\alpha list,\alpha list] \To o 
1041\end{eqnarray*}
1042The predicate $app$ should satisfy the Prolog-style rules
1043\[ {app(Nil,ys,ys)} \qquad
1044   {app(xs,ys,zs) \over app(x:xs, ys, x:zs)} \]
1045We define the naive version of $rev$, which calls~$app$:
1046\[ {rev(Nil,Nil)} \qquad
1047   {rev(xs,ys)\quad  app(ys, x:Nil, zs) \over
1048    rev(x:xs, zs)} 
1049\]
1050
1051\index{examples!of theories}
1052Theory \thydx{Prolog} extends first-order logic in order to make use
1053of the class~$term$ and the type~$o$.  The interpreter does not use the
1054rules of~\texttt{FOL}.
1055\begin{ttbox}
1056Prolog = FOL +
1057types   'a list
1058arities list    :: (term)term
1059consts  Nil     :: 'a list
1060        ":"     :: ['a, 'a list]=> 'a list            (infixr 60)
1061        app     :: ['a list, 'a list, 'a list] => o
1062        rev     :: ['a list, 'a list] => o
1063rules   appNil  "app(Nil,ys,ys)"
1064        appCons "app(xs,ys,zs) ==> app(x:xs, ys, x:zs)"
1065        revNil  "rev(Nil,Nil)"
1066        revCons "[| rev(xs,ys); app(ys,x:Nil,zs) |] ==> rev(x:xs,zs)"
1067end
1068\end{ttbox}
1069\subsection{Simple executions}
1070Repeated application of the rules solves Prolog goals.  Let us
1071append the lists $[a,b,c]$ and~$[d,e]$.  As the rules are applied, the
1072answer builds up in~\texttt{?x}.
1073\begin{ttbox}
1074Goal "app(a:b:c:Nil, d:e:Nil, ?x)";
1075{\out Level 0}
1076{\out app(a : b : c : Nil, d : e : Nil, ?x)}
1077{\out  1. app(a : b : c : Nil, d : e : Nil, ?x)}
1078\ttbreak
1079by (resolve_tac [appNil,appCons] 1);
1080{\out Level 1}
1081{\out app(a : b : c : Nil, d : e : Nil, a : ?zs1)}
1082{\out  1. app(b : c : Nil, d : e : Nil, ?zs1)}
1083\ttbreak
1084by (resolve_tac [appNil,appCons] 1);
1085{\out Level 2}
1086{\out app(a : b : c : Nil, d : e : Nil, a : b : ?zs2)}
1087{\out  1. app(c : Nil, d : e : Nil, ?zs2)}
1088\end{ttbox}
1089At this point, the first two elements of the result are~$a$ and~$b$.
1090\begin{ttbox}
1091by (resolve_tac [appNil,appCons] 1);
1092{\out Level 3}
1093{\out app(a : b : c : Nil, d : e : Nil, a : b : c : ?zs3)}
1094{\out  1. app(Nil, d : e : Nil, ?zs3)}
1095\ttbreak
1096by (resolve_tac [appNil,appCons] 1);
1097{\out Level 4}
1098{\out app(a : b : c : Nil, d : e : Nil, a : b : c : d : e : Nil)}
1099{\out No subgoals!}
1100\end{ttbox}
1101
1102Prolog can run functions backwards.  Which list can be appended
1103with $[c,d]$ to produce $[a,b,c,d]$?
1104Using \ttindex{REPEAT}, we find the answer at once, $[a,b]$:
1105\begin{ttbox}
1106Goal "app(?x, c:d:Nil, a:b:c:d:Nil)";
1107{\out Level 0}
1108{\out app(?x, c : d : Nil, a : b : c : d : Nil)}
1109{\out  1. app(?x, c : d : Nil, a : b : c : d : Nil)}
1110\ttbreak
1111by (REPEAT (resolve_tac [appNil,appCons] 1));
1112{\out Level 1}
1113{\out app(a : b : Nil, c : d : Nil, a : b : c : d : Nil)}
1114{\out No subgoals!}
1115\end{ttbox}
1116
1117
1118\subsection{Backtracking}\index{backtracking!Prolog style}
1119Prolog backtracking can answer questions that have multiple solutions.
1120Which lists $x$ and $y$ can be appended to form the list $[a,b,c,d]$?  This
1121question has five solutions.  Using \ttindex{REPEAT} to apply the rules, we
1122quickly find the first solution, namely $x=[]$ and $y=[a,b,c,d]$:
1123\begin{ttbox}
1124Goal "app(?x, ?y, a:b:c:d:Nil)";
1125{\out Level 0}
1126{\out app(?x, ?y, a : b : c : d : Nil)}
1127{\out  1. app(?x, ?y, a : b : c : d : Nil)}
1128\ttbreak
1129by (REPEAT (resolve_tac [appNil,appCons] 1));
1130{\out Level 1}
1131{\out app(Nil, a : b : c : d : Nil, a : b : c : d : Nil)}
1132{\out No subgoals!}
1133\end{ttbox}
1134Isabelle can lazily generate all the possibilities.  The \ttindex{back}
1135command returns the tactic's next outcome, namely $x=[a]$ and $y=[b,c,d]$:
1136\begin{ttbox}
1137back();
1138{\out Level 1}
1139{\out app(a : Nil, b : c : d : Nil, a : b : c : d : Nil)}
1140{\out No subgoals!}
1141\end{ttbox}
1142The other solutions are generated similarly.
1143\begin{ttbox}
1144back();
1145{\out Level 1}
1146{\out app(a : b : Nil, c : d : Nil, a : b : c : d : Nil)}
1147{\out No subgoals!}
1148\ttbreak
1149back();
1150{\out Level 1}
1151{\out app(a : b : c : Nil, d : Nil, a : b : c : d : Nil)}
1152{\out No subgoals!}
1153\ttbreak
1154back();
1155{\out Level 1}
1156{\out app(a : b : c : d : Nil, Nil, a : b : c : d : Nil)}
1157{\out No subgoals!}
1158\end{ttbox}
1159
1160
1161\subsection{Depth-first search}
1162\index{search!depth-first}
1163Now let us try $rev$, reversing a list.
1164Bundle the rules together as the \ML{} identifier \texttt{rules}.  Naive
1165reverse requires 120 inferences for this 14-element list, but the tactic
1166terminates in a few seconds.
1167\begin{ttbox}
1168Goal "rev(a:b:c:d:e:f:g:h:i:j:k:l:m:n:Nil, ?w)";
1169{\out Level 0}
1170{\out rev(a : b : c : d : e : f : g : h : i : j : k : l : m : n : Nil, ?w)}
1171{\out  1. rev(a : b : c : d : e : f : g : h : i : j : k : l : m : n : Nil,}
1172{\out         ?w)}
1173\ttbreak
1174val rules = [appNil,appCons,revNil,revCons];
1175\ttbreak
1176by (REPEAT (resolve_tac rules 1));
1177{\out Level 1}
1178{\out rev(a : b : c : d : e : f : g : h : i : j : k : l : m : n : Nil,}
1179{\out     n : m : l : k : j : i : h : g : f : e : d : c : b : a : Nil)}
1180{\out No subgoals!}
1181\end{ttbox}
1182We may execute $rev$ backwards.  This, too, should reverse a list.  What
1183is the reverse of $[a,b,c]$?
1184\begin{ttbox}
1185Goal "rev(?x, a:b:c:Nil)";
1186{\out Level 0}
1187{\out rev(?x, a : b : c : Nil)}
1188{\out  1. rev(?x, a : b : c : Nil)}
1189\ttbreak
1190by (REPEAT (resolve_tac rules 1));
1191{\out Level 1}
1192{\out rev(?x1 : Nil, a : b : c : Nil)}
1193{\out  1. app(Nil, ?x1 : Nil, a : b : c : Nil)}
1194\end{ttbox}
1195The tactic has failed to find a solution!  It reached a dead end at
1196subgoal~1: there is no~$\Var{x@1}$ such that [] appended with~$[\Var{x@1}]$
1197equals~$[a,b,c]$.  Backtracking explores other outcomes.
1198\begin{ttbox}
1199back();
1200{\out Level 1}
1201{\out rev(?x1 : a : Nil, a : b : c : Nil)}
1202{\out  1. app(Nil, ?x1 : Nil, b : c : Nil)}
1203\end{ttbox}
1204This too is a dead end, but the next outcome is successful.
1205\begin{ttbox}
1206back();
1207{\out Level 1}
1208{\out rev(c : b : a : Nil, a : b : c : Nil)}
1209{\out No subgoals!}
1210\end{ttbox}
1211\ttindex{REPEAT} goes wrong because it is only a repetition tactical, not a
1212search tactical.  \texttt{REPEAT} stops when it cannot continue, regardless of
1213which state is reached.  The tactical \ttindex{DEPTH_FIRST} searches for a
1214satisfactory state, as specified by an \ML{} predicate.  Below,
1215\ttindex{has_fewer_prems} specifies that the proof state should have no
1216subgoals.
1217\begin{ttbox}
1218val prolog_tac = DEPTH_FIRST (has_fewer_prems 1) 
1219                             (resolve_tac rules 1);
1220\end{ttbox}
1221Since Prolog uses depth-first search, this tactic is a (slow!) 
1222Prolog interpreter.  We return to the start of the proof using
1223\ttindex{choplev}, and apply \texttt{prolog_tac}:
1224\begin{ttbox}
1225choplev 0;
1226{\out Level 0}
1227{\out rev(?x, a : b : c : Nil)}
1228{\out  1. rev(?x, a : b : c : Nil)}
1229\ttbreak
1230by prolog_tac;
1231{\out Level 1}
1232{\out rev(c : b : a : Nil, a : b : c : Nil)}
1233{\out No subgoals!}
1234\end{ttbox}
1235Let us try \texttt{prolog_tac} on one more example, containing four unknowns:
1236\begin{ttbox}
1237Goal "rev(a:?x:c:?y:Nil, d:?z:b:?u)";
1238{\out Level 0}
1239{\out rev(a : ?x : c : ?y : Nil, d : ?z : b : ?u)}
1240{\out  1. rev(a : ?x : c : ?y : Nil, d : ?z : b : ?u)}
1241\ttbreak
1242by prolog_tac;
1243{\out Level 1}
1244{\out rev(a : b : c : d : Nil, d : c : b : a : Nil)}
1245{\out No subgoals!}
1246\end{ttbox}
1247Although Isabelle is much slower than a Prolog system, Isabelle
1248tactics can exploit logic programming techniques.  
1249
1250