(*  Title:      HOL/Isar_Examples/Basic_Logic.thy
    Author:     Makarius

Basic propositional and quantifier reasoning.
*)

section \<open>Basic logical reasoning\<close>

theory Basic_Logic
  imports Main
begin


subsection \<open>Pure backward reasoning\<close>

text \<open>
  In order to get a first idea of what Isabelle/Isar proof documents may look
  like, we consider the propositions \<open>I\<close>, \<open>K\<close>, and \<open>S\<close>. The following (rather
  explicit) proofs should require little extra explanation.
\<close>

lemma I: "A \<longrightarrow> A"
proof
  assume A
  show A by fact
qed

lemma K: "A \<longrightarrow> B \<longrightarrow> A"
proof
  assume A
  show "B \<longrightarrow> A"
  proof
    show A by fact
  qed
qed

lemma S: "(A \<longrightarrow> B \<longrightarrow> C) \<longrightarrow> (A \<longrightarrow> B) \<longrightarrow> A \<longrightarrow> C"
proof
  assume "A \<longrightarrow> B \<longrightarrow> C"
  show "(A \<longrightarrow> B) \<longrightarrow> A \<longrightarrow> C"
  proof
    assume "A \<longrightarrow> B"
    show "A \<longrightarrow> C"
    proof
      assume A
      show C
      proof (rule mp)
        show "B \<longrightarrow> C" by (rule mp) fact+
        show B by (rule mp) fact+
      qed
    qed
  qed
qed

text \<open>
  Isar provides several ways to fine-tune the reasoning, avoiding excessive
  detail. Several abbreviated language elements are available, enabling the
  writer to express proofs in a more concise way, even without referring to
  any automated proof tools yet.

  Concluding any (sub-)proof already involves solving any remaining goals by
  assumption\<^footnote>\<open>This is not a completely trivial operation, as proof by
  assumption may involve full higher-order unification.\<close>. Thus we may skip the
  rather vacuous body of the above proof.
\<close>

lemma "A \<longrightarrow> A"
proof
qed

text \<open>
  Note that the \<^theory_text>\<open>proof\<close> command refers to the \<open>rule\<close> method (without
  arguments) by default. Thus it implicitly applies a single rule, as
  determined from the syntactic form of the statements involved. The \<^theory_text>\<open>by\<close>
  command abbreviates any proof with empty body, so the proof may be further
  pruned.
\<close>

lemma "A \<longrightarrow> A"
  by rule

text \<open>
  Proof by a single rule may be abbreviated as double-dot.
\<close>

lemma "A \<longrightarrow> A" ..

text \<open>
  Thus we have arrived at an adequate representation of the proof of a
  tautology that holds by a single standard rule.\<^footnote>\<open>Apparently, the
  rule here is implication introduction.\<close>

  \<^medskip>
  Let us also reconsider \<open>K\<close>. Its statement is composed of iterated
  connectives. Basic decomposition is by a single rule at a time, which is why
  our first version above proceeded by nesting two proofs.

  The \<open>intro\<close> proof method repeatedly decomposes a goal's conclusion.\<^footnote>\<open>The
  dual method is \<open>elim\<close>, acting on a goal's premises.\<close>
\<close>

lemma "A \<longrightarrow> B \<longrightarrow> A"
proof (intro impI)
  assume A
  show A by fact
qed

text \<open>Again, the body may be collapsed.\<close>

lemma "A \<longrightarrow> B \<longrightarrow> A"
  by (intro impI)

text \<open>
  Just like \<open>rule\<close>, the \<open>intro\<close> and \<open>elim\<close> proof methods pick standard
  structural rules, in case no explicit arguments are given. While implicit
  rules are usually just fine for single rule application, this may go too far
  with iteration. Thus in practice, \<open>intro\<close> and \<open>elim\<close> would typically be
  restricted to certain structures by giving a few rules only, e.g.\ \<^theory_text>\<open>proof
  (intro impI allI)\<close> to strip implications and universal quantifiers.

  Such well-tuned iterated decomposition of certain structures is the prime
  application of \<open>intro\<close> and \<open>elim\<close>. In contrast, terminal steps that solve a
  goal completely are usually performed by actual automated proof methods
  (such as \<^theory_text>\<open>by blast\<close>).
\<close>
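
text \<open>
  To illustrate both points with a minimal sketch: the first proof below
  strips a quantifier and two implications in a single step, while the second
  goal is closed outright by an automated method.
\<close>

lemma "\<forall>x. A x \<longrightarrow> B \<longrightarrow> A x"
  by (intro impI allI)

lemma "(A \<and> B \<longrightarrow> C) \<longrightarrow> A \<longrightarrow> B \<longrightarrow> C"
  by blast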


subsection \<open>Variations of backward vs.\ forward reasoning\<close>

text \<open>
  Certainly, any proof may be performed in backward-style only. On the other
  hand, small steps of reasoning are often more naturally expressed in
  forward-style. Isar supports both backward and forward reasoning as a
  first-class concept. In order to demonstrate the difference, we consider
  several proofs of \<open>A \<and> B \<longrightarrow> B \<and> A\<close>.

  The first version is purely backward.
\<close>

lemma "A \<and> B \<longrightarrow> B \<and> A"
proof
  assume "A \<and> B"
  show "B \<and> A"
  proof
    show B by (rule conjunct2) fact
    show A by (rule conjunct1) fact
  qed
qed

text \<open>
  Above, the projection rules \<open>conjunct1\<close> / \<open>conjunct2\<close> had to be named
  explicitly, since the goals \<open>B\<close> and \<open>A\<close> did not provide any structural clue.
  This may be avoided using \<^theory_text>\<open>from\<close> to focus on the \<open>A \<and> B\<close> assumption as the
  current facts, enabling the use of double-dot proofs. Note that \<^theory_text>\<open>from\<close>
  already does forward-chaining, involving the \<open>conjE\<close> rule here.
\<close>

lemma "A \<and> B \<longrightarrow> B \<and> A"
proof
  assume "A \<and> B"
  show "B \<and> A"
  proof
    from \<open>A \<and> B\<close> show B ..
    from \<open>A \<and> B\<close> show A ..
  qed
qed

text \<open>
  In the next version, we move the forward step one level upwards.
  Forward-chaining from the most recent facts is indicated by the \<^theory_text>\<open>then\<close>
  command. Thus the proof of \<open>B \<and> A\<close> from \<open>A \<and> B\<close> actually becomes an
  elimination, rather than an introduction. The resulting proof structure
  directly corresponds to that of the \<open>conjE\<close> rule, including the repeated
  goal proposition that is abbreviated as \<open>?thesis\<close> below.
\<close>

lemma "A \<and> B \<longrightarrow> B \<and> A"
proof
  assume "A \<and> B"
  then show "B \<and> A"
  proof                    \<comment> \<open>rule \<open>conjE\<close> of \<open>A \<and> B\<close>\<close>
    assume B A
    then show ?thesis ..   \<comment> \<open>rule \<open>conjI\<close> of \<open>B \<and> A\<close>\<close>
  qed
qed

text \<open>
  In the subsequent version we flatten the structure of the main body by doing
  forward reasoning all the time. Only the outermost decomposition step is
  left as backward.
\<close>

lemma "A \<and> B \<longrightarrow> B \<and> A"
proof
  assume "A \<and> B"
  from \<open>A \<and> B\<close> have A ..
  from \<open>A \<and> B\<close> have B ..
  from \<open>B\<close> \<open>A\<close> show "B \<and> A" ..
qed

text \<open>
  We can still push forward-reasoning a bit further, even at the risk of
  getting ridiculous. Note that we force the initial proof step to do nothing
  here, by referring to the \<open>-\<close> proof method.
\<close>

lemma "A \<and> B \<longrightarrow> B \<and> A"
proof -
  {
    assume "A \<and> B"
    from \<open>A \<and> B\<close> have A ..
    from \<open>A \<and> B\<close> have B ..
    from \<open>B\<close> \<open>A\<close> have "B \<and> A" ..
  }
  then show ?thesis ..         \<comment> \<open>rule \<open>impI\<close>\<close>
qed

text \<open>
  \<^medskip>
  With these examples we have shifted through a whole range from purely
  backward to purely forward reasoning. Apparently, at the extreme ends we get
  slightly ill-structured proofs, which also require much explicit naming of
  either rules (backward) or local facts (forward).

  The general lesson learned here is that good proof style would achieve just
  the \<^emph>\<open>right\<close> balance of top-down backward decomposition, and bottom-up
  forward composition. In general, there is no single best way to arrange some
  pieces of formal reasoning, of course. Depending on the actual applications,
  the intended audience etc., rules (and methods) on the one hand vs.\ facts
  on the other hand have to be emphasized in an appropriate way. This requires
  the proof writer to develop good taste, and some practice, of course.

  \<^medskip>
  For our example the most appropriate way of reasoning is probably the middle
  one, with conjunction introduction done after elimination.
\<close>

lemma "A \<and> B \<longrightarrow> B \<and> A"
proof
  assume "A \<and> B"
  then show "B \<and> A"
  proof
    assume B A
    then show ?thesis ..
  qed
qed



subsection \<open>A few examples from ``Introduction to Isabelle''\<close>

text \<open>
  We rephrase some of the basic reasoning examples of @{cite
  "isabelle-intro"}, using HOL rather than FOL.
\<close>


subsubsection \<open>A propositional proof\<close>

text \<open>
  We consider the proposition \<open>P \<or> P \<longrightarrow> P\<close>. The proof below involves
  forward-chaining from \<open>P \<or> P\<close>, followed by an explicit case-analysis on the
  two \<^emph>\<open>identical\<close> cases.
\<close>

lemma "P \<or> P \<longrightarrow> P"
proof
  assume "P \<or> P"
  then show P
  proof                    \<comment> \<open>rule \<open>disjE\<close>: \smash{$\infer{\<open>C\<close>}{\<open>A \<or> B\<close> & \infer*{\<open>C\<close>}{[\<open>A\<close>]} & \infer*{\<open>C\<close>}{[\<open>B\<close>]}}$}\<close>
    assume P show P by fact
  next
    assume P show P by fact
  qed
qed

text \<open>
  Case splits are \<^emph>\<open>not\<close> hardwired into the Isar language as a special
  feature. The \<^theory_text>\<open>next\<close> command used to separate the cases above is just a
  short form of managing block structure.

  \<^medskip>
  In general, applying proof methods may split up a goal into separate
  ``cases'', i.e.\ new subgoals with individual local assumptions. The
  corresponding proof text typically mimics this by establishing results in
  appropriate contexts, separated by blocks.

  In order to avoid too many explicit parentheses, the Isar system implicitly
  opens an additional block for any new goal, the \<^theory_text>\<open>next\<close> statement then
  closes one block level, opening a new one. The resulting behaviour is what
  one would expect from separating cases, only that it is more flexible. E.g.\
  an induction base case (which does not introduce local assumptions) would
  \<^emph>\<open>not\<close> require \<^theory_text>\<open>next\<close> to separate the subsequent step case.

  \<^medskip>
  In our example the situation is even simpler, since the two cases actually
  coincide. Consequently the proof may be rephrased as follows.
\<close>

lemma "P \<or> P \<longrightarrow> P"
proof
  assume "P \<or> P"
  then show P
  proof
    assume P
    show P by fact
    show P by fact
  qed
qed

text \<open>Again, the rather vacuous body of the proof may be collapsed.
  Thus the case analysis degenerates into two assumption steps, which
  are implicitly performed when concluding the single rule step of the
  double-dot proof as follows.\<close>

lemma "P \<or> P \<longrightarrow> P"
proof
  assume "P \<or> P"
  then show P ..
qed
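
text \<open>
  As a brief aside on the earlier remark about \<^theory_text>\<open>next\<close>: a base case that
  introduces no local assumptions may be followed directly by the step case
  within the same block. A minimal sketch, using induction over natural
  numbers:
\<close>

lemma "n + 0 = (n::nat)"
proof (induct n)
  show "(0::nat) + 0 = 0" by simp        \<comment> \<open>base case: no local assumptions\<close>
  fix n :: nat
  assume "n + 0 = n"                     \<comment> \<open>induction hypothesis\<close>
  then show "Suc n + 0 = Suc n" by simp  \<comment> \<open>step case, no separating \<open>next\<close> needed\<close>
qed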


subsubsection \<open>A quantifier proof\<close>

text \<open>
  To illustrate quantifier reasoning, let us prove
  \<open>(\<exists>x. P (f x)) \<longrightarrow> (\<exists>y. P y)\<close>. Informally, this holds because any \<open>a\<close> with
  \<open>P (f a)\<close> may be taken as a witness for the second existential statement.

  The first proof is rather verbose, exhibiting quite a lot of (redundant)
  detail. It gives explicit rules, even with some instantiation. Furthermore,
  we encounter two new language elements: the \<^theory_text>\<open>fix\<close> command augments the
  context by some new ``arbitrary, but fixed'' element; the \<^theory_text>\<open>is\<close> annotation
  binds term abbreviations by higher-order pattern matching.
\<close>

lemma "(\<exists>x. P (f x)) \<longrightarrow> (\<exists>y. P y)"
proof
  assume "\<exists>x. P (f x)"
  then show "\<exists>y. P y"
  proof (rule exE)             \<comment> \<open>rule \<open>exE\<close>: \smash{$\infer{\<open>B\<close>}{\<open>\<exists>x. A(x)\<close> & \infer*{\<open>B\<close>}{[\<open>A(x)\<close>]_x}}$}\<close>
    fix a
    assume "P (f a)" (is "P ?witness")
    then show ?thesis by (rule exI [of P ?witness])
  qed
qed

text \<open>
  While explicit rule instantiation may occasionally improve readability of
  certain aspects of reasoning, it is usually quite redundant. Above, the
  basic proof outline already gives enough structural clues for the system to
  infer both the rules and their instances (by higher-order unification). Thus
  we may as well prune the text as follows.
\<close>

lemma "(\<exists>x. P (f x)) \<longrightarrow> (\<exists>y. P y)"
proof
  assume "\<exists>x. P (f x)"
  then show "\<exists>y. P y"
  proof
    fix a
    assume "P (f a)"
    then show ?thesis ..
  qed
qed

text \<open>
  Explicit \<open>\<exists>\<close>-elimination as seen above can become quite cumbersome in
  practice. The derived Isar language element ``\<^theory_text>\<open>obtain\<close>'' provides a more
  convenient way to do generalized existence reasoning.
\<close>

lemma "(\<exists>x. P (f x)) \<longrightarrow> (\<exists>y. P y)"
proof
  assume "\<exists>x. P (f x)"
  then obtain a where "P (f a)" ..
  then show "\<exists>y. P y" ..
qed

text \<open>
  Technically, \<^theory_text>\<open>obtain\<close> is similar to \<^theory_text>\<open>fix\<close> and \<^theory_text>\<open>assume\<close> together with a
  soundness proof of the elimination involved. Thus it behaves similarly to any
  other forward proof element. Also note that due to the nature of general
  existence reasoning involved here, any result exported from the context of
  an \<^theory_text>\<open>obtain\<close> statement may \<^emph>\<open>not\<close> refer to the parameters introduced there.
\<close>


subsubsection \<open>Deriving rules in Isabelle\<close>

text \<open>
  We derive the conjunction elimination rule from the corresponding
  projections. The proof is quite straightforward, since Isabelle/Isar
  supports non-atomic goals and assumptions fully transparently.
\<close>

theorem conjE: "A \<and> B \<Longrightarrow> (A \<Longrightarrow> B \<Longrightarrow> C) \<Longrightarrow> C"
proof -
  assume "A \<and> B"
  assume r: "A \<Longrightarrow> B \<Longrightarrow> C"
  show C
  proof (rule r)
    show A by (rule conjunct1) fact
    show B by (rule conjunct2) fact
  qed
qed
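
text \<open>
  The derived rule may now be used like any other theorem. As a small sketch,
  the earlier conjunction example can be replayed with an explicit reference
  to it.
\<close>

lemma "A \<and> B \<longrightarrow> B \<and> A"
proof
  assume "A \<and> B"
  then show "B \<and> A"
  proof (rule conjE)
    assume B A
    then show ?thesis ..
  qed
qed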

end