History log of /seL4-l4v-10.1.1/HOL4/src/IndDef/InductiveDefinition.sml
Revision Date Author Comments
# 838f97d0 19-Mar-2018 Michael Norrish <Michael.Norrish@nicta.com.au>

Up to core types


# 95d60bd3 02-Oct-2017 Michael Norrish <Michael.Norrish@nicta.com.au>

Remove -- as an alias for Term parser.

As per the comment in the release notes, this has long been superseded
by the recommended style.


# 8c47c9df 15-Jul-2015 Michael Norrish <Michael.Norrish@nicta.com.au>

Detect and abort on vacuous inductive relation clauses

A vacuous clause is something like

!x y. R x y

where R is the relation being defined. Such a thing makes the relation
the universal relation. If you really want such a thing, you can always
write

!x y. T ==> R x y

Closes #269


# 0a263dd6 05-Nov-2012 Ramana Kumar <ramana.kumar@gmail.com>

add MONO_COND to bool_monoset (and hence the_monoset)

This enables inductive definitions that make recursive calls in the
branches of conditionals expressed with COND (if_then_else).
Also added a selftest.


# 53827563 29-Sep-2010 Anthony Fox <anthony.fox@cl.cam.ac.uk>

Avoid a bunch of pattern completion warning messages.


# 1ea88f67 19-Jul-2010 Michael Norrish <Michael.Norrish@nicta.com.au>

Move IndDef earlier in the build sequence (just before BasicProof).

This will let me reinstate my new Induct_on functionality.


# 69eb5904 22-Jun-2010 Michael Norrish <Michael.Norrish@nicta.com.au>

Inductive definitions now make their core definitions with provided stem name.


# 6d4dec0f 17-Sep-2009 Michael Norrish <Michael.Norrish@nicta.com.au>

The pretty-printing done in this file when generating error messages
should be done with a raw terminal.


# 0ab55b57 17-Sep-2009 Michael Norrish <Michael.Norrish@nicta.com.au>

In a meeting with Ramana Kumar, we realised that the error-checking in
InductiveDefinition was being too fussy about type variables; in
particular, type variables that appear only in schematic variables
will eventually end up in the type of the inductively defined
relation.


# cca9a16f 13-Aug-2009 Michael Norrish <Michael.Norrish@nicta.com.au>

Add error-handling for when a user writes duplicate universally
quantified variables in a clause, e.g.:

(!.. v ... v .... clause)

Before this patch, this problem was typically flagged as a free type
variable problem (because the outer v will get an unused type), but
diagnosing the duplication should be more informative for the user.
The check for free type variables remains of course, but is applied
after the duplicate variable check.


# 45c47d6d 11-Jun-2009 Michael Norrish <Michael.Norrish@nicta.com.au>

A proper fix for what r6851 attempted to fix. The problem was
demonstrated by ideaMultScript in examples/Crypto/IDEA. This loaded
its ancestor theory ExtendedEuclid before it loaded the ind_types
module. The problem is that ExtendedEuclid makes "P" a constant, and
ind_types attempts to prove a lemma internally where P has to be a
variable. Now, it also sets up its parser to be with respect to a
grammar not contaminated by ExtendedEuclid, so its statement of the
goal is fine.

The goal is

!P t. (!x. (x = t) ==> P x) ==> $? P

Unfortunately, when the tactic that comes to solve this goal calls

REPEAT GEN_TAC

the code in GEN_TAC spots the "P" and checks the *global* grammar for
clashes with known constants. P is such a beast, and so the goal
after the GEN_TAC is

(!x. (x = t) ==> P' x) ==> $? P'

A later tactic that mentions P (which is still parsed correctly as a
variable called "P") then causes the tactic to derail and the proof to
fail.

It's possible to make the tactic work by using X_GEN_TAC (but not
Q.X_GEN_TAC), but it's all terribly ugly, and so I decided to prove
the theorem in boolTheory, so that the *three* places where the same
code was being duplicated could pick up the theorem without reproving
it. I also removed the code duplication by making the one version in
Prim_rec the source for the two other users.

Now examples/Crypto/IDEA builds properly.

This fix will have to be merged into

branches/release-development/kananaskis-5

This does suggest, incidentally, that Konrad's solution to parsing
protection, involving setting the global grammar reference at the head
of library files, is pragmatically better, for all that futzing around
with references is ugly, and requires the end of the file to set the
global reference back to what it was (yergh). I will have to give up
on my "clean" solution, which rebinds/redefines the Parse structure at
the head of library files.


# a20a4544 06-Dec-2007 Michael Norrish <Michael.Norrish@nicta.com.au>

Improve error reporting for Hol_reln by making common errors (and
messages) be accompanied by location information.


# e36d6953 09-Aug-2007 Michael Norrish <Michael.Norrish@nicta.com.au>

In order to have OPTREL work in inductive definitions, you have to
have its MONO theorem in the database.


# a1124e44 22-Nov-2006 Michael Norrish <Michael.Norrish@nicta.com.au>

Add inductive definition support for constants from relationTheory (TC, RTC,
EQC and SC).


# 935093c2 19-Nov-2006 Michael Norrish <Michael.Norrish@nicta.com.au>

Fix for UNCURRY bug. Now UNCURRY obligations are transformed by
effectively rewriting with the definition of UNCURRY, mimicking what is
done for lambda abstraction. Self-test passes.


# efa19a95 19-Sep-2006 Michael Norrish <Michael.Norrish@nicta.com.au>

Fix for Peter's bug (which I think is embodied in the last example in
selftest.sml). The tactic for dealing with monotonicity reasoning was
not finishing in a clear state. I think this is now handled better.


# 611d3a31 13-Sep-2006 Michael Norrish <Michael.Norrish@nicta.com.au>

These extra monotonicity results should help get more inductive definitions
made.


# e06d8527 12-Sep-2006 Michael Norrish <Michael.Norrish@nicta.com.au>

Add restricted quantifier operators to bool_monoset.


# b458dfe5 12-Sep-2006 Michael Norrish <Michael.Norrish@nicta.com.au>

Change to the inductive relations definition code to support a global,
updatable "monoset", which can be extended as theories load, thereby invisibly
increasing the capabilities of Hol_reln. Also simplify the implementation
of derive_strong_induction, and move the entrypoint to IndDefLib. New
test in src/IndDef/selftest.sml wouldn't have passed before (I believe).
It is more onerous than Peter's original monoset example, which didn't
feature recursion under the EVERY operator. Documented in Description and
release notes. Still to update theories with appropriate calls to
export_mono. (export_mono and export_rewrites are now part of bossLib's
interface.)


# 1de30b3a 15-Jul-2005 Konrad Slind <konrad.slind@gmail.com>

Set the HOL parsing environment for the code in InductiveDefinition.
Missed it when I made the last sweep on this, and it was hard
to track down.


# e82be6a1 26-Oct-2001 Konrad Slind <konrad.slind@gmail.com>

Big terms can be constructed in finite time now. On my laptop,

list_mk_abs(1Mvars,list_mk_conj 1Mvars)

gets built in about 20 seconds. list_mk_forall and list_mk_exists are
twice as slow. If you are going to build terms with lots of
binding, please use these things in favour of

itlist (curry mk_forall) vlist tm

which 1) is slow because of the iterated mk_forall and 2) blows the stack
because of the itlist, at least in MoscowML, which allocates the call stack
on the stack.

Still to do: 1) upgrading this treatment to strip_abs, strip_forall, and
strip_exists; 2) upgrading it to paired abstractions; 3) efficient versions
of SPECL and GENL (if possible); and 4) a rewriter that uses these facilities
to traverse deeply nested binders efficiently (this will (1) be a first
order rewriter and ... uh lost my train of thought).

There are some tests in src/0/big.term.tests. These should also be fleshed
out to have more strenuous tests, say ones with multiple parallel deeply
nested scopes.

... uh, the above 20 second claim is true if the list_mk_conj had
been done previously (I was mainly testing list_mk_abs).

The implementation uses Polyhash to represent the list of variables to
be bound, and utilizes continuation passing style to get tail recursion.
Otherwise, I couldn't get to 500K variables without blowing the stack.


# 73150a99 03-Aug-2001 Michael Norrish <Michael.Norrish@nicta.com.au>

I went a bit over-the-top adding diagnostic checking to this package;
in particular, I added "definition checking" to the procedure that just
proves the existence of an inductive predicate. Unfortunately, though
you can't have free variables on the RHS of a *definition*, you can have
them appear in a general existence theorem.

There never was any need for prove_non_schematic_definition_exists (or
whatever it's called) to include a call to check_definition, so I've
removed that, and now ind_typesTheory builds OK.

(There's still a call to check_definition in the function that
actually attempts to make a definition, so we haven't lost anything.)

Very verbosely yours, ...


# f0b9e56b 16-Jul-2001 Michael Norrish <Michael.Norrish@nicta.com.au>

Variety of changes to dramatically improve error reporting. Also
change behaviour to be more forgiving of free variables and try to figure
out scheme variables. Can turn previous ("strict") behaviour on with
trace variable "inddef strict".


# 8b365a0f 05-Jan-2001 Konrad Slind <konrad.slind@gmail.com>

Nearly finished with IndDefLib. Added the ability to reload
definitions (bunch of Absyn hacking). Finalized the names
and locations of various entrypoints.


# 938c638e 29-Dec-2000 Konrad Slind <konrad.slind@gmail.com>

Fixed some subtle bugs in InductiveDefinition, arising from
differences between hol98 and hol-light.

1. variant in hol-light is not the same as in hol98. See
the definition of vary in InductiveDefinition.

2. There was a bug having to do with the (new,old) vs.
{redex,residue} difference in substitutions. This
manifests itself in schematic definitions.


# 44b569f4 28-Dec-2000 Konrad Slind <konrad.slind@gmail.com>

Merge of jrh and tfm packages. The stuff from jrh will be used to
define the predicates and derive the rules, induction, and cases
theorems. The stuff from tfm will provide proof support. I hope
that this will be a fairly innocuous change, but I'm still in
the process of getting old examples to build.