Complex conversions have been refactored to the new utility conv_at,
which is easier to use and has better error detection.
Name changes: the “*_to_map” naming scheme is replaced by the more
descriptive “*_to_lookup_list”.
The key-transformer argument is now the first argument to tree_lookup and
friends, which matches functional programming conventions.
Preparation for removing duplicate word lemmas. These new lemmas
don't belong in the AFP word library, so we hook into
`Word_Lemmas_Prefix` to expose them to our own theories.
Adds the `supply_local_method` command and `local_method` methods,
which store and apply methods as a way to shorten repeated
references to large or complicated methods.
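As a sketch of the intended workflow (the method name `crush` and its body
are invented for illustration; see the command's theory for exact syntax):

```isabelle
(* Give a large, frequently repeated method a short local name.
   The name and method body here are illustrative only. *)
supply_local_method crush = (clarsimp simp: foo_def bar_def split: option.splits)

(* Later proof steps can then invoke it by name instead of
   repeating the full method text: *)
(* apply (local_method crush) *)
```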
It looks like "interpretation" occasionally renames schematic variables.
Finding global facts up to pattern equivalence should give us the original
global version.
Session-qualified imports will be required for Isabelle2018 and help clarify
the structure of sessions in the build tree.
This commit mainly adds a new set of sessions for lib/, including a Lib
session that includes most theories in lib/ and a few separate sessions for
parts that have dependencies beyond CParser or are separate AFP sessions.
The group "lib" collects all lib/ sessions.
As a consequence, other theories should use lib/ theories by session name,
not by path, which in turn means spec and proof sessions should also refer
to each other by session name, not path, to avoid duplicate theory errors in
theory merges later.
Accept "[f x | x \leftarrow t]" in addition to "[f x . x \leftarrow t]",
because the former is what naturally comes out of the Haskell translator, and
the regexps that would be necessary in the Haskell translator for this are
distasteful.
JIRA-VER 927
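For example, both of these forms now parse to the same map over t
(sketch; `f` and `t` are arbitrary placeholders):

```isabelle
term "[f x . x \<leftarrow> t]"   (* existing HOL list-comprehension syntax *)
term "[f x | x \<leftarrow> t]"   (* form produced naturally by the Haskell translator *)
```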
Adds "non-conditional simplification" method simp_no_cond, and
various equivalents.
This is done by setting the simplifier depth limit to 0, which seems
to be a useful case. It prevents expensive conditional simplification
attempts but leaves the simplifier strategy otherwise unchanged.
This is easy to set up and to link into wpsimp.
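The effect can be pictured with the existing simplifier depth-limit option
(a sketch; `simp_no_cond`'s argument syntax is assumed to mirror `simp`):

```isabelle
(* Roughly the same effect, via the simplifier configuration option:
   depth limit 0 blocks attempts to discharge rule conditions, but
   unconditional rewriting proceeds as usual. *)
lemma "rev (rev xs) = xs"
  supply [[simp_depth_limit = 0]]
  apply simp
  done

(* With the new method, hypothetically: *)
(* apply (simp_no_cond simp: some_def) *)
```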
Discard some magic that was done to instantiate an induction rule,
and instead use the existing Induct_Tacs package to apply induction
rules, which seems to be successful more often.
Adjusting the strengthen congruence rules for conjunction
and disjunction makes other conjuncts available as assumptions
when strengthening a conjunction. This may occasionally be useful.
To prove that retyping a TCB establishes the state relation for TCBs,
it is necessary to prove that the C FPU null state is always equal to
the Haskell FPU null state. This commit therefore includes some
machinery for maintaining the state relation for the FPU null state,
and repairs many proofs.
These theories supply the interference trace monad with a useful notion of
simulation/refinement, which could be used to prove functional correctness
(similar to corres) in the presence of concurrency.
Adds another style of monad to the existing ones in lib/Monad_WP.
The Interference Trace monad is an extension of the nondeterministic
state monad to record interactions between the task and its environment.
It supports a parallel composition operator.
The VCG for this monad includes the same Hoare triple style as for the
state monads, and also includes a rely-guarantee quintuple which can be
used to verify a parallel composition of programs.
By default, strings (and other lists) cannot be lexicographically
ordered because our theories pull in a conflicting instance of the
"order" class for lists. This theory adds a "lexord_list" wrapper type
that provides lexicographical order.
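A minimal sketch of such a wrapper (constructor names and instance details
are assumed here, not the library's actual definitions):

```isabelle
(* Wrap lists in a fresh type so a lexicographic "ord" instance
   cannot clash with the existing instance for plain lists. *)
datatype 'a lexord_list = LexordList (the_lexord_list: "'a list")

instantiation lexord_list :: (ord) ord
begin

definition "xs < ys \<longleftrightarrow>
  (the_lexord_list xs, the_lexord_list ys) \<in> lexord {(x, y). x < y}"

definition "xs \<le> ys \<longleftrightarrow> xs = ys \<or> xs < (ys :: 'a lexord_list)"

instance ..

end
```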
The subseq_abbreviation mechanism was a useful way of quoting part of a
definition or term, specialised to the case of left-associated sequences.
Lambda abstractions are now handled better.
The previous subseq mechanism required some generalisations. It is now replaced
by match_abbreviation, which is a more general approach.
The match mechanism picks a term, can select a matching subterm, and can
also rewrite the selected term by pattern matching. The new mechanism
covers all the cases of the previous one, as shown in the examples.
In the cases where the sequence constructor is associative, it can
be handy to immediately save a 'reassociate' theorem, that can be used
to parenthesise out the abbreviated subsequence from any sequence it
appears in.
This can be done by supplying the association rule.
It's annoying that, given automatic definitions (such as we have
with the Haskell translator and C parser), there's no way to capture
a few lines of them.
This mechanism allows you to add an abbreviation for some subsequence of
elements, found somewhere in a theorem, where a sequence is defined by its
constructor and the start and end points are matched by pattern matching.
The main aim of this is for crunch to make consistent decisions about
whether to prove new rules. If any rule in the wp set can be used to
directly solve the goal crunch is working on, then crunch will simply
use it.
Other changes include:
- crunch_ignore works properly inside locales again.
- if a rule already exists with the specific name crunch is going
  to use, but that rule does not solve the goal crunch is working on,
  then crunch will now raise an error.
- if crunch fails to prove a goal, it will now output a warning if
  adding crunch_simps or crunch_wps would allow it to make more
  progress.
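A typical invocation, for reference (the constant and lemma names are
invented; the section syntax follows existing crunch calls):

```isabelle
(* Prove that do_operation preserves typ_at, name the result
   do_operation_typ_at, and add it to the wp set. *)
crunch typ_at[wp]: do_operation "\<lambda>s. P (typ_at T p s)"
  (wp: crunch_wps simp: crunch_simps)
```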
In particular, the intro! attributes on some wp rules are removed.
These previously caused auto/fastforce to behave in really strange
ways in some proofs.
The rules for these conditional monadic operators have been somewhat
ad hoc until now, with frequent headaches around the whenE/throwError
pattern.
Adding standard split rules ensures these operators are treated uniformly.
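For instance, a split-style rule for `when` has roughly this shape
(sketched from `when`'s definition as an if-then-else; the lemma name
is invented):

```isabelle
(* Whenever a predicate P is applied to "when Q f",
   split on the condition Q, as if_split does for "if". *)
lemma when_split_sketch:
  "P (when Q f) = ((Q \<longrightarrow> P f) \<and> (\<not> Q \<longrightarrow> P (return ())))"
  by (simp add: when_def)
```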
Add two new tactics/methods which can fix common painful problems with
schematic variables.
Method datatype_schem improves unification outcomes, by making judicious use of
selectors like fst/snd/the/hd to bring variables into scope, and also using a
wrapper to avoid singleton constants like True being captured needlessly by
unification.
Method wpfix uses strengthen machinery to instantiate rogue postcondition
schematics to True and to split precondition schematics that are shared across
different sites.
The previous wp_pre applied a rule (from the named theorems wp_pre) unless
the goal already contained a schematic. An irrelevant schematic therefore
frequently prevented the rule from applying.
This implementation applies a wp_pre rule unless one of the resulting goals
can be solved by "erule FalseE", that is, unless we would promote a schematic
into the assumption position (or, more rarely, there was already an assumption
schematic or False as an assumption).
These combinator rules do something like what wp_pre does now.
They were helpful in the ancient past, but now that wp_pre exists it is
much better to just use automation.
When given a theorem, find_names finds other names the theorem appears
under, via matching on the whole proposition. It will not identify
unnamed theorems.
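Example usage (any theorem can be given; `refl` is just a convenient one):

```isabelle
(* List every name under which this proposition is stored. *)
find_names refl
```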
This is just a parser tweak for crunch: it runs multiple crunch commands
with the same sections (wps, ignores, etc.).
Also update the comments a little, and move them closer to the anchor of
command clicks (the @{command_keyword} antiquotation).
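A sketch of the combined form (constant names invented; the section syntax
is assumed to match single crunch commands):

```isabelle
(* One command covering several constants, sharing the sections. *)
crunches do_op_a, do_op_b, do_op_c
  for typ_at[wp]: "\<lambda>s. P (typ_at T p s)"
  (wp: crunch_wps simp: crunch_simps)
```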
The strengthen implementation can now do a bit more.
The new method strengthen_asm also adjusts assumptions.
The new method strengthen_meth takes a method as a parameter,
e.g. apply (strengthen_meth \<open>rule order.trans\<close>)
does the same thing as apply (strengthen order.trans),
with scope for other exciting applications I haven't thought of.
Notably useful is hoare_vcg_lift_imp', which generates an implication
rather than a disjunction.
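Its statement is roughly of this shape (a sketch; the exact variable
conventions and primed naming in the library may differ):

```isabelle
(* If f establishes "not P" from P' and Q from Q', then from the
   conjunction of the preconditions it establishes "P implies Q". *)
lemma hoare_vcg_lift_imp_sketch:
  "\<lbrakk> \<lbrace>P'\<rbrace> f \<lbrace>\<lambda>rv s. \<not> P rv s\<rbrace>; \<lbrace>Q'\<rbrace> f \<lbrace>Q\<rbrace> \<rbrakk>
     \<Longrightarrow> \<lbrace>\<lambda>s. P' s \<and> Q' s\<rbrace> f \<lbrace>\<lambda>rv s. P rv s \<longrightarrow> Q rv s\<rbrace>"
```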
Monadic rewrite rules should be modified to preserve bound variable
names, as demonstrated by monadic_rewrite_symb_exec_l'_preserve_names.
Addressing this more comprehensively is left as a TODO item for the
future (see VER-554).
Elimination against the pattern "P v", where both "P" and "v" are free,
can loop if the rule is marked as a safe elimination rule. In the rules
modified in this commit, the variable "v" provides no real benefit, so we
replace the pattern with "P".
This commit adds a method `ac_init`, which converts a ccorres goal into
a corres goal. It also adds an attribute `ac`, which converts a ccorres
fact into a corres fact, in a form suitable for solving goals produced
by `ac_init`.
A number of proofs begin with word_eqI followed by some similar steps,
suggesting a 'word_eqI_solve' proof method, which is implemented here.
Many of these steps are standard; however, a tricky part is that constants
of type 'nat' that encode a particular number of bits must often be unfolded.
This is done by expanding the eval_bool machinery with eval_int_nat, which
tries to evaluate ints and nats.
Testing eval_int_nat revealed the need to improve the code generator setup
somewhat. The Arch locale contains many of the relevant constants, and they are
given global names via requalify_const, but the code generator doesn't know
about them. Some tweaks make them available. I *think* this is safe for
arch_split, as long as the proofs that derive from them are true in each
architecture.
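For example, goals of this kind should now be dischargeable in one step
(the lemma is invented for illustration):

```isabelle
(* A bitwise identity on machine words, proved by reducing to
   a statement about individual bits. *)
lemma "x AND y AND x = x AND y" for x y :: "32 word"
  by word_eqI_solve
```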