Merge branch 'main' of https://git.logicalhacking.com/Isabelle_DOF/Isabelle_DOF
This commit is contained in:
commit e2b3184a77
@@ -1,7 +1,9 @@
 pipeline:
   build:
-    image: docker.io/logicalhacking/isabelle2022
+    image: git.logicalhacking.com/lh-docker/lh-docker-isabelle/isabelle2023:latest
     pull: true
     commands:
       - hg log --limit 2 /root/isabelle
+      - ./.woodpecker/check_dangling_theories
+      - ./.woodpecker/check_external_file_refs
       - ./.woodpecker/check_quick_and_dirty
@@ -21,7 +23,7 @@ pipeline:
       - cd ../..
       - ln -s * latest
   archive:
-    image: docker.io/logicalhacking/isabelle2022
+    image: git.logicalhacking.com/lh-docker/lh-docker-isabelle/isabelle2023:latest
     commands:
       - export ARTIFACT_DIR=$CI_WORKSPACE/.artifacts/$CI_REPO/$CI_BRANCH/$CI_BUILD_NUMBER/$LATEX
      - mkdir -p $ARTIFACT_DIR
@@ -11,7 +11,7 @@ and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.
 
 ### Changed
 
-- Updated Isabelle version to Isabelle 2022
+- Updated Isabelle version to Isabelle 2023
 
 ## [1.3.0] - 2022-07-08
 
@@ -184,7 +184,7 @@ scenarios from the point of view of the ontology modeling. In @{text_section (un
 we discuss the user-interaction generated from the ontological definitions. Finally, we draw
 conclusions and discuss related work in @{text_section (unchecked) \<open>conclusion\<close>}. \<close>
 
-section*[bgrnd::text_section,main_author="Some(@{docitem ''bu''}::author)"]
+section*[bgrnd::text_section,main_author="Some(@{author ''bu''}::author)"]
 \<open> Background: The Isabelle System \<close>
 text*[background::introduction, level="Some 1"]\<open>
 While Isabelle is widely perceived as an interactive theorem prover for HOL
@@ -246,7 +246,7 @@ can be type-checked before being displayed and can be used for calculations befo
 typeset. When editing, Isabelle's PIDE offers auto-completion and error-messages while typing the
 above \<^emph>\<open>semi-formal\<close> content.\<close>
 
-section*[isadof::technical,main_author="Some(@{docitem ''adb''}::author)"]\<open> \<^isadof> \<close>
+section*[isadof::technical,main_author="Some(@{author ''adb''}::author)"]\<open> \<^isadof> \<close>
 
 text\<open> An \<^isadof> document consists of three components:
 \<^item> the \<^emph>\<open>ontology definition\<close> which is an Isabelle theory file with definitions
@@ -635,20 +635,20 @@ text\<open> We present a selection of interaction scenarios @{example \<open>sc
 and @{example \<open>cenelec_onto\<close>} with Isabelle/PIDE instrumented by \<^isadof>. \<close>
 
 (*<*)
-declare_reference*["text-elements"::float]
+declare_reference*["text_elements"::float]
 declare_reference*["hyperlinks"::float]
 (*>*)
 
 subsection*[scholar_pide::example]\<open> A Scholarly Paper \<close>
-text\<open> In @{float (unchecked) "text-elements"}~(a)
-and @{float (unchecked) "text-elements"}~(b)we show how
+text\<open> In @{float (unchecked) "text_elements"}~(a)
+and @{float (unchecked) "text_elements"}~(b)we show how
 hovering over links permits to explore its meta-information.
 Clicking on a document class identifier permits to hyperlink into the corresponding
 class definition (@{float (unchecked) "hyperlinks"}~(a)); hovering over an attribute-definition
 (which is qualified in order to disambiguate; @{float (unchecked) "hyperlinks"}~(b)).
 \<close>
 
-text*["text-elements"::float,
+text*["text_elements"::float,
 main_caption="\<open>Exploring text elements.\<close>"]
 \<open>
 @{fig_content (width=53, height=5, caption="Exploring a reference of a text element.") "figures/Dogfood-II-bgnd1.png"
@@ -54,7 +54,7 @@ abstract*[abs, keywordlist="[\<open>Shallow Embedding\<close>,\<open>Process-Alg
 If you consider citing this paper, please refer to @{cite "HOL-CSP-iFM2020"}.
 \<close>
 text\<open>\<close>
-section*[introheader::introduction,main_author="Some(@{docitem ''bu''}::author)"]\<open> Introduction \<close>
+section*[introheader::introduction,main_author="Some(@{author ''bu''}::author)"]\<open> Introduction \<close>
 text*[introtext::introduction, level="Some 1"]\<open>
 Communicating Sequential Processes (\<^csp>) is a language to specify and verify patterns of
 interaction of concurrent systems. Together with CCS and LOTOS, it belongs to the family of
@@ -126,10 +126,10 @@ attempt to formalize denotational \<^csp> semantics covering a part of Bill Rosc
 omitted.\<close>}.
 \<close>
 
-section*["pre"::tc,main_author="Some(@{author \<open>bu\<close>}::author)"]
+section*["pre"::technical,main_author="Some(@{author \<open>bu\<close>}::author)"]
 \<open>Preliminaries\<close>
 
-subsection*[cspsemantics::tc, main_author="Some(@{author ''bu''})"]\<open>Denotational \<^csp> Semantics\<close>
+subsection*[cspsemantics::technical, main_author="Some(@{author ''bu''})"]\<open>Denotational \<^csp> Semantics\<close>
 
 text\<open> The denotational semantics (following @{cite "roscoe:csp:1998"}) comes in three layers:
 the \<^emph>\<open>trace model\<close>, the \<^emph>\<open>(stable) failures model\<close> and the \<^emph>\<open>failure/divergence model\<close>.
@@ -189,7 +189,7 @@ of @{cite "IsobeRoggenbach2010"} is restricted to a variant of the failures mode
 
 \<close>
 
-subsection*["isabelleHol"::tc, main_author="Some(@{author ''bu''})"]\<open>Isabelle/HOL\<close>
+subsection*["isabelleHol"::technical, main_author="Some(@{author ''bu''})"]\<open>Isabelle/HOL\<close>
 text\<open> Nowadays, Isabelle/HOL is one of the major interactive theory development environments
 @{cite "nipkow.ea:isabelle:2002"}. HOL stands for Higher-Order Logic, a logic based on simply-typed
 \<open>\<lambda>\<close>-calculus extended by parametric polymorphism and Haskell-like type-classes.
@@ -218,10 +218,10 @@ domain theory for a particular type-class \<open>\<alpha>::pcpo\<close>, \<^ie>
 fixed-point induction and other (automated) proof infrastructure. Isabelle's type-inference can
 automatically infer, for example, that if \<open>\<alpha>::pcpo\<close>, then \<open>(\<beta> \<Rightarrow> \<alpha>)::pcpo\<close>. \<close>
 
-section*["csphol"::tc,main_author="Some(@{author ''bu''}::author)", level="Some 2"]
+section*["csphol"::technical,main_author="Some(@{author ''bu''}::author)", level="Some 2"]
 \<open>Formalising Denotational \<^csp> Semantics in HOL \<close>
 
-subsection*["processinv"::tc, main_author="Some(@{author ''bu''})"]
+subsection*["processinv"::technical, main_author="Some(@{author ''bu''})"]
 \<open>Process Invariant and Process Type\<close>
 text\<open> First, we need a slight revision of the concept
 of \<^emph>\<open>trace\<close>: if \<open>\<Sigma>\<close> is the type of the atomic events (represented by a type variable), then
@@ -272,7 +272,7 @@ but this can be constructed in a straight-forward manner. Suitable definitions f
 \<open>\<T>\<close>, \<open>\<F>\<close> and \<open>\<D>\<close> lifting \<open>fst\<close> and \<open>snd\<close> on the new \<open>'\<alpha> process\<close>-type allows to derive
 the above properties for any \<open>P::'\<alpha> process\<close>. \<close>
 
-subsection*["operator"::tc, main_author="Some(@{author ''lina''})"]
+subsection*["operator"::technical, main_author="Some(@{author ''lina''})"]
 \<open>\<^csp> Operators over the Process Type\<close>
 text\<open> Now, the operators of \<^csp> \<open>Skip\<close>, \<open>Stop\<close>, \<open>_\<sqinter>_\<close>, \<open>_\<box>_\<close>, \<open>_\<rightarrow>_\<close>,\<open>_\<lbrakk>_\<rbrakk>_\<close> etc.
 for internal choice, external choice, prefix and parallel composition, can
@@ -297,7 +297,7 @@ The definitional presentation of the \<^csp> process operators according to @{ci
 follows always this scheme. This part of the theory comprises around 2000 loc.
 \<close>
 
-subsection*["orderings"::tc, main_author="Some(@{author ''bu''})"]
+subsection*["orderings"::technical, main_author="Some(@{author ''bu''})"]
 \<open>Refinement Orderings\<close>
 
 text\<open> \<^csp> is centered around the idea of process refinement; many critical properties,
@@ -327,7 +327,7 @@ states, from which no internal progress is possible.
 \<close>
 
 
-subsection*["fixpoint"::tc, main_author="Some(@{author ''lina''})"]
+subsection*["fixpoint"::technical, main_author="Some(@{author ''lina''})"]
 \<open>Process Ordering and HOLCF\<close>
 text\<open> For any denotational semantics, the fixed point theory giving semantics to systems
 of recursive equations is considered as keystone. Its prerequisite is a complete partial ordering
@@ -394,7 +394,7 @@ Fixed-point inductions are the main proof weapon in verifications, together with
 and the \<^csp> laws. Denotational arguments can be hidden as they are not needed in practical
 verifications. \<close>
 
-subsection*["law"::tc, main_author="Some(@{author ''lina''})"]
+subsection*["law"::technical, main_author="Some(@{author ''lina''})"]
 \<open>\<^csp> Rules: Improved Proofs and New Results\<close>
 
 
@@ -436,11 +436,11 @@ cases to be considered as well as their complexity makes pen and paper proofs
 practically infeasible.
 \<close>
 
-section*["newResults"::tc,main_author="Some(@{author ''safouan''}::author)",
+section*["newResults"::technical,main_author="Some(@{author ''safouan''}::author)",
 main_author="Some(@{author ''lina''}::author)", level= "Some 3"]
 \<open>Theoretical Results on Refinement\<close>
 text\<open>\<close>
-subsection*["adm"::tc,main_author="Some(@{author ''safouan''}::author)",
+subsection*["adm"::technical,main_author="Some(@{author ''safouan''}::author)",
 main_author="Some(@{author ''lina''}::author)"]
 \<open>Decomposition Rules\<close>
 text\<open>
@@ -476,7 +476,7 @@ The failure and divergence projections of this operator are also interdependent,
 sequence operator. Hence, this operator is not monotonic with \<open>\<sqsubseteq>\<^sub>\<F>\<close>, \<open>\<sqsubseteq>\<^sub>\<D>\<close> and \<open>\<sqsubseteq>\<^sub>\<T>\<close>, but monotonic
 when their combinations are considered. \<close>
 
-subsection*["processes"::tc,main_author="Some(@{author ''safouan''}::author)",
+subsection*["processes"::technical,main_author="Some(@{author ''safouan''}::author)",
 main_author="Some(@{author ''lina''}::author)"]
 \<open>Reference Processes and their Properties\<close>
 text\<open>
@@ -597,7 +597,7 @@ then it may still be livelock-free. % This makes sense since livelocks are worse
 
 \<close>
 
-section*["advanced"::tc,main_author="Some(@{author ''safouan''}::author)",level="Some 3"]
+section*["advanced"::technical,main_author="Some(@{author ''safouan''}::author)",level="Some 3"]
 \<open>Advanced Verification Techniques\<close>
 
 text\<open>
@@ -612,7 +612,7 @@ verification. In the latter case, we present an approach to a verification of a
 architecture, in this case a ring-structure of arbitrary size.
 \<close>
 
-subsection*["illustration"::tc,main_author="Some(@{author ''safouan''}::author)", level="Some 3"]
+subsection*["illustration"::technical,main_author="Some(@{author ''safouan''}::author)", level="Some 3"]
 \<open>The General CopyBuffer Example\<close>
 text\<open>
 We consider the paradigmatic copy buffer example @{cite "Hoare:1985:CSP:3921" and "Roscoe:UCS:2010"}
@@ -677,7 +677,7 @@ corollary deadlock_free COPY
 \<close>
 
 
-subsection*["inductions"::tc,main_author="Some(@{author ''safouan''}::author)"]
+subsection*["inductions"::technical,main_author="Some(@{author ''safouan''}::author)"]
 \<open>New Fixed-Point Inductions\<close>
 
 text\<open>
@@ -727,7 +727,7 @@ The astute reader may notice here that if the induction step is weakened (having
 the base steps require enforcement.
 \<close>
 
-subsection*["norm"::tc,main_author="Some(@{author ''safouan''}::author)"]
+subsection*["norm"::technical,main_author="Some(@{author ''safouan''}::author)"]
 \<open>Normalization\<close>
 text\<open>
 Our framework can reason not only over infinite alphabets, but also over processes parameterized
@@ -787,7 +787,7 @@ Summing up, our method consists of four stages:
 
 \<close>
 
-subsection*["dining_philosophers"::tc,main_author="Some(@{author ''safouan''}::author)",level="Some 3"]
+subsection*["dining_philosophers"::technical,main_author="Some(@{author ''safouan''}::author)",level="Some 3"]
 \<open>Generalized Dining Philosophers\<close>
 
 text\<open> The dining philosophers problem is another paradigmatic example in the \<^csp> literature
@@ -879,7 +879,7 @@ for a dozen of philosophers (on a usual machine) due to the exponential combinat
 Furthermore, our proof is fairly stable against modifications like adding non synchronized events like
 thinking or sitting down in contrast to model-checking techniques. \<close>
 
-section*["relatedwork"::tc,main_author="Some(@{author ''lina''}::author)",level="Some 3"]
+section*["relatedwork"::technical,main_author="Some(@{author ''lina''}::author)",level="Some 3"]
 \<open>Related work\<close>
 
 text\<open>
@@ -102,13 +102,13 @@ text\<open>
 functioning of the system and for its integration into the system as a whole. In
 particular, we need to make the following assumptions explicit: \<^vs>\<open>-0.3cm\<close>\<close>
 
-text*["perfect-wheel"::assumption]
+text*["perfect_wheel"::assumption]
 \<open>\<^item> the wheel is perfectly circular with a given, constant radius. \<^vs>\<open>-0.3cm\<close>\<close>
-text*["no-slip"::assumption]
+text*["no_slip"::assumption]
 \<open>\<^item> the slip between the trains wheel and the track negligible. \<^vs>\<open>-0.3cm\<close>\<close>
-text*["constant-teeth-dist"::assumption]
+text*["constant_teeth_dist"::assumption]
 \<open>\<^item> the distance between all teeth of a wheel is the same and constant, and \<^vs>\<open>-0.3cm\<close>\<close>
-text*["constant-sampling-rate"::assumption]
+text*["constant_sampling_rate"::assumption]
 \<open>\<^item> the sampling rate of positions is a given constant.\<close>
 
 text\<open>
@@ -126,13 +126,13 @@ text\<open>
 
 subsection\<open>Capturing ``System Architecture.''\<close>
 
-figure*["three-phase"::figure,relative_width="70",file_src="''figures/three-phase-odo.pdf''"]
+figure*["three_phase"::figure,relative_width="70",file_src="''figures/three-phase-odo.pdf''"]
 \<open>An odometer with three sensors \<open>C1\<close>, \<open>C2\<close>, and \<open>C3\<close>.\<close>
 
 text\<open>
 The requirements analysis also contains a document \<^doc_class>\<open>SYSAD\<close>
 (\<^typ>\<open>system_architecture_description\<close>) that contains technical drawing of the odometer,
-a timing diagram (see \<^figure>\<open>three-phase\<close>), and tables describing the encoding of the position
+a timing diagram (see \<^figure>\<open>three_phase\<close>), and tables describing the encoding of the position
 for the possible signal transitions of the sensors \<open>C1\<close>, \<open>C2\<close>, and \<open>C3\<close>.
 \<close>
 
@@ -146,7 +146,7 @@ text\<open>
 sub-system configuration. \<close>
 
 (*<*)
-declare_reference*["df-numerics-encshaft"::figure]
+declare_reference*["df_numerics_encshaft"::figure]
 (*>*)
 subsection\<open>Capturing ``Required Performances.''\<close>
 text\<open>
@@ -160,9 +160,9 @@ text\<open>
 
 The requirement analysis document describes the physical environment, the architecture
 of the measuring device, and the required format and precision of the measurements of the odometry
-function as represented (see @{figure (unchecked) "df-numerics-encshaft"}).\<close>
+function as represented (see @{figure (unchecked) "df_numerics_encshaft"}).\<close>
 
-figure*["df-numerics-encshaft"::figure,relative_width="76",file_src="''figures/df-numerics-encshaft.png''"]
+figure*["df_numerics_encshaft"::figure,relative_width="76",file_src="''figures/df-numerics-encshaft.png''"]
 \<open>Real distance vs. discrete distance vs. shaft-encoder sequence\<close>
 
 
@@ -215,7 +215,7 @@ text\<open>
 concepts such as Cauchy Sequences, limits, differentiability, and a very substantial part of
 classical Calculus. \<open>SOME\<close> is the Hilbert choice operator from HOL; the definitions of the
 model parameters admit all possible positive values as uninterpreted constants. Our
-\<^assumption>\<open>perfect-wheel\<close> is translated into a calculation of the circumference of the
+\<^assumption>\<open>perfect_wheel\<close> is translated into a calculation of the circumference of the
 wheel, while \<open>\<delta>s\<^sub>r\<^sub>e\<^sub>s\<close>, the resolution of the odometer, can be calculated
 from the these parameters. HOL-Analysis permits to formalize the fundamental physical observables:
 \<close>
@@ -347,7 +347,7 @@ text\<open>
 \<^item> \<^ML>\<open>Context.proper_subthy : theory * theory -> bool\<close> subcontext test
 \<^item> \<^ML>\<open>Context.Proof: Proof.context -> Context.generic \<close> A constructor embedding local contexts
 \<^item> \<^ML>\<open>Context.proof_of : Context.generic -> Proof.context\<close> the inverse
-\<^item> \<^ML>\<open>Context.theory_name : theory -> string\<close>
+\<^item> \<^ML>\<open>Context.theory_name : {long:bool} -> theory -> string\<close>
 \<^item> \<^ML>\<open>Context.map_theory: (theory -> theory) -> Context.generic -> Context.generic\<close>
 \<close>
 
@@ -358,7 +358,7 @@ text\<open>The structure \<^ML_structure>\<open>Proof_Context\<close> provides a
 \<^item> \<^ML>\<open> Context.Proof: Proof.context -> Context.generic \<close>
 the path to a generic Context, i.e. a sum-type of global and local contexts
 in order to simplify system interfaces
-\<^item> \<^ML>\<open> Proof_Context.get_global: theory -> string -> Proof.context\<close>
+\<^item> \<^ML>\<open> Proof_Context.get_global: {long:bool} -> theory -> string -> Proof.context\<close>
 \<close>
 
 
@@ -807,7 +807,7 @@ text\<open> They reflect the Pure logic depicted in a number of presentations s
 Notated as logical inference rules, these operations were presented as follows:
 \<close>
 
-text*["text-elements"::float,
+text*["text_elements"::float,
 main_caption="\<open>Kernel Inference Rules.\<close>"]
 \<open>
 @{fig_content (width=48, caption="Pure Kernel Inference Rules I.") "figures/pure-inferences-I.pdf"
@@ -886,7 +886,6 @@ datatype thy = Thy of
 \<^item> \<^ML>\<open>Theory.axiom_space: theory -> Name_Space.T\<close>
 \<^item> \<^ML>\<open>Theory.all_axioms_of: theory -> (string * term) list\<close>
 \<^item> \<^ML>\<open>Theory.defs_of: theory -> Defs.T\<close>
-\<^item> \<^ML>\<open>Theory.join_theory: theory list -> theory\<close>
 \<^item> \<^ML>\<open>Theory.at_begin: (theory -> theory option) -> theory -> theory\<close>
 \<^item> \<^ML>\<open>Theory.at_end: (theory -> theory option) -> theory -> theory\<close>
 \<^item> \<^ML>\<open>Theory.begin_theory: string * Position.T -> theory list -> theory\<close>
@@ -1139,8 +1138,7 @@ text\<open>
 necessary infrastructure --- i.e. the operations to pack and unpack theories and
 queries on it:
 
-\<^item> \<^ML>\<open> Toplevel.theory_toplevel: theory -> Toplevel.state\<close>
-\<^item> \<^ML>\<open> Toplevel.init_toplevel: unit -> Toplevel.state\<close>
+\<^item> \<^ML>\<open> Toplevel.make_state: theory option -> Toplevel.state\<close>
 \<^item> \<^ML>\<open> Toplevel.is_toplevel: Toplevel.state -> bool\<close>
 \<^item> \<^ML>\<open> Toplevel.is_theory: Toplevel.state -> bool\<close>
 \<^item> \<^ML>\<open> Toplevel.is_proof: Toplevel.state -> bool\<close>
@@ -1178,7 +1176,7 @@ text\<open> The extensibility of Isabelle as a system framework depends on a num
 \<^item> \<^ML>\<open>Toplevel.theory: (theory -> theory) -> Toplevel.transition -> Toplevel.transition\<close>
 adjoins a theory transformer.
 \<^item> \<^ML>\<open>Toplevel.generic_theory: (generic_theory -> generic_theory) -> Toplevel.transition -> Toplevel.transition\<close>
-\<^item> \<^ML>\<open>Toplevel.theory': (bool -> theory -> theory) -> Toplevel.presentation -> Toplevel.transition -> Toplevel.transition\<close>
+\<^item> \<^ML>\<open>Toplevel.theory': (bool -> theory -> theory) -> Toplevel.presentation option -> Toplevel.transition -> Toplevel.transition\<close>
 \<^item> \<^ML>\<open>Toplevel.exit: Toplevel.transition -> Toplevel.transition\<close>
 \<^item> \<^ML>\<open>Toplevel.ignored: Position.T -> Toplevel.transition\<close>
 \<^item> \<^ML>\<open>Toplevel.present_local_theory: (xstring * Position.T) option ->
@@ -1196,7 +1194,6 @@ text\<open>
 \<^item> \<^ML>\<open>Document.state : unit -> Document.state\<close>, giving the state as a "collection" of named
 nodes, each consisting of an editable list of commands, associated with asynchronous
 execution process,
-\<^item> \<^ML>\<open>Session.get_keywords : unit -> Keyword.keywords\<close>, this looks to be session global,
 \<^item> \<^ML>\<open>Thy_Header.get_keywords : theory -> Keyword.keywords\<close> this looks to be just theory global.
 
 
@@ -1270,7 +1267,6 @@ subsection\<open>Miscellaneous\<close>
 
 text\<open>Here are a few queries relevant for the global config of the isar engine:\<close>
 ML\<open> Document.state();\<close>
-ML\<open> Session.get_keywords(); (* this looks to be session global. *) \<close>
 ML\<open> Thy_Header.get_keywords @{theory};(* this looks to be really theory global. *) \<close>
 
 
@@ -1877,18 +1873,17 @@ Common Stuff related to Inner Syntax Parsing
 \<^item>\<^ML>\<open>Args.internal_typ : typ parser\<close>
 \<^item>\<^ML>\<open>Args.internal_term: term parser\<close>
 \<^item>\<^ML>\<open>Args.internal_fact: thm list parser\<close>
-\<^item>\<^ML>\<open>Args.internal_attribute: (morphism -> attribute) parser\<close>
-\<^item>\<^ML>\<open>Args.internal_declaration: declaration parser\<close>
+\<^item>\<^ML>\<open>Args.internal_attribute: attribute Morphism.entity parser\<close>
 \<^item>\<^ML>\<open>Args.alt_name : string parser\<close>
 \<^item>\<^ML>\<open>Args.liberal_name: string parser\<close>
 
 
 
 Common Isar Syntax
 \<^item>\<^ML>\<open>Args.named_source: (Token.T -> Token.src) -> Token.src parser\<close>
 \<^item>\<^ML>\<open>Args.named_typ : (string -> typ) -> typ parser\<close>
 \<^item>\<^ML>\<open>Args.named_term : (string -> term) -> term parser\<close>
-\<^item>\<^ML>\<open>Args.embedded_declaration: (Input.source -> declaration) -> declaration parser\<close>
+\<^item>\<^ML>\<open>Args.embedded_declaration: (Input.source -> Morphism.declaration_entity) -> Morphism.declaration_entity parser\<close>
 \<^item>\<^ML>\<open>Args.typ_abbrev : typ context_parser\<close>
 \<^item>\<^ML>\<open>Args.typ: typ context_parser\<close>
 \<^item>\<^ML>\<open>Args.term: term context_parser\<close>
@@ -2148,7 +2143,7 @@ text\<open>
 \<^item>\<^ML>\<open>Document_Output.output_document: Proof.context -> {markdown: bool} -> Input.source -> Latex.text \<close>
 \<^item>\<^ML>\<open>Document_Output.output_token: Proof.context -> Token.T -> Latex.text \<close>
 \<^item>\<^ML>\<open>Document_Output.output_source: Proof.context -> string -> Latex.text \<close>
-\<^item>\<^ML>\<open>Document_Output.present_thy: Options.T -> theory -> Document_Output.segment list -> Latex.text \<close>
+\<^item>\<^ML>\<open>Document_Output.present_thy: Options.T -> Keyword.keywords -> string -> Document_Output.segment list -> Latex.text \<close>
 
 \<^item>\<^ML>\<open>Document_Output.isabelle: Proof.context -> Latex.text -> Latex.text\<close>
 \<^item>\<^ML>\<open>Document_Output.isabelle_typewriter: Proof.context -> Latex.text -> Latex.text\<close>
@@ -131,7 +131,7 @@ type_synonym XX = B
 
 section\<open>Examples of inheritance \<close>
 
-doc_class C = XX +
+doc_class C = B +
 z :: "A option" <= None (* A LINK, i.e. an attribute that has a type
 referring to a document class. Mathematical
 relations over document items can be modeled. *)
@@ -21,6 +21,7 @@
 \RequirePackage{ifvtex}
 \documentclass[16x9,9pt]{beamer}
 \PassOptionsToPackage{force}{DOF-scholarly_paper}
+\title{No Title Given}
 \usepackage{DOF-core}
 
 \usepackage{textcomp}
@@ -21,6 +21,7 @@
 \RequirePackage{ifvtex}
 \documentclass[]{beamer}
 \PassOptionsToPackage{force}{DOF-scholarly_paper}
+\title{No Title Given}
 \usepackage{beamerposter}
 \usepackage{DOF-core}
 
@@ -23,6 +23,7 @@
 %% preamble.tex.
 
 \documentclass[submission,copyright,creativecommons]{eptcs}
+\title{No Title Given}
 
 \usepackage{DOF-core}
 \bibliographystyle{eptcs}% the mandatory bibstyle
@@ -25,6 +25,7 @@
 
 \documentclass[a4paper,UKenglish,cleveref, autoref,thm-restate]{lipics-v2021}
 \bibliographystyle{plainurl}% the mandatory bibstyle
+\title{No Title Given}
 \usepackage[numbers, sort&compress, sectionbib]{natbib}
 
 \usepackage{DOF-core}
@@ -21,6 +21,7 @@
 
 \documentclass[iicol]{sn-jnl}
 \PassOptionsToPackage{force}{DOF-scholarly_paper}
+\title{No Title Given}
 \usepackage{DOF-core}
 \bibliographystyle{sn-basic}
 \let\proof\relax
@@ -23,6 +23,7 @@
 \RequirePackage{fix-cm}
 \documentclass[]{svjour3}
 
+\title{No Title Given}
 \usepackage{DOF-core}
 \usepackage{mathptmx}
 \bibliographystyle{abbrvnat}
@@ -17,7 +17,7 @@ no_notation "Isabelle_DOF_file" ("@{file _}")
 no_notation "Isabelle_DOF_thy" ("@{thy _}")
 no_notation "Isabelle_DOF_docitem" ("@{docitem _}")
 no_notation "Isabelle_DOF_docitem_attr" ("@{docitemattr (_) :: (_)}")
-no_notation "Isabelle_DOF_trace_attribute" ("@{trace-attribute _}")
+no_notation "Isabelle_DOF_trace_attribute" ("@{trace'_-attribute _}")
 
 consts Isabelle_DOF_typ :: "string \<Rightarrow> typ" ("@{typ _}")
 consts Isabelle_DOF_term :: "string \<Rightarrow> term" ("@{term _}")
@@ -27,7 +27,7 @@ datatype "file" = Isabelle_DOF_file string ("@{file _}")
 datatype "thy" = Isabelle_DOF_thy string ("@{thy _}")
 consts Isabelle_DOF_docitem :: "string \<Rightarrow> 'a" ("@{docitem _}")
 datatype "docitem_attr" = Isabelle_DOF_docitem_attr string string ("@{docitemattr (_) :: (_)}")
-consts Isabelle_DOF_trace_attribute :: "string \<Rightarrow> (string * string) list" ("@{trace-attribute _}")
+consts Isabelle_DOF_trace_attribute :: "string \<Rightarrow> (string * string) list" ("@{trace'_-attribute _}")
 
 subsection\<open> Semantics \<close>
 
@@ -96,8 +96,10 @@ term "C"
 
 text\<open>Voila what happens on the ML level:\<close>
 ML\<open>val Type("Conceptual.B.B_ext",[Type("Conceptual.C.C_ext",t)]) = @{typ "C"};
-val @{typ "D"} = Value_Command.Docitem_Parser.cid_2_cidType "Conceptual.D" @{theory};
-val @{typ "E"} = Value_Command.Docitem_Parser.cid_2_cidType "Conceptual.E" @{theory};
+val \<^typ>\<open>D\<close> = DOF_core.get_onto_class_cid \<^theory> "Conceptual.D"
+|> snd ;
+val \<^typ>\<open>E\<close> = DOF_core.get_onto_class_cid \<^theory> "Conceptual.E"
+|> snd;
 \<close>
 
 text*[dfgdfg2::C, z = "None"]\<open> Lorem ipsum ... @{thm refl} \<close>
@@ -160,7 +162,7 @@ ML\<open> @{docitem_attribute a2::omega};
 
 type_synonym ALFACENTAURI = E
 
-update_instance*[omega::ALFACENTAURI, x+="''inition''"]
+update_instance*[omega::E, x+="''inition''"]
 
 ML\<open> val s = HOLogic.dest_string ( @{docitem_attribute x::omega}) \<close>
 
@@ -214,12 +216,12 @@ no_notation Plus (infixr "||" 55)
 no_notation Times (infixr "~~" 60)
 no_notation Atom ("\<lfloor>_\<rfloor>" 65)
 
-value* \<open> DA.accepts (na2da (rexp2na example_expression)) (map fst @{trace-attribute \<open>aaa\<close>}) \<close>
+value* \<open> DA.accepts (na2da (rexp2na example_expression)) (map fst @{trace_attribute \<open>aaa\<close>}) \<close>
 
 definition word_test :: "'a list \<Rightarrow> 'a rexp \<Rightarrow> bool" (infix "is-in" 60)
 where " w is-in rexp \<equiv> DA.accepts (na2da (rexp2na rexp)) (w)"
 
-value* \<open> (map fst @{trace-attribute \<open>aaa\<close>}) is-in example_expression \<close>
+value* \<open> (map fst @{trace_attribute \<open>aaa\<close>}) is-in example_expression \<close>
 
 
 (*<*)
@@ -213,9 +213,9 @@ value*\<open>@{scholarly_paper.author \<open>church'\<close>}\<close>
 value*\<open>@{author \<open>church\<close>}\<close>
 value*\<open>@{Concept_High_Level_Invariants.author \<open>church\<close>}\<close>
 
-value*\<open>@{scholarly_paper.author_instances}\<close>
-value*\<open>@{author_instances}\<close>
-value*\<open>@{Concept_High_Level_Invariants.author_instances}\<close>
+value*\<open>@{instances_of \<open>scholarly_paper.author\<close>}\<close>
+value*\<open>@{instances_of \<open>author\<close>}\<close>
+value*\<open>@{instances_of \<open>Concept_High_Level_Invariants.author\<close>}\<close>
 
 text*[introduction3::introduction, authored_by = "{@{author \<open>church\<close>}}", level = "Some 2"]\<open>\<close>
 text*[introduction4::introduction, authored_by = "{@{author \<open>curry\<close>}}", level = "Some 4"]\<open>\<close>
@@ -104,8 +104,8 @@ ML\<open>
 val (oid, DOF_core.Instance {value, ...}) =
 Name_Space.check (Context.Proof \<^context>) (DOF_core.get_instances \<^context>) ("aaa", Position.none)
 \<close>
-term*\<open>map fst @{trace-attribute \<open>test_monitor_M\<close>}\<close>
-value*\<open>map fst @{trace-attribute \<open>test_monitor_M\<close>}\<close>
+term*\<open>map fst @{trace_attribute \<open>test_monitor_M\<close>}\<close>
+value*\<open>map fst @{trace_attribute \<open>test_monitor_M\<close>}\<close>
 
 ML\<open>@{assert} ([("Conceptual.A", "test"), ("Conceptual.F", "sdfg")] = @{trace_attribute aaa}) \<close>
 
@@ -144,8 +144,8 @@ update_instance*[f::F,r:="[@{thm ''Concept_OntoReferencing.some_proof''}]"]
text\<open> ..., mauris amet, id elit aliquam aptent id, ... @{docitem \<open>a\<close>} \<close>
(*>*)
text\<open>Here we add and maintain a link that is actually modeled as m-to-n relation ...\<close>
update_instance*[f::F,b:="{(@{docitem \<open>a\<close>}::A,@{docitem \<open>c1\<close>}::C),
                           (@{docitem \<open>a\<close>}, @{docitem \<open>c2\<close>})}"]
update_instance*[f::F,b:="{(@{A \<open>a\<close>}::A,@{C \<open>c1\<close>}::C),
                           (@{A \<open>a\<close>}, @{C \<open>c2\<close>})}"]

section\<open>Closing the Monitor and testing the Results.\<close>
@@ -116,20 +116,22 @@ section\<open>Putting everything together\<close>

text\<open>Major sample: test-item of doc-class \<open>F\<close> with a relational link between class instances,
     and links to formal Isabelle items like \<open>typ\<close>, \<open>term\<close> and \<open>thm\<close>. \<close>
declare[[ML_print_depth = 10000]]
text*[xcv4::F, r="[@{thm ''HOL.refl''},
                   @{thm \<open>Concept_TermAntiquotations.local_sample_lemma\<close>}]", (* long names required *)
       b="{(@{docitem ''xcv1''},@{docitem \<open>xcv2\<close>})}", (* notations \<open>...\<close> vs. ''...'' *)
       b="{(@{A ''xcv1''},@{C \<open>xcv2\<close>})}", (* notations \<open>...\<close> vs. ''...'' *)
       s="[@{typ \<open>int list\<close>}]",
       properties = "[@{term \<open>H \<longrightarrow> H\<close>}]" (* notation \<open>...\<close> required for UTF8*)
      ]\<open>Lorem ipsum ...\<close>

declare[[ML_print_depth = 20]]
text*[xcv5::G, g="@{thm \<open>HOL.sym\<close>}"]\<open>Lorem ipsum ...\<close>

text\<open>... and here we add a relation between @{docitem \<open>xcv3\<close>} and @{docitem \<open>xcv2\<close>}
into the relation \verb+b+ of @{docitem \<open>xcv5\<close>}. Note that in the link-relation,
a @{typ "C"}-type is required, but a @{typ "G"}-type is offered which is legal in
\verb+Isa_DOF+ because of the sub-class relation between those classes: \<close>
update_instance*[xcv4::F, b+="{(@{docitem ''xcv3''},@{docitem ''xcv5''})}"]
a @{typ "C"}-type is required, so if a @{typ "G"}-type is offered, it is considered illegal
in \verb+Isa_DOF+ despite the sub-class relation between those classes: \<close>
update_instance-assert-error[xcv4::F, b+="{(@{docitem ''xcv3''},@{docitem ''xcv5''})}"]
\<open>Type unification failed\<close>

text\<open>And here are the results of some ML-term antiquotations:\<close>
ML\<open> @{docitem_attribute b::xcv4} \<close>
@@ -136,6 +136,9 @@ ML\<open>@{thm "refl"}\<close>

section\<open>Request on instances\<close>

text\<open>The instances directly attached to the default super class \<open>text\<close>: \<close>
value*\<open>@{instances_of \<open>text\<close>}\<close>

text\<open>We define a new class Z:\<close>
doc_class Z =
  z::"int"
@@ -146,28 +149,27 @@ text*[test2Z::Z, z=4]\<open>lorem ipsum...\<close>
text*[test3Z::Z, z=3]\<open>lorem ipsum...\<close>

text\<open>We want to get all the instances of the @{doc_class Z}:\<close>
value*\<open>@{Z_instances}\<close>
value*\<open>@{instances_of \<open>Z\<close>}\<close>

text\<open>Now we want to get the instances of the @{doc_class Z} whose attribute z > 2:\<close>
value*\<open>filter (\<lambda>\<sigma>. Z.z \<sigma> > 2) @{Z_instances}\<close>
value*\<open>filter (\<lambda>\<sigma>. Z.z \<sigma> > 2) @{instances_of \<open>Z\<close>}\<close>

text\<open>We can check that we have the list of instances we wanted:\<close>
value*\<open>filter (\<lambda>\<sigma>. Z.z \<sigma> > 2) @{Z_instances} = [@{Z \<open>test3Z\<close>}, @{Z \<open>test2Z\<close>}]
       \<or> filter (\<lambda>\<sigma>. Z.z \<sigma> > 2) @{Z_instances} = [@{Z \<open>test2Z\<close>}, @{Z \<open>test3Z\<close>}]\<close>
assert*\<open>filter (\<lambda>\<sigma>. Z.z \<sigma> > 2) @{instances_of \<open>Z\<close>} = [@{Z \<open>test3Z\<close>}, @{Z \<open>test2Z\<close>}]
        \<or> filter (\<lambda>\<sigma>. Z.z \<sigma> > 2) @{instances_of \<open>Z\<close>} = [@{Z \<open>test2Z\<close>}, @{Z \<open>test3Z\<close>}]\<close>

text\<open>Now, we want to get all the instances of the @{doc_class A}\<close>
value*\<open>@{A_instances}\<close>
value*\<open>@{instances_of \<open>A\<close>}\<close>

(*<*)
text\<open>Warning: If you make a request on attributes that are undefined in some instances,
you will get a result which includes these unresolved cases.

text\<open>Warning: Requests on attributes that are undefined in some instances
include all the instances.
In the following example, we request the instances of the @{doc_class A}.
But we have defined an instance @{docitem \<open>sdf\<close>} in theory @{theory "Isabelle_DOF-Ontologies.Conceptual"}
which our theory inherits from, and this docitem instance does not initialize its attribute \<^emph>\<open>x\<close>.
So in the request result we get an unresolved case because the evaluator can not get
the value of the \<^emph>\<open>x\<close> attribute of the instance @{docitem \<open>sdf\<close>}:\<close>
value*\<open>filter (\<lambda>\<sigma>. A.x \<sigma> > 5) @{A_instances}\<close>
(*>*)
But we have defined instances @{A \<open>axx\<close>} and @{A \<open>axx\<close>} previously
and these docitem instances do not initialize their \<^const>\<open>A.x\<close> attribute.
So the request can not be evaluated:\<close>
value*\<open>filter (\<lambda>\<sigma>. A.x \<sigma> > 5) @{instances_of \<open>A\<close>}\<close>

section\<open>Limitations\<close>

text\<open>There are still some limitations.
@@ -187,7 +189,7 @@ to update the instance @{docitem \<open>xcv4\<close>}:
\<close>

update_instance-assert-error[xcv4::F, b+="{(@{A ''xcv3''},@{G ''xcv5''})}"]
\<open>type of attribute: Conceptual.F.b does not fit to term\<close>
\<open>Type unification failed: Clash of types\<close>


section\<open>\<^theory_text>\<open>assert*\<close>-Annotated assertion-commands\<close>
@@ -225,11 +227,11 @@ text\<open>... and here we reference @{A \<open>assertionA\<close>}.\<close>
(*>*)
assert*\<open>evidence @{result \<open>resultProof\<close>} = evidence @{result \<open>resultProof2\<close>}\<close>

text\<open>The optional evaluator of \<open>value*\<close> and \<open>assert*\<close> must be specified after the meta arguments:\<close>
value* [optional_test_A::A, x=6] [nbe] \<open>filter (\<lambda>\<sigma>. A.x \<sigma> > 5) @{A_instances}\<close>
text\<open>The optional evaluator of \<open>value*\<close> and \<open>assert*\<close> must be specified before the meta arguments:\<close>
value* [nbe] [optional_test_A::A, x=6] \<open>filter (\<lambda>\<sigma>. A.x \<sigma> > 5) @{instances_of \<open>A\<close>}\<close>

assert* [resultProof3::result, evidence = "proof", property="[@{thm \<open>HOL.sym\<close>}]"] [nbe]
        \<open>evidence @{result \<open>resultProof3\<close>} = evidence @{result \<open>resultProof2\<close>}\<close>
assert* [nbe] [resultProof3::result, evidence = "proof", property="[@{thm \<open>HOL.sym\<close>}]"]
        \<open>evidence @{result \<open>resultProof3\<close>} = evidence @{result \<open>resultProof2\<close>}\<close>

text\<open>
The evaluation of @{command "assert*"} can be disabled
@@ -390,11 +390,11 @@ text-latex\<open>
ML\<open>

fun gen_enriched_document_command3 name {body} cid_transform attr_transform markdown
    (((((oid,pos),cid_pos), doc_attrs) : ODL_Meta_Args_Parser.meta_args_t,
    ((((binding,cid_pos), doc_attrs) : ODL_Meta_Args_Parser.meta_args_t,
      xstring_opt:(xstring * Position.T) option),
     toks:Input.source list)
    = gen_enriched_document_command2 name {body=body} cid_transform attr_transform markdown
        (((((oid,pos),cid_pos), doc_attrs) : ODL_Meta_Args_Parser.meta_args_t,
        ((((binding,cid_pos), doc_attrs) : ODL_Meta_Args_Parser.meta_args_t,
          xstring_opt:(xstring * Position.T) option),
         toks) \<comment> \<open>Hack : drop second and thrd args.\<close>
@@ -382,11 +382,11 @@ text-latex\<open>
ML\<open>

fun gen_enriched_document_command3 name {body} cid_transform attr_transform markdown
    (((((oid,pos),cid_pos), doc_attrs) : ODL_Meta_Args_Parser.meta_args_t,
    ((((binding,cid_pos), doc_attrs) : ODL_Meta_Args_Parser.meta_args_t,
      xstring_opt:(xstring * Position.T) option),
     toks:Input.source list)
    = gen_enriched_document_command2 name {body=body} cid_transform attr_transform markdown
        (((((oid,pos),cid_pos), doc_attrs) : ODL_Meta_Args_Parser.meta_args_t,
        ((((binding,cid_pos), doc_attrs) : ODL_Meta_Args_Parser.meta_args_t,
          xstring_opt:(xstring * Position.T) option),
         toks) \<comment> \<open>Hack : drop second and thrd args.\<close>
@@ -16,6 +16,7 @@ session "Isabelle_DOF-Unit-Tests" = "Isabelle_DOF-Ontologies" +
    "Cenelec_Test"
    "OutOfOrderPresntn"
    "COL_Test"
    "Test_Polymorphic_Classes"
  document_files
    "root.bib"
    "figures/A.png"
@@ -23,6 +23,8 @@ keywords "text-" "text-latex" :: document_body
  and "update_instance-assert-error" :: document_body
  and "declare_reference-assert-error" :: document_body
  and "value-assert-error" :: document_body
  and "definition-assert-error" :: document_body
  and "doc_class-assert-error" :: document_body

begin
@@ -35,13 +37,13 @@ fun gen_enriched_document_command2 name {body} cid_transform attr_transform mark
      xstring_opt:(xstring * Position.T) option),
     toks_list:Input.source list)
    : theory -> theory =
  let val toplvl = Toplevel.theory_toplevel
      val (((oid,pos),cid_pos), doc_attrs) = meta_args
  let val ((binding,cid_pos), doc_attrs) = meta_args
      val oid = Binding.name_of binding
      val oid' = if meta_args = ODL_Meta_Args_Parser.empty_meta_args
                 then "output"
                 else oid
      (* as side-effect, generates markup *)
      fun check_n_tex_text thy toks = let val ctxt = Toplevel.presentation_context (toplvl thy);
      fun check_n_tex_text thy toks = let val ctxt = Toplevel.presentation_context (Toplevel.make_state (SOME thy))
                                          val pos = Input.pos_of toks;
                                          val _ = Context_Position.reports ctxt
                                                    [(pos, Markup.language_document (Input.is_delimited toks)),
@@ -72,7 +74,7 @@ fun gen_enriched_document_command2 name {body} cid_transform attr_transform mark
      else
        Value_Command.Docitem_Parser.create_and_check_docitem
          {is_monitor = false} {is_inline = false} {define = true}
          oid pos (cid_transform cid_pos) (attr_transform doc_attrs))
          binding (cid_transform cid_pos) (attr_transform doc_attrs))
      (* ... generating the level-attribute syntax *)
  in handle_margs_opt #> (fn thy => (app (check_n_tex_text thy) toks_list; thy))
  end;
@@ -137,10 +139,10 @@ val _ =
      >> (Toplevel.theory o update_instance_command));

val _ =
  let fun create_and_check_docitem ((((oid, pos),cid_pos),doc_attrs),src) thy =
  let fun create_and_check_docitem (((binding,cid_pos),doc_attrs),src) thy =
        (Value_Command.Docitem_Parser.create_and_check_docitem
           {is_monitor = false} {is_inline=true}
           {define = false} oid pos (cid_pos) (doc_attrs) thy)
           {define = false} binding (cid_pos) (doc_attrs) thy)
        handle ERROR msg => (if error_match src msg
                             then (writeln ("Correct error: "^msg^": reported.");thy)
                             else error"Wrong error reported")
@@ -152,20 +154,55 @@ val _ =


val _ =
  let fun pass_trans_to_value_cmd (args, (((name, modes), t),src)) trans =
        (Value_Command.value_cmd {assert=false} args name modes t @{here} trans
         handle ERROR msg => (if error_match src msg
                              then (writeln ("Correct error: "^msg^": reported.");trans)
                              else error"Wrong error reported"))
  let fun pass_trans_to_value_cmd (args, (((name, modes), t),src)) trans =
        let val pos = Toplevel.pos_of trans
        in trans |> Toplevel.theory
             (fn thy => Value_Command.value_cmd {assert=false} args name modes t pos thy
                        handle ERROR msg => (if error_match src msg
                                             then (writeln ("Correct error: "^msg^": reported."); thy)
                                             else error"Wrong error reported"))
        end
  in Outer_Syntax.command \<^command_keyword>\<open>value-assert-error\<close> "evaluate and print term"
       (ODL_Meta_Args_Parser.opt_attributes --
        (Value_Command.opt_evaluator
         -- Value_Command.opt_modes
         -- Parse.term
         -- Parse.document_source)
        >> (Toplevel.theory o pass_trans_to_value_cmd))
        >> (pass_trans_to_value_cmd))
  end;

val _ =
  let fun definition_cmd' meta_args_opt decl params prems spec src bool ctxt =
        Local_Theory.background_theory (Value_Command.meta_args_exec meta_args_opt) ctxt
        |> (fn ctxt => Definition_Star_Command.definition_cmd decl params prems spec bool ctxt
                       handle ERROR msg => if error_match src msg
                                           then (writeln ("Correct error: "^msg^": reported.")
                                                ; pair "Bound 0" @{thm refl}
                                                  |> pair (Bound 0)
                                                  |> rpair ctxt)
                                           else error"Wrong error reported")
  in
    Outer_Syntax.local_theory' \<^command_keyword>\<open>definition-assert-error\<close> "constant definition"
      (ODL_Meta_Args_Parser.opt_attributes --
       (Scan.option Parse_Spec.constdecl -- (Parse_Spec.opt_thm_name ":" -- Parse.prop) --
        Parse_Spec.if_assumes -- Parse.for_fixes -- Parse.document_source)
       >> (fn (meta_args_opt, ((((decl, spec), prems), params), src)) =>
             #2 oo definition_cmd' meta_args_opt decl params prems spec src))
  end;


val _ =
  let fun add_doc_class_cmd' ((((overloaded, hdr), (parent, attrs)),((rejects,accept_rex),invars)), src) =
        (fn thy => OntoParser.add_doc_class_cmd {overloaded = overloaded} hdr parent attrs rejects accept_rex invars thy
                   handle ERROR msg => (if error_match src msg
                                        then (writeln ("Correct error: "^msg^": reported."); thy)
                                        else error"Wrong error reported"))
  in
    Outer_Syntax.command \<^command_keyword>\<open>doc_class-assert-error\<close>
      "define document class"
      ((OntoParser.parse_doc_class -- Parse.document_source)
       >> (Toplevel.theory o add_doc_class_cmd'))
  end

val _ =
  Outer_Syntax.command ("text-latex", \<^here>) "formal comment (primary style)"
@@ -0,0 +1,696 @@
theory Test_Polymorphic_Classes
  imports Isabelle_DOF.Isa_DOF
          TestKit
begin

text\<open>The name \<open>text\<close> is reserved by the implementation and refers to the default super class:\<close>
doc_class-assert-error "text" =
  a::int
\<open>text: This name is reserved by the implementation\<close>

doc_class title =
  short_title :: "string option" <= "None"

doc_class Author =
  email :: "string" <= "''''"

datatype classification = SIL0 | SIL1 | SIL2 | SIL3 | SIL4

doc_class abstract =
  keywordlist :: "string list" <= "[]"
  safety_level :: "classification" <= "SIL3"

doc_class text_section =
  authored_by :: "Author set" <= "{}"
  level :: "int option" <= "None"

doc_class ('a::one, 'b, 'c) test0 = text_section +
  testa :: "'a list"
  testb :: "'b list"
  testc :: "'c list"

typ\<open>('a, 'b, 'c) test0\<close>
typ\<open>('a, 'b, 'c, 'd) test0_scheme\<close>

find_consts name:"test0"
find_theorems name:"test0"


doc_class 'a test1 = text_section +
  test1 :: "'a list"
  invariant author_finite_test :: "finite (authored_by \<sigma>)"
  invariant force_level_test :: "(level \<sigma>) \<noteq> None \<and> the (level \<sigma>) > 1"

find_consts name:"test1*inv"
find_theorems name:"test1*inv"
text*[church::Author, email="\<open>b\<close>"]\<open>\<close>
text\<open>@{Author "church"}\<close>
value*\<open>@{Author \<open>church\<close>}\<close>

text\<open>\<^value_>\<open>@{Author \<open>church\<close>}\<close>\<close>

doc_class ('a, 'b) test2 = "'a test1" +
  test2 :: "'b list"
type_synonym ('a, 'b) test2_syn = "('a, 'b) test2"

find_theorems name:"test2"

declare [[invariants_checking_with_tactics]]
text*[testtest::"('a, int) test2", level = "Some 2", authored_by = "{@{Author \<open>church\<close>}}", test2 = "[1]"]\<open>\<close>
value*\<open>test2 @{test2 \<open>testtest\<close>}\<close>

text*[testtest2''::"(nat, int) test2", test1 = "[2::nat, 3]", test2 = "[4::int, 5]", level = "Some (2::int)"]\<open>\<close>
value*\<open>test1 @{test2 \<open>testtest2''\<close>}\<close>
declare [[invariants_checking_with_tactics = false]]

ML\<open>
val t = Syntax.parse_term \<^context> "@{test2 \<open>testtest\<close>}"

\<close>
ML\<open>
val t = \<^term>\<open>test2.make 8142730 Test_Parametric_Classes_2_test2_authored_by_Attribute_Not_Initialized Test_Parametric_Classes_2_test2_level_Attribute_Not_Initialized Test_Parametric_Classes_2_test2_test1_Attribute_Not_Initialized
          Test_Parametric_Classes_2_test2_test2_Attribute_Not_Initialized
          \<lparr>authored_by := bot, level := None\<rparr> \<close>
\<close>

text\<open>test2 = "[1::'a::one]" should be test2 = "[1::int]" because the type of testtest4 is ('a::one, int) test2:\<close>
text-assert-error[testtest4::"('a::one, int) test2", level = "Some 2", authored_by = "{@{Author \<open>church\<close>}}", test2 = "[1::'a::one]"]\<open>\<close>
\<open>Type unification failed\<close>
text\<open>Indeed this definition fails:\<close>
definition-assert-error testtest2::"('a::one, int) test2" where "testtest2 \<equiv>
  test2.make 11953346
    {@{Author \<open>church\<close>}}
    (Some 2)
    []
    []
  \<lparr>authored_by := bot
  , level := None, level := Some 2
  , authored_by := insert \<lparr>Author.tag_attribute = 11953164, email = []\<rparr> bot
  , test2.test2 := [1::('a::one)]\<rparr> "
\<open>Type unification failed\<close>

text\<open>For now, no more support of type synonyms as parent:\<close>
doc_class ('a, 'b, 'c) A =
  a :: "'a list"
  b :: "'b list"
  c :: "'c list"
type_synonym ('a, 'b, 'c) A_syn = "('a, 'b, 'c) A"

doc_class-assert-error ('a, 'b, 'c, 'd) B = "('b, 'c, 'd) A_syn" +
  d ::"'a::one list" <= "[1]"
\<open>Undefined onto class: "A_syn"\<close>


declare[[invariants_checking_with_tactics]]

definition* testauthor0 where "testauthor0 \<equiv> \<lparr>Author.tag_attribute = 5, email = \<open>test_author_email\<close>\<rparr>"
definition* testauthor :: "Author" where "testauthor \<equiv> \<lparr>Author.tag_attribute = 5, email = \<open>test_author_email\<close>\<rparr>"
definition* testauthor2 :: "Author" where "testauthor2 \<equiv> \<lparr>Author.tag_attribute = 5, email = \<open>test_author_email\<close>\<rparr> \<lparr>email := \<open>test_author_email_2\<close> \<rparr>"
definition* testauthor3 :: "Author" where "testauthor3 \<equiv> testauthor \<lparr>email := \<open>test_author_email_2\<close> \<rparr>"

ML\<open>
val ctxt = \<^context>
val input0 = Syntax.read_input "@{Author \<open>church\<close>}"
val source = Syntax.read_input "\<^term_>\<open>@{Author \<open>church\<close>}\<close>"
val input = source
val tt = Document_Output.output_document ctxt {markdown = false} input
\<close>
doc_class ('a, 'b) elaborate1 =
  a :: "'a list"
  b :: "'b list"

doc_class ('a, 'b) elaborate2 =
  c :: "('a, 'b) elaborate1 list"

doc_class ('a, 'b) elaborate3 =
  d :: "('a, 'b) elaborate2 list"

text*[test_elaborate1::"('a::one, 'b) elaborate1", a = "[1]"]\<open>\<close>

term*\<open>@{elaborate1 \<open>test_elaborate1\<close>}\<close>
value* [nbe]\<open>@{elaborate1 \<open>test_elaborate1\<close>}\<close>


text*[test_elaborate2::"('a::one, 'b) elaborate2", c = "[@{elaborate1 \<open>test_elaborate1\<close>}]"]\<open>\<close>

text*[test_elaborate3::"('a::one, 'b) elaborate3", d = "[@{elaborate2 \<open>test_elaborate2\<close>}]"]\<open>\<close>

term*\<open>(concat o concat) ((map o map) a (map c (elaborate3.d @{elaborate3 \<open>test_elaborate3\<close>})))\<close>
value*\<open>(concat o concat) ((map o map) a (map c (elaborate3.d @{elaborate3 \<open>test_elaborate3\<close>})))\<close>


text\<open>
The term antiquotation is considered a ground term.
Then its type here is \<^typ>\<open>'a::one list\<close> with \<open>'a\<close> a fixed-type variable.
So the following definition only works because the parameter of the class is also \<open>'a\<close>.\<close>
declare[[ML_print_depth = 10000]]
doc_class 'a elaborate4 =
  d :: "'a::one list" <= "(concat o concat) ((map o map) a (map c (elaborate3.d @{elaborate3 \<open>test_elaborate3\<close>})))"
declare[[ML_print_depth = 20]]

declare[[ML_print_depth = 10000]]
text*[test_elaborate4::"'a::one elaborate4"]\<open>\<close>
declare[[ML_print_depth = 20]]


text\<open>Bug:
As the term antiquotation is considered as a ground term,
its type \<^typ>\<open>'a::one list\<close> conflicts with the type of the attribute \<^typ>\<open>int list\<close>.
To support the instantiation of the term antiquotation as an \<^typ>\<open>int list\<close>,
the term antiquotation should have the same behavior as a constant definition,
which is not the case for now.\<close>
declare[[ML_print_depth = 10000]]
doc_class-assert-error elaborate4' =
  d :: "int list" <= "(concat o concat) ((map o map) a (map c (elaborate3.d @{elaborate3 \<open>test_elaborate3\<close>})))"
\<open>Type unification failed\<close>
declare[[ML_print_depth = 20]]

text\<open>The behavior we want to support: \<close>

definition one_list :: "'a::one list" where "one_list \<equiv> [1]"

text\<open>the constant \<^const>\<open>one_list\<close> can be instantiated as an \<^typ>\<open>int list\<close>:\<close>
doc_class elaborate4'' =
  d :: "int list" <= "one_list"

declare[[ML_print_depth = 10000]]
text*[test_elaborate4''::"elaborate4''"]\<open>\<close>
declare[[ML_print_depth = 20]]


term*\<open>concat (map a (elaborate2.c @{elaborate2 \<open>test_elaborate2\<close>}))\<close>
value*\<open>concat (map a (elaborate2.c @{elaborate2 \<open>test_elaborate2\<close>}))\<close>
text\<open>
The term antiquotation is considered a ground term.
Then its type here is \<^typ>\<open>'a::one list\<close> with \<open>'a\<close> a fixed-type variable.
So the following definition only works because the parameter of the class is also \<open>'a\<close>.\<close>
declare[[ML_print_depth = 10000]]
doc_class 'a elaborate5 =
  d :: "'a::one list" <= "concat (map a (elaborate2.c @{elaborate2 \<open>test_elaborate2\<close>}))"
declare[[ML_print_depth = 20]]

text\<open>Bug: But when defining an instance, as we use a \<open>'b\<close> variable to specify the type
of the instance (\<^typ>\<open>'b::one elaborate5\<close>), the unification fails\<close>
declare[[ML_print_depth = 10000]]
text-assert-error[test_elaborate5::"'b::one elaborate5"]\<open>\<close>
\<open>Inconsistent sort constraints for type variable "'b"\<close>
declare[[ML_print_depth = 20]]

text\<open>Bug:
The term antiquotation is considered a ground term.
Then its type here is \<^typ>\<open>'a::one list\<close> with \<open>'a\<close> a fixed-type variable.
So it is not compatible with the type of the attribute \<^typ>\<open>'a::numeral list\<close>\<close>
doc_class-assert-error 'a elaborate5' =
  d :: "'a::numeral list" <= "concat (map a (elaborate2.c @{elaborate2 \<open>test_elaborate2\<close>}))"
\<open>Sort constraint\<close>

text\<open>The behavior we want to support: \<close>

text\<open>the constant \<^const>\<open>one_list\<close> can be instantiated as an \<^typ>\<open>'a::numeral list\<close>:\<close>
doc_class 'a elaborate5'' =
  d :: "'a::numeral list" <= "one_list"


text*[test_elaborate1a::"('a::one, int) elaborate1", a = "[1]", b = "[2]"]\<open>\<close>

term*\<open>@{elaborate1 \<open>test_elaborate1a\<close>}\<close>
value* [nbe]\<open>@{elaborate1 \<open>test_elaborate1a\<close>}\<close>

text*[test_elaborate2a::"('a::one, int) elaborate2", c = "[@{elaborate1 \<open>test_elaborate1a\<close>}]"]\<open>\<close>

text*[test_elaborate3a::"('a::one, int) elaborate3", d = "[@{elaborate2 \<open>test_elaborate2a\<close>}]"]\<open>\<close>

text\<open>
The term antiquotation is considered a ground term.
Then its type here is \<^typ>\<open>'a::one list\<close> with \<open>'a\<close> a fixed-type variable.
So the following definition only works because the parameter of the class is also \<open>'a\<close>.\<close>
definition* test_elaborate3_embedding ::"'a::one list"
  where "test_elaborate3_embedding \<equiv> (concat o concat) ((map o map) elaborate1.a (map elaborate2.c (elaborate3.d @{elaborate3 \<open>test_elaborate3a\<close>})))"

text\<open>Bug:
The term antiquotation is considered a ground term.
Then its type here is \<^typ>\<open>'a::one list\<close> with \<open>'a\<close> a fixed-type variable.
So it is not compatible with the specified type of the definition \<^typ>\<open>int list\<close>:\<close>
definition-assert-error test_elaborate3_embedding'::"int list"
  where "test_elaborate3_embedding' \<equiv> (concat o concat) ((map o map) elaborate1.a (map elaborate2.c (elaborate3.d @{elaborate3 \<open>test_elaborate3a\<close>})))"
\<open>Type unification failed\<close>

term*\<open>@{elaborate1 \<open>test_elaborate1a\<close>}\<close>
value* [nbe]\<open>@{elaborate1 \<open>test_elaborate1a\<close>}\<close>
record ('a, 'b) elaborate1' =
  a :: "'a list"
  b :: "'b list"

record ('a, 'b) elaborate2' =
  c :: "('a, 'b) elaborate1' list"

record ('a, 'b) elaborate3' =
  d :: "('a, 'b) elaborate2' list"

doc_class 'a one =
  a::"'a list"

text*[test_one::"'a::one one", a = "[1]"]\<open>\<close>

value* [nbe] \<open>@{one \<open>test_one\<close>}\<close>

term*\<open>a @{one \<open>test_one\<close>}\<close>

text\<open>Bug:
The term antiquotation is considered a ground term.
Then its type here is \<^typ>\<open>'a::one list\<close> with \<open>'a\<close> a fixed-type variable.
So it is not compatible with the specified type of the definition \<^typ>\<open>('b::one, 'a::numeral) elaborate1'\<close>
because the term antiquotation can not be instantiated as a \<^typ>\<open>'b::one list\<close>
and the \<open>'a\<close> is checked against the \<open>'a::numeral\<close> instance type parameter:\<close>
definition-assert-error test_elaborate1'::"('b::one, 'a::numeral) elaborate1'"
  where "test_elaborate1' \<equiv> \<lparr> elaborate1'.a = a @{one \<open>test_one\<close>}, b = [2] \<rparr>"
\<open>Sort constraint\<close>

text\<open>This is the same behavior as the following:\<close>
definition-assert-error test_elaborate10::"('b::one, 'a::numeral) elaborate1'"
  where "test_elaborate10 \<equiv> \<lparr> elaborate1'.a = [1::'a::one], b = [2] \<rparr>"
\<open>Sort constraint\<close>

definition-assert-error test_elaborate11::"('b::one, 'c::numeral) elaborate1'"
  where "test_elaborate11 \<equiv> \<lparr> elaborate1'.a = [1::'a::one], b = [2] \<rparr>"
\<open>Type unification failed\<close>

text\<open>So this works:\<close>
definition* test_elaborate1''::"('a::one, 'b::numeral) elaborate1'"
  where "test_elaborate1'' \<equiv> \<lparr> elaborate1'.a = a @{one \<open>test_one\<close>}, b = [2] \<rparr>"

term \<open>elaborate1'.a test_elaborate1''\<close>
value [nbe] \<open>elaborate1'.a test_elaborate1''\<close>

text\<open>But if we embed the term antiquotation in a definition,
the type unification works:\<close>
definition* onedef where "onedef \<equiv> @{one \<open>test_one\<close>}"

definition test_elaborate1'''::"('b::one, 'a::numeral) elaborate1'"
  where "test_elaborate1''' \<equiv> \<lparr> elaborate1'.a = a onedef, b = [2] \<rparr>"

value [nbe] \<open>elaborate1'.a test_elaborate1'''\<close>


definition test_elaborate2'::"(int, 'b::numeral) elaborate2'"
  where "test_elaborate2' \<equiv> \<lparr> c = [test_elaborate1''] \<rparr>"

definition test_elaborate3'::"(int, 'b::numeral) elaborate3'"
  where "test_elaborate3' \<equiv> \<lparr> d = [test_elaborate2'] \<rparr>"
doc_class 'a test3' =
  test3 :: "int"
  test3' :: "'a list"

text*[testtest30::"'a::one test3'", test3'="[1]"]\<open>\<close>
text-assert-error[testtest30::"'a test3'", test3'="[1]"]\<open>\<close>
\<open>Type unification failed: Variable\<close>

find_consts name:"test3'.test3"
definition testone :: "'a::one test3'" where "testone \<equiv> \<lparr>tag_attribute = 5, test3 = 3, test3' = [1] \<rparr>"
definition* testtwo :: "'a::one test3'" where "testtwo \<equiv> \<lparr>tag_attribute = 5, test3 = 1, test3' = [1] \<rparr>\<lparr> test3 := 1\<rparr>"

text*[testtest3'::"'a test3'", test3 = "1"]\<open>\<close>

declare [[show_sorts = false]]
definition* testtest30 :: "'a test3'" where "testtest30 \<equiv> \<lparr>tag_attribute = 12, test3 = 2, test3' = [] \<rparr>"
update_instance*[testtest3'::"'a test3'", test3 := "2"]

ML\<open>
val t = @{value_ [nbe] \<open>test3 @{test3' \<open>testtest3'\<close>}\<close>}
val tt = HOLogic.dest_number t
\<close>

text\<open>@{value_ [] [nbe] \<open>test3 @{test3' \<open>testtest3'\<close>}\<close>}\<close>

update_instance*[testtest3'::"'a test3'", test3 += "2"]

ML\<open>
val t = @{value_ [nbe] \<open>test3 @{test3' \<open>testtest3'\<close>}\<close>}
val tt = HOLogic.dest_number t
\<close>

value\<open>test3 \<lparr> tag_attribute = 1, test3 = 2, test3' = [2::int, 3] \<rparr>\<close>
value\<open>test3 \<lparr> tag_attribute = 1, test3 = 2, test3' = [2::int, 3] \<rparr>\<close>
find_consts name:"test3'.test3"

ML\<open>
val test_value = @{value_ \<open>@{test3' \<open>testtest3'\<close>}\<close>}

\<close>
declare [[show_sorts = false]]
update_instance*[testtest3'::"'a test3'", test3 += "3"]
declare [[show_sorts = false]]
value*\<open>test3 @{test3' \<open>testtest3'\<close>}\<close>
value\<open>test3 \<lparr> tag_attribute = 12, test3 = 5, test3' = AAAAAA\<rparr>\<close>

find_consts name:"test3'.test3"

text*[testtest3''::"int test3'", test3 = "1"]\<open>\<close>

update_instance*[testtest3''::"int test3'", test3' += "[3]"]

value*\<open>test3' @{test3' \<open>testtest3''\<close>}\<close>

update_instance*[testtest3''::"int test3'", test3' := "[3]"]

value*\<open>test3' @{test3' \<open>testtest3''\<close>}\<close>

update_instance*[testtest3''::"int test3'", test3' += "[2,5]"]

value*\<open>test3' @{test3' \<open>testtest3''\<close>}\<close>

definition testeq where "testeq \<equiv> \<lambda>x. x"
find_consts name:"test3'.ma"

text-assert-error[testtest3''::"int test3'", test3 = "1", test3' = "[3::'a::numeral]"]\<open>\<close>
\<open>Type unification failed\<close>

text-assert-error[testtest3''::"int test3'", test3 = "1", test3' = "[3]"]\<open>\<close>
\<open>Duplicate instance declaration\<close>


declare[[ML_print_depth = 10000]]
definition-assert-error testest3''' :: "int test3'"
  where "testest3''' \<equiv> \<lparr> tag_attribute = 12, test3 = 1, test3' = [2]\<rparr>\<lparr> test3' := [3::'a::numeral]\<rparr>"
|
||||
\<open>Type unification failed\<close>
|
||||
declare[[ML_print_depth = 20]]
|
||||
|
||||
value* \<open>test3 @{test3' \<open>testtest3''\<close>}\<close>
|
||||
value* \<open>\<lparr> tag_attribute = 12, test3 = 1, test3' = [2]\<rparr>\<lparr> test3' := [3::int]\<rparr>\<close>
|
||||
value* \<open>test3 (\<lparr> tag_attribute = 12, test3 = 1, test3' = [2]\<rparr>\<lparr> test3' := [3::int]\<rparr>)\<close>
|
||||
term*\<open>@{test3' \<open>testtest3''\<close>}\<close>
|
||||
|
||||
ML\<open>val t = \<^term_>\<open>test3 @{test3' \<open>testtest3''\<close>}\<close>\<close>
|
||||
|
||||
value\<open>test3 \<lparr> tag_attribute = 12, test3 = 2, test3' = [2::int ,3]\<rparr>\<close>
|
||||
|
||||
find_consts name:"test3'.test3"
|
||||
find_consts name:"Isabelle_DOF_doc_class_test3'"
|
||||
update_instance*[testtest3''::"int test3'", test3 := "2"]
|
||||
ML\<open>
|
||||
val t = @{value_ [nbe] \<open>test3 @{test3' \<open>testtest3''\<close>}\<close>}
|
||||
val tt = HOLogic.dest_number t |> snd
|
||||
\<close>
|
||||
|
||||
doc_class 'a testa =
|
||||
a:: "'a set"
|
||||
b:: "int set"
|
||||
|
||||
text*[testtesta::"'a testa", b = "{2}"]\<open>\<close>
|
||||
update_instance*[testtesta::"'a testa", b += "{3}"]
|
||||
|
||||
ML\<open>
|
||||
val t = @{value_ [nbe] \<open>b @{testa \<open>testtesta\<close>}\<close>}
|
||||
val tt = HOLogic.dest_set t |> map (HOLogic.dest_number #> snd)
|
||||
\<close>
|
||||
|
||||
update_instance-assert-error[testtesta::"'a::numeral testa", a := "{2::'a::numeral}"]
|
||||
\<open>incompatible classes:'a Test_Polymorphic_Classes.testa:'a::numeral Test_Polymorphic_Classes.testa\<close>
|
||||
|
||||
text*[testtesta'::"'a::numeral testa", a = "{2}"]\<open>\<close>
|
||||
|
||||
update_instance*[testtesta'::"'a::numeral testa", a += "{3}"]
|
||||
|
||||
ML\<open>
|
||||
val t = @{value_ [nbe] \<open>a @{testa \<open>testtesta'\<close>}\<close>}
|
||||
\<close>
|
||||
|
||||
update_instance-assert-error[testtesta'::"'a::numeral testa", a += "{3::int}"]
|
||||
\<open>Type unification failed\<close>
|
||||
|
||||
definition-assert-error testtesta'' :: "'a::numeral testa"
|
||||
where "testtesta'' \<equiv> \<lparr>tag_attribute = 5, a = {1}, b = {1} \<rparr>\<lparr> a := {1::int}\<rparr>"
|
||||
\<open>Type unification failed\<close>
|
||||
|
||||
update_instance*[testtesta'::"'a::numeral testa", b := "{3::int}"]
|
||||
ML\<open>
|
||||
val t = @{value_ [nbe] \<open>b @{testa \<open>testtesta'\<close>}\<close>}
|
||||
\<close>
|
||||
|
||||
value* [nbe] \<open>b @{testa \<open>testtesta'\<close>}\<close>
|
||||
|
||||
definition testtesta'' :: "'a::numeral testa"
|
||||
where "testtesta'' \<equiv> \<lparr>tag_attribute = 5, a = {1}, b = {1} \<rparr>\<lparr> b := {2::int}\<rparr>"
|
||||
|
||||
value [nbe]\<open>b testtesta''\<close>
|
||||
|
||||
doc_class 'a test3 =
|
||||
test3 :: "'a list"
|
||||
type_synonym 'a test3_syn = "'a test3"
|
||||
|
||||
text*[testtest3::"int test3", test3 = "[1]"]\<open>\<close>
|
||||
update_instance*[testtest3::"int test3", test3 := "[2]"]
|
||||
ML\<open>
|
||||
val t = \<^term_>\<open>test3 @{test3 \<open>testtest3\<close>}\<close>
|
||||
val tt = \<^value_>\<open>test3 @{test3 \<open>testtest3\<close>}\<close> |> HOLogic.dest_list |> map HOLogic.dest_number
|
||||
\<close>
|
||||
|
||||
update_instance*[testtest3::"int test3", test3 += "[3]"]
|
||||
value*\<open>test3 @{test3 \<open>testtest3\<close>}\<close>
|
||||
|
||||
|
||||
doc_class ('a, 'b) test4 = "'a test3" +
|
||||
test4 :: "'b list"
|
||||
|
||||
definition-assert-error testtest0'::"('a::one, int) test4" where "testtest0' \<equiv>
|
||||
test4.make 11953346
|
||||
[] [1::('a::one)]"
|
||||
\<open>Type unification failed\<close>
|
||||
|
||||
definition-assert-error testtest0''::"('a, int) test4" where "testtest0'' \<equiv>
|
||||
test4.make 11953346
|
||||
[1] Test_Parametric_Classes_2_test4_test4_Attribute_Not_Initialized"
|
||||
\<open>Type unification failed\<close>
|
||||
|
||||
text\<open>Must fail because the equivalent definition
|
||||
\<open>testtest0'\<close> below fails
|
||||
due to the constraint in the where [1::('a::one)] is not an \<^typ>\<open>int list\<close>
|
||||
but an \<^typ>\<open>'a::one list\<close> list \<close>
|
||||
text-assert-error[testtest0::"('a::one, int) test4", test4 = "[1::'a::one]"]\<open>\<close>
|
||||
\<open>Type unification failed\<close>
|
||||
update_instance-assert-error[testtest0::"('a::one, int) test4"]
|
||||
\<open>Undefined instance: "testtest0"\<close>
|
||||
|
||||
value-assert-error\<open>@{test4 \<open>testtest0\<close>}\<close>\<open>Undefined instance: "testtest0"\<close>
|
||||
|
||||
definition testtest0''::"('a, int) test4" where "testtest0'' \<equiv>
|
||||
\<lparr> tag_attribute = 11953346, test3 = [], test4 = [1]\<rparr>\<lparr>test4 := [2]\<rparr>"
|
||||
|
||||
definition testtest0'''::"('a, int) test4" where "testtest0''' \<equiv>
|
||||
\<lparr> tag_attribute = 11953346, test3 = [], test4 = [1]\<rparr>\<lparr>test4 := [2]\<rparr>"
|
||||
|
||||
|
||||
value [nbe] \<open>test3 testtest0''\<close>
|
||||
|
||||
type_synonym notion = string
|
||||
|
||||
doc_class Introduction = text_section +
|
||||
authored_by :: "Author set" <= "UNIV"
|
||||
uses :: "notion set"
|
||||
invariant author_finite :: "finite (authored_by \<sigma>)"
|
||||
and force_level :: "(level \<sigma>) \<noteq> None \<and> the (level \<sigma>) > 1"
|
||||
|
||||
doc_class claim = Introduction +
|
||||
based_on :: "notion list"
|
||||
|
||||
doc_class technical = text_section +
|
||||
formal_results :: "thm list"
|
||||
|
||||
doc_class "definition" = technical +
|
||||
is_formal :: "bool"
|
||||
property :: "term list" <= "[]"
|
||||
|
||||
datatype kind = expert_opinion | argument | "proof"
|
||||
|
||||
doc_class result = technical +
|
||||
evidence :: kind
|
||||
property :: "thm list" <= "[]"
|
||||
invariant has_property :: "evidence \<sigma> = proof \<longleftrightarrow> property \<sigma> \<noteq> []"
|
||||
|
||||
doc_class example = technical +
|
||||
referring_to :: "(notion + definition) set" <= "{}"
|
||||
|
||||
doc_class conclusion = text_section +
|
||||
establish :: "(claim \<times> result) set"
|
||||
invariant establish_defined :: "\<forall> x. x \<in> Domain (establish \<sigma>)
|
||||
\<longrightarrow> (\<exists> y \<in> Range (establish \<sigma>). (x, y) \<in> establish \<sigma>)"
|
||||
|
||||
text\<open>Next we define some instances (docitems): \<close>
|
||||
|
||||
declare[[invariants_checking_with_tactics = true]]
|
||||
|
||||
text*[church1::Author, email="\<open>church@lambda.org\<close>"]\<open>\<close>
|
||||
|
||||
text*[resultProof::result, evidence = "proof", property="[@{thm \<open>HOL.refl\<close>}]"]\<open>\<close>
|
||||
text*[resultArgument::result, evidence = "argument"]\<open>\<close>
|
||||
|
||||
text\<open>The invariants \<^theory_text>\<open>author_finite\<close> and \<^theory_text>\<open>establish_defined\<close> can not be checked directly
|
||||
and need a little help.
|
||||
We can set the \<open>invariants_checking_with_tactics\<close> theory attribute to help the checking.
|
||||
It will enable a basic tactic, using unfold and auto:\<close>
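As an aside, the mechanism just described can be illustrated with a minimal sketch. The class and instance names here are hypothetical (they are not part of this commit); the sketch only shows the shape of an invariant that the unfold/auto tactic can discharge once the attribute is enabled:

```isabelle
(* Hypothetical sketch: with invariants_checking_with_tactics enabled,
   the basic unfold/auto tactic discharges the finiteness invariant
   when the instance below is created. *)
doc_class sketch_cls =
  items :: "int set" <= "{}"
  invariant items_finite :: "finite (items \<sigma>)"

declare [[invariants_checking_with_tactics = true]]
text*[sketch_inst::sketch_cls, items = "{1, 2}"]\<open>\<close>
declare [[invariants_checking_with_tactics = false]]
```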

declare[[invariants_checking_with_tactics = true]]

text*[curry::Author, email="\<open>curry@lambda.org\<close>"]\<open>\<close>
text*[introduction2::Introduction, authored_by = "{@{Author \<open>church\<close>}}", level = "Some 2"]\<open>\<close>
(* When not commented, should violate the invariant:
update_instance*[introduction2::Introduction
, authored_by := "{@{Author \<open>church\<close>}}"
, level := "Some 1"]
*)

text*[introduction_test_parsed_elaborate::Introduction, authored_by = "authored_by @{Introduction \<open>introduction2\<close>}", level = "Some 2"]\<open>\<close>
term*\<open>authored_by @{Introduction \<open>introduction_test_parsed_elaborate\<close>}\<close>
value*\<open>authored_by @{Introduction \<open>introduction_test_parsed_elaborate\<close>}\<close>
text*[introduction3::Introduction, authored_by = "{@{Author \<open>church\<close>}}", level = "Some 2"]\<open>\<close>
text*[introduction4::Introduction, authored_by = "{@{Author \<open>curry\<close>}}", level = "Some 4"]\<open>\<close>

text*[resultProof2::result, evidence = "proof", property="[@{thm \<open>HOL.sym\<close>}]"]\<open>\<close>

text\<open>Then we can evaluate expressions with instances:\<close>

term*\<open>authored_by @{Introduction \<open>introduction2\<close>} = authored_by @{Introduction \<open>introduction3\<close>}\<close>
value*\<open>authored_by @{Introduction \<open>introduction2\<close>} = authored_by @{Introduction \<open>introduction3\<close>}\<close>
value*\<open>authored_by @{Introduction \<open>introduction2\<close>} = authored_by @{Introduction \<open>introduction4\<close>}\<close>

value*\<open>@{Introduction \<open>introduction2\<close>}\<close>

value*\<open>{@{Author \<open>curry\<close>}} = {@{Author \<open>church\<close>}}\<close>

term*\<open>property @{result \<open>resultProof\<close>} = property @{result \<open>resultProof2\<close>}\<close>
value*\<open>property @{result \<open>resultProof\<close>} = property @{result \<open>resultProof2\<close>}\<close>

value*\<open>evidence @{result \<open>resultProof\<close>} = evidence @{result \<open>resultProof2\<close>}\<close>

declare[[invariants_checking_with_tactics = false]]

declare[[invariants_strict_checking = false]]

doc_class test_A =
level :: "int option"
x :: int

doc_class test_B =
level :: "int option"
x :: "string" (* attributes live in their own name-space *)
y :: "string list" <= "[]" (* and can have arbitrary type constructors *)
(* LaTeX may have problems with this, though *)

text\<open>We may even use type-synonyms for class synonyms ...\<close>
type_synonym test_XX = test_B

doc_class test_C0 = test_B +
z :: "test_A option" <= None (* A LINK, i.e. an attribute that has a type
referring to a document class. Mathematical
relations over document items can be modeled. *)
g :: "thm" (* a reference to the proxy-type 'thm' allowing

to denote references to theorems inside attributes *)


doc_class test_C = test_B +
z :: "test_A option" <= None (* A LINK, i.e. an attribute that has a type
referring to a document class. Mathematical
relations over document items can be modeled. *)
g :: "thm" (* a reference to the proxy-type 'thm' allowing

to denote references to theorems inside attributes *)

datatype enum = X1 | X2 | X3 (* we add an enumeration type ... *)


doc_class test_D = test_B +
x :: "string" <= "\<open>def \<longrightarrow>\<close>" (* overriding default *)
a1 :: enum <= "X2" (* class - definitions may be mixed
with arbitrary HOL-commands, thus
also local definitions of enumerations *)
a2 :: int <= 0

doc_class test_E = test_D +
x :: "string" <= "''qed''" (* overriding default *)

doc_class test_G = test_C +
g :: "thm" <= "@{thm \<open>HOL.refl\<close>}" (* warning overriding attribute expected*)

doc_class 'a test_F =
properties :: "term list"
r :: "thm list"
u :: "file"
s :: "typ list"
b :: "(test_A \<times> 'a test_C_scheme) set" <= "{}" (* This is a relation link, roughly corresponding
to an association class. It can be used to track
claims to result - relations, for example.*)
b' :: "(test_A \<times> 'a test_C_scheme) list" <= "[]"
invariant br :: "r \<sigma> \<noteq> [] \<and> card(b \<sigma>) \<ge> 3"
and br':: "r \<sigma> \<noteq> [] \<and> length(b' \<sigma>) \<ge> 3"
and cr :: "properties \<sigma> \<noteq> []"

lemma*[l::test_E] local_sample_lemma :
"@{thm \<open>refl\<close>} = @{thm ''refl''}" by simp
\<comment> \<open>un-evaluated references are similar to
uninterpreted constants. Not much is known
about them, but that doesn't mean that we
can't prove some basics over them...\<close>

text*[xcv1::test_A, x=5]\<open>Lorem ipsum ...\<close>
text*[xcv2::test_C, g="@{thm ''HOL.refl''}"]\<open>Lorem ipsum ...\<close>
text*[xcv3::test_A, x=7]\<open>Lorem ipsum ...\<close>

text\<open>Bug: For now, the implementation is no longer compatible with the docitem term-antiquotation:\<close>
text-assert-error[xcv10::"unit test_F", r="[@{thm ''HOL.refl''},
@{thm \<open>local_sample_lemma\<close>}]", (* long names required *)
b="{(@{docitem ''xcv1''},@{docitem \<open>xcv2\<close>})}", (* notations \<open>...\<close> vs. ''...'' *)
s="[@{typ \<open>int list\<close>}]",
properties = "[@{term \<open>H \<longrightarrow> H\<close>}]" (* notation \<open>...\<close> required for UTF8*)
]\<open>Lorem ipsum ...\<close>\<open>Type unification failed\<close>

text*[xcv11::"unit test_F", r="[@{thm ''HOL.refl''},
@{thm \<open>local_sample_lemma\<close>}]", (* long names required *)
b="{(@{test_A ''xcv1''},@{test_C \<open>xcv2\<close>})}", (* notations \<open>...\<close> vs. ''...'' *)
s="[@{typ \<open>int list\<close>}]",
properties = "[@{term \<open>H \<longrightarrow> H\<close>}]" (* notation \<open>...\<close> required for UTF8*)
]\<open>Lorem ipsum ...\<close>

value*\<open>b @{test_F \<open>xcv11\<close>}\<close>

typ\<open>unit test_F\<close>

text*[xcv4::"unit test_F", r="[@{thm ''HOL.refl''},
@{thm \<open>local_sample_lemma\<close>}]", (* long names required *)
b="{(@{test_A ''xcv1''},@{test_C \<open>xcv2\<close>})}", (* notations \<open>...\<close> vs. ''...'' *)
s="[@{typ \<open>int list\<close>}]",
properties = "[@{term \<open>H \<longrightarrow> H\<close>}]" (* notation \<open>...\<close> required for UTF8*)
]\<open>Lorem ipsum ...\<close>

value*\<open>b @{test_F \<open>xcv4\<close>}\<close>

text*[xcv5::test_G, g="@{thm \<open>HOL.sym\<close>}"]\<open>Lorem ipsum ...\<close>

update_instance*[xcv4::"unit test_F", b+="{(@{test_A ''xcv3''},@{test_C ''xcv2''})}"]

update_instance-assert-error[xcv4::"unit test_F", b+="{(@{test_A ''xcv3''},@{test_G ''xcv5''})}"]
\<open>Type unification failed: Clash of types\<close>




typ\<open>unit test_G_ext\<close>
typ\<open>\<lparr>test_G.tag_attribute :: int\<rparr>\<close>
text*[xcv6::"\<lparr>test_G.tag_attribute :: int\<rparr> test_F", b="{(@{test_A ''xcv3''},@{test_G ''xcv5''})}"]\<open>\<close>


text\<open>\<open>lemma*\<close>, etc. do not yet support term antiquotations for polymorphic classes well.
For now, only a term antiquotation embedded in a definition works:\<close>
definition* testtest_level where "testtest_level \<equiv> the (text_section.level @{test2 \<open>testtest2''\<close>})"
lemma*[e5::E] testtest : "xx + testtest_level = yy + testtest_level \<Longrightarrow> xx = yy" by simp

text\<open>Indeed this fails:\<close>
(*lemma*[e6::E] testtest : "xx + the (level @{test2 \<open>testtest2''\<close>}) = yy + the (level @{test2 \<open>testtest2''\<close>}) \<Longrightarrow> xx = yy" by simp*)

end
@@ -23,7 +23,7 @@
\documentclass{llncs}
\usepackage{DOF-core}
\bibliographystyle{splncs04}

\title{No Title Given}
\usepackage{hyperref}
\setcounter{tocdepth}{3}
\hypersetup{%

@@ -22,6 +22,7 @@
\RequirePackage{ifvtex}
\documentclass[abstract=true,fontsize=11pt,DIV=12,paper=a4]{scrartcl}

\title{No Title Given}
\usepackage{DOF-core}

\usepackage{textcomp}

@@ -24,6 +24,7 @@
\RequirePackage{ifvtex}
\documentclass[fontsize=11pt,paper=a4,open=right,twoside,abstract=true]{scrreprt}

\title{No Title Given}

\usepackage{textcomp}
\bibliographystyle{abbrvnat}

@@ -21,6 +21,7 @@

\RequirePackage{ifvtex}
\documentclass[fontsize=11pt,paper=a4,open=right,twoside,abstract=true]{scrreprt}
\title{No Title Given}

\usepackage{DOF-core}

@@ -156,17 +156,17 @@
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% begin: label and ref
\newkeycommand\isaDof@label[label=,type=][1]{\label{#1}}
\def\isaDofDOTlabel{\isaDof@label}
\newcommand{\isaDofDOTlabel}{\isaDof@label}
\newkeycommand\isaDof@ref[label=,type=][1]{\autoref{#1}}
\def\isaDofDOTref{\isaDof@ref}
\newcommand{\isaDofDOTref}{\isaDof@ref}
\newkeycommand\isaDof@macro[label=,type=][1]{MMM \label{#1}} %% place_holder
\def\isaDofDOTmacroDef{\iisaDof@macro}
\newcommand{\isaDofDOTmacroDef}{\iisaDof@macro}
\newkeycommand\isaDof@macroExp[label=,type=][1]{MMM \autoref{#1}} %% place_holder
\def\isaDofDOTmacroExp{\isaDof@macroExp}
\newcommand{\isaDofDOTmacroExp}{\isaDof@macroExp}
% end: label and ref
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\title{No Title Given}
%\title{No Title Given}
\input{ontologies}
\IfFileExists{preamble.tex}{\input{preamble.tex}}{}%
@@ -165,11 +165,11 @@ text\<open>The intended use for the \<open>doc_class\<close>es \<^verbatim>\<ope
\<^verbatim>\<open>math_example\<close> (or \<^verbatim>\<open>math_ex\<close> for short)
are \<^emph>\<open>informal\<close> descriptions of semi-formal definitions (by inheritance).
Math-Examples can be made referentiable triggering explicit, numbered presentations.\<close>
doc_class math_motivation = tc +
doc_class math_motivation = technical +
referentiable :: bool <= False
type_synonym math_mtv = math_motivation

doc_class math_explanation = tc +
doc_class math_explanation = technical +
referentiable :: bool <= False
type_synonym math_exp = math_explanation

@@ -207,7 +207,7 @@ datatype math_content_class =
text\<open>Instances of the \<open>doc_class\<close> \<^verbatim>\<open>math_content\<close> are by definition @{term "semiformal"}; they may
be non-referential, but in this case they will not have a @{term "short_name"}.\<close>

doc_class math_content = tc +
doc_class math_content = technical +
referentiable :: bool <= False
short_name :: string <= "''''"
status :: status <= "semiformal"

@@ -516,34 +516,34 @@ subsection\<open>Content in Engineering/Tech Papers \<close>
text\<open>This section is currently experimental and not supported by the documentation
generation backend.\<close>

doc_class engineering_content = tc +
doc_class engineering_content = technical +
short_name :: string <= "''''"
status :: status
type_synonym eng_content = engineering_content


doc_class "experiment" = eng_content +
doc_class "experiment" = engineering_content +
tag :: "string" <= "''''"

doc_class "evaluation" = eng_content +
doc_class "evaluation" = engineering_content +
tag :: "string" <= "''''"

doc_class "data" = eng_content +
doc_class "data" = engineering_content +
tag :: "string" <= "''''"

doc_class tech_definition = eng_content +
doc_class tech_definition = engineering_content +
referentiable :: bool <= True
tag :: "string" <= "''''"

doc_class tech_code = eng_content +
doc_class tech_code = engineering_content +
referentiable :: bool <= True
tag :: "string" <= "''''"

doc_class tech_example = eng_content +
doc_class tech_example = engineering_content +
referentiable :: bool <= True
tag :: "string" <= "''''"

doc_class eng_example = eng_content +
doc_class eng_example = engineering_content +
referentiable :: bool <= True
tag :: "string" <= "''''"
@@ -39,10 +39,10 @@ import isabelle._
object DOF {
/** parameters **/

val isabelle_version = "2022"
val isabelle_url = "https://isabelle.in.tum.de/website-Isabelle2022"
val isabelle_version = "2023"
val isabelle_url = "https://isabelle.in.tum.de/website-Isabelle2023"

val afp_version = "afp-2022-10-27"
val afp_version = "afp-2023-09-13"

// Isabelle/DOF version: "Unreleased" for development, semantic version for releases
val version = "Unreleased"

@@ -55,7 +55,7 @@ object DOF {
val generic_doi = "10.5281/zenodo.3370482"

// Isabelle/DOF source repository
val url = "https://git.logicalhacking.com/Isabelle_DOF/Isabelle_DOF"
val url = "https://git.logicalhacking.com/Isabelle_DOF/Isabelle_DOF/src/branch/Isabelle_dev"

// Isabelle/DOF release artifacts
val artifact_dir = "releases/Isabelle_DOF/Isabelle_DOF"
@@ -42,7 +42,7 @@ object DOF_Document_Build
def the_document_entry(context: Document_Build.Context, name: String): Export.Entry = {
val entries =
for {
node_name <- context.document_theories
node_name <- context.all_document_theories
entry <- context.session_context.get(node_name.theory, name)
} yield entry

@@ -60,11 +60,12 @@ object DOF_Document_Build
override def prepare_directory(
context: Document_Build.Context,
dir: Path,
doc: Document_Build.Document_Variant): Document_Build.Directory =
doc: Document_Build.Document_Variant,
verbose: Boolean): Document_Build.Directory =
{
val options = DOF.options(context.options)
val latex_output = new Latex_Output(options)
val directory = context.prepare_directory(dir, doc, latex_output)
val directory = context.prepare_directory(dir, doc, latex_output, verbose)

val isabelle_dof_dir = context.session_context.sessions_structure(DOF.session).dir
@@ -429,11 +429,11 @@ fun convert_src_from_margs ctxt (X, (((str,_),value)::R)) =
fun float_command (name, pos) descr cid =
let fun set_default_class NONE = SOME(cid,pos)
|set_default_class (SOME X) = SOME X
fun create_instance ((((oid, pos),cid_pos), doc_attrs) : ODL_Meta_Args_Parser.meta_args_t) =
fun create_instance (((binding,cid_pos), doc_attrs) : ODL_Meta_Args_Parser.meta_args_t) =
Value_Command.Docitem_Parser.create_and_check_docitem
{is_monitor = false}
{is_inline = true}
{define = true} oid pos (set_default_class cid_pos) doc_attrs
{define = true} binding (set_default_class cid_pos) doc_attrs
fun generate_fig_ltx_ctxt ctxt cap_src oid body =
Latex.macro0 "centering"
@ body

@@ -441,25 +441,31 @@ fun float_command (name, pos) descr cid =
@ Latex.macro "label" (DOF_core.get_instance_name_global oid (Proof_Context.theory_of ctxt)
|> DOF_core.output_name
|> Latex.string)
fun parse_and_tex (margs as (((oid, _),_), _), cap_src) ctxt =
(convert_src_from_margs ctxt margs)
|> pair (upd_caption (K Input.empty) #> convert_meta_args ctxt margs)
|> fig_content ctxt
|> generate_fig_ltx_ctxt ctxt cap_src oid
|> (Latex.environment ("figure") )
fun parse_and_tex (margs as ((binding,_), _), cap_src) ctxt =
let val oid = Binding.name_of binding
in
(convert_src_from_margs ctxt margs)
|> pair (upd_caption (K Input.empty) #> convert_meta_args ctxt margs)
|> fig_content ctxt
|> generate_fig_ltx_ctxt ctxt cap_src oid
|> (Latex.environment ("figure") )
end
in Monitor_Command_Parser.onto_macro_cmd_command (name, pos) descr create_instance parse_and_tex
end

fun listing_command (name, pos) descr cid =
let fun set_default_class NONE = SOME(cid,pos)
|set_default_class (SOME X) = SOME X
fun create_instance ((((oid, pos),cid_pos), doc_attrs) : ODL_Meta_Args_Parser.meta_args_t) =
fun create_instance (((binding,cid_pos), doc_attrs) : ODL_Meta_Args_Parser.meta_args_t) =
Value_Command.Docitem_Parser.create_and_check_docitem
{is_monitor = false}
{is_inline = true}
{define = true} oid pos (set_default_class cid_pos) doc_attrs
fun parse_and_tex (margs as (((_, pos),_), _), _) _ =
ISA_core.err ("Not yet implemented.\n Please use text*[oid::listing]\<open>\<close> instead.") pos
{define = true} binding (set_default_class cid_pos) doc_attrs
fun parse_and_tex (margs as ((binding,_), _), _) _ =
let val pos = Binding.pos_of binding
in
ISA_core.err ("Not yet implemented.\n Please use text*[oid::listing]\<open>\<close> instead.") pos
end
in Monitor_Command_Parser.onto_macro_cmd_command (name, pos) descr create_instance parse_and_tex
end
@@ -798,14 +804,14 @@ fun upd_block f =
fun upd_block_title f =
upd_block (fn title => f title)

val unenclose_end = unenclose
val unenclose_string = unenclose o unenclose o unenclose_end
val unenclose_string = unenclose o unenclose

fun read_string s =
let val symbols = unenclose_end s |> Symbol_Pos.explode0
let val s' = DOF_core.markup2string s
val symbols = s' |> Symbol_Pos.explode0
in if hd symbols |> fst |> equal Symbol.open_
then Token.read_cartouche symbols |> Token.input_of
else unenclose_string s |> Syntax.read_input
else unenclose_string s' |> Syntax.read_input
end

val block_titleN = "title"
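For context on the hunk above: Isabelle/ML's `unenclose` drops the first and last character of a string, so the revised `unenclose_string` strips two layers of quoting instead of three. A standalone SML sketch of that behaviour (the string literals are purely illustrative):

```sml
(* unenclose removes one enclosing layer: the first and last character *)
fun unenclose str = String.substring (str, 1, String.size str - 2);

(* the revised composition strips exactly two layers *)
val unenclose_string = unenclose o unenclose;

(* e.g. unenclose_string "{\"abc\"}" evaluates to "abc" *)
```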
@@ -858,11 +864,11 @@ fun convert_meta_args ctxt (X, (((str,_),value) :: R)) =
fun frame_command (name, pos) descr cid =
let fun set_default_class NONE = SOME(cid,pos)
|set_default_class (SOME X) = SOME X
fun create_instance ((((oid, pos),cid_pos), doc_attrs) : ODL_Meta_Args_Parser.meta_args_t) =
fun create_instance (((binding,cid_pos), doc_attrs) : ODL_Meta_Args_Parser.meta_args_t) =
Value_Command.Docitem_Parser.create_and_check_docitem
{is_monitor = false}
{is_inline = true}
{define = true} oid pos (set_default_class cid_pos) doc_attrs
{define = true} binding (set_default_class cid_pos) doc_attrs
fun titles_src ctxt frametitle framesubtitle src =
Latex.string "{"
@ Document_Output.output_document ctxt {markdown = false} frametitle

File diff suppressed because it is too large
@@ -208,13 +208,13 @@ text\<open>
in many features over-accomplishes the required features of \<^dof>.
\<close>

figure*["fig:dof-ide",relative_width="95",file_src="''figures/cicm2018-combined.png''"]\<open>
figure*["fig_dof_ide",relative_width="95",file_src="''figures/cicm2018-combined.png''"]\<open>
The \<^isadof> IDE (left) and the corresponding PDF (right), showing the first page
of~@{cite "brucker.ea:isabelle-ontologies:2018"}.\<close>

text\<open>
We call the present implementation of \<^dof> on the Isabelle platform \<^isadof>.
@{figure "fig:dof-ide"} shows a screen-shot of an introductory paper on
@{figure "fig_dof_ide"} shows a screen-shot of an introductory paper on
\<^isadof>~@{cite "brucker.ea:isabelle-ontologies:2018"}: the \<^isadof> PIDE can be seen on the left,
while the generated presentation in PDF is shown on the right.

@@ -477,7 +477,7 @@ on the level of generated \<^verbatim>\<open>.aux\<close>-files, which are not n
error-message and compiling it with a consistent bibtex usually makes this behavior disappear.
\<close>

subsection*["using-term-aq"::technical, main_author = "Some @{author ''bu''}"]
subsection*["using_term_aq"::technical, main_author = "Some @{author ''bu''}"]
\<open>Using Term-Antiquotations\<close>

text\<open>The present version of \<^isadof> is the first version that supports the novel feature of
@ -577,11 +577,11 @@ term antiquotations:
|
|||
\<close>
|
||||
|
||||
(*<*)
|
||||
declare_reference*["subsec:onto-term-ctxt"::technical]
|
||||
declare_reference*["subsec_onto_term_ctxt"::technical]
|
||||
(*>*)
|
||||
|
||||
text\<open>They are text-contexts equivalents to the \<^theory_text>\<open>term*\<close> and \<^theory_text>\<open>value*\<close> commands
|
||||
for term-contexts introduced in @{technical (unchecked) \<open>subsec:onto-term-ctxt\<close>}\<close>
|
||||
for term-contexts introduced in @{technical (unchecked) \<open>subsec_onto_term_ctxt\<close>}\<close>
|
||||
|
||||
subsection\<open>A Technical Report with Tight Checking\<close>
|
||||
text\<open>An example of tight checking is a small programming manual to document programming trick
|
||||
|
|
|
@@ -164,7 +164,7 @@ text\<open>
 to and between ontological concepts.
 \<close>
 
-subsection*["odl-manual0"::technical]\<open>Some Isabelle/HOL Specification Constructs Revisited\<close>
+subsection*["odl_manual0"::technical]\<open>Some Isabelle/HOL Specification Constructs Revisited\<close>
 text\<open>
 As ODL is an extension of Isabelle/HOL, document class definitions can therefore be arbitrarily
 mixed with standard HOL specification constructs. To make this manual self-contained, we present
@@ -231,7 +231,7 @@ corresponding type-name \<^boxed_theory_text>\<open>0.foo\<close> is not. For th
 definition of a \<^boxed_theory_text>\<open>doc_class\<close> reject problematic lexical overlaps.\<close>
 
 
-subsection*["odl-manual1"::technical]\<open>Defining Document Classes\<close>
+subsection*["odl_manual1"::technical]\<open>Defining Document Classes\<close>
 text\<open>
 A document class\<^bindex>\<open>document class\<close> can be defined using the @{command "doc_class"} keyword:
 \<^item> \<open>class_id\<close>:\<^bindex>\<open>class\_id@\<open>class_id\<close>\<close> a type-\<open>name\<close> that has been introduced
@@ -350,7 +350,7 @@ layout; these commands have to be wrapped into
 text\<open>
 
 \<^item> \<open>obj_id\<close>:\<^index>\<open>obj\_id@\<open>obj_id\<close>\<close> (or \<^emph>\<open>oid\<close>\<^index>\<open>oid!oid@\<open>see obj_id\<close>\<close> for short) a \<^emph>\<open>name\<close>
-  as specified in @{technical \<open>odl-manual0\<close>}.
+  as specified in @{technical \<open>odl_manual0\<close>}.
 \<^item> \<open>meta_args\<close> :
   \<^rail>\<open>obj_id ('::' class_id) ((',' attribute '=' HOL_term) *) \<close>
 \<^item> \<^emph>\<open>evaluator\<close>: from @{cite "wenzel:isabelle-isar:2020"}, evaluation is tried first using ML,
@@ -465,16 +465,16 @@ text*[b::B'_test']\<open>\<close>
 
 term*\<open>@{B'_test' \<open>b\<close>}\<close>
 
-declare_reference*["text-elements-expls"::technical]
+declare_reference*["text_elements_expls"::technical]
 (*>*)
 
-subsection*["subsec:onto-term-ctxt"::technical]\<open>Ontological Term-Contexts and their Management\<close>
+subsection*["subsec_onto_term_ctxt"::technical]\<open>Ontological Term-Contexts and their Management\<close>
 text\<open>
 \<^item> \<open>annotated_term_element\<close>
 \<^rail>\<open>
   (@@{command "term*"} ('[' meta_args ']')? '\<open>' HOL_term '\<close>'
   | (@@{command "value*"}
-    | @@{command "assert*"}) \<newline> ('[' meta_args ']')? ('[' evaluator ']')? '\<open>' HOL_term '\<close>'
+    | @@{command "assert*"}) \<newline> ('[' evaluator ']')? ('[' meta_args ']')? '\<open>' HOL_term '\<close>'
   | (@@{command "definition*"}) ('[' meta_args ']')?
     ('... see ref manual')
   | (@@{command "lemma*"} | @@{command "theorem*"} | @@{command "corollary*"}
@@ -503,9 +503,9 @@ for example). With the exception of the @{command "term*"}-command, the term-ant
 This expansion happens \<^emph>\<open>before\<close> evaluation of the term, thus permitting
 executable HOL-functions to interact with meta-objects.
 The @{command "assert*"}-command allows for logical statements to be checked in the global context
-(see @{technical (unchecked) \<open>text-elements-expls\<close>}).
+(see @{technical (unchecked) \<open>text_elements_expls\<close>}).
 % TODO:
-% Section reference @{docitem (unchecked) \<open>text-elements-expls\<close>} has not the right number
+% Section reference @{docitem (unchecked) \<open>text_elements_expls\<close>} has not the right number
 This is particularly useful to explore formal definitions wrt. their border cases.
 For @{command "assert*"}, the evaluation of the term can be disabled
 with the \<^boxed_theory_text>\<open>disable_assert_evaluation\<close> theory attribute:
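[Editorial note, not part of this commit: the hunk above describes the @{command "assert*"}-command, which evaluates a HOL proposition against the current theory and meta-data. A hedged sketch of how such a border-case check might look (the concrete proposition is illustrative only):

```isabelle
(* Hypothetical sketch: assert* fails the document build if the
   proposition does not evaluate to True, making it suitable for
   checking border cases of formal definitions. *)
assert*\<open>last [3] = (3::int)\<close>
```

Here the evaluation can be switched off via the \<open>disable_assert_evaluation\<close> theory attribute mentioned above, in which case the statement is only type-checked.]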
@@ -558,7 +558,7 @@ of this meta-object. The latter leads to a failure of the entire command.
 \<close>
 
 (*<*)
-declare_reference*["sec:advanced"::technical]
+declare_reference*["sec_advanced"::technical]
 (*>*)
 
 subsection\<open>Status and Query Commands\<close>
@@ -586,7 +586,7 @@ text\<open>
   The raw term will be available in the \<open>input_term\<close> field of \<^theory_text>\<open>print_doc_items\<close> output and,
 \<^item> \<^theory_text>\<open>check_doc_global\<close> checks if all declared object references have been
   defined, all monitors are in a final state, and checks the final invariant
-  on all objects (cf. @{technical (unchecked) \<open>sec:advanced\<close>})
+  on all objects (cf. @{technical (unchecked) \<open>sec_advanced\<close>})
 \<close>
 
 subsection\<open>Macros\<close>
@@ -738,7 +738,7 @@ text\<open>The command syntax follows the implicit convention to add a ``*''
 to distinguish them from the (similar) standard Isabelle text-commands
 which are not ontology-aware.\<close>
 
-subsection*["text-elements"::technical]\<open>The Ontology \<^verbatim>\<open>scholarly_paper\<close>\<close>
+subsection*["text_elements"::technical]\<open>The Ontology \<^verbatim>\<open>scholarly_paper\<close>\<close>
 (*<*)
 ML\<open>val toLaTeX = String.translate (fn c => if c = #"_" then "\\_" else String.implode[c])\<close>
 ML\<open>writeln (DOF_core.print_doc_class_tree
@@ -821,9 +821,9 @@ or
 \<open>text*[\<dots>::example, main_author = "Some(@{author \<open>bu\<close>})"] \<open> \<dots> \<close>\<close>}
 
 where \<^boxed_theory_text>\<open>"''bu''"\<close> is a string presentation of the reference to the author
-text element (see below in @{docitem (unchecked) \<open>text-elements-expls\<close>}).
+text element (see below in @{docitem (unchecked) \<open>text_elements_expls\<close>}).
 % TODO:
-% Section reference @{docitem (unchecked) \<open>text-elements-expls\<close>} has not the right number
+% Section reference @{docitem (unchecked) \<open>text_elements_expls\<close>} has not the right number
 \<close>
 
 text\<open>Some of these concepts were supported as command-abbreviations leading to the extension
@@ -866,7 +866,7 @@ of Isabelle is its ability to handle both, and to establish links between both w
 Therefore, the formal assertion command has been integrated to capture some form of formal content.\<close>
 
 
-subsubsection*["text-elements-expls"::example]\<open>Examples\<close>
+subsubsection*["text_elements_expls"::example]\<open>Examples\<close>
 
 text\<open>
 While the default user interface for class definitions via the
@@ -1018,9 +1018,41 @@ schemata:
 
 
 
-section*["sec:advanced"::technical]\<open>Advanced ODL Concepts\<close>
+section*["sec_advanced"::technical]\<open>Advanced ODL Concepts\<close>
+(*<*)
+doc_class title =
+   short_title :: "string option" <= "None"
+doc_class author =
+   email :: "string" <= "''''"
+datatype classification = SIL0 | SIL1 | SIL2 | SIL3 | SIL4
+doc_class abstract =
+   keywordlist :: "string list" <= "[]"
+   safety_level :: "classification" <= "SIL3"
+doc_class text_section =
+   authored_by :: "author set" <= "{}"
+   level :: "int option" <= "None"
+type_synonym notion = string
+doc_class introduction = text_section +
+   authored_by :: "author set" <= "UNIV"
+   uses :: "notion set"
+doc_class claim = introduction +
+   based_on :: "notion list"
+doc_class technical = text_section +
+   formal_results :: "thm list"
+doc_class "definition" = technical +
+   is_formal :: "bool"
+   property :: "term list" <= "[]"
+datatype kind = expert_opinion | argument | "proof"
+doc_class result = technical +
+   evidence :: kind
+   property :: "thm list" <= "[]"
+doc_class example = technical +
+   referring_to :: "(notion + definition) set" <= "{}"
+doc_class "conclusion" = text_section +
+   establish :: "(claim \<times> result) set"
+(*>*)
 
-subsection*["sec:example"::technical]\<open>Example\<close>
+subsection*["sec_example"::technical]\<open>Example\<close>
 text\<open>We assume in this section the following local ontology:
 
 @{boxed_theory_text [display]\<open>
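[Editorial note, not part of this commit: the local ontology added in the hunk above can be instantiated with ontology-aware text elements. A hedged sketch, where the instance names \<open>intro1\<close> and \<open>res1\<close> are hypothetical:

```isabelle
(* Hypothetical instances of the classes defined above: an introduction
   authored by "bu" at section level 1, and a result whose evidence is
   a formal proof. Attribute values are HOL terms and are type-checked. *)
text*[intro1::introduction, authored_by = "{@{author \<open>bu\<close>}}", level = "Some 1"]
\<open>This introduction uses the notions it declares.\<close>
text*[res1::result, evidence = "proof"]
\<open>This result is backed by formal evidence.\<close>
```

Each instance becomes a typed meta-object that later invariants, monitors, and queries can inspect.]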
@@ -1089,11 +1121,11 @@ text\<open>
 \<close>
 
 (*<*)
-declare_reference*["sec:monitors"::technical]
-declare_reference*["sec:low_level_inv"::technical]
+declare_reference*["sec_monitors"::technical]
+declare_reference*["sec_low_level_inv"::technical]
 (*>*)
 
-subsection*["sec:class_inv"::technical]\<open>ODL Class Invariants\<close>
+subsection*["sec_class_inv"::technical]\<open>ODL Class Invariants\<close>
 
 text\<open>
 Ontological classes as described so far are too liberal in many situations.
@@ -1144,7 +1176,7 @@ text\<open>
 Hence, the \<^boxed_theory_text>\<open>inv1\<close> invariant is checked
 when the instance \<^boxed_theory_text>\<open>testinv2\<close> is defined.
 
-Now let's add some invariants to our example in \<^technical>\<open>sec:example\<close>.
+Now let's add some invariants to our example in \<^technical>\<open>sec_example\<close>.
 For example, one
 would like to express that any instance of a \<^boxed_theory_text>\<open>result\<close> class finally has
 a non-empty property list, if its \<^boxed_theory_text>\<open>kind\<close> is \<^boxed_theory_text>\<open>proof\<close>, or that
@@ -1178,22 +1210,22 @@ text\<open>
 declare[[invariants_checking_with_tactics = true]]\<close>}
 There are still some limitations with this high-level syntax.
 For now, the high-level syntax does not support the checking of
-specific monitor behaviors (see @{technical (unchecked) "sec:monitors"}).
+specific monitor behaviors (see @{technical (unchecked) "sec_monitors"}).
 For example, one would like to delay a final error message till the
 closing of a monitor.
 For this use-case you can use low-level class invariants
-(see @{technical (unchecked) "sec:low_level_inv"}).
+(see @{technical (unchecked) "sec_low_level_inv"}).
 Also, for now, term-antiquotations can not be used in an invariant formula.
 \<close>
 
 
-subsection*["sec:low_level_inv"::technical]\<open>ODL Low-level Class Invariants\<close>
+subsection*["sec_low_level_inv"::technical]\<open>ODL Low-level Class Invariants\<close>
 
 text\<open>
 If one want to go over the limitations of the actual high-level syntax of the invariant,
 one can define a function using SML.
 A formulation, in SML, of the class-invariant \<^boxed_theory_text>\<open>has_property\<close>
-in \<^technical>\<open>sec:class_inv\<close>, defined in the supposedly \<open>Low_Level_Syntax_Invariants\<close> theory
+in \<^technical>\<open>sec_class_inv\<close>, defined in the supposedly \<open>Low_Level_Syntax_Invariants\<close> theory
 (note the long name of the class),
 is straight-forward:
 
@@ -1222,7 +1254,7 @@ val _ = Theory.setup (DOF_core.make_ml_invariant (check_result_inv, cid_long)
 \<^boxed_theory_text>\<open>oid\<close> is bound to a variable here and can therefore not be statically expanded.
 \<close>
 
-subsection*["sec:monitors"::technical]\<open>ODL Monitors\<close>
+subsection*["sec_monitors"::technical]\<open>ODL Monitors\<close>
 text\<open>
 We call a document class with an \<open>accepts_clause\<close> a \<^emph>\<open>monitor\<close>.\<^bindex>\<open>monitor\<close> Syntactically, an
 \<open>accepts_clause\<close>\<^index>\<open>accepts\_clause@\<open>accepts_clause\<close>\<close> contains a regular expression over class identifiers.
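[Editorial note, not part of this commit: the monitor mechanism introduced above pairs a document class with a regular expression over class identifiers. A hedged sketch of what such an \<open>accepts_clause\<close> might look like for the example ontology, where the class name \<open>article_monitor\<close> is hypothetical:

```isabelle
(* Hypothetical monitor: documents opened under this class must present
   a title, one or more authors, an abstract, then introductions,
   technical sections, and conclusions, in that order. *)
doc_class article_monitor =
   accepts "title ~~ \<lbrace>author\<rbrace>\<^sup>+ ~~ abstract ~~
            \<lbrace>introduction\<rbrace>\<^sup>+ ~~ \<lbrace>technical\<rbrace>\<^sup>+ ~~ \<lbrace>conclusion\<rbrace>\<^sup>+"
```

While the monitor is open, each text element is matched against this expression and deviations are reported during document validation.]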
@@ -1291,18 +1323,24 @@ text\<open>
 sections.
 For now, the high-level syntax of invariants does not support the checking of
 specific monitor behaviors like the one just described and you must use
-the low-level class invariants (see @{technical "sec:low_level_inv"}).
+the low-level class invariants (see @{technical "sec_low_level_inv"}).
 
 Low-level invariants checking can be set up to be triggered
 when opening a monitor, when closing a monitor, or both
 by using the \<^ML>\<open>DOF_core.add_opening_ml_invariant\<close>,
 \<^ML>\<open>DOF_core.add_closing_ml_invariant\<close>, or \<^ML>\<open>DOF_core.add_ml_invariant\<close> commands
 respectively, to add the invariants to the theory context
-(See @{technical "sec:low_level_inv"} for an example).
+(See @{technical "sec_low_level_inv"} for an example).
 \<close>
 
+(*<*)
+value*\<open>map (result.property) @{instances_of \<open>result\<close>}\<close>
+value*\<open>map (text_section.authored_by) @{instances_of \<open>introduction\<close>}\<close>
+value*\<open>filter (\<lambda>\<sigma>. result.evidence \<sigma> = proof) @{instances_of \<open>result\<close>}\<close>
+value*\<open>filter (\<lambda>\<sigma>. the (text_section.level \<sigma>) > 1) @{instances_of \<open>introduction\<close>}\<close>
+(*>*)
 
-subsection*["sec:queries_on_instances"::technical]\<open>Queries On Instances\<close>
+subsection*["sec_queries_on_instances"::technical]\<open>Queries On Instances\<close>
 
 text\<open>
 Any class definition generates term antiquotations checking a class instance or
@@ -1315,19 +1353,18 @@ text\<open>
 or to get the list of the authors of the instances of \<open>introduction\<close>,
 it suffices to treat this meta-data as usual:
 @{theory_text [display,indent=5, margin=70] \<open>
-value*\<open>map (result.property) @{result-instances}\<close>
-value*\<open>map (text_section.authored_by) @{introduction-instances}\<close>
+value*\<open>map (result.property) @{instances_of \<open>result\<close>}\<close>
+value*\<open>map (text_section.authored_by) @{instances_of \<open>introduction\<close>}\<close>
 \<close>}
 In order to get the list of the instances of the class \<open>myresult\<close>
 whose \<open>evidence\<close> is a \<open>proof\<close>, one can use the command:
 @{theory_text [display,indent=5, margin=70] \<open>
-value*\<open>filter (\<lambda>\<sigma>. result.evidence \<sigma> = proof) @{result-instances}\<close>
+value*\<open>filter (\<lambda>\<sigma>. result.evidence \<sigma> = proof) @{instances_of \<open>result\<close>}\<close>
 \<close>}
 The list of the instances of the class \<open>introduction\<close> whose \<open>level\<close> > 1,
 can be filtered by:
 @{theory_text [display,indent=5, margin=70] \<open>
-value*\<open>filter (\<lambda>\<sigma>. the (text_section.level \<sigma>) > 1)
-             @{introduction-instances}\<close>
+value*\<open>filter (\<lambda>\<sigma>. the (text_section.level \<sigma>) > 1) @{instances_of \<open>introduction\<close>}\<close>
 \<close>}
 \<close>
@@ -1414,7 +1451,7 @@ text\<open>
 \<close>
 
 
-section*["document-templates"::technical]\<open>Defining Document Templates\<close>
+section*["document_templates"::technical]\<open>Defining Document Templates\<close>
 subsection\<open>The Core Template\<close>
 
 text\<open>
@@ -188,7 +188,7 @@ text\<open>
 
 section\<open>Programming Class Invariants\<close>
 text\<open>
-  See \<^technical>\<open>sec:low_level_inv\<close>.
+  See \<^technical>\<open>sec_low_level_inv\<close>.
 \<close>
 
 section\<open>Implementing Monitors\<close>
@@ -203,7 +203,7 @@ text\<open>
 val next : automaton -> env -> cid -> automaton\<close>}
 where \<^boxed_sml>\<open>env\<close> is basically a map between internal automaton states and class-id's
 (\<^boxed_sml>\<open>cid\<close>'s). An automaton is said to be \<^emph>\<open>enabled\<close> for a class-id,
-iff it either occurs in its accept-set or its reject-set (see @{docitem "sec:monitors"}). During
+iff it either occurs in its accept-set or its reject-set (see @{docitem "sec_monitors"}). During
 top-down document validation, whenever a text-element is encountered, it is checked if a monitor
 is \emph{enabled} for this class; in this case, the \<^boxed_sml>\<open>next\<close>-operation is executed. The
 transformed automaton recognizing the suffix is stored in \<^boxed_sml>\<open>docobj_tab\<close> if
@@ -228,7 +228,7 @@ text\<open>
 \expandafter\providekeycommand\csname isaDof.#1\endcsname}%\<close>}
 
 The \<^LaTeX>-generator of \<^isadof> maps each \<^boxed_theory_text>\<open>doc_item\<close> to an \<^LaTeX>-environment (recall
-@{docitem "text-elements"}). As generic \<^boxed_theory_text>\<open>doc_item\<close>s are derived from the text element,
+@{docitem "text_elements"}). As generic \<^boxed_theory_text>\<open>doc_item\<close>s are derived from the text element,
 the environment \inlineltx|isamarkuptext*| builds the core of \<^isadof>'s \<^LaTeX> implementation.
 
 \<close>
@@ -7,7 +7,7 @@ Isabelle/DOF allows for both conventional typesetting and formal development.
 
 Isabelle/DOF has two major prerequisites:
 
-* **Isabelle:** Isabelle/DOF requires [Isabelle](https://isabelle.in.tum.de/)
+* **Isabelle 2023:** Isabelle/DOF requires [Isabelle](https://isabelle.in.tum.de/)
   and several entries from the [Archive of Formal Proofs
   (AFP)](https://www.isa-afp.org/).
 * **LaTeX:** Isabelle/DOF requires a modern LaTeX installation, i.e., at least
@@ -5,9 +5,9 @@
 Isabelle/DOF has three major prerequisites:
 
 * **Isabelle:** Isabelle/DOF requires [Isabelle
-  2022](https://isabelle.in.tum.de/website-Isabelle2022/). Please download the
-  Isabelle 2022 distribution for your operating system from the [Isabelle
-  website](https://isabelle.in.tum.de/website-Isabelle2022/).
+  2023](https://isabelle.in.tum.de/website-Isabelle2023/). Please download the
+  Isabelle 2023 distribution for your operating system from the [Isabelle
+  website](https://isabelle.in.tum.de/website-Isabelle2023/).
 * **AFP:** Isabelle/DOF requires several entries from the [Archive of Formal Proofs
   (AFP)](https://www.isa-afp.org/).
 * **LaTeX:** Isabelle/DOF requires a modern LaTeX installation, i.e., at least