Merge branch 'main' of git.logicalhacking.com:Isabelle_DOF/Isabelle_DOF
ci/woodpecker/push/build Pipeline was successful
This commit is contained in:
commit b62b391410
@@ -160,16 +160,14 @@ Based on a novel adaption of the Isabelle IDE, a document is checked to be
changes, where the \<^emph>\<open>coherence\<close> between the formal and the informal parts of the
content can be mechanically checked.

To avoid any misunderstanding: \<^isadof> is \<^emph>\<open>not a theory in HOL\<close> on ontologies and operations
to track and trace links in texts, it is an \<^emph>\<open>environment to write structured text\<close> which
\<^emph>\<open>may contain\<close> \<^isabelle> definitions and proofs like mathematical articles, tech-reports and
scientific papers---as the present one, which is written in \<^isadof> itself. \<^isadof> is a plugin
into the Isabelle/Isar framework in the style of~@{cite "wenzel.ea:building:2007"}.
\<close>

(* declaring the forward references used in the subsequent sections *)
(*<*)
declare_reference*[bgrnd::text_section]
declare_reference*[isadof::text_section]
@@ -177,9 +175,9 @@ declare_reference*[ontomod::text_section]
declare_reference*[ontopide::text_section]
declare_reference*[conclusion::text_section]
(*>*)
text*[plan::introduction, level="Some 1"]\<open> The plan of the paper is as follows: we start by
introducing the underlying Isabelle system (@{text_section (unchecked) \<open>bgrnd\<close>}) followed by
presenting the essentials of \<^isadof> and its ontology language (@{text_section (unchecked) \<open>isadof\<close>}).
It follows @{text_section (unchecked) \<open>ontomod\<close>}, where we present three application
scenarios from the point of view of the ontology modeling. In @{text_section (unchecked) \<open>ontopide\<close>}
we discuss the user-interaction generated from the ontological definitions. Finally, we draw
@@ -188,18 +186,14 @@ conclusions and discuss related work in @{text_section (unchecked) \<open>conclu
section*[bgrnd::text_section,main_author="Some(@{docitem ''bu''}::author)"]
\<open> Background: The Isabelle System \<close>
text*[background::introduction, level="Some 1"]\<open>
While Isabelle is widely perceived as an interactive theorem prover for HOL
(Higher-order Logic)~@{cite "nipkow.ea:isabelle:2002"}, we would like to emphasize the view that
Isabelle is far more than that: it is the \<^emph>\<open>Eclipse of Formal Methods Tools\<close>. This refers to the
``\<^slanted_text>\<open>generic system framework of Isabelle/Isar underlying recent versions of Isabelle.
Among other things, Isar provides an infrastructure for Isabelle plug-ins, comprising extensible
state components and extensible syntax that can be bound to ML programs. Thus, the Isabelle/Isar
architecture may be understood as an extension and refinement of the traditional `LCF approach',
with explicit infrastructure for building derivative \<^emph>\<open>systems\<close>.\<close>''~@{cite "wenzel.ea:building:2007"}

The current system framework moreover offers the following features:
@@ -236,11 +230,8 @@ and will result in the corresponding output in generated \<^LaTeX> or HTML docum
Now, \<^emph>\<open>inside\<close> the textual content, it is possible to embed a \<^emph>\<open>text-antiquotation\<close>:
@{boxed_theory_text [display]\<open>
text\<open>According to the reflexivity axiom @{thm refl}, we obtain in \<Gamma>
for @{term "fac 5"} the result @{value "fac 5"}.\<close>
\<close>}

which is represented in the generated output by:
@{boxed_pdf [display]\<open>According to the reflexivity axiom $x = x$, we obtain in $\Gamma$ for $\operatorname{fac} 5$ the result $120$.\<close>}
@@ -270,7 +261,7 @@ three parts. Note that the document core \<^emph>\<open>may\<close>, but \<^emph
use Isabelle definitions or proofs for checking the formal content---the
present paper is actually an example of a document not containing any proof.

The document generation process of \<^isadof> is currently restricted to \<^LaTeX>, which means
that the layout is defined by a set of \<^LaTeX> style files. Several layout
definitions for one ontology are possible and pave the way for generating different \<^emph>\<open>views\<close> of
the same central document, addressing the needs of different purposes `
@@ -284,51 +275,46 @@ style-files (\<^verbatim>\<open>.sty\<close>-files). In the document core author
their source, but this limits the possibility of using different representation technologies,
\<^eg>, HTML, and increases the risk of arcane error-messages in generated \<^LaTeX>.

The \<^isadof> ontology specification language consists basically of a notation for document classes,
where the attributes are typed with HOL-types and can be instantiated by HOL-terms, \<^ie>,
the actual parsers and type-checkers of the Isabelle system are reused. This has the particular
advantage that \<^isadof> commands can be arbitrarily mixed with Isabelle/HOL commands providing the
machinery for type declarations and term specifications such as enumerations. In particular,
document class definitions provide:
\<^item> a HOL-type for each document class as well as inheritance,
\<^item> support for attributes with HOL-types and optional default values,
\<^item> support for overriding of attribute defaults but not overloading, and
\<^item> text-elements annotated with document classes; they are mutable
  instances of document classes.\<close>

text\<open>
Attributes referring to other ontological concepts are called \<^emph>\<open>links\<close>. The HOL-types inside the
document specification language support built-in types for Isabelle/HOL \<^theory_text>\<open>typ\<close>'s, \<^theory_text>\<open>term\<close>'s, and
\<^theory_text>\<open>thm\<close>'s reflecting Isabelle's internal types for these entities; when denoted in
HOL-terms to instantiate an attribute, for example, there is a specific syntax
(called \<^emph>\<open>inner syntax antiquotations\<close>) that is checked by \<^isadof> for consistency.

Document classes can have a \<^theory_text>\<open>where\<close> clause containing a regular expression over class names.
Classes with such a \<^theory_text>\<open>where\<close> clause are called \<^emph>\<open>monitor classes\<close>. While document classes and their
inheritance relation structure meta-data of text-elements in an object-oriented manner, monitor
classes enforce structural organization of documents via the language specified by the regular
expression, enforcing a sequence of text-elements that belong to the corresponding classes. \<^vs>\<open>-0.4cm\<close>\<close>
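text\<open>
To make the monitor-class idea concrete, here is a minimal sketch in the notation used in this
paper; the class name \<^theory_text>\<open>report\<close> and its regular expression are hypothetical, not part of the
ontologies discussed below:
@{boxed_theory_text [display]\<open>
doc_class report =
  style :: "string option" <= None
  where "title ~~ \<lbrace>section\<rbrace>$^+$ ~~ bibliography"
\<close>}
A \<^theory_text>\<open>report\<close> instance would then be rejected by the monitor unless its text-elements occur in the
order: a title, one or more sections, and a bibliography.
\<close>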
section*[ontomod::text_section]\<open> Modeling Ontologies in \<^isadof> \<close>
text\<open> In this section, we will use the \<^isadof> document ontology language for three different
application scenarios: for scholarly papers, for mathematical exam sheets as well as standardization
documents where the concepts of the standard are captured in the ontology. For space reasons, we
will concentrate in all three cases on aspects of the modeling.\<close>
subsection*[scholar_onto::example]\<open> The Scholar Paper Scenario: Eating One's Own Dog Food. \<close>
text\<open> The following ontology is a simple ontology modeling scientific papers. In this
\<^isadof> application scenario, we deliberately refrain from integrating references to
(Isabelle) formal content in order to demonstrate that \<^isadof> is not a framework from
Isabelle users to Isabelle users only. Of course, such references can be added easily and
represent a particular strength of \<^isadof>.\<close>
(*
text\<open>\begin{figure}
@{boxed_theory_text [display]\<open>
doc_class title =
  short_title :: "string option" <= None
@@ -343,13 +329,35 @@ doc_class abstract =
  keyword_list :: "string list" <= None

doc_class text_section =
  main_author :: "author option" <= None
  todo_list :: "string list" <= "[]"
\<close>}
\caption{The core of the ontology definition for writing scholarly papers.}
\label{fig:paper-onto-core}
\end{figure}\<close>
*)
text*["paper_onto_core"::figure2,
      caption="\<open>The core of the ontology definition for writing scholarly papers.\<close>"]
\<open>@{boxed_theory_text [display]\<open>
doc_class title =
  short_title :: "string option" <= None

doc_class subtitle =
  abbrev :: "string option" <= None

doc_class author =
  affiliation :: "string"

doc_class abstract =
  keyword_list :: "string list" <= None

doc_class text_section =
  main_author :: "author option" <= None
  todo_list :: "string list" <= "[]"
\<close>}\<close>

text\<open> The first part of the ontology \<^theory_text>\<open>scholarly_paper\<close>
(see @{figure2 "paper_onto_core"})
contains the document class definitions
with the usual text-elements of a scientific paper. The attributes \<^theory_text>\<open>short_title\<close>,
\<^theory_text>\<open>abbrev\<close> etc. are introduced with their types as well as their default values.
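text\<open>
As a hedged illustration (the instance name and the attribute value are hypothetical), a class of
the ontology above is instantiated by an annotated text-element, in the same style as the
\<^theory_text>\<open>text*\<close> commands of the present paper:
@{boxed_theory_text [display]\<open>
text*[my_intro::text_section, todo_list = "[''check related work'']"]
\<open> ... \<close>
\<close>}
The attribute \<^theory_text>\<open>todo_list\<close> overrides its declared default \<^theory_text>\<open>"[]"\<close>; omitted attributes keep
their defaults.
\<close>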
@@ -357,16 +365,18 @@ Our model prescribes an optional \<^theory_text>\<open>main_author\<close> and a
text section; since instances of this class are mutable (meta)-objects of text-elements, they
can be modified arbitrarily through subsequent text and of course globally during text evolution.
\<^theory_text>\<open>author\<close> is a HOL-type internally generated by the \<^isadof> framework and can therefore
appear in the \<^theory_text>\<open>main_author\<close> attribute of the \<^theory_text>\<open>text_section\<close> class;
semantic links between concepts can be modeled this way.

The translation of its content to, \<^eg>, Springer's \<^LaTeX> setup for the Lecture Notes in Computer
Science Series, as required by many scientific conferences, is mostly straight-forward.
\<^vs>\<open>-0.8cm\<close>\<close>

figure*[fig1::figure,spawn_columns=False,relative_width="95",src="''figures/Dogfood-Intro''"]
\<open> Ouroboros I: This paper from inside \<^dots> \<close>

(*<*)declare_reference*[paper_onto_sections::figure2](*>*)
text\<open>\<^vs>\<open>-0.8cm\<close> @{figure \<open>fig1\<close>} shows the corresponding view in the Isabelle/PIDE of the present paper.
Note that the text uses \<^isadof>'s own text-commands containing the meta-information provided by
the underlying ontology.
We proceed by a definition of \<^theory_text>\<open>introduction\<close>'s, which we define as the extension of
@@ -395,13 +405,10 @@ of automated forms of validation check for specific categories of papers is envi
Since this requires deeper knowledge in Isabelle programming, however, we consider this out
of the scope of this paper.

We proceed more or less conventionally by the subsequent sections (@{figure2 (unchecked)\<open>paper_onto_sections\<close>})\<close>
text*["paper_onto_sections"::figure2,
      caption = "''Various types of sections of scholarly papers.''"]\<open>
@{boxed_theory_text [display]\<open>
doc_class technical = text_section +
  definition_list :: "string list" <= "[]"

doc_class example = text_section +
  comment :: string
@@ -413,13 +420,12 @@ doc_class related_work = conclusion +

doc_class bibliography =
  style :: "string option" <= "''LNCS''"
\<close>}\<close>
(*<*)declare_reference*[paper_onto_monitor::figure2](*>*)
text\<open>... and finish with a monitor class definition that enforces a textual ordering
in the document core by a regular expression (@{figure2 (unchecked) "paper_onto_monitor"}).\<close>
text*["paper_onto_monitor"::figure2,
      caption = "''A monitor for the scholarly paper ontology.''"]\<open>
@{boxed_theory_text [display]\<open>
doc_class article =
  trace :: "(title + subtitle + author + abstract +
@@ -429,9 +435,6 @@ doc_class article =
            introduction ~~ \<lbrace>technical || example\<rbrace>$^+$ ~~ conclusion ~~
            bibliography)"
\<close>}
\<close>

text\<open> We might wish to add a component into our ontology that models figures to be included into
the document. This boils down to the exercise of modeling structured data in the style of a
@@ -479,9 +482,9 @@ We assume that the content has four different types of addressees, which have a
text\<open> The latter quality assurance mechanism is used in many universities,
where for organizational reasons the execution of an exam takes place in facilities
where the author of the exam is not expected to be physically present.
Furthermore, we assume a simple grade system (thus, some calculation is required). \<close>
text*["onto_exam"::figure2,
      caption = "''The core of the ontology modeling math exams.''"]\<open>
@{boxed_theory_text [display]\<open>
doc_class Author = ...
datatype Subject = algebra | geometry | statistical
@@ -504,17 +507,17 @@ doc_class Exam_item =
  concerns :: "ContentClass set"

type_synonym SubQuestion = string
\<close>}\<close>

(*<*)declare_reference*[onto_questions::figure2](*>*)
text\<open>The heart of this ontology (see @{figure2 "onto_exam"}) is an alternation of questions and answers,
where the answers can consist of simple yes-no answers (QCM style check-boxes) or lists of formulas.
Since we do not assume familiarity of the students with Isabelle (\<^theory_text>\<open>term\<close> would assume that this is a
parse-able and type-checkable entity), we basically model a derivation as a sequence of strings
(see @{figure2 (unchecked)"onto_questions"}).\<close>
text*["onto_questions"::figure2,
      caption = "''An exam can contain different types of questions.''"]\<open>
@{boxed_theory_text [display]\<open>
doc_class Answer_Formal_Step = Exam_item +
  justification :: string
@@ -539,18 +542,17 @@ doc_class Exercise = Exam_item +
  content :: "(Task) list"
  concerns :: "ContentClass set" <= "UNIV"
  mark :: int
\<close>}\<close>
(*<*)declare_reference*[onto_exam_monitor::figure2](*>*)
text\<open>
In many institutions, it makes sense to have a rigorous process of validation
for exam subjects: is the initial question correct? Is a proof in the sense of the
question possible? We model the possibility that the @{term examiner} validates a
question by a sample proof validated by Isabelle (see @{figure2 (unchecked) "onto_exam_monitor"}).
In our scenario these sample proofs are completely \<^emph>\<open>internal\<close>, \<^ie>, not exposed to the
students but just additional material for the internal review process of the exam.\<close>
text*["onto_exam_monitor"::figure2,
      caption = "''Validating exams.''"]\<open>
@{boxed_theory_text [display]\<open>
doc_class Validation =
  tests :: "term list" <= "[]"
@@ -565,12 +567,7 @@ doc_class MathExam=
  content :: "(Header + Author + Exercise) list"
  global_grade :: Grade
  where "\<lbrace>Author\<rbrace>$^+$ ~~ Header ~~ \<lbrace>Exercise ~~ Solution\<rbrace>$^+$ "
\<close>}\<close>

(*<*)declare_reference*["fig_qcm"::figure](*>*)
@@ -586,8 +583,8 @@ figure*[fig_qcm::figure,spawn_columns=False,

subsection*[cenelec_onto::example]\<open> The Certification Scenario following CENELEC \<close>
text\<open> Documents to be provided in formal certifications (such as CENELEC
50126/50128, the DO-178B/C, or Common Criteria) can greatly profit from the control of ontological
consistency: a lot of an evaluator's work consists in tracing down the links from requirements over
assumptions down to elements of evidence, be it in the models, the code, or the tests.
In a certification process, traceability becomes a major concern; and providing
mechanisms to ensure complete traceability already at the development of the
@@ -599,14 +596,16 @@ of developments targeting certifications. Continuously checking the links betwee
and the semi-formal parts of such documents is particularly valuable during the (usually
collaborative) development effort.

As in many other cases, formal certification documents come with their own terminology and pragmatics
of what has to be demonstrated and where, and how the traceability of requirements through
design-models over code to system environment assumptions has to be assured.
\<close>
(*<*)declare_reference*["conceptual"::figure2](*>*)
text\<open> In the sequel, we present a simplified version of an ontological model used in a
case-study~@{cite "bezzecchi.ea:making:2018"}. We start with an introduction of the concept of requirement
(see @{figure2 (unchecked) "conceptual"}). \<close>
text*["conceptual"::figure2,
      caption = "''Modeling requirements.''"]\<open>
@{boxed_theory_text [display]\<open>
doc_class requirement = long_name :: "string option"
@@ -620,11 +619,9 @@ datatype ass_kind = informal | semiformal | formal

doc_class assumption = requirement +
  assumption_kind :: ass_kind <= informal
\<close>}\<close>

text\<open>Such ontologies can be enriched by larger explanations and examples, which may help
the team of engineers substantially when developing the central document for a certification,
like an explication of what precisely the difference is between a \<^emph>\<open>hypothesis\<close> and an
\<^emph>\<open>assumption\<close> in the context of the evaluation standard. Since the PIDE makes for each
@@ -689,7 +686,7 @@ non-compatible type, the text is not validated. \<close>
figure*[figDogfoodVIlinkappl::figure,relative_width="80",src="''figures/Dogfood-V-attribute''"]
\<open> Exploring an attribute (hyperlinked to the class). \<close>
subsection*[cenelec_pide::example]\<open> CENELEC \<close>
(*<*)declare_reference*[figfig3::figure](*>*)
text\<open> The corresponding view in @{docitem (unchecked) \<open>figfig3\<close>} shows the core part of a document,
coherent to the @{example \<open>cenelec_onto\<close>}. The first sample shows standard Isabelle antiquotations
@{cite "wenzel:isabelle-isar:2017"} referring into formal entities of a theory. This way, the informal parts
@@ -722,7 +719,7 @@ informal parts. \<close>
section*[onto_future::technical]\<open> Monitor Classes \<close>
text\<open> Besides sub-typing, there is another relation between
document classes: a class can be a \<^emph>\<open>monitor\<close> to other ones,
which is expressed by the occurrence of a @{theory_text \<open>where\<close>} clause
in the document class definition containing a regular
expression (see @{example \<open>scholar_onto\<close>}).
While class-extension refers to data-inheritance of attributes,
@@ -780,8 +777,7 @@ work in this area we are aware of is rOntorium~@{cite "rontorium"}, a plugin
for \<^Protege> that integrates R~@{cite "adler:r:2010"} into an
ontology environment. Here, the main motivation behind this
integration is to allow for statistically analyzing ontological
documents. Thus, this is complementary to our work.\<close>

text\<open> \<^isadof> in its present form has a number of technical short-comings as well
as potentials not yet explored. On the long list of the short-comings is the
@@ -1,6 +1,8 @@
%% This is a placeholder for user-specific configuration and packages.

\usepackage{stmaryrd}
\usepackage{pifont}% http://ctan.org/pkg/pifont

\title{<TITLE>}
\author{<AUTHOR>}
@@ -56,31 +56,30 @@ abstract*[abs, keywordlist="[\<open>Shallow Embedding\<close>,\<open>Process-Alg
text\<open>\<close>
section*[introheader::introduction,main_author="Some(@{docitem ''bu''}::author)"]\<open> Introduction \<close>
text*[introtext::introduction, level="Some 1"]\<open>
Communicating Sequential Processes (\<^csp>) is a language to specify and verify patterns of
interaction of concurrent systems. Together with CCS and LOTOS, it belongs to the family of
\<^emph>\<open>process algebras\<close>. \<^csp>'s rich theory comprises denotational, operational and algebraic semantic
facets and has influenced programming languages such as Limbo, Crystal, Clojure and most notably
Golang @{cite "donovan2015go"}. \<^csp> has been applied in industry as a tool for specifying and
verifying the concurrent aspects of hardware systems, such as the T9000 transputer
@{cite "Barret95"}.

The theory of \<^csp> was first described in 1978 in a book by Tony Hoare @{cite "Hoare:1985:CSP:3921"},
but has since evolved substantially @{cite "BrookesHR84" and "brookes-roscoe85" and "roscoe:csp:1998"}.
\<^csp> describes the most common communication and synchronization mechanisms with one single language
primitive: synchronous communication written \<open>_\<lbrakk>_\<rbrakk>_\<close>. \<^csp> semantics is described by a fully abstract
model of behaviour designed to be \<^emph>\<open>compositional\<close>: the denotational semantics of a process \<open>P\<close>
encompasses all possible behaviours of this process in the context of all possible environments
\<open>P \<lbrakk>S\<rbrakk> Env\<close> (where \<open>S\<close> is the set of \<open>atomic events\<close> both \<open>P\<close> and \<open>Env\<close> must synchronize). This
design objective has the consequence that two kinds of choice have to be distinguished: \<^vs>\<open>0.1cm\<close>

  \<^enum> the \<^emph>\<open>external choice\<close>, written \<open>_\<box>_\<close>, which forces a process "to follow" whatever
    the environment offers, and \<^vs>\<open>-0.4cm\<close>
  \<^enum> the \<^emph>\<open>internal choice\<close>, written \<open>_\<sqinter>_\<close>, which imposes on the environment of a process
    "to follow" the non-deterministic choices made.\<^vs>\<open>0.3cm\<close>
\<close>

text\<open> \<^vs>\<open>-0.6cm\<close>
Generalizations of these two operators \<open>\<box>x\<in>A. P(x)\<close> and \<open>\<Sqinter>x\<in>A. P(x)\<close> allow for modeling the concepts
of \<^emph>\<open>input\<close> and \<^emph>\<open>output\<close>: Based on the prefix operator \<open>a\<rightarrow>P\<close> (event \<open>a\<close> happens, then the process
proceeds with \<open>P\<close>), receiving input is modeled by \<open>\<box>x\<in>A. x\<rightarrow>P(x)\<close> while sending output is represented
@ -127,10 +126,10 @@ attempt to formalize denotational \<^csp> semantics covering a part of Bill Rosc
|
|||
omitted.\<close>}.
|
||||
\<close>
|
||||
|
||||
section*["pre"::tc,main_author="Some(@{docitem \<open>bu\<close>}::author)"]
|
||||
section*["pre"::tc,main_author="Some(@{author \<open>bu\<close>}::author)"]
|
||||
\<open>Preliminaries\<close>
|
||||
|
||||
subsection*[cspsemantics::tc, main_author="Some(@{docitem ''bu''})"]\<open>Denotational \<^csp> Semantics\<close>
|
||||
subsection*[cspsemantics::tc, main_author="Some(@{author ''bu''})"]\<open>Denotational \<^csp> Semantics\<close>
|
||||
|
||||
text\<open> The denotational semantics (following @{cite "roscoe:csp:1998"}) comes in three layers:
|
||||
the \<^emph>\<open>trace model\<close>, the \<^emph>\<open>(stable) failures model\<close> and the \<^emph>\<open>failure/divergence model\<close>.
|
||||
|
@ -144,9 +143,9 @@ processes \<open>Skip\<close> (successful termination) and \<open>Stop\<close> (
|
|||
Note that the trace sets, representing all \<^emph>\<open>partial\<close> history, is in general prefix closed.\<close>
|
||||
|
||||
text*[ex1::math_example, status=semiformal, level="Some 1"] \<open>
|
||||
Let two processes be defined as follows:
|
||||
Let two processes be defined as follows:\<^vs>\<open>0.2cm\<close>
|
||||
|
||||
\<^enum> \<open>P\<^sub>d\<^sub>e\<^sub>t = (a \<rightarrow> Stop) \<box> (b \<rightarrow> Stop)\<close>
|
||||
\<^enum> \<open>P\<^sub>d\<^sub>e\<^sub>t = (a \<rightarrow> Stop) \<box> (b \<rightarrow> Stop)\<close>
|
||||
\<^enum> \<open>P\<^sub>n\<^sub>d\<^sub>e\<^sub>t = (a \<rightarrow> Stop) \<sqinter> (b \<rightarrow> Stop)\<close>
|
||||
\<close>
|
||||
|
||||
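As an illustrative aside (an editor's sketch, not part of the formal sources being diffed here): the two example processes above have exactly the same traces, so only the failures component tells them apart. The event names and the tuple/frozenset encoding below are assumptions made purely for this sketch, which records only the maximal refusal set of each state.

```python
A, B = 'a', 'b'

# Both processes can initially do 'a' or 'b' and then stop, so their
# trace sets coincide:
traces_det = traces_ndet = {(), (A,), (B,)}

# External choice: initially nothing can be refused, since the
# environment decides between 'a' and 'b'.
failures_det = {((), frozenset()),
                ((A,), frozenset({A, B})),   # Stop refuses everything
                ((B,), frozenset({A, B}))}

# Internal choice: the process may silently commit to one branch and
# then refuse the other initial event.
failures_ndet = failures_det | {((), frozenset({A})),
                                ((), frozenset({B}))}

# Same traces, strictly more failures: only the failures distinguish them.
assert traces_det == traces_ndet and failures_det < failures_ndet
```

This is precisely why the trace model alone cannot separate \<open>P\<^sub>d\<^sub>e\<^sub>t\<close> from \<open>P\<^sub>n\<^sub>d\<^sub>e\<^sub>t\<close>.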
@@ -190,7 +189,7 @@ of @{cite "IsobeRoggenbach2010"} is restricted to a variant of the failures mode

\<close>

subsection*["isabelleHol"::tc, main_author="Some(@{docitem ''bu''})"]\<open>Isabelle/HOL\<close>
subsection*["isabelleHol"::tc, main_author="Some(@{author ''bu''})"]\<open>Isabelle/HOL\<close>
text\<open> Nowadays, Isabelle/HOL is one of the major interactive theory development environments
@{cite "nipkow.ea:isabelle:2002"}. HOL stands for Higher-Order Logic, a logic based on simply-typed
\<open>\<lambda>\<close>-calculus extended by parametric polymorphism and Haskell-like type-classes.
@@ -212,25 +211,23 @@ distribution comes with rich libraries comprising Sets, Numbers, Lists, etc. whi
For this work, a particular library called \<^theory_text>\<open>HOLCF\<close> is intensively used. It provides classical
domain theory for a particular type-class \<open>\<alpha>::pcpo\<close>, \<^ie> the class of types \<open>\<alpha>\<close> for which

\<^enum> a least element \<open>\<bottom>\<close> is defined, and
\<^enum> a least element \<open>\<bottom>\<close> is defined, and
\<^enum> a complete partial order \<open>_\<sqsubseteq>_\<close> is defined.

For these types, \<^theory_text>\<open>HOLCF\<close> provides a fixed-point operator \<open>\<mu>X. f X\<close> as well as
fixed-point induction and other (automated) proof infrastructure. Isabelle's type-inference can
automatically infer, for example, that if \<open>\<alpha>::pcpo\<close>, then \<open>(\<beta> \<Rightarrow> \<alpha>)::pcpo\<close>. \<close>

section*["csphol"::tc,main_author="Some(@{docitem ''bu''}::author)", level="Some 2"]
section*["csphol"::tc,main_author="Some(@{author ''bu''}::author)", level="Some 2"]
\<open>Formalising Denotational \<^csp> Semantics in HOL\<close>

text\<open>\<close>

subsection*["processinv"::tc, main_author="Some(@{docitem ''bu''})"]
subsection*["processinv"::tc, main_author="Some(@{author ''bu''})"]
\<open>Process Invariant and Process Type\<close>
text\<open> First, we need a slight revision of the concept
of \<^emph>\<open>trace\<close>: if \<open>\<Sigma>\<close> is the type of the atomic events (represented by a type variable), then
we need to extend this type by a special event \<open>\<surd>\<close> (called "tick") signaling termination.
Thus, traces have the type \<open>(\<Sigma>+\<surd>)\<^sup>*\<close>, written \<open>\<Sigma>\<^sup>\<surd>\<^sup>*\<close>; since \<open>\<surd>\<close> may only occur at the end of a trace,
we need to define a predicate \<open>front\<^sub>-tickFree t\<close> that requires from traces that \<open>\<surd>\<close> can only occur
we need to extend this type by a special event \<open>\<checkmark>\<close> (called "tick") signaling termination.
Thus, traces have the type \<open>(\<Sigma>\<uplus>\<checkmark>)\<^sup>*\<close>, written \<open>\<Sigma>\<^sup>\<checkmark>\<^sup>*\<close>; since \<open>\<checkmark>\<close> may only occur at the end of a trace,
we need to define a predicate \<open>front\<^sub>-tickFree t\<close> that requires from traces that \<open>\<checkmark>\<close> can only occur
at the end.

Second, in the traditional literature, the semantic domain is implicitly described by 9 "axioms"
@@ -245,24 +242,24 @@ Informally, these are:
\<^item> the tick accepted after a trace \<open>s\<close> implies that all other events are refused;
\<^item> a divergence trace with any suffix is itself a divergence one
\<^item> once a process has diverged, it can engage in or refuse any sequence of events.
\<^item> a trace ending with \<open>\<surd>\<close> belonging to divergence set implies that its
maximum prefix without \<open>\<surd>\<close> is also a divergent trace.
\<^item> a trace ending with \<open>\<checkmark>\<close> belonging to the divergence set implies that its
maximal prefix without \<open>\<checkmark>\<close> is also a divergent trace.

More formally, a process \<open>P\<close> of the type \<open>\<Sigma> process\<close> should have the following properties:

@{cartouche [display] \<open>([],{}) \<in> \<F> P \<and>
@{cartouche [display, indent=10] \<open>([],{}) \<in> \<F> P \<and>
(\<forall> s X. (s,X) \<in> \<F> P \<longrightarrow> front_tickFree s) \<and>
(\<forall> s t . (s@t,{}) \<in> \<F> P \<longrightarrow> (s,{}) \<in> \<F> P) \<and>
(\<forall> s X Y. (s,Y) \<in> \<F> P \<and> X\<subseteq>Y \<longrightarrow> (s,X) \<in> \<F> P) \<and>
(\<forall> s X Y. (s,X) \<in> \<F> P \<and> (\<forall>c \<in> Y. ((s@[c],{}) \<notin> \<F> P)) \<longrightarrow> (s,X \<union> Y) \<in> \<F> P) \<and>
(\<forall> s X. (s@[\<surd>],{}) \<in> \<F> P \<longrightarrow> (s,X-{\<surd>}) \<in> \<F> P) \<and>
(\<forall> s X. (s@[\<checkmark>],{}) \<in> \<F> P \<longrightarrow> (s,X-{\<checkmark>}) \<in> \<F> P) \<and>
(\<forall> s t. s \<in> \<D> P \<and> tickFree s \<and> front_tickFree t \<longrightarrow> s@t \<in> \<D> P) \<and>
(\<forall> s X. s \<in> \<D> P \<longrightarrow> (s,X) \<in> \<F> P) \<and>
(\<forall> s. s@[\<surd>] \<in> \<D> P \<longrightarrow> s \<in> \<D> P)\<close>}
(\<forall> s. s@[\<checkmark>] \<in> \<D> P \<longrightarrow> s \<in> \<D> P)\<close>}

Our objective is to encapsulate this wishlist into a type constructed as a conservative
theory extension in our theory \<^holcsp>.
Therefore third, we define a pre-type for processes \<open>\<Sigma> process\<^sub>0\<close> by \<open> \<P>(\<Sigma>\<^sup>\<surd>\<^sup>* \<times> \<P>(\<Sigma>\<^sup>\<surd>)) \<times> \<P>(\<Sigma>\<^sup>\<surd>)\<close>.
Therefore third, we define a pre-type for processes \<open>\<Sigma> process\<^sub>0\<close> by \<open> \<P>(\<Sigma>\<^sup>\<checkmark>\<^sup>* \<times> \<P>(\<Sigma>\<^sup>\<checkmark>)) \<times> \<P>(\<Sigma>\<^sup>\<checkmark>)\<close>.
Fourth, we turn our wishlist of "axioms" above into the definition of a predicate \<open>is_process P\<close>
of type \<open>\<Sigma> process\<^sub>0 \<Rightarrow> bool\<close> deciding if its conditions are fulfilled. Since \<open>P\<close> is a pre-process,
we replace \<open>\<F>\<close> by \<open>fst\<close> and \<open>\<D>\<close> by \<open>snd\<close> (the HOL projections into a pair).
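As an editor's sketch (not part of the HOL development): a fragment of the \<open>is_process\<close> wishlist above can be made executable on finite data. The encoding of \<open>\<F>\<close> as a set of (trace, refusal) pairs and \<open>\<D>\<close> as a set of traces is an assumption for illustration; the formal version quantifies over the full pre-type.

```python
from itertools import chain, combinations

def subsets(X):
    """All subsets of a finite set, as frozensets."""
    xs = sorted(X)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

def is_process0(F, D):
    """Check a fragment of the nine conditions on a finite (F, D) pair."""
    return (((), frozenset()) in F                           # the empty failure
            and all((s[:i], frozenset()) in F                # traces are prefix-closed
                    for (s, _) in F for i in range(len(s)))
            and all((s, Y) in F                              # refusals are subset-closed
                    for (s, X) in F for Y in subsets(X))
            and all((s, frozenset()) in F for s in D))       # divergences show up in F

# a -> Stop over the alphabet {a, b}, refusals closed under subsets:
F = {((), frozenset()), ((), frozenset({'b'})),
     (('a',), frozenset()), (('a',), frozenset({'a'})),
     (('a',), frozenset({'b'})), (('a',), frozenset({'a', 'b'}))}
assert is_process0(F, set())
assert not is_process0(F - {((), frozenset())}, set())  # drop an axiom: rejected
```

In the formal theory, exactly this predicate (over all nine conditions) carves the \<open>'\<alpha> process\<close> type out of the pre-type.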
@@ -275,7 +272,7 @@ but this can be constructed in a straight-forward manner. Suitable definitions f
\<open>\<T>\<close>, \<open>\<F>\<close> and \<open>\<D>\<close> lifting \<open>fst\<close> and \<open>snd\<close> on the new \<open>'\<alpha> process\<close>-type allow us to derive
the above properties for any \<open>P::'\<alpha> process\<close>. \<close>

subsection*["operator"::tc, main_author="Some(@{docitem ''lina''})"]
subsection*["operator"::tc, main_author="Some(@{author ''lina''})"]
\<open>\<^csp> Operators over the Process Type\<close>
text\<open> Now, the operators of \<^csp> \<open>Skip\<close>, \<open>Stop\<close>, \<open>_\<sqinter>_\<close>, \<open>_\<box>_\<close>, \<open>_\<rightarrow>_\<close>, \<open>_\<lbrakk>_\<rbrakk>_\<close> etc.
for internal choice, external choice, prefix and parallel composition, can
@@ -300,7 +297,7 @@ The definitional presentation of the \<^csp> process operators according to @{ci
always follows this scheme. This part of the theory comprises around 2000 loc.
\<close>

subsection*["orderings"::tc, main_author="Some(@{docitem ''bu''})"]
subsection*["orderings"::tc, main_author="Some(@{author ''bu''})"]
\<open>Refinement Orderings\<close>

text\<open> \<^csp> is centered around the idea of process refinement; many critical properties,
@@ -330,7 +327,7 @@ states, from which no internal progress is possible.
\<close>

subsection*["fixpoint"::tc, main_author="Some(@{docitem ''lina''})"]
subsection*["fixpoint"::tc, main_author="Some(@{author ''lina''})"]
\<open>Process Ordering and HOLCF\<close>
text\<open> For any denotational semantics, the fixed-point theory giving semantics to systems
of recursive equations is considered the keystone. Its prerequisite is a complete partial ordering
@@ -351,7 +348,7 @@ We define \<open>P \<sqsubseteq> Q \<equiv> \<psi>\<^sub>\<D> \<and> \<psi>\<^su
text\<open>The third condition \<open>\<psi>\<^sub>\<M>\<close> implies that the set of minimal divergent traces
(ones with no proper prefix that is also a divergence) in \<open>P\<close>, denoted by \<open>Mins(\<D> P)\<close>,
should be a subset of the trace set of \<open>Q\<close>.
%One may note that each element in \<open>Mins(\<D> P)\<close> do actually not contain the \<open>\<surd>\<close>,
%One may note that each element in \<open>Mins(\<D> P)\<close> do actually not contain the \<open>\<checkmark>\<close>,
%which can be deduced from the process invariants described
%in the precedent @{technical "processinv"}. This can be explained by the fact that we are not
%really concerned with what a process does after it terminates.
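As an editor's illustration of the condition \<open>\<psi>\<^sub>\<M>\<close> just described (finite data only, encoding assumed for the sketch): the minimal divergences of \<open>P\<close> are those with no proper prefix that also diverges, and exactly these must be traces of \<open>Q\<close>.

```python
def mins(D):
    """Minimal divergences: no proper prefix is itself a divergence."""
    return {s for s in D if not any(s[:i] in D for i in range(len(s)))}

def psi_M(D_P, T_Q):
    """The third ordering condition: Mins(D P) must be a subset of T Q."""
    return mins(D_P) <= T_Q

D_P = {('a',), ('a', 'b'), ('a', 'b', 'c')}
assert mins(D_P) == {('a',)}        # the longer traces extend a divergence
assert psi_M(D_P, {(), ('a',)})     # so only ('a',) must be a trace of Q
assert not psi_M(D_P, {()})
```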
@@ -397,7 +394,7 @@ Fixed-point inductions are the main proof weapon in verifications, together with
and the \<^csp> laws. Denotational arguments can be hidden as they are not needed in practical
verifications. \<close>

subsection*["law"::tc, main_author="Some(@{docitem ''lina''})"]
subsection*["law"::tc, main_author="Some(@{author ''lina''})"]
\<open>\<^csp> Rules: Improved Proofs and New Results\<close>

@@ -439,12 +436,12 @@ cases to be considered as well as their complexity makes pen and paper proofs
practically infeasible.
\<close>

section*["newResults"::tc,main_author="Some(@{docitem ''safouan''}::author)",
         main_author="Some(@{docitem ''lina''}::author)", level= "Some 3"]
section*["newResults"::tc,main_author="Some(@{author ''safouan''}::author)",
         main_author="Some(@{author ''lina''}::author)", level= "Some 3"]
\<open>Theoretical Results on Refinement\<close>
text\<open>\<close>
subsection*["adm"::tc,main_author="Some(@{docitem ''safouan''}::author)",
            main_author="Some(@{docitem ''lina''}::author)"]
subsection*["adm"::tc,main_author="Some(@{author ''safouan''}::author)",
            main_author="Some(@{author ''lina''}::author)"]
\<open>Decomposition Rules\<close>
text\<open>
In our framework, we implemented the pcpo process refinement together with the five refinement
@@ -479,8 +476,8 @@ The failure and divergence projections of this operator are also interdependent,
sequence operator. Hence, this operator is not monotonic with \<open>\<sqsubseteq>\<^sub>\<F>\<close>, \<open>\<sqsubseteq>\<^sub>\<D>\<close> and \<open>\<sqsubseteq>\<^sub>\<T>\<close>, but monotonic
when their combinations are considered. \<close>
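To make the refinement orderings mentioned above concrete, here is an editor's sketch on finite data: in the standard CSP models, refinement is reverse containment of the semantic components, so \<open>P \<sqsubseteq>\<^sub>\<T> Q\<close> means the traces of \<open>Q\<close> are among those of \<open>P\<close>, and similarly for failures and divergences. The encodings below are assumptions of the sketch.

```python
def refines_T(T_P, T_Q):
    """P [T= Q: every trace of Q is a trace of P."""
    return T_Q <= T_P

def refines_FD(F_P, D_P, F_Q, D_Q):
    """P [FD= Q: reverse containment of failures and divergences."""
    return F_Q <= F_P and D_Q <= D_P

# Internal choice trace-refines each of its branches:
T_P = {(), ('a',), ('b',)}   # a -> Stop |~| b -> Stop
T_Q = {(), ('a',)}           # a -> Stop
assert refines_T(T_P, T_Q)
assert not refines_T(T_Q, T_P)
```

Monotonicity of an operator with respect to one of these orderings then amounts to containment being preserved under the operator's semantic construction.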
subsection*["processes"::tc,main_author="Some(@{docitem ''safouan''}::author)",
            main_author="Some(@{docitem ''lina''}::author)"]
subsection*["processes"::tc,main_author="Some(@{author ''safouan''}::author)",
            main_author="Some(@{author ''lina''}::author)"]
\<open>Reference Processes and their Properties\<close>
text\<open>
We now present reference processes that exhibit basic behaviors, introduced in
@@ -566,13 +563,13 @@ the Failure/Divergence Semantics of \<^csp>.\<close>

Definition*[X10::"definition", level="Some 2"]\<open> \<open>deadlock\<^sub>-free P \<equiv> DF\<^sub>S\<^sub>K\<^sub>I\<^sub>P UNIV \<sqsubseteq>\<^sub>\<F> P\<close> \<close>

text\<open>\<^noindent> A process \<open>P\<close> is deadlock-free if and only if after any trace \<open>s\<close> without \<open>\<surd>\<close>, the union of \<open>\<surd>\<close>
text\<open>\<^noindent> A process \<open>P\<close> is deadlock-free if and only if after any trace \<open>s\<close> without \<open>\<checkmark>\<close>, the union of \<open>\<checkmark>\<close>
and all events of \<open>P\<close> can never be a refusal set associated to \<open>s\<close>, which means that \<open>P\<close> cannot
be deadlocked after any non-terminating trace.
\<close>

Theorem*[T1, short_name="\<open>DF definition captures deadlock-freeness\<close>", level="Some 2"]
\<open> \<^hfill> \<^br> \<open>deadlock_free P \<longleftrightarrow> (\<forall>s\<in>\<T> P. tickFree s \<longrightarrow> (s, {\<surd>}\<union>events_of P) \<notin> \<F> P)\<close> \<close>
\<open> \<^hfill> \<^br> \<open>deadlock_free P \<longleftrightarrow> (\<forall>s\<in>\<T> P. tickFree s \<longrightarrow> (s, {\<checkmark>}\<union>events_of P) \<notin> \<F> P)\<close> \<close>
Definition*[X11, level="Some 2"]\<open> \<open>livelock\<^sub>-free P \<equiv> \<D> P = {} \<close> \<close>
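An editor's finite sketch of the right-hand side of Theorem T1 above: a process is deadlock-free when no tick-free trace can be paired with the full refusal set. The `TICK` marker and the set encodings are assumptions of the sketch, and the "processes" are truncated to finitely many traces.

```python
TICK = 'tick'   # stand-in for the termination event

def deadlock_free(T, F, events):
    """Theorem T1, finitely: no tick-free trace may refuse everything."""
    full = frozenset(events) | {TICK}
    return all((s, full) not in F for s in T if TICK not in s)

# RUN over {a}, truncated at depth 2: it always offers 'a', so the full
# refusal set never appears -- no deadlock in this fragment.
T_run = {(), ('a',), ('a', 'a')}
F_run = {(s, frozenset({TICK})) for s in T_run}
assert deadlock_free(T_run, F_run, {'a'})

# a -> Stop deadlocks after 'a': the full refusal set occurs there.
T_stop = {(), ('a',)}
F_stop = {(('a',), frozenset({'a', TICK}))}
assert not deadlock_free(T_stop, F_stop, {'a'})
```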
text\<open> Recall that all five reference processes are livelock-free.
@@ -600,11 +597,11 @@ then it may still be livelock-free. % This makes sense since livelocks are worse

\<close>

section*["advanced"::tc,main_author="Some(@{docitem ''safouan''}::author)",level="Some 3"]
section*["advanced"::tc,main_author="Some(@{author ''safouan''}::author)",level="Some 3"]
\<open>Advanced Verification Techniques\<close>

text\<open>
Based on the refinement framework discussed in @{docitem "newResults"}, we will now
Based on the refinement framework discussed in @{technical "newResults"}, we will now
turn to some more advanced proof principles, tactics and verification techniques.
We will demonstrate them on two paradigmatic examples well-known in the \<^csp> literature:
The CopyBuffer and Dijkstra's Dining Philosophers. In both cases, we will exploit
@@ -615,7 +612,7 @@ verification. In the latter case, we present an approach to a verification of a
architecture, in this case a ring-structure of arbitrary size.
\<close>

subsection*["illustration"::tc,main_author="Some(@{docitem ''safouan''}::author)", level="Some 3"]
subsection*["illustration"::tc,main_author="Some(@{author ''safouan''}::author)", level="Some 3"]
\<open>The General CopyBuffer Example\<close>
text\<open>
We consider the paradigmatic copy buffer example @{cite "Hoare:1985:CSP:3921" and "Roscoe:UCS:2010"}
@@ -663,7 +660,7 @@ of 2 lines proof-script involving the derived algebraic laws of \<^csp>.

After proving that \<open>SYSTEM\<close> implements \<open>COPY\<close> for arbitrary alphabets, we aim to profit from this
first established result to check which relations \<open>SYSTEM\<close> has wrt. the reference processes of
@{docitem "processes"}. Thus, we prove that \<open>COPY\<close> is deadlock-free which implies livelock-free,
@{technical "processes"}. Thus, we prove that \<open>COPY\<close> is deadlock-free which implies livelock-free,
(proof by fixed-induction similar to \<open>lemma: COPY \<sqsubseteq> SYSTEM\<close>), from which we can immediately infer
by transitivity that \<open>SYSTEM\<close> is too. Using refinement relations, we killed four birds with one stone,
as we proved deadlock-freeness and livelock-freeness for both the \<open>COPY\<close> and \<open>SYSTEM\<close> processes.
@@ -680,7 +677,7 @@ corollary deadlock_free COPY
\<close>

subsection*["inductions"::tc,main_author="Some(@{docitem ''safouan''}::author)"]
subsection*["inductions"::tc,main_author="Some(@{author ''safouan''}::author)"]
\<open>New Fixed-Point Inductions\<close>

text\<open>
@@ -697,7 +694,7 @@ For this reason, we derived a number of alternative induction schemes (which are
in the HOLCF library), which are also relevant for our final Dining Philosophers example.
These are essentially adaptations of k-induction schemes applied to a domain-theoretic
setting (so: requiring \<open>f\<close> continuous and \<open>P\<close> admissible; these preconditions are
skipped here):
skipped here):\<^vs>\<open>0.2cm\<close>
\<^item> \<open>... \<Longrightarrow> \<forall>i<k. P (f\<^sup>i \<bottom>) \<Longrightarrow> (\<forall>X. (\<forall>i<k. P (f\<^sup>i X)) \<longrightarrow> P (f\<^sup>k X)) \<Longrightarrow> P (\<mu>X. f X)\<close>
\<^item> \<open>... \<Longrightarrow> \<forall>i<k. P (f\<^sup>i \<bottom>) \<Longrightarrow> (\<forall>X. P X \<longrightarrow> P (f\<^sup>k X)) \<Longrightarrow> P (\<mu>X. f X)\<close>
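The shape of these k-induction schemes can be sketched on a toy domain (an editor's illustration, not the HOLCF formulation): sets of traces ordered by inclusion, with bottom the empty set and \<open>f\<close> one unfolding of a recursive equation. The equation \<open>P = a \<rightarrow> P\<close>, the depth cut-off, and the predicate are assumptions made for the sketch.

```python
def f(X, depth=4):
    """One unfolding of P = a -> P, truncated at a finite depth."""
    return {()} | {('a',) + t for t in X if len(t) < depth}

def iterate(f, k):
    """The k-th iterate f^k(bottom), with bottom the empty approximation."""
    X = set()
    for _ in range(k):
        X = f(X)
    return X

# An admissible-style predicate: every event in every trace is 'a'.
P = lambda X: all(e == 'a' for t in X for e in t)

# Base cases of a k-induction with k = 3: P holds for the first iterates.
assert all(P(iterate(f, i)) for i in range(3))
# Induction step, spot-checked on sample approximations instead of all X.
assert all(P(f(X)) for X in (iterate(f, i) for i in range(5)) if P(X))
```

In HOLCF the conclusion \<open>P (\<mu>X. f X)\<close> then follows from continuity of \<open>f\<close> and admissibility of \<open>P\<close>, which the sketch cannot express.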
@@ -706,7 +703,7 @@ skipped here):
it reduces the goal size.

Another problem occasionally occurring in refinement proofs happens when the right-hand side term
involves more than one fixed-point process (\<^eg> \<open>P \<lbrakk>{A}\<rbrakk> Q \<sqsubseteq> S\<close>). In this situation,
involves more than one fixed-point process (\<^eg> \<open>P \<lbrakk>A\<rbrakk> Q \<sqsubseteq> S\<close>). In this situation,
we need parallel fixed-point inductions. The HOLCF library offers only a basic one:
\<^item> \<open>... \<Longrightarrow> P \<bottom> \<bottom> \<Longrightarrow> (\<forall>X Y. P X Y \<Longrightarrow> P (f X) (g Y)) \<Longrightarrow> P (\<mu>X. f X) (\<mu>X. g X)\<close>

@@ -730,7 +727,7 @@ The astute reader may notice here that if the induction step is weakened (having
the base steps require enforcement.
\<close>

subsection*["norm"::tc,main_author="Some(@{docitem ''safouan''}::author)"]
subsection*["norm"::tc,main_author="Some(@{author ''safouan''}::author)"]
\<open>Normalization\<close>
text\<open>
Our framework can reason not only over infinite alphabets, but also over processes parameterized
@@ -790,7 +787,7 @@ Summing up, our method consists of four stages:

\<close>

subsection*["dining_philosophers"::tc,main_author="Some(@{docitem ''safouan''}::author)",level="Some 3"]
subsection*["dining_philosophers"::tc,main_author="Some(@{author ''safouan''}::author)",level="Some 3"]
\<open>Generalized Dining Philosophers\<close>

text\<open> The dining philosophers problem is another paradigmatic example in the \<^csp> literature
@@ -882,7 +879,7 @@ for a dozen of philosophers (on a usual machine) due to the exponential combinat
Furthermore, our proof is fairly stable against modifications such as adding non-synchronized events like
thinking or sitting down, in contrast to model-checking techniques. \<close>

section*["relatedwork"::tc,main_author="Some(@{docitem ''lina''}::author)",level="Some 3"]
section*["relatedwork"::tc,main_author="Some(@{author ''lina''}::author)",level="Some 3"]
\<open>Related work\<close>

text\<open>
@@ -949,7 +946,7 @@ restrictions on the structure of components. None of our paradigmatic examples c
be automatically proven with any of the discussed SMT techniques without restrictions.
\<close>

section*["conclusion"::conclusion,main_author="Some(@{docitem ''bu''}::author)"]\<open>Conclusion\<close>
section*["conclusion"::conclusion,main_author="Some(@{author ''bu''}::author)"]\<open>Conclusion\<close>
text\<open>We presented a formalisation of the most comprehensive semantic model for \<^csp>, a 'classical'
language for the specification and analysis of concurrent systems studied in a rich body of
literature. For this purpose, we ported @{cite "tej.ea:corrected:1997"} to a modern version
@@ -203,11 +203,14 @@ print_doc_classes
print_doc_items

ML\<open>
map fst (Name_Space.dest_table (DOF_core.get_onto_classes \<^context>));

let val class_ids_so_far = ["Conceptual.A", "Conceptual.B", "Conceptual.C", "Conceptual.D",
                            "Conceptual.E", "Conceptual.F", "Conceptual.G", "Conceptual.M",
                            "Isa_COL.figure", "Isa_COL.chapter", "Isa_COL.figure2", "Isa_COL.section",
                            "Isa_COL.subsection", "Isa_COL.figure_group", "Isa_COL.text_element",
                            "Isa_COL.subsubsection", "Isa_COL.side_by_side_figure"]
                            "Isa_COL.paragraph", "Isa_COL.subsection", "Isa_COL.figure_group",
                            "Isa_COL.text_element", "Isa_COL.subsubsection",
                            "Isa_COL.side_by_side_figure"]
    val docclass_tab = map fst (Name_Space.dest_table (DOF_core.get_onto_classes \<^context>));
in @{assert} (class_ids_so_far = docclass_tab) end\<close>

@@ -0,0 +1,82 @@
theory
  COL_Test
imports
  "Isabelle_DOF_Unit_Tests_document"
begin

print_doc_items
print_doc_classes

section\<open>General Heading COL Elements\<close>

chapter*[S1::"chapter"]\<open>Chapter\<close>
text*[S1'::"chapter"]\<open>Chapter\<close>

section*[S2::"section"]\<open>Section\<close>
text*[S2'::"section"]\<open>Section\<close>

subsection*[S3::"subsection"]\<open>Subsection\<close>
text*[S3'::"subsection"]\<open>Subsection\<close>

subsubsection*[S4::"subsubsection"]\<open>Subsubsection\<close>
text*[S4'::"subsubsection"]\<open>Subsubsection\<close>

paragraph*[S5::"paragraph"]\<open>Paragraph\<close>
text*[S5'::"paragraph"]\<open>Paragraph\<close>

section\<open>General Figure COL Elements\<close>

figure*[fig1_test::figure,spawn_columns=False,relative_width="95",src="''figures/A''"]
\<open> This is the label text \<close>

text*[fig2_test::figure, spawn_columns=False, relative_width="95", src="''figures/A''"
]\<open> This is the label text\<close>

text\<open>check @{figure fig1_test} cmp to @{figure fig2_test}\<close>

side_by_side_figure*["sbsfig1"::side_by_side_figure,
                     anchor="''Asub1''",
                     caption="''First caption.''",
                     relative_width="48",
                     src="''figures/A''",
                     anchor2="''Asub2''",
                     caption2="''Second caption.''",
                     relative_width2="47",
                     src2="''figures/B''"]\<open> Exploring text elements. \<close>

text*["sbsfig2"::side_by_side_figure,
      anchor="''fig:Asub1''",
      caption="''First caption.''",
      relative_width="48",
      src="''figures/A''",
      anchor2="''fig:Asub2''",
      caption2="''Second caption.''",
      relative_width2="47",
      src2="''figures/B''"]\<open>The global caption\<close>

text\<open>check @{side_by_side_figure sbsfig1} cmp to @{side_by_side_figure sbsfig2}
\autoref{Asub1} vs. \autoref{Asub2}
\autoref{fig:Asub1} vs. \autoref{fig:Asub2}
\<close>

(* And a side-chick ... *)

text*[inlinefig::figure2,
      caption="\<open>The Caption.\<close>"]
\<open>@{theory_text [display, margin = 5] \<open>lemma A :: "a \<longrightarrow> b"\<close>}\<close>

(*<*)
text*[inlinegraph::figure2,
      caption="\<open>Another \<sigma>\<^sub>i+2 Caption.\<close>"]
\<open>@{fig_content [display] (scale = 80, width=80, caption=\<open>This is \<^term>\<open>\<sigma>\<^sub>i+2\<close> \<dots>\<close>)
  \<open>document/figures/A.png\<close>}\<close>
(*>*)

(*<*)
end
(*>*)

@@ -15,6 +15,7 @@ session "Isabelle_DOF-Unit-Tests" = "Isabelle_DOF-Ontologies" +
    "Ontology_Matching_Example"
    "Cenelec_Test"
    "OutOfOrderPresntn"
    "COL_Test"
  document_files
    "root.bib"
    "figures/A.png"

@@ -21,7 +21,7 @@
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% begin: figure*
\NewEnviron{isamarkupfigure*}[1][]{\isaDof[env={figure},#1]{\BODY}}
\newisadof{figureDOTIsaUNDERSCORECOLDOTfigure}%
\newisadof{IsaUNDERSCORECOLDOTfigure}%
[label=,type=%
,IsaUNDERSCORECOLDOTfigureDOTrelativeUNDERSCOREwidth=%
,IsaUNDERSCORECOLDOTfigureDOTplacement=%
@@ -46,12 +46,31 @@
% end: figure*
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% begin: figure2*
\NewEnviron{isamarkupfigureTWO*}[1][]{\isaDof[env={figureTWO},#1]{\BODY}}
\newisadof{IsaUNDERSCORECOLDOTfigureTWO}%
[label=,type=%
,IsaUNDERSCORECOLDOTfigureDOTrelativeUNDERSCOREwidth=%
,IsaUNDERSCORECOLDOTfigureDOTplacement=%
,IsaUNDERSCORECOLDOTfigureDOTsrc=%
,IsaUNDERSCORECOLDOTfigureTWODOTcaption=%
,IsaUNDERSCORECOLDOTfigureDOTspawnUNDERSCOREcolumns=enum False True%
][1]{%
\begin{figure}[]
#1
\caption{\commandkey{IsaUNDERSCORECOLDOTfigureTWODOTcaption}}
\label{\commandkey{label}}%
\end{figure}
}
% end: figure2*
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% begin: side_by_side_figure*
\NewEnviron{isamarkupsideUNDERSCOREbyUNDERSCOREsideUNDERSCOREfigure*}[1][]{\isaDof[env={sideUNDERSCOREbyUNDERSCOREsideUNDERSCOREfigure},#1]{\BODY}}
\newisadof{sideUNDERSCOREbyUNDERSCOREsideUNDERSCOREfigureDOTIsaUNDERSCORECOLDOTsideUNDERSCOREbyUNDERSCOREsideUNDERSCOREfigure}%
\newisadof{IsaUNDERSCORECOLDOTsideUNDERSCOREbyUNDERSCOREsideUNDERSCOREfigure}%
[label=,type=%
,IsaUNDERSCORECOLDOTfigureDOTrelativeUNDERSCOREwidth=%
,IsaUNDERSCORECOLDOTfigureDOTplacement=%

@@ -24,9 +24,8 @@ text\<open> Building a fundamental infrastructure for common document elements s
theory Isa_COL
  imports Isa_DOF
  keywords "title*" "subtitle*"
           "chapter*" "section*"
           "chapter*" "section*" "paragraph*"
           "subsection*" "subsubsection*"
           "paragraph*" "subparagraph*"
           "figure*" "side_by_side_figure*" :: document_body

begin
@@ -57,6 +56,8 @@ doc_class "subsection" = text_element +
level :: "int option" <= "Some 2"
doc_class "subsubsection" = text_element +
level :: "int option" <= "Some 3"
doc_class "paragraph" = text_element +
level :: "int option" <= "Some 4"

subsection\<open>Ontological Macros\<close>
@@ -138,8 +139,7 @@ val _ = heading_command \<^command_keyword>\<open>chapter*\<close> "section head
val _ = heading_command \<^command_keyword>\<open>section*\<close> "section heading" (SOME (SOME 1));
val _ = heading_command \<^command_keyword>\<open>subsection*\<close> "subsection heading" (SOME (SOME 2));
val _ = heading_command \<^command_keyword>\<open>subsubsection*\<close> "subsubsection heading" (SOME (SOME 3));
val _ = heading_command \<^command_keyword>\<open>paragraph*\<close> "paragraph heading" (SOME (SOME 4));
val _ = heading_command \<^command_keyword>\<open>subparagraph*\<close> "subparagraph heading" (SOME (SOME 5));
val _ = heading_command \<^command_keyword>\<open>paragraph*\<close> "paragraph" (SOME (SOME 4));

end
end
@@ -154,7 +154,7 @@ datatype placement = pl_h | (*here*)
pl_hb (*here -> bottom*)

ML\<open> "side_by_side_figure" |> Name_Space.declared (DOF_core.get_onto_classes \<^context>
                           |> Name_Space.space_of_table)\<close>
                           |> Name_Space.space_of_table)\<close>

print_doc_classes

@@ -701,7 +701,7 @@ abbreviations:
\<^rail>\<open>
  ( ( @@{command "chapter*"}
    | @@{command "section*"} | @@{command "subsection*"} | @@{command "subsubsection*"}
    | @@{command "paragraph*"} | @@{command "subparagraph*"}
    | @@{command "paragraph*"}
    | @@{command "figure*"} | @@{command "side_by_side_figure*"}
    )
    \<newline>