revised introduction
parent a38d13198c · commit c5cdf5f826
@@ -11225,6 +11225,33 @@ abstract="Reasoning using process algebras often involves doing complex proofs,
isbn="978-1-4471-3182-3"
}

@book{books/daglib/0032976,
added-at = {2014-03-12T00:00:00.000+0100},
author = {Euzenat, Jérôme and Shvaiko, Pavel},
biburl = {https://www.bibsonomy.org/bibtex/28d5372a81f181d9d5a761ca12209cf39/dblp},
interhash = {fc55a5b84d114e38db0a0303cc1bd7da},
intrahash = {8d5372a81f181d9d5a761ca12209cf39},
isbn = {978-3-642-38720-3},
keywords = {dblp},
pages = {I-XVII, 1-511},
publisher = {Springer},
timestamp = {2015-06-18T09:49:52.000+0200},
title = {Ontology Matching, Second Edition.},
year = 2013
}

@misc{AFP-ref22,
title = "{A}rchive of {F}ormal {P}roofs",
author = "{M. Eberl and G. Klein and A. Lochbihler and
T. Nipkow and L. Paulson and R. Thiemann (eds)}",
howpublished = "\url{https://afp-isa.org}",
year = 2022,
note = "Accessed: 2018-12-06"
}

@article{HOL-CSP-AFP,
author = {Safouan Taha and Lina Ye and Burkhart Wolff},
title = {{HOL-CSP Version 2.0}},

@@ -51,37 +51,97 @@ abstract*[abs, keywordlist="[\<open>Ontologies\<close>,\<open>Formal Documents\<
section*[introheader::introduction,main_author="Some(@{author ''bu''})"]
\<open> Introduction \<close>
text*[introtext::introduction]\<open>
The linking of \<^emph>\<open>formal\<close> and \<^emph>\<open>informal\<close> information is perhaps the most pervasive challenge
in the digitization of knowledge and its propagation. Unsurprisingly, this problem reappears
in libraries of formalized mathematics and engineering such as the Isabelle Archive of
Formal Proofs @{cite "AFP-ref22"}, which passed the impressive mark of 650 articles,
written by 420 authors, at the beginning of 2022. Still, while the problem of logical consistency
even under system changes and pervasive theory evolution is technically solved via continuous
proof-checking, the problem of knowledge retrieval and of linking semi-formal explanations to
definitions and proofs remains largely unresolved.

The \<^emph>\<open>knowledge\<close> problem of the increasingly massive \<^emph>\<open>digital information\<close> available
incites numerous research efforts summarized under the labels ``semantic web'',
``integrated document management'', or any form of advanced ``semantic'' text processing.
The central role in these technologies, which are increasingly important in jurisprudence,
medical research, and the life sciences in order to tame their respective publication tsunamis,
is played by \<^emph>\<open>document ontologies\<close>, \<^ie>, a machine-readable form of meta-data attached to
document elements as well as their document discourse. In order to make these techniques
applicable to the area of \<^emph>\<open>formal theory development\<close>, the following is needed:
\<^item> a general mechanism to define and develop \<^emph>\<open>domain-specific\<close> ontologies,
\<^item> this mechanism should be adapted to entities occurring in formal theories,
  \<^ie>, provide built-in support for types, terms, theorems, proofs, etc.,
\<^item> ways to attach the meta-data generated by ontologies to document elements,
  as ``deeply'' as possible, together with strong validation checks,
\<^item> a smooth integration into the theory document development process, and
\<^item> ways to relate ontologies and ontology-conform documents along different
  ontologies by \<^emph>\<open>ontological mappings\<close> and \<^emph>\<open>data translations\<close>
  @{footnote \<open>We follow throughout this text the terminology established in
  @{cite "books/daglib/0032976"}, pp. 39 ff.\<close>}.
\<close>

text\<open>Recently, \<^dof> @{cite "brucker.ea:isabelledof:2019" and "brucker.ea:isabelle-ontologies:2018"}
\<^footnote>\<open>The official releases are available on \<^url>\<open>https://zenodo.org/record/3370483#.YfWg6S-B1qt\<close>, the
developer version on \<^url>\<open>https://github.com/logicalhacking/Isabelle_DOF\<close>\<close>
has been designed as an Isabelle component that attempts to answer these needs.
From ontology definitions directly integrated into Isabelle theories, \<^dof> generates
typed meta-data that may be attached to a number of document elements and that is
validated ``on-the-fly'' during the general continuous type- and proof-checking process
in the Prover IDE (Isabelle/PIDE). Thus, \<^dof> profits from and extends Isabelle's
document-centric view on code, definitions, proofs, text elements, and other modeling
elements.

In more detail, the schema of \<^dof> text elements --- whose syntax closely follows
the corresponding standard Isabelle elements, except that the latter do not
possess the brackets with the meta-data --- looks as follows:
@{theory_text [display,indent=10, margin=70]
\<open>
text*[label::classid, attr\<^sub>1=E\<^sub>1, ... attr\<^sub>n=E\<^sub>n]\<open> some formal text \<close>
\<close>}
while code-elements have the form:
@{theory_text [display,indent=10, margin=70]
\<open>
ML*[label::classid, attr\<^sub>1=E\<^sub>1, ... attr\<^sub>n=E\<^sub>n]\<open> some SML code \<close>
\<close>}
which means that, for each element, a meta-data object is created and associated to it.
This meta-data can be referenced via its label and used in further computations in text-
or code-elements.
%; the details will be explained in the subsequent section.
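For concreteness, the class identifiers \<open>classid\<close> used above stem from an ontology
definition. A minimal sketch of such a definition (the class name, its parent, and the
attribute are purely illustrative, not taken from an existing ontology) could read:
@{theory_text [display,indent=10, margin=70]
\<open>
doc_class introduction = text_element +
   comment :: string
\<close>}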

Admittedly, Isabelle is not the first system that comes to one's mind when
writing a scientific paper, a book, or a larger technical documentation.
However, it has a typesetting system inside which is in the
tradition of document generation systems such as mkd, Document! X, Doxygen,
Javadoc, etc., and which embeds formal content elements such as formula pretty-prints
into the informal text. In Isabelle, these embedded meta-text elements, which use
references to the context, represent a form of machine-checked macro called \<^emph>\<open>antiquotation\<close>.

With standard Isabelle antiquotations, for example, the following text element
inside the integrated source will appear in Isabelle/PIDE as follows:
@{theory_text [display,indent=10, margin=70]
\<open>
text\<open> According to the reflexivity axiom @{thm refl}, we obtain in \<Gamma>
for @{term "fac 5"} the result @{value "fac 5"}.\<close>
\<close>}
In the generated \<^LaTeX> or HTML output, it is shown as:
@{theory_text [display,indent=10, margin=70]
\<open>According to the reflexivity axiom \<open>x = x\<close>, we obtain in \<Gamma> for \<open>fac 5\<close> the result \<open>120\<close>.\<close>
}
where the meta-texts \<open>@{thm refl}\<close> ("give the presentation of theorem 'refl'"),
\<open>@{term "fac 5"}\<close> ("parse and type-check 'fac 5' in the previous logical context")
and \<open>@{value "fac 5"}\<close> ("compile and execute 'fac 5' according to its
definitions") are built-in antiquotations in \<^hol>.

One distinguishing feature of \<^dof> is that specific antiquotations are generated from
an ontology rather than being hard-coded into the Isabelle system infrastructure.
\<close>

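text\<open>As an illustration of such generated antiquotations (a sketch only; the class name
\<open>introduction\<close> and the instance label are hypothetical), an ontology class gives rise to
an antiquotation of the same name, so that a reference such as
@{theory_text [display,indent=10, margin=70]
\<open>
text\<open> ... as explained in @{introduction \<open>introtext\<close>} ... \<close>
\<close>}
is validated to point to an existing instance of that class during type-checking.\<close>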
(*
text
\<open>

%too long !
This leads to an evolution strategy we call "integrate the document, strengthen the

@@ -104,6 +164,10 @@ has been designed as an Isabelle component that \<^emph>\<open>generates\<close>
from a more abstract description, namely an \<^emph>\<open>ontology\<close> that provides typed meta-data
and typed reference mechanisms inside text- and ML-contexts.

*)

text\<open>

In this paper, we extend prior versions of \<^dof> by
\<^enum> a new form of contexts, namely \<^emph>\<open>term contexts\<close>. Thus, annotations generated
  from \<^dof> may also occur in \<open>\<lambda>\<close>-terms used to denote meta-data, and