Compare commits

...

26 Commits

Author SHA1 Message Date
Achim D. Brucker b14c78f2ba Fixed URL to build artifacts. (ci/woodpecker/push/build: Pipeline was successful) 2022-03-25 10:25:29 +00:00
Achim D. Brucker ec99c91d35 Fixed URL to build artifacts. (ci/woodpecker/push/build: Pipeline was successful) 2022-03-25 10:21:38 +00:00
Achim D. Brucker fb1b0c6638 Generate PDF documentation. 2022-03-25 10:21:17 +00:00
Achim D. Brucker f9399a1283 Fixed secret name. (ci/woodpecker/push/build: Pipeline was successful) 2022-03-01 13:13:15 +00:00
Achim D. Brucker ec263ef544 Only deploy browser_info directory. (ci/woodpecker/push/build: Pipeline failed) 2022-03-01 13:13:15 +00:00
Achim D. Brucker def4a53102 Fixed build directory. (ci/woodpecker/push/build: Pipeline failed) 2022-03-01 13:13:15 +00:00
Achim D. Brucker cabdfd55a1 Moved from Jenkins to Woodpecker. (ci/woodpecker/push/build: Pipeline failed) 2022-03-01 13:13:15 +00:00
Achim D. Brucker 61e34223aa Renamed upstream repository. 2021-12-29 09:23:15 +00:00
Achim D. Brucker 03cc0ac118 Migration to Isabelle 2021-1 (based on afp-2021-12-28). 2021-12-28 09:00:23 +00:00
Achim D. Brucker f04c93ce06 Added installation instructions. (afp-mirror/UPF/pipeline/head: There was a failure building this commit) 2020-05-23 11:32:35 +01:00
Achim D. Brucker 5a35bc7ca0 Migration to Isabelle 2020. 2020-05-23 10:54:07 +01:00
Achim D. Brucker ae982dface Import of official AFP entry for Isabelle 2019. (afp-mirror/UPF/master: This commit looks good) 2019-06-22 22:49:04 +01:00
Achim D. Brucker 90520c07fe Renamed session to avoid conflict with activated AFP. (afp-mirror/UPF/master: This commit looks good) 2019-01-05 17:47:38 +00:00
Achim D. Brucker 0e44f9d23d Initial commit. 2019-01-05 17:45:30 +00:00
Achim D. Brucker 270d675500 Import of official AFP entry for Isabelle 2018. 2018-12-22 12:58:34 +00:00
Achim D. Brucker 2766cec66c Fixed markdown. 2018-07-04 02:02:00 +01:00
Achim D. Brucker 07f8d47edc Added master repository URL. 2018-07-02 08:42:38 +01:00
Achim D. Brucker ccc6ea8841 Added SPDX License Identifier. 2018-07-02 08:41:35 +01:00
Achim D. Brucker a2d9cc26e4 Fixed markdown. 2018-07-02 08:41:06 +01:00
Achim D. Brucker f6ae1d216e Import of current (Isabelle 2017) release of UPF. 2017-12-01 23:07:25 +00:00
Achim D. Brucker 0108e7575b Removed references to system generated names. 2017-01-08 22:12:57 +00:00
Achim D. Brucker 23edf34523 Updated indentation. 2017-01-08 20:03:39 +00:00
Achim D. Brucker d28d814621 Added publications. 2017-01-08 12:00:38 +00:00
Achim D. Brucker dbe48c1ef6 Clarified file structure. Official AFP entry is now in directory UPF. 2017-01-07 00:05:52 +00:00
Achim D. Brucker 635bc0f990 Import of current (Isabelle 2016-1) release of UPF. 2016-12-26 11:43:16 +00:00
Achim D. Brucker 243939c32e Import of current (Isabelle 2016) release of UPF. 2016-08-10 10:43:34 +01:00
23 changed files with 1093 additions and 935 deletions

.woodpecker/README.md (new file, 13 additions)

@@ -0,0 +1,13 @@
# Continuous Build and Release Setup
[![status-badge](https://ci.logicalhacking.com/api/badges/afp-mirror/UPF/status.svg)](https://ci.logicalhacking.com/afp-mirror/UPF)
This directory contains the CI configuration for the [Woodpecker CI](https://woodpecker-ci.org/).
It may also contain additional tools and scripts that are useful for preparing a release.
## Generated Artifacts
### Latest Build
* [browser_info](https://artifacts.logicalhacking.com/ci/afp-mirror/UPF/main/latest/browser_info/AFP/UPF-devel/)
* [aux files](https://artifacts.logicalhacking.com/ci/afp-mirror/UPF/main/latest/)
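
The artifact links above are plain directory trees on the artifact server. As a rough sketch (assuming direct HTTPS access and directory listings are enabled, which this README does not state), the latest build output could be checked or mirrored from a shell:

```console
# HEAD request to check that the latest browser_info is being served (hypothetical usage)
achim@logicalhacking:~$ curl -I https://artifacts.logicalhacking.com/ci/afp-mirror/UPF/main/latest/browser_info/AFP/UPF-devel/
# Recursively mirror the latest artifact directory into the current working directory
achim@logicalhacking:~$ wget -r -np -nH https://artifacts.logicalhacking.com/ci/afp-mirror/UPF/main/latest/
```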

.woodpecker/build.yml (new file, 23 additions)

@@ -0,0 +1,23 @@
pipeline:
build:
image: docker.io/logicalhacking/isabelle2021-1
commands:
- export ARTIFACT_DIR=$CI_WORKSPACE/.artifacts/$CI_REPO/$CI_BRANCH/$CI_BUILD_NUMBER/
- mkdir -p $ARTIFACT_DIR
- isabelle build -D UPF -o browser_info -o document=pdf
- export `isabelle getenv ISABELLE_HOME_USER`
- cp -r $ISABELLE_HOME_USER/browser_info $ARTIFACT_DIR
- cd $ARTIFACT_DIR
- cd ..
- ln -s * latest
deploy:
image: docker.io/drillster/drone-rsync
settings:
hosts: [ "ci.logicalhacking.com"]
port: 22
source: .artifacts/$CI_REPO_OWNER/*
target: $CI_REPO_OWNER
include: [ "**.*"]
key:
from_secret: artifacts_ssh
user: artifacts
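
For reference, the build step above can be reproduced outside the CI runner. This is a minimal sketch, assuming a local Isabelle 2021-1 installation on the PATH (matching the docker.io/logicalhacking/isabelle2021-1 image) and a checkout of this repository as the working directory; it runs the same commands as the build step, without the artifact staging and deployment:

```console
# Build the UPF session with browser info and PDF document, as the CI build step does
achim@logicalhacking:~/UPF$ isabelle build -D UPF -o browser_info -o document=pdf
# Locate the generated browser_info, mirroring how the pipeline finds it before copying
achim@logicalhacking:~/UPF$ export `isabelle getenv ISABELLE_HOME_USER`
achim@logicalhacking:~/UPF$ ls $ISABELLE_HOME_USER/browser_info
```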

CITATION (new file, 37 additions)

@@ -0,0 +1,37 @@
To cite the use of this formal theory, please use
Achim D. Brucker, Lukas Brügger, and Burkhart Wolff. The Unified
Policy Framework (UPF). In Archive of Formal
Proofs, 2014. http://www.isa-afp.org/entries/UPF.shtml, Formal proof
development
A BibTeX entry for LaTeX users is
@Article{ brucker.ea:upf:2014,
abstract = {We present the Unified Policy Framework (UPF), a
generic framework for modelling security
(access-control) policies. UPF emphasizes the view
that a policy is a policy decision function that
grants or denies access to resources, permissions,
etc. In other words, instead of modelling the
relations of permitted or prohibited requests
directly, we model the concrete function that
implements the policy decision point in a
system. In more detail, UPF is based on the
following four principles: 1) Functional
representation of policies, 2) No conflicts are
possible, 3) Three-valued decision type (allow,
deny, undefined), 4) Output type not containing the
decision only.},
author = {Achim D. Brucker and Lukas Br{\"u}gger and Burkhart Wolff},
date = {2014-11-28},
file = {https://www.brucker.ch/bibliography/download/2014/brucker.ea-upf-outline-2014.pdf},
filelabel = {Outline},
issn = {2150-914x},
journal = {Archive of Formal Proofs},
month = {sep},
note = {\url{http://www.isa-afp.org/entries/UPF.shtml}, Formal proof development},
pdf = {https://www.brucker.ch/bibliography/download/2014/brucker.ea-upf-2014.pdf},
title = {The Unified Policy Framework (UPF)},
url = {https://www.brucker.ch/bibliography/abstract/brucker.ea-upf-2014},
year = {2014},
}

README.md

@@ -1,4 +1,5 @@
# The Unified Policy Framework (UPF)
This git repository contains a local mirror of
[The Unified Policy Framework (UPF)](https://www.isa-afp.org/entries/UPF.shtml)
entry of the
@@ -8,10 +9,34 @@ The official AFP releases are tagged. Additionally, this repository
may contain extensions (i.e., a development version) that may be
submitted (as an update of the UPF entry) at a later stage.
## Installation
```console
achim@logicalhacking:~$ isabelle build -D UPF
```
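
For a run from scratch, the following sketch also clones the upstream repository first (see the Upstream Repository section below; the HTTPS clone URL with a `.git` suffix is an assumption) and additionally builds the PDF document, mirroring the `document=pdf` option used in the CI configuration:

```console
# Clone the upstream mirror (hypothetical invocation based on the repository URL listed below)
achim@logicalhacking:~$ git clone https://git.logicalhacking.com/afp-mirror/UPF.git
achim@logicalhacking:~$ cd UPF
# Build the UPF session together with its PDF documentation
achim@logicalhacking:~/UPF$ isabelle build -D UPF -o document=pdf
```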
## Authors
* [Achim D. Brucker](http://www.brucker.ch/)
* Lukas Brügger
* [Burkhart Wolff](https://www.lri.fr/~wolff/)
## License
This project is licensed under a 3-clause BSD-style license.
SPDX-License-Identifier: BSD-3-Clause
## Upstream Repository
The upstream git repository, i.e., the single source of truth, for this project is hosted
by the [Software Assurance & Security Research Team](https://logicalhacking.com) at
<https://git.logicalhacking.com/afp-mirror/UPF>.
## Publications
* Achim D. Brucker, Lukas Brügger, and Burkhart Wolff. The Unified
Policy Framework (UPF). In Archive of Formal
Proofs, 2014. http://www.isa-afp.org/entries/UPF.shtml, Formal proof
development

Analysis.thy

@@ -6,8 +6,9 @@
* This file is part of HOL-TestGen.
*
* Copyright (c) 2005-2012 ETH Zurich, Switzerland
* 2008-2014 Achim D. Brucker, Germany
* 2009-2014 Université Paris-Sud, France
* 2008-2015 Achim D. Brucker, Germany
* 2009-2017 Université Paris-Sud, France
* 2015-2017 The University of Sheffield, UK
*
* All rights reserved.
*
@@ -39,30 +40,29 @@
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
******************************************************************************)
(* $Id: Analysis.thy 10953 2014-11-24 11:23:40Z wolff $ *)
section{* Properties on Policies *}
section\<open>Properties on Policies\<close>
theory
Analysis
imports
ParallelComposition
SeqComposition
imports
ParallelComposition
SeqComposition
begin
text {*
text \<open>
In this theory, several standard policy properties are paraphrased in UPF terms.
*}
\<close>
subsection{* Basic Properties *}
subsection\<open>Basic Properties\<close>
subsubsection{* A Policy Has no Gaps *}
subsubsection\<open>A Policy Has no Gaps\<close>
definition gap_free :: "('a \<mapsto> 'b) \<Rightarrow> bool"
where "gap_free p = (dom p = UNIV)"
subsubsection{* Comparing Policies *}
text {* Policy p is more defined than q: *}
subsubsection\<open>Comparing Policies\<close>
text \<open>Policy p is more defined than q:\<close>
definition more_defined :: "('a \<mapsto> 'b) \<Rightarrow>('a \<mapsto> 'b) \<Rightarrow>bool"
where "more_defined p q = (dom q \<subseteq> dom p)"
@@ -71,10 +71,10 @@ definition strictly_more_defined :: "('a \<mapsto> 'b) \<Rightarrow>('a \<mapsto
where "strictly_more_defined p q = (dom q \<subset> dom p)"
lemma strictly_more_vs_more: "strictly_more_defined p q \<Longrightarrow> more_defined p q"
unfolding more_defined_def strictly_more_defined_def
by auto
unfolding more_defined_def strictly_more_defined_def
by auto
text{* Policy p is more permissive than q: *}
text\<open>Policy p is more permissive than q:\<close>
definition more_permissive :: "('a \<mapsto> 'b) \<Rightarrow> ('a \<mapsto> 'b) \<Rightarrow> bool" (infixl "\<sqsubseteq>\<^sub>A" 60)
where " p \<sqsubseteq>\<^sub>A q = (\<forall> x. (case q x of \<lfloor>allow y\<rfloor> \<Rightarrow> (\<exists> z. (p x = \<lfloor>allow z\<rfloor>))
| \<lfloor>deny y\<rfloor> \<Rightarrow> True
@@ -82,58 +82,60 @@ where " p \<sqsubseteq>\<^sub>A q = (\<forall> x. (case q x of \<lfloor>allow y
lemma more_permissive_refl : "p \<sqsubseteq>\<^sub>A p "
unfolding more_permissive_def
by(auto split : option.split decision.split)
unfolding more_permissive_def
by(auto split : option.split decision.split)
lemma more_permissive_trans : "p \<sqsubseteq>\<^sub>A p' \<Longrightarrow> p' \<sqsubseteq>\<^sub>A p'' \<Longrightarrow> p \<sqsubseteq>\<^sub>A p''"
unfolding more_permissive_def
apply(auto split : option.split decision.split)
apply(erule_tac x = x and
P = "\<lambda>x. case p'' x of \<bottom> \<Rightarrow> True
unfolding more_permissive_def
apply(auto split : option.split decision.split)
subgoal for x y
apply(erule_tac x = x and
P = "\<lambda>x. case p'' x of \<bottom> \<Rightarrow> True
| \<lfloor>allow y\<rfloor> \<Rightarrow> \<exists>z. p' x = \<lfloor>allow z\<rfloor>
| \<lfloor>deny y\<rfloor> \<Rightarrow> True" in allE)
apply(simp, elim exE)
by(erule_tac x = x in allE, simp)
apply(simp, elim exE)
by(erule_tac x = x in allE, simp)
done
text{* Policy p is more rejective than q: *}
text\<open>Policy p is more rejective than q:\<close>
definition more_rejective :: "('a \<mapsto> 'b) \<Rightarrow> ('a \<mapsto> 'b) \<Rightarrow> bool" (infixl "\<sqsubseteq>\<^sub>D" 60)
where " p \<sqsubseteq>\<^sub>D q = (\<forall> x. (case q x of \<lfloor>deny y\<rfloor> \<Rightarrow> (\<exists> z. (p x = \<lfloor>deny z\<rfloor>))
where " p \<sqsubseteq>\<^sub>D q = (\<forall> x. (case q x of \<lfloor>deny y\<rfloor> \<Rightarrow> (\<exists> z. (p x = \<lfloor>deny z\<rfloor>))
| \<lfloor>allow y\<rfloor> \<Rightarrow> True
| \<bottom> \<Rightarrow> True))"
lemma more_rejective_trans : "p \<sqsubseteq>\<^sub>D p' \<Longrightarrow> p' \<sqsubseteq>\<^sub>D p'' \<Longrightarrow> p \<sqsubseteq>\<^sub>D p''"
unfolding more_rejective_def
apply(auto split : option.split decision.split)
apply(erule_tac x = x and
P = "\<lambda>x. case p'' x of \<bottom> \<Rightarrow> True
unfolding more_rejective_def
apply(auto split : option.split decision.split)
subgoal for x y
apply(erule_tac x = x and
P = "\<lambda>x. case p'' x of \<bottom> \<Rightarrow> True
| \<lfloor>allow y\<rfloor> \<Rightarrow> True
| \<lfloor>deny y\<rfloor> \<Rightarrow> \<exists>z. p' x = \<lfloor>deny z\<rfloor>" in allE)
apply(simp, elim exE)
by(erule_tac x = x in allE, simp)
apply(simp, elim exE)
by(erule_tac x = x in allE, simp)
done
lemma more_rejective_refl : "p \<sqsubseteq>\<^sub>D p "
unfolding more_rejective_def
by(auto split : option.split decision.split)
unfolding more_rejective_def
by(auto split : option.split decision.split)
lemma "A\<^sub>f f \<sqsubseteq>\<^sub>A p"
unfolding more_permissive_def allow_all_fun_def allow_pfun_def
by(auto split: option.split decision.split)
lemma "A\<^sub>I \<sqsubseteq>\<^sub>A p"
unfolding more_permissive_def allow_all_fun_def allow_pfun_def allow_all_id_def
by(auto split: option.split decision.split)
subsection{* Combined Data-Policy Refinement *}
subsection\<open>Combined Data-Policy Refinement\<close>
definition policy_refinement ::
"('a \<mapsto> 'b) \<Rightarrow> ('a' \<Rightarrow> 'a) \<Rightarrow>('b' \<Rightarrow> 'b) \<Rightarrow> ('a' \<mapsto> 'b') \<Rightarrow> bool"
("_ \<sqsubseteq>\<^bsub>_\<^esub>\<^sub>,\<^bsub>_\<^esub> _" [50,50,50,50]50)
where "p \<sqsubseteq>\<^bsub>abs\<^sub>a\<^esub>\<^sub>,\<^bsub>abs\<^sub>b\<^esub> q \<equiv>
"('a \<mapsto> 'b) \<Rightarrow> ('a' \<Rightarrow> 'a) \<Rightarrow>('b' \<Rightarrow> 'b) \<Rightarrow> ('a' \<mapsto> 'b') \<Rightarrow> bool"
("_ \<sqsubseteq>\<^bsub>_\<^esub>\<^sub>,\<^bsub>_\<^esub> _" [50,50,50,50]50)
where "p \<sqsubseteq>\<^bsub>abs\<^sub>a\<^esub>\<^sub>,\<^bsub>abs\<^sub>b\<^esub> q \<equiv>
(\<forall> a. case p a of
\<bottom> \<Rightarrow> True
| \<lfloor>allow y\<rfloor> \<Rightarrow> (\<forall> a'\<in>{x. abs\<^sub>a x=a}.
@@ -142,69 +144,72 @@ where "p \<sqsubseteq>\<^bsub>abs\<^sub>a\<^esub>\<^sub>,\<^bsub>abs\<^sub>b
| \<lfloor>deny y\<rfloor> \<Rightarrow> (\<forall> a'\<in>{x. abs\<^sub>a x=a}.
\<exists> b'. q a' = \<lfloor>deny b'\<rfloor>
\<and> abs\<^sub>b b' = y))"
theorem polref_refl: "p \<sqsubseteq>\<^bsub>id\<^esub>\<^sub>,\<^bsub>id\<^esub> p"
unfolding policy_refinement_def
by(auto split: option.split decision.split)
unfolding policy_refinement_def
by(auto split: option.split decision.split)
theorem polref_trans:
assumes A: "p \<sqsubseteq>\<^bsub>f\<^esub>\<^sub>,\<^bsub>g\<^esub> p'"
and B: "p' \<sqsubseteq>\<^bsub>f'\<^esub>\<^sub>,\<^bsub>g'\<^esub> p''"
shows "p \<sqsubseteq>\<^bsub>f o f'\<^esub>\<^sub>,\<^bsub>g o g'\<^esub> p''"
apply(insert A B)
unfolding policy_refinement_def
apply(auto split: option.split decision.split simp: o_def)
apply(erule_tac x="f (f' a')" in allE, simp)
apply(erule_tac x="f' a'" in allE, auto)
apply(erule_tac x=" (f' a')" in allE, auto)
apply(erule_tac x="f (f' a')" in allE, simp)
apply(erule_tac x="f' a'" in allE, auto)
apply(erule_tac x=" (f' a')" in allE, auto)
done
subsection {* Equivalence of Policies *}
subsubsection{* Equivalence over domain D *}
assumes A: "p \<sqsubseteq>\<^bsub>f\<^esub>\<^sub>,\<^bsub>g\<^esub> p'"
and B: "p' \<sqsubseteq>\<^bsub>f'\<^esub>\<^sub>,\<^bsub>g'\<^esub> p''"
shows "p \<sqsubseteq>\<^bsub>f o f'\<^esub>\<^sub>,\<^bsub>g o g'\<^esub> p''"
apply(insert A B)
unfolding policy_refinement_def
apply(auto split: option.split decision.split simp: o_def)[1]
subgoal for a a'
apply(erule_tac x="f (f' a')" in allE, simp)[1]
apply(erule_tac x="f' a'" in allE, auto)[1]
apply(erule_tac x=" (f' a')" in allE, auto)[1]
done
subgoal for a a'
apply(erule_tac x="f (f' a')" in allE, simp)[1]
apply(erule_tac x="f' a'" in allE, auto)[1]
apply(erule_tac x=" (f' a')" in allE, auto)[1]
done
done
subsection \<open>Equivalence of Policies\<close>
subsubsection\<open>Equivalence over domain D\<close>
definition p_eq_dom :: "('a \<mapsto> 'b) \<Rightarrow> 'a set \<Rightarrow> ('a \<mapsto> 'b) \<Rightarrow>bool" ("_ \<approx>\<^bsub>_\<^esub> _" [60,60,60]60)
where "p \<approx>\<^bsub>D\<^esub> q = (\<forall>x\<in>D. p x = q x)"
text{* p and q have no conflicts *}
where "p \<approx>\<^bsub>D\<^esub> q = (\<forall>x\<in>D. p x = q x)"
text\<open>p and q have no conflicts\<close>
definition no_conflicts :: "('a \<mapsto> 'b) \<Rightarrow>('a \<mapsto> 'b) \<Rightarrow>bool" where
"no_conflicts p q = (dom p = dom q \<and> (\<forall>x\<in>(dom p).
"no_conflicts p q = (dom p = dom q \<and> (\<forall>x\<in>(dom p).
(case p x of \<lfloor>allow y\<rfloor> \<Rightarrow> (\<exists>z. q x = \<lfloor>allow z\<rfloor>)
| \<lfloor>deny y\<rfloor> \<Rightarrow> (\<exists>z. q x = \<lfloor>deny z\<rfloor>))))"
lemma policy_eq:
assumes p_over_qA: "p \<sqsubseteq>\<^sub>A q "
and q_over_pA: "q \<sqsubseteq>\<^sub>A p"
and p_over_qD: "q \<sqsubseteq>\<^sub>D p"
and q_over_pD: "p \<sqsubseteq>\<^sub>D q"
and dom_eq: "dom p = dom q"
shows "no_conflicts p q"
and q_over_pA: "q \<sqsubseteq>\<^sub>A p"
and p_over_qD: "q \<sqsubseteq>\<^sub>D p"
and q_over_pD: "p \<sqsubseteq>\<^sub>D q"
and dom_eq: "dom p = dom q"
shows "no_conflicts p q"
apply (insert p_over_qA q_over_pA p_over_qD q_over_pD dom_eq)
apply (simp add: no_conflicts_def more_permissive_def more_rejective_def
split: option.splits decision.splits)
split: option.splits decision.splits)
apply (safe)
apply (metis domI domIff dom_eq)
apply (metis)+
done
subsubsection{* Miscellaneous *}
apply (metis domI domIff dom_eq)
apply (metis)+
done
subsubsection\<open>Miscellaneous\<close>
lemma dom_inter: "\<lbrakk>dom p \<inter> dom q = {}; p x = \<lfloor>y\<rfloor>\<rbrakk> \<Longrightarrow> q x = \<bottom>"
by (auto)
lemma dom_eq: "dom p \<inter> dom q = {} \<Longrightarrow> p \<Oplus>\<^sub>A q = p \<Oplus>\<^sub>D q"
unfolding override_A_def override_D_def
by (rule ext, auto simp: dom_def split: prod.splits option.splits decision.splits )
lemma dom_split_alt_def : "(f, g) \<Delta> p = (dom(p \<triangleright> Allow) \<triangleleft> (A\<^sub>f f)) \<Oplus> (dom(p \<triangleright> Deny) \<triangleleft> (D\<^sub>f g))"
apply (rule ext)
apply (simp add: dom_split2_def Allow_def Deny_def dom_restrict_def
deny_all_fun_def allow_all_fun_def map_add_def)
deny_all_fun_def allow_all_fun_def map_add_def)
apply (simp split: option.splits decision.splits)
apply (auto simp: map_add_def o_def deny_pfun_def ran_restrict_def image_def)
done
done
end

ElementaryPolicies.thy

@@ -6,8 +6,9 @@
* This file is part of HOL-TestGen.
*
* Copyright (c) 2005-2012 ETH Zurich, Switzerland
* 2008-2014 Achim D. Brucker, Germany
* 2009-2014 Université Paris-Sud, France
* 2008-2015 Achim D. Brucker, Germany
* 2009-2017 Université Paris-Sud, France
* 2015-2017 The University of Sheffield, UK
*
* All rights reserved.
*
@@ -39,22 +40,21 @@
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
******************************************************************************)
(* $Id: ElementaryPolicies.thy 10945 2014-11-21 12:50:43Z wolff $ *)
section{* Elementary Policies *}
section\<open>Elementary Policies\<close>
theory
ElementaryPolicies
imports
UPFCore
imports
UPFCore
begin
text{*
text\<open>
In this theory, we introduce the elementary policies of UPF that build the basis
for more complex policies. These complex policies, respectively, embedding of
well-known access control or security models, are built by composing the elementary
policies defined in this theory.
*}
\<close>
subsection{* The Core Policy Combinators: Allow and Deny Everything *}
subsection\<open>The Core Policy Combinators: Allow and Deny Everything\<close>
definition
deny_pfun :: "('\<alpha> \<rightharpoonup>'\<beta>) \<Rightarrow> ('\<alpha> \<mapsto> '\<beta>)" ("AllD")
@@ -70,48 +70,50 @@ definition
\<lfloor>y\<rfloor> \<Rightarrow> \<lfloor>allow (y)\<rfloor>
|\<bottom> \<Rightarrow> \<bottom>)"
syntax (xsymbols)
syntax
"_allow_pfun" :: "('\<alpha> \<rightharpoonup>'\<beta>) \<Rightarrow> ('\<alpha> \<mapsto> '\<beta>)" ("A\<^sub>p")
translations
"A\<^sub>p f" \<rightleftharpoons> "AllA f"
syntax (xsymbols)
syntax
"_deny_pfun" :: "('\<alpha> \<rightharpoonup>'\<beta>) \<Rightarrow> ('\<alpha> \<mapsto> '\<beta>)" ("D\<^sub>p")
translations
"D\<^sub>p f" \<rightleftharpoons> "AllD f"
notation (xsymbols)
notation
"deny_pfun" (binder "\<forall>D" 10) and
"allow_pfun" (binder "\<forall>A" 10)
lemma AllD_norm[simp]: "deny_pfun (id o (\<lambda>x. \<lfloor>x\<rfloor>)) = (\<forall>Dx. \<lfloor>x\<rfloor>)"
by(simp add:id_def comp_def)
lemma AllD_norm2[simp]: "deny_pfun (Some o id) = (\<forall>Dx. \<lfloor>x\<rfloor>)"
by(simp add:id_def comp_def)
lemma AllA_norm[simp]: "allow_pfun (id o Some) = (\<forall>Ax. \<lfloor>x\<rfloor>)"
by(simp add:id_def comp_def)
lemma AllA_norm2[simp]: "allow_pfun (Some o id) = (\<forall>Ax. \<lfloor>x\<rfloor>)"
by(simp add:id_def comp_def)
lemma AllA_apply[simp]: "(\<forall>Ax. Some (P x)) x = \<lfloor>allow (P x)\<rfloor>"
by(simp add:allow_pfun_def)
lemma AllD_apply[simp]: "(\<forall>Dx. Some (P x)) x = \<lfloor>deny (P x)\<rfloor>"
by(simp add:deny_pfun_def)
lemma neq_Allow_Deny: "pf \<noteq> \<emptyset> \<Longrightarrow> (deny_pfun pf) \<noteq> (allow_pfun pf)"
apply (erule contrapos_nn)
apply (rule ext)
apply (drule_tac x=x in fun_cong)
apply (auto simp: deny_pfun_def allow_pfun_def)
apply (case_tac "pf x = \<bottom>")
apply (auto)
done
subgoal for x
apply (drule_tac x=x in fun_cong)
apply (auto simp: deny_pfun_def allow_pfun_def)
apply (case_tac "pf x = \<bottom>")
apply (auto)
done
done
subsection{* Common Instances *}
subsection\<open>Common Instances\<close>
definition allow_all_fun :: "('\<alpha> \<Rightarrow> '\<beta>) \<Rightarrow> ('\<alpha> \<mapsto> '\<beta>)" ("A\<^sub>f")
where "allow_all_fun f = allow_pfun (Some o f)"
@@ -135,17 +137,15 @@ definition
deny_all :: "('\<alpha> \<mapsto> unit)" ("D\<^sub>U") where
"deny_all p = \<lfloor>deny ()\<rfloor>"
text{* ... and resulting properties: *}
text\<open>... and resulting properties:\<close>
lemma "A\<^sub>I \<Oplus> empty = A\<^sub>I"
apply simp
done
lemma "A\<^sub>f f \<Oplus> empty = A\<^sub>f f"
apply simp
done
lemma "allow_pfun empty = empty"
lemma "A\<^sub>I \<Oplus> Map.empty = A\<^sub>I"
by simp
lemma "A\<^sub>f f \<Oplus> Map.empty = A\<^sub>f f"
by simp
lemma "allow_pfun Map.empty = Map.empty"
apply (rule ext)
apply (simp add: allow_pfun_def)
done
@@ -158,12 +158,11 @@ lemma allow_left_cancel :"dom pf = UNIV \<Longrightarrow> (allow_pfun pf) \<Oplu
lemma deny_left_cancel :"dom pf = UNIV \<Longrightarrow> (deny_pfun pf) \<Oplus> x = (deny_pfun pf)"
apply (rule ext)+
apply (auto simp: deny_pfun_def option.splits)
done
by (auto simp: deny_pfun_def option.splits)
subsection{* Domain, Range, and Restrictions *}
subsection\<open>Domain, Range, and Restrictions\<close>
text{*
text\<open>
Since policies are essentially maps, we inherit the basic definitions for
domain and range on Maps: \\
\verb+Map.dom_def+ : @{thm Map.dom_def} \\
@@ -187,30 +186,31 @@ text{*
\item\verb+Map.dom_if+ @{thm Map.dom_if}
\item\verb+Map.dom_map_add+ @{thm Map.dom_map_add}
\end{itemize}
*}
\<close>
text{*
text\<open>
However, some properties are specific to policy concepts:
*}
\<close>
lemma sub_ran : "ran p \<subseteq> Allow \<union> Deny"
apply (auto simp: Allow_def Deny_def ran_def full_SetCompr_eq[symmetric])
apply (case_tac "x")
apply (simp_all)
apply (rename_tac \<alpha>)
apply (erule_tac x="\<alpha>" in allE)
apply (simp)
done
apply (auto simp: Allow_def Deny_def ran_def full_SetCompr_eq[symmetric])[1]
subgoal for x a
apply (case_tac "x")
apply (simp_all)
done
done
lemma dom_allow_pfun [simp]:"dom(allow_pfun f) = dom f"
apply (auto simp: allow_pfun_def)
apply (case_tac "f x", simp_all)
subgoal for x y
apply (case_tac "f x", simp_all)
done
done
lemma dom_allow_all: "dom(A\<^sub>f f) = UNIV"
by(auto simp: allow_all_fun_def o_def)
lemma dom_deny_pfun [simp]:"dom(deny_pfun f) = dom f"
apply (auto simp: deny_pfun_def)
apply (auto simp: deny_pfun_def)[1]
apply (case_tac "f x")
apply (simp_all)
done
@@ -221,37 +221,47 @@ lemma dom_deny_all: " dom(D\<^sub>f f) = UNIV"
lemma ran_allow_pfun [simp]:"ran(allow_pfun f) = allow `(ran f)"
apply (simp add: allow_pfun_def ran_def)
apply (rule set_eqI)
apply (auto)
apply (case_tac "f a")
apply (auto simp: image_def)
apply (rule_tac x=a in exI)
apply (simp)
done
apply (auto)[1]
subgoal for x a
apply (case_tac "f a")
apply (auto simp: image_def)[1]
apply (auto simp: image_def)[1]
done
subgoal for xa a
apply (rule_tac x=a in exI)
apply (simp)
done
done
lemma ran_allow_all: "ran(A\<^sub>f id) = Allow"
apply (simp add: allow_all_fun_def Allow_def o_def)
apply (rule set_eqI)
apply (auto simp: image_def ran_def)
done
done
lemma ran_deny_pfun[simp]: "ran(deny_pfun f) = deny ` (ran f)"
apply (simp add: deny_pfun_def ran_def)
apply (rule set_eqI)
apply (auto)
apply (case_tac "f a")
apply (auto simp: image_def)
apply (rule_tac x=a in exI)
apply (simp)
done
apply (auto)[1]
subgoal for x a
apply (case_tac "f a")
apply (auto simp: image_def)[1]
apply (auto simp: image_def)[1]
done
subgoal for xa a
apply (rule_tac x=a in exI)
apply (simp)
done
done
lemma ran_deny_all: "ran(D\<^sub>f id) = Deny"
apply (simp add: deny_all_fun_def Deny_def o_def)
apply (rule set_eqI)
apply (auto simp: image_def ran_def)
done
done
text{*
text\<open>
Reasoning over \verb+dom+ is most crucial since it paves the way for simplification and
reordering of policies composed by override (i.e. by the normal left-to-right rule composition
method.
@@ -265,30 +275,34 @@ text{*
\item \verb+Map.map_add_upd_left+ @{thm Map.map_add_upd_left}
\end{itemize}
The latter rule also applies to allow- and deny-override.
*}
\<close>
definition dom_restrict :: "['\<alpha> set, '\<alpha>\<mapsto>'\<beta>] \<Rightarrow> '\<alpha>\<mapsto>'\<beta>" (infixr "\<triangleleft>" 55)
where "S \<triangleleft> p \<equiv> (\<lambda>x. if x \<in> S then p x else \<bottom>)"
lemma dom_dom_restrict[simp] : "dom(S \<triangleleft> p) = S \<inter> dom p"
apply (auto simp: dom_restrict_def)
apply (case_tac "x \<in> S")
apply (simp_all)
apply (case_tac "x \<in> S")
apply (simp_all)
done
subgoal for x y
apply (case_tac "x \<in> S")
apply (simp_all)
done
subgoal for x y
apply (case_tac "x \<in> S")
apply (simp_all)
done
done
lemma dom_restrict_idem[simp] : "(dom p) \<triangleleft> p = p"
apply (rule ext)
apply (auto simp: dom_restrict_def
dest: neq_commute[THEN iffD1,THEN not_None_eq [THEN iffD1]])
done
dest: neq_commute[THEN iffD1,THEN not_None_eq [THEN iffD1]])
done
lemma dom_restrict_inter[simp] : "T \<triangleleft> S \<triangleleft> p = T \<inter> S \<triangleleft> p"
apply (rule ext)
apply (auto simp: dom_restrict_def
dest: neq_commute[THEN iffD1,THEN not_None_eq [THEN iffD1]])
done
dest: neq_commute[THEN iffD1,THEN not_None_eq [THEN iffD1]])
done
definition ran_restrict :: "['\<alpha>\<mapsto>'\<beta>,'\<beta> decision set] \<Rightarrow> '\<alpha> \<mapsto>'\<beta>" (infixr "\<triangleright>" 55)
where "p \<triangleright> S \<equiv> (\<lambda>x. if p x \<in> (Some`S) then p x else \<bottom>)"
@@ -299,38 +313,39 @@ where "p \<triangleright>2 S \<equiv> (\<lambda>x. if (the (p x)) \<in> (S)
lemma "ran_restrict = ran_restrict2"
apply (rule ext)+
apply (simp add: ran_restrict_def ran_restrict2_def)
apply (case_tac "x xb")
apply simp_all
apply (metis inj_Some inj_image_mem_iff)
done
subgoal for x xa xb
apply (case_tac "x xb")
apply simp_all
apply (metis inj_Some inj_image_mem_iff)
done
done
lemma ran_ran_restrict[simp] : "ran(p \<triangleright> S) = S \<inter> ran p"
by(auto simp: ran_restrict_def image_def ran_def)
lemma ran_restrict_idem[simp] : "p \<triangleright> (ran p) = p"
apply (rule ext)
apply (auto simp: ran_restrict_def image_def Ball_def ran_def)
apply (erule contrapos_pp)
apply (auto dest!: neq_commute[THEN iffD1,THEN not_None_eq [THEN iffD1]])
done
done
lemma ran_restrict_inter[simp] : "(p \<triangleright> S) \<triangleright> T = p \<triangleright> T \<inter> S"
apply (rule ext)
apply (auto simp: ran_restrict_def
dest: neq_commute[THEN iffD1,THEN not_None_eq [THEN iffD1]])
done
dest: neq_commute[THEN iffD1,THEN not_None_eq [THEN iffD1]])
done
lemma ran_gen_A[simp] : "(\<forall>Ax. \<lfloor>P x\<rfloor>) \<triangleright> Allow = (\<forall>Ax. \<lfloor>P x\<rfloor>)"
apply (rule ext)
apply (auto simp: Allow_def ran_restrict_def)
done
done
lemma ran_gen_D[simp] : "(\<forall>Dx. \<lfloor>P x\<rfloor>) \<triangleright> Deny = (\<forall>Dx. \<lfloor>P x\<rfloor>)"
apply (rule ext)
apply (auto simp: Deny_def ran_restrict_def)
done
done
lemmas ElementaryPoliciesDefs = deny_pfun_def allow_pfun_def allow_all_fun_def deny_all_fun_def
allow_all_id_def deny_all_id_def allow_all_def deny_all_def

Monads.thy

@@ -6,8 +6,9 @@
* This file is part of HOL-TestGen.
*
* Copyright (c) 2005-2012 ETH Zurich, Switzerland
* 2009-2014 Univ. Paris-Sud, France
* 2009-2014 Achim D. Brucker, Germany
* 2009-2017 Univ. Paris-Sud, France
* 2009-2015 Achim D. Brucker, Germany
* 2015-2017 The University of Sheffield, UK
*
* All rights reserved.
*
@@ -39,17 +40,16 @@
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
******************************************************************************)
(* $Id: Monads.thy 10922 2014-11-10 15:41:49Z wolff $ *)
section {* Basic Monad Theory for Sequential Computations *}
section \<open>Basic Monad Theory for Sequential Computations\<close>
theory
Monads
imports
Main
imports
Main
begin
subsection{* General Framework for Monad-based Sequence-Test *}
text{*
subsection\<open>General Framework for Monad-based Sequence-Test\<close>
text\<open>
As such, Higher-order Logic as a purely functional specification formalism has no built-in
mechanism for state and state-transitions. Forms of testing involving state require therefore
explicit mechanisms for their treatment inside the logic; a well-known technique to model
@@ -67,9 +67,9 @@ text{*
\item non-deterministic i/o automata, and
\item labelled transition systems (LTS)
\end{enumerate}
*}
\<close>
subsubsection{* State Exception Monads *}
subsubsection\<open>State Exception Monads\<close>
type_synonym ('o, '\<sigma>) MON\<^sub>S\<^sub>E = "'\<sigma> \<rightharpoonup> ('o \<times> '\<sigma>)"
definition bind_SE :: "('o,'\<sigma>)MON\<^sub>S\<^sub>E \<Rightarrow> ('o \<Rightarrow> ('o','\<sigma>)MON\<^sub>S\<^sub>E) \<Rightarrow> ('o','\<sigma>)MON\<^sub>S\<^sub>E"
@@ -77,7 +77,7 @@ where "bind_SE f g = (\<lambda>\<sigma>. case f \<sigma> of None \<Rightarro
| Some (out, \<sigma>') \<Rightarrow> g out \<sigma>')"
notation bind_SE ("bind\<^sub>S\<^sub>E")
syntax (xsymbols)
syntax
"_bind_SE" :: "[pttrn,('o,'\<sigma>)MON\<^sub>S\<^sub>E,('o','\<sigma>)MON\<^sub>S\<^sub>E] \<Rightarrow> ('o','\<sigma>)MON\<^sub>S\<^sub>E"
("(2 _ \<leftarrow> _; _)" [5,8,8]8)
translations
@@ -103,29 +103,35 @@ definition if_SE :: "['\<sigma> \<Rightarrow> bool, ('\<alpha>, '\<sigma>)MON\<^
where "if_SE c E F = (\<lambda>\<sigma>. if c \<sigma> then E \<sigma> else F \<sigma>)"
notation if_SE ("if\<^sub>S\<^sub>E")
text{*
text\<open>
The standard monad theorems about unit and associativity:
*}
\<close>
lemma bind_left_unit : "(x \<leftarrow> return a; k) = k"
apply (simp add: unit_SE_def bind_SE_def)
done
done
lemma bind_right_unit: "(x \<leftarrow> m; return x) = m"
apply (simp add: unit_SE_def bind_SE_def)
apply (rule ext)
apply (case_tac "m \<sigma>")
apply ( simp_all)
done
subgoal for "\<sigma>"
apply (case_tac "m \<sigma>")
apply ( simp_all)
done
done
lemma bind_assoc: "(y \<leftarrow> (x \<leftarrow> m; k); h) = (x \<leftarrow> m; (y \<leftarrow> k; h))"
apply (simp add: unit_SE_def bind_SE_def)
apply (rule ext)
apply (case_tac "m \<sigma>", simp_all)
apply (case_tac "a", simp_all)
done
subgoal for "\<sigma>"
apply (case_tac "m \<sigma>", simp_all)
subgoal for a
apply (case_tac "a", simp_all)
done
done
done
text{*
text\<open>
In order to express test-sequences also on the object-level and to make our theory amenable to
formal reasoning over test-sequences, we represent them as lists of input and generalize the
bind-operator of the state-exception monad accordingly. The approach is straightforward, but
@@ -141,9 +147,9 @@ text{*
of side-conditions have to be expressed inside \HOL. From the user perspective, this will not
make much difference, since junk-data resulting from too weak typing can be ruled out by adopted
front-ends.
*}
\<close>
text{*
text\<open>
In order to express test-sequences also on the object-level and to make our theory amenable to
formal reasoning over test-sequences, we represent them as lists of input and generalize the
bind-operator of the state-exception monad accordingly. Thus, the notion of test-sequence
@@ -162,71 +168,77 @@ text{*
same operation will occur; this form of side-conditions have to be expressed
inside \HOL. From the user perspective, this will not make much difference,
since junk-data resulting from too weak typing can be ruled out by adopted
front-ends. *}
front-ends.\<close>
text{* Note that the subsequent notion of a test-sequence allows the io stepping
text\<open>Note that the subsequent notion of a test-sequence allows the io stepping
function (and the special case of a program under test) to stop execution
\emph{within} the sequence; such premature terminations are characterized by an
output list which is shorter than the input list. Note that our primary
notion of multiple execution ignores failure and reports failure
steps only by missing results ... *}
steps only by missing results ...\<close>
fun mbind :: "'\<iota> list \<Rightarrow> ('\<iota> \<Rightarrow> ('o,'\<sigma>) MON\<^sub>S\<^sub>E) \<Rightarrow> ('o list,'\<sigma>) MON\<^sub>S\<^sub>E"
where "mbind [] iostep \<sigma> = Some([], \<sigma>)" |
"mbind (a#H) iostep \<sigma> =
where "mbind [] iostep \<sigma> = Some([], \<sigma>)" |
"mbind (a#H) iostep \<sigma> =
(case iostep a \<sigma> of
None \<Rightarrow> Some([], \<sigma>)
| Some (out, \<sigma>') \<Rightarrow> (case mbind H iostep \<sigma>' of
None \<Rightarrow> Some([out],\<sigma>')
| Some(outs,\<sigma>'') \<Rightarrow> Some(out#outs,\<sigma>'')))"
text{* As mentioned, this definition is fail-safe; in case of an exception,
text\<open>As mentioned, this definition is fail-safe; in case of an exception,
the current state is maintained, no result is reported.
An alternative is the fail-strict variant @{text "mbind'"} defined below. *}
An alternative is the fail-strict variant \<open>mbind'\<close> defined below.\<close>
lemma mbind_unit [simp]: "mbind [] f = (return [])"
by(rule ext, simp add: unit_SE_def)
by(rule ext, simp add: unit_SE_def)
lemma mbind_nofailure [simp]: "mbind S f \<sigma> \<noteq> None"
apply (rule_tac x=\<sigma> in spec)
apply (induct S)
apply (auto simp:unit_SE_def)
apply (case_tac "f a x")
apply ( auto)
apply (erule_tac x="b" in allE)
apply (erule exE)
apply (erule exE)
apply (simp)
done
using mbind.simps(1) apply force
apply(simp add:unit_SE_def)
apply(safe)[1]
subgoal for a S x
apply (case_tac "f a x")
apply(simp)
apply(safe)[1]
subgoal for aa b
apply (erule_tac x="b" in allE)
apply (erule exE)+
apply (simp)
done
done
done
text{* The fail-strict version of @{text mbind'} looks as follows: *}
text\<open>The fail-strict version of \<open>mbind'\<close> looks as follows:\<close>
fun mbind' :: "'\<iota> list \<Rightarrow> ('\<iota> \<Rightarrow> ('o,'\<sigma>) MON\<^sub>S\<^sub>E) \<Rightarrow> ('o list,'\<sigma>) MON\<^sub>S\<^sub>E"
where "mbind' [] iostep \<sigma> = Some([], \<sigma>)" |
"mbind' (a#H) iostep \<sigma> =
(case iostep a \<sigma> of
None \<Rightarrow> None
| Some (out, \<sigma>') \<Rightarrow> (case mbind H iostep \<sigma>' of
None \<Rightarrow> None (* fail-strict *)
None \<Rightarrow> None \<comment> \<open>fail-strict\<close>
| Some(outs,\<sigma>'') \<Rightarrow> Some(out#outs,\<sigma>'')))"
text{*
text\<open>
mbind' as failure strict operator can be seen as a foldr on bind---if the types would
match \ldots
*}
\<close>
definition try_SE :: "('o,'\<sigma>) MON\<^sub>S\<^sub>E \<Rightarrow> ('o option,'\<sigma>) MON\<^sub>S\<^sub>E"
where "try_SE ioprog = (\<lambda>\<sigma>. case ioprog \<sigma> of
None \<Rightarrow> Some(None, \<sigma>)
| Some(outs, \<sigma>') \<Rightarrow> Some(Some outs, \<sigma>'))"
text{* In contrast @{term mbind} as a failure safe operator can roughly be seen
text\<open>In contrast @{term mbind} as a failure safe operator can roughly be seen
as a @{term foldr} on bind - try:
@{text "m1 ; try m2 ; try m3; ..."}. Note, that the rough equivalence only holds for
\<open>m1 ; try m2 ; try m3; ...\<close>. Note, that the rough equivalence only holds for
certain predicates in the sequence - length equivalence modulo None,
for example. However, if a conditional is added, the equivalence
can be made precise: *}
can be made precise:\<close>
lemma mbind_try:
@@ -237,15 +249,20 @@ lemma mbind_try:
else (x \<leftarrow> mbind S F; M (the a' # x)))"
apply (rule ext)
apply (simp add: bind_SE_def try_SE_def)
apply (case_tac "F a x")
apply (auto)
apply (simp add: bind_SE_def try_SE_def)
apply (case_tac "mbind S F b")
apply (auto)
done
subgoal for x
apply (case_tac "F a x")
apply(simp)
apply (safe)[1]
apply (simp add: bind_SE_def try_SE_def)
subgoal for aa b
apply (case_tac "mbind S F b")
apply (auto)
done
done
done
text{* On this basis, a symbolic evaluation scheme can be established
that reduces @{term mbind}-code to @{term try_SE}-code and If-cascades. *}
text\<open>On this basis, a symbolic evaluation scheme can be established
that reduces @{term mbind}-code to @{term try_SE}-code and If-cascades.\<close>
definition alt_SE :: "[('o, '\<sigma>)MON\<^sub>S\<^sub>E, ('o, '\<sigma>)MON\<^sub>S\<^sub>E] \<Rightarrow> ('o, '\<sigma>)MON\<^sub>S\<^sub>E" (infixl "\<sqinter>\<^sub>S\<^sub>E" 10)
@@ -257,18 +274,18 @@ where "malt_SE S = foldr alt_SE S fail\<^sub>S\<^sub>E"
notation malt_SE ("\<Sqinter>\<^sub>S\<^sub>E")
lemma malt_SE_mt [simp]: "\<Sqinter>\<^sub>S\<^sub>E [] = fail\<^sub>S\<^sub>E"
by(simp add: malt_SE_def)
by(simp add: malt_SE_def)
lemma malt_SE_cons [simp]: "\<Sqinter>\<^sub>S\<^sub>E (a # S) = (a \<sqinter>\<^sub>S\<^sub>E (\<Sqinter>\<^sub>S\<^sub>E S))"
by(simp add: malt_SE_def)
by(simp add: malt_SE_def)
subsubsection{* State-Backtrack Monads *}
text{*This subsection is still rudimentary and as such an interesting
subsubsection\<open>State-Backtrack Monads\<close>
text\<open>This subsection is still rudimentary and as such an interesting
formal analogue to the previous monad definitions. It is doubtful that it is
interesting for testing and as a computational structure at all.
Clearly more relevant is ``sequence'' instead of ``set,'' which would
rephrase Isabelle's internal tactic concept.
*}
\<close>
type_synonym ('o, '\<sigma>) MON\<^sub>S\<^sub>B = "'\<sigma> \<Rightarrow> ('o \<times> '\<sigma>) set"
@@ -281,7 +298,7 @@ definition unit_SB :: "'o \<Rightarrow> ('o, '\<sigma>)MON\<^sub>S\<^sub>B" ("
where "unit_SB e = (\<lambda>\<sigma>. {(e,\<sigma>)})"
notation unit_SB ("unit\<^sub>S\<^sub>B")
syntax (xsymbols) "_bind_SB" :: "[pttrn,('o,'\<sigma>)MON\<^sub>S\<^sub>B,('o','\<sigma>)MON\<^sub>S\<^sub>B] \<Rightarrow> ('o','\<sigma>)MON\<^sub>S\<^sub>B"
syntax "_bind_SB" :: "[pttrn,('o,'\<sigma>)MON\<^sub>S\<^sub>B,('o','\<sigma>)MON\<^sub>S\<^sub>B] \<Rightarrow> ('o','\<sigma>)MON\<^sub>S\<^sub>B"
("(2 _ := _; _)" [5,8,8]8)
translations
"x := f; g" \<rightleftharpoons> "CONST bind_SB f (% x . g)"
@@ -289,7 +306,7 @@ translations
lemma bind_left_unit_SB : "(x := returns a; m) = m"
apply (rule ext)
apply (simp add: unit_SB_def bind_SB_def)
done
done
lemma bind_right_unit_SB: "(x := m; returns x) = m"
apply (rule ext)
@@ -301,13 +318,13 @@ lemma bind_assoc_SB: "(y := (x := m; k); h) = (x := m; (y := k; h))"
apply (simp add: unit_SB_def bind_SB_def split_def)
done
subsubsection{* State Backtrack Exception Monad *}
text{*
subsubsection\<open>State Backtrack Exception Monad\<close>
text\<open>
The following combination of the previous two Monad-Constructions allows for the semantic
foundation of a simple generic assertion language in the style of Schirmer's Simpl-Language or
Rustan Leino's Boogie-PL language. The key is to use the exceptional element None for violations
of the assert-statement.
*}
\<close>
type_synonym ('o, '\<sigma>) MON\<^sub>S\<^sub>B\<^sub>E = "'\<sigma> \<Rightarrow> (('o \<times> '\<sigma>) set) option"
definition bind_SBE :: "('o,'\<sigma>)MON\<^sub>S\<^sub>B\<^sub>E \<Rightarrow> ('o \<Rightarrow> ('o','\<sigma>)MON\<^sub>S\<^sub>B\<^sub>E) \<Rightarrow> ('o','\<sigma>)MON\<^sub>S\<^sub>B\<^sub>E"
@@ -316,7 +333,7 @@ where "bind_SBE f g = (\<lambda>\<sigma>. case f \<sigma> of None \<Rightarr
in if None \<in> S' then None
else Some(\<Union> (the ` S'))))"
syntax (xsymbols) "_bind_SBE" :: "[pttrn,('o,'\<sigma>)MON\<^sub>S\<^sub>B\<^sub>E,('o','\<sigma>)MON\<^sub>S\<^sub>B\<^sub>E] \<Rightarrow> ('o','\<sigma>)MON\<^sub>S\<^sub>B\<^sub>E"
syntax "_bind_SBE" :: "[pttrn,('o,'\<sigma>)MON\<^sub>S\<^sub>B\<^sub>E,('o','\<sigma>)MON\<^sub>S\<^sub>B\<^sub>E] \<Rightarrow> ('o','\<sigma>)MON\<^sub>S\<^sub>B\<^sub>E"
("(2 _ :\<equiv> _; _)" [5,8,8]8)
translations
"x :\<equiv> f; g" \<rightleftharpoons> "CONST bind_SBE f (% x . g)"
@@ -341,69 +358,74 @@ notation havoc_SBE ("havoc\<^sub>S\<^sub>B\<^sub>E")
lemma bind_left_unit_SBE : "(x :\<equiv> returning a; m) = m"
apply (rule ext)
apply (simp add: unit_SBE_def bind_SBE_def)
done
done
lemma bind_right_unit_SBE: "(x :\<equiv> m; returning x) = m"
apply (rule ext)
apply (simp add: unit_SBE_def bind_SBE_def)
apply (case_tac "m x")
apply (simp_all add:Let_def)
apply (rule HOL.ccontr)
apply (simp add: Set.image_iff)
done
subgoal for x
apply (case_tac "m x")
apply (simp_all add:Let_def)
apply (rule HOL.ccontr)
apply (simp add: Set.image_iff)
done
done
lemmas aux = trans[OF HOL.neq_commute,OF Option.not_None_eq]
lemma bind_assoc_SBE: "(y :\<equiv> (x :\<equiv> m; k); h) = (x :\<equiv> m; (y :\<equiv> k; h))"
proof (rule ext, simp add: unit_SBE_def bind_SBE_def,
case_tac "m x", simp_all add: Let_def Set.image_iff, safe)
case goal1 then show ?case
by(rule_tac x="(a, b)" in bexI, simp_all)
proof (rule ext, simp add: unit_SBE_def bind_SBE_def, rename_tac x,
case_tac "m x", simp_all add: Let_def Set.image_iff, safe,goal_cases)
case (1 x a aa b ab ba a b)
then show ?case by(rule_tac x="(a, b)" in bexI, simp_all)
next
case goal2 then show ?case
apply (rule_tac x="(aa, b)" in bexI, simp_all add:split_def)
apply (erule_tac x="(aa,b)" in ballE)
apply (auto simp: aux image_def split_def intro!: rev_bexI)
done
case (2 x a aa b ab ba)
then show ?case
apply (rule_tac x="(aa, b)" in bexI, simp_all add:split_def)
apply (erule_tac x="(aa,b)" in ballE)
apply (auto simp: aux image_def split_def intro!: rev_bexI)
done
next
case goal3 then show ?case
by(rule_tac x="(a, b)" in bexI, simp_all)
case (3 x a a b)
then show ?case by(rule_tac x="(a, b)" in bexI, simp_all)
next
case goal4 then show ?case
apply (erule_tac Q="None = X" for X in contrapos_pp)
apply (erule_tac x="(aa,b)" and P="\<lambda> x. None \<noteq> split (\<lambda>out. k) x" in ballE)
apply (auto simp: aux (*Option.not_None_eq*) image_def split_def intro!: rev_bexI)
done
next
case goal5 then show ?case
apply simp apply ((erule_tac x="(ab,ba)" in ballE)+)
apply (simp_all add: aux (* Option.not_None_eq *), (erule exE)+, simp add:split_def)
apply (erule rev_bexI, case_tac "None\<in>(\<lambda>p. h(snd p))`y",auto simp:split_def)
done
case (4 x a aa b)
then show ?case
apply (erule_tac Q="None = X" for X in contrapos_pp)
apply (erule_tac x="(aa,b)" and P="\<lambda> x. None \<noteq> case_prod (\<lambda>out. k) x" in ballE)
apply (auto simp: aux image_def split_def intro!: rev_bexI)
done
next
case goal6 then show ?case
apply simp apply ((erule_tac x="(a,b)" in ballE)+)
apply (simp_all add: aux (* Option.not_None_eq *), (erule exE)+, simp add:split_def)
apply (erule rev_bexI, case_tac "None\<in>(\<lambda>p. h(snd p))`y",auto simp:split_def)
done
qed
case (5 x a aa b ab ba a b)
then show ?case apply simp apply ((erule_tac x="(ab,ba)" in ballE)+)
apply (simp_all add: aux, (erule exE)+, simp add:split_def)
apply (erule rev_bexI, case_tac "None\<in>(\<lambda>p. h(snd p))`y",auto simp:split_def)
done
next
case (6 x a aa b a b)
then show ?case apply simp apply ((erule_tac x="(a,b)" in ballE)+)
apply (simp_all add: aux, (erule exE)+, simp add:split_def)
apply (erule rev_bexI, case_tac "None\<in>(\<lambda>p. h(snd p))`y",auto simp:split_def)
done
qed
subsection{* Valid Test Sequences in the State Exception Monad *}
text{*
subsection\<open>Valid Test Sequences in the State Exception Monad\<close>
text\<open>
This is still an unstructured merge of executable monad concepts and specification oriented
high-level properties initiating test procedures.
*}
\<close>
definition valid_SE :: "'\<sigma> \<Rightarrow> (bool,'\<sigma>) MON\<^sub>S\<^sub>E \<Rightarrow> bool" (infix "\<Turnstile>" 15)
where "(\<sigma> \<Turnstile> m) = (m \<sigma> \<noteq> None \<and> fst(the (m \<sigma>)))"
text{*
text\<open>
This notation considers failures as valid---a definition inspired by I/O conformance.
Note that it is not possible to define this concept once and for all in a Hindley-Milner
type-system. For the moment, we present it only for the state-exception monad, although for
the same definition, this notion is applicable to other monads as well.
*}
\<close>
lemma syntax_test :
"\<sigma> \<Turnstile> (os \<leftarrow> (mbind \<iota>s ioprog); return(length \<iota>s = length os))"
@@ -413,37 +435,37 @@ oops
lemma valid_true[simp]: "(\<sigma> \<Turnstile> (s \<leftarrow> return x ; return (P s))) = P x"
by(simp add: valid_SE_def unit_SE_def bind_SE_def)
text{* Recall mbind\_unit for the base case. *}
text\<open>Recall mbind\_unit for the base case.\<close>
lemma valid_failure: "ioprog a \<sigma> = None \<Longrightarrow>
(\<sigma> \<Turnstile> (s \<leftarrow> mbind (a#S) ioprog ; M s)) =
(\<sigma> \<Turnstile> (M []))"
by(simp add: valid_SE_def unit_SE_def bind_SE_def)
by(simp add: valid_SE_def unit_SE_def bind_SE_def)
lemma valid_failure': "A \<sigma> = None \<Longrightarrow> \<not>(\<sigma> \<Turnstile> ((s \<leftarrow> A ; M s)))"
by(simp add: valid_SE_def unit_SE_def bind_SE_def)
by(simp add: valid_SE_def unit_SE_def bind_SE_def)
lemma valid_successElem: (* atomic boolean Monad "Query Functions" *)
"M \<sigma> = Some(f \<sigma>,\<sigma>) \<Longrightarrow> (\<sigma> \<Turnstile> M) = f \<sigma>"
by(simp add: valid_SE_def unit_SE_def bind_SE_def )
by(simp add: valid_SE_def unit_SE_def bind_SE_def )
lemma valid_success: "ioprog a \<sigma> = Some(b,\<sigma>') \<Longrightarrow>
(\<sigma> \<Turnstile> (s \<leftarrow> mbind (a#S) ioprog ; M s)) =
(\<sigma>' \<Turnstile> (s \<leftarrow> mbind S ioprog ; M (b#s)))"
apply (simp add: valid_SE_def unit_SE_def bind_SE_def )
apply (cases "mbind S ioprog \<sigma>'", auto)
done
done
lemma valid_success'': "ioprog a \<sigma> = Some(b,\<sigma>') \<Longrightarrow>
(\<sigma> \<Turnstile> (s \<leftarrow> mbind (a#S) ioprog ; return (P s))) =
(\<sigma>' \<Turnstile> (s \<leftarrow> mbind S ioprog ; return (P (b#s))))"
apply (simp add: valid_SE_def unit_SE_def bind_SE_def )
apply (cases "mbind S ioprog \<sigma>'")
apply (simp_all)
apply (simp_all)
apply (auto)
done
done
lemma valid_success': "A \<sigma> = Some(b,\<sigma>') \<Longrightarrow> (\<sigma> \<Turnstile> ((s \<leftarrow> A ; M s))) = (\<sigma>' \<Turnstile> (M b))"
by(simp add: valid_SE_def unit_SE_def bind_SE_def )
@@ -453,48 +475,47 @@ lemma valid_both: "(\<sigma> \<Turnstile> (s \<leftarrow> mbind (a#S) ioprog ; r
None \<Rightarrow> (\<sigma> \<Turnstile> (return (P [])))
| Some(b,\<sigma>') \<Rightarrow> (\<sigma>' \<Turnstile> (s \<leftarrow> mbind S ioprog ; return (P (b#s)))))"
apply (case_tac "ioprog a \<sigma>")
apply (simp_all add: valid_failure valid_success'' split: prod.splits)
done
apply (simp_all add: valid_failure valid_success'' split: prod.splits)
done
lemma valid_propagate_1 [simp]: "(\<sigma> \<Turnstile> (return P)) = (P)"
by(auto simp: valid_SE_def unit_SE_def)
lemma valid_propagate_2: "\<sigma> \<Turnstile> ((s \<leftarrow> A ; M s)) \<Longrightarrow> \<exists> v \<sigma>'. the(A \<sigma>) = (v,\<sigma>') \<and> \<sigma>' \<Turnstile> (M v)"
apply (auto simp: valid_SE_def unit_SE_def bind_SE_def)
apply (cases "A \<sigma>")
apply (simp_all)
apply (simp_all)
apply (drule_tac x="A \<sigma>" and f=the in arg_cong)
apply (simp)
apply (simp)
apply (rename_tac a b aa )
apply (rule_tac x="fst aa" in exI)
apply (rule_tac x="snd aa" in exI)
apply (auto)
done
by (auto)
lemma valid_propagate_2': "\<sigma> \<Turnstile> ((s \<leftarrow> A ; M s)) \<Longrightarrow> \<exists> a. (A \<sigma>) = Some a \<and> (snd a) \<Turnstile> (M (fst a))"
apply (auto simp: valid_SE_def unit_SE_def bind_SE_def)
apply (cases "A \<sigma>")
apply (simp_all)
apply (simp_all)
apply (simp_all split: prod.splits)
apply (drule_tac x="A \<sigma>" and f=the in arg_cong)
apply (simp)
apply (rename_tac a b aa x1 x2)
apply (rule_tac x="fst aa" in exI)
apply (rule_tac x="snd aa" in exI)
apply (auto)
done
done
lemma valid_propagate_2'': "\<sigma> \<Turnstile> ((s \<leftarrow> A ; M s)) \<Longrightarrow> \<exists> v \<sigma>'. A \<sigma> = Some(v,\<sigma>') \<and> \<sigma>' \<Turnstile> (M v)"
apply (auto simp: valid_SE_def unit_SE_def bind_SE_def)
apply (cases "A \<sigma>")
apply (simp_all)
apply (simp_all)
apply (drule_tac x="A \<sigma>" and f=the in arg_cong)
apply (simp)
apply (rename_tac a b aa )
apply (rule_tac x="fst aa" in exI)
apply (rule_tac x="snd aa" in exI)
apply (auto)
done
done
lemma valid_propoagate_3[simp]: "(\<sigma>\<^sub>0 \<Turnstile> (\<lambda>\<sigma>. Some (f \<sigma>, \<sigma>))) = (f \<sigma>\<^sub>0)"
by(simp add: valid_SE_def )
@@ -512,65 +533,66 @@ lemma assert_disch3 :" \<not> P \<sigma> \<Longrightarrow> \<not> (\<sigma> \<Tu
by(auto simp: bind_SE_def assert_SE_def valid_SE_def)
lemma assert_D : "(\<sigma> \<Turnstile> (x \<leftarrow> assert\<^sub>S\<^sub>E P; M x)) \<Longrightarrow> P \<sigma> \<and> (\<sigma> \<Turnstile> (M True))"
by(auto simp: bind_SE_def assert_SE_def valid_SE_def split: HOL.split_if_asm)
by(auto simp: bind_SE_def assert_SE_def valid_SE_def split: HOL.if_split_asm)
lemma assume_D : "(\<sigma> \<Turnstile> (x \<leftarrow> assume\<^sub>S\<^sub>E P; M x)) \<Longrightarrow> \<exists> \<sigma>. (P \<sigma> \<and> \<sigma> \<Turnstile> (M ()))"
apply (auto simp: bind_SE_def assume_SE_def valid_SE_def split: HOL.split_if_asm)
apply (auto simp: bind_SE_def assume_SE_def valid_SE_def split: HOL.if_split_asm)
apply (rule_tac x="Eps P" in exI)
apply (auto)
apply (rule_tac x="True" in exI, rule_tac x="b" in exI)
apply (subst Hilbert_Choice.someI)
apply (assumption)
apply (auto)[1]
subgoal for x a b
apply (rule_tac x="True" in exI, rule_tac x="b" in exI)
apply (subst Hilbert_Choice.someI)
apply (assumption)
apply (simp)
done
apply (subst Hilbert_Choice.someI,assumption)
apply (simp)
done
done
text{*
text\<open>
These two rules prove that the SE Monad in connection with the notion of valid sequence is
actually sufficient for a representation of a Boogie-like language. The SBE monad with explicit
sets of states---to be shown below---is strictly speaking not necessary (and will therefore
be discontinued in the development).
*}
\<close>
lemma if_SE_D1 : "P \<sigma> \<Longrightarrow> (\<sigma> \<Turnstile> if\<^sub>S\<^sub>E P B\<^sub>1 B\<^sub>2) = (\<sigma> \<Turnstile> B\<^sub>1)"
by(auto simp: if_SE_def valid_SE_def)
lemma if_SE_D2 : "\<not> P \<sigma> \<Longrightarrow> (\<sigma> \<Turnstile> if\<^sub>S\<^sub>E P B\<^sub>1 B\<^sub>2) = (\<sigma> \<Turnstile> B\<^sub>2)"
by(auto simp: if_SE_def valid_SE_def)
lemma if_SE_split_asm : " (\<sigma> \<Turnstile> if\<^sub>S\<^sub>E P B\<^sub>1 B\<^sub>2) = ((P \<sigma> \<and> (\<sigma> \<Turnstile> B\<^sub>1)) \<or> (\<not> P \<sigma> \<and> (\<sigma> \<Turnstile> B\<^sub>2)))"
by(cases "P \<sigma>",auto simp: if_SE_D1 if_SE_D2)
lemma if_SE_split : " (\<sigma> \<Turnstile> if\<^sub>S\<^sub>E P B\<^sub>1 B\<^sub>2) = ((P \<sigma> \<longrightarrow> (\<sigma> \<Turnstile> B\<^sub>1)) \<and> (\<not> P \<sigma> \<longrightarrow> (\<sigma> \<Turnstile> B\<^sub>2)))"
by(cases "P \<sigma>", auto simp: if_SE_D1 if_SE_D2)
lemma [code]: "(\<sigma> \<Turnstile> m) = (case (m \<sigma>) of None \<Rightarrow> False | (Some (x,y)) \<Rightarrow> x)"
apply (simp add: valid_SE_def)
apply (cases "m \<sigma> = None")
apply (simp_all)
apply (simp_all)
apply (insert not_None_eq)
apply (auto)
done
subsection{* Valid Test Sequences in the State Exception Backtrack Monad *}
text{*
done
subsection\<open>Valid Test Sequences in the State Exception Backtrack Monad\<close>
text\<open>
This is still an unstructured merge of executable monad concepts and specification oriented
high-level properties initiating test procedures.
*}
\<close>
definition valid_SBE :: "'\<sigma> \<Rightarrow> ('a,'\<sigma>) MON\<^sub>S\<^sub>B\<^sub>E \<Rightarrow> bool" (infix "\<Turnstile>\<^sub>S\<^sub>B\<^sub>E" 15)
where "\<sigma> \<Turnstile>\<^sub>S\<^sub>B\<^sub>E m \<equiv> (m \<sigma> \<noteq> None)"
text{*
where "\<sigma> \<Turnstile>\<^sub>S\<^sub>B\<^sub>E m \<equiv> (m \<sigma> \<noteq> None)"
text\<open>
This notation considers all non-failures as valid.
*}
\<close>
lemma assume_assert: "(\<sigma> \<Turnstile>\<^sub>S\<^sub>B\<^sub>E ( _ :\<equiv> assume\<^sub>S\<^sub>B\<^sub>E P ; assert\<^sub>S\<^sub>B\<^sub>E Q)) = (P \<sigma> \<longrightarrow> Q \<sigma>)"
by(simp add: valid_SBE_def assume_SBE_def assert_SBE_def bind_SBE_def)
lemma assert_intro: "Q \<sigma> \<Longrightarrow> \<sigma> \<Turnstile>\<^sub>S\<^sub>B\<^sub>E (assert\<^sub>S\<^sub>B\<^sub>E Q)"
by(simp add: valid_SBE_def assume_SBE_def assert_SBE_def bind_SBE_def)
(* legacy : lemmas valid_failure''=valid_failure *)
end

Normalisation.thy

@@ -5,8 +5,9 @@
* This file is part of HOL-TestGen.
*
* Copyright (c) 2005-2012 ETH Zurich, Switzerland
* 2008-2014 Achim D. Brucker, Germany
* 2009-2014 Université Paris-Sud, France
* 2008-2015 Achim D. Brucker, Germany
* 2009-2017 Université Paris-Sud, France
* 2015-2017 The University of Sheffield, UK
*
* All rights reserved.
*
@@ -38,206 +39,205 @@
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
******************************************************************************)
(* $Id: Normalisation.thy 10879 2014-10-26 11:35:31Z brucker $ *)
section{* Policy Transformations *}
section\<open>Policy Transformations\<close>
theory
Normalisation
imports
SeqComposition
ParallelComposition
imports
SeqComposition
ParallelComposition
begin
text{*
text\<open>
This theory provides the formalisations required for the transformation of UPF policies.
A typical usage scenario can be observed in the firewall case
study~\cite{brucker.ea:formal-fw-testing:2014}.
*}
subsection{* Elementary Operators *}
text{*
\<close>
subsection\<open>Elementary Operators\<close>
text\<open>
We start by providing several operators and theorems useful when reasoning about a list of
rules which should eventually be interpreted as combined using the standard override operator.
*}
text{*
\<close>
text\<open>
The following definition takes as argument a list of rules and returns a policy where the
rules are combined using the standard override operator.
*}
\<close>
definition list2policy::"('a \<mapsto> 'b) list \<Rightarrow> ('a \<mapsto> 'b)" where
"list2policy l = foldr (\<lambda> x y. (x \<Oplus> y)) l \<emptyset>"
text{*
text\<open>
Determine the position of element of a list.
*}
\<close>
fun position :: "'\<alpha> \<Rightarrow> '\<alpha> list \<Rightarrow> nat" where
"position a [] = 0"
|"(position a (x#xs)) = (if a = x then 1 else (Suc (position a xs)))"
text{*
"position a [] = 0"
|"(position a (x#xs)) = (if a = x then 1 else (Suc (position a xs)))"
text\<open>
Provides the first applied rule of a policy given as a list of rules.
*}
\<close>
fun applied_rule where
"applied_rule C a (x#xs) = (if a \<in> dom (C x) then (Some x)
else (applied_rule C a xs))"
|"applied_rule C a [] = None"
text {*
|"applied_rule C a [] = None"
text \<open>
The following is used if the list is constructed backwards.
*}
\<close>
definition applied_rule_rev where
"applied_rule_rev C a x = applied_rule C a (rev x)"
text{*
text\<open>
The following is a typical policy transformation. It can be applied to any type of policy and
removes all the rules from a policy with an empty domain. It takes two arguments: a semantic
interpretation function and a list of rules.
*}
\<close>
fun rm_MT_rules where
"rm_MT_rules C (x#xs) = (if dom (C x)= {}
"rm_MT_rules C (x#xs) = (if dom (C x)= {}
then rm_MT_rules C xs
else x#(rm_MT_rules C xs))"
|"rm_MT_rules C [] = []"
text {*
|"rm_MT_rules C [] = []"
text \<open>
The following invariant establishes that there are no rules with an empty domain in a list
of rules.
*}
\<close>
fun none_MT_rules where
"none_MT_rules C (x#xs) = (dom (C x) \<noteq> {} \<and> (none_MT_rules C xs))"
|"none_MT_rules C [] = True"
text{*
|"none_MT_rules C [] = True"
text\<open>
The following related invariant establishes that the policy has not a completely empty domain.
*}
\<close>
fun not_MT where
"not_MT C (x#xs) = (if (dom (C x) = {}) then (not_MT C xs) else True)"
|"not_MT C [] = False"
text{*
|"not_MT C [] = False"
text\<open>
Next, a few theorems about the two invariants and the transformation:
*}
\<close>
lemma none_MT_rules_vs_notMT: "none_MT_rules C p \<Longrightarrow> p \<noteq> [] \<Longrightarrow> not_MT C p"
apply (induct p)
apply (simp_all)
done
apply (simp_all)
done
lemma rmnMT: "none_MT_rules C (rm_MT_rules C p)"
apply (induct p)
apply (simp_all)
done
apply (simp_all)
done
lemma rmnMT2: "none_MT_rules C p \<Longrightarrow> (rm_MT_rules C p) = p"
apply (induct p)
apply (simp_all)
done
apply (simp_all)
done
lemma nMTcharn: "none_MT_rules C p = (\<forall> r \<in> set p. dom (C r) \<noteq> {})"
apply (induct p)
apply (simp_all)
done
apply (simp_all)
done
lemma nMTeqSet: "set p = set s \<Longrightarrow> none_MT_rules C p = none_MT_rules C s"
apply (simp add: nMTcharn)
done
done
lemma notMTnMT: "\<lbrakk>a \<in> set p; none_MT_rules C p\<rbrakk> \<Longrightarrow> dom (C a) \<noteq> {}"
apply (simp add: nMTcharn)
done
done
lemma none_MT_rulesconc: "none_MT_rules C (a@[b]) \<Longrightarrow> none_MT_rules C a"
apply (induct a)
apply (simp_all)
done
apply (simp_all)
done
lemma nMTtail: "none_MT_rules C p \<Longrightarrow> none_MT_rules C (tl p)"
apply (induct p)
apply (simp_all)
done
apply (simp_all)
done
lemma not_MTimpnotMT[simp]: "not_MT C p \<Longrightarrow> p \<noteq> []"
apply (auto)
done
done
lemma SR3nMT: "\<not> not_MT C p \<Longrightarrow> rm_MT_rules C p = []"
apply (induct p)
apply (auto simp: if_splits)
done
apply (auto simp: if_splits)
done
lemma NMPcharn: "\<lbrakk>a \<in> set p; dom (C a) \<noteq> {}\<rbrakk> \<Longrightarrow> not_MT C p"
apply (induct p)
apply (auto simp: if_splits)
done
apply (auto simp: if_splits)
done
lemma NMPrm: "not_MT C p \<Longrightarrow> not_MT C (rm_MT_rules C p)"
apply (induct p)
apply (simp_all)
done
text{* Next, a few theorems about applied\_rule: *}
apply (simp_all)
done
text\<open>Next, a few theorems about applied\_rule:\<close>
lemma mrconc: "applied_rule_rev C x p = Some a \<Longrightarrow> applied_rule_rev C x (b#p) = Some a"
proof (induct p rule: rev_induct)
case Nil show ?case using Nil
by (simp add: applied_rule_rev_def)
case Nil show ?case using Nil
by (simp add: applied_rule_rev_def)
next
case (snoc xs x) show ?case using snoc
apply (simp add: applied_rule_rev_def if_splits)
by (metis option.inject)
case (snoc xs x) show ?case using snoc
apply (simp add: applied_rule_rev_def if_splits)
by (metis option.inject)
qed
lemma mreq_end: "\<lbrakk>applied_rule_rev C x b = Some r; applied_rule_rev C x c = Some r\<rbrakk> \<Longrightarrow>
applied_rule_rev C x (a#b) = applied_rule_rev C x (a#c)"
by (simp add: mrconc)
by (simp add: mrconc)
lemma mrconcNone: "applied_rule_rev C x p = None \<Longrightarrow>
applied_rule_rev C x (b#p) = applied_rule_rev C x [b]"
proof (induct p rule: rev_induct)
case Nil show ?case
by (simp add: applied_rule_rev_def)
case Nil show ?case
by (simp add: applied_rule_rev_def)
next
case (snoc ys y) show ?case using snoc
proof (cases "x \<in> dom (C ys)")
case True show ?thesis using True snoc
by (auto simp: applied_rule_rev_def)
next
case False show ?thesis using False snoc
by (auto simp: applied_rule_rev_def)
case (snoc ys y) show ?case using snoc
proof (cases "x \<in> dom (C ys)")
case True show ?thesis using True snoc
by (auto simp: applied_rule_rev_def)
next
case False show ?thesis using False snoc
by (auto simp: applied_rule_rev_def)
qed
qed
qed
lemma mreq_endNone: "\<lbrakk>applied_rule_rev C x b = None; applied_rule_rev C x c = None\<rbrakk> \<Longrightarrow>
applied_rule_rev C x (a#b) = applied_rule_rev C x (a#c)"
by (metis mrconcNone)
by (metis mrconcNone)
lemma mreq_end2: "applied_rule_rev C x b = applied_rule_rev C x c \<Longrightarrow>
applied_rule_rev C x (a#b) = applied_rule_rev C x (a#c)"
apply (case_tac "applied_rule_rev C x b = None")
apply (auto intro: mreq_end mreq_endNone)
done
apply (auto intro: mreq_end mreq_endNone)
done
lemma mreq_end3: "applied_rule_rev C x p \<noteq> None \<Longrightarrow>
applied_rule_rev C x (b # p) = applied_rule_rev C x (p)"
by (auto simp: mrconc)
by (auto simp: mrconc)
lemma mrNoneMT: "\<lbrakk>r \<in> set p; applied_rule_rev C x p = None\<rbrakk> \<Longrightarrow>
x \<notin> dom (C r)"
proof (induct p rule: rev_induct)
case Nil show ?case using Nil
by (simp add: applied_rule_rev_def)
next
case (snoc y ys) show ?case using snoc
case Nil show ?case using Nil
by (simp add: applied_rule_rev_def)
next
case (snoc y ys) show ?case using snoc
proof (cases "r \<in> set ys")
case True show ?thesis using snoc True
by (simp add: applied_rule_rev_def split: split_if_asm)
next
case False show ?thesis using snoc False
by (simp add: applied_rule_rev_def split: split_if_asm)
qed
case True show ?thesis using snoc True
by (simp add: applied_rule_rev_def split: if_split_asm)
next
case False show ?thesis using snoc False
by (simp add: applied_rule_rev_def split: if_split_asm)
qed
qed
subsection{* Distributivity of the Transformation. *}
text{*
subsection\<open>Distributivity of the Transformation.\<close>
text\<open>
The scenario is the following (can be applied iteratively):
\begin{itemize}
\item Two policies are combined using one of the parallel combinators
@ -246,12 +246,12 @@ text{*
\item policies that are semantically equivalent to the original policy if
\item combined from left to right using the override operator.
\end{itemize}
*}
\<close>
text{*
text\<open>
The following function is crucial for the distribution. Its arguments are a policy, a list
of policies, a parallel combinator, and a range and a domain coercion function.
*}
\<close>
fun prod_list :: "('\<alpha> \<mapsto>'\<beta>) \<Rightarrow> (('\<gamma> \<mapsto>'\<delta>) list) \<Rightarrow>
(('\<alpha> \<mapsto>'\<beta>) \<Rightarrow> ('\<gamma> \<mapsto>'\<delta>) \<Rightarrow> (('\<alpha> \<times> '\<gamma>) \<mapsto> ('\<beta> \<times> '\<delta>))) \<Rightarrow>
(('\<beta> \<times> '\<delta>) \<Rightarrow> 'y) \<Rightarrow> ('x \<Rightarrow> ('\<alpha> \<times> '\<gamma>)) \<Rightarrow>
@ -259,28 +259,28 @@ fun prod_list :: "('\<alpha> \<mapsto>'\<beta>) \<Rightarrow> (('\<gamma> \<maps
"prod_list x (y#ys) par_comb ran_adapt dom_adapt =
((ran_adapt o_f ((par_comb x y) o dom_adapt))#(prod_list x ys par_comb ran_adapt dom_adapt))"
| "prod_list x [] par_comb ran_adapt dom_adapt = []"
text{*
text\<open>
An instance of this scheme; as usual, there are four of them, one for each parallel combinator.
*}
\<close>
definition prod_2_list :: "[('\<alpha> \<mapsto>'\<beta>), (('\<gamma> \<mapsto>'\<delta>) list)] \<Rightarrow>
(('\<beta> \<times> '\<delta>) \<Rightarrow> 'y) \<Rightarrow> ('x \<Rightarrow> ('\<alpha> \<times> '\<gamma>)) \<Rightarrow>
(('x \<mapsto> 'y) list)" (infixr "\<Otimes>\<^sub>2\<^sub>L" 55) where
"x \<Otimes>\<^sub>2\<^sub>L y = (\<lambda> d r. (x \<Otimes>\<^sub>L y) (op \<Otimes>\<^sub>2) d r)"
"x \<Otimes>\<^sub>2\<^sub>L y = (\<lambda> d r. (x \<Otimes>\<^sub>L y) (\<Otimes>\<^sub>2) d r)"
lemma list2listNMT: "x \<noteq> [] \<Longrightarrow> map sem x \<noteq> []"
apply (case_tac x)
apply (simp_all)
done
apply (simp_all)
done
lemma two_conc: "(prod_list x (y#ys) p r d) = ((r o_f ((p x y) o d))#(prod_list x ys p r d))"
by simp
by simp
text{*
text\<open>
The following two invariants establish whether the law of distributivity holds for a combinator
and whether an operator is strict regarding undefinedness.
*}
\<close>
definition is_distr where
"is_distr p = (\<lambda> g f. (\<forall> N P1 P2. ((g o_f ((p N (P1 \<Oplus> P2)) o f)) =
((g o_f ((p N P1) o f)) \<Oplus> (g o_f ((p N P2) o f))))))"
@ -288,141 +288,141 @@ definition is_distr where
definition is_strict where
"is_strict p = (\<lambda> r d. \<forall> P1. (r o_f (p P1 \<emptyset> \<circ> d)) = \<emptyset>)"
lemma is_distr_orD: "is_distr (op \<Otimes>\<^sub>\<or>\<^sub>D) d r"
lemma is_distr_orD: "is_distr (\<Otimes>\<^sub>\<or>\<^sub>D) d r"
apply (simp add: is_distr_def)
apply (rule allI)+
apply (rule distr_orD)
apply (simp)
done
lemma is_strict_orD: "is_strict (op \<Otimes>\<^sub>\<or>\<^sub>D) d r"
done
lemma is_strict_orD: "is_strict (\<Otimes>\<^sub>\<or>\<^sub>D) d r"
apply (simp add: is_strict_def)
apply (simp add: policy_range_comp_def)
done
lemma is_distr_2: "is_distr (op \<Otimes>\<^sub>2) d r"
done
lemma is_distr_2: "is_distr (\<Otimes>\<^sub>2) d r"
apply (simp add: is_distr_def)
apply (rule allI)+
apply (rule distr_or2)
by simp
lemma is_strict_2: "is_strict (op \<Otimes>\<^sub>2) d r"
by simp
lemma is_strict_2: "is_strict (\<Otimes>\<^sub>2) d r"
apply (simp only: is_strict_def)
apply simp
apply (simp add: policy_range_comp_def)
done
done
lemma domStart: "t \<in> dom p1 \<Longrightarrow> (p1 \<Oplus> p2) t = p1 t"
apply (simp add: map_add_dom_app_simps)
done
done
lemma notDom: "x \<in> dom A \<Longrightarrow> \<not> A x = None"
apply auto
done
text{*
done
text\<open>
The following theorems are crucial: they establish the correctness of the distribution.
*}
lemma Norm_Distr_1: "((r o_f (((op \<Otimes>\<^sub>1) P1 (list2policy P2)) o d)) x =
((list2policy ((P1 \<Otimes>\<^sub>L P2) (op \<Otimes>\<^sub>1) r d)) x))"
\<close>
lemma Norm_Distr_1: "((r o_f (((\<Otimes>\<^sub>1) P1 (list2policy P2)) o d)) x =
((list2policy ((P1 \<Otimes>\<^sub>L P2) (\<Otimes>\<^sub>1) r d)) x))"
proof (induct P2)
case Nil show ?case
by (simp add: policy_range_comp_def list2policy_def)
case Nil show ?case
by (simp add: policy_range_comp_def list2policy_def)
next
case (Cons p ps) show ?case using Cons
proof (cases "x \<in> dom (r o_f ((P1 \<Otimes>\<^sub>1 p) \<circ> d))")
case True show ?thesis using True
by (auto simp: list2policy_def policy_range_comp_def prod_1_def
split: option.splits decision.splits prod.splits)
next
case False show ?thesis using Cons False
by (auto simp: list2policy_def policy_range_comp_def map_add_dom_app_simps(3) prod_1_def
split: option.splits decision.splits prod.splits)
qed
case (Cons p ps) show ?case using Cons
proof (cases "x \<in> dom (r o_f ((P1 \<Otimes>\<^sub>1 p) \<circ> d))")
case True show ?thesis using True
by (auto simp: list2policy_def policy_range_comp_def prod_1_def
split: option.splits decision.splits prod.splits)
next
case False show ?thesis using Cons False
by (auto simp: list2policy_def policy_range_comp_def map_add_dom_app_simps(3) prod_1_def
split: option.splits decision.splits prod.splits)
qed
qed
lemma Norm_Distr_2: "((r o_f (((op \<Otimes>\<^sub>2) P1 (list2policy P2)) o d)) x =
((list2policy ((P1 \<Otimes>\<^sub>L P2) (op \<Otimes>\<^sub>2) r d)) x))"proof (induct P2)
case Nil show ?case
by (simp add: policy_range_comp_def list2policy_def)
lemma Norm_Distr_2: "((r o_f (((\<Otimes>\<^sub>2) P1 (list2policy P2)) o d)) x =
((list2policy ((P1 \<Otimes>\<^sub>L P2) (\<Otimes>\<^sub>2) r d)) x))"proof (induct P2)
case Nil show ?case
by (simp add: policy_range_comp_def list2policy_def)
next
case (Cons p ps) show ?case using Cons
proof (cases "x \<in> dom (r o_f ((P1 \<Otimes>\<^sub>2 p) \<circ> d))")
case True show ?thesis using True
by (auto simp: list2policy_def prod_2_def policy_range_comp_def
split: option.splits decision.splits prod.splits)
next
case False show ?thesis using Cons False
by (auto simp: policy_range_comp_def list2policy_def map_add_dom_app_simps(3) prod_2_def
split: option.splits decision.splits prod.splits)
qed
case (Cons p ps) show ?case using Cons
proof (cases "x \<in> dom (r o_f ((P1 \<Otimes>\<^sub>2 p) \<circ> d))")
case True show ?thesis using True
by (auto simp: list2policy_def prod_2_def policy_range_comp_def
split: option.splits decision.splits prod.splits)
next
case False show ?thesis using Cons False
by (auto simp: policy_range_comp_def list2policy_def map_add_dom_app_simps(3) prod_2_def
split: option.splits decision.splits prod.splits)
qed
qed
lemma Norm_Distr_A: "((r o_f (((op \<Otimes>\<^sub>\<or>\<^sub>A) P1 (list2policy P2)) o d)) x =
((list2policy ((P1 \<Otimes>\<^sub>L P2) (op \<Otimes>\<^sub>\<or>\<^sub>A) r d)) x))"
lemma Norm_Distr_A: "((r o_f (((\<Otimes>\<^sub>\<or>\<^sub>A) P1 (list2policy P2)) o d)) x =
((list2policy ((P1 \<Otimes>\<^sub>L P2) (\<Otimes>\<^sub>\<or>\<^sub>A) r d)) x))"
proof (induct P2)
case Nil show ?case
by (simp add: policy_range_comp_def list2policy_def)
case Nil show ?case
by (simp add: policy_range_comp_def list2policy_def)
next
case (Cons p ps) show ?case using Cons
proof (cases "x \<in> dom (r o_f ((P1 \<Otimes>\<^sub>\<or>\<^sub>A p) \<circ> d))")
case True show ?thesis using True
by (auto simp: policy_range_comp_def list2policy_def prod_orA_def
split: option.splits decision.splits prod.splits)
next
case False show ?thesis using Cons False
by (auto simp: policy_range_comp_def list2policy_def map_add_dom_app_simps(3) prod_orA_def
split: option.splits decision.splits prod.splits)
qed
case (Cons p ps) show ?case using Cons
proof (cases "x \<in> dom (r o_f ((P1 \<Otimes>\<^sub>\<or>\<^sub>A p) \<circ> d))")
case True show ?thesis using True
by (auto simp: policy_range_comp_def list2policy_def prod_orA_def
split: option.splits decision.splits prod.splits)
next
case False show ?thesis using Cons False
by (auto simp: policy_range_comp_def list2policy_def map_add_dom_app_simps(3) prod_orA_def
split: option.splits decision.splits prod.splits)
qed
qed
lemma Norm_Distr_D: "((r o_f (((op \<Otimes>\<^sub>\<or>\<^sub>D) P1 (list2policy P2)) o d)) x =
((list2policy ((P1 \<Otimes>\<^sub>L P2) (op \<Otimes>\<^sub>\<or>\<^sub>D) r d)) x))"
lemma Norm_Distr_D: "((r o_f (((\<Otimes>\<^sub>\<or>\<^sub>D) P1 (list2policy P2)) o d)) x =
((list2policy ((P1 \<Otimes>\<^sub>L P2) (\<Otimes>\<^sub>\<or>\<^sub>D) r d)) x))"
proof (induct P2)
case Nil show ?case
by (simp add: policy_range_comp_def list2policy_def)
case Nil show ?case
by (simp add: policy_range_comp_def list2policy_def)
next
case (Cons p ps) show ?case using Cons
proof (cases "x \<in> dom (r o_f ((P1 \<Otimes>\<^sub>\<or>\<^sub>D p) \<circ> d))")
case True show ?thesis using True
by (auto simp: policy_range_comp_def list2policy_def prod_orD_def
split: option.splits decision.splits prod.splits)
next
case False show ?thesis using Cons False
by (auto simp: policy_range_comp_def list2policy_def map_add_dom_app_simps(3) prod_orD_def
split: option.splits decision.splits prod.splits)
qed
case (Cons p ps) show ?case using Cons
proof (cases "x \<in> dom (r o_f ((P1 \<Otimes>\<^sub>\<or>\<^sub>D p) \<circ> d))")
case True show ?thesis using True
by (auto simp: policy_range_comp_def list2policy_def prod_orD_def
split: option.splits decision.splits prod.splits)
next
case False show ?thesis using Cons False
by (auto simp: policy_range_comp_def list2policy_def map_add_dom_app_simps(3) prod_orD_def
split: option.splits decision.splits prod.splits)
qed
qed
text {* Some domain reasoning *}
text \<open>Some domain reasoning\<close>
lemma domSubsetDistr1: "dom A = UNIV \<Longrightarrow> dom ((\<lambda>(x, y). x) o_f (A \<Otimes>\<^sub>1 B) o (\<lambda> x. (x,x))) = dom B"
apply (rule set_eqI)
apply (rule iffI)
apply (auto simp: prod_1_def policy_range_comp_def dom_def
split: decision.splits option.splits prod.splits)
done
apply (auto simp: prod_1_def policy_range_comp_def dom_def
split: decision.splits option.splits prod.splits)
done
lemma domSubsetDistr2: "dom A = UNIV \<Longrightarrow> dom ((\<lambda>(x, y). x) o_f (A \<Otimes>\<^sub>2 B) o (\<lambda> x. (x,x))) = dom B"
apply (rule set_eqI)
apply (rule iffI)
apply (auto simp: prod_2_def policy_range_comp_def dom_def
split: decision.splits option.splits prod.splits)
done
apply (auto simp: prod_2_def policy_range_comp_def dom_def
split: decision.splits option.splits prod.splits)
done
lemma domSubsetDistrA: "dom A = UNIV \<Longrightarrow> dom ((\<lambda>(x, y). x) o_f (A \<Otimes>\<^sub>\<or>\<^sub>A B) o (\<lambda> x. (x,x))) = dom B"
apply (rule set_eqI)
apply (rule iffI)
apply (auto simp: prod_orA_def policy_range_comp_def dom_def
split: decision.splits option.splits prod.splits)
done
apply (auto simp: prod_orA_def policy_range_comp_def dom_def
split: decision.splits option.splits prod.splits)
done
lemma domSubsetDistrD: "dom A = UNIV \<Longrightarrow> dom ((\<lambda>(x, y). x) o_f (A \<Otimes>\<^sub>\<or>\<^sub>D B) o (\<lambda> x. (x,x))) = dom B"
apply (rule set_eqI)
apply (rule iffI)
apply (auto simp: prod_orD_def policy_range_comp_def dom_def
split: decision.splits option.splits prod.splits)
done
split: decision.splits option.splits prod.splits)
done
end

View File

@ -5,8 +5,9 @@
* This file is part of HOL-TestGen.
*
* Copyright (c) 2005-2012 ETH Zurich, Switzerland
* 2008-2014 Achim D. Brucker, Germany
* 2009-2014 Université Paris-Sud, France
* 2008-2015 Achim D. Brucker, Germany
* 2009-2017 Université Paris-Sud, France
* 2015-2017 The University of Sheffield, UK
*
* All rights reserved.
*
@ -38,16 +39,15 @@
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
******************************************************************************)
(* $Id: NormalisationTestSpecification.thy 10879 2014-10-26 11:35:31Z brucker $ *)
section {* Policy Transformation for Testing *}
section \<open>Policy Transformation for Testing\<close>
theory
NormalisationTestSpecification
imports
Normalisation
imports
Normalisation
begin
text{*
text\<open>
This theory provides functions and theorems which are useful if one wants to test policies
that have been transformed. Most exist in two versions: one where the domains of the rules
of the list (which is the result of a transformation) are pairwise disjoint, and one where
@ -55,11 +55,11 @@ text{*
The examples in the firewall case study provide good documentation of how these theories can
be applied.
*}
\<close>
text{*
text\<open>
This invariant establishes that the domains of a list of rules are pairwise disjoint.
*}
\<close>
fun disjDom where
"disjDom (x#xs) = ((\<forall>y\<in>(set xs). dom x \<inter> dom y = {}) \<and> disjDom xs)"
|"disjDom [] = True"
@ -72,50 +72,52 @@ fun PUTList :: "('a \<mapsto> 'b) \<Rightarrow> 'a \<Rightarrow> ('a \<mapsto> '
lemma distrPUTL1: "x \<in> dom P \<Longrightarrow> (list2policy PL) x = P x
\<Longrightarrow> (PUTList PUT x PL \<Longrightarrow> (PUT x = P x))"
apply (induct PL)
apply (auto simp: list2policy_def dom_def)
done
apply (auto simp: list2policy_def dom_def)
done
lemma PUTList_None: "x \<notin> dom (list2policy list) \<Longrightarrow> PUTList PUT x list"
apply (induct list)
apply (auto simp: list2policy_def dom_def)
done
apply (auto simp: list2policy_def dom_def)
done
lemma PUTList_DomMT:
"(\<forall>y\<in>set list. dom a \<inter> dom y = {}) \<Longrightarrow> x \<in> (dom a) \<Longrightarrow> x \<notin> dom (list2policy list)"
apply (induct list)
apply (auto simp: dom_def list2policy_def)
done
apply (auto simp: dom_def list2policy_def)
done
lemma distrPUTL2:
"x \<in> dom P \<Longrightarrow> (list2policy PL) x = P x \<Longrightarrow> disjDom PL \<Longrightarrow> (PUT x = P x) \<Longrightarrow> PUTList PUT x PL "
apply (induct PL)
apply (simp_all add: list2policy_def)
apply (auto)
apply (case_tac "x \<in> dom a")
apply (case_tac "list2policy PL x = P x")
apply (simp add: list2policy_def)
apply (rule PUTList_None)
apply (rule_tac a = a in PUTList_DomMT)
apply (simp_all add: list2policy_def dom_def)
done
apply (simp_all add: list2policy_def)
apply (auto)[1]
subgoal for a PL p
apply (case_tac "x \<in> dom a")
apply (case_tac "list2policy PL x = P x")
apply (simp add: list2policy_def)
apply (rule PUTList_None)
apply (rule_tac a = a in PUTList_DomMT)
apply (simp_all add: list2policy_def dom_def)
done
done
lemma distrPUTL:
"\<lbrakk>x \<in> dom P; (list2policy PL) x = P x; disjDom PL\<rbrakk> \<Longrightarrow> (PUT x = P x) = PUTList PUT x PL "
apply (rule iffI)
apply (rule distrPUTL2)
apply (simp_all)
apply (rule distrPUTL2)
apply (simp_all)
apply (rule_tac PL = PL in distrPUTL1)
apply (auto)
done
apply (auto)
done
text{*
text\<open>
It makes sense to cater for the common special case where the normalisation returns a list
where the last element is a default-catch-all rule. It seems easier to cater for this globally,
rather than to require the normalisation procedures to do this.
*}
\<close>
fun gatherDomain_aux where
"gatherDomain_aux (x#xs) = (dom x \<union> (gatherDomain_aux xs))"
"gatherDomain_aux (x#xs) = (dom x \<union> (gatherDomain_aux xs))"
|"gatherDomain_aux [] = {}"
definition gatherDomain where "gatherDomain p = (gatherDomain_aux (butlast p))"
@ -129,16 +131,16 @@ definition disjDomGD where "disjDomGD p = disjDom (butlast p)"
lemma distrPUTLG1: "\<lbrakk>x \<in> dom P; (list2policy PL) x = P x; PUTListGD PUT x PL\<rbrakk> \<Longrightarrow> PUT x = P x"
apply (induct PL)
apply (simp_all add: domIff PUTListGD_def disjDomGD_def gatherDomain_def list2policy_def)
apply (auto simp: dom_def domIff split: split_if_asm)
done
apply (simp_all add: domIff PUTListGD_def disjDomGD_def gatherDomain_def list2policy_def)
apply (auto simp: dom_def domIff split: if_split_asm)
done
lemma distrPUTLG2:
"PL \<noteq> [] \<Longrightarrow> x \<in> dom P \<Longrightarrow> (list2policy (PL)) x = P x \<Longrightarrow> disjDomGD PL \<Longrightarrow>
(PUT x = P x) \<Longrightarrow> PUTListGD PUT x (PL)"
apply (simp add: PUTListGD_def disjDomGD_def gatherDomain_def list2policy_def)
apply (induct PL)
apply (auto)
apply (auto)
apply (metis PUTList_DomMT PUTList_None domI)
done
@ -146,12 +148,12 @@ lemma distrPUTLG:
"\<lbrakk>x \<in> dom P; (list2policy PL) x = P x; disjDomGD PL; PL \<noteq> []\<rbrakk> \<Longrightarrow>
(PUT x = P x) = PUTListGD PUT x PL "
apply (rule iffI)
apply (rule distrPUTLG2)
apply (simp_all)
apply (rule distrPUTLG2)
apply (simp_all)
apply (rule_tac PL = PL in distrPUTLG1)
apply (auto)
done
apply (auto)
done
end

View File

@ -5,8 +5,9 @@
* This file is part of HOL-TestGen.
*
* Copyright (c) 2005-2012 ETH Zurich, Switzerland
* 2008-2014 Achim D. Brucker, Germany
* 2009-2014 Université Paris-Sud, France
* 2008-2015 Achim D. Brucker, Germany
* 2009-2017 Université Paris-Sud, France
* 2015-2017 The University of Sheffield, UK
*
* All rights reserved.
*
@ -38,16 +39,15 @@
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
******************************************************************************)
(* $Id: ParallelComposition.thy 10879 2014-10-26 11:35:31Z brucker $ *)
section{* Parallel Composition*}
section\<open>Parallel Composition\<close>
theory
ParallelComposition
imports
ElementaryPolicies
imports
ElementaryPolicies
begin
text{*
text\<open>
The following combinators are based on the idea that two policies are executed in parallel.
Since both the input and the output can differ, we chose to pair them.
@ -60,16 +60,16 @@ text{*
In any case, although we have strictly speaking a pairing of decisions and not a nesting of
them, we will apply the same notational conventions as for the latter, i.e. as for
flattening.
*}
\<close>
subsection{* Parallel Combinators: Foundations *}
text {*
subsection\<open>Parallel Combinators: Foundations\<close>
text \<open>
There are four possible semantics for how the decisions can be combined; thus, there are four
parallel composition operators. For each of them, we prove several properties.
*}
\<close>
definition prod_orA ::"['\<alpha>\<mapsto>'\<beta>, '\<gamma> \<mapsto>'\<delta>] \<Rightarrow> ('\<alpha>\<times>'\<gamma> \<mapsto> '\<beta>\<times>'\<delta>)" (infixr "\<Otimes>\<^sub>\<or>\<^sub>A" 55)
where "p1 \<Otimes>\<^sub>\<or>\<^sub>A p2 =
where "p1 \<Otimes>\<^sub>\<or>\<^sub>A p2 =
(\<lambda>(x,y). (case p1 x of
\<lfloor>allow d1\<rfloor> \<Rightarrow>(case p2 y of
\<lfloor>allow d2\<rfloor> \<Rightarrow> \<lfloor>allow(d1,d2)\<rfloor>
@ -80,25 +80,25 @@ where "p1 \<Otimes>\<^sub>\<or>\<^sub>A p2 =
| \<lfloor>deny d2\<rfloor> \<Rightarrow> \<lfloor>deny (d1,d2)\<rfloor>
| \<bottom> \<Rightarrow> \<bottom>)
| \<bottom> \<Rightarrow> \<bottom>))"
lemma prod_orA_mt[simp]:"p \<Otimes>\<^sub>\<or>\<^sub>A \<emptyset> = \<emptyset>"
apply (rule ext)
apply (simp add: prod_orA_def)
apply (auto)
apply (simp split: option.splits decision.splits)
done
done
lemma mt_prod_orA[simp]:"\<emptyset> \<Otimes>\<^sub>\<or>\<^sub>A p = \<emptyset>"
apply (rule ext)
apply (simp add: prod_orA_def)
done
done
lemma prod_orA_quasi_commute: "p2 \<Otimes>\<^sub>\<or>\<^sub>A p1 = (((\<lambda>(x,y). (y,x)) o_f (p1 \<Otimes>\<^sub>\<or>\<^sub>A p2))) o (\<lambda>(a,b).(b,a))"
apply (rule ext)
apply (simp add: prod_orA_def policy_range_comp_def o_def)
apply (auto)
apply (auto)[1]
apply (simp split: option.splits decision.splits)
done
done
definition prod_orD ::"['\<alpha> \<mapsto> '\<beta>, '\<gamma> \<mapsto> '\<delta>] \<Rightarrow> ('\<alpha> \<times> '\<gamma> \<mapsto> '\<beta> \<times> '\<delta> )" (infixr "\<Otimes>\<^sub>\<or>\<^sub>D" 55)
where "p1 \<Otimes>\<^sub>\<or>\<^sub>D p2 =
@ -116,28 +116,28 @@ where "p1 \<Otimes>\<^sub>\<or>\<^sub>D p2 =
lemma prod_orD_mt[simp]:"p \<Otimes>\<^sub>\<or>\<^sub>D \<emptyset> = \<emptyset>"
apply (rule ext)
apply (simp add: prod_orD_def)
apply (auto)
apply (auto)[1]
apply (simp split: option.splits decision.splits)
done
done
lemma mt_prod_orD[simp]:"\<emptyset> \<Otimes>\<^sub>\<or>\<^sub>D p = \<emptyset>"
apply (rule ext)
apply (simp add: prod_orD_def)
done
done
lemma prod_orD_quasi_commute: "p2 \<Otimes>\<^sub>\<or>\<^sub>D p1 = (((\<lambda>(x,y). (y,x)) o_f (p1 \<Otimes>\<^sub>\<or>\<^sub>D p2))) o (\<lambda>(a,b).(b,a))"
apply (rule ext)
apply (simp add: prod_orD_def policy_range_comp_def o_def)
apply (auto)
apply (auto)[1]
apply (simp split: option.splits decision.splits)
done
done
text{*
text\<open>
The following two combinators are by definition non-commutative, but still strict.
*}
\<close>
definition prod_1 :: "['\<alpha>\<mapsto>'\<beta>, '\<gamma> \<mapsto>'\<delta>] \<Rightarrow> ('\<alpha>\<times>'\<gamma> \<mapsto> '\<beta>\<times>'\<delta>)" (infixr "\<Otimes>\<^sub>1" 55)
where "p1 \<Otimes>\<^sub>1 p2 \<equiv>
where "p1 \<Otimes>\<^sub>1 p2 \<equiv>
(\<lambda>(x,y). (case p1 x of
\<lfloor>allow d1\<rfloor>\<Rightarrow>(case p2 y of
\<lfloor>allow d2\<rfloor> \<Rightarrow> \<lfloor>allow(d1,d2)\<rfloor>
@ -148,21 +148,21 @@ where "p1 \<Otimes>\<^sub>1 p2 \<equiv>
| \<lfloor>deny d2\<rfloor> \<Rightarrow> \<lfloor>deny(d1,d2)\<rfloor>
| \<bottom> \<Rightarrow> \<bottom>)
|\<bottom> \<Rightarrow> \<bottom>))"
lemma prod_1_mt[simp]:"p \<Otimes>\<^sub>1 \<emptyset> = \<emptyset>"
apply (rule ext)
apply (simp add: prod_1_def)
apply (auto)
apply (auto)[1]
apply (simp split: option.splits decision.splits)
done
done
lemma mt_prod_1[simp]:"\<emptyset> \<Otimes>\<^sub>1 p = \<emptyset>"
apply (rule ext)
apply (simp add: prod_1_def)
done
done
definition prod_2 :: "['\<alpha>\<mapsto>'\<beta>, '\<gamma> \<mapsto>'\<delta>] \<Rightarrow> ('\<alpha>\<times>'\<gamma> \<mapsto> '\<beta>\<times>'\<delta>)" (infixr "\<Otimes>\<^sub>2" 55)
where "p1 \<Otimes>\<^sub>2 p2 \<equiv>
where "p1 \<Otimes>\<^sub>2 p2 \<equiv>
(\<lambda>(x,y). (case p1 x of
\<lfloor>allow d1\<rfloor> \<Rightarrow>(case p2 y of
\<lfloor>allow d2\<rfloor> \<Rightarrow> \<lfloor>allow(d1,d2)\<rfloor>
@ -173,54 +173,54 @@ where "p1 \<Otimes>\<^sub>2 p2 \<equiv>
| \<lfloor>deny d2\<rfloor> \<Rightarrow> \<lfloor>deny (d1,d2)\<rfloor>
| \<bottom> \<Rightarrow> \<bottom>)
|\<bottom> \<Rightarrow>\<bottom>))"
lemma prod_2_mt[simp]:"p \<Otimes>\<^sub>2 \<emptyset> = \<emptyset>"
apply (rule ext)
apply (simp add: prod_2_def)
apply (auto)
apply (auto)[1]
apply (simp split: option.splits decision.splits)
done
done
lemma mt_prod_2[simp]:"\<emptyset> \<Otimes>\<^sub>2 p = \<emptyset>"
apply (rule ext)
apply (simp add: prod_2_def)
done
done
definition prod_1_id ::"['\<alpha>\<mapsto>'\<beta>, '\<alpha>\<mapsto>'\<gamma>] \<Rightarrow> ('\<alpha> \<mapsto> '\<beta>\<times>'\<gamma>)" (infixr "\<Otimes>\<^sub>1\<^sub>I" 55)
where "p \<Otimes>\<^sub>1\<^sub>I q = (p \<Otimes>\<^sub>1 q) o (\<lambda>x. (x,x))"
where "p \<Otimes>\<^sub>1\<^sub>I q = (p \<Otimes>\<^sub>1 q) o (\<lambda>x. (x,x))"
lemma prod_1_id_mt[simp]:"p \<Otimes>\<^sub>1\<^sub>I \<emptyset> = \<emptyset>"
apply (rule ext)
apply (simp add: prod_1_id_def)
done
done
lemma mt_prod_1_id[simp]:"\<emptyset> \<Otimes>\<^sub>1\<^sub>I p = \<emptyset>"
apply (rule ext)
apply (simp add: prod_1_id_def prod_1_def)
done
done
definition prod_2_id ::"['\<alpha>\<mapsto>'\<beta>, '\<alpha>\<mapsto>'\<gamma>] \<Rightarrow> ('\<alpha> \<mapsto> '\<beta>\<times>'\<gamma>)" (infixr "\<Otimes>\<^sub>2\<^sub>I" 55)
where"p \<Otimes>\<^sub>2\<^sub>I q = (p \<Otimes>\<^sub>2 q) o (\<lambda>x. (x,x))"
where"p \<Otimes>\<^sub>2\<^sub>I q = (p \<Otimes>\<^sub>2 q) o (\<lambda>x. (x,x))"
lemma prod_2_id_mt[simp]:"p \<Otimes>\<^sub>2\<^sub>I \<emptyset> = \<emptyset>"
apply (rule ext)
apply (simp add: prod_2_id_def)
done
done
lemma mt_prod_2_id[simp]:"\<emptyset> \<Otimes>\<^sub>2\<^sub>I p = \<emptyset>"
apply (rule ext)
apply (simp add: prod_2_id_def prod_2_def)
done
subsection{* Combinators for Transition Policies *}
text {*
done
subsection\<open>Combinators for Transition Policies\<close>
text \<open>
For constructing transition policies, two additional combinators are required: one combines
state transitions by pairing the states, the other works equivalently on general maps.
*}
\<close>
definition parallel_map :: "('\<alpha> \<rightharpoonup> '\<beta>) \<Rightarrow> ('\<delta> \<rightharpoonup> '\<gamma>) \<Rightarrow>
('\<alpha> \<times> '\<delta> \<rightharpoonup> '\<beta> \<times> '\<gamma>)" (infixr "\<Otimes>\<^sub>M" 60)
where "p1 \<Otimes>\<^sub>M p2 = (\<lambda> (x,y). case p1 x of \<lfloor>d1\<rfloor> \<Rightarrow>
where "p1 \<Otimes>\<^sub>M p2 = (\<lambda> (x,y). case p1 x of \<lfloor>d1\<rfloor> \<Rightarrow>
(case p2 y of \<lfloor>d2\<rfloor> \<Rightarrow> \<lfloor>(d1,d2)\<rfloor>
| \<bottom> \<Rightarrow> \<bottom>)
| \<bottom> \<Rightarrow> \<bottom>)"
@ -231,11 +231,11 @@ where
"p1 \<Otimes>\<^sub>S p2 = (p1 \<Otimes>\<^sub>M p2) o (\<lambda> (a,b,c). ((a,b),a,c))"
subsection{* Range Splitting *}
text{*
subsection\<open>Range Splitting\<close>
text\<open>
The following combinator is a special case of both a parallel composition operator and a
range splitting operator. Its primary use case is when combining a policy with state transitions.
*}
\<close>
definition comp_ran_split :: "[('\<alpha> \<rightharpoonup> '\<gamma>) \<times> ('\<alpha> \<rightharpoonup>'\<gamma>), 'd \<mapsto> '\<beta>] \<Rightarrow> ('d \<times> '\<alpha>) \<mapsto> ('\<beta> \<times> '\<gamma>)"
(infixr "\<Otimes>\<^sub>\<nabla>" 100)
@ -244,86 +244,98 @@ where "P \<Otimes>\<^sub>\<nabla> p \<equiv> \<lambda>x. case p (fst x) of
| \<lfloor>deny y\<rfloor> \<Rightarrow> (case ((snd P) (snd x)) of \<bottom> \<Rightarrow> \<bottom> | \<lfloor>z\<rfloor> \<Rightarrow> \<lfloor>deny (y,z)\<rfloor>)
| \<bottom> \<Rightarrow> \<bottom>"
text{* An alternative characterisation of the operator is as follows: *}
text\<open>An alternative characterisation of the operator is as follows:\<close>
lemma comp_ran_split_charn:
"(f, g) \<Otimes>\<^sub>\<nabla> p = (
(((p \<triangleright> Allow)\<Otimes>\<^sub>\<or>\<^sub>A (A\<^sub>p f)) \<Oplus>
((p \<triangleright> Deny) \<Otimes>\<^sub>\<or>\<^sub>A (D\<^sub>p g))))"
apply (rule ext)
apply (simp add: comp_ran_split_def map_add_def o_def ran_restrict_def image_def
Allow_def Deny_def dom_restrict_def prod_orA_def
allow_pfun_def deny_pfun_def
split:option.splits decision.splits)
apply (rule ext)
apply (simp add: comp_ran_split_def map_add_def o_def ran_restrict_def image_def
Allow_def Deny_def dom_restrict_def prod_orA_def
allow_pfun_def deny_pfun_def
split:option.splits decision.splits)
apply (auto)
done
subsection {* Distributivity of the parallel combinators *}
done
subsection \<open>Distributivity of the parallel combinators\<close>
lemma distr_or1_a: "(F = F1 \<Oplus> F2) \<Longrightarrow> (((N \<Otimes>\<^sub>1 F) o f) =
(((N \<Otimes>\<^sub>1 F1) o f) \<Oplus> ((N \<Otimes>\<^sub>1 F2) o f))) "
apply (rule ext)
apply (simp add: prod_1_def map_add_def
split: decision.splits option.splits)
apply (case_tac "f x")
apply (simp_all add: prod_1_def map_add_def
split: decision.splits option.splits)
done
split: decision.splits option.splits)
subgoal for x
apply (case_tac "f x")
apply (simp_all add: prod_1_def map_add_def
split: decision.splits option.splits)
done
done
lemma distr_or1: "(F = F1 \<Oplus> F2) \<Longrightarrow> ((g o_f ((N \<Otimes>\<^sub>1 F) o f)) =
((g o_f ((N \<Otimes>\<^sub>1 F1) o f)) \<Oplus> (g o_f ((N \<Otimes>\<^sub>1 F2) o f)))) "
apply (rule ext)+
apply (simp add: prod_1_def map_add_def policy_range_comp_def
split: decision.splits option.splits)
apply (case_tac "f x")
apply (simp_all add: prod_1_def map_add_def
split: decision.splits option.splits)
done
split: decision.splits option.splits)
subgoal for x
apply (case_tac "f x")
apply (simp_all add: prod_1_def map_add_def
split: decision.splits option.splits)
done
done
lemma distr_or2_a: "(F = F1 \<Oplus> F2) \<Longrightarrow> (((N \<Otimes>\<^sub>2 F) o f) =
(((N \<Otimes>\<^sub>2 F1) o f) \<Oplus> ((N \<Otimes>\<^sub>2 F2) o f))) "
apply (rule ext)
apply (simp add: prod_2_id_def prod_2_def map_add_def
split: decision.splits option.splits)
apply (case_tac "f x")
apply (simp_all add: prod_2_def map_add_def
split: decision.splits option.splits)
done
split: decision.splits option.splits)
subgoal for x
apply (case_tac "f x")
apply (simp_all add: prod_2_def map_add_def
split: decision.splits option.splits)
done
done
lemma distr_or2: "(F = F1 \<Oplus> F2) \<Longrightarrow> ((r o_f ((N \<Otimes>\<^sub>2 F) o f)) =
((r o_f ((N \<Otimes>\<^sub>2 F1) o f)) \<Oplus> (r o_f ((N \<Otimes>\<^sub>2 F2) o f)))) "
apply (rule ext)
apply (simp add: prod_2_id_def prod_2_def map_add_def policy_range_comp_def
split: decision.splits option.splits)
apply (case_tac "f x")
apply (simp_all add: prod_2_def map_add_def
split: decision.splits option.splits)
done
split: decision.splits option.splits)
subgoal for x
apply (case_tac "f x")
apply (simp_all add: prod_2_def map_add_def
split: decision.splits option.splits)
done
done
lemma distr_orA: "(F = F1 \<Oplus> F2) \<Longrightarrow> ((g o_f ((N \<Otimes>\<^sub>\<or>\<^sub>A F) o f)) =
((g o_f ((N \<Otimes>\<^sub>\<or>\<^sub>A F1) o f)) \<Oplus> (g o_f ((N \<Otimes>\<^sub>\<or>\<^sub>A F2) o f)))) "
apply (rule ext)+
apply (simp add: prod_orA_def map_add_def policy_range_comp_def
split: decision.splits option.splits)
apply (case_tac "f x")
apply (simp_all add: map_add_def
split: decision.splits option.splits)
done
split: decision.splits option.splits)
subgoal for x
apply (case_tac "f x")
apply (simp_all add: map_add_def
split: decision.splits option.splits)
done
done
lemma distr_orD: "(F = F1 \<Oplus> F2) \<Longrightarrow> ((g o_f ((N \<Otimes>\<^sub>\<or>\<^sub>D F) o f)) =
((g o_f ((N \<Otimes>\<^sub>\<or>\<^sub>D F1) o f)) \<Oplus> (g o_f ((N \<Otimes>\<^sub>\<or>\<^sub>D F2) o f)))) "
apply (rule ext)+
apply (simp add: prod_orD_def map_add_def policy_range_comp_def
split: decision.splits option.splits)
apply (case_tac "f x")
apply (simp_all add: map_add_def
split: decision.splits option.splits)
done
split: decision.splits option.splits)
subgoal for x
apply (case_tac "f x")
apply (simp_all add: map_add_def
split: decision.splits option.splits)
done
done
lemma coerc_assoc: "(r o_f P) o d = r o_f (P o d)"
apply (simp add: policy_range_comp_def)
apply (rule ext)
apply (simp split: option.splits decision.splits)
done
done
lemmas ParallelDefs = prod_orA_def prod_orD_def prod_1_def prod_2_def parallel_map_def
parallel_st_def comp_ran_split_def

View File

@ -1,8 +1,8 @@
chapter AFP
session "UPF" (AFP) = HOL +
description {* The Unified Policy Framework (UPF) *}
options [timeout=300]
session "UPF-devel" (AFP) = HOL +
description "The Unified Policy Framework (UPF) "
options [timeout = 300]
theories
Monads
UPF

View File

@ -5,8 +5,9 @@
* This file is part of HOL-TestGen.
*
* Copyright (c) 2005-2012 ETH Zurich, Switzerland
* 2008-2014 Achim D. Brucker, Germany
* 2009-2014 Université Paris-Sud, France
* 2008-2015 Achim D. Brucker, Germany
* 2009-2017 Université Paris-Sud, France
* 2015-2017 The University of Sheffield, UK
*
* All rights reserved.
*
@ -38,38 +39,37 @@
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
******************************************************************************)
(* $Id: SeqComposition.thy 10879 2014-10-26 11:35:31Z brucker $ *)
section{* Sequential Composition *}
section\<open>Sequential Composition\<close>
theory
SeqComposition
imports
ElementaryPolicies
imports
ElementaryPolicies
begin
text{*
text\<open>
Sequential composition is based on the idea that two policies are to be combined by applying
the second policy to the output of the first one. Again, there are four possibilities for how the
decisions can be combined. *}
decisions can be combined.\<close>
subsection {* Flattening *}
text{*
subsection \<open>Flattening\<close>
text\<open>
A key concept of sequential policy composition is the flattening of nested decisions. There are
four possibilities, and these possibilities will give the various flavours of policy composition.
*}
\<close>
fun flat_orA :: "('\<alpha> decision) decision \<Rightarrow> ('\<alpha> decision)"
where "flat_orA(allow(allow y)) = allow y"
|"flat_orA(allow(deny y)) = allow y"
|"flat_orA(deny(allow y)) = allow y"
|"flat_orA(deny(deny y)) = deny y"
lemma flat_orA_deny[dest]:"flat_orA x = deny y \<Longrightarrow> x = deny(deny y)"
apply (case_tac "x")
apply (rename_tac \<alpha>)
apply (case_tac "\<alpha>", simp_all)[1]
apply (rename_tac \<alpha>)
apply (case_tac "\<alpha>", simp_all)[1]
done
done
lemma flat_orA_allow[dest]: "flat_orA x = allow y \<Longrightarrow> x = allow(allow y)
\<or> x = allow(deny y)
@ -79,21 +79,21 @@ lemma flat_orA_allow[dest]: "flat_orA x = allow y \<Longrightarrow> x = allow(al
apply (case_tac "\<alpha>", simp_all)[1]
apply (rename_tac \<alpha>)
apply (case_tac "\<alpha>", simp_all)[1]
done
done
fun flat_orD :: "('\<alpha> decision) decision \<Rightarrow> ('\<alpha> decision)"
where "flat_orD(allow(allow y)) = allow y"
|"flat_orD(allow(deny y)) = deny y"
|"flat_orD(deny(allow y)) = deny y"
|"flat_orD(deny(deny y)) = deny y"
lemma flat_orD_allow[dest]: "flat_orD x = allow y \<Longrightarrow> x = allow(allow y)"
apply (case_tac "x")
apply (rename_tac \<alpha>)
apply (case_tac "\<alpha>", simp_all)[1]
apply (rename_tac \<alpha>)
apply (case_tac "\<alpha>", simp_all)[1]
done
done
lemma flat_orD_deny[dest]: "flat_orD x = deny y \<Longrightarrow> x = deny(deny y)
\<or> x = allow(deny y)
@ -103,7 +103,7 @@ lemma flat_orD_deny[dest]: "flat_orD x = deny y \<Longrightarrow> x = deny(deny
apply (case_tac "\<alpha>", simp_all)[1]
apply (rename_tac \<alpha>)
apply (case_tac "\<alpha>", simp_all)[1]
done
done
fun flat_1 :: "('\<alpha> decision) decision \<Rightarrow> ('\<alpha> decision)"
where "flat_1(allow(allow y)) = allow y"
@ -117,7 +117,7 @@ lemma flat_1_allow[dest]: "flat_1 x = allow y \<Longrightarrow> x = allow(allow
apply (case_tac "\<alpha>", simp_all)[1]
apply (rename_tac \<alpha>)
apply (case_tac "\<alpha>", simp_all)[1]
done
done
lemma flat_1_deny[dest]: "flat_1 x = deny y \<Longrightarrow> x = deny(deny y) \<or> x = deny(allow y)"
apply (case_tac "x")
@ -125,7 +125,7 @@ lemma flat_1_deny[dest]: "flat_1 x = deny y \<Longrightarrow> x = deny(deny y)
apply (case_tac "\<alpha>", simp_all)[1]
apply (rename_tac \<alpha>)
apply (case_tac "\<alpha>", simp_all)[1]
done
done
fun flat_2 :: "('\<alpha> decision) decision \<Rightarrow> ('\<alpha> decision)"
where "flat_2(allow(allow y)) = allow y"
@ -147,12 +147,12 @@ lemma flat_2_deny[dest]: "flat_2 x = deny y \<Longrightarrow> x = deny(deny y)
apply (case_tac "\<alpha>", simp_all)[1]
apply (rename_tac \<alpha>)
apply (case_tac "\<alpha>", simp_all)[1]
done
done
subsection{* Policy Composition *}
text{*
subsection\<open>Policy Composition\<close>
text\<open>
The following definition allows us to compose two policies. Denies and allows are transferred.
*}
\<close>
fun lift :: "('\<alpha> \<mapsto> '\<beta>) \<Rightarrow> ('\<alpha> decision \<mapsto>'\<beta> decision)"
where "lift f (deny s) = (case f s of
@ -164,20 +164,21 @@ where "lift f (deny s) = (case f s of
lemma lift_mt [simp]: "lift \<emptyset> = \<emptyset>"
apply (rule ext)
apply (case_tac "x")
apply (simp_all)
done
subgoal for x
apply (case_tac "x")
apply (simp_all)
done
done
text{*
text\<open>
Since policies are maps, we inherit a composition on them. However, this results in nestings
of decisions---which must be flattened. As we know that there are four different forms of
flattening, we have four different forms of policy composition: *}
flattening, we have four different forms of policy composition:\<close>
definition
comp_orA :: "['\<beta>\<mapsto>'\<gamma>, '\<alpha>\<mapsto>'\<beta>] \<Rightarrow> '\<alpha>\<mapsto>'\<gamma>" (infixl "o'_orA" 55) where
"p2 o_orA p1 \<equiv> (map_option flat_orA) o (lift p2 o_m p1)"
"p2 o_orA p1 \<equiv> (map_option flat_orA) o (lift p2 \<circ>\<^sub>m p1)"
notation (xsymbols)
notation
comp_orA (infixl "\<circ>\<^sub>\<or>\<^sub>A" 55)
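A hedged Haskell sketch of this construction (illustrative names, mirroring `lift` and the or-Allow flattening defined above): run the first policy, push the second one through its decision via map composition, then flatten the nested decision.

```haskell
-- Illustrative sketch only, not part of the theory development.
data Decision b = Allow b | Deny b
type Policy a b = a -> Maybe (Decision b)

-- lift transfers the outer allow/deny of the first policy onto the result of the second.
lift :: Policy a b -> Decision a -> Maybe (Decision (Decision b))
lift f (Allow s) = fmap Allow (f s)
lift f (Deny  s) = fmap Deny  (f s)

flatOrA :: Decision (Decision b) -> Decision b
flatOrA (Allow (Allow y)) = Allow y
flatOrA (Allow (Deny  y)) = Allow y
flatOrA (Deny  (Allow y)) = Allow y
flatOrA (Deny  (Deny  y)) = Deny  y

-- p2 `compOrA` p1: map-compose the lifted p2 with p1, then flatten.
compOrA :: Policy b c -> Policy a b -> Policy a c
compOrA p2 p1 x = fmap flatOrA (p1 x >>= lift p2)
```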
lemma comp_orA_mt[simp]:"p \<circ>\<^sub>\<or>\<^sub>A \<emptyset> = \<emptyset>"
@ -188,9 +189,9 @@ lemma mt_comp_orA[simp]:"\<emptyset> \<circ>\<^sub>\<or>\<^sub>A p = \<emptyset>
definition
comp_orD :: "['\<beta>\<mapsto>'\<gamma>, '\<alpha>\<mapsto>'\<beta>] \<Rightarrow> '\<alpha>\<mapsto>'\<gamma>" (infixl "o'_orD" 55) where
"p2 o_orD p1 \<equiv> (map_option flat_orD) o (lift p2 o_m p1)"
"p2 o_orD p1 \<equiv> (map_option flat_orD) o (lift p2 \<circ>\<^sub>m p1)"
notation (xsymbols)
notation
comp_orD (infixl "\<circ>\<^sub>orD" 55)
lemma comp_orD_mt[simp]:"p o_orD \<emptyset> = \<emptyset>"
@ -201,9 +202,9 @@ lemma mt_comp_orD[simp]:"\<emptyset> o_orD p = \<emptyset>"
definition
comp_1 :: "['\<beta>\<mapsto>'\<gamma>, '\<alpha>\<mapsto>'\<beta>] \<Rightarrow> '\<alpha>\<mapsto>'\<gamma>" (infixl "o'_1" 55) where
"p2 o_1 p1 \<equiv> (map_option flat_1) o (lift p2 o_m p1)"
"p2 o_1 p1 \<equiv> (map_option flat_1) o (lift p2 \<circ>\<^sub>m p1)"
notation (xsymbols)
notation
comp_1 (infixl "\<circ>\<^sub>1" 55)
lemma comp_1_mt[simp]:"p \<circ>\<^sub>1 \<emptyset> = \<emptyset>"
@ -214,9 +215,9 @@ lemma mt_comp_1[simp]:"\<emptyset> \<circ>\<^sub>1 p = \<emptyset>"
definition
comp_2 :: "['\<beta>\<mapsto>'\<gamma>, '\<alpha>\<mapsto>'\<beta>] \<Rightarrow> '\<alpha>\<mapsto>'\<gamma>" (infixl "o'_2" 55) where
"p2 o_2 p1 \<equiv> (map_option flat_2) o (lift p2 o_m p1)"
"p2 o_2 p1 \<equiv> (map_option flat_2) o (lift p2 \<circ>\<^sub>m p1)"
notation (xsymbols)
notation
comp_2 (infixl "\<circ>\<^sub>2" 55)
lemma comp_2_mt[simp]:"p \<circ>\<^sub>2 \<emptyset> = \<emptyset>"

View File

@ -5,8 +5,9 @@
* This file is part of HOL-TestGen.
*
* Copyright (c) 2010-2012 ETH Zurich, Switzerland
* 2010-2014 Achim D. Brucker, Germany
* 2010-2014 Université Paris-Sud, France
* 2010-2015 Achim D. Brucker, Germany
* 2010-2017 Université Paris-Sud, France
* 2015-2017 The University of Sheffield, UK
*
* All rights reserved.
*
@ -38,54 +39,53 @@
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
******************************************************************************)
(* $Id: Service.thy 10945 2014-11-21 12:50:43Z wolff $ *)
section {* Secure Service Specification *}
section \<open>Secure Service Specification\<close>
theory
Service
imports
UPF
imports
UPF
begin
text {*
text \<open>
In this section, we model a simple Web service and its access control model
that allows the staff in a hospital to access health care records of patients.
*}
\<close>
subsection{* Datatypes for Modelling Users and Roles*}
subsubsection {* Users *}
text{*
subsection\<open>Datatypes for Modelling Users and Roles\<close>
subsubsection \<open>Users\<close>
text\<open>
First, we introduce a type for users that we use to model that each
staff member has a unique id:
*}
\<close>
type_synonym user = int (* Each NHS employee has a unique NHS_ID. *)
text {*
text \<open>
Similarly, each patient has a unique id:
*}
\<close>
type_synonym patient = int (* Each patient gets a unique id *)
subsubsection {* Roles and Relationships*}
text{* In our example, we assume three different roles for members of the clinical staff: *}
subsubsection \<open>Roles and Relationships\<close>
text\<open>In our example, we assume three different roles for members of the clinical staff:\<close>
datatype role = ClinicalPractitioner | Nurse | Clerical
text{*
text\<open>
We model treatment relationships (legitimate relationships) between staff and patients
(respectively, their health records). This access control model is inspired by our detailed
NHS model.
*}
\<close>
type_synonym lr_id = int
type_synonym LR = "lr_id \<rightharpoonup> (user set)"
text{* The security context stores all the existing LRs. *}
text\<open>The security context stores all the existing LRs.\<close>
type_synonym \<Sigma> = "patient \<rightharpoonup> LR"
text{* The user context stores the roles the users are in. *}
text\<open>The user context stores the roles the users are in.\<close>
type_synonym \<upsilon> = "user \<rightharpoonup> role"
subsection {* Modelling Health Records and the Web Service API*}
subsubsection {* Health Records *}
text {* The content and the status of the entries of a health record *}
subsection \<open>Modelling Health Records and the Web Service API\<close>
subsubsection \<open>Health Records\<close>
text \<open>The content and the status of the entries of a health record\<close>
datatype data = dummyContent
datatype status = Open | Closed
type_synonym entry_id = int
@ -93,8 +93,8 @@ type_synonym entry = "status \<times> user \<times> data"
type_synonym SCR = "(entry_id \<rightharpoonup> entry)"
type_synonym DB = "patient \<rightharpoonup> SCR"
subsubsection {* The Web Service API *}
text{* The operations provided by the service: *}
subsubsection \<open>The Web Service API\<close>
text\<open>The operations provided by the service:\<close>
datatype Operation = createSCR user role patient
| appendEntry user role patient entry_id entry
| deleteEntry user role patient entry_id
@ -207,17 +207,17 @@ fun allContentStatic where
|"allContentStatic [] = True"
subsection{* Modelling Access Control*}
text {*
subsection\<open>Modelling Access Control\<close>
text \<open>
In the following, we define a rather complex access control model for our
scenario that extends traditional role-based access control
(RBAC)~\cite{sandhu.ea:role-based:1996} with treatment relationships and sealed
envelopes. Sealed envelopes (see~\cite{bruegger:generation:2012} for details)
are a variant of break-the-glass access control (see~\cite{brucker.ea:extending:2009}
for a general motivation and explanation of break-the-glass access control).
*}
\<close>
subsubsection {* Sealed Envelopes *}
subsubsection \<open>Sealed Envelopes\<close>
type_synonym SEPolicy = "(Operation \<times> DB \<mapsto> unit) "
@ -259,7 +259,7 @@ definition SEPolicy :: SEPolicy where
lemmas SEsimps = SEPolicy_def get_entry_def userHasAccess_def
editEntrySE_def deleteEntrySE_def readEntrySE_def
subsubsection {* Legitimate Relationships *}
subsubsection \<open>Legitimate Relationships\<close>
type_synonym LRPolicy = "(Operation \<times> \<Sigma>, unit) policy"
@ -365,7 +365,7 @@ definition FunPolicy where
removeLRFunPolicy \<Oplus> readSCRFunPolicy \<Oplus>
addLRFunPolicy \<Oplus> createFunPolicy \<Oplus> A\<^sub>U"
subsubsection{* Modelling Core RBAC *}
subsubsection\<open>Modelling Core RBAC\<close>
type_synonym RBACPolicy = "Operation \<times> \<upsilon> \<mapsto> unit"
@ -389,9 +389,9 @@ definition RBACPolicy :: RBACPolicy where
then \<lfloor>allow ()\<rfloor>
else \<lfloor>deny ()\<rfloor>)"
subsection {* The State Transitions and Output Function*}
subsection \<open>The State Transitions and Output Function\<close>
subsubsection{* State Transition *}
subsubsection\<open>State Transition\<close>
fun OpSuccessDB :: "(Operation \<times> DB) \<rightharpoonup> DB" where
"OpSuccessDB (createSCR u r p,S) = (case S p of \<bottom> \<Rightarrow> \<lfloor>S(p\<mapsto>\<emptyset>)\<rfloor>
@ -423,7 +423,7 @@ fun OpSuccessSigma :: "(Operation \<times> \<Sigma>) \<rightharpoonup> \<Sigma>"
(case S p of \<lfloor>lrs\<rfloor> \<Rightarrow> (case (lrs lr_id) of
\<bottom> \<Rightarrow> \<lfloor>S(p\<mapsto>(lrs(lr_id\<mapsto>us)))\<rfloor>
| \<lfloor>x\<rfloor> \<Rightarrow> \<lfloor>S\<rfloor>)
| \<bottom> \<Rightarrow> \<lfloor>S(p\<mapsto>(empty(lr_id\<mapsto>us)))\<rfloor>)"
| \<bottom> \<Rightarrow> \<lfloor>S(p\<mapsto>(Map.empty(lr_id\<mapsto>us)))\<rfloor>)"
|"OpSuccessSigma (removeLR u r p lr_id,S) =
(case S p of Some lrs \<Rightarrow> \<lfloor>S(p\<mapsto>(lrs(lr_id:=\<bottom>)))\<rfloor>
| \<bottom> \<Rightarrow> \<lfloor>S\<rfloor>)"
@ -434,7 +434,7 @@ fun OpSuccessSigma :: "(Operation \<times> \<Sigma>) \<rightharpoonup> \<Sigma>"
fun OpSuccessUC :: "(Operation \<times> \<upsilon>) \<rightharpoonup> \<upsilon>" where
"OpSuccessUC (f,u) = \<lfloor>u\<rfloor>"
subsubsection {* Output *}
subsubsection \<open>Output\<close>
type_synonym Output = unit
@ -445,7 +445,7 @@ fun OpSuccessOutput :: "(Operation) \<rightharpoonup> Output" where
fun OpFailOutput :: "Operation \<rightharpoonup> Output" where
"OpFailOutput x = \<lfloor>()\<rfloor>"
subsection {* Combine All Parts *}
subsection \<open>Combine All Parts\<close>
definition SE_LR_Policy :: "(Operation \<times> DB \<times> \<Sigma>, unit) policy" where
"SE_LR_Policy = (\<lambda>(x,x). x) o\<^sub>f (SEPolicy \<Otimes>\<^sub>\<or>\<^sub>D LR_Policy) o (\<lambda>(a,b,c). ((a,b),a,c))"

View File

@ -38,21 +38,20 @@
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
******************************************************************************)
(* $Id: ServiceExample.thy 10954 2014-11-24 12:43:29Z wolff $ *)
section {* Instantiating Our Secure Service Example *}
section \<open>Instantiating Our Secure Service Example\<close>
theory
ServiceExample
imports
Service
imports
Service
begin
text {*
text \<open>
In the following, we briefly present an instantiation of our secure service example
from the last section. We assume three different members of the health care staff and
two patients:
*}
\<close>
subsection {* Access Control Configuration *}
subsection \<open>Access Control Configuration\<close>
definition alice :: user where "alice = 1"
definition bob :: user where "bob = 2"
definition charlie :: user where "charlie = 3"
@ -60,7 +59,7 @@ definition patient1 :: patient where "patient1 = 5"
definition patient2 :: patient where "patient2 = 6"
definition UC0 :: \<upsilon> where
"UC0 = empty(alice\<mapsto>Nurse)(bob\<mapsto>ClinicalPractitioner)(charlie\<mapsto>Clerical)"
"UC0 = Map.empty(alice\<mapsto>Nurse)(bob\<mapsto>ClinicalPractitioner)(charlie\<mapsto>Clerical)"
definition entry1 :: entry where
"entry1 = (Open,alice, dummyContent)"
@ -78,19 +77,19 @@ definition SCR2 :: SCR where
"SCR2 = (Map.empty)"
definition Spine0 :: DB where
"Spine0 = empty(patient1\<mapsto>SCR1)(patient2\<mapsto>SCR2)"
"Spine0 = Map.empty(patient1\<mapsto>SCR1)(patient2\<mapsto>SCR2)"
definition LR1 :: LR where
"LR1 =(empty(1\<mapsto>{alice}))"
"LR1 =(Map.empty(1\<mapsto>{alice}))"
definition \<Sigma>0 :: \<Sigma> where
"\<Sigma>0 = (empty(patient1\<mapsto>LR1))"
"\<Sigma>0 = (Map.empty(patient1\<mapsto>LR1))"
subsection {* The Initial System State *}
subsection \<open>The Initial System State\<close>
definition \<sigma>0 :: "DB \<times> \<Sigma>\<times>\<upsilon>" where
"\<sigma>0 = (Spine0,\<Sigma>0,UC0)"
subsection{* Basic Properties *}
subsection\<open>Basic Properties\<close>
lemma [simp]: "(case a of allow d \<Rightarrow> \<lfloor>X\<rfloor> | deny d2 \<Rightarrow> \<lfloor>Y\<rfloor>) = \<bottom> \<Longrightarrow> False"
by (case_tac a,simp_all)
@ -98,7 +97,7 @@ lemma [simp]: "(case a of allow d \<Rightarrow> \<lfloor>X\<rfloor> | deny d2 \<
lemma [cong,simp]:
"((if hasLR urp1_alice 1 \<Sigma>0 then \<lfloor>allow ()\<rfloor> else \<lfloor>deny ()\<rfloor>) = \<bottom>) = False"
by (simp)
by (simp)
lemmas MonSimps = valid_SE_def unit_SE_def bind_SE_def
lemmas Psplits = option.splits unit.splits prod.splits decision.splits
@ -112,23 +111,26 @@ lemmas PolSimps = valid_SE_def unit_SE_def bind_SE_def if_splits policy2MON_def
entry3_def FunPolicy_def SE_LR_FUN_Policy_def o_def image_def UPFDefs
lemma "SE_LR_RBAC_Policy ((createSCR alice Clerical patient1),\<sigma>0)= Some (deny ())"
by (simp add: PolSimps)
lemma exBool[simp]: "\<exists>a\<Colon>bool. a" by auto
lemma deny_allow[simp]: " \<lfloor>deny ()\<rfloor> \<notin> Some ` range allow" by auto
lemma allow_deny[simp]: " \<lfloor>allow ()\<rfloor> \<notin> Some ` range deny" by auto
text{* Policy as monad. Alice using her first urp can read the SCR of patient1. *}
by (simp add: PolSimps)
lemma exBool[simp]: "\<exists>a::bool. a"
by auto
lemma deny_allow[simp]: " \<lfloor>deny ()\<rfloor> \<notin> Some ` range allow"
by auto
lemma allow_deny[simp]: " \<lfloor>allow ()\<rfloor> \<notin> Some ` range deny"
by auto
text\<open>Policy as monad. Alice using her first urp can read the SCR of patient1.\<close>
lemma
"(\<sigma>0 \<Turnstile> (os \<leftarrow> mbind [(createSCR alice Clerical patient1)] (PolMon);
"(\<sigma>0 \<Turnstile> (os \<leftarrow> mbind [(createSCR alice Clerical patient1)] (PolMon);
(return (os = [(deny (Out) )]))))"
by (simp add: PolMon_def MonSimps PolSimps)
text{* Presenting her other urp, she is not allowed to read it. *}
by (simp add: PolMon_def MonSimps PolSimps)
text\<open>Presenting her other urp, she is not allowed to read it.\<close>
lemma "SE_LR_RBAC_Policy ((appendEntry alice Clerical patient1 ei d),\<sigma>0)= \<lfloor>deny ()\<rfloor>"
by (simp add: PolSimps)
by (simp add: PolSimps)
end

View File

@ -6,8 +6,9 @@
* This file is part of HOL-TestGen.
*
* Copyright (c) 2005-2012 ETH Zurich, Switzerland
* 2008-2014 Achim D. Brucker, Germany
* 2009-2014 Université Paris-Sud, France
* 2008-2015 Achim D. Brucker, Germany
* 2009-2017 Université Paris-Sud, France
* 2015-2017 The University of Sheffield, UK
*
* All rights reserved.
*
@ -39,21 +40,20 @@
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
******************************************************************************)
(* $Id: UPF.thy 10879 2014-10-26 11:35:31Z brucker $ *)
section {* Putting Everything Together: UPF *}
section \<open>Putting Everything Together: UPF\<close>
theory
UPF
imports
Normalisation
NormalisationTestSpecification
Analysis
imports
Normalisation
NormalisationTestSpecification
Analysis
begin
text{*
text\<open>
This is the top-level theory for the Unified Policy Framework (UPF) and, thus,
builds the base theory for using UPF. For the moment, we only define a set of
lemmas covering all core UPF definitions that is useful when working with UPF:
*}
\<close>
lemmas UPFDefs = UPFCoreDefs ParallelDefs ElementaryPoliciesDefs
end

View File

@ -6,8 +6,9 @@
* This file is part of HOL-TestGen.
*
* Copyright (c) 2005-2012 ETH Zurich, Switzerland
* 2008-2014 Achim D. Brucker, Germany
* 2009-2014 Université Paris-Sud, France
* 2008-2015 Achim D. Brucker, Germany
* 2009-2017 Université Paris-Sud, France
* 2015-2017 The University of Sheffield, UK
*
* All rights reserved.
*
@ -39,18 +40,17 @@
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
******************************************************************************)
(* $Id: UPFCore.thy 10951 2014-11-21 21:54:46Z wolff $ *)
section{* The Core of the Unified Policy Framework (UPF) *}
section\<open>The Core of the Unified Policy Framework (UPF)\<close>
theory
UPFCore
imports
Monads
imports
Monads
begin
subsection{* Foundation *}
text{*
subsection\<open>Foundation\<close>
text\<open>
The purpose of this theory is to formalize a somewhat non-standard view
on the fundamental concept of a security policy which is worth outlining.
This view has arisen from prior experience in the modelling of network
@ -74,37 +74,37 @@ text{*
In more detail, we model policies as partial functions from input
data $\alpha$ (arguments, system state, security context, ...) to output
data $\beta$:
*}
\<close>
datatype '\<alpha> decision = allow '\<alpha> | deny '\<alpha>
type_synonym ('\<alpha>,'\<beta>) policy = "'\<alpha> \<rightharpoonup> '\<beta> decision" (infixr "|->" 0)
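In more familiar functional-programming terms, this core type can be sketched as follows (a hedged Haskell analogue, not part of the theory; `Nothing` plays the role of the undefined case and the decision constructors carry the output data):

```haskell
-- Illustrative sketch only, not part of the theory development.
data Decision b = Allow b | Deny b deriving (Show, Eq)
type Policy a b = a -> Maybe (Decision b)   -- Nothing models undefinedness

-- A tiny elementary policy: requests up to a limit are allowed, the rest denied.
limitPolicy :: Int -> Policy Int String
limitPolicy limit n
  | n <= limit = Just (Allow ("granted: " ++ show n))
  | otherwise  = Just (Deny "over limit")
```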
text{*In the following, we introduce a number of shortcuts and alternative notations.
The type of policies is represented as: *}
text\<open>In the following, we introduce a number of shortcuts and alternative notations.
The type of policies is represented as:\<close>
translations (type) "'\<alpha> |-> '\<beta>" <= (type) "'\<alpha> \<rightharpoonup> '\<beta> decision"
type_notation (xsymbols) "policy" (infixr "\<mapsto>" 0)
type_notation "policy" (infixr "\<mapsto>" 0)
text{* ... allowing the notation @{typ "'\<alpha> \<mapsto> '\<beta>"} for the policy type and the
text\<open>... allowing the notation @{typ "'\<alpha> \<mapsto> '\<beta>"} for the policy type and the
alternative notations for @{term None} and @{term Some} of the \HOL library
@{typ "'\<alpha> option"} type:*}
@{typ "'\<alpha> option"} type:\<close>
notation "None" ("\<bottom>")
notation "Some" ("\<lfloor>_\<rfloor>" 80)
text{* Thus, the range of a policy may consist of @{term "\<lfloor>allow x\<rfloor>"} data,
text\<open>Thus, the range of a policy may consist of @{term "\<lfloor>allow x\<rfloor>"} data,
of @{term "\<lfloor>deny x\<rfloor>"} data, as well as @{term "\<bottom>"} modeling the undefinedness
of a policy, i.e. a policy is considered as a partial function. Partial
functions are used since we describe elementary policies by partial system
behaviour, which are glued together by operators such as function override and
functional composition.
*}
\<close>
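A minimal concrete instance may help at this point (an editorial sketch, not taken from the changed file; the name demo_pol and the instantiation to nat are made up for illustration): a policy that allows even inputs, denies odd ones, and is undefined on 0.

definition demo_pol :: "nat \<mapsto> nat"  (* allows even inputs, denies odd ones, undefined at 0 *)
  where "demo_pol = (\<lambda>x. if x = 0 then \<bottom> else if x mod 2 = 0 then \<lfloor>allow x\<rfloor> else \<lfloor>deny x\<rfloor>)"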
text{* We define the two fundamental sets, the allow-set $Allow$ and the
text\<open>We define the two fundamental sets, the allow-set $Allow$ and the
deny-set $Deny$ (written $A$ and $D$ set for short), to characterize these
two main sets of the range of a policy.
*}
\<close>
definition Allow :: "('\<alpha> decision) set"
where "Allow = range allow"
@@ -112,43 +112,39 @@ definition Deny :: "('\<alpha> decision) set"
where "Deny = range deny"
subsection{* Policy Constructors *}
text{*
subsection\<open>Policy Constructors\<close>
text\<open>
Most elementary policy constructors are based on the
update operation @{thm [source] "Fun.fun_upd_def"} @{thm Fun.fun_upd_def}
and the maplet-notation @{term "a(x \<mapsto> y)"} used for @{term "a(x:=\<lfloor>y\<rfloor>)"}.
Furthermore, we add notation adapted to our problem domain: *}
Furthermore, we add notation adapted to our problem domain:\<close>
nonterminal policylets and policylet
syntax
"_policylet1" :: "['a, 'a] => policylet" ("_ /+=/ _")
"_policylet2" :: "['a, 'a] => policylet" ("_ /-=/ _")
"_policylet1" :: "['a, 'a] => policylet" ("_ /\<mapsto>\<^sub>+/ _")
"_policylet2" :: "['a, 'a] => policylet" ("_ /\<mapsto>\<^sub>-/ _")
"" :: "policylet => policylets" ("_")
"_Maplets" :: "[policylet, policylets] => policylets" ("_,/ _")
"_Maplets" :: "[policylet, policylets] => policylets" ("_,/ _")
"_MapUpd" :: "['a |-> 'b, policylets] => 'a |-> 'b" ("_/'(_')" [900,0]900)
syntax (xsymbols)
"_policylet1" :: "['a, 'a] => policylet" ("_ /\<mapsto>\<^sub>+/ _")
"_policylet2" :: "['a, 'a] => policylet" ("_ /\<mapsto>\<^sub>-/ _")
"_emptypolicy" :: "'a |-> 'b" ("\<emptyset>")
translations
"_MapUpd m (_Maplets xy ms)" \<rightleftharpoons> "_MapUpd (_MapUpd m xy) ms"
"_MapUpd m (_policylet1 x y)" \<rightleftharpoons> "m(x := CONST Some (CONST allow y))"
"_MapUpd m (_policylet2 x y)" \<rightleftharpoons> "m(x := CONST Some (CONST deny y))"
"\<emptyset>" \<rightleftharpoons> "CONST empty"
"\<emptyset>" \<rightleftharpoons> "CONST Map.empty"
text{* Here are some lemmas essentially showing syntactic equivalences: *}
lemma test: "empty(x+=a, y-= b) = \<emptyset>(x \<mapsto>\<^sub>+ a, y \<mapsto>\<^sub>- b)" by simp
text\<open>Here are some lemmas essentially showing syntactic equivalences:\<close>
lemma test: "\<emptyset>(x\<mapsto>\<^sub>+a, y\<mapsto>\<^sub>-b) = \<emptyset>(x \<mapsto>\<^sub>+ a, y \<mapsto>\<^sub>- b)" by simp
lemma test2: "p(x\<mapsto>\<^sub>+a,x\<mapsto>\<^sub>-b) = p(x\<mapsto>\<^sub>-b)" by simp
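For instance (an editorial sketch, not taken from the changed file; the integer keys and string payloads are arbitrary), a two-rule policy written in this notation yields the expected lookup result:

(* the update for key 2 determines the decision at input 2 *)
lemma "(\<emptyset>((1::int) \<mapsto>\<^sub>+ ''read'', 2 \<mapsto>\<^sub>- ''write'')) 2 = \<lfloor>deny ''write''\<rfloor>"
  by simp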
text{*
text\<open>
We inherit a fairly rich theory on policy updates from Map here. Some examples are:
*}
\<close>
lemma pol_upd_triv1: "t k = \<lfloor>allow x\<rfloor> \<Longrightarrow> t(k\<mapsto>\<^sub>+x) = t"
by (rule ext) simp
@@ -172,28 +168,24 @@ lemma pol_upd_neq1 [simp]: "m(a\<mapsto>\<^sub>+x) \<noteq> n(a\<mapsto>\<^sub>-
by(auto dest: map_upd_eqD1)
subsection{* Override Operators *}
text{*
subsection\<open>Override Operators\<close>
text\<open>
Key operators for constructing policies are the override operators. There are four different
versions of them, one of which is the override operator from the Map theory. As it is
common to compose policy rules in a ``left-to-right-first-fit'' manner, that one is taken as
the default; it is defined by a syntax translation in terms of the override operator provided
by the Map theory (which composes in the reverse order).
*}
\<close>
syntax
"_policyoverride" :: "['a \<mapsto> 'b, 'a \<mapsto> 'b] \<Rightarrow> 'a \<mapsto> 'b" (infixl "(+p/)" 100)
syntax (xsymbols)
syntax
"_policyoverride" :: "['a \<mapsto> 'b, 'a \<mapsto> 'b] \<Rightarrow> 'a \<mapsto> 'b" (infixl "\<Oplus>" 100)
translations
"p \<Oplus> q" \<rightleftharpoons> "q ++ p"
text{*
text\<open>
Some elementary facts inherited from Map are:
*}
\<close>
lemma override_empty: "p \<Oplus> \<emptyset> = p"
by simp
@@ -204,10 +196,10 @@ lemma empty_override: "\<emptyset> \<Oplus> p = p"
lemma override_assoc: "p1 \<Oplus> (p2 \<Oplus> p3) = (p1 \<Oplus> p2) \<Oplus> p3"
by simp
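The left-to-right-first-fit reading can be seen in the following sketch (not taken from the changed file; x, a, b are arbitrary): a rule of the left policy shadows a conflicting rule for the same input in the right policy.

lemma "(\<emptyset>(x \<mapsto>\<^sub>+ a)) \<Oplus> (\<emptyset>(x \<mapsto>\<^sub>- b)) = \<emptyset>(x \<mapsto>\<^sub>+ a)"
  by simp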
text{*
text\<open>
The following two operators are variants of the standard override. For override\_A,
an allow wins over a deny. For override\_D, the situation is dual.
*}
\<close>
definition override_A :: "['\<alpha>\<mapsto>'\<beta>, '\<alpha>\<mapsto>'\<beta>] \<Rightarrow> '\<alpha>\<mapsto>'\<beta>" (infixl "++'_A" 100)
where "m2 ++_A m1 =
@@ -218,9 +210,8 @@ where "m2 ++_A m1 =
| \<bottom> \<Rightarrow> m2 x)
)"
syntax (xsymbols)
syntax
"_policyoverride_A" :: "['a \<mapsto> 'b, 'a \<mapsto> 'b] \<Rightarrow> 'a \<mapsto> 'b" (infixl "\<Oplus>\<^sub>A" 100)
translations
"p \<Oplus>\<^sub>A q" \<rightleftharpoons> "p ++_A q"
@@ -230,11 +221,15 @@ lemma override_A_empty[simp]: "p \<Oplus>\<^sub>A \<emptyset> = p"
lemma empty_override_A[simp]: "\<emptyset> \<Oplus>\<^sub>A p = p"
apply (rule ext)
apply (simp add:override_A_def)
apply (case_tac "p x")
apply (simp_all)
apply (case_tac a)
apply (simp_all)
done
subgoal for x
apply (case_tac "p x")
apply (simp_all)
subgoal for a
apply (case_tac a)
apply (simp_all)
done
done
done
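A small example of the allow-wins behaviour (an editorial sketch, not taken from the changed file; x, a, b are arbitrary): an allow on the right overrides a deny for the same input on the left.

lemma "(\<emptyset>(x \<mapsto>\<^sub>- a)) \<Oplus>\<^sub>A (\<emptyset>(x \<mapsto>\<^sub>+ b)) = \<emptyset>(x \<mapsto>\<^sub>+ b)"
  apply (rule ext)
  apply (simp add: override_A_def)
  done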
lemma override_A_assoc: "p1 \<Oplus>\<^sub>A (p2 \<Oplus>\<^sub>A p3) = (p1 \<Oplus>\<^sub>A p2) \<Oplus>\<^sub>A p3"
@@ -249,7 +244,7 @@ where "m1 ++_D m2 =
| \<bottom> \<Rightarrow> m1 x
)"
syntax (xsymbols)
syntax
"_policyoverride_D" :: "['a \<mapsto> 'b, 'a \<mapsto> 'b] \<Rightarrow> 'a \<mapsto> 'b" (infixl "\<Oplus>\<^sub>D" 100)
translations
"p \<Oplus>\<^sub>D q" \<rightleftharpoons> "p ++_D q"
@@ -260,24 +255,28 @@ lemma override_D_empty[simp]: "p \<Oplus>\<^sub>D \<emptyset> = p"
lemma empty_override_D[simp]: "\<emptyset> \<Oplus>\<^sub>D p = p"
apply (rule ext)
apply (simp add:override_D_def)
apply (case_tac "p x", simp_all)
apply (case_tac a, simp_all)
done
subgoal for x
apply (case_tac "p x", simp_all)
subgoal for a
apply (case_tac a, simp_all)
done
done
done
lemma override_D_assoc: "p1 \<Oplus>\<^sub>D (p2 \<Oplus>\<^sub>D p3) = (p1 \<Oplus>\<^sub>D p2) \<Oplus>\<^sub>D p3"
apply (rule ext)
apply (simp add: override_D_def split: decision.splits option.splits)
done
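Dually (again an editorial sketch with arbitrary x, a, b), a deny overrides an allow for the same input under \<Oplus>\<^sub>D:

lemma "(\<emptyset>(x \<mapsto>\<^sub>+ a)) \<Oplus>\<^sub>D (\<emptyset>(x \<mapsto>\<^sub>- b)) = \<emptyset>(x \<mapsto>\<^sub>- b)"
  apply (rule ext)
  apply (simp add: override_D_def)
  done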
subsection{* Coercion Operators *}
text{*
subsection\<open>Coercion Operators\<close>
text\<open>
Often, especially when combining policies of different types, it is necessary to
adapt the input or output domain of a policy to a more refined context.
*}
\<close>
text{*
text\<open>
An analogous operator for the range of a policy is defined as follows:
*}
\<close>
definition policy_range_comp :: "['\<beta>\<Rightarrow>'\<gamma>, '\<alpha>\<mapsto>'\<beta>] \<Rightarrow> '\<alpha> \<mapsto>'\<gamma>" (infixl "o'_f" 55)
where
@@ -286,22 +285,21 @@ where
| \<lfloor>deny y\<rfloor> \<Rightarrow> \<lfloor>deny (f y)\<rfloor>
| \<bottom> \<Rightarrow> \<bottom>)"
syntax (xsymbols)
syntax
"_policy_range_comp" :: "['\<beta>\<Rightarrow>'\<gamma>, '\<alpha>\<mapsto>'\<beta>] \<Rightarrow> '\<alpha> \<mapsto>'\<gamma>" (infixl "o\<^sub>f" 55)
translations
"p o\<^sub>f q" \<rightleftharpoons> "p o_f q"
lemma policy_range_comp_strict : "f o\<^sub>f \<emptyset> = \<emptyset>"
apply (rule ext)
apply (simp add: policy_range_comp_def)
done
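For example (an editorial sketch, not taken from the changed file; the coercion function and the single entry are chosen arbitrarily), the coercion is applied inside the decision while the decision itself is preserved:

lemma "((\<lambda>n::nat. n + 1) o\<^sub>f \<emptyset>(x \<mapsto>\<^sub>+ (2::nat))) x = \<lfloor>allow 3\<rfloor>"
  by (simp add: policy_range_comp_def)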
text{*
text\<open>
A generalized version, where separate coercion functions are applied to the result
depending on the decision of the policy, is as follows:
*}
\<close>
definition range_split :: "[('\<beta>\<Rightarrow>'\<gamma>)\<times>('\<beta>\<Rightarrow>'\<gamma>),'\<alpha> \<mapsto> '\<beta>] \<Rightarrow> '\<alpha> \<mapsto> '\<gamma>"
(infixr "\<nabla>" 100)
@@ -313,56 +311,67 @@ where "(P) \<nabla> p = (\<lambda>x. case p x of
lemma range_split_strict[simp]: "P \<nabla> \<emptyset> = \<emptyset>"
apply (rule ext)
apply (simp add: range_split_def)
done
lemma range_split_charn:
"(f,g) \<nabla> p = (\<lambda>x. case p x of
\<lfloor>allow x\<rfloor> \<Rightarrow> \<lfloor>allow (f x)\<rfloor>
| \<lfloor>deny x\<rfloor> \<Rightarrow> \<lfloor>deny (g x)\<rfloor>
| \<bottom> \<Rightarrow> \<bottom>)"
apply (simp add: range_split_def)
apply (rule ext)
apply (case_tac "p x")
apply (simp_all)
apply (case_tac "a")
apply (simp_all)
done
subgoal for x
apply (case_tac "p x")
apply (simp_all)
subgoal for a
apply (case_tac "a")
apply (simp_all)
done
done
done
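For example (an editorial sketch, not taken from the changed file; the two coercions and the single entry are illustrative only), the first function of the pair acts on allow-results and the second on deny-results:

lemma "((\<lambda>n::nat. n + 1, \<lambda>n::nat. n + 2) \<nabla> \<emptyset>(x \<mapsto>\<^sub>- (3::nat))) x = \<lfloor>deny 5\<rfloor>"
  by (simp add: range_split_charn)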
text{*
text\<open>
The connection between these two becomes apparent when considering the following lemma:
*}
\<close>
lemma range_split_vs_range_compose: "(f,f) \<nabla> p = f o\<^sub>f p"
by(simp add: range_split_charn policy_range_comp_def)
lemma range_split_id [simp]: "(id,id) \<nabla> p = p"
apply (rule ext)
apply (simp add: range_split_charn id_def)
apply (case_tac "p x")
apply (simp_all)
apply (case_tac "a")
apply (simp_all)
done
subgoal for x
apply (case_tac "p x")
apply (simp_all)
subgoal for a
apply (case_tac "a")
apply (simp_all)
done
done
done
lemma range_split_bi_compose [simp]: "(f1,f2) \<nabla> (g1,g2) \<nabla> p = (f1 o g1,f2 o g2) \<nabla> p"
apply (rule ext)
apply (simp add: range_split_charn comp_def)
apply (case_tac "p x")
apply (simp_all)
apply (case_tac "a")
apply (simp_all)
done
subgoal for x
apply (case_tac "p x")
apply (simp_all)
subgoal for a
apply (case_tac "a")
apply (simp_all)
done
done
done
text{*
text\<open>
The next three operators are rather exotic and in most cases not used.
*}
\<close>
text {*
text \<open>
The following is a variant of range\_split, where the change in the decision depends
on the input instead of the output.
*}
\<close>
definition dom_split2a :: "[('\<alpha> \<rightharpoonup> '\<gamma>) \<times> ('\<alpha> \<rightharpoonup>'\<gamma>),'\<alpha> \<mapsto> '\<beta>] \<Rightarrow> '\<alpha> \<mapsto> '\<gamma>" (infixr "\<Delta>a" 100)
where "P \<Delta>a p = (\<lambda>x. case p x of
@@ -382,11 +391,11 @@ where "P \<nabla>2 p = (\<lambda>x. case p x of
| \<lfloor>deny y\<rfloor> \<Rightarrow> \<lfloor>deny (y,(snd P) x)\<rfloor>
| \<bottom> \<Rightarrow> \<bottom>)"
text{*
text\<open>
The following operator is used for transition policies only: a transition policy is transformed
into a state-exception monad. Such a monad can, for example, be used for test case generation
using HOL-TestGen~\cite{brucker.ea:theorem-prover:2012}.
*}
\<close>
definition policy2MON :: "('\<iota>\<times>'\<sigma> \<mapsto> 'o\<times>'\<sigma>) \<Rightarrow> ('\<iota> \<Rightarrow>('o decision,'\<sigma>) MON\<^sub>S\<^sub>E)"
where "policy2MON p = (\<lambda> \<iota> \<sigma>. case p (\<iota>,\<sigma>) of

View File

@@ -8,7 +8,7 @@
# {\providecommand{\isbn}{\textsc{isbn}} }
# {\providecommand{\Cpp}{C++} }
# {\providecommand{\Specsharp}{Spec\#} }
# {\providecommand{\doi}[1]{\href{http://dx.doi.org/#1}{doi:
# {\providecommand{\doi}[1]{\href{https://doi.org/#1}{doi:
{\urlstyle{rm}\nolinkurl{#1}}}}} }
@STRING{conf-sacmat="ACM symposium on access control models and technologies
(SACMAT)" }
@@ -319,7 +319,7 @@
revocation are provided, and proofs are given for the
important properties of our delegation framework.},
issn = {0306-4379},
doi = {http://dx.doi.org/10.1016/j.is.2005.11.008},
doi = {https://doi.org/10.1016/j.is.2005.11.008},
publisher = pub-elsevier,
address = {Oxford, UK, UK},
tags = {ReadingList, SoKNOS},

View File

@@ -1,4 +1,5 @@
\documentclass[11pt,DIV10,a4paper,twoside=semi,openright,titlepage]{scrreprt}
\usepackage[T1]{fontenc}
\usepackage{fixltx2e}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Overrides the (rightfully issued) warning by Koma Script that \rm

9
config
View File

@@ -1,9 +0,0 @@
# -*- shell-script -*-
# Get email when automated build fails. May be empty.
# values: "email1 email2 .. emailn"
NOTIFY="adbrucker@0x5f.org wolff@lri.fr lukas.a.bruegger@gmail.com"
# Participate in frequent (nightly) build (only for small submissions)
# values: "yes" "no"
FREQUENT="no"