% Lecture 9: 08 October 2014
\sektion{9}{Information flow and multi-level security}
Information flow: how to control propagation of information within a program or
between programs on a system where there is some confidentiality requirement.
Consider a program $P(v, s, r)$:
\begin{itemize}
\item $v$: visible (public) input
\item $s$: secret input
\item $r$: random seed
\end{itemize}
Output: all visible actions of the program. Goal: the output shouldn't ``leak'' $s$.
Does the output of $P$ leak information about $s$? Define a game against an
adversary guessing between two possible secrets (similar to semantic security). To avoid leakage, the distribution of outputs must be independent of $s$ for every possible value of $v$.
Game:
\begin{itemize}
\item adversary chooses $v$, $s_0$, $s_1$
\item we announce $P(v, s_b, r)$, where $b \in \{0,1\}$ and $r$ are secret and random
\item adversary guesses $b$
\end{itemize}
We say $P$ doesn't leak $s$ if the adversary can't guess $b$ correctly with probability non-negligibly greater than $50\%$ (assuming a computationally limited adversary).
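A minimal sketch of this game as an experiment harness (Python, for illustration; the \texttt{P} and \texttt{adversary} interfaces here are assumptions, not from the lecture):
\begin{verbatim}
import secrets

def leakage_game(P, adversary, trials=10000):
    # adversary.choose() returns (v, s0, s1); adversary.guess(v, out)
    # returns a bit. Both are hypothetical interfaces.
    wins = 0
    for _ in range(trials):
        v, s0, s1 = adversary.choose()
        b = secrets.randbelow(2)        # hidden challenge bit
        r = secrets.token_bytes(16)     # hidden random seed
        out = P(v, (s0, s1)[b], r)
        if adversary.guess(v, out) == b:
            wins += 1
    return wins / trials  # ~0.5 for every adversary iff P doesn't leak s
\end{verbatim}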
How do we enforce non-leakiness?
Unlike previous properties, we cannot enforce this by watching $P$ run:
just because no output came out doesn't mean there wasn't a leak (the ``dog that
didn't bark'' problem). We can't establish this property by testing; it is inherently necessary to consider what-ifs that differ from what was actually observed.
Also, in practice the requirements are more complex (richer labels, and different labels on different data).
Generalization:
\begin{itemize}
\item label information (e.g. inputs)
\item put requirements on outputs
\item enforce that outputs respect requirements
\end{itemize}
\subsektion{Lattice model}
A general model for information flow policies.
\begin{definition}{Lattice}
$(S, \sqsubseteq)$, where $S$ is a set of states and $\sqsubseteq$ is a partial
order such that any $a, b \in S$ have a least upper bound and a
greatest lower bound.
partial order:
\begin{itemize}
\item reflexive: $a \sqsubseteq a$
\item transitive: if $a \sqsubseteq b$ and $b \sqsubseteq c$, then
$a \sqsubseteq c$
\item antisymmetric: if $a \sqsubseteq b$ and $b \sqsubseteq a$, then $a = b$
\end{itemize}
least upper bound $U$ of $a, b$:
\begin{itemize}
\item $a \sqsubseteq U$ and $b \sqsubseteq U$, and for all $V \in S$,
$a \sqsubseteq V$ and $b \sqsubseteq V$ $\Rightarrow$
$U \sqsubseteq V$
\end{itemize}
greatest lower bound $L$ of $a, b$:
\begin{itemize}
\item $L \sqsubseteq a$ and $L \sqsubseteq b$, and for all $V \in S$,
$V \sqsubseteq a$ and $V \sqsubseteq b$ $\Rightarrow$
$V \sqsubseteq L$
\end{itemize}
\end{definition}
\begin{example}{Lattices}
\begin{enumerate}
\item linear chain of labels:
public $\sqsubseteq$ confidential;
unclassified $\sqsubseteq$ classified $\sqsubseteq$ secret
$\sqsubseteq$ top secret
\item compartments (e.g.\ project, client ID, job function):
a state is a set of labels, $\sqsubseteq$ is the subset relation
\item org chart:
a state is a node in the chart, $\sqsubseteq$ is the ancestor/descendant relation
\item combination/cross product of two lattices $S_1, S_2$:
a state is a pair $(A, B)$ with $A \in S_1$ and $B \in S_2$, and
$(A_1, B_1) \sqsubseteq (A_2, B_2)$ iff
$A_1 \sqsubseteq A_2$ and $B_1 \sqsubseteq B_2$
\end{enumerate}
\end{example}
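As a concrete illustration of example 2, here is a minimal sketch (Python, not from the lecture) of the compartment lattice, where $\sqsubseteq$ is subset inclusion:
\begin{verbatim}
# Compartment lattice: a state is a set of compartment labels.
def leq(a, b):           # a is below b  iff  a is a subset of b
    return a <= b

def lub(a, b):           # least upper bound = union
    return a | b

def glb(a, b):           # greatest lower bound = intersection
    return a & b

# e.g. lub({"projX"}, {"clientA"}) == {"projX", "clientA"}
\end{verbatim}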
\subsektion{Information flow in a program}
At each point in the program, every variable has a state/label that comes from the
lattice we're using. Inputs are tagged with a state; outputs are tagged with a requirement. States are propagated
as code executes.
Example: $a = b + c$; State($a$) = LUB(State($b$), State($c$))
[LUB = least upper bound]
Before producing output, check that the state of the output value is consistent with the
policy. (For example, only unclassified output may be allowed.) Formally, ensure that State($v$) $\sqsubseteq L$, where $L$ is the required policy level.
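A sketch of this propagation rule, reusing the lattice helpers above (illustrative only):
\begin{verbatim}
# Every value carries a label; operations join labels with LUB.
class Labelled:
    def __init__(self, value, label):
        self.value, self.label = value, frozenset(label)
    def __add__(self, other):   # State(a+b) = LUB(State(a), State(b))
        return Labelled(self.value + other.value,
                        lub(self.label, other.label))

def output(x, policy):
    if not leq(x.label, policy):   # require State(x) below policy L
        raise PermissionError("output label exceeds policy")
    print(x.value)

b = Labelled(2, set())             # public: empty compartment set
c = Labelled(3, {"secret"})
output(b + c, set())               # raises: sum is labelled {"secret"}
\end{verbatim}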
But this isn't enough (only monitoring and rejecting when inconsistent with
policy) -- the ``dog that didn't bark'' again:
\begin{verbatim}
// State(a) = secret
// State(c) = State(d) = public
b = c;
if (a > 5) b = d; // implicit flow: needs static (compile-time) analysis
output b;         // says something about a, so should be labelled secret
\end{verbatim}
Static analysis won't catch all of the leaks, but it will catch some of them.
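One standard static technique (not spelled out in the lecture, so treat this as a hedged sketch) is to track a program-counter label: inside a branch whose condition involves $a$, every assignment also picks up State($a$), whether or not it executes:
\begin{verbatim}
# Static sketch: compute b's label by joining over both branch outcomes.
def label_after_if(label_a, label_c, label_d):
    pc = label_a                  # branch condition mentions a
    b_then = lub(label_d, pc)     # b = d inside the branch
    b_skip = label_c              # branch not taken: b keeps c's label
    return lub(b_then, b_skip)    # join of the two paths

# label_after_if({"secret"}, set(), set()) == {"secret"},
# so "output b" at the public level is rejected on every run.
\end{verbatim}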
\begin{description}
\item[Problem 1:] the analysis is conservative, so it rejects many programs that don't actually leak
\item[Problem 2:] timing might depend on secret values (a covert timing channel)
\end{description}
{\bf What if you can't prevent a program from leaking the information it has?}
Conservative assumption (contagion model): every program leaks all of its inputs to all of its outputs.
{\bf Bell-LaPadula model}: lattice-based information flow for programs and files
\begin{itemize}
\item every program has a state (from lattice): what it's allowed to access
\item every file has a state: what it contains
\item Rule 1: ``No Read Up'' - Program $P$ can read File $F$ only if State($F$)
$\sqsubseteq$ State($P$)
\item Rule 2: ``No Write Down'' - Program $P$ can write File $F$ only if
State($P$) $\sqsubseteq$ State($F$)
\end{itemize}
\begin{theorem*} If State($F_1$) $\sqsubseteq$ State($F_2$) with State($F_1$) $\neq$ State($F_2$), and the two rules are
enforced, then information from $F_2$ cannot leak into $F_1$.
\end{theorem*}
(Any such flow would require some program $P$ that reads $F_2$ and writes $F_1$, i.e.\
State($F_2$) $\sqsubseteq$ State($P$) $\sqsubseteq$ State($F_1$), contradicting the assumption.)
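A minimal reference-monitor sketch of the two rules, again over the subset lattice (illustrative only):
\begin{verbatim}
def blp_can_read(prog, file):      # "No Read Up"
    return leq(file, prog)         # State(F) below State(P)

def blp_can_write(prog, file):     # "No Write Down"
    return leq(prog, file)         # State(P) below State(F)

# A {"secret"} program may read public and {"secret"} files, but may
# only write files labelled {"secret"} or higher -- so nothing it read
# can end up in a public file.
\end{verbatim}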
Problems:
\begin{enumerate}
\item exceptions (need to make explicit loopholes in system to allow)
\begin{itemize}
\item declassify/unprotect old data
\item what about encryption (hope ``secret'' ciphertext doesn't leak plaintext)
\item aggregate/``anonymized'' data
\item policy decision to make exception
\end{itemize}
\item usability -- the system can't tell you that a directory you're trying to
delete contains classified files, or that there's no disk space left for
the file you're adding (saying so would itself leak information)
\item outside channels - people talk to each other outside the system
\end{enumerate}
This, so far, has been about confidentiality. Can we do the same thing for
integrity?
\begin{itemize}
\item State: level of trust in integrity of information
\item ensure high-integrity data doesn't depend on low-integrity inputs
(try to avoid the GIGO problem: garbage in, garbage out)
\end{itemize}
{\bf Biba model}: (B-LP for integrity)
\begin{itemize}
\item Label/state: how much we trust the program with respect to integrity / how important the file is
\item Rule 1: ``No Read Down'' -- Program $P$ can read File $F$ only if
State($P$) $\sqsubseteq$ State($F$)
\item Rule 2: ``No Write Up'' -- Program $P$ can write File $F$ only if
State($F$) $\sqsubseteq$ State($P$)
\end{itemize}
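The corresponding checks are the duals of the B-LP ones (sketch, same lattice helpers as above):
\begin{verbatim}
def biba_can_read(prog, file):     # "No Read Down"
    return leq(prog, file)         # file at least as trusted as program

def biba_can_write(prog, file):    # "No Write Up"
    return leq(file, prog)         # program at least as trusted as file
\end{verbatim}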
Can we use the B-LP model and the Biba model at the same time?
\begin{itemize}
\item if we use the same labels for both (high confidentiality = high integrity),
then there is no communication between levels
\item if we use different labels, then some information flows become possible, but
the system can be much more difficult for users
\item result: usually focus on confidentiality or integrity and let humans
worry about the other one outside of the system
\end{itemize}
\sidenote{{\bf Back to crypto...}
Secret sharing:
\begin{itemize}
\item divide a secret into ``shares'' so that all shares are required to
reconstruct the secret
\begin{itemize}
\item 2-way: pick a large value $M$, secret is some $s$,
$0 \le s < M$\\
pick $r$ randomly, $0 \le r < M$\\
shares are $r$, $(s-r) \mod M$\\
to reconstruct, add shares $\mod M$
\item $k$-way: shares are $r_0, r_1, \dots, r_{k-2}$ random, plus
$(s - (r_0 + \cdots + r_{k-2})) \mod M$
\item can also use degree-$(k-1)$ polynomials (Shamir secret sharing) so
that $k$ values are needed to reconstruct
\end{itemize}
\item suppose the RSA private key is $(d, N)$; create shares
$(d_1, N), (d_2, N), (d_3, N)$ such that
$d_1 + d_2 + d_3 \equiv d \pmod{(p-1)(q-1)}$. Then
$\left(X^{d_1} \bmod N\right)\left(X^{d_2} \bmod N\right)\left(X^{d_3} \bmod N\right) \bmod N =
X^{(d_1 + d_2 + d_3) \bmod (p-1)(q-1)} \bmod N = X^d \bmod N$
(splits up an RSA operation)
\end{itemize}
}
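A sketch of the $k$-way additive sharing and the split RSA operation from the side note (the primes and message below are toy parameters chosen for illustration, far too small to be secure):
\begin{verbatim}
import secrets

def share(s, k, M):                # k-way additive sharing mod M
    parts = [secrets.randbelow(M) for _ in range(k - 1)]
    parts.append((s - sum(parts)) % M)  # last share makes the sum work
    return parts

def reconstruct(parts, M):         # all k shares are required
    return sum(parts) % M

# Splitting an RSA private-key operation: share d mod (p-1)(q-1), then
# multiply the partial exponentiations mod N.
p, q, e = 61, 53, 17               # toy primes (insecure sizes)
N, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)                # private exponent

d1, d2, d3 = share(d, 3, phi)
X = 42
joint = (pow(X, d1, N) * pow(X, d2, N) * pow(X, d3, N)) % N
assert joint == pow(X, d, N)       # same result as one RSA operation
\end{verbatim}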