# Imports
```python
%matplotlib widget
import numpy as np
import xarray as xr
from scipy.stats import norm
from eomaps import Maps

sig0_dc = xr.open_dataset('../data/s1_parameters/S1_CSAR_IWGRDH/SIG0/V1M1R1/EQUI7_EU020M/E054N006T3/SIG0_20180228T043908__VV_D080_E054N006T3_EU020M_V1M1R1_S1AIWGRDH_TUWIEN.nc')
```
These so-called posteriors need one more piece of information, as can be seen in the equation above. We need the probability that a pixel is flooded \(P(F)\) or not flooded \(P(NF)\). Of course, these are the figures we’ve been trying to find this whole time. We don’t actually have them yet, so what can we do? In Bayesian statistics, we can simply start with our best guess. These guesses are called our “priors”, because they are the beliefs we hold prior to looking at the data. This subjective prior belief is the foundation of Bayesian statistics, and we use the likelihoods we just calculated to update our belief in this particular hypothesis. This updated belief is called the “posterior”.
Let’s say that our best estimate for the chance of flooding versus non-flooding of a pixel is 50-50: a coin flip. We can now also calculate the probability of backscattering \(P(\sigma^0)\) as the weighted average of the water and land likelihoods, ensuring that our posteriors range between 0 and 1.
The following code block shows how we calculate the posteriors.
```python
def calc_posteriors(water_likelihood, land_likelihood):
    # P(sigma^0): weighted average of the likelihoods under 50-50 priors
    evidence = (water_likelihood * 0.5) + (land_likelihood * 0.5)
    # Bayes' rule: posterior = likelihood * prior / evidence
    return (water_likelihood * 0.5) / evidence, (land_likelihood * 0.5) / evidence
```
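As a quick sanity check, we can feed `calc_posteriors` two hypothetical Gaussian likelihood values (the means and standard deviations below are made up for illustration, not the chapter's fitted parameters). Because the evidence term normalizes the weighted likelihoods, the two posteriors always sum to one:

```python
from scipy.stats import norm

def calc_posteriors(water_likelihood, land_likelihood):
    # 50-50 priors: evidence is the average of the two likelihoods
    evidence = (water_likelihood * 0.5) + (land_likelihood * 0.5)
    return (water_likelihood * 0.5) / evidence, (land_likelihood * 0.5) / evidence

# Hypothetical Gaussian likelihoods (made-up parameters, backscatter in dB):
# open water tends to backscatter less than land.
sig0 = -18.0
water_lik = norm.pdf(sig0, loc=-20.0, scale=2.0)
land_lik = norm.pdf(sig0, loc=-12.0, scale=3.0)

water_post, land_post = calc_posteriors(water_lik, land_lik)
print(round(water_post + land_post, 6))  # 1.0
```

Note that with equal priors the 0.5 factors cancel, so the posteriors reduce to each likelihood divided by the sum of both likelihoods.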
2.5 Flood Classification
We are now ready to combine all this information and classify the pixels according to the probability of flooding given the backscatter value of each pixel. Here we simply check whether the probability of flooding is higher than that of non-flooding:
```python
def bayesian_flood_decision(id, sig0_dc):
    nf_post_prob, f_post_prob = calc_posteriors(*calc_likelihoods(id, sig0_dc))
    # Flag a pixel as flooded when the flood posterior dominates
    return np.greater(f_post_prob, nf_post_prob)
```
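The helper `calc_likelihoods` is defined earlier in the chapter and not repeated here. As a rough, self-contained sketch of the same decision chain (with a hypothetical likelihood function and made-up Gaussian parameters rather than the chapter's fitted ones), the logic might look like:

```python
import numpy as np
from scipy.stats import norm

def calc_likelihoods_sketch(sig0):
    # Hypothetical Gaussian likelihoods for water and land backscatter (dB);
    # the chapter derives the real distributions from reference data.
    water_likelihood = norm.pdf(sig0, loc=-20.0, scale=2.0)
    land_likelihood = norm.pdf(sig0, loc=-12.0, scale=3.0)
    return water_likelihood, land_likelihood

def calc_posteriors(water_likelihood, land_likelihood):
    evidence = (water_likelihood * 0.5) + (land_likelihood * 0.5)
    return (water_likelihood * 0.5) / evidence, (land_likelihood * 0.5) / evidence

def flood_decision_sketch(sig0):
    # Here the flood posterior comes first, matching this sketch's
    # water-first ordering in calc_likelihoods_sketch
    f_post_prob, nf_post_prob = calc_posteriors(*calc_likelihoods_sketch(sig0))
    return np.greater(f_post_prob, nf_post_prob)

print(flood_decision_sketch(np.array([-22.0, -10.0])))  # [ True False]
```

Because `norm.pdf` broadcasts over NumPy arrays, the same comparison classifies a whole datacube of pixels at once: low backscatter values fall under the water distribution and are flagged as flooded.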