Commit: Feat/518 calculate eal (#531)
* chore: renaming representative_damage_percentile

* feat: risk_calculation_factory/base and its inherited classes are created. risk_calculation_mode/year added to the analysis_config

* feat: risk_calculation_factory returns the correct RiskCalculationBase classes (see the sketch after this commit message)

* feat: self._to_integrate = self.rework_damage_data() added to each risk_calculation class

* chore: renaming to _rework_damage_data

* feat: minor + creating the risk column in self.result

* feat: aggregate_wl is added to analysis_config_data; can be read from ini and config.

* chore: aggregate_wl removed from AnalysisSectionLosses, because aggregate_wl is now part of AnalysisConfigData

* chore: self.analysis_config.config_data.aggregate_wl is used in _get_disrupted_criticality_analysis_results

* chore: aggregate_wl_suffix added to filter the criticality_results that have any EV or RP columns carrying the suffix of the above aggregate_wl.

* chore: analysis_config_data_reader updated

* chore: analysis_config_data_reader updated

* chore: re package imported correctly

* chore: re package imported correctly

* chore: removed aggregate_wl from test analysis.ini

* chore: aggregate_wl added to the config of the test

* feat: EventTypeEnum NONE added and analysis.ini is updated for correcting failing tests

* feat: aggregate_wl is removed from analysis.ini for correcting failing tests

* feat: aggregate_wl is removed from analysis.ini for correcting failing tests

* chore: number of errors reduced. event_type default has become NONE instead of INVALID; the latter used to raise an error.

* chore: getting the return_periods is revised to get the numbers in the column names

* chore: cut_from_year rework function is updated to return correct _to_integrate

* test: updating test_calc_vlh for the cases with more than one event and different aggregate_wl[:2]

* test: resilience_curve updated. Test assertion added. expected_results updated to include the second resilience curve

* test: resilience_curve updated and relevant tests are updated

* test: TestRiskCalculation created

* test: TestRiskCalculation minor change

* test: TestRiskCalculation expected data updated

* chore: Unused import removed + .gitignore update

* Update tests/analysis/losses/resilience_curves/test_resilience_curves.py

chore: a number made float

Co-authored-by: Ardt Klapwijk <[email protected]>

* chore: ABC inheritance removed

* chore: attributes with type hints are added to the class and removed from __init__

* chore: Unused imports are removed

* chore: passing key (k) is simplified

* chore: converted the function to an internal method of _get_to_integrate

* chore: inplace used instead of reassigning

* chore: conditions aggregated for readability

* chore: local variable is modified

* chore: super().__init__ is removed and _rework_damage_data usage moved to the base class

* chore: list made set + commented lines removed

* chore: self._to_integrate is returned at the end and not set in the method

* chore: assert exists() added

* chore: self._rework_damage_data() moved to _get_to_integrate

* chore: self._rework_damage_data() returned

* chore: self._rework_damage_data() does not return but set self._to_integrate

* chore: cut_from_year test updated + rework_to_integrate sets self._to_integrate

* chore: cut_from_year test updated

* chore: cut_from_year test updated

* chore: cut_from_year test updated

* test: factory test is made

* test: a fixture is added

* feat: get_suffix created for AggregateWlEnum

* chore: improvements for a correct encapsulation implementation

* chore: module test_risk_calculation_factory created

* chore: __post_init__ removed

* chore: properties protected

* chore: integrate_df_trapezoidal updated to avoid changing self._to_integrate

* chore: test updated regarding the protected attributes

* chore: get_wl_prefix updated

* chore: leading _ removed from the fixtures

* chore: fixtures moved to conftest + factory test in new module

* test: get_wl_prefix test is added

* test: get_wl_prefix test is modified to include expected prefixes

* test: get_wl_prefix test fixed

* chore: .gitignore updated

* chore: minor

* chore: minor

* chore: usage of abstract classes corrected and dependencies are updated accordingly

* Changed get_disruption to calculate_disruption

* chore: id test updated

* chore: conftest created at the lowest level. id naming also changed

* chore: prefix replaced with abbr

* chore: properties defined in __init__

* chore: _return_periods property and read-only (annotated) return_periods property

* chore: fixture added to the naming and reference

* chore: performance_key set once

* fix: average renamed (#572)

* Fix/573 update rfid c in the graph simple after segmentation (#574)

* fix: lanes added

* feat: raise error if not all attributes_to_include are in the gdf

* feat: reverted raising an error if not all attributes_to_include are in the gdf

* feat: function created and used in nut to update rfid_c

* feat: function used in osm_network_wrapper in nut to update rfid_c

* Update ra2ce/network/networks_utils.py

chore: inline comments changed

Co-authored-by: Carles S. Soriano Pérez <[email protected]>

* chore: applied the review comments on a previously made function + used deepcopy()

* chore: use of deepcopy is corrected

---------

Co-authored-by: Carles S. Soriano Pérez <[email protected]>

---------

Co-authored-by: Ardt Klapwijk <[email protected]>
Co-authored-by: Cham8920 <[email protected]>
Co-authored-by: Carles S. Soriano Pérez <[email protected]>
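
A rough sketch of the risk-calculation hierarchy the commit messages above describe (referenced from the risk_calculation_factory item). Only the names RiskCalculationFactory, get_risk_calculation, RiskCalculationBase, _rework_damage_data, _to_integrate and get_integration_of_df_trapezoidal appear in this commit; the enum members, the subclass name and its rework logic, and the vlh_RPxxx_total column pattern are illustrative assumptions:

    # Hypothetical sketch, not the merged implementation.
    from enum import Enum

    import geopandas as gpd
    import numpy as np
    import pandas as pd


    class RiskCalculationModeEnum(Enum):  # stand-in for the ra2ce enum
        NONE = 0
        CUT_FROM_YEAR = 1
        INVALID = 99


    class RiskCalculationBase:
        def __init__(self, risk_calculation_year: int, losses_gdf: gpd.GeoDataFrame):
            self.risk_calculation_year = risk_calculation_year
            self.losses_gdf = losses_gdf
            self._to_integrate = pd.DataFrame()

        def _rework_damage_data(self) -> None:
            # Each subclass reshapes the per-return-period loss columns it needs.
            raise NotImplementedError

        def get_integration_of_df_trapezoidal(self) -> np.ndarray:
            self._rework_damage_data()
            # Integrate losses over exceedance frequency (1 / return period).
            _return_periods = sorted(self._to_integrate.columns, reverse=True)
            _frequencies = [1 / _rp for _rp in _return_periods]
            return np.trapz(
                self._to_integrate[_return_periods].to_numpy(), x=_frequencies, axis=1
            )


    class RiskCalculationCutFromYear(RiskCalculationBase):
        def _rework_damage_data(self) -> None:
            # Assumption: collect vlh_RPxxx_total columns, keyed by return period,
            # and drop the return periods below risk_calculation_year.
            _columns = {
                int(_c.split("RP")[1].split("_")[0]): _c
                for _c in self.losses_gdf.columns
                if _c.startswith("vlh_RP") and _c.endswith("_total")
            }
            _kept = {
                _rp: _c for _rp, _c in _columns.items()
                if _rp >= self.risk_calculation_year
            }
            self._to_integrate = self.losses_gdf[list(_kept.values())]
            self._to_integrate.columns = list(_kept.keys())


    class RiskCalculationFactory:
        @staticmethod
        def get_risk_calculation(
            risk_calculation_mode: RiskCalculationModeEnum,
            risk_calculation_year: int,
            losses_gdf: gpd.GeoDataFrame,
        ) -> RiskCalculationBase:
            if risk_calculation_mode == RiskCalculationModeEnum.CUT_FROM_YEAR:
                return RiskCalculationCutFromYear(risk_calculation_year, losses_gdf)
            raise NotImplementedError(f"No risk calculation for {risk_calculation_mode}")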
4 people authored Sep 10, 2024
1 parent 31da454 commit 2882876
Showing 37 changed files with 1,587 additions and 955 deletions.
1 change: 0 additions & 1 deletion examples/data/isolated_locations/analyses.ini
@@ -4,7 +4,6 @@ name = isolated_locations
 [analysis1]
 name = multi link isolated locations
 analysis = multi_link_isolated_locations
-aggregate_wl = max
 threshold = 0.1
 weighing = length
 buffer_meters = 100


@@ -4,7 +4,6 @@ name = beira
 [analysis1]
 name = multi link origin destination
 analysis = multi_link_origin_destination
-aggregate_wl = max
 threshold = 0.5
 weighing = distance
 calculate_route_without_disruption = True
@@ -4,7 +4,6 @@ name = beira
 [analysis1]
 name = multi link origin closest destination
 analysis = multi_link_origin_closest_destination
-aggregate_wl = max
 threshold = 0.5
 weighing = distance
 calculate_route_without_disruption = True
@@ -4,7 +4,6 @@ name = beira
 [analysis1]
 name = multi link origin closest destination without hazard
 analysis = multi_link_origin_closest_destination
-aggregate_wl = None
 threshold = None
 weighing = distance
 calculate_route_without_disruption = True
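
All four analyses.ini diffs above drop aggregate_wl from the [analysis*] section: after this commit the value is taken from the network configuration instead (see the analysis_config_wrapper.py diff below, which copies network_config.config_data.hazard.aggregate_wl). A minimal sketch of where the setting now lives; only the aggregate_wl key is confirmed by this commit, the other key is illustrative:

    ; network.ini (sketch)
    [hazard]
    hazard_map = example_hazard_map.tif
    aggregate_wl = max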
11 changes: 9 additions & 2 deletions ra2ce/analysis/analysis_config_data/analysis_config_data.py
@@ -108,7 +108,6 @@ class AnalysisSectionLosses(AnalysisSectionBase):
     values_of_time_file: Optional[Path] = None
     # the redundancy analysis) and the intensities
     # accessibility analyses
-    aggregate_wl: AggregateWlEnum = field(default_factory=lambda: AggregateWlEnum.NONE)
     threshold: float = 0.0
     threshold_destinations: float = math.nan
     uniform_duration: float = math.nan
@@ -120,6 +119,13 @@ class AnalysisSectionLosses(AnalysisSectionBase):
     category_field_name: str = ""
     save_traffic: bool = False
 
+    # risk or estimated annual losses related
+    event_type: EventTypeEnum = field(default_factory=lambda: EventTypeEnum.NONE)
+    risk_calculation_mode: RiskCalculationModeEnum = field(
+        default_factory=lambda: RiskCalculationModeEnum.NONE
+    )
+    risk_calculation_year: int = 0
+
 
 @dataclass
 class AnalysisSectionDamages(AnalysisSectionBase):
@@ -139,7 +145,7 @@ class AnalysisSectionDamages(AnalysisSectionBase):
     climate_period: float = math.nan
     # road damage
     representative_damage_percentage: float = 100
-    event_type: EventTypeEnum = field(default_factory=lambda: EventTypeEnum.INVALID)
+    event_type: EventTypeEnum = field(default_factory=lambda: EventTypeEnum.NONE)
     damage_curve: DamageCurveEnum = field(
         default_factory=lambda: DamageCurveEnum.INVALID
     )
@@ -168,6 +174,7 @@ class AnalysisConfigData(ConfigDataProtocol):
         default_factory=OriginsDestinationsSection
     )
     network: NetworkSection = field(default_factory=NetworkSection)
+    aggregate_wl: AggregateWlEnum = field(default_factory=lambda: AggregateWlEnum.NONE)
     hazard_names: list[str] = field(default_factory=list)
 
     @property
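
The new AnalysisConfigData.aggregate_wl field feeds the column filter in losses_base.py further down through get_wl_abbreviation(). Judging by the overlay column names in the replaced code there (EV1_ma, EV1_max, EV1_mean, EV1_min), the helper plausibly behaves like this sketch; the method body is an assumption, only the name get_wl_abbreviation appears in this diff:

    from enum import Enum


    class AggregateWlEnum(Enum):  # stand-in for the ra2ce enum
        NONE = 0
        MAX = 1
        MEAN = 2
        MIN = 3
        INVALID = 99

        def get_wl_abbreviation(self) -> str:
            # Assumption: "ma" is the bare overlay suffix when no aggregation
            # is configured, matching the old EV1_ma column.
            if self == AggregateWlEnum.NONE:
                return "ma"
            return self.name.lower()

With aggregate_wl = max this would yield "max", so the losses filter below matches columns such as EV1_max or RP100_max.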
11 changes: 11 additions & 0 deletions ra2ce/analysis/analysis_config_data/analysis_config_data_reader.py
@@ -129,6 +129,17 @@ def _get_analysis_section_losses(self, section_name: str) -> AnalysisSectionLosses:
             self._parser.get(section_name, "loss_type", fallback=None)
         )
         # losses
+        _section.event_type = EventTypeEnum.get_enum(
+            self._parser.get(section_name, "event_type", fallback=None)
+        )
+        _section.risk_calculation_mode = RiskCalculationModeEnum.get_enum(
+            self._parser.get(section_name, "risk_calculation_mode", fallback=None)
+        )
+        _section.risk_calculation_year = self._parser.getint(
+            section_name,
+            "risk_calculation_year",
+            fallback=_section.risk_calculation_year,
+        )
         _section.traffic_cols = self._parser.getlist(
             section_name, "traffic_cols", fallback=_section.traffic_cols
         )
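
With these reader additions, a losses section in analyses.ini can carry the risk settings directly. A hypothetical fragment; the analysis and mode values are inferred from the commit messages (cut_from_year) and are not shown in this diff:

    [analysis1]
    name = example losses with risk
    analysis = multi_link_losses
    event_type = return_period
    risk_calculation_mode = cut_from_year
    risk_calculation_year = 2050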
ra2ce/analysis/analysis_config_data/enums/event_type_enum.py (filename inferred from the imports below)
@@ -2,6 +2,7 @@
 
 
 class EventTypeEnum(Ra2ceEnumBase):
+    NONE = 0
     EVENT = 1
     RETURN_PERIOD = 2
     INVALID = 99
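
The new NONE member matters because of how the missing-key fallbacks in the reader resolve. A sketch of the presumed Ra2ceEnumBase.get_enum behaviour; the base-class body is an assumption and is not part of this diff:

    from enum import Enum


    class Ra2ceEnumBase(Enum):  # stand-in sketch of the ra2ce base enum
        @classmethod
        def get_enum(cls, config_value: str | None) -> "Ra2ceEnumBase":
            if not config_value:
                return cls.NONE  # absent ini key maps to NONE...
            try:
                return cls[config_value.upper()]
            except KeyError:
                return cls.INVALID  # ...while unknown values stay INVALID


    class EventTypeEnum(Ra2ceEnumBase):
        NONE = 0
        EVENT = 1
        RETURN_PERIOD = 2
        INVALID = 99


    assert EventTypeEnum.get_enum(None) == EventTypeEnum.NONE
    assert EventTypeEnum.get_enum("return_period") == EventTypeEnum.RETURN_PERIOD

Under that assumption an unset event_type now defaults to NONE, which explains the commit message about the default no longer raising an error.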
3 changes: 3 additions & 0 deletions ra2ce/analysis/analysis_config_wrapper.py
@@ -85,6 +85,9 @@ def from_data_with_network(
         _new_analysis.config_data.origins_destinations = (
             network_config.config_data.origins_destinations
         )
+        _new_analysis.config_data.aggregate_wl = (
+            network_config.config_data.hazard.aggregate_wl
+        )
         # Graphs are retrieved from the already configured object
         _new_analysis.graph_files = network_config.graph_files
 
164 changes: 91 additions & 73 deletions ra2ce/analysis/losses/losses_base.py
@@ -30,6 +30,10 @@
 from ra2ce.analysis.analysis_config_data.analysis_config_data import (
     AnalysisSectionLosses,
 )
+from ra2ce.analysis.analysis_config_data.enums.event_type_enum import EventTypeEnum
+from ra2ce.analysis.analysis_config_data.enums.risk_calculation_mode_enum import (
+    RiskCalculationModeEnum,
+)
 from ra2ce.analysis.analysis_config_data.enums.traffic_period_enum import (
     TrafficPeriodEnum,
 )
@@ -40,13 +44,15 @@
 from ra2ce.analysis.losses.resilience_curves.resilience_curves_reader import (
     ResilienceCurvesReader,
 )
+from ra2ce.analysis.losses.risk_calculation.risk_calculation_factory import (
+    RiskCalculationFactory,
+)
 from ra2ce.analysis.losses.time_values.time_values_reader import TimeValuesReader
 from ra2ce.analysis.losses.traffic_intensities.traffic_intensities_reader import (
     TrafficIntensitiesReader,
 )
 from ra2ce.network.graph_files.graph_file import GraphFile
 from ra2ce.network.hazard.hazard_names import HazardNames
-from ra2ce.network.network_config_data.enums.aggregate_wl_enum import AggregateWlEnum
 from ra2ce.network.network_config_data.enums.road_type_enum import RoadTypeEnum


@@ -107,6 +113,8 @@ def __init__(
         self.output_path = analysis_input.output_path
         self.hazard_names = analysis_input.hazard_names
 
+        self.criticality_analysis = gpd.GeoDataFrame()
+        self.criticality_analysis_non_disrupted = gpd.GeoDataFrame()
         self.result = gpd.GeoDataFrame()
 
     def _check_validity_analysis_files(self):
@@ -130,34 +138,33 @@ def _get_disrupted_criticality_analysis_results(
             )
         else:
             criticality_analysis = criticality_analysis.drop_duplicates(["u", "v"])
+        # ToDO: check hazard overlay with AggregateWlEnum.NONE or INVALID
 
         # filter out all links not affected by the hazard
-        if self.analysis.aggregate_wl == AggregateWlEnum.NONE:
-            self.criticality_analysis = criticality_analysis[
-                criticality_analysis["EV1_ma"] > self.analysis.threshold
-            ]
-        elif self.analysis.aggregate_wl == AggregateWlEnum.MAX:
-            self.criticality_analysis = criticality_analysis[
-                criticality_analysis["EV1_max"] > self.analysis.threshold
-            ]
-        elif self.analysis.aggregate_wl == AggregateWlEnum.MEAN:
-            self.criticality_analysis = criticality_analysis[
-                criticality_analysis["EV1_mean"] > self.analysis.threshold
-            ]
-        elif self.analysis.aggregate_wl == AggregateWlEnum.MIN:
-            self.criticality_analysis = criticality_analysis[
-                criticality_analysis["EV1_min"] > self.analysis.threshold
-            ]
+        aggregate_wl_abbreviation = (
+            self.analysis_config.config_data.aggregate_wl.get_wl_abbreviation()
+        )
+        hazard_aggregate_wl_columns = [
+            c
+            for c in criticality_analysis.columns
+            if (c.startswith("RP") or c.startswith("EV"))
+            and c.endswith(f"_{aggregate_wl_abbreviation}")
+        ]
+        self.criticality_analysis = criticality_analysis[
+            (
+                criticality_analysis[hazard_aggregate_wl_columns]
+                > self.analysis.threshold
+            ).any(axis=1)
+        ]
 
         self.criticality_analysis_non_disrupted = criticality_analysis[
             ~criticality_analysis.index.isin(self.criticality_analysis.index)
         ]
         # link_id from list to tuple
         if len(self.criticality_analysis_non_disrupted) > 0:
-            self.criticality_analysis_non_disrupted[
-                self.link_id
-            ] = self.criticality_analysis_non_disrupted[self.link_id].apply(
-                lambda x: tuple(x) if isinstance(x, list) else x
+            self.criticality_analysis_non_disrupted[self.link_id] = (
+                self.criticality_analysis_non_disrupted[self.link_id].apply(
+                    lambda x: tuple(x) if isinstance(x, list) else x
+                )
             )
         self.criticality_analysis[self.link_id] = self.criticality_analysis[
             self.link_id
@@ -185,7 +192,7 @@ def _get_range(height: float) -> tuple[float, float]:
             for range_tuple in _hazard_intensity_ranges:
                 x, y = range_tuple
                 if x <= height <= y:
-                    return (x, y)
+                    return x, y
             raise ValueError(f"No matching range found for height {height}")
 
         def _create_result(vlh: gpd.GeoDataFrame) -> gpd.GeoDataFrame:
@@ -236,7 +243,7 @@ def _create_result(vlh: gpd.GeoDataFrame) -> gpd.GeoDataFrame:
         _check_validity_criticality_analysis()
 
         _hazard_intensity_ranges = self.resilience_curves.ranges
-        events = self.criticality_analysis.filter(regex=r"^EV(?!1_fr)")
+        events = self.criticality_analysis.filter(regex=r"^(EV|RP)(?!\d+_fr)")
         # Read the performance_change stating the functionality drop
         if "key" in self.criticality_analysis.columns:
             performance_change = self.criticality_analysis[
@@ -295,52 +302,48 @@ def _create_result(vlh: gpd.GeoDataFrame) -> gpd.GeoDataFrame:
         )
         for event in events.columns.tolist():
             for _, vlh_row in vehicle_loss_hours.iterrows():
-                row_hazard_range = _get_range(vlh_row[event])
-                row_connectivity = vlh_row[connectivity_attribute]
-                row_performance_changes = performance_change.loc[
-                    [vlh_row[self.link_id]]
-                ]
-                if "key" in vlh_row.index:
-                    key = vlh_row["key"]
-                else:
-                    key = 0
-                (u, v, k) = (
-                    vlh_row["u"],
-                    vlh_row["v"],
-                    key,
-                )
-                # allow link_id not to be unique in the graph (results reliability is up to the user)
-                # this can happen for instance when a directed graph should be made from an input network
-                for performance_row in row_performance_changes.iterrows():
-                    row_performance_change = performance_row[-1][
-                        f"{self.performance_metric}"
-                    ]
-                    if "key" in performance_row[-1].index:
-                        performance_key = performance_row[-1]["key"]
-                    else:
-                        performance_key = 0
-                    row_u_v_k = (
-                        performance_row[-1]["u"],
-                        performance_row[-1]["v"],
-                        performance_key,
-                    )
-                    if (
-                        math.isnan(row_performance_change) and row_connectivity == 0
-                    ) or row_performance_change == 0:
-                        self._calculate_production_loss_per_capita(
-                            vehicle_loss_hours, row_hazard_range, vlh_row, event
-                        )
-                    elif not (
-                        math.isnan(row_performance_change)
-                        and math.isnan(row_connectivity)
-                    ) and ((u, v, k) == row_u_v_k):
-                        self._populate_vehicle_loss_hour(
-                            vehicle_loss_hours,
-                            row_hazard_range,
-                            vlh_row,
-                            row_performance_change,
-                            event,
-                        )
+                if vlh_row[event] > self.analysis.threshold:
+                    row_hazard_range = _get_range(vlh_row[event])
+                    row_connectivity = vlh_row[connectivity_attribute]
+                    row_performance_changes = performance_change.loc[
+                        [vlh_row[self.link_id]]
+                    ]
+                    if "key" in vlh_row.index:
+                        key = vlh_row["key"]
+                    else:
+                        key = 0
+                    (u, v) = (vlh_row["u"], vlh_row["v"])
+                    # allow link_id not to be unique in the graph (results reliability is up to the user)
+                    # this can happen for instance when a directed graph should be made from an input network
+                    for performance_row in row_performance_changes.iterrows():
+                        row_performance_change = performance_row[-1][
+                            f"{self.performance_metric}"
+                        ]
+                        performance_key = 0
+                        if "key" in performance_row[-1].index:
+                            performance_key = performance_row[-1]["key"]
+                        row_u_v_k = (
+                            performance_row[-1]["u"],
+                            performance_row[-1]["v"],
+                            performance_key,
+                        )
+                        if (
+                            math.isnan(row_performance_change) and row_connectivity == 0
+                        ) or row_performance_change == 0:
+                            self._calculate_production_loss_per_capita(
+                                vehicle_loss_hours, row_hazard_range, vlh_row, event
+                            )
+                        elif not (
+                            math.isnan(row_performance_change)
+                            and math.isnan(row_connectivity)
+                        ) and ((u, v, key) == row_u_v_k):
+                            self._populate_vehicle_loss_hour(
+                                vehicle_loss_hours,
+                                row_hazard_range,
+                                vlh_row,
+                                row_performance_change,
+                                event,
+                            )
 
         vehicle_loss_hours_result = _create_result(vehicle_loss_hours)
         return vehicle_loss_hours_result
@@ -355,7 +358,7 @@ def _get_relevant_link_type(
         _max_disruption = 0
         for _row_link_type in vlh_row[self.link_type_column]:
             _link_type = RoadTypeEnum.get_enum(_row_link_type)
-            disruption = self.resilience_curves.get_disruption(
+            disruption = self.resilience_curves.calculate_disruption(
                 _link_type, row_hazard_range
             )
             if disruption > _max_disruption:
@@ -448,9 +451,9 @@ def _calculate_production_loss_per_capita(
                 [vlh_row.name], f"vlh_{trip_type}_{hazard_col_name}"
             ] = vlh_trip_type_event
             vlh_total += vlh_trip_type_event
-        vehicle_loss_hours.loc[
-            [vlh_row.name], f"vlh_{hazard_col_name}_total"
-        ] = vlh_total
+        vehicle_loss_hours.loc[[vlh_row.name], f"vlh_{hazard_col_name}_total"] = (
+            vlh_total
+        )
 
     def _populate_vehicle_loss_hour(
         self,
@@ -500,9 +503,9 @@ def _populate_vehicle_loss_hour(
                 [vlh_row.name], f"vlh_{trip_type}_{hazard_col_name}"
             ] = vlh_trip_type_event
             vlh_total += vlh_trip_type_event
-        vehicle_loss_hours.loc[
-            [vlh_row.name], f"vlh_{hazard_col_name}_total"
-        ] = vlh_total
+        vehicle_loss_hours.loc[[vlh_row.name], f"vlh_{hazard_col_name}_total"] = (
+            vlh_total
+        )
 
     @abstractmethod
     def _get_criticality_analysis(self) -> AnalysisLossesProtocol:
Expand All @@ -516,4 +519,19 @@ def execute(self) -> gpd.GeoDataFrame:
)

self.result = self.calculate_vehicle_loss_hours()

# Calculate the risk or estimated annual losses if applicable
if (
self.analysis.event_type == EventTypeEnum.RETURN_PERIOD
and self.analysis.risk_calculation_mode
not in (RiskCalculationModeEnum.INVALID, RiskCalculationModeEnum.NONE)
):
risk_calculation = RiskCalculationFactory.get_risk_calculation(
risk_calculation_mode=self.analysis.risk_calculation_mode,
risk_calculation_year=self.analysis.risk_calculation_year,
losses_gdf=self.result,
)
risk = risk_calculation.get_integration_of_df_trapezoidal()
self.result[f"risk_vlh_total"] = risk

return self.result
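
The risk_vlh_total column appended in execute() is an estimated annual loss: the per-return-period vehicle loss hours are integrated over exceedance frequency (1/RP) with the trapezoidal rule, per method names like integrate_df_trapezoidal in the commit messages. A minimal numeric sketch independent of the factory internals; the column names and numbers are made up, and the assumption is that year-based modes such as cut_from_year trim or extend this curve before integrating:

    import numpy as np
    import pandas as pd

    # Hypothetical vehicle-loss-hours per link for three return periods (years).
    _losses = pd.DataFrame(
        {10: [1200.0, 300.0], 100: [5600.0, 900.0], 1000: [9100.0, 2100.0]},
        index=["link_1", "link_2"],
    )

    # Sort return periods high-to-low so the frequencies 1/RP are increasing.
    _return_periods = sorted(_losses.columns, reverse=True)
    _frequencies = [1 / _rp for _rp in _return_periods]

    # EAL per link: trapezoidal integration of losses over exceedance frequency.
    _risk_vlh_total = np.trapz(
        _losses[_return_periods].to_numpy(), x=_frequencies, axis=1
    )
    print(dict(zip(_losses.index, _risk_vlh_total)))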