diff --git a/CHANGELOG.md b/CHANGELOG.md
index 4cedac9..c7dad87 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -4,7 +4,24 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
-## [Unreleased] [0.5.5.2b7]
+## [Unreleased] [0.5.7.1b0]
+
+## [0.5.7.0]- 2024-11-26
+### Changed
+- BACKWARD INCOMPATIBILITY - ScenarioDbManager.__init__: changed the default value to db_type=DatabaseType.SQLite. For other uses (DB2 or PostgreSQL), always specify the db_type.
+- BACKWARD INCOMPATIBILITY - ScenarioDbManager.__init__: changed default values to enable_scenario_seq=True and future=True. This reflects current best practices.
+- BACKWARD INCOMPATIBILITY - Removed ScenarioDbManager from `dse_do_utils.__init__.py`. This avoids a dependency on sqlalchemy in uses of dse_do_utils where the ScenarioDbManager is not needed.
+This introduces a slight backward incompatibility: import it as `from dse_do_utils.scenariodbmanager import ScenarioDbManager`.
+- Removed the (deprecated) `module_reload()` from `dse_do_utils.__init__.py`. In notebooks, `autoreload` works well.
+- Generics in ScenarioRunner
+- Removed deprecated optional argument `dtypes` from `Core01DataManager.prepare_output_data_frames()`
+- Fixed mutable default arguments in scenariodbmanager module
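The mutable-default fix mentioned above follows the standard None-sentinel pattern; a minimal standalone sketch (hypothetical class name, not the actual dse_do_utils code):

```python
class TableSpec:
    """Sketch of the None-sentinel pattern that avoids shared mutable defaults."""

    # Buggy variant: `def __init__(self, columns=[])` would share ONE list
    # object across every instance created without an argument.

    def __init__(self, columns=None):
        # A fresh list is created per call, so instances never share state
        self.columns = [] if columns is None else columns

a = TableSpec()
a.columns.append("id")
b = TableSpec()
print(b.columns)  # [] — with the buggy `columns=[]` default this would print ['id']
```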
+### Added
+- CplexDot in core01_optimization_engine: generic function class to use groupby aggregation and the mdl.dot() function.
+- PlotlyManager - self.ref_dm, self.ms_inputs, self.ms_outputs property declarations and documentation
+- PlotlyManager.plotly_kpi_compare_bar_charts and .get_multi_scenario_table for scenario compare
+- Core01DataManager and Core01OptimizationEngine: added support for parameter `mipGap`. Sets the `mdl.parameters.mip.tolerances.mipgap` if value > 0
+- DataManager.extract_solution adds option `allow_mixed_type_columns`. If True allows dvar/expr in column to be a regular Python value and not have the `solution_value` attribute
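The `allow_mixed_type_columns` behavior amounts to a per-cell `hasattr` fallback; a standalone sketch with stand-in objects (no docplex required — `FakeDvar` is a hypothetical stand-in for a dvar/expression):

```python
class FakeDvar:
    """Stand-in for a docplex dvar/expression carrying a solution_value."""
    def __init__(self, v):
        self.solution_value = v

# Mixed column: two dvar-like objects and one plain Python value
column = [FakeDvar(1.0), 42, FakeDvar(3.5)]

# Mirrors the extract_solution logic: use solution_value when present, else the raw value
values = [c.solution_value if hasattr(c, 'solution_value') else c for c in column]
print(values)  # [1.0, 42, 3.5]
```

Without the fallback (i.e. `allow_mixed_type_columns=False`), the plain `42` would raise an `AttributeError`.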
## [0.5.6.0]- 2023-05-13
### Changed
diff --git a/docs/doc_build/doctrees/dse_do_utils.doctree b/docs/doc_build/doctrees/dse_do_utils.doctree
index e59d652..7988bbf 100644
Binary files a/docs/doc_build/doctrees/dse_do_utils.doctree and b/docs/doc_build/doctrees/dse_do_utils.doctree differ
diff --git a/docs/doc_build/doctrees/environment.pickle b/docs/doc_build/doctrees/environment.pickle
index 273110e..226abaf 100644
Binary files a/docs/doc_build/doctrees/environment.pickle and b/docs/doc_build/doctrees/environment.pickle differ
diff --git a/docs/doc_build/html/.buildinfo b/docs/doc_build/html/.buildinfo
index 05ce825..90b1aa1 100644
--- a/docs/doc_build/html/.buildinfo
+++ b/docs/doc_build/html/.buildinfo
@@ -1,4 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: a9df07955c95961d749fd73c1e7644d5
+config: ef4c938c6f3ee2a22e6bb6b81cef0c3d
tags: 645f666f9bcd5a90fca523b33c5a78b7
diff --git a/docs/doc_build/html/_modules/dse_do_utils.html b/docs/doc_build/html/_modules/dse_do_utils.html
deleted file mode 100644
index 856634c..0000000
--- a/docs/doc_build/html/_modules/dse_do_utils.html
+++ /dev/null
@@ -1,185 +0,0 @@
-
-
-
-
-
-
-# Copyright IBM All Rights Reserved.
-# SPDX-License-Identifier: Apache-2.0
-
-from .version import __version__
-from .datamanager import DataManager
-from .optimizationengine import OptimizationEngine
-from .scenariomanager import ScenarioManager
-from .scenariodbmanager import ScenarioDbManager
-# from .scenariopicker import ScenarioPicker
-# from .deployeddomodel import DeployedDOModel
-# from .mapmanager import MapManager
-
-name = "dse_do_utils"
-
-
-
-def module_reload():
- """DEPRECATED. Requires updates to Python 3.6
- Reloads all component modules. Use when you want to force a reload of this module with imp.reload().
-
- This avoids having to code somewhat complex reloading logic in the notebook that is using this module.
-
- Challenge with imp.reload of dse-do_utils. The following is NOT (!) sufficient::
-
- import imp
- import dse_do_utils
- imp.reload(dse_do_utils)
-
- The package dse_do_utils internally contains a number of sub modules that each contain a part of the code.
- This keeps development easier and more organized. But to make importing easier, the classes are exposed
-    in the top level `__init__.py`, which allows for a simple import statement like `from dse_do_utils import ScenarioManager`.
- Unfortunately, reloading the top-level module dse_do_utils doesn't force a reload of the internal modules.
-
- In case of subclassing, reloading needs to be done in the right order, i.e. first the parent classes.
-
- Usage::
-
- import imp
- import dse_do_utils # You have to do the import, otherwise not possible to do the next 2 steps
- dse_do_utils.module_reload() #This function
- imp.reload(dse_do_utils) # Necessary to ensure all following expressions `from dse_do_utils import class` are using the updated classes
- from dse_do_utils import DataManager, OptimizationEngine, ScenarioManager, ScenarioPicker, DeployedDOModel, MapManager # This needs to be done AFTER the reload to refresh the definitions
-
-
- Note that this function assumes that the set of classes and component modules is not part of the update.
- If it is, you may need to add another reload::
-
- import imp
- import dse_do_utils # You have to do the import, otherwise not possible to do the next 2 steps
- imp.reload(dse_do_utils) # To reload this function
- dse_do_utils.module_reload() #This function
- imp.reload(dse_do_utils) # Necessary to ensure all future expressions `from dse_do_utils import class` are using the updated classes
- from dse_do_utils import DataManager, OptimizationEngine, ScenarioManager, ScenarioPicker, DeployedDOModel, MapManager # This needs to be done AFTER the reload to refresh the definitions
-
-
- If not using this function, in the notebook you would need to do the following (or the relevant parts of it)::
-
- import imp
- import dse_do_utils
- imp.reload(dse_do_utils.datamanager)
- imp.reload(dse_do_utils.optimizationengine)
- imp.reload(dse_do_utils.scenariomanager)
- imp.reload(dse_do_utils.scenariopicker)
- imp.reload(dse_do_utils.deployeddomodel)
- imp.reload(dse_do_utils.mapmanager)
- imp.reload(dse_do_utils)
- from dse_do_utils import DataManager, OptimizationEngine, ScenarioManager, ScenarioPicker, DeployedDOModel, MapManager
-
- Returns:
-
- """
-    import importlib
-    import datamanager
-    import optimizationengine
-    import scenariomanager
-    import scenariopicker
-    import deployeddomodel
-    import mapmanager
-    import multiscenariomanager
-    importlib.reload(datamanager)
-    importlib.reload(optimizationengine)
-    importlib.reload(scenariomanager)
-    importlib.reload(scenariopicker)
-    importlib.reload(deployeddomodel)
-    importlib.reload(mapmanager)
-    importlib.reload(multiscenariomanager)
-
- # The imports below cannot be done here.
- # You need to redo the class imports from the notebook that is calling this function
-
- # from .version import __version__
- # from .datamanager import DataManager
- # from .optimizationengine import OptimizationEngine
- # from .scenariomanager import ScenarioManager
- # from .scenariopicker import ScenarioPicker
- # from .deployeddomodel import DeployedDOModel
- # from .mapmanager import MapManager
-
It typically contains the input and output dictionaries with DataFrames that came from or will be inserted into a DO scenario.
- In addition it will hold any intermediate data.
+ In addition, it will hold any intermediate data. It holds methods that operate on and convert the data. When used in combination with an optimization engine, it should not contain the docplex code that creates or interacts with the docplex Model. (That is the task of the OptimizationEngine.)
@@ -86,7 +85,10 @@
 @staticmethod
 def extract_solution(df: pd.DataFrame, extract_dvar_names: Optional[List[str] | Dict[str, str]] = None,
                      drop_column_names: List[str] = None, drop: bool = True, epsilon: float = None,
                      round_decimals: int = None,
-                     solution_column_name_post_fix: str = 'Sol') -> pd.DataFrame:
+                     solution_column_name_post_fix: str = 'Sol',
+                     allow_mixed_type_columns: bool = False) -> pd.DataFrame:
     """Generalized routine to extract a solution value.
     Can remove the dvar column from the df to be able to have a clean df for export into scenario.
@@ -427,7 +430,7 @@
Source code for dse_do_utils.datamanager
     round_decimals (int): round the solution value by number of decimals. If None, no rounding. If 0, rounding to integer value.
     solution_column_name_post_fix (str): Postfix for the name of the solution column. Default = 'Sol'
-
+    allow_mixed_type_columns (bool): If True, will allow the column not to have the `solution_value` attribute, i.e. be a plain Python value, not a CPLEX dvar or expression
     """
@@ -442,7 +445,12 @@
 for xDVarName, solution_column_name in dvar_column_dict.items():
     if xDVarName in df.columns:
         # solution_column_name = f'{xDVarName}Sol'
-        df[solution_column_name] = [dvar.solution_value for dvar in df[xDVarName]]
+        # df[solution_column_name] = [dvar.solution_value for dvar in df[xDVarName]]
+        if allow_mixed_type_columns:
+            df[solution_column_name] = [dvar.solution_value if hasattr(dvar, 'solution_value') else dvar for
+                                        dvar in df[xDVarName]]  # VT_20241029: allow expression to be a constant
+        else:
+            df[solution_column_name] = [dvar.solution_value for dvar in df[xDVarName]]
     if drop:
         df = df.drop([xDVarName], axis=1)
 if epsilon is not None:
@@ -539,9 +547,8 @@
 def solve_v2(self, inputs: Inputs, max_oaas_time_limit_sec: int = None, max_run_time_sec: int = None) -> dict:
     """Master routine. Initializes the job, starts the execution, monitors the results, post-processes the solution and cleans up after.
     Args:
@@ -498,9 +497,8 @@
 def solve(self, refine_conflict: Optional[bool] = False, **kwargs) -> docplex.mp.solution.SolveSolution:
     # TODO: enable export_as_lp_path()?
     # self.export_as_lp_path(lp_file_name=self.mdl.name)
     # TODO: use self.solve_kwargs if **kwargs is empty/None. Or merge them?
@@ -384,9 +383,43 @@
 @staticmethod
+def cp_integer_var_series_s_v2(mdl: cp.CpoModel, df: pd.DataFrame, min=None, max=None, name=None,
+                               domain=None) -> pd.Series:
+    """Returns pd.Series[docplex.cp.expression.CpoIntVar].
+    If name is not None, will generate unique names based on pattern: '{name}_{index of df}'
+    If multi-index df, keys are separated by '_', e.g. 'xDvar_1_2_3'
+    """
+    if name is None:
+        integer_list = mdl.integer_var_list(df.shape[0], min, max, name, domain)
+    else:
+        integer_list = []
+        for ix in df.index:
+            new_name = f"{name}_{OptimizationEngine._get_index_as_str(ix)}"
+            integer_list.append(mdl.integer_var(min, max, new_name, domain))
+    integer_series = pd.Series(integer_list, index=df.index)
+    return integer_series
+
+@staticmethod
+def _get_index_as_str(ix) -> str:
+    """Convert an index of a DF to a string. For naming dvars and constraints.
+    If df has a multi-index, the ix is a tuple."""
+    if type(ix) is tuple:  # If multi-index
+        name = '_'.join(map(str, ix))
+    else:
+        name = str(ix)
+    # elif isinstance(ix, str):
+    #     new_name = f"{name}_{ix}"
+    # elif isinstance(ix, int):
+    #     new_name = f"{name}_{ix}"
+    # else:
+    #     new_name = f"{name}_{str(ix)}"
+    return name
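The naming scheme described above ('{name}_{index of df}', with multi-index keys joined by '_') can be exercised standalone; a sketch using a hypothetical helper name, independent of docplex:

```python
def index_as_str(ix) -> str:
    """Replicates the tuple-join naming used for dvar names (sketch)."""
    if isinstance(ix, tuple):  # multi-index row label
        return '_'.join(map(str, ix))
    return str(ix)

# Multi-index label -> keys separated by '_'
print(f"xDvar_{index_as_str((1, 2, 3))}")  # xDvar_1_2_3
# Single-level label -> plain string conversion
print(f"xDvar_{index_as_str(7)}")          # xDvar_7
```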
+
 # Copyright IBM All Rights Reserved.
 # SPDX-License-Identifier: Apache-2.0
-from typing import Generic, TypeVar
+from typing import Generic, TypeVar, Optional, Dict
+import pandas as pd
+import plotly.express as px
+import plotly.graph_objs as go
 # from typing import List, Dict, Tuple, Optional
-from dse_do_utils.datamanager import DataManager
+from dse_do_utils.datamanager import DataManager, Inputs
 # import plotly
 # import plotly.graph_objs as go
@@ -77,6 +79,15 @@
Source code for dse_do_utils.plotlymanager
 def __init__(self, dm: DM):
     self.dm: DM = dm
+    # Used by DoDashApp for scenario compare, when the Reference Scenario is selected
+    # This supports scenario compare visualizations
+    self.ref_dm: Optional[DM] = None  # A DataManager based on the reference scenario
+    # Used by the DoDashApp for scenario compare, when multiple scenarios are selected for compare
+    # These DataFrames are 'multi-scenario': they have an additional column with the scenarioName.
+    # One 'multi-scenario' df contains data for the same scenario table from multiple scenarios
+    self.ms_inputs: Dict[str, pd.DataFrame] = None  # Dict[TableName, 'multi-scenario' dataframe]
+    self.ms_outputs: Dict[str, pd.DataFrame] = None  # Dict[TableName, 'multi-scenario' dataframe]
+
+
 def get_plotly_fig_m(self, id):
     """DEPRECATED. Not used in dse_do_dashboard package.
     On the instance `self`, call the method named by id['index']
@@ -90,7 +101,96 @@
     On the instance `self`, call the method named by get_tab_layout_{page_id}.
     Used in dse_do_dashboard Plotly-Dash dashboards
     """
-    return getattr(self, f"get_tab_layout_{page_id}")()
+
+###################################################################
+# For scenario-compare in dse-do-dashboard
+###################################################################
+
+def plotly_kpi_compare_bar_charts(self, figs_per_row: int = 3, orientation: str = 'v') -> [[go.Figure]]:
+ """
+ Generalized compare of KPIs between scenarios. Creates a list-of-list of go.Figure, i.e. rows of figures,
+ for the PlotlyRowsVisualizationPage.
+ Each KPI gets its own bar-chart, comparing the scenarios.
+
+ Supports 3 cases:
+ 1. Multi-scenario compare based on the Reference Scenarios multi-checkbox select on the Home page.
+ 2. Compare the current select scenario with the Reference Scenario selected on the Home page.
+ 3. Single scenario view based on the currently selected scenario
+
+ Args:
+ figs_per_row: int - Maximum number of figures per row
+ orientation: str - `h' (horizontal) or `v` (vertical)
+
+ Returns:
+ figures in rows ([[go.Figure]]) - bar-charts in rows
+
+ """
+    figs = []
+    if self.get_multi_scenario_compare_selected():
+        df = self.get_multi_scenario_table('kpis')
+    elif self.get_reference_scenario_compare_selected():
+        ref_df = self.ref_dm.kpis.reset_index()
+        ref_df['scenario_name'] = 'Reference'
+        selected_df = self.dm.kpis.reset_index()
+        selected_df['scenario_name'] = 'Current'
+        df = pd.concat([selected_df, ref_df])
+    else:
+        df = self.dm.kpis.reset_index()
+        df['scenario_name'] = 'Current'
+
+    for kpi_name, group in df.groupby('NAME'):
+        labels = {'scenario_name': 'Scenario', 'VALUE': kpi_name}
+        title = f'{kpi_name}'
+        if orientation == 'v':
+            fig = px.bar(group, x='scenario_name', y='VALUE', orientation='v', color='scenario_name', labels=labels,
+                         title=title)
+        else:
+            fig = px.bar(group, y='scenario_name', x='VALUE', orientation='h', color='scenario_name',
+                         labels=labels)
+        fig.update_layout(xaxis_title=None)
+        fig.update_layout(yaxis_title=None)
+        fig.update_layout(showlegend=False)
+        figs.append(fig)
+
+    # Split list of figures in list-of-lists with maximum size of n:
+    n = figs_per_row
+    figs = [figs[i:i + n] for i in range(0, len(figs), n)]
+    return figs
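The final row-splitting step is plain list slicing; a standalone example with integer stand-ins for the figures:

```python
figs = list(range(7))  # stand-ins for 7 figures
n = 3                  # figs_per_row
# Chunk the flat list into rows of at most n items
rows = [figs[i:i + n] for i in range(0, len(figs), n)]
print(rows)  # [[0, 1, 2], [3, 4, 5], [6]]
```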
+
+
 def get_multi_scenario_compare_selected(self) -> bool:
+    """Returns True if the user has selected multi-scenario compare.
+    """
+    ms_enabled = (isinstance(self.ms_outputs, dict)
+                  and isinstance(self.ms_inputs, dict)
+                  and 'Scenario' in self.ms_inputs.keys()
+                  and self.ms_inputs['Scenario'].shape[0] > 0
+                  )
+    return ms_enabled
+
+
 def get_reference_scenario_compare_selected(self) -> bool:
+    """Returns True if the user has selected (single) reference-scenario compare.
+    """
+    ms_selected = self.get_multi_scenario_compare_selected()
+    ref_selected = isinstance(self.ref_dm, DataManager)
+    return not ms_selected and ref_selected
+
+
 def get_multi_scenario_table(self, table_name: str) -> Optional[pd.DataFrame]:
+    """Gets the df from the table named `table_name` in either inputs or outputs.
+    If necessary (i.e. when using scenario_seq), merges the Scenario table, so it has the scenario_name as column.
+    DataFrame is NOT indexed!
+    """
+    if table_name in self.ms_inputs.keys():
+        df = self.ms_inputs[table_name]
+    elif table_name in self.ms_outputs.keys():
+        df = self.ms_outputs[table_name]
+    else:
+        df = None
+
+    if df is not None:
+        if "scenario_name" not in df.columns:
+            df = df.merge(self.ms_inputs['Scenario'], on='scenario_seq')  # Requires scenario_seq. Merges-in the scenario_name.
+
+    return df
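The scenario-name merge above can be sketched with toy frames (hypothetical table contents, not the actual schema): merging on `scenario_seq` attaches `scenario_name` to a table that lacks it.

```python
import pandas as pd

# Stand-in for the Scenario table: maps scenario_seq -> scenario_name
scenario = pd.DataFrame({'scenario_seq': [1, 2],
                         'scenario_name': ['Base', 'HighDemand']})
# Stand-in for a multi-scenario table without a scenario_name column
kpis = pd.DataFrame({'scenario_seq': [1, 1, 2],
                     'NAME': ['cost', 'co2', 'cost'],
                     'VALUE': [10, 5, 12]})

# Merge adds the scenario_name column, keyed on scenario_seq
df = kpis.merge(scenario, on='scenario_seq')
print(df['scenario_name'].tolist())  # ['Base', 'Base', 'HighDemand']
```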
 class DatabaseType(enum.Enum):
     """Used in ScenarioDbManager.__init__ to specify the type of DB it is connecting to."""
@@ -100,8 +99,8 @@
Source code for dse_do_utils.scenariodbmanager
"""
     def __init__(self, db_table_name: str,
-                 columns_metadata: List[sqlalchemy.Column] = [],
-                 constraints_metadata: List[ForeignKeyConstraint] = []):
+                 columns_metadata=None,
+                 constraints_metadata=None):
         """
         Warning: Do not use mixed-case names for the db_table_name.
         Supplying a mixed-case name does not work well and causes DB FK errors.
@@ -114,9 +113,13 @@
:param columns_metadata:
:param constraints_metadata: """
+        if constraints_metadata is None:
+            constraints_metadata = []
+        if columns_metadata is None:
+            columns_metadata = []
         self.db_table_name = db_table_name  # ScenarioDbTable.camel_case_to_snake_case(db_table_name) # To make sure it is a proper DB table name. Also allows us to use the scenario table name.
-        self.columns_metadata = self.resolve_metadata_column_conflicts(columns_metadata)
+        self.columns_metadata: List[sqlalchemy.Column] = self.resolve_metadata_column_conflicts(columns_metadata)
         self.constraints_metadata = constraints_metadata
         self.dtype = None
         if not db_table_name.islower() and not db_table_name.isupper():  # I.e. is mixed_case
@@ -536,10 +539,10 @@
 def insert_table_row(self, scenario_table_name: str, scenario_name: str, values):
+    if self.enable_transactions:
+        # print("Insert row scenario within a transaction")
+        with self.engine.begin() as connection:
+            self._insert_table_row(scenario_table_name, scenario_name, values, connection)
+    else:
+        self._insert_table_row(scenario_table_name, scenario_name, values, self.engine)
+
+def _insert_table_row(self, scenario_table_name: str, scenario_name: str, values, connection=None):
+    """DRAFT. Insert one row of data.
+    TODO: handle update if it already exists: 'upsert'
+    Args:
+        scenario_table_name (str): Name of scenario table (as used in Inputs/Outputs, not the name in the DB)
+        values (Dict): values of row to be inserted. Typically a Dict or tuple (e.g. from df.itertuples()).
+        connection
+    """
+    # raise NotImplementedError
+
+    if scenario_table_name in self.db_tables:
+        db_table = self.db_tables[scenario_table_name]
+    else:
+        raise ValueError(f"Scenario table name '{scenario_table_name}' unknown. Cannot insert data into DB.")
+
+    # TODO: add scenario_seq to values
+    # TODO: if values is a sequence, we need to convert to a Dict so that we can add a value?
+    scenario_seq = self._get_scenario_seq(scenario_name=scenario_name, connection=connection)
+    if scenario_seq is not None:
+        values['scenario_seq'] = scenario_seq
+    else:
+        raise ValueError(f"Scenario name '{scenario_name}' is unknown. Cannot insert row.")
+    stmt = (
+        sqlalchemy.insert(db_table.get_sa_table()).values(values)
+    )
+    try:
+        if connection is None:
+            self.engine.execute(stmt)
+        else:
+            connection.execute(stmt)
+
+    except exc.IntegrityError as e:
+        print("++++++++++++Integrity Error+++++++++++++")
+        print(e)
+    except exc.StatementError as e:
+        print("++++++++++++Statement Error+++++++++++++")
+        print(e)
+
+
+
+
 def update_table_row(self, scenario_table_name: str, scenario_name: str, values):
+    if self.enable_transactions:
+        # print("Update row scenario within a transaction")
+        with self.engine.begin() as connection:
+            self._update_table_row(scenario_table_name, scenario_name, values, connection)
+    else:
+        self._update_table_row(scenario_table_name, scenario_name, values, self.engine)
+
+def _update_table_row(self, scenario_table_name: str, scenario_name: str, values, connection=None):
+    """DRAFT. Update one row of data.
+    Args:
+        scenario_table_name (str): Name of scenario table (as used in Inputs/Outputs, not the name in the DB)
+        values (Dict): values of row to be updated. Typically a Dict or tuple (e.g. from df.itertuples()).
+        connection
+    """
+    if scenario_table_name in self.db_tables:
+        db_table = self.db_tables[scenario_table_name]
+    else:
+        raise ValueError(f"Scenario table name '{scenario_table_name}' unknown. Cannot update data in DB.")
+
+    scenario_seq = self._get_scenario_seq(scenario_name=scenario_name, connection=connection)
+    if scenario_seq is not None:
+        values['scenario_seq'] = scenario_seq
+    else:
+        raise ValueError(f"Scenario name '{scenario_name}' is unknown. Cannot update row.")
+
+    # Split values in 2 parts:
+    # 1. The primary keys
+    # 2. The other columns
+    primary_keys = [c.name for c in db_table.columns_metadata if c.primary_key and c.name != 'scenario_seq' and c.name != 'scenario_name']
+    pk_values = {k: v for k, v in values.items() if k in primary_keys}
+    pk_conditions = [(db_table.get_sa_column(k) == v) for k, v in pk_values.items()]  # TODO:
+    column_values = {k: v for k, v in values.items() if k not in primary_keys and k not in ['scenario_seq', 'scenario_name']}  # remove PK values
+    t: sqlalchemy.Table = db_table.get_sa_table()
+
+    if self.enable_scenario_seq:
+        if (scenario_seq := self._get_scenario_seq(scenario_name, connection)) is not None:
+            # print(f"ScenarioSeq = {scenario_seq}")
+            sql = t.update().where(sqlalchemy.and_((t.c.scenario_seq == scenario_seq), *pk_conditions)).values(column_values)
+            # connection.execute(sql)  # VT20230204: Duplicate execute? Will be done anyway at the end of this method!
+        else:
+            raise ValueError(f"No scenario with name {scenario_name} exists")
+    else:
+        sql = t.update().where(sqlalchemy.and_((t.c.scenario_name == scenario_name), *pk_conditions)).values(column_values)
+
+    # TODO: this does NOT fail if the row doesn't exist. It simply doesn't do anything!? How can we have this fail, so we can do an insert?
+    connection.execute(sql)
+
+
+
 def upsert_table_row(self, scenario_table_name: str, scenario_name: str, values):
+    if self.enable_transactions:
+        # print("Upsert row scenario within a transaction")
+        with self.engine.begin() as connection:
+            self._upsert_table_row(scenario_table_name, scenario_name, values, connection)
+    else:
+        self._upsert_table_row(scenario_table_name, scenario_name, values, self.engine)
+
+def _upsert_table_row(self, scenario_table_name: str, scenario_name: str, values, connection=None):
+    """Updates or inserts a row in a DB table.
+    Assumes the values contain all primary key values.
+    Other columns are optional.
+    If the row exists, will update the row. If the row doesn't exist, will do an insert.
+    Update also supports partial updates of non-PK fields.
+    Beware that a None will result in a NULL.
+
+    Args:
+        scenario_table_name (str): Name of scenario table (as used in Inputs/Outputs, not the name in the DB)
+        scenario_name (str): scenario_name
+        values (Dict): values of row to be inserted. Typically a Dict or tuple (e.g. from df.itertuples()). Must include values for all PK columns.
+        connection
+
+    Raises errors for:
+        Unknown scenario_name
+        Primary Key value not in values
+    """
+    if scenario_table_name in self.db_tables:
+        db_table = self.db_tables[scenario_table_name]
+    else:
+        raise ValueError(f"Scenario table name '{scenario_table_name}' unknown. Cannot upsert data into DB.")
+
+    # Split values in 2 parts:
+    # 1. The primary keys
+    # 2. The other columns
+    primary_keys = [c.name for c in db_table.columns_metadata if c.primary_key and c.name != 'scenario_seq' and c.name != 'scenario_name']
+    if not all(pk in values.keys() for pk in primary_keys):
+        raise ValueError(f"Not all required primary keys {primary_keys} specified in upsert request {values}")
+    pk_values = {k: v for k, v in values.items() if k in primary_keys}
+    pk_conditions = [(db_table.get_sa_column(k) == v) for k, v in pk_values.items()]
+    column_values = {k: v for k, v in values.items() if k not in primary_keys and k not in ['scenario_seq', 'scenario_name']}  # remove PK values
+    t: sqlalchemy.Table = db_table.get_sa_table()
+
+    if self.enable_scenario_seq:
+        if (scenario_seq := self._get_scenario_seq(scenario_name, connection)) is not None:
+            # print(f"ScenarioSeq = {scenario_seq}")
+            sql_select = t.select().where(sqlalchemy.and_((t.c.scenario_seq == scenario_seq), *pk_conditions))
+            res = connection.execute(sql_select)
+            count = res.rowcount
+            if count > 0:
+                # Update existing record
+                sql_update = t.update().where(sqlalchemy.and_((t.c.scenario_seq == scenario_seq), *pk_conditions)).values(column_values)
+                connection.execute(sql_update)
+            else:
+                # Insert new record
+                sql_insert = t.insert().values(values)
+                connection.execute(sql_insert)
+        else:
+            raise ValueError(f"Scenario name '{scenario_name}' is unknown. Cannot upsert row.")
+    else:
+        raise NotImplementedError("Upsert only supports enable_scenario_seq")
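The select-then-update-or-insert flow above can be mirrored with the stdlib `sqlite3` module; a sketch under hypothetical table and column names:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE kpis (scenario_seq INT, name TEXT, value REAL, "
            "PRIMARY KEY (scenario_seq, name))")

def upsert(con, scenario_seq, name, value):
    # 1. Probe for an existing row on the full primary key
    row = con.execute("SELECT 1 FROM kpis WHERE scenario_seq=? AND name=?",
                      (scenario_seq, name)).fetchone()
    if row:
        # 2a. Row exists: update the non-PK columns
        con.execute("UPDATE kpis SET value=? WHERE scenario_seq=? AND name=?",
                    (value, scenario_seq, name))
    else:
        # 2b. Row missing: insert a new record
        con.execute("INSERT INTO kpis VALUES (?, ?, ?)", (scenario_seq, name, value))

upsert(con, 1, 'cost', 10.0)
upsert(con, 1, 'cost', 12.5)  # second call updates the existing row in place
print(con.execute("SELECT value FROM kpis").fetchall())  # [(12.5,)]
```

Note this probe-then-write pattern is only race-free inside a transaction, which is why the library wraps it in `engine.begin()` when transactions are enabled.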
 class ScenarioGenerator(Generic[SC]):
     """Generates a variation of a scenario, i.e. `inputs` dataset, driven by a ScenarioConfig. To be subclassed.
     This base class implements overrides of the Parameter table.
@@ -108,10 +110,10 @@
Source code for dse_do_utils.scenariorunner
 def __init__(self, inputs: Inputs,
-             scenario_config: ScenarioConfig) -> None:
+             scenario_config: SC) -> None:
     self._logger: Logger = getLogger(__name__)
     self.inputs: Inputs = inputs.copy()  # Only copy of dict
-    self.scenario_config: ScenarioConfig = scenario_config
+    self.scenario_config: SC = scenario_config

 def generate_scenario(self):
     """Generate a variation of the base_inputs. To be overridden.
@@ -392,7 +394,7 @@
     Set bulk to False to get more granular DB insert errors, i.e. per record.
     TODO: add a data_check() on the DataManager for additional checks."""
     self._logger.info('Checking input data via SQLite and DataManager')
-    self.sqlite_scenario_db_manager: ScenarioDbManager = self.scenario_db_manager_class()
+    self.sqlite_scenario_db_manager: ScenarioDbManager = self.scenario_db_manager_class(db_type=DatabaseType.SQLite)
     self.sqlite_scenario_db_manager.create_schema()
     self.sqlite_scenario_db_manager.replace_scenario_in_db(scenario_name, deepcopy(inputs), {}, bulk=bulk)
@@ -409,7 +411,7 @@
     TODO: add a data_check() on the DataManager for additional checks."""
     self._logger.info('Checking output data via SQLite and DataManager')
     if self.sqlite_scenario_db_manager is None:
-        self.sqlite_scenario_db_manager: ScenarioDbManager = self.scenario_db_manager_class()
+        self.sqlite_scenario_db_manager: ScenarioDbManager = self.scenario_db_manager_class(db_type=DatabaseType.SQLite)
         self.sqlite_scenario_db_manager.create_schema()
         self.sqlite_scenario_db_manager.replace_scenario_in_db(scenario_name, deepcopy(inputs), deepcopy(outputs), bulk=bulk)
     else:
@@ -557,9 +559,8 @@
     Index columns are added at the tail of the tuple, so as to be compatible with code that uses the position of the fields in the tuple.
     Inspired by https://stackoverflow.com/questions/46151666/iterate-over-pandas-dataframe-with-multiindex-by-index-names.
+
+    Notes:
+        * Does NOT work when df.Index has no names
+        TODO: does not work if only Index and no columns
+        TODO: test the combinations where row or Index are not tuples. Is row always a tuple?
     """
     Row = namedtuple("Row", ['Index', *df.columns, *df.index.names])
     for row in df.itertuples():
-        yield Row(*(row + row.Index))
+        # Option 1 - Fails when Index is not a tuple
+        # yield Row(*(row + row.Index))
+
+        # Option 2 - In case the df has no columns?
+        if isinstance(row.Index, tuple):
+            yield Row(*(row + row.Index))
+        else:
+            yield Row(*row, row.Index)
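The tuple-plus-index behavior can be checked standalone on a small multi-index frame (hypothetical column and index names):

```python
from collections import namedtuple
import pandas as pd

df = pd.DataFrame({'qty': [10, 20]},
                  index=pd.MultiIndex.from_tuples([('a', 1), ('b', 2)],
                                                  names=['site', 'period']))

# Index values are appended at the tail, after the regular columns
Row = namedtuple("Row", ['Index', *df.columns, *df.index.names])
rows = [Row(*(row + row.Index)) for row in df.itertuples()]
print(rows[0].site, rows[0].period, rows[0].qty)  # a 1 10
```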