diff --git a/oven/bugs/repressilator.xml b/oven/bugs/repressilator.xml
deleted file mode 100644
index 562478b6..00000000
--- a/oven/bugs/repressilator.xml
+++ /dev/null
@@ -1,1021 +0,0 @@
-
-
-
-
-
- Elowitz2000 - Repressilator
-
-
This model describes the deterministic version of the repressilator system.
-
The authors of this model (see reference) use three transcriptional repressor systems that are not part of any natural biological clock to build an oscillating network that they called the repressilator. The model system was induced in Escherichia coli.
-
In this system, LacI (variable X is the mRNA, variable PX is the protein) inhibits the tetracycline-resistance transposon tetR (Y, PY denote mRNA and protein). Protein TetR inhibits the gene cI from phage lambda (Z, PZ: mRNA, protein), and protein CI inhibits lacI expression. With the appropriate parameter values this system oscillates.
-
-
-
This model is described in the article:
-
-
Elowitz MB, Leibler S. A synthetic oscillatory network of transcriptional regulators.
-
Nature. 2000 Jan; 403(6767):335-338
-
Abstract:
-
-
Networks of interacting biomolecules carry out many essential functions in living cells, but the 'design principles' underlying the functioning of such intracellular networks remain poorly understood, despite intensive efforts including quantitative analysis of relatively simple systems. Here we present a complementary approach to this problem: the design and construction of a synthetic network to implement a particular function. We used three transcriptional repressor systems that are not part of any natural biological clock to build an oscillating network, termed the repressilator, in Escherichia coli. The network periodically induces the synthesis of green fluorescent protein as a readout of its state in individual cells. The resulting oscillations, with typical periods of hours, are slower than the cell-division cycle, so the state of the oscillator has to be transmitted from generation to generation. This artificial clock displays noisy behaviour, possibly because of stochastic fluctuations of its components. Such 'rational' network design may lead both to the engineering of new cellular behaviours and to an improved understanding of naturally occurring networks.
-
-
-
-
The model is based upon the equations in Box 1 of the paper; however, these equations as printed are dimensionless. The correct dimensions have been restored, and the parameters set to reproduce Figure 1C (left).
-
-
-
The original model was generated by B.E. Shapiro using Cellerator version 1.0 update 2.1127 using Mathematica 4.2 for Mac OS X (June 4, 2002), November 27, 2002 12:15:32, using (PowerMac,PowerPC, Mac OS X,MacOSX,Darwin).
-
Nicolas Le Novere provided a corrected version generated by SBMLeditor on Sun Aug 20 00:44:05 BST 2006. This removed the EmptySet species. Ran fine on COPASI 4.0 build 18.
-
Bruce Shapiro revised the model with SBMLeditor on 23 October 2006 20:39 PST. This defines default units and correct reactions. The original Cellerator reactions, while mathematically correct, did not accurately reflect the intent of the authors. The original notes were removed because they were mostly incorrect in the revised version. Tested with MathSBML 2.6.0.
-
Nicolas Le Novere changed the volume to 1 cubic micrometre, to allow for stochastic simulation.
-
Changed by Lukas Endler to use the average lifetime of mRNA instead of its half-life, and corrected values of alpha and alpha0.
-
Moreover, the equations used in this model were clarified, cf. below.
-
The equations given in Box 1 of the original publication are rescaled in three respects (lowercase letters denote the rescaled, uppercase letters the unscaled numbers of molecules per cell):
-
- - the time is rescaled to the average mRNA lifetime, t_ave: τ = t/t_ave
- - the mRNA concentration is rescaled to the translation efficiency eff: m = M/eff
- - the protein concentration is rescaled to Km: p = P/Km
-
-
 α in the equations should be in units of rescaled proteins per promoter and cell, and β is the ratio of the protein to the mRNA decay rate, or equivalently the ratio of the mRNA to the protein half-life.
-
In this version of the model, α and β are calculated as in the article, while p and m were simply replaced by P/Km and M/eff respectively, and all equations were multiplied by 1/t_ave. Also, to make the equations easier to read, commonly used variables derived by simple rules from the parameters given in the article were introduced.
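The rescaled equations take the form dm_i/dτ = -m_i + α/(1 + p_j^n) + α0 and dp_i/dτ = -β(p_i - m_i), where gene i is repressed by the protein of the previous gene j in the ring. A minimal explicit-Euler sketch of one integration step (an illustration only, not the solver actually used for this model):

```python
def repressilator_step(m, p, alpha=216.4, alpha0=0.2164, beta=0.2, n=2, dt=0.01):
    """One explicit-Euler step of the rescaled (dimensionless) equations.

    m, p: rescaled mRNA and protein levels of the three genes, ordered
    lacI, tetR, cI; gene i is repressed by protein (i - 1) % 3 in the ring.
    """
    m_new, p_new = [], []
    for i in range(3):
        j = (i - 1) % 3  # index of the repressing protein
        dm = -m[i] + alpha / (1.0 + p[j] ** n) + alpha0
        dp = -beta * (p[i] - m[i])
        m_new.append(m[i] + dt * dm)
        p_new.append(p[i] + dt * dp)
    return m_new, p_new
```

Iterating this map from an asymmetric initial state produces the characteristic phase-shifted oscillations of the three repressors.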
-
The parameters given in the article were:
-
-
- | promoter strength (repressed) (tps_repr): | 5*10^-4 | transcripts/(promoter*s) |
- | promoter strength (full) (tps_active): | 0.5 | transcripts/(promoter*s) |
- | mRNA half-life, τ_1/2,mRNA: | 2 | min |
- | protein half-life, τ_1/2,prot: | 10 | min |
- | K_M: | 40 | monomers/cell |
- | Hill coefficient n: | 2 | |
-
-
-
From these the following constants can be derived:
-
-
- | average mRNA lifetime (t_ave): | τ_1/2,mRNA/ln(2) | = 2.89 min |
- | mRNA decay rate (kd_mRNA): | ln(2)/τ_1/2,mRNA | = 0.347 min^-1 |
- | protein decay rate (kd_prot): | ln(2)/τ_1/2,prot | = 0.0693 min^-1 |
- | transcription rate (a_tr): | (tps_active - tps_repr)*60 | = 29.97 transcripts/min |
- | transcription rate (repressed) (a0_tr): | tps_repr*60 | = 0.03 transcripts/min |
- | translation rate (k_tl): | eff*kd_mRNA | = 6.93 proteins/(mRNA*min) |
- | α: | a_tr*eff*τ_1/2,prot/(ln(2)*K_M) | = 216.4 proteins/(promoter*cell*Km) |
- | α_0: | a0_tr*eff*τ_1/2,prot/(ln(2)*K_M) | = 0.2164 proteins/(promoter*cell*Km) |
- | β: | kd_prot/kd_mRNA | = 0.2 |
-
-
-
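The derived constants in the table can be checked numerically. Note that eff (translation efficiency, proteins per transcript) is not listed explicitly in the table; eff = 20 is inferred here from k_tl = eff*kd_mRNA = 6.93 and should be treated as an assumption of this sketch:

```python
import math

ln2 = math.log(2)
tps_active, tps_repr = 0.5, 5e-4     # promoter strengths [transcripts/(promoter*s)]
tau_mrna, tau_prot = 2.0, 10.0       # half-lives [min]
Km, eff = 40.0, 20.0                 # monomers/cell, proteins/transcript (eff inferred)

t_ave = tau_mrna / ln2               # average mRNA lifetime [min]
kd_mrna = ln2 / tau_mrna             # mRNA decay rate [1/min]
kd_prot = ln2 / tau_prot             # protein decay rate [1/min]
a_tr = (tps_active - tps_repr) * 60  # transcription rate, free promoter [transcripts/min]
a0_tr = tps_repr * 60                # transcription rate, repressed [transcripts/min]
k_tl = eff * kd_mrna                 # translation rate [proteins/(mRNA*min)]
alpha = a_tr * eff * tau_prot / (ln2 * Km)
alpha0 = a0_tr * eff * tau_prot / (ln2 * Km)
beta = kd_prot / kd_mrna

print(round(t_ave, 2), a_tr, round(k_tl, 2), round(alpha, 1), round(beta, 3))
```

The small difference between the computed α (≈ 216.2) and the tabulated 216.4 comes from whether a_tr is rounded to 30 transcripts/min before α is formed.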
-
Annotation by the Kinetic Simulation Algorithm Ontology (KiSAO):
-
To reproduce the simulations published by the authors, the model can be simulated with either of two approaches. First, one can use a deterministic method (KISAO_0000035) with continuous variables (KISAO_0000018); one suitable algorithm is the CVODE solver (KISAO_0000019). Second, one can simulate the system using Gillespie's direct method (KISAO_0000029), a stochastic method (KISAO_0000036) supporting adaptive timesteps (KISAO_0000041) and using discrete variables (KISAO_0000016).
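Gillespie's direct method mentioned above draws an exponential waiting time from the total propensity and then picks a reaction in proportion to its propensity. A stdlib-only sketch for a hypothetical single birth-death species (an illustration of the algorithm, not the repressilator model itself):

```python
import math
import random


def gillespie_birth_death(k_birth, k_death, x0, t_end, seed=1):
    """Direct-method SSA for 0 -> X (rate k_birth) and X -> 0 (rate k_death*X)."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    times, counts = [t], [x]
    while t < t_end:
        a1 = k_birth          # birth propensity
        a2 = k_death * x      # death propensity
        a0 = a1 + a2
        if a0 == 0:
            break
        # exponential waiting time until the next reaction
        t += -math.log(1.0 - rng.random()) / a0
        # pick the reaction proportionally to its propensity
        x += 1 if rng.random() * a0 < a1 else -1
        times.append(t)
        counts.append(x)
    return times, counts
```

At steady state the mean copy number approaches k_birth/k_death, with fluctuations of the kind the authors invoke to explain the noisy oscillations.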
-
-
-
-
To the extent possible under law, all copyright and related or neighbouring rights to this encoded model have been dedicated to the public domain worldwide. Please refer to the CC0 Public Domain Dedication for more information.
-
-
-
-
-
-
-
-
-
-
- Le Novère
- Nicolas
-
- lenov@ebi.ac.uk
-
- EMBL-EBI
-
-
-
-
- Chelliah
- Vijayalakshmi
-
- viji@ebi.ac.uk
-
- EMBL-EBI
-
-
-
-
- Endler
- Lukas
-
- lukas@ebi.ac.uk
-
- EMBL-EBI
-
-
-
-
- Juty
- Nick
-
- juty@ebi.ac.uk
-
- EMBL-EBI
-
-
-
-
- Shapiro
- Bruce
-
- bshapiro@caltech.edu
-
- Jet Propulsion Laboratory
-
-
-
-
-
- 2009-01-20T14:03:56Z
-
-
- 2013-07-10T10:59:30Z
-
-
-
-
-
-
- lacI inhibitor
-
-
- Tet repressor protein
-
-
- lambda repressor
-
-
- ratio of protein to mRNA decay rates
-
-
-
-
-
- Leakiness in protein copies per promoter and cell
-
-
-
-
-
- Protein copies per promoter and cell
-
-
-
-
-
- Average number of proteins per transcript
-
-
-
-
-
- Hill coefficient
-
-
-
-
-
- Number of repressor molecules per cell giving half maximal repression, in monomers per cell
-
-
-
-
-
-
-
-
- mRNA decay rate constant
-
-
-
-
-
- Protein decay rate constant
-
-
-
-
-
- Translation rate constant
-
-
-
-
-
- Transcription rate from the free promoter minus a0_tr
-
-
-
-
-
- Transcription from the free promoter in transcripts per second and promoter
-
-
-
-
-
- Transcription from the fully repressed promoter in transcripts per second and promoter
-
-
-
-
-
- Transcription rate from the fully repressed promoter
-
-
-
-
-
-
diff --git a/oven/bugs/times_vector.py b/oven/bugs/times_vector.py
deleted file mode 100644
index 6170e352..00000000
--- a/oven/bugs/times_vector.py
+++ /dev/null
@@ -1,38 +0,0 @@
-"""Simulate roadrunner with a times vector."""
-
-from rich import print
-from sbmlutils.resources import REPRESSILATOR_SBML
-
-from sbmlsim.simulator.rr_model import roadrunner
-from sbmlsim.utils import timeit
-
-
-r: roadrunner.RoadRunner = roadrunner.RoadRunner(str(REPRESSILATOR_SBML))
-s1 = r.simulate(times=[0, 10.98, 50.12])
-s2 = r.simulate(start=0, end=10, steps=50)
-
-print(s1)
-print("-" * 80)
-print(s2)
-print("-" * 80)
-
-r: roadrunner.RoadRunner = roadrunner.RoadRunner(str(REPRESSILATOR_SBML))
-s1 = r.simulate(start=0, end=10, steps=50)
-s2 = r.simulate(times=[0, 10.98, 50.12])
-
-print(s1)
-print("-" * 80)
-print(s2)
-print("-" * 80)
-
-
-@timeit
-def simulate_times(r: roadrunner.RoadRunner):
- r.resetToOrigin()
- r.simulate(times=[0, 10.98, 50.12])
-
-
-@timeit
-def simulate_steps(r: roadrunner.RoadRunner):
-    r.resetToOrigin()
-    r.simulate(start=0, end=100, steps=50)
diff --git a/oven/multiprocessing/example_multiprocessing.py b/oven/multiprocessing/example_multiprocessing.py
deleted file mode 100644
index aa7fc9cc..00000000
--- a/oven/multiprocessing/example_multiprocessing.py
+++ /dev/null
@@ -1,26 +0,0 @@
-"""Multiprocessing simulation example."""
-from multiprocessing import Process
-
-from sbmlsim import RESOURCES_DIR
-from sbmlsim.simulator.rr_model import roadrunner
-
-
-def run_simulations(r: roadrunner.RoadRunner, size: int) -> None:
- """Run simulations."""
- for _ in range(size):
- print("simulate")
- res = r.simulate(0, 100, steps=5)
- print(res)
-
-
-def multiprocessing_example() -> None:
- """Run multiprocessing example."""
- model_path = RESOURCES_DIR / "testdata" / "models" / "icg_body_flat.xml"
- rr: roadrunner.RoadRunner = roadrunner.RoadRunner(str(model_path))
- p = Process(target=run_simulations, args=(rr, 10))
- p.start()
- p.join()
-
-
-if __name__ == "__main__":
- multiprocessing_example()
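A caveat with the example above: a live RoadRunner instance is passed to `Process`, which requires the object to be picklable. A more portable pattern is to send only the (picklable) model path and construct the simulator inside the child process. A sketch with a placeholder for the simulation step:

```python
from multiprocessing import Process, Queue


def run_worker(model_path: str, size: int, out: Queue) -> None:
    """Construct the simulator inside the child; only the path crosses processes."""
    # In a real setup: r = roadrunner.RoadRunner(model_path); r.simulate(...)
    results = [f"{model_path}:run{i}" for i in range(size)]  # placeholder results
    out.put(results)


if __name__ == "__main__":
    queue: Queue = Queue()
    p = Process(target=run_worker, args=("icg_body_flat.xml", 3, queue))
    p.start()
    print(queue.get())  # drain the queue before join to avoid blocking
    p.join()
```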
diff --git a/oven/multiprocessing/example_multiprocessing_lucian.py b/oven/multiprocessing/example_multiprocessing_lucian.py
deleted file mode 100644
index bf041405..00000000
--- a/oven/multiprocessing/example_multiprocessing_lucian.py
+++ /dev/null
@@ -1,43 +0,0 @@
-import ray
-import roadrunner
-
-# start ray
-ray.init(ignore_reinit_error=True)
-
-modelstr = """<?xml version="1.0" encoding="UTF-8"?>
-<!-- minimal SBML model defining parameter 'a'; the original inline model was lost -->
-<sbml xmlns="http://www.sbml.org/sbml/level3/version1/core" level="3" version="1">
-  <model id="example">
-    <listOfParameters>
-      <parameter id="a" value="1" constant="true"/>
-    </listOfParameters>
-  </model>
-</sbml>
-"""
-
-@ray.remote
-class SimulatorActorPath(object):
- """Ray actor to execute simulations."""
-
- def __init__(self):
- self.r = roadrunner.RoadRunner(modelstr)
-
- def simulate(self):
- """Simulate."""
- print("Just read the value of 'a'")
- print(self.r.getValue("a"), "\n")
-
-
-def ray_example():
- """Ray example."""
- actor_count: int = 1 # cores to run this on
-
- simulators = [SimulatorActorPath.remote() for _ in range(actor_count)]
-
- # run simulations
- # tc_ids = []
- for simulator in simulators:
- simulator.simulate.remote()
-
-
-if __name__ == "__main__":
- ray_example()
diff --git a/oven/multiprocessing/icg_body_flat.xml b/oven/multiprocessing/icg_body_flat.xml
deleted file mode 100644
index b565f7d4..00000000
--- a/oven/multiprocessing/icg_body_flat.xml
+++ /dev/null
@@ -1,2613 +0,0 @@
-
-
-
-
-
- Whole-body PBPK model of ICG
- Description
- Model for whole-body distribution and elimination of indocyanine green encoded in SBML format.
-
- Assumptions:
-
-
- - icg is only metabolized in the liver
-
- The content of this model has been carefully created in a manual research effort.
- Terms of use
- Copyright © 2021 Matthias König.
-
-
-
-
This work is licensed under a Creative Commons Attribution 4.0 International License.
Redistribution and use of any part of this model, with or without modification, are permitted provided that the following conditions are met:

- Redistributions of this SBML file must retain the above copyright notice, this list of conditions and the following disclaimer.
- Redistributions in a different form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

This model is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
-
-
-
-
-
-
-
- Köller
- Adrian
-
- adriankl39@googlemail.com
-
- Humboldt-University Berlin, Institute for Theoretical Biology
-
-
-
-
- König
- Matthias
-
- koenigmx@hu-berlin.de
-
- Humboldt-University Berlin, Institute for Theoretical Biology
-
-
-
-
-
- 1900-01-01T00:00:00Z
-
-
- 1900-01-01T00:00:00Z
-
-
-
-
-
- Reference range: 0.01 [mmole/l] (~ 1 mg/dl)
-
-
- Simone1997 75 [50-100] ml/min/kg
-
-
- BW*COBW (100 ml/s)
-
-
- CO/1000 ml/l * 60 s/min (6 l/min)
-ICRP2001 reference values: cardiac output 6.5 l/min (male); 5.9 l/min (female)
-Cardiac output at lower end => 3.75 l/min
-
-
- FVgi = FVgu + FVpa + FVsp = 0.0171 + 0.01 + 0.0026 = 0.0297
-
-
- bilirubin reference range: ~ 0.1 - 5 [mg/dl]
-(10 [mg/l] / 584.6623 [g/mole]) ~ 0.0171 mmole/l
-setting Ki in the reference range
-
-
- Parameter for scaling transport protein amount.
-
-
- Solute carrier organic anion transporter family member 1B3 (SLCO1B3).
-Bilirubin effect was modeled as competitive inhibition.
-
-
- Phosphatidylcholine translocator ABCB4
-
-
- As a simplification, transport is directly into feces (no information on the time course is available). No transport proteins are involved in the biliary secretion into the duodenum.
-
-
diff --git a/oven/multiprocessing/multiprocessing_manager.py b/oven/multiprocessing/multiprocessing_manager.py
deleted file mode 100644
index 8b357e3d..00000000
--- a/oven/multiprocessing/multiprocessing_manager.py
+++ /dev/null
@@ -1,20 +0,0 @@
-
-from multiprocessing import Process, Manager
-
-def f(d, l):
- d[1] = '1'
- d['2'] = 2
- d[0.25] = None
- l.reverse()
-
-if __name__ == '__main__':
- with Manager() as manager:
- d = manager.dict()
- l = manager.list(range(10))
-
- p = Process(target=f, args=(d, l))
- p.start()
- p.join()
-
- print(d)
- print(l)
diff --git a/oven/multiprocessing/ray_init.py b/oven/multiprocessing/ray_init.py
deleted file mode 100644
index eab2a97d..00000000
--- a/oven/multiprocessing/ray_init.py
+++ /dev/null
@@ -1,15 +0,0 @@
-"""
-ray start --head --port=6379
-"""
-
-
-import time
-tstart = time.time()
-import ray
-
-ray.init(address='auto')
-# ray.init()
-
-
-tend = time.time()
-print(f"ray start time: {tend - tstart}")
diff --git a/src/sbmlsim/combine/datagenerator.py b/src/sbmlsim/combine/datagenerator.py
index 199ee828..9829ef2e 100644
--- a/src/sbmlsim/combine/datagenerator.py
+++ b/src/sbmlsim/combine/datagenerator.py
@@ -1,7 +1,5 @@
"""DataGenerator."""
-from typing import Dict
-
from sbmlsim.data import DataSet
from sbmlsim.result import XResult
@@ -10,8 +8,8 @@ class DataGeneratorFunction:
"""DataGeneratorFunction."""
def __call__(
- self, xresults: Dict[str, XResult], dsets: Dict[str, DataSet] = None
- ) -> Dict[str, XResult]:
+ self, xresults: dict[str, XResult], dsets: dict[str, DataSet] = None
+ ) -> dict[str, XResult]:
"""Call the function."""
raise NotImplementedError
@@ -24,7 +22,7 @@ def __init__(self, index: int, dimension: str = "_time"):
self.index = index
self.dimension = dimension
- def __call__(self, xresults: Dict[str, XResult], dsets=None) -> Dict[str, XResult]:
+ def __call__(self, xresults: dict[str, XResult], dsets=None) -> dict[str, XResult]:
"""Reduce based on '_time' dimension with given index."""
results = {}
for key, xres in xresults.items():
@@ -50,8 +48,8 @@ class DataGenerator:
def __init__(
self,
f: DataGeneratorFunction,
- xresults: Dict[str, XResult],
- dsets: Dict[str, DataSet] = None,
+ xresults: dict[str, XResult],
+ dsets: dict[str, DataSet] = None,
):
"""Initialize DataGenerator."""
self.xresults = xresults
diff --git a/src/sbmlsim/combine/mathml.py b/src/sbmlsim/combine/mathml.py
index 2ac5e959..02aa2a24 100644
--- a/src/sbmlsim/combine/mathml.py
+++ b/src/sbmlsim/combine/mathml.py
@@ -3,7 +3,7 @@
Using sympy to evaluate the expressions.
"""
-from typing import Any, Dict, Set
+from typing import Any
from pymetadata import log
import libsedml
from sympy import lambdify, sympify
@@ -92,7 +92,7 @@ def expr_from_formula(formula: str):
return expr
-def evaluate(astnode: libsedml.ASTNode, variables: Dict):
+def evaluate(astnode: libsedml.ASTNode, variables: dict):
"""Evaluate the astnode with values."""
expr = parse_astnode(astnode)
f = lambdify(args=list(expr.free_symbols), expr=expr)
@@ -100,9 +100,9 @@ def evaluate(astnode: libsedml.ASTNode, variables: Dict):
return res
-def _get_variables(astnode: libsedml.ASTNode, variables=None) -> Set[str]:
+def _get_variables(astnode: libsedml.ASTNode, variables=None) -> set[str]:
"""Add variable names to the variables."""
- variables: Set
+ variables: set
if variables is None:
variables = set()
diff --git a/src/sbmlsim/combine/sedml/data.py b/src/sbmlsim/combine/sedml/data.py
index 129baf00..f30e1f8c 100644
--- a/src/sbmlsim/combine/sedml/data.py
+++ b/src/sbmlsim/combine/sedml/data.py
@@ -1,10 +1,11 @@
"""Reading NUML, CSV and TSV data from DataDescriptions."""
+
import http.client as httplib
import importlib
import os
import tempfile
from pathlib import Path
-from typing import Dict, Optional
+from typing import Optional
import libsbml
import libsedml
@@ -30,7 +31,7 @@ class DataDescriptionParser:
@classmethod
def parse(
cls, dd: libsedml.SedDataDescription, working_dir: Path = None
- ) -> Dict[str, pd.Series]:
+ ) -> dict[str, pd.Series]:
"""Parse single DataDescription.
Returns dictionary of data sources {DataSource.id, slice_data}
@@ -129,7 +130,6 @@ def parse(
# -------------------------------
data_sources = {}
for ds in dd.getListOfDataSources():
-
dsid = ds.getId()
# log DataSource
diff --git a/src/sbmlsim/combine/sedml/io.py b/src/sbmlsim/combine/sedml/io.py
index 2770ef39..045e8459 100644
--- a/src/sbmlsim/combine/sedml/io.py
+++ b/src/sbmlsim/combine/sedml/io.py
@@ -1,10 +1,11 @@
"""Template functions to run the example cases."""
+
import importlib
import os
import zipfile
from enum import Enum
from pathlib import Path
-from typing import Dict, Optional, Tuple, Union
+from typing import Optional, Tuple, Union
from xml.etree import ElementTree
import libsedml
@@ -147,7 +148,6 @@ def read_sedml(self) -> Tuple[libsedml.SedDocument, SEDMLInputType]:
omex = pyomex.Omex.from_omex(omex_path=file_path)
sedml_entries = omex.entries_by_format(format_key="sed-ml")
for entry in sedml_entries:
-
logger.info("SED-ML location: ", entry.location)
if entry.master:
sedml_path = omex.get_path(entry.location)
diff --git a/src/sbmlsim/combine/sedml/parser.py b/src/sbmlsim/combine/sedml/parser.py
index fe8bec9c..8f3297b5 100644
--- a/src/sbmlsim/combine/sedml/parser.py
+++ b/src/sbmlsim/combine/sedml/parser.py
@@ -83,7 +83,7 @@
from enum import Enum
from pathlib import Path
from pprint import pprint
-from typing import Dict, List, Optional, Set, Type, Union
+from typing import Optional, Type, Union
import libsedml
import pandas as pd
@@ -191,9 +191,9 @@ def sedml_target(self) -> Optional[str]:
@staticmethod
def sbmlsim_model_targets(
r: roadrunner.ExecutableModel,
- ) -> Dict[str, "SBMLModelTarget"]:
+ ) -> dict[str, "SBMLModelTarget"]:
"""Model targets which are supported by sbmlsim."""
- d: Dict[str, "SBMLModelTarget"] = {}
+ d: dict[str, "SBMLModelTarget"] = {}
# time
d["time"] = SBMLModelTarget(
@@ -295,9 +295,9 @@ def __init__(
omex = pyomex.Omex.from_directory(working_dir)
omex.to_omex(omex_path=omex_path)
- def _selection_lookup_table(self) -> Dict[str, Dict[str, SBMLModelTarget]]:
+ def _selection_lookup_table(self) -> dict[str, dict[str, SBMLModelTarget]]:
"""Lookup table for sbmlsim model selections."""
- d: Dict[str, Dict[str, SBMLModelTarget]] = {}
+ d: dict[str, dict[str, SBMLModelTarget]] = {}
for model_id in self.exp.models():
rrsbml_model: RoadrunnerSBMLModel = self.exp._models[model_id]
rr_model: roadrunner.ExecutableModel = rrsbml_model.r.model
@@ -331,7 +331,7 @@ def serialize_datasets(self):
# FIXME: Data must be unit converted to the actual plot/report;
# FIXME: same for the model
- dset_indices: Dict[str, Set[str]] = defaultdict(set)
+ dset_indices: dict[str, set[str]] = defaultdict(set)
# THIS CREATES PROBLEMS
# for did, data in self.exp._data.items():
# sed_dg: libsedml.SedDataGenerator = self.sed_doc.createDataGenerator()
@@ -425,7 +425,7 @@ def serialize_models(self):
for model_id, model in self.exp.models().items():
print(model_id, model)
rrsbml_model: RoadrunnerSBMLModel = self.exp._models[model_id]
- selection_map: Dict[str, SBMLModelTarget] = self.selection_lookup[model_id]
+ selection_map: dict[str, SBMLModelTarget] = self.selection_lookup[model_id]
sed_model: libsedml.SedModel = self.sed_doc.createModel()
sed_model.setId(model_id)
@@ -452,7 +452,7 @@ def serialize_models(self):
sed_model.setSource(str(model_path_rel))
# get normalized changes (to model units)
- changes: Dict[str, Quantity] = UnitsInformation.normalize_changes(
+ changes: dict[str, Quantity] = UnitsInformation.normalize_changes(
changes=abstract_model.changes, uinfo=rrsbml_model.uinfo
)
@@ -479,7 +479,7 @@ def serialize_simulations(self):
Write experiment simulations in SedDocument.
"""
sim_id: str
- simulation: Dict[str, AbstractSim]
+ simulation: dict[str, AbstractSim]
for sim_id, simulation in self.exp._simulations.items():
if isinstance(simulation, (TimecourseSim, ScanSim)):
if isinstance(simulation, TimecourseSim):
@@ -867,7 +867,7 @@ def __init__(
self.exp_class: Type[SimulationExperiment]
# --- Models ---
- self.models: Dict[str, AbstractModel] = {}
+ self.models: dict[str, AbstractModel] = {}
# resolve original model source and changes
model_sources, model_changes = self.resolve_model_changes()
@@ -882,12 +882,12 @@ def __init__(
logger.debug(f"models: {self.models}")
# --- DataDescriptions ---
- self.data_descriptions: Dict[str, Dict[str, pd.Series]] = {}
- self.datasets: Dict[str, DataSet] = {}
+ self.data_descriptions: dict[str, dict[str, pd.Series]] = {}
+ self.datasets: dict[str, DataSet] = {}
sed_dd: libsedml.SedDataDescription
for sed_dd in sed_doc.getListOfDataDescriptions():
did = sed_dd.getId()
- data_description: Dict[str, pd.Series] = DataDescriptionParser.parse(
+ data_description: dict[str, pd.Series] = DataDescriptionParser.parse(
sed_dd, self.working_dir
)
self.data_descriptions[did] = data_description
@@ -901,7 +901,7 @@ def __init__(
logger.debug(f"data_descriptions: {self.data_descriptions}")
# --- AlgorithmParameters ---
- self.algorithm_parameters: List[AlgorithmParameter] = []
+ self.algorithm_parameters: list[AlgorithmParameter] = []
sed_alg_par: libsedml.SedAlgorithmParameter
for sed_alg_par in sed_doc.getListOfAlgorithmParameters():
self.algorithm_parameters.append(
@@ -910,14 +910,14 @@ def __init__(
logger.debug(f"algorithm_parameters: {self.algorithm_parameters}")
# --- Simulations ---
- self.simulations: Dict[str, AbstractSim] = {}
+ self.simulations: dict[str, AbstractSim] = {}
sed_sim: libsedml.SedSimulation
for sed_sim in sed_doc.getListOfSimulations():
self.simulations[sed_sim.getId()] = self.parse_simulation(sed_sim)
logger.debug(f"simulations: {self.simulations}")
# --- Tasks ---
- self.tasks: Dict[str, Task] = {}
+ self.tasks: dict[str, Task] = {}
sed_task: libsedml.SedTask
for sed_task in sed_doc.getListOfTasks():
task = self.parse_task(sed_task)
@@ -939,7 +939,7 @@ def __init__(
# Fit Experiments
print("*** FitExperiments & FitMappings ***")
- fit_experiments: List[FitExperiment] = []
+ fit_experiments: list[FitExperiment] = []
sed_fit_experiment: libsedml.SedFitExperiment
for sed_fit_experiment in sed_petask.getListOfFitExperiments():
pprint(sed_fit_experiment)
@@ -961,7 +961,7 @@ def __init__(
)
# fit_mappings
- mappings: List[FitMapping] = []
+ mappings: list[FitMapping] = []
sed_fit_mapping: libsedml.SedFitMapping
for sed_fit_mapping in sed_fit_experiment.getListOfFitMappings():
weight: float = sed_fit_mapping.getWeight()
@@ -996,7 +996,7 @@ def __init__(
# Fit Parameters
print("*** FitParameters ***")
- parameters: List[FitParameter] = []
+ parameters: list[FitParameter] = []
sed_adjustable_parameter: libsedml.SedAdjustableParameter
for (
sed_adjustable_parameter
@@ -1030,7 +1030,7 @@ def __init__(
)
# resolve links to experiments!
- experiment_refs: List[str] = []
+ experiment_refs: list[str] = []
for (
sed_experiment_ref
@@ -1045,10 +1045,10 @@ def __init__(
# --- Data ---
# data is generated in the figures and reports
- self.data: Dict[str, Data] = {}
+ self.data: dict[str, Data] = {}
# --- Styles ---
- self.styles: Dict[str, Style] = {}
+ self.styles: dict[str, Style] = {}
sed_style: libsedml.SedStyle
for sed_style in sed_doc.getListOfStyles():
self.styles[sed_style.getId()] = self.parse_style(sed_style)
@@ -1056,7 +1056,7 @@ def __init__(
logger.debug(f"styles: {self.styles}")
# --- Outputs: Figures/Plots ---
- self.figures: Dict[str, Figure] = {}
+ self.figures: dict[str, Figure] = {}
sed_output: libsedml.SedOutput
# which plots are not in figures
@@ -1089,13 +1089,13 @@ def __init__(
logger.debug(f"figures: {self.figures}")
# --- Outputs: Reports---
- self.reports: Dict[str, Dict[str, Data]] = {}
+ self.reports: dict[str, dict[str, Data]] = {}
for sed_output in sed_doc.getListOfOutputs():
type_code = sed_output.getTypeCode()
if type_code == libsedml.SEDML_OUTPUT_REPORT:
sed_report: libsedml.SedReport = sed_output
- report: Dict[str, str] = self.parse_report(sed_report=sed_report)
+ report: dict[str, str] = self.parse_report(sed_report=sed_report)
self.reports[sed_output.getId()] = report
logger.debug(f"reports: {self.reports}")
@@ -1134,29 +1134,29 @@ def _create_experiment_class(self) -> Type[SimulationExperiment]:
"""
# Create the experiment object
- def f_algorithm_parameters(obj) -> List[AlgorithmParameter]:
+ def f_algorithm_parameters(obj) -> list[AlgorithmParameter]:
return self.algorithm_parameters
- def f_models(obj) -> Dict[str, AbstractModel]:
+ def f_models(obj) -> dict[str, AbstractModel]:
return self.models
- def f_datasets(obj) -> Dict[str, DataSet]:
+ def f_datasets(obj) -> dict[str, DataSet]:
"""Dataset definition (experimental data)."""
return self.datasets
- def f_simulations(obj) -> Dict[str, AbstractSim]:
+ def f_simulations(obj) -> dict[str, AbstractSim]:
return self.simulations
- def f_tasks(obj) -> Dict[str, Task]:
+ def f_tasks(obj) -> dict[str, Task]:
return self.tasks
- def f_data(obj) -> Dict[str, Data]:
+ def f_data(obj) -> dict[str, Data]:
return self.data
- def f_figures(obj) -> Dict[str, Figure]:
+ def f_figures(obj) -> dict[str, Figure]:
return self.figures
- def f_reports(obj) -> Dict[str, Dict[str, str]]:
+ def f_reports(obj) -> dict[str, dict[str, str]]:
return self.reports
class_name = self.name
@@ -1213,7 +1213,7 @@ def parse_model(
self,
sed_model: libsedml.SedModel,
source: str,
- sed_changes: List[libsedml.SedChange],
+ sed_changes: list[libsedml.SedChange],
) -> AbstractModel:
"""Convert SedModel to AbstractModel.
@@ -1309,7 +1309,7 @@ def find_source(mid: str, changes):
return model_sources, all_changes
- def parse_change(self, sed_change: libsedml.SedChange) -> Dict:
+ def parse_change(self, sed_change: libsedml.SedChange) -> dict:
"""Parse the libsedml.Change.
Currently only a limited subset of model changes is supported.
@@ -1431,7 +1431,7 @@ def parse_simulation(self, sed_sim: libsedml.SedSimulation) -> TimecourseSim:
def parse_task(self, sed_task: libsedml.SedAbstractTask) -> Task:
"""Parse arbitrary task (repeated or simple, or simple repeated)."""
# If no DataGenerator references the task, no execution is necessary
- dgs: List[libsedml.SedDataGenerator] = self.data_generators_for_task(sed_task)
+ dgs: list[libsedml.SedDataGenerator] = self.data_generators_for_task(sed_task)
if len(dgs) == 0:
logger.warning(
f"Task '{sed_task.getId()}' is not used in any DataGenerator."
@@ -1560,8 +1560,8 @@ def parse_plot2d(self, sed_plot2d: libsedml.SedPlot2D) -> Plot:
plot.yaxis_right = self.parse_axis(sed_plot2d.getRightYAxis())
# curves
- curves: List[Curve] = []
- areas: List[ShadedArea] = []
+ curves: list[Curve] = []
+ areas: list[ShadedArea] = []
for sed_abstract_curve in sed_plot2d.getListOfCurves():
abstract_curve = self.parse_abstract_curve(sed_abstract_curve)
if isinstance(abstract_curve, Curve):
@@ -1577,13 +1577,13 @@ def parse_plot3d(self, sed_plot3d: libsedml.SedPlot3D) -> Plot:
# FIXME: implement
raise NotImplementedError
- def parse_report(self, sed_report: libsedml.SedReport) -> Dict[str, str]:
+ def parse_report(self, sed_report: libsedml.SedReport) -> dict[str, str]:
"""Parse Report.
:return dictionary of label: dataGenerator.id mapping.
"""
sed_dataset: libsedml.SedDataSet
- report: Dict[str, str] = {}
+ report: dict[str, str] = {}
for sed_dataset in sed_report.getListOfDataSets():
sed_dg_id: str = sed_dataset.getDataReference()
if self.sed_doc.getDataGenerator(sed_dg_id) is None:
@@ -1878,12 +1878,12 @@ def data_from_datagenerator(self, sed_dg_ref: Optional[str]) -> Optional[Data]:
astnode: libsedml.ASTNode = sed_dg.getMath()
function: str = libsedml.formulaToL3String(astnode)
- parameters: Dict[str, float] = {}
+ parameters: dict[str, float] = {}
sed_par: libsedml.SedParameter
for sed_par in sed_dg.getListOfParameters():
parameters[sed_par.getId()] = sed_par.getValue()
- variables: Dict[str, Data] = {}
+ variables: dict[str, Data] = {}
sed_var: libsedml.SedVariable
for sed_var in sed_dg.getListOfVariables():
task_id = sed_var.getTaskReference()
@@ -1925,7 +1925,7 @@ def data_from_datagenerator(self, sed_dg_ref: Optional[str]) -> Optional[Data]:
def data_generators_for_task(
self,
sed_task: libsedml.SedTask,
- ) -> List[libsedml.SedDataGenerator]:
+ ) -> list[libsedml.SedDataGenerator]:
"""Get DataGenerators which reference the given task."""
sed_dgs = []
sed_dg: libsedml.SedDataGenerator
@@ -1939,7 +1939,7 @@ def data_generators_for_task(
return sed_dgs
@staticmethod
- def get_ordered_subtasks(sed_task: libsedml.SedTask) -> List[libsedml.SedTask]:
+ def get_ordered_subtasks(sed_task: libsedml.SedTask) -> list[libsedml.SedTask]:
"""Ordered list of subtasks for task."""
subtasks = sed_task.getListOfSubTasks()
subtask_order = [st.getOrder() for st in subtasks]
diff --git a/src/sbmlsim/combine/sedml/report.py b/src/sbmlsim/combine/sedml/report.py
index f9f9cb87..24a4aacf 100644
--- a/src/sbmlsim/combine/sedml/report.py
+++ b/src/sbmlsim/combine/sedml/report.py
@@ -1,7 +1,5 @@
"""Reports."""
-from typing import Dict
-
from pymetadata import log
@@ -14,7 +12,7 @@ class Report:
Collections of data generators.
"""
- def __init__(self, sid: str, name: str = None, datasets: Dict[str, str] = None):
+ def __init__(self, sid: str, name: str = None, datasets: dict[str, str] = None):
"""Construct report."""
self.sid: str = sid
self.name: str = name
@@ -22,7 +20,7 @@ def __init__(self, sid: str, name: str = None, datasets: Dict[str, str] = None):
if datasets is None:
self.datasets = {}
- self.datasets: Dict[str, str] = datasets
+        self.datasets: dict[str, str] = datasets if datasets is not None else {}
def add_dataset(self, label: str, data_id: str) -> None:
"""Add dataset for given label."""
diff --git a/src/sbmlsim/combine/sedml/runner.py b/src/sbmlsim/combine/sedml/runner.py
index 3efcf6da..2d6b4380 100644
--- a/src/sbmlsim/combine/sedml/runner.py
+++ b/src/sbmlsim/combine/sedml/runner.py
@@ -56,7 +56,7 @@ def execute_sedml(path: Path, working_dir: Path, output_path: Path) -> None:
# execute simulation experiment
runner = ExperimentRunner(
[sedml_parser.exp_class],
- simulator=SimulatorSerialRR(),
+ simulator=SimulatorSerial(),
data_path=sedml_reader.exec_dir,
base_path=sedml_reader.exec_dir,
)
diff --git a/src/sbmlsim/combine/sedml/task.py b/src/sbmlsim/combine/sedml/task.py
index d72c5c15..5d185dcd 100644
--- a/src/sbmlsim/combine/sedml/task.py
+++ b/src/sbmlsim/combine/sedml/task.py
@@ -1,8 +1,9 @@
-from typing import List
+import warnings
import libsedml
+import numpy as np
from pymetadata import log
-
+from sbmlutils.converters.mathml import evaluableMathML
logger = log.get_logger(__name__)
@@ -108,12 +109,12 @@ def add_children(node):
@staticmethod
def get_ordered_subtasks(
repeated_task: libsedml.SedRepeatedTask,
- ) -> List[libsedml.SedSubTask]:
+ ) -> list[libsedml.SedSubTask]:
"""Ordered list of subtasks for repeated task."""
subtasks: libsedml.SedListOfSubTasks = repeated_task.getListOfSubTasks()
- subtaskOrder: List[int] = [st.getOrder() for st in subtasks]
+ subtaskOrder: list[int] = [st.getOrder() for st in subtasks]
# sort by order, if all subtasks have order (not required)
- if all(subtaskOrder) != None:
+        if all(o is not None for o in subtaskOrder):
subtasks = [st for (stOrder, st) in sorted(zip(subtaskOrder, subtasks))]
return subtasks
@@ -121,6 +122,10 @@ def get_ordered_subtasks(
# -------------------------------------------------------------------------------------
+class SEDMLCodeFactory:
+ pass
+
+
class Test(object):
@staticmethod
def simpleTaskToPython(doc, node: TaskNode):
diff --git a/src/sbmlsim/comparison/diff.py b/src/sbmlsim/comparison/diff.py
index 9b1417e3..50bdc785 100644
--- a/src/sbmlsim/comparison/diff.py
+++ b/src/sbmlsim/comparison/diff.py
@@ -5,7 +5,6 @@
"""
from pathlib import Path
-from typing import Dict
import numpy as np
import pandas as pd
@@ -20,7 +19,7 @@
logger = log.get_logger(__name__)
-def get_files_by_extension(base_path: Path, extension: str = ".json") -> Dict[str, str]:
+def get_files_by_extension(base_path: Path, extension: str = ".json") -> dict[str, str]:
"""Get all files by given extension.
Simulation definitions are json files.
@@ -49,12 +48,12 @@ class DataSetsComparison:
@timeit
def __init__(
self,
- dfs_dict: Dict[str, pd.DataFrame],
+ dfs_dict: dict[str, pd.DataFrame],
columns_filter=None,
time_column: bool = True,
title: str = None,
- selections: Dict[str, str] = None,
- factors: Dict[str, float] = None,
+ selections: dict[str, str] = None,
+ factors: dict[str, float] = None,
):
"""Initialize the comparison.
@@ -341,7 +340,8 @@ def plot_diff(self):
sns.heatmap(
data=df_diff.T,
cmap="seismic",
- linewidths=0.2, linecolor="black",
+ linewidths=0.2,
+ linecolor="black",
vmin=-vmax,
vmax=vmax,
ax=ax1,
@@ -357,7 +357,7 @@ def plot_diff(self):
ax4.plot(diff_rel[cid], label=cid)
ax2.set_ylabel("Tolerance difference", fontweight="bold")
- ax2.legend(prop={'size': 6})
+ ax2.legend(prop={"size": 6})
ax3.set_ylabel("Absolute difference", fontweight="bold")
ax4.set_ylabel("Relative difference", fontweight="bold")
@@ -365,7 +365,7 @@ def plot_diff(self):
ax.set_xlabel("time index", fontweight="bold")
ax.set_yscale("log")
ax.set_ylim(bottom=1e-10)
- ax.legend(prop={'size': 6})
+ ax.legend(prop={"size": 6})
if ax.get_ylim()[1] < 10 * DataSetsComparison.tol_abs:
ax.set_ylim(top=10 * DataSetsComparison.tol_abs)
diff --git a/src/sbmlsim/comparison/example_comparison.py b/src/sbmlsim/comparison/example_comparison.py
index 0a83de49..e261aa88 100644
--- a/src/sbmlsim/comparison/example_comparison.py
+++ b/src/sbmlsim/comparison/example_comparison.py
@@ -15,7 +15,7 @@
"""
from pathlib import Path
-from typing import List, Dict, Type
+from typing import Type
import numpy as np
import pandas as pd
@@ -51,10 +51,10 @@
# conditions_path = base_path / "resources" / "condition.tsv"
conditions_path = base_path / "resources" / "condition_liver.tsv"
- conditions_list: List[Condition] = Condition.parse_conditions_from_file(
+ conditions_list: list[Condition] = Condition.parse_conditions_from_file(
conditions_path=conditions_path
)
- conditions: Dict[str, Condition] = {c.sid: c for c in conditions_list}
+ conditions: dict[str, Condition] = {c.sid: c for c in conditions_list}
# simulate condition with simulators
# ----------------------------------------------------------------
@@ -74,7 +74,7 @@
# print(f"{timepoints=}")
# run comparison
- dfs: Dict[str, pd.DataFrame] = {}
+ dfs: dict[str, pd.DataFrame] = {}
simulator: Type[SimulateSBML]
for key, simulator in {
"roadrunner": SimulateRoadrunnerSBML,
diff --git a/src/sbmlsim/comparison/example_copasi.py b/src/sbmlsim/comparison/example_copasi.py
index 7bcddfbf..1a9db9a7 100644
--- a/src/sbmlsim/comparison/example_copasi.py
+++ b/src/sbmlsim/comparison/example_copasi.py
@@ -1,14 +1,11 @@
from pathlib import Path
+from basico import load_model, set_parameters, get_parameters
base_path: Path = Path(__file__).parent
model_path = base_path / "resources" / "icg_sd.xml"
print(model_path)
-from basico import (
- load_model,
- set_parameters,
- get_parameters
-)
+
load_model(location=str(model_path))
set_parameters("body weight [kg]", initial_value=83.5)
diff --git a/src/sbmlsim/comparison/simulate.py b/src/sbmlsim/comparison/simulate.py
index c3f54067..2e3d1c7f 100644
--- a/src/sbmlsim/comparison/simulate.py
+++ b/src/sbmlsim/comparison/simulate.py
@@ -1,8 +1,7 @@
from __future__ import annotations
from pathlib import Path
-from typing import Dict, Optional, List, Set, Tuple, Any
+from typing import Optional, Tuple, Any
-import numpy as np
import pandas as pd
import libsbml
from petab.conditions import get_condition_df
@@ -12,26 +11,23 @@
class Change:
"""Assignment of value to a target id in the model.
- ${parameterId}
- The values will override any parameter values specified in the model.
+ ${parameterId}
+ The values will override any parameter values specified in the model.
- ${speciesId}
- If a species ID is provided, it is interpreted as the initial
- concentration/amount of that species and will override the initial
- concentration/amount given in the SBML model or given by
- a preequilibration condition. If NaN is provided for a condition, the result
- of the preequilibration (or initial concentration/amount from the SBML model,
- if no preequilibration is defined) is used.
+ ${speciesId}
+ If a species ID is provided, it is interpreted as the initial
+ concentration/amount of that species and will override the initial
+ concentration/amount given in the SBML model or given by
+ a preequilibration condition. If NaN is provided for a condition, the result
+ of the preequilibration (or initial concentration/amount from the SBML model,
+ if no preequilibration is defined) is used.
- ${compartmentId}
- If a compartment ID is provided, it is interpreted as the initial
- compartment size.
+ ${compartmentId}
+ If a compartment ID is provided, it is interpreted as the initial
+ compartment size.
"""
- def __init__(self,
- target_id: str,
- value: float,
- unit: Optional[str]
- ):
+
+ def __init__(self, target_id: str, value: float, unit: Optional[str]):
self.target_id: str = target_id
self.value: float = value
self.unit: str = unit
@@ -40,31 +36,27 @@ def __init__(self,
class Condition:
"""Collection of assignments with a given id."""
- def __init__(self,
- sid: str,
- name: Optional[str],
- changes: Optional[List[Change]]
- ):
+ def __init__(self, sid: str, name: Optional[str], changes: Optional[list[Change]]):
self.sid: str = sid
self.name: Optional[str] = name
if changes is None:
changes = []
- self.changes: List[Change] = changes
+ self.changes: list[Change] = changes
@classmethod
- def parse_conditions_from_file(cls, conditions_path: Path) -> List[Condition]:
+ def parse_conditions_from_file(cls, conditions_path: Path) -> list[Condition]:
"""Parse conditions from file."""
df = get_condition_df(condition_file=str(conditions_path))
return cls.parse_conditions(df)
@staticmethod
- def parse_conditions(df: pd.DataFrame) -> List[Condition]:
+ def parse_conditions(df: pd.DataFrame) -> list[Condition]:
"""Parse conditions from DataFrame."""
- conditions: List[Condition] = []
+ conditions: list[Condition] = []
columns = df.columns
target_ids = [col for col in columns if col not in {"conditionName"}]
for condition_id, row in df.iterrows():
- changes: List[Change] = []
+ changes: list[Change] = []
for tid in target_ids:
changes.append(
Change(
@@ -76,7 +68,7 @@ def parse_conditions(df: pd.DataFrame) -> List[Condition]:
condition = Condition(
sid=str(condition_id),
name=row["conditionName"] if "conditionName" in columns else None,
- changes=changes
+ changes=changes,
)
conditions.append(condition)
@@ -86,8 +78,13 @@ def parse_conditions(df: pd.DataFrame) -> List[Condition]:
class SimulateSBML:
"""Class for simulating an SBML model."""
- def __init__(self, sbml_path, results_dir: Path,
- absolute_tolerance: float=1E-8, relative_tolerance=1E-8):
+ def __init__(
+ self,
+ sbml_path,
+ results_dir: Path,
+ absolute_tolerance: float = 1e-8,
+        relative_tolerance: float = 1e-8,
+ ):
"""
:param sbml_path: Path to SBML model.
@@ -105,26 +102,26 @@ def __init__(self, sbml_path, results_dir: Path,
# process SBML information for unifying simulations
sbml_data = self.parse_sbml(sbml_path=self.sbml_path)
self.mid: str = sbml_data[0]
- self.species: List[str] = sbml_data[1]
- self.compartments: List[str] = sbml_data[2]
- self.parameters: List[str] = sbml_data[3]
- self.has_only_substance: Dict[str, bool] = sbml_data[4]
- self.species_compartments: Dict[str, str] = sbml_data[5]
- self.species_compartments_names: Dict[str, str] = sbml_data[6]
- self.sid2name: Dict[str, str] = sbml_data[7]
+ self.species: list[str] = sbml_data[1]
+ self.compartments: list[str] = sbml_data[2]
+ self.parameters: list[str] = sbml_data[3]
+ self.has_only_substance: dict[str, bool] = sbml_data[4]
+ self.species_compartments: dict[str, str] = sbml_data[5]
+ self.species_compartments_names: dict[str, str] = sbml_data[6]
+ self.sid2name: dict[str, str] = sbml_data[7]
@staticmethod
def parse_sbml(sbml_path: Path) -> Tuple[Any]:
"""Parses the identifiers."""
doc: libsbml.SBMLDocument = libsbml.readSBMLFromFile(str(sbml_path))
model: libsbml.Model = doc.getModel()
- species: List[str] = set()
- parameters: List[str] = set()
- compartments: List[str] = set()
- has_only_substance: Dict[str, bool] = {}
- species_compartments: Dict[str, str] = {}
- species_compartments_names: Dict[str, str] = {}
- sid2name: Dict[str, str] = {}
+        species: list[str] = []
+        parameters: list[str] = []
+        compartments: list[str] = []
+ has_only_substance: dict[str, bool] = {}
+ species_compartments: dict[str, str] = {}
+ species_compartments_names: dict[str, str] = {}
+ sid2name: dict[str, str] = {}
mid = str(uuid.uuid4())
if model:
@@ -137,7 +134,9 @@ def parse_sbml(sbml_path: Path) -> Tuple[Any]:
compartment_id = s.getCompartment()
species_compartments[sid] = compartment_id
c: libsbml.Compartment = model.getCompartment(compartment_id)
- species_compartments_names[sid] = c.getName() if c.isSetName() else c.getId()
+ species_compartments_names[sid] = (
+ c.getName() if c.isSetName() else c.getId()
+ )
sid2name[sid] = s.getName() if s.isSetName() else s.getId()
for p in model.getListOfParameters():
@@ -160,5 +159,5 @@ def parse_sbml(sbml_path: Path) -> Tuple[Any]:
sid2name,
)
- def simulate_condition(self, condition: Condition, timepoints: List[float]):
+    def simulate_condition(
+        self, condition: Condition, timepoints: list[float]
+    ) -> Optional[pd.DataFrame]:
pass
diff --git a/src/sbmlsim/comparison/simulate_roadrunner.py b/src/sbmlsim/comparison/simulate_roadrunner.py
index 19ea55db..3ff7e3a5 100644
--- a/src/sbmlsim/comparison/simulate_roadrunner.py
+++ b/src/sbmlsim/comparison/simulate_roadrunner.py
@@ -1,5 +1,3 @@
-from typing import List
-
import numpy as np
import pandas as pd
import roadrunner
@@ -36,7 +34,7 @@ def __init__(self, **kwargs):
integrator.setValue("relative_tolerance", self.relative_tolerance)
def simulate_condition(
- self, condition: Condition, timepoints: List[float]
+ self, condition: Condition, timepoints: list[float]
) -> pd.DataFrame:
"""Simulate condition"""
# print(f"simulate condition: {condition.sid}")
diff --git a/src/sbmlsim/data.py b/src/sbmlsim/data.py
index 8d4084a9..d6b07764 100644
--- a/src/sbmlsim/data.py
+++ b/src/sbmlsim/data.py
@@ -3,7 +3,7 @@
from __future__ import annotations
from enum import Enum
from pathlib import Path
-from typing import Dict, List, Optional, Union
+from typing import Optional, Union
import pandas as pd
from pymetadata import log
@@ -48,8 +48,8 @@ def __init__(
task: str = None,
dataset: str = None,
function: str = None,
- variables: Dict[str, "Data"] = None,
- parameters: Dict[str, float] = None,
+ variables: dict[str, "Data"] = None,
+ parameters: dict[str, float] = None,
sid: str = None,
):
"""Construct data."""
@@ -70,8 +70,8 @@ def __init__(
self.task_id: str = task
self.dset_id: str = dataset
self.function: str = function
- self.variables: Dict[str, "Data"] = variables
- self.parameters: Dict[str, float] = parameters
+ self.variables: dict[str, "Data"] = variables
+ self.parameters: dict[str, float] = parameters
self.unit: Optional[str] = None
self._sid = sid
@@ -333,7 +333,7 @@ def __repr__(self) -> str:
@classmethod
def from_df(
- cls, df: pd.DataFrame, ureg: UnitRegistry, udict: Dict[str, str] = None
+ cls, df: pd.DataFrame, ureg: UnitRegistry, udict: dict[str, str] = None
) -> "DataSet":
"""Create DataSet from given pandas.DataFrame.
@@ -360,7 +360,7 @@ def from_df(
udict = {}
# all units from udict and DataFrame
- all_udict: Dict[str, str] = {}
+ all_udict: dict[str, str] = {}
for key in df.columns:
# handle '*_unit columns'
@@ -507,7 +507,7 @@ def unit_conversion(self, key, factor: Quantity) -> None:
# @deprecated
def load_pkdb_dataframe(
- sid, data_path: Union[Path, List[Path]], sep="\t", comment="#", **kwargs
+ sid, data_path: Union[Path, list[Path]], sep="\t", comment="#", **kwargs
) -> pd.DataFrame:
"""Load TSV data from PKDB figure or table id.
@@ -554,7 +554,7 @@ def load_pkdb_dataframe(
# @deprecated
def load_pkdb_dataframes_by_substance(
sid, data_path, **kwargs
-) -> Dict[str, pd.DataFrame]:
+) -> dict[str, pd.DataFrame]:
"""Load dataframes from given PKDB figure/table id split on substance.
The DataFrame is split on the 'substance' key.
@@ -567,7 +567,7 @@ def load_pkdb_dataframes_by_substance(
:param sid:
:param data_path:
:param kwargs:
- :return: Dict[substance, pd.DataFrame]
+ :return: dict[substance, pd.DataFrame]
"""
df = load_pkdb_dataframe(sid=sid, data_path=data_path, na_values=["na"], **kwargs)
frames = {}
diff --git a/src/sbmlsim/examples/example_model_change.py b/src/sbmlsim/examples/example_model_change.py
index e61a9666..d0d93bce 100644
--- a/src/sbmlsim/examples/example_model_change.py
+++ b/src/sbmlsim/examples/example_model_change.py
@@ -19,7 +19,8 @@ def run_model_change_example1():
:return:
"""
- r = RoadrunnerSBMLModel.loda_model_from_source(REPRESSILATOR_SBML)
+ model = RoadrunnerSBMLModel(REPRESSILATOR_SBML)
+ r = model.r
RoadrunnerSBMLModel.set_timecourse_selections(r)
s1 = r.simulate(start=0, end=100, steps=500)
@@ -59,7 +60,7 @@ def run_model_change_example1():
def run_model_clamp1():
"""Using Timecourse simulations for clamps."""
- simulator = SimulatorSerial.from_sbml(REPRESSILATOR_SBML)
+ simulator = SimulatorSerial(REPRESSILATOR_SBML)
# setting a species as boundary condition
tcsim = TimecourseSim(
@@ -114,7 +115,7 @@ def plot_result(xres: XResult, title: str = None) -> None:
plt.show()
# reference simulation
- simulator = SimulatorSerial.from_sbml(REPRESSILATOR_SBML)
+ simulator = SimulatorSerial(REPRESSILATOR_SBML)
tcsim = TimecourseSim(
[
Timecourse(start=0, end=220, steps=300, changes={"X": 10}),
diff --git a/src/sbmlsim/examples/example_scan.py b/src/sbmlsim/examples/example_scan.py
index 82f87a2a..6aee8413 100644
--- a/src/sbmlsim/examples/example_scan.py
+++ b/src/sbmlsim/examples/example_scan.py
@@ -2,6 +2,7 @@
import numpy as np
+from sbmlsim.model import RoadrunnerSBMLModel
from sbmlsim.resources import REPRESSILATOR_SBML
from sbmlsim.simulation import Dimension, ScanSim, Timecourse, TimecourseSim
from sbmlsim.simulator import SimulatorSerial
@@ -10,7 +11,8 @@
def run_scan0d() -> XResult:
"""Perform a parameter 0D scan, i.e., simple simulation"""
- simulator = SimulatorSerial.from_sbml(sbml_path=REPRESSILATOR_SBML)
+ model = RoadrunnerSBMLModel(REPRESSILATOR_SBML)
+ simulator = SimulatorSerial(model)
scan0d = ScanSim(
simulation=TimecourseSim(
@@ -30,7 +32,8 @@ def run_scan1d() -> XResult:
Scanning a single parameter.
"""
- simulator = SimulatorSerial.from_sbml(sbml_path=REPRESSILATOR_SBML)
+ model = RoadrunnerSBMLModel(REPRESSILATOR_SBML)
+ simulator = SimulatorSerial(model)
scan1d = ScanSim(
simulation=TimecourseSim(
@@ -55,7 +58,8 @@ def run_scan1d() -> XResult:
def run_scan2d() -> XResult:
"""Perform a parameter scan"""
- simulator = SimulatorSerial.from_sbml(sbml_path=REPRESSILATOR_SBML)
+ model = RoadrunnerSBMLModel(REPRESSILATOR_SBML)
+ simulator = SimulatorSerial(model)
scan2d = ScanSim(
simulation=TimecourseSim(
@@ -85,7 +89,8 @@ def run_scan2d() -> XResult:
def run_scan1d_distribution() -> XResult:
"""Perform a parameter scan by sampling from a distribution"""
- simulator = SimulatorSerial.from_sbml(sbml_path=REPRESSILATOR_SBML)
+ model = RoadrunnerSBMLModel(REPRESSILATOR_SBML)
+ simulator = SimulatorSerial(model)
scan1d = ScanSim(
simulation=TimecourseSim(
@@ -132,7 +137,7 @@ def run_scan1d_distribution() -> XResult:
plt.show()
da = xres[column]
- for k in range(xres.dims["dim1"]):
+ for k in range(xres.sizes["dim1"]):
# individual timecourses
plt.plot(da.coords["time"], da.isel(dim1=k))
diff --git a/src/sbmlsim/examples/example_sensitivity.py b/src/sbmlsim/examples/example_sensitivity.py
index b3b0eb5e..3fa9a3cb 100644
--- a/src/sbmlsim/examples/example_sensitivity.py
+++ b/src/sbmlsim/examples/example_sensitivity.py
@@ -61,7 +61,7 @@ def run_sensitivity():
:return:
"""
- simulator = SimulatorSerial.from_sbml(REPRESSILATOR_SBML)
+ simulator = SimulatorSerial(REPRESSILATOR_SBML)
# parameter sensitivity
tcsim = TimecourseSim(
diff --git a/src/sbmlsim/examples/example_units.py b/src/sbmlsim/examples/example_units.py
index f52c9230..c4f4db53 100644
--- a/src/sbmlsim/examples/example_units.py
+++ b/src/sbmlsim/examples/example_units.py
@@ -15,7 +15,7 @@
def run_demo_example():
"""Run various timecourses."""
- simulator = SimulatorSerial.from_sbml(DEMO_SBML)
+ simulator = SimulatorSerial(DEMO_SBML)
# units information
uinfo = UnitsInformation.from_sbml(DEMO_SBML)
@@ -54,7 +54,8 @@ def run_demo_example():
# print(tc_sim)
xres: XResult = simulator.run_scan(tc_scan)
- xres.set_units(udict=uinfo.udict)
+ xres.uinfo = uinfo
+
console.log(xres)
# create figure
@@ -75,14 +76,9 @@ def run_demo_example():
yunit = ax_units["yunit"]
for key in ["[e__A]", "[e__B]", "[e__C]", "[c__A]", "[c__B]", "[c__C]"]:
- # => How to better handle units !!!
- # FIXME: correct handling of units; and subsequent conversion
- # Mapping of model to units needed
-
- # FIXME: correct reduction of additional dimensions!
ax.plot(
- Q_(xres["time"].values, xres.units["time"]).to(xunit).m,
- Q_(xres[key].values, xres.units[key]).to(yunit).m,
+ Q_(xres["time"].values, xres.uinfo["time"]).to(xunit).m,
+ Q_(xres[key].values, xres.uinfo[key]).to(yunit).m,
label=f"{key} [{yunit}]",
)
ax.legend()
diff --git a/src/sbmlsim/examples/experiments/covid/experiments/__init__.py b/src/sbmlsim/examples/experiments/covid/experiments/__init__.py
index a4b5a077..1a0d3661 100644
--- a/src/sbmlsim/examples/experiments/covid/experiments/__init__.py
+++ b/src/sbmlsim/examples/experiments/covid/experiments/__init__.py
@@ -1,3 +1,5 @@
from .bertozzi2020 import Bertozzi2020
from .cuadros2020 import Cuadros2020
from .carcione2020 import Carcione2020
+
+__all__ = ["Bertozzi2020", "Cuadros2020", "Carcione2020"]
diff --git a/src/sbmlsim/examples/experiments/covid/experiments/bertozzi2020.py b/src/sbmlsim/examples/experiments/covid/experiments/bertozzi2020.py
index 25268f18..77e667c5 100644
--- a/src/sbmlsim/examples/experiments/covid/experiments/bertozzi2020.py
+++ b/src/sbmlsim/examples/experiments/covid/experiments/bertozzi2020.py
@@ -1,5 +1,4 @@
from pathlib import Path
-from typing import Dict
from sbmlsim.experiment import SimulationExperiment
from sbmlsim.model import AbstractModel
@@ -9,8 +8,8 @@
class Bertozzi2020(SimulationExperiment):
- def models(self) -> Dict[str, AbstractModel]:
- Q_ = self.Q_
+ def models(self) -> dict[str, AbstractModel]:
+ # Q_ = self.Q_
models = {
"model": AbstractModel(
source=Path(__file__).parent
@@ -24,7 +23,7 @@ def models(self) -> Dict[str, AbstractModel]:
}
return models
- def simulations(self) -> Dict[str, TimecourseSim]:
+ def simulations(self) -> dict[str, TimecourseSim]:
Q_ = self.Q_
Ro_CA = 1.9544
@@ -48,14 +47,14 @@ def simulations(self) -> Dict[str, TimecourseSim]:
return tcsims
- def tasks(self) -> Dict[str, Task]:
+ def tasks(self) -> dict[str, Task]:
if self.simulations():
return {
f"task_{key}": Task(model="model", simulation=key)
for key in self.simulations()
}
- def figures(self) -> Dict[str, Figure]:
+ def figures(self) -> dict[str, Figure]:
unit_time = "time"
unit_y = "substance"
diff --git a/src/sbmlsim/examples/experiments/covid/experiments/carcione2020.py b/src/sbmlsim/examples/experiments/covid/experiments/carcione2020.py
index 023bec93..74c3d0fb 100644
--- a/src/sbmlsim/examples/experiments/covid/experiments/carcione2020.py
+++ b/src/sbmlsim/examples/experiments/covid/experiments/carcione2020.py
@@ -1,5 +1,4 @@
from pathlib import Path
-from typing import Dict
from sbmlsim.experiment import SimulationExperiment
from sbmlsim.model import AbstractModel
@@ -9,8 +8,8 @@
class Carcione2020(SimulationExperiment):
- def models(self) -> Dict[str, AbstractModel]:
- Q_ = self.Q_
+ def models(self) -> dict[str, AbstractModel]:
+ # Q_ = self.Q_
models = {
"model": AbstractModel(
source=Path(__file__).parent
@@ -24,7 +23,7 @@ def models(self) -> Dict[str, AbstractModel]:
}
return models
- def tasks(self) -> Dict[str, Task]:
+ def tasks(self) -> dict[str, Task]:
tasks = {}
if self.simulations():
tasks = {
@@ -33,8 +32,8 @@ def tasks(self) -> Dict[str, Task]:
}
return tasks
- def simulations(self) -> Dict[str, TimecourseSim]:
- Q_ = self.Q_
+ def simulations(self) -> dict[str, TimecourseSim]:
+ # Q_ = self.Q_
tcsims = {}
tcsims["sim1"] = TimecourseSim(
@@ -49,7 +48,7 @@ def simulations(self) -> Dict[str, TimecourseSim]:
)
return tcsims
- def figures(self) -> Dict[str, Figure]:
+ def figures(self) -> dict[str, Figure]:
return {**self.figure_plot_1()}
def figure_plot_1(self):
diff --git a/src/sbmlsim/examples/experiments/covid/experiments/cuadros2020.py b/src/sbmlsim/examples/experiments/covid/experiments/cuadros2020.py
index 2a99600a..105f56c1 100644
--- a/src/sbmlsim/examples/experiments/covid/experiments/cuadros2020.py
+++ b/src/sbmlsim/examples/experiments/covid/experiments/cuadros2020.py
@@ -1,5 +1,4 @@
from pathlib import Path
-from typing import Dict
from sbmlsim.experiment import SimulationExperiment
from sbmlsim.model import AbstractModel
@@ -9,8 +8,8 @@
class Cuadros2020(SimulationExperiment):
- def models(self) -> Dict[str, AbstractModel]:
- Q_ = self.Q_
+ def models(self) -> dict[str, AbstractModel]:
+ # Q_ = self.Q_
models = {
"model": AbstractModel(
source=Path(__file__).parent
@@ -24,8 +23,8 @@ def models(self) -> Dict[str, AbstractModel]:
}
return models
- def simulations(self) -> Dict[str, TimecourseSim]:
- Q_ = self.Q_
+ def simulations(self) -> dict[str, TimecourseSim]:
+ # Q_ = self.Q_
tcsims = {}
tcsims["sim1"] = TimecourseSim(
[
@@ -39,14 +38,14 @@ def simulations(self) -> Dict[str, TimecourseSim]:
)
return tcsims
- def tasks(self) -> Dict[str, Task]:
+ def tasks(self) -> dict[str, Task]:
if self.simulations():
return {
f"task_{key}": Task(model="model", simulation=key)
for key in self.simulations()
}
- def figures(self) -> Dict[str, Figure]:
+ def figures(self) -> dict[str, Figure]:
unit_time = "time"
unit_y = "substance"
diff --git a/src/sbmlsim/examples/experiments/covid/omex/download_covid_models.py b/src/sbmlsim/examples/experiments/covid/omex/download_covid_models.py
index 534d60c1..b3fc0c1f 100644
--- a/src/sbmlsim/examples/experiments/covid/omex/download_covid_models.py
+++ b/src/sbmlsim/examples/experiments/covid/omex/download_covid_models.py
@@ -1,17 +1,17 @@
"""Helper module for downloading COVID-19 biomodels."""
+
import json
from pathlib import Path
from pprint import pprint
-from typing import Dict, List
import requests
from sbmlutils.biomodels import download_biomodel_omex
-def query_covid19_biomodels() -> List[str]:
+def query_covid19_biomodels() -> list[str]:
"""Query the COVID-19 biomodels.
- :return List of biomodel identifiers
+ :return list of biomodel identifiers
"""
url = "https://www.ebi.ac.uk/biomodels/search?query=submitter_keywords%3A%22COVID-19%22%20AND%20curationstatus%3A%22Manually%20curated%22&numResults=100&format=json"
response = requests.get(url)
@@ -21,7 +21,7 @@ def query_covid19_biomodels() -> List[str]:
return sorted(biomodel_ids)
-def get_covid19_model(output_dir: Path) -> Dict[str, Path]:
+def get_covid19_model(output_dir: Path) -> dict[str, Path]:
"""Get all manually curated COVID-19 models.
:return dictionary of model ids to Paths.
diff --git a/src/sbmlsim/examples/experiments/covid/simulate.py b/src/sbmlsim/examples/experiments/covid/simulate.py
index 9b9d0139..38c8a513 100644
--- a/src/sbmlsim/examples/experiments/covid/simulate.py
+++ b/src/sbmlsim/examples/experiments/covid/simulate.py
@@ -1,6 +1,7 @@
"""
Run COVID-19 model experiments.
"""
+
from pathlib import Path
from sbmlsim.combine.sedml.parser import SEDMLSerializer
@@ -30,7 +31,7 @@ def run_covid_examples(output_path: Path) -> None:
exp_id = experiment.__name__
# serialize to SED-ML/OMEX archive
omex_path = output_path / f"{exp_id}.omex"
- serializer = SEDMLSerializer(
+ SEDMLSerializer(
exp_class=experiment,
working_dir=output_path / "omex",
sedml_filename=f"{exp_id}_sedml.xml",
diff --git a/src/sbmlsim/examples/experiments/curve_types/experiment.py b/src/sbmlsim/examples/experiments/curve_types/experiment.py
index 658f962c..82f39e4e 100644
--- a/src/sbmlsim/examples/experiments/curve_types/experiment.py
+++ b/src/sbmlsim/examples/experiments/curve_types/experiment.py
@@ -1,8 +1,9 @@
"""
Example simulation experiment.
"""
+
from pathlib import Path
-from typing import Dict, Union
+from typing import Union
from sbmlsim.combine.sedml.report import Report
from sbmlsim.data import Data
@@ -17,13 +18,13 @@
class CurveTypesExperiment(SimulationExperiment):
"""Simulation experiments for curve types."""
- def models(self) -> Dict[str, Union[Path, AbstractModel]]:
+ def models(self) -> dict[str, Union[Path, AbstractModel]]:
"""Define models."""
return {
"model": Path(__file__).parent / "results" / "curve_types_model.xml",
}
- def simulations(self) -> Dict[str, AbstractSim]:
+ def simulations(self) -> dict[str, AbstractSim]:
"""Define simulations."""
tc = TimecourseSim(
timecourses=Timecourse(start=0, end=10, steps=10),
@@ -31,14 +32,14 @@ def simulations(self) -> Dict[str, AbstractSim]:
)
return {"tc": tc}
- def tasks(self) -> Dict[str, Task]:
+ def tasks(self) -> dict[str, Task]:
"""Define tasks."""
tasks = dict()
for model in ["model"]:
tasks[f"task_{model}_tc"] = Task(model=model, simulation="tc")
return tasks
- def data(self) -> Dict[str, Data]:
+ def data(self) -> dict[str, Data]:
"""Define data generators."""
# direct access via id
data = []
@@ -47,7 +48,7 @@ def data(self) -> Dict[str, Data]:
data.append(Data(task=f"task_{model}_tc", index=selection))
return {d.sid: d for d in data}
- def reports(self) -> Dict[str, Report]:
+ def reports(self) -> dict[str, Report]:
"""Define reports."""
report1 = Report(
sid="report1",
@@ -58,7 +59,7 @@ def reports(self) -> Dict[str, Report]:
)
return {report1.sid: report1}
- def figures(self) -> Dict[str, Figure]:
+ def figures(self) -> dict[str, Figure]:
"""Define figure outputs (plots)."""
fig = Figure(
experiment=self,
@@ -72,14 +73,14 @@ def figures(self) -> Dict[str, Figure]:
# FIXME: add helper to easily create figure layouts with plots
p0 = fig.add_subplot(Plot(sid="plot0", name="Timecourse"), row=1, col=1)
- p0.set_title(f"Timecourse")
+ p0.set_title("Timecourse")
p0.set_xaxis("time", unit="min")
p0.set_yaxis("data", unit="mM")
p0.curve(
- x=Data("time", task=f"task_model_tc"),
- y=Data("[S1]", task=f"task_model_tc"),
- label=f"[S1]",
+ x=Data("time", task="task_model_tc"),
+ y=Data("[S1]", task="task_model_tc"),
+ label="[S1]",
)
return {
diff --git a/src/sbmlsim/examples/experiments/demo/demo.py b/src/sbmlsim/examples/experiments/demo/demo.py
index ce61da35..85c4f5f3 100644
--- a/src/sbmlsim/examples/experiments/demo/demo.py
+++ b/src/sbmlsim/examples/experiments/demo/demo.py
@@ -3,8 +3,8 @@
Various scans.
"""
+
from pathlib import Path
-from typing import Dict
import numpy as np
@@ -20,7 +20,7 @@
Timecourse,
TimecourseSim,
)
-from sbmlsim.simulation.sensitivity import ModelSensitivity, SensitivityType
+from sbmlsim.simulation.sensitivity import ModelSensitivity
from sbmlsim.simulator.simulation_serial import SimulatorSerial
from sbmlsim.task import Task
@@ -28,24 +28,24 @@
class DemoExperiment(SimulationExperiment):
"""Simple repressilator experiment."""
- def models(self) -> Dict[str, AbstractModel]:
+ def models(self) -> dict[str, AbstractModel]:
"""Define models."""
return {"model": RoadrunnerSBMLModel(source=DEMO_SBML, ureg=self.ureg)}
- def tasks(self) -> Dict[str, Task]:
+ def tasks(self) -> dict[str, Task]:
"""Define tasks."""
return {
f"task_{key}": Task(model="model", simulation=key)
for key in self.simulations()
}
- def simulations(self) -> Dict[str, AbstractSim]:
+ def simulations(self) -> dict[str, AbstractSim]:
"""Define simulations."""
return {
**self.sim_scans(),
}
- def sim_scans(self) -> Dict[str, AbstractSim]:
+ def sim_scans(self) -> dict[str, AbstractSim]:
Q_ = self.Q_
scan_init = ScanSim(
simulation=TimecourseSim(
@@ -74,7 +74,7 @@ def sim_scans(self) -> Dict[str, AbstractSim]:
"scan_init": scan_init,
}
- def figures(self) -> Dict[str, Figure]:
+ def figures(self) -> dict[str, Figure]:
# print(self._results.keys())
# print(self._results["task_scan_init"])
diff --git a/src/sbmlsim/examples/experiments/glucose/DoseResponseExperiment.html b/src/sbmlsim/examples/experiments/glucose/DoseResponseExperiment.html
index b410c388..95f068cf 100644
--- a/src/sbmlsim/examples/experiments/glucose/DoseResponseExperiment.html
+++ b/src/sbmlsim/examples/experiments/glucose/DoseResponseExperiment.html
@@ -64,7 +64,6 @@
Code
-from typing import Dict
import numpy as np
import pandas as pd
import xarray as xr
@@ -84,14 +83,14 @@ Code
"""Hormone dose-response curves."""
@timeit
- def models(self) -> Dict[str, AbstractModel]:
+ def models(self) -> dict[str, AbstractModel]:
return {
"model1": RoadrunnerSBMLModel(source="model/liver_glucose.xml",
base_path=self.base_path)
}
@timeit
- def datasets(self) -> Dict[str, DataSet]:
+ def datasets(self) -> dict[str, DataSet]:
dsets = {}
# dose-response data for hormones
@@ -152,14 +151,14 @@ Code
return dsets
@timeit
- def tasks(self) -> Dict[str, Task]:
+ def tasks(self) -> dict[str, Task]:
"""Tasks"""
return {
"task_glc_scan": Task(model="model1", simulation="glc_scan")
}
@timeit
- def simulations(self) -> Dict[str, ScanSim]:
+ def simulations(self) -> dict[str, ScanSim]:
"""Scanning dose-response curves of hormones and gamma function.
Vary external glucose concentrations (boundary condition).
@@ -180,7 +179,7 @@ Code
}
@timeit
- def figures(self) -> Dict[str, Figure]:
+ def figures(self) -> dict[str, Figure]:
xunit = "mM"
yunit_hormone = "pmol/l"
yunit_gamma = "dimensionless"
diff --git a/src/sbmlsim/examples/experiments/glucose/DoseResponseExperiment.md b/src/sbmlsim/examples/experiments/glucose/DoseResponseExperiment.md
index f8d7ddf6..fd93184d 100644
--- a/src/sbmlsim/examples/experiments/glucose/DoseResponseExperiment.md
+++ b/src/sbmlsim/examples/experiments/glucose/DoseResponseExperiment.md
@@ -25,7 +25,6 @@
[experiments/dose_response.py](experiments/dose_response.py)
```python
-from typing import Dict
import numpy as np
import pandas as pd
import xarray as xr
@@ -46,14 +45,14 @@ class DoseResponseExperiment(SimulationExperiment):
"""Hormone dose-response curves."""
@timeit
- def models(self) -> Dict[str, AbstractModel]:
+ def models(self) -> dict[str, AbstractModel]:
return {
"model1": RoadrunnerSBMLModel(source="model/liver_glucose.xml",
base_path=self.base_path)
}
@timeit
- def datasets(self) -> Dict[str, DataSet]:
+ def datasets(self) -> dict[str, DataSet]:
dsets = {}
# dose-response data for hormones
@@ -114,14 +113,14 @@ class DoseResponseExperiment(SimulationExperiment):
return dsets
@timeit
- def tasks(self) -> Dict[str, Task]:
+ def tasks(self) -> dict[str, Task]:
"""Tasks"""
return {
"task_glc_scan": Task(model="model1", simulation="glc_scan")
}
@timeit
- def simulations(self) -> Dict[str, ScanSim]:
+ def simulations(self) -> dict[str, ScanSim]:
"""Scanning dose-response curves of hormones and gamma function.
Vary external glucose concentrations (boundary condition).
@@ -142,7 +141,7 @@ class DoseResponseExperiment(SimulationExperiment):
}
@timeit
- def figures(self) -> Dict[str, Figure]:
+ def figures(self) -> dict[str, Figure]:
xunit = "mM"
yunit_hormone = "pmol/l"
yunit_gamma = "dimensionless"
diff --git a/src/sbmlsim/examples/experiments/glucose/experiments/dose_response.py b/src/sbmlsim/examples/experiments/glucose/experiments/dose_response.py
index 3db81b20..305d93ef 100644
--- a/src/sbmlsim/examples/experiments/glucose/experiments/dose_response.py
+++ b/src/sbmlsim/examples/experiments/glucose/experiments/dose_response.py
@@ -1,17 +1,17 @@
from pathlib import Path
-from typing import Dict, Union
+from typing import Union
import numpy as np
import pandas as pd
import xarray as xr
from matplotlib.pyplot import Figure
-from build.lib.sbmlsim.plot.plotting_deprecated_matplotlib import add_data
+
from sbmlsim.data import Data, DataSet, load_pkdb_dataframe
from sbmlsim.experiment import SimulationExperiment
from sbmlsim.model import AbstractModel
-# from sbmlsim.plot.plotting_deprecated_matplotlib import add_data
+from sbmlsim.plot.plotting_deprecated_matplotlib import add_data
from sbmlsim.plot.serialization_matplotlib import plt
from sbmlsim.simulation import Dimension, ScanSim, Timecourse, TimecourseSim
from sbmlsim.task import Task
@@ -23,11 +23,11 @@ class DoseResponseExperiment(SimulationExperiment):
"""Hormone dose-response curves."""
@timeit
- def models(self) -> Dict[str, Union[AbstractModel, Path]]:
+ def models(self) -> dict[str, Union[AbstractModel, Path]]:
return {"model1": Path(__file__).parent.parent / "model" / "liver_glucose.xml"}
@timeit
- def datasets(self) -> Dict[str, DataSet]:
+ def datasets(self) -> dict[str, DataSet]:
dsets = {}
# dose-response data for hormones
@@ -107,12 +107,12 @@ def datasets(self) -> Dict[str, DataSet]:
return dsets
@timeit
- def tasks(self) -> Dict[str, Task]:
+ def tasks(self) -> dict[str, Task]:
"""Tasks"""
return {"task_glc_scan": Task(model="model1", simulation="glc_scan")}
@timeit
- def simulations(self) -> Dict[str, ScanSim]:
+ def simulations(self) -> dict[str, ScanSim]:
"""Scanning dose-response curves of hormones and gamma function.
Vary external glucose concentrations (boundary condition).
@@ -128,14 +128,14 @@ def simulations(self) -> Dict[str, ScanSim]:
)
return {"glc_scan": glc_scan}
- def data(self) -> Dict[str, Data]:
+ def data(self) -> dict[str, Data]:
self.add_selections_data(
selections=["time", "glu", "ins", "epi", "gamma"],
task_ids=["task_glc_scan"],
)
return {}
- def figures_mpl(self) -> Dict[str, Figure]:
+ def figures_mpl(self) -> dict[str, Figure]:
xunit = "mM"
yunit_hormone = "pmol/l"
yunit_gamma = "dimensionless"
diff --git a/src/sbmlsim/examples/experiments/glucose/results/DoseResponseExperiment/DoseResponseExperiment.html b/src/sbmlsim/examples/experiments/glucose/results/DoseResponseExperiment/DoseResponseExperiment.html
index c3989a91..9af44b58 100644
--- a/src/sbmlsim/examples/experiments/glucose/results/DoseResponseExperiment/DoseResponseExperiment.html
+++ b/src/sbmlsim/examples/experiments/glucose/results/DoseResponseExperiment/DoseResponseExperiment.html
@@ -63,7 +63,7 @@ Code
../../../../../home/mkoenig/git/sbmlsim/src/sbmlsim/examples/experiments/glucose/experiments/dose_response.py
from pathlib import Path
-from typing import Dict, Union
+from typing import Union
import numpy as np
import pandas as pd
@@ -85,11 +85,11 @@ Code
"""Hormone dose-response curves."""
@timeit
- def models(self) -> Dict[str, Union[AbstractModel, Path]]:
+ def models(self) -> dict[str, Union[AbstractModel, Path]]:
return {"model1": Path(__file__).parent.parent / "model" / "liver_glucose.xml"}
@timeit
- def datasets(self) -> Dict[str, DataSet]:
+ def datasets(self) -> dict[str, DataSet]:
dsets = {}
# dose-response data for hormones
@@ -169,12 +169,12 @@ Code
return dsets
@timeit
- def tasks(self) -> Dict[str, Task]:
+ def tasks(self) -> dict[str, Task]:
"""Tasks"""
return {"task_glc_scan": Task(model="model1", simulation="glc_scan")}
@timeit
- def simulations(self) -> Dict[str, ScanSim]:
+ def simulations(self) -> dict[str, ScanSim]:
"""Scanning dose-response curves of hormones and gamma function.
Vary external glucose concentrations (boundary condition).
@@ -190,14 +190,14 @@ Code
)
return {"glc_scan": glc_scan}
- def data(self) -> Dict[str, Data]:
+ def data(self) -> dict[str, Data]:
self.add_selections_data(
selections=["time", "glu", "ins", "epi", "gamma"],
task_ids=["task_glc_scan"],
)
return {}
- def figures_mpl(self) -> Dict[str, Figure]:
+ def figures_mpl(self) -> dict[str, Figure]:
xunit = "mM"
yunit_hormone = "pmol/l"
yunit_gamma = "dimensionless"
diff --git a/src/sbmlsim/examples/experiments/initial_assignment/initial_assignment.py b/src/sbmlsim/examples/experiments/initial_assignment/initial_assignment.py
index e55e254b..927e0c8f 100644
--- a/src/sbmlsim/examples/experiments/initial_assignment/initial_assignment.py
+++ b/src/sbmlsim/examples/experiments/initial_assignment/initial_assignment.py
@@ -1,8 +1,8 @@
"""
Example simulation experiment.
"""
+
from pathlib import Path
-from typing import Dict
from sbmlsim.experiment import ExperimentRunner, SimulationExperiment
from sbmlsim.model import AbstractModel, RoadrunnerSBMLModel
@@ -18,7 +18,7 @@
class AssignmentExperiment(SimulationExperiment):
"""Testing initial assignments."""
- def models(self) -> Dict[str, AbstractModel]:
+ def models(self) -> dict[str, AbstractModel]:
return {
"model": RoadrunnerSBMLModel(
source=base_path / "initial_assignment.xml", ureg=self.ureg
@@ -30,7 +30,7 @@ def models(self) -> Dict[str, AbstractModel]:
),
}
- def tasks(self) -> Dict[str, Task]:
+ def tasks(self) -> dict[str, Task]:
tasks = {}
for model_key in self._models.keys():
for sim_key in self._simulations.keys():
@@ -39,7 +39,7 @@ def tasks(self) -> Dict[str, Task]:
)
return tasks
- def simulations(self) -> Dict[str, AbstractSim]:
+ def simulations(self) -> dict[str, AbstractSim]:
Q_ = self.Q_
tcs = {}
tcs["sim1"] = TimecourseSim(
@@ -61,7 +61,7 @@ def simulations(self) -> Dict[str, AbstractSim]:
return tcs
- def figures(self) -> Dict[str, Figure]:
+ def figures(self) -> dict[str, Figure]:
unit_time = "min"
unit_amount = "mmole"
unit_concentration = "mM"
@@ -80,7 +80,6 @@ def figures(self) -> Dict[str, Figure]:
colors = ["black", "blue", "red"]
for ks, sim_key in enumerate(self._simulations.keys()):
for km, model_key in enumerate(self._models.keys()):
-
task_key = f"task_{model_key}_{sim_key}"
kwargs = {
diff --git a/src/sbmlsim/examples/experiments/midazolam/experiments/__init__.py b/src/sbmlsim/examples/experiments/midazolam/experiments/__init__.py
index 77729a5e..9d4777a6 100644
--- a/src/sbmlsim/examples/experiments/midazolam/experiments/__init__.py
+++ b/src/sbmlsim/examples/experiments/midazolam/experiments/__init__.py
@@ -1,10 +1,8 @@
-from typing import Dict, Tuple
from collections import namedtuple
from sbmlsim.experiment import SimulationExperiment
from sbmlsim.model import AbstractModel
from sbmlsim.simulation import TimecourseSim
-from sbmlsim.simulation.sensitivity import ModelSensitivity
from sbmlsim.task import Task
from ...midazolam import MODEL_PATH
@@ -16,7 +14,7 @@
class MidazolamSimulationExperiment(SimulationExperiment):
"""Base class for all GlucoseSimulationExperiments."""
- def models(self) -> Dict[str, AbstractModel]:
+ def models(self) -> dict[str, AbstractModel]:
Q_ = self.Q_
models = {
"model": AbstractModel(
@@ -32,7 +30,7 @@ def models(self) -> Dict[str, AbstractModel]:
}
return models
- def tasks(self) -> Dict[str, Task]:
+ def tasks(self) -> dict[str, Task]:
if self.simulations():
return {
f"task_{key}": Task(model="model", simulation=key)
@@ -41,7 +39,7 @@ def tasks(self) -> Dict[str, Task]:
else:
return {}
- def simulations(self, simulations=None) -> Dict[str, TimecourseSim]:
+ def simulations(self, simulations=None) -> dict[str, TimecourseSim]:
if simulations is None:
return simulations
diff --git a/src/sbmlsim/examples/experiments/midazolam/experiments/kupferschmidt1995.py b/src/sbmlsim/examples/experiments/midazolam/experiments/kupferschmidt1995.py
index 1cd1810e..a5118e4d 100644
--- a/src/sbmlsim/examples/experiments/midazolam/experiments/kupferschmidt1995.py
+++ b/src/sbmlsim/examples/experiments/midazolam/experiments/kupferschmidt1995.py
@@ -1,5 +1,3 @@
-from typing import Dict, List
-
from sbmlsim.data import DataSet, load_pkdb_dataframes_by_substance
from sbmlsim.fit import FitData, FitMapping
from sbmlsim.plot import Axis, Figure
@@ -9,7 +7,7 @@
class Kupferschmidt1995(MidazolamSimulationExperiment):
- def datasets(self) -> Dict[str, DataSet]:
+ def datasets(self) -> dict[str, DataSet]:
dsets = {}
for fig_id in ["Fig1", "Fig2"]:
dframes = load_pkdb_dataframes_by_substance(
@@ -43,12 +41,12 @@ def datasets(self) -> Dict[str, DataSet]:
]
return dsets
- def simulations(self) -> Dict[str, TimecourseSim]:
+ def simulations(self) -> dict[str, TimecourseSim]:
return super(Kupferschmidt1995, self).simulations(
simulations={**self.simulations_mid()}
)
- def simulations_mid(self) -> Dict[str, TimecourseSim]:
+ def simulations_mid(self) -> dict[str, TimecourseSim]:
"""Kupferschmidt1995
- midazolam, iv, 5 [mg]
@@ -89,7 +87,7 @@ def simulations_mid(self) -> Dict[str, TimecourseSim]:
return tcsims
- def fit_mappings(self) -> Dict[str, FitMapping]:
+ def fit_mappings(self) -> dict[str, FitMapping]:
# fit mapping: which data maps on which simulation
fit_dict = {
"fm_mid_iv": {
@@ -131,7 +129,7 @@ def fit_mappings(self) -> Dict[str, FitMapping]:
)
return mappings
- def figures(self) -> Dict[str, Figure]:
+ def figures(self) -> dict[str, Figure]:
return {**self.figure_mid()}
def figure_mid(self):
diff --git a/src/sbmlsim/examples/experiments/midazolam/experiments/mandema1992.py b/src/sbmlsim/examples/experiments/midazolam/experiments/mandema1992.py
index 0d3441b9..a00284cf 100644
--- a/src/sbmlsim/examples/experiments/midazolam/experiments/mandema1992.py
+++ b/src/sbmlsim/examples/experiments/midazolam/experiments/mandema1992.py
@@ -1,5 +1,3 @@
-from typing import Dict, List
-
from sbmlsim.data import DataSet, load_pkdb_dataframes_by_substance
from sbmlsim.fit import FitData, FitMapping
from sbmlsim.plot import Axis, Figure
@@ -11,7 +9,7 @@
class Mandema1992(MidazolamSimulationExperiment):
"""Mandema1992."""
- def datasets(self) -> Dict[str, DataSet]:
+ def datasets(self) -> dict[str, DataSet]:
dsets = {}
for fig_id in ["Fig1A", "Fig2A", "Fig3A"]:
dframes = load_pkdb_dataframes_by_substance(
@@ -30,10 +28,10 @@ def datasets(self) -> Dict[str, DataSet]:
dsets[f"{fig_id}_{substance}"] = dset
return dsets
- def simulations(self) -> Dict[str, TimecourseSim]:
+ def simulations(self) -> dict[str, TimecourseSim]:
return {**self.simulation_mid()}
- def simulation_mid(self) -> Dict[str, TimecourseSim]:
+ def simulation_mid(self) -> dict[str, TimecourseSim]:
"""Mandema1992
- midazolam, iv, 0.1 [mg/kg] (infusion over 15 min)
@@ -113,7 +111,7 @@ def simulation_mid(self) -> Dict[str, TimecourseSim]:
return tcsims
- def fit_mappings(self) -> Dict[str, FitMapping]:
+ def fit_mappings(self) -> dict[str, FitMapping]:
# fit mapping: which data maps on which simulation
fit_dict = {
"fm1": {"ref": "Fig1A_midazolam", "obs": "task_mid_iv", "yid": "[Cve_mid]"},
@@ -152,7 +150,7 @@ def fit_mappings(self) -> Dict[str, FitMapping]:
)
return mappings
- def figures(self) -> Dict[str, Figure]:
+ def figures(self) -> dict[str, Figure]:
return {
**self.figure_mid(),
}
diff --git a/src/sbmlsim/examples/experiments/midazolam/fitting_example.py b/src/sbmlsim/examples/experiments/midazolam/fitting_example.py
index a62f2013..16172ddf 100644
--- a/src/sbmlsim/examples/experiments/midazolam/fitting_example.py
+++ b/src/sbmlsim/examples/experiments/midazolam/fitting_example.py
@@ -6,7 +6,6 @@
op_mandema1992,
op_mid1oh_iv,
)
-from sbmlsim.fit.analysis import OptimizationAnalysis
from sbmlsim.fit.optimization import OptimizationProblem
from sbmlsim.fit.options import (
OptimizationAlgorithmType,
@@ -14,7 +13,6 @@
WeightingCurvesType,
WeightingPointsType,
)
-from sbmlsim.fit.result import OptimizationResult
from sbmlsim.fit.runner import run_optimization
@@ -34,16 +32,14 @@ def fitting_example(op_factory: Callable, size: int = 10, n_cores: int = 10) ->
("lsq", OptimizationAlgorithmType.LEAST_SQUARE),
("de", OptimizationAlgorithmType.DIFFERENTIAL_EVOLUTION),
]:
-
op: OptimizationProblem = op_factory()
fit_path = MIDAZOLAM_PATH / "results_fit" / op.opid / alg_key
if not fit_path.exists():
fit_path.mkdir(parents=True)
- opt_result: OptimizationResult = run_optimization(
+ run_optimization(
problem=op, algorithm=algorithm, size=size, n_cores=n_cores, **fit_kwargs
)
-
# OptimizationAnalysis(opt_result=opt_result, op=op)
diff --git a/src/sbmlsim/examples/experiments/repressilator/repressilator.py b/src/sbmlsim/examples/experiments/repressilator/repressilator.py
index 090b02cc..654746f5 100644
--- a/src/sbmlsim/examples/experiments/repressilator/repressilator.py
+++ b/src/sbmlsim/examples/experiments/repressilator/repressilator.py
@@ -1,8 +1,9 @@
"""
Example simulation experiment.
"""
+
from pathlib import Path
-from typing import Dict, Union
+from typing import Union
from sbmlsim.combine.sedml.report import Report
@@ -21,7 +22,7 @@
class RepressilatorExperiment(SimulationExperiment):
"""Simple repressilator experiment."""
- def models(self) -> Dict[str, Union[Path, AbstractModel]]:
+ def models(self) -> dict[str, Union[Path, AbstractModel]]:
"""Define models."""
return {
"model1": REPRESSILATOR_SBML,
@@ -34,7 +35,7 @@ def models(self) -> Dict[str, Union[Path, AbstractModel]]:
),
}
- def simulations(self) -> Dict[str, AbstractSim]:
+ def simulations(self) -> dict[str, AbstractSim]:
"""Define simulations."""
tc = TimecourseSim(
timecourses=Timecourse(start=0, end=1000, steps=1000),
@@ -42,14 +43,14 @@ def simulations(self) -> Dict[str, AbstractSim]:
)
return {"tc": tc}
- def tasks(self) -> Dict[str, Task]:
+ def tasks(self) -> dict[str, Task]:
"""Define tasks."""
tasks = dict()
for model in ["model1", "model2"]:
tasks[f"task_{model}_tc"] = Task(model=model, simulation="tc")
return tasks
- def data(self) -> Dict[str, Data]:
+ def data(self) -> dict[str, Data]:
"""Define data generators."""
# direct access via id
data = []
@@ -79,7 +80,7 @@ def data(self) -> Dict[str, Data]:
pprint(data_dict)
return data_dict
- def figures(self) -> Dict[str, Figure]:
+ def figures(self) -> dict[str, Figure]:
"""Define figure outputs (plots)."""
fig = Figure(
experiment=self,
@@ -96,29 +97,29 @@ def figures(self) -> Dict[str, Figure]:
Plot(sid="plot2", name="Postprocessing"), row=2, col=1, col_span=2
)
- p0.set_title(f"Timecourse")
+ p0.set_title("Timecourse")
p0.set_xaxis("time", unit="second")
p0.set_yaxis("data", unit="dimensionless")
- p1.set_title(f"Preprocessing")
+ p1.set_title("Preprocessing")
p1.set_xaxis("time", unit="second")
p1.set_yaxis("data", unit="dimensionless")
colors = ["tab:red", "tab:green", "tab:blue"]
for k, sid in enumerate(["PX", "PY", "PZ"]):
p0.curve(
- x=Data("time", task=f"task_model1_tc"),
- y=Data(f"{sid}", task=f"task_model1_tc"),
+ x=Data("time", task="task_model1_tc"),
+ y=Data(f"{sid}", task="task_model1_tc"),
label=f"{sid}",
color=colors[k],
)
p1.curve(
- x=Data("time", task=f"task_model2_tc"),
- y=Data(f"{sid}", task=f"task_model2_tc"),
+ x=Data("time", task="task_model2_tc"),
+ y=Data(f"{sid}", task="task_model2_tc"),
label=f"{sid}",
color=colors[k],
linewidth=2.0,
)
- p2.set_title(f"Postprocessing")
+ p2.set_title("Postprocessing")
p2.set_xaxis("data", unit="dimensionless")
p2.set_yaxis("data", unit="dimensionless")
@@ -136,7 +137,7 @@ def figures(self) -> Dict[str, Figure]:
fig.sid: fig,
}
- def reports(self) -> Dict[str, Report]:
+ def reports(self) -> dict[str, Report]:
"""Define reports.
HashMap of DataGenerators.
diff --git a/src/sbmlsim/examples/experiments/repressilator/repressilator_scans.py b/src/sbmlsim/examples/experiments/repressilator/repressilator_scans.py
index 3fa053dd..28020297 100644
--- a/src/sbmlsim/examples/experiments/repressilator/repressilator_scans.py
+++ b/src/sbmlsim/examples/experiments/repressilator/repressilator_scans.py
@@ -1,8 +1,9 @@
"""
Example simulation experiment.
"""
+
from pathlib import Path
-from typing import Dict, Union
+from typing import Union
import numpy as np
@@ -25,7 +26,7 @@
class RepressilatorScanExperiment(SimulationExperiment):
"""Simple repressilator experiment."""
- def models(self) -> Dict[str, Union[Path, AbstractModel]]:
+ def models(self) -> dict[str, Union[Path, AbstractModel]]:
return {
"model1": REPRESSILATOR_SBML,
"model2": AbstractModel(
@@ -33,20 +34,20 @@ def models(self) -> Dict[str, Union[Path, AbstractModel]]:
),
}
- def simulations(self) -> Dict[str, AbstractSim]:
+ def simulations(self) -> dict[str, AbstractSim]:
return {
**self.sim_scans(),
# **self.sim_sensitivities(),
}
- def tasks(self) -> Dict[str, Task]:
+ def tasks(self) -> dict[str, Task]:
tasks = dict()
for model in ["model1", "model2"]:
for sim_key in self.simulations():
tasks[f"task_{model}_{sim_key}"] = Task(model=model, simulation=sim_key)
return tasks
- def sim_scans(self) -> Dict[str, AbstractSim]:
+ def sim_scans(self) -> dict[str, AbstractSim]:
Q_ = self.Q_
unit_data = "dimensionless"
tc = TimecourseSim(
@@ -82,20 +83,20 @@ def sim_scans(self) -> Dict[str, AbstractSim]:
),
],
)
- scan3d = ScanSim(
- simulation=tc,
- dimensions=[
- Dimension(
- "dim1", changes={"X": Q_(np.linspace(0, 10, num=5), unit_data)}
- ),
- Dimension(
- "dim2", changes={"Y": Q_(np.linspace(0, 10, num=5), unit_data)}
- ),
- Dimension(
- "dim3", changes={"Z": Q_(np.linspace(0, 10, num=5), unit_data)}
- ),
- ],
- )
+ # scan3d = ScanSim(
+ # simulation=tc,
+ # dimensions=[
+ # Dimension(
+ # "dim1", changes={"X": Q_(np.linspace(0, 10, num=5), unit_data)}
+ # ),
+ # Dimension(
+ # "dim2", changes={"Y": Q_(np.linspace(0, 10, num=5), unit_data)}
+ # ),
+ # Dimension(
+ # "dim3", changes={"Z": Q_(np.linspace(0, 10, num=5), unit_data)}
+ # ),
+ # ],
+ # )
return {
"tc": tc,
@@ -104,7 +105,7 @@ def sim_scans(self) -> Dict[str, AbstractSim]:
# "scan3d": scan3d,
}
- def data(self) -> Dict[str, Data]:
+ def data(self) -> dict[str, Data]:
"""Data used for plotting and analysis.
Generates promises for results.
@@ -147,7 +148,7 @@ def data(self) -> Dict[str, Data]:
return {d.sid: d for d in data}
- def figures(self) -> Dict[str, Figure]:
+ def figures(self) -> dict[str, Figure]:
unit_time = "min"
unit_data = "dimensionless"
diff --git a/src/sbmlsim/examples/experiments/repressilator/results/sbmlsim/RepressilatorExperiment/RepressilatorExperiment.html b/src/sbmlsim/examples/experiments/repressilator/results/sbmlsim/RepressilatorExperiment/RepressilatorExperiment.html
index ed13599b..6eab1d55 100644
--- a/src/sbmlsim/examples/experiments/repressilator/results/sbmlsim/RepressilatorExperiment/RepressilatorExperiment.html
+++ b/src/sbmlsim/examples/experiments/repressilator/results/sbmlsim/RepressilatorExperiment/RepressilatorExperiment.html
@@ -64,7 +64,7 @@ Code
Example simulation experiment.
"""
from pathlib import Path
-from typing import Dict, Type, Union
+from typing import Type, Union
from sbmlsim.combine.sedml.parser import SEDMLSerializer
from sbmlsim.data import Data
@@ -83,7 +83,7 @@ Code
class RepressilatorExperiment(SimulationExperiment):
"""Simple repressilator experiment."""
- def models(self) -> Dict[str, Union[Path, AbstractModel]]:
+ def models(self) -> dict[str, Union[Path, AbstractModel]]:
"""Define models."""
return {
"model1": MODEL_REPRESSILATOR,
@@ -96,7 +96,7 @@ Code
),
}
- def simulations(self) -> Dict[str, AbstractSim]:
+ def simulations(self) -> dict[str, AbstractSim]:
"""Define simulations."""
tc = TimecourseSim(
timecourses=Timecourse(start=0, end=1000, steps=1000),
@@ -104,14 +104,14 @@ Code
)
return {"tc": tc}
- def tasks(self) -> Dict[str, Task]:
+ def tasks(self) -> dict[str, Task]:
"""Define tasks."""
tasks = dict()
for model in ["model1", "model2"]:
tasks[f"task_{model}_tc"] = Task(model=model, simulation="tc")
return tasks
- def data(self) -> Dict[str, Data]:
+ def data(self) -> dict[str, Data]:
"""Define data generators."""
# direct access via id
data = []
@@ -141,7 +141,7 @@ Code
pprint(data_dict)
return data_dict
- def figures(self) -> Dict[str, Figure]:
+ def figures(self) -> dict[str, Figure]:
"""Define figure outputs (plots)."""
fig = Figure(
experiment=self,
@@ -198,7 +198,7 @@ Code
fig.sid: fig,
}
- def reports(self) -> Dict[str, Report]:
+ def reports(self) -> dict[str, Report]:
"""Define reports.
HashMap of DataGenerators.
diff --git a/src/sbmlsim/experiment/__init__.py b/src/sbmlsim/experiment/__init__.py
index 3284ff93..33f20087 100644
--- a/src/sbmlsim/experiment/__init__.py
+++ b/src/sbmlsim/experiment/__init__.py
@@ -6,3 +6,10 @@
)
from .runner import ExperimentRunner
from sbmlsim.report.experiment_report import ExperimentReport
+
+__all__ = [
+ "SimulationExperiment",
+ "ExperimentResult",
+ "ExperimentRunner",
+ "ExperimentReport",
+]
diff --git a/src/sbmlsim/experiment/experiment.py b/src/sbmlsim/experiment/experiment.py
index ebe21bb5..e7ab8422 100644
--- a/src/sbmlsim/experiment/experiment.py
+++ b/src/sbmlsim/experiment/experiment.py
@@ -6,7 +6,7 @@
from copy import deepcopy
from dataclasses import dataclass
from pathlib import Path
-from typing import Dict, Iterable, List, Union
+from typing import Iterable, Union
from pymetadata import log
@@ -87,15 +87,15 @@ def __init__(
self.settings = kwargs
# init variables
- self._models: Dict[str, RoadrunnerSBMLModel] = {}
- self._data: Dict[str, Data] = {}
- self._datasets: Dict[str, DataSet] = {}
- self._fit_mappings: Dict[str, FitMapping] = {}
- self._simulations: Dict[str, AbstractSim] = {}
- self._tasks: Dict[str, Task] = {}
- self._figures: Dict[str, Figure] = {}
- self._results: Dict[str, XResult] = {}
- self._reports: Dict[str, Dict[str, str]] = {}
+ self._models: dict[str, RoadrunnerSBMLModel] = {}
+ self._data: dict[str, Data] = {}
+ self._datasets: dict[str, DataSet] = {}
+ self._fit_mappings: dict[str, FitMapping] = {}
+ self._simulations: dict[str, AbstractSim] = {}
+ self._tasks: dict[str, Task] = {}
+ self._figures: dict[str, Figure] = {}
+ self._results: dict[str, XResult] = {}
+ self._reports: dict[str, dict[str, str]] = {}
def initialize(self) -> None:
"""Initialize SimulationExperiment.
@@ -107,7 +107,7 @@ def initialize(self) -> None:
"""
try:
# initialized from the outside
- # self._models: Dict[str, AbstractModel] = self.models()
+ # self._models: dict[str, AbstractModel] = self.models()
self._datasets.update(self.datasets())
self._simulations.update(self.simulations())
self._tasks.update(self.tasks())
@@ -138,35 +138,35 @@ def __str__(self) -> str:
]
return "\n".join(info)
- def models(self) -> Dict[str, Union[AbstractModel, Path]]:
+ def models(self) -> dict[str, Union[AbstractModel, Path]]:
"""Define model definitions.
The child classes fill out the information.
"""
return dict()
- def datasets(self) -> Dict[str, DataSet]:
+ def datasets(self) -> dict[str, DataSet]:
"""Define dataset definitions (experimental data).
The child classes fill out the information.
"""
return dict()
- def simulations(self) -> Dict[str, AbstractSim]:
+ def simulations(self) -> dict[str, AbstractSim]:
"""Define simulation definitions.
The child classes fill out the information.
"""
return dict()
- def tasks(self) -> Dict[str, Task]:
+ def tasks(self) -> dict[str, Task]:
"""Define task definitions.
The child classes fill out the information.
"""
return dict()
- def data(self) -> Dict[str, Data]:
+ def data(self) -> dict[str, Data]:
"""Define DataGenerators including functions.
This determines the selection in the model.
@@ -176,7 +176,7 @@ def data(self) -> Dict[str, Data]:
"""
return dict()
- def figures(self) -> Dict[str, Figure]:
+ def figures(self) -> dict[str, Figure]:
"""Figure definition.
Selections accessed in figures and analyses must be registered beforehand
@@ -187,7 +187,7 @@ def figures(self) -> Dict[str, Figure]:
"""
return {}
- def figures_mpl(self) -> Dict[str, FigureMPL]:
+ def figures_mpl(self) -> dict[str, FigureMPL]:
"""Matplotlib figure definition.
Selections accessed in figures and analyses must be registered beforehand
@@ -198,7 +198,7 @@ def figures_mpl(self) -> Dict[str, FigureMPL]:
"""
return {}
- def fit_mappings(self) -> Dict[str, FitMapping]:
+ def fit_mappings(self) -> dict[str, FitMapping]:
"""Define fit mappings.
Mapping reference data on observables.
@@ -207,7 +207,7 @@ def fit_mappings(self) -> Dict[str, FitMapping]:
"""
return dict()
- def reports(self) -> Dict[str, Dict[str, str]]:
+ def reports(self) -> dict[str, dict[str, str]]:
"""Define reports.
Reports are defined by a hashmap label:Data.
@@ -248,7 +248,7 @@ def add_selections_data(
# --- RESULTS ---------------------------------------------------------------------
@property
- def results(self) -> Dict[str, XResult]:
+ def results(self) -> dict[str, XResult]:
"""Access simulation results.
Results are mapped on tasks based on the task_ids. E.g.
@@ -381,7 +381,7 @@ def run(
output_path: Path = None,
show_figures: bool = True,
save_results: bool = False,
- figure_formats: List[str] = None,
+ figure_formats: list[str] = None,
reduced_selections: bool = True,
) -> "ExperimentResult":
"""Execute given experiment and store results."""
@@ -440,7 +440,7 @@ def _run_tasks(self, simulator, reduced_selections: bool = True):
self._results = dict()
# get all tasks for given model
- model_tasks: Dict[str, List[str]] = defaultdict(list)
+ model_tasks: dict[str, list[str]] = defaultdict(list)
for task_key, task in self._tasks.items():
model_tasks[task.model_id].append(task_key)
@@ -571,7 +571,7 @@ def save_results(self, results_path: Path) -> None:
result.to_tsv(results_path / f"{self.sid}_{rkey}.tsv")
@timeit
- def create_mpl_figures(self) -> Dict[str, Union[FigureMPL, Figure]]:
+ def create_mpl_figures(self) -> dict[str, Union[FigureMPL, Figure]]:
"""Create matplotlib figures."""
mpl_figures = {}
for fig_key, fig in self._figures.items():
@@ -585,7 +585,7 @@ def create_mpl_figures(self) -> Dict[str, Union[FigureMPL, Figure]]:
return mpl_figures
@timeit
- def show_mpl_figures(self, mpl_figures: Dict[str, FigureMPL]) -> None:
+ def show_mpl_figures(self, mpl_figures: dict[str, FigureMPL]) -> None:
"""Show matplotlib figures."""
for _, fig_mpl in mpl_figures.items():
# see https://stackoverflow.com/questions/23141452/difference-between-plt-draw-and-plt-show-in-matplotlib/23141491#23141491
@@ -596,9 +596,9 @@ def show_mpl_figures(self, mpl_figures: Dict[str, FigureMPL]) -> None:
def save_mpl_figures(
self,
results_path: Path,
- mpl_figures: Dict[str, FigureMPL],
- figure_formats: List[str] = None,
- ) -> Dict[str, List[Path]]:
+ mpl_figures: dict[str, FigureMPL],
+ figure_formats: list[str] = None,
+ ) -> dict[str, list[Path]]:
"""Save matplotlib figures."""
if figure_formats is None:
# default to SVG output
@@ -614,7 +614,7 @@ def save_mpl_figures(
return paths
@classmethod
- def close_mpl_figures(cls, mpl_figures: Dict[str, FigureMPL]):
+ def close_mpl_figures(cls, mpl_figures: dict[str, FigureMPL]):
"""Close matplotlib figures."""
for _, fig_mpl in mpl_figures.items():
plt.close(fig_mpl)
@@ -627,7 +627,7 @@ class ExperimentResult:
experiment: SimulationExperiment
output_path: Path
- def to_dict(self) -> Dict:
+ def to_dict(self) -> dict:
"""Conversion to dictionary.
Used in serialization and required for reports.
diff --git a/src/sbmlsim/experiment/runner.py b/src/sbmlsim/experiment/runner.py
index 40b8bf38..b7bea376 100644
--- a/src/sbmlsim/experiment/runner.py
+++ b/src/sbmlsim/experiment/runner.py
@@ -9,7 +9,7 @@
"""
from pathlib import Path
-from typing import Dict, List, Optional, Tuple, Type, Union, Set
+from typing import Optional, Tuple, Type, Union
from pymetadata import log
from pymetadata.console import console
@@ -31,7 +31,7 @@ class ExperimentRunner(object):
def __init__(
self,
experiment_classes: Union[
- Type[SimulationExperiment], List[Type[SimulationExperiment]]
+ Type[SimulationExperiment], list[Type[SimulationExperiment]]
],
base_path: Path,
data_path: Path,
@@ -54,7 +54,7 @@ def __init__(
# initialize experiments
self.base_path = base_path
self.data_path = data_path
- self.experiments: Dict[str, SimulationExperiment] = {}
+ self.experiments: dict[str, SimulationExperiment] = {}
self.models = {}
self.simulator: Optional[SimulatorSerial] = None
@@ -76,9 +76,9 @@ def set_simulator(self, simulator: SimulatorSerial) -> None:
def initialize(
self,
experiment_classes: Union[
- List[Type[SimulationExperiment]],
+ list[Type[SimulationExperiment]],
Tuple[Type[SimulationExperiment]],
- Set[Type[SimulationExperiment]],
+ set[Type[SimulationExperiment]],
],
**kwargs,
):
@@ -128,9 +128,9 @@ def run_experiments(
output_path: Path,
show_figures: bool = False,
save_results: bool = False,
- figure_formats: List[str] = None,
+ figure_formats: list[str] = None,
reduced_selections: bool = True,
- ) -> List[ExperimentResult]:
+ ) -> list[ExperimentResult]:
"""Run the experiments."""
if not output_path.exists():
output_path.mkdir(parents=True)
@@ -155,10 +155,10 @@ def run_experiments(
def run_experiments(
- experiments: Union[Type[SimulationExperiment], List[Type[SimulationExperiment]]],
+ experiments: Union[Type[SimulationExperiment], list[Type[SimulationExperiment]]],
output_path: Path,
base_path: Path = None,
- data_path: Union[List[Path], Tuple[Path], Optional[Path]] = None,
+ data_path: Union[list[Path], Tuple[Path], Optional[Path]] = None,
) -> Path:
"""Run simulation experiments."""
if not isinstance(experiments, (list, tuple)):
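Signatures above such as `figure_formats: list[str] = None` rely on the `if figure_formats is None:` guard at runtime; the more precise annotation is `Optional[list[str]]`. A hedged sketch of the None-default pattern (the function name `save_figures` is illustrative, not part of sbmlsim's API):

```python
from typing import Optional

def save_figures(figure_formats: Optional[list[str]] = None) -> list[str]:
    """Illustrative sketch of the None-default pattern used above."""
    if figure_formats is None:
        # default to SVG output, mirroring save_mpl_figures
        figure_formats = ["svg"]
    return [f"figure.{fmt}" for fmt in figure_formats]

assert save_figures() == ["figure.svg"]
assert save_figures(["png", "pdf"]) == ["figure.png", "figure.pdf"]
```

Using `None` as the sentinel avoids the classic mutable-default pitfall of `figure_formats: list[str] = ["svg"]`.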
diff --git a/src/sbmlsim/fit/__init__.py b/src/sbmlsim/fit/__init__.py
index 2139a98d..8526af2c 100644
--- a/src/sbmlsim/fit/__init__.py
+++ b/src/sbmlsim/fit/__init__.py
@@ -5,3 +5,10 @@
"""
from .objects import FitMapping, FitData, FitExperiment, FitParameter
+
+__all__ = [
+ "FitMapping",
+ "FitData",
+ "FitExperiment",
+ "FitParameter",
+]
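The `__all__` list added above both documents the package's public API and restricts what `from sbmlsim.fit import *` exports. A self-contained sketch using a synthetic module (the module name `fit_sketch` and its functions are hypothetical):

```python
import sys
import types

# Build a throwaway module with one public and one private name.
mod = types.ModuleType("fit_sketch")
exec(
    "def public(): return 1\n"
    "def _private(): return 2\n"
    "__all__ = ['public']\n",
    mod.__dict__,
)
sys.modules["fit_sketch"] = mod

ns: dict = {}
exec("from fit_sketch import *", ns)

# Star-import honors __all__: only the listed name is exported.
assert "public" in ns
assert "_private" not in ns
```

Explicit imports like `from .objects import FitMapping` are unaffected; `__all__` only governs the star-import surface.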
diff --git a/src/sbmlsim/fit/analysis.py b/src/sbmlsim/fit/analysis.py
index a450a39e..1378fe9c 100644
--- a/src/sbmlsim/fit/analysis.py
+++ b/src/sbmlsim/fit/analysis.py
@@ -2,7 +2,7 @@
import webbrowser
from pathlib import Path
-from typing import Any, Dict, List, Tuple
+from typing import Any, Tuple
import matplotlib
import numpy as np
@@ -44,7 +44,7 @@ def __init__(
show_titles: bool = True,
residual: ResidualType = None,
loss_function: LossFunctionType = None,
- weighting_curves: List[WeightingCurvesType] = None,
+ weighting_curves: list[WeightingCurvesType] = None,
weighting_points: WeightingPointsType = None,
variable_step_size: bool = True,
absolute_tolerance: float = 1e-6,
@@ -100,7 +100,7 @@ def __init__(
self.op: OptimizationProblem = op # type: ignore
- def run(self, mpl_parameters: Dict[str, Any] = None) -> None:
+ def run(self, mpl_parameters: dict[str, Any] = None) -> None:
"""Execute complete analysis.
This creates all plots and reports.
@@ -152,7 +152,6 @@ def run(self, mpl_parameters: Dict[str, Any] = None) -> None:
parameters.update(mpl_parameters)
plt.rcParams.update(parameters)
-
# optimization traces
self.plot_traces(
path=plots_dir / f"traces.{self.image_format}",
@@ -372,7 +371,9 @@ def plot_fit(self, output_dir: Path, x: np.ndarray) -> None:
markersize=10,
)
# plot simulation
- ax.plot(x_obs.values, y_obs.values, "-", color="blue", label="observable")
+ ax.plot(
+ x_obs.values, y_obs.values, "-", color="blue", label="observable"
+ )
xdelta = np.max(x_ref) - np.min(x_ref)
ax.set_xlim(
@@ -450,7 +451,9 @@ def plot_fit_residual(self, output_dir: Path, x: np.ndarray) -> None:
)
# prediction
- ax.plot(x_obs.values, y_obs.values, "-", color="blue", label="observable")
+ ax.plot(
+ x_obs.values, y_obs.values, "-", color="blue", label="observable"
+ )
ax.plot(x_ref, y_obsip, "o", color="blue", label="interpolation")
# reference data
@@ -467,7 +470,6 @@ def plot_fit_residual(self, output_dir: Path, x: np.ndarray) -> None:
)
for ax in (ax3, ax4):
-
ax.plot(
x_ref,
res_weighted2,
@@ -563,7 +565,6 @@ def _datapoints_df(self, x: np.ndarray) -> pd.DataFrame:
],
)
-
kwargs_scatter = {
"markersize": "10",
"markeredgecolor": "black",
@@ -588,11 +589,11 @@ def plot_datapoint_scatter(self, x: np.ndarray, path: Path):
ax.fill_between(
[min_dp, max_dp, max_dp, min_dp],
- [min_dp/10, max_dp/10, max_dp*10, min_dp*10],
+ [min_dp / 10, max_dp / 10, max_dp * 10, min_dp * 10],
color="lightgray",
)
ax.plot([min_dp, max_dp], [min_dp, max_dp], color="black")
- for bfactor in [1/10, 10]:
+ for bfactor in [1 / 10, 10]:
ax.plot(
[min_dp, max_dp],
[min_dp * bfactor, max_dp * bfactor],
@@ -606,14 +607,14 @@ def plot_datapoint_scatter(self, x: np.ndarray, path: Path):
dp.y_ref[dp.experiment == experiment].values,
dp.y_obs[dp.experiment == experiment].values,
# yerr=dp.y_ref_err,
- **self.kwargs_scatter
+ **self.kwargs_scatter,
)
# annotations
for k in range(len(dp)):
# plot labels for datapoints far away
- ratio = dp.y_ref.values[k]/dp.y_obs.values[k]
- if (ratio > 10 or ratio < 1/10):
+ ratio = dp.y_ref.values[k] / dp.y_obs.values[k]
+ if ratio > 10 or ratio < 1 / 10:
ax.annotate(
dp.experiment.values[k],
xy=(
@@ -647,7 +648,7 @@ def plot_residual_scatter(self, x: np.ndarray, path: Path):
ax.plot(
xdata[dp.experiment == experiment].values,
ydata[dp.experiment == experiment].values,
- **self.kwargs_scatter
+ **self.kwargs_scatter,
)
min_res = np.min(ydata)
@@ -690,7 +691,9 @@ def plot_residual_scatter(self, x: np.ndarray, path: Path):
alpha=0.7,
)
ax.set_xlabel("Experiment $y_{i,k}$", fontweight="bold")
- ax.set_ylabel("Relative residual $\\frac{f(x_{i,k})-y_{i,k}}{y_{i,k}}$", fontweight="bold")
+ ax.set_ylabel(
+ "Relative residual $\\frac{f(x_{i,k})-y_{i,k}}{y_{i,k}}$", fontweight="bold"
+ )
ax.set_xscale("log")
# ax.set_yscale("log")
ax.grid()
@@ -778,7 +781,7 @@ def plot_residual_boxplot(self, x: np.ndarray, path: Path) -> None:
ax.boxplot(
# position,
box_data,
- vert=False
+ vert=False,
# color="black",
# alpha=0.8
)
@@ -919,7 +922,11 @@ def plot_traces(self, path: Path) -> None:
# plot final optimization cost of trace
if len(df_run.cost.values) > 0:
ax.plot(
- len(df_run) - 1, df_run.cost.values[-1], "o", color="black", alpha=0.8
+ len(df_run) - 1,
+ df_run.cost.values[-1],
+ "o",
+ color="black",
+ alpha=0.8,
)
ax.set_xlabel("Optimization step")
@@ -994,9 +1001,9 @@ def plot_correlation(
alpha=0.5,
)
# optimal values
- ax.scatter(
-     df[pidx], df[pidy], c=df.cost, s=size, alpha=0.9, cmap="jet"
- ),
+ ax.scatter(
+     df[pidx], df[pidy], c=df.cost, s=size, alpha=0.9, cmap="jet"
+ )
ax.plot(
self.optres.xopt[kx],
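The `plot_datapoint_scatter` method above annotates points whose reference/observed ratio falls outside a tenfold band. The same check in plain numpy (the sample values are made up for illustration):

```python
import numpy as np

y_ref = np.array([1.0, 50.0, 0.3])
y_obs = np.array([1.2, 2.0, 0.35])

ratio = y_ref / y_obs
# points outside the band [1/10, 10] get labeled in the scatter plot
outliers = (ratio > 10) | (ratio < 1 / 10)

assert outliers.tolist() == [False, True, False]
```

Vectorizing the comparison matches the per-point loop above but avoids Python-level iteration when only the mask is needed.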
diff --git a/src/sbmlsim/fit/helpers.py b/src/sbmlsim/fit/helpers.py
index 637a4479..04ae6069 100644
--- a/src/sbmlsim/fit/helpers.py
+++ b/src/sbmlsim/fit/helpers.py
@@ -1,4 +1,5 @@
"""Helper functions for fitting."""
+
from pathlib import Path
from sbmlsim.fit import FitExperiment, FitMapping
@@ -8,9 +9,8 @@
from sbmlsim.experiment import ExperimentRunner, SimulationExperiment
-from sbmlsim.fit import FitExperiment, FitMapping, FitData
-from typing import Dict, List, Type, Union, Callable, Iterable, Tuple, Any
+from typing import Type, Union, Callable, Iterable, Tuple, Any
from sbmlsim.fit.objects import MappingMetaData
@@ -18,17 +18,21 @@
def filtered_fit_experiments(
- experiment_classes: List[Type[SimulationExperiment]],
+ experiment_classes: list[Type[SimulationExperiment]],
metadata_filters: Union[Callable, Iterable[Callable]],
base_path: Path,
data_path: Path,
-) -> Tuple[Dict[str, List[FitExperiment]], pd.DataFrame]:
+) -> Tuple[dict[str, list[FitExperiment]], pd.DataFrame]:
"""Fit experiments based on MappingMetaData.
:param experiment_classes: list of SimulationExperiment class definitions
:param metadata_filters: callable or iterable of callables used to filter fit mappings
"""
- filters = [metadata_filters] if isinstance(metadata_filters, Callable) else metadata_filters
+ filters = (
+ [metadata_filters]
+ if isinstance(metadata_filters, Callable)
+ else metadata_filters
+ )
# instantiate objects for filtering of fit mappings
runner = ExperimentRunner(
@@ -37,8 +41,8 @@ def filtered_fit_experiments(
data_path=data_path,
)
- fit_experiments: Dict[str, List[FitExperiment]] = {}
- all_info: List[Dict] = []
+ fit_experiments: dict[str, list[FitExperiment]] = {}
+ all_info: list[dict] = []
for k, experiment_name in enumerate(runner.experiments):
# print(experiment_name)
@@ -48,7 +52,6 @@ def filtered_fit_experiments(
# filter mappings by metadata
mappings = []
for fm_key, fit_mapping in experiment.fit_mappings().items():
-
# tests all the filters
accept = True
for filter in filters:
@@ -63,11 +66,11 @@ def filtered_fit_experiments(
try:
metadata: MappingMetaData = fit_mapping.metadata
yid = "__".join(fit_mapping.observable.y.sid.split("__")[1:])
- info: Dict[str, Any] = {
+ info: dict[str, Any] = {
"experiment": experiment_name,
"fm_key": fm_key,
"yid": yid,
- **metadata.to_dict()
+ **metadata.to_dict(),
}
all_info.append(info)
except Exception as err:
@@ -91,8 +94,9 @@ def filtered_fit_experiments(
return fit_experiments, df
+
def f_fitexp(
- experiment_classes: List[Type[SimulationExperiment]],
+ experiment_classes: list[Type[SimulationExperiment]],
metadata_filters: Union[Callable, Iterable[Callable]],
base_path: Path,
data_path: Path,
@@ -113,6 +117,7 @@ def filter_empty(fit_mapping_key: str, fit_mapping: FitMapping) -> bool:
"""Return all experiments/mappings."""
return True
+
def filter_outlier(fit_mapping_key: str, fit_mapping: FitMapping) -> bool:
"""Return non outlier experiments."""
return not fit_mapping.metadata.outlier
diff --git a/src/sbmlsim/fit/objects.py b/src/sbmlsim/fit/objects.py
index 5d673f21..51a46c61 100644
--- a/src/sbmlsim/fit/objects.py
+++ b/src/sbmlsim/fit/objects.py
@@ -5,7 +5,7 @@
import math
from dataclasses import dataclass
from pathlib import Path
-from typing import Any, Callable, Dict, Iterable, List, Optional, Union
+from typing import Any, Callable, Iterable, Optional, Union
import numpy as np
import pandas as pd
@@ -28,10 +28,10 @@ class FitExperiment:
def __init__(
self,
experiment: Callable,
- mappings: List[str] = None,
- weights: Union[float, List[float]] = None,
+ mappings: list[str] = None,
+ weights: Union[float, list[float]] = None,
use_mapping_weights: bool = False,
- fit_parameters: Dict[str, List["FitParameter"]] = None,
+ fit_parameters: dict[str, list["FitParameter"]] = None,
exclude: bool = False,
):
"""Initialize simulation experiment used in a fitting.
@@ -72,12 +72,12 @@ def __init__(
)
@property
- def weights(self) -> List[float]:
+ def weights(self) -> list[float]:
"""Weights of fit mappings."""
return self._weights
@weights.setter
- def weights(self, weights: Union[float, List[float]] = None) -> None:
+ def weights(self, weights: Union[float, list[float]] = None) -> None:
"""Set weights for mappings in fit experiment."""
weights_processed = None
@@ -117,7 +117,7 @@ def weights(self, weights: Union[float, List[float]] = None) -> None:
self._weights = weights_processed
@staticmethod
- def reduce(fit_experiments: Iterable["FitExperiment"]) -> List["FitExperiment"]:
+ def reduce(fit_experiments: Iterable["FitExperiment"]) -> list["FitExperiment"]:
"""Collect fit mappings of multiple FitExperiments if these can be combined."""
red_experiments = {}
for fit_exp in fit_experiments:
diff --git a/src/sbmlsim/fit/optimization.py b/src/sbmlsim/fit/optimization.py
index 486960e3..9cd57b57 100644
--- a/src/sbmlsim/fit/optimization.py
+++ b/src/sbmlsim/fit/optimization.py
@@ -5,7 +5,7 @@
from copy import deepcopy
from dataclasses import dataclass
from pathlib import Path
-from typing import Any, Dict, List, Optional, Set, Tuple, Union
+from typing import Any, Optional, Tuple, Union
import numpy as np
import pandas as pd
@@ -52,8 +52,8 @@ class OptimizationProblem(ObjectJSONEncoder):
def __init__(
self,
opid: str,
- fit_experiments: List[FitExperiment],
- fit_parameters: List[FitParameter],
+ fit_experiments: list[FitExperiment],
+ fit_parameters: list[FitParameter],
base_path: Path = None,
data_path: Path = None,
):
@@ -99,24 +99,24 @@ def __init__(
self.weighting_curves: Optional[WeightingCurvesType] = None
self.weighting_points: Optional[WeightingPointsType] = None
- self.experiment_keys: List[str] = []
- self.mapping_keys: List[str] = []
- self.xid_observable: List[str] = []
- self.yid_observable: List[str] = []
- self.x_references: List[Any] = []
- self.y_references: List[Any] = []
- self.y_errors: List[Any] = []
- self.y_errors_type: List[str] = []
- self.weights: List[
+ self.experiment_keys: list[str] = []
+ self.mapping_keys: list[str] = []
+ self.xid_observable: list[str] = []
+ self.yid_observable: list[str] = []
+ self.x_references: list[Any] = []
+ self.y_references: list[Any] = []
+ self.y_errors: list[Any] = []
+ self.y_errors_type: list[str] = []
+ self.weights: list[
Any
] = [] # total weights for points (data points and curve weights)
- self.weights_points: List[Any] = [] # weights for data points based on errors
- self.weights_curves: List[Any] = [] # user defined weights per mapping/curve
+ self.weights_points: list[Any] = [] # weights for data points based on errors
+ self.weights_curves: list[Any] = [] # user defined weights per mapping/curve
- self.models: List[Any] = []
+ self.models: list[Any] = []
self.xmodel: np.ndarray = np.empty(shape=(len(self.pids)))
- self.simulations: List[Any] = []
- self.selections: List[Any] = []
+ self.simulations: list[Any] = []
+ self.selections: list[Any] = []
def __repr__(self) -> str:
"""Get representation."""
@@ -138,7 +138,7 @@ def __str__(self) -> str:
info.extend([f"\t{p}" for p in self.parameters])
return "\n".join(info)
- def to_dict(self) -> Dict[str, Any]:
+ def to_dict(self) -> dict[str, Any]:
"""Convert to dictionary."""
d = dict()
for key in ["opid", "fit_experiments", "parameters", "base_path", "data_path"]:
@@ -194,7 +194,7 @@ def initialize(
self,
residual: Optional[ResidualType],
loss_function: LossFunctionType,
- weighting_curves: List[WeightingCurvesType],
+ weighting_curves: list[WeightingCurvesType],
weighting_points: Optional[WeightingPointsType],
variable_step_size: bool = True,
relative_tolerance: float = 1e-6,
@@ -218,7 +218,7 @@ def initialize(
weighting_curves = []
if isinstance(weighting_curves, WeightingCurvesType):
raise TypeError(
- f"weighting_curves must be a 'List[WeightingCurvesType]', "
+ f"weighting_curves must be a 'list[WeightingCurvesType]', "
f"but '{type(weighting_curves)}' given."
)
@@ -250,7 +250,7 @@ def initialize(
# FIXME: selections should be based on fit mappings; this will reduce
# selections and speed up calculations
- selections_set: Set[str] = set()
+ selections_set: set[str] = set()
# for d in sim_experiment._data.values(): # type: Data
# if d.is_task():
# selections_set.add(d.selection)
@@ -473,7 +473,7 @@ def initialize(
pid_value = model.changes[pid]
self.xmodel[k] = pid_value
- selections: List[str] = list(selections_set)
+ selections: list[str] = list(selections_set)
# lookup maps
self.models.append(model)
@@ -526,7 +526,7 @@ def optimize(
sampling: SamplingType = SamplingType.UNIFORM,
seed: Optional[int] = None,
**kwargs,
- ) -> Tuple[List[optimize.OptimizeResult], List]:
+ ) -> Tuple[list[optimize.OptimizeResult], list]:
"""Run parameter optimization.
To change the weighting or handling of residuals reinitialize the optimization
@@ -570,7 +570,7 @@ def _optimize_single(
x0: np.ndarray = None,
algorithm=OptimizationAlgorithmType.LEAST_SQUARE,
**kwargs,
- ) -> Tuple[scipy.optimize.OptimizeResult, List]:
+ ) -> Tuple[scipy.optimize.OptimizeResult, list]:
"""Run single optimization with x0 start values.
:param x0: parameter start vector (important for deterministic optimizers)
diff --git a/src/sbmlsim/fit/pet/amici_example.py b/src/sbmlsim/fit/pet/amici_example.py
index 10ef49b8..dc86e4c8 100644
--- a/src/sbmlsim/fit/pet/amici_example.py
+++ b/src/sbmlsim/fit/pet/amici_example.py
@@ -6,11 +6,13 @@
"""
+import numpy as np
import amici
-sbml_importer = amici.SbmlImporter('pravastatin_body_all_flat.xml')
-model_name = 'model_pravastatin'
-model_dir = 'model_pravastatin'
+sbml_importer = amici.SbmlImporter("pravastatin_body_all_flat.xml")
+
+model_name = "model_pravastatin"
+model_dir = "model_pravastatin"
sbml_importer.sbml2amici(model_name, model_dir)
# load the model module
@@ -24,8 +26,8 @@
solver.setAbsoluteTolerance(1e-10)
# set timepoints
-import numpy as np
-timepoints = np.linspace(0, 24*60, 10)
+
+timepoints = np.linspace(0, 24 * 60, 10)
model.setTimepoints(timepoints)
rdata = amici.runAmiciSimulation(model, solver)
@@ -37,4 +39,3 @@
# FIXME: How to make this fast and access parameters, compartments, variables, ...?
# https://amici.readthedocs.io/en/latest/ExampleSteadystate.html
# amici.pandas: rdata
-
diff --git a/src/sbmlsim/fit/pet/boehm_JProteomeRes2014/benchmark_import.py b/src/sbmlsim/fit/pet/boehm_JProteomeRes2014/benchmark_import.py
index 5f73ae8e..a2089b81 100644
--- a/src/sbmlsim/fit/pet/boehm_JProteomeRes2014/benchmark_import.py
+++ b/src/sbmlsim/fit/pet/boehm_JProteomeRes2014/benchmark_import.py
@@ -1,7 +1,4 @@
import h5py
-import numpy as np
-import pandas as pd
-import scipy as sp
class DataProvider:
@@ -12,40 +9,40 @@ def get_edata(self):
pass
def get_timepoints(self):
- with h5py.File(self.h5_file, 'r') as f:
- timepoints = f['/amiciOptions/ts'][:]
+ with h5py.File(self.h5_file, "r") as f:
+ timepoints = f["/amiciOptions/ts"][:]
return timepoints
def get_pscales(self):
- with h5py.File(self.h5_file, 'r') as f:
- pscale = f['/amiciOptions/pscale'][:]
+ with h5py.File(self.h5_file, "r") as f:
+ pscale = f["/amiciOptions/pscale"][:]
return pscale
def get_fixed_parameters(self):
- with h5py.File(self.h5_file, 'r') as f:
- fixed_parameters = f['/fixedParameters/k'][:]
+ with h5py.File(self.h5_file, "r") as f:
+ fixed_parameters = f["/fixedParameters/k"][:]
fixed_parameters = fixed_parameters[0]
return fixed_parameters
def get_fixed_parameters_names(self):
- with h5py.File(self.h5_file, 'r') as f:
- fixed_parameters_names = f['/fixedParameters/parameterNames'][:]
+ with h5py.File(self.h5_file, "r") as f:
+ fixed_parameters_names = f["/fixedParameters/parameterNames"][:]
return fixed_parameters_names
def get_initial_states(self):
pass
def get_measurements(self):
- with h5py.File(self.h5_file, 'r') as f:
- measurements = f['/measurements/y'][:]
+ with h5py.File(self.h5_file, "r") as f:
+ measurements = f["/measurements/y"][:]
return measurements
def get_ysigma(self):
- with h5py.File(self.h5_file, 'r') as f:
- ysigma = f['/measurements/ysigma'][:]
+ with h5py.File(self.h5_file, "r") as f:
+ ysigma = f["/measurements/ysigma"][:]
return ysigma
def get_observableNames(self):
- with h5py.File(self.h5_file, 'r') as f:
- observable_names = f['/measurements/observableNames']
+ with h5py.File(self.h5_file, "r") as f:
+ observable_names = f["/measurements/observableNames"]
return observable_names
diff --git a/src/sbmlsim/fit/pet/petab_example_01.py b/src/sbmlsim/fit/pet/petab_example_01.py
index 3aacb4ec..f2dbac86 100644
--- a/src/sbmlsim/fit/pet/petab_example_01.py
+++ b/src/sbmlsim/fit/pet/petab_example_01.py
@@ -1,23 +1,14 @@
-"""Example using pypesto, petab, amici.
-
-
-
-"""
-import os.path
-
-import amici
+"""Example using pypesto, petab, amici."""
# import matplotlib and increase image resolution
import matplotlib as mpl
import numpy as np
-import petab
import pypesto
import pypesto.optimize as optimize
import pypesto.petab
-import pypesto.visualize as visualize
-mpl.rcParams['figure.dpi'] = 300
+mpl.rcParams["figure.dpi"] = 300
# define objective function
diff --git a/src/sbmlsim/fit/pet/petab_example_02.py b/src/sbmlsim/fit/pet/petab_example_02.py
index ffe60c2a..29785a97 100644
--- a/src/sbmlsim/fit/pet/petab_example_02.py
+++ b/src/sbmlsim/fit/pet/petab_example_02.py
@@ -1,28 +1,17 @@
-"""Example using pypesto, petab, amici.
-
-
-
-"""
-import os.path
-
-import amici
+"""Example using pypesto, petab, amici."""
# import matplotlib and increase image resolution
import matplotlib as mpl
-import numpy as np
-import petab
import pypesto
import pypesto.optimize as optimize
import pypesto.petab
-import pypesto.visualize as visualize
-
-mpl.rcParams['figure.dpi'] = 300
+mpl.rcParams["figure.dpi"] = 300
# directory of the PEtab problem
-petab_yaml = './boehm_JProteomeRes2014/Boehm_JProteomeRes2014.yaml'
+petab_yaml = "./boehm_JProteomeRes2014/Boehm_JProteomeRes2014.yaml"
importer = pypesto.petab.PetabImporter.from_yaml(petab_yaml)
problem = importer.create_problem()
@@ -48,7 +37,6 @@
engine = pypesto.engine.MultiProcessEngine()
# Optimize
-result = optimize.minimize(problem=problem,
- optimizer=optimizer,
- engine=engine,
- n_starts=100)
+result = optimize.minimize(
+ problem=problem, optimizer=optimizer, engine=engine, n_starts=100
+)
diff --git a/src/sbmlsim/fit/result.py b/src/sbmlsim/fit/result.py
index 356af58c..9621c84b 100644
--- a/src/sbmlsim/fit/result.py
+++ b/src/sbmlsim/fit/result.py
@@ -1,8 +1,9 @@
"""Result of optimization."""
+
import datetime
import uuid
from pathlib import Path
-from typing import Dict, Iterable, List, Optional, Set, Tuple, Union
+from typing import Iterable, Optional, Union
import numpy as np
import pandas as pd
@@ -23,8 +24,8 @@ class OptimizationResult(ObjectJSONEncoder):
def __init__(
self,
parameters: Iterable[FitParameter],
- fits: List[OptimizeResult],
- trajectories: List,
+ fits: list[OptimizeResult],
+ trajectories: list,
sid: str = None,
):
"""Initialize optimization result.
@@ -46,15 +47,15 @@ def __init__(
self.sid = (
"{:%Y%m%d_%H%M%S}".format(datetime.datetime.now()) + f"__{uuid_str[:5]}"
)
- self.parameters: List[FitParameter] = []
+ self.parameters: list[FitParameter] = []
for p in parameters:
- if isinstance(p, Dict):
+ if isinstance(p, dict):
p = FitParameter(**p)
self.parameters.append(p)
- self.fits: List[OptimizeResult] = []
+ self.fits: list[OptimizeResult] = []
for fit in fits:
- if isinstance(fit, Dict):
+ if isinstance(fit, dict):
fit = OptimizeResult(**fit)
self.fits.append(fit)
@@ -100,7 +101,7 @@ def __str__(self) -> str:
return info
@staticmethod
- def combine(opt_results: List["OptimizationResult"]) -> "OptimizationResult":
+ def combine(opt_results: list["OptimizationResult"]) -> "OptimizationResult":
"""Combine results from multiple parameter fitting experiments."""
# FIXME: check that the parameters are fitting
parameters = opt_results[0].parameters
@@ -134,11 +135,11 @@ def xopt(self) -> np.ndarray:
return values
@property
- def xopt_fit_parameters(self) -> List[FitParameter]:
+ def xopt_fit_parameters(self) -> list[FitParameter]:
"""Optimal parameters as Fit parameters."""
return self._x_as_fit_parameters(x=self.xopt)
- def _x_as_fit_parameters(self, x) -> List[FitParameter]:
+ def _x_as_fit_parameters(self, x) -> list[FitParameter]:
"""Convert numerical parameter vector to fit parameters."""
fit_pars = []
for k, p in enumerate(self.parameters):
@@ -154,7 +155,7 @@ def _x_as_fit_parameters(self, x) -> List[FitParameter]:
return fit_pars
@staticmethod
- def process_traces(parameters: List[FitParameter], trajectories):
+ def process_traces(parameters: list[FitParameter], trajectories):
"""Process the optimization results."""
results = []
pids = [p.pid for p in parameters]
@@ -172,7 +173,7 @@ def process_traces(parameters: List[FitParameter], trajectories):
return df
@staticmethod
- def process_fits(parameters: List[FitParameter], fits: List[OptimizeResult]):
+ def process_fits(parameters: list[FitParameter], fits: list[OptimizeResult]):
"""Process the optimization results."""
results = []
pids = [p.pid for p in parameters]
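The `isinstance(p, Dict)` to `isinstance(p, dict)` change in result.py above is more than cosmetic: `typing.Dict` in `isinstance()` is deprecated, and subscripted builtin generics are not usable there at all. A small sketch (the `payload` dict is illustrative):

```python
payload = {"pid": "p1", "lower_bound": 10}

# The builtin class works directly in isinstance checks.
assert isinstance(payload, dict)

# Subscripted generics are for annotations only; isinstance rejects them.
try:
    isinstance(payload, dict[str, int])
    raised = False
except TypeError:
    raised = True
assert raised
```

So annotations use `dict[str, int]`, while runtime checks must use the bare `dict`.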
diff --git a/src/sbmlsim/fit/runner.py b/src/sbmlsim/fit/runner.py
index 509f9f34..11e1a0a1 100644
--- a/src/sbmlsim/fit/runner.py
+++ b/src/sbmlsim/fit/runner.py
@@ -20,9 +20,10 @@
they don't contend for other lower-level (OS) resources. That's the "multiprocessing"
part.
"""
+
import multiprocessing
import os
-from typing import List, Optional
+from typing import Optional
import numpy as np
from pymetadata import log
@@ -51,7 +52,7 @@ def run_optimization(
algorithm: OptimizationAlgorithmType = OptimizationAlgorithmType.LEAST_SQUARE,
residual: ResidualType = ResidualType.ABSOLUTE,
loss_function: LossFunctionType = LossFunctionType.LINEAR,
- weighting_curves: List[WeightingCurvesType] = None,
+ weighting_curves: list[WeightingCurvesType] = None,
weighting_points: WeightingPointsType = WeightingPointsType.NO_WEIGHTING,
seed: Optional[int] = None,
variable_step_size: bool = True,
@@ -156,7 +157,7 @@ def run_optimization(
# worker pool
with multiprocessing.Pool(processes=n_cores) as pool:
- opt_results: List[OptimizationResult] = pool.map(worker, args_list)
+ opt_results: list[OptimizationResult] = pool.map(worker, args_list)
# combine simulation results
opt_result = OptimizationResult.combine(opt_results)
@@ -182,7 +183,7 @@ def _run_optimization_serial(
algorithm: OptimizationAlgorithmType = OptimizationAlgorithmType.LEAST_SQUARE,
residual: ResidualType = ResidualType.ABSOLUTE,
loss_function: LossFunctionType = LossFunctionType.LINEAR,
- weighting_curves: List[WeightingCurvesType] = None,
+ weighting_curves: list[WeightingCurvesType] = None,
weighting_points: WeightingPointsType = WeightingPointsType.NO_WEIGHTING,
seed: Optional[int] = None,
variable_step_size: bool = True,
diff --git a/src/sbmlsim/fit/sampling.py b/src/sbmlsim/fit/sampling.py
index a428c74a..bfcb21e6 100644
--- a/src/sbmlsim/fit/sampling.py
+++ b/src/sbmlsim/fit/sampling.py
@@ -1,6 +1,6 @@
"""Sampling of parameter values."""
+
from enum import Enum
-from typing import Dict, Iterable, List, Sized
import numpy as np
import pandas as pd
@@ -29,7 +29,7 @@ class SamplingType(Enum):
def create_samples(
- parameters: List[FitParameter],
+ parameters: list[FitParameter],
size,
sampling=SamplingType.LOGUNIFORM,
seed=None,
@@ -130,12 +130,12 @@ def plot_samples(samples):
def example_sampling() -> None:
"""Run sampling exa how to use sampling."""
- parameters: List[FitParameter] = [
+ parameters: list[FitParameter] = [
FitParameter(pid="p1", lower_bound=10, upper_bound=1e4),
FitParameter(pid="p2", lower_bound=1, upper_bound=1e3),
FitParameter(pid="p3", lower_bound=1, upper_bound=1e3),
]
- samples: Dict[str, pd.DataFrame] = {}
+ samples: dict[str, pd.DataFrame] = {}
for sampling in [
SamplingType.UNIFORM,
SamplingType.UNIFORM_LHS,
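Latin hypercube sampling (`SamplingType.UNIFORM_LHS` above) places exactly one sample in each of `size` equal strata per parameter. A minimal numpy sketch, assuming `(lower_bound, upper_bound)` pairs like those of `FitParameter` (the function `lhs_samples` is illustrative, not sbmlsim's implementation):

```python
import numpy as np

def lhs_samples(bounds: list[tuple[float, float]], size: int, seed: int = 42) -> np.ndarray:
    """Minimal Latin hypercube sketch: one sample per stratum per dimension."""
    rng = np.random.default_rng(seed)
    samples = np.empty((size, len(bounds)))
    for d, (lb, ub) in enumerate(bounds):
        # jitter within each of `size` equal strata, with strata shuffled per dimension
        u = (rng.permutation(size) + rng.random(size)) / size
        samples[:, d] = lb + u * (ub - lb)
    return samples

x = lhs_samples([(10, 1e4), (1, 1e3)], size=5)
assert x.shape == (5, 2)
assert np.all(x[:, 0] >= 10) and np.all(x[:, 0] <= 1e4)
```

Compared with plain uniform sampling, the stratification guarantees coverage of the full range in every dimension even for small sample counts.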
diff --git a/src/sbmlsim/interpolation/interpolation.py b/src/sbmlsim/interpolation/interpolation.py
index d46b03c4..db589e07 100644
--- a/src/sbmlsim/interpolation/interpolation.py
+++ b/src/sbmlsim/interpolation/interpolation.py
@@ -8,16 +8,16 @@
The functionality is very useful, but only if this can be applied to existing
models in a simple manner.
"""
+
from pathlib import Path
-from typing import Any, List, Optional, Tuple, Union
+from typing import Tuple, Union
import libsbml
import pandas as pd
from pymetadata import log
from sbmlutils.io.sbml import write_sbml
-from sbmlutils.validation import validate_doc
-
+from sbmlutils.validation import validate_doc, ValidationOptions
logger = log.get_logger(__name__)
@@ -122,10 +122,10 @@ def _formula_cubic_spline(x: pd.Series, y: pd.Series) -> str:
from the spline interpolation.
"""
# calculate spline coefficients
- coeffs: List[Tuple[float]] = Interpolator._natural_spline_coeffs(x, y)
+ coeffs: list[Tuple[float]] = Interpolator._natural_spline_coeffs(x, y)
# create piecewise terms
- items: List[str] = []
+ items: list[str] = []
for k in range(len(x) - 1):
x1 = x.iloc[k]
x2 = x.iloc[k + 1]
@@ -140,7 +140,7 @@ def _formula_cubic_spline(x: pd.Series, y: pd.Series) -> str:
return "piecewise({})".format(", ".join(items))
@staticmethod
- def _natural_spline_coeffs(X: pd.Series, Y: pd.Series) -> List[Tuple[float]]:
+ def _natural_spline_coeffs(X: pd.Series, Y: pd.Series) -> list[Tuple[float]]:
"""Calculate natural spline coefficients.
Calculation of coefficients for
@@ -185,7 +185,7 @@ def _natural_spline_coeffs(X: pd.Series, Y: pd.Series) -> List[Tuple[float]]:
b[j] = (a[j + 1] - a[j]) / h[j] - (h[j] * (c[j + 1] + 2 * c[j])) / 3
d[j] = (c[j + 1] - c[j]) / (3 * h[j])
# store coefficients
- coeffs: List[Tuple[float]] = []
+ coeffs: list[Tuple[float]] = []
for i in range(n):
coeffs.append((a[i], b[i], c[i], d[i])) # type: ignore
return coeffs
@@ -256,7 +256,7 @@ def __init__(self, data: pd.DataFrame, method: str = "linear"):
self.model: libsbml.Model = None
self.data: pd.DataFrame = data
self.method: str = method
- self.interpolators: List[Interpolator] = []
+ self.interpolators: list[Interpolator] = []
self.validate_data()
@@ -282,7 +282,7 @@ def validate_data(self) -> None:
# first column has to be ascending (times)
def is_sorted(df: pd.DataFrame, colname: str) -> bool:
- return bool(pd.Index(df[colname]).is_monotonic)
+ return bool(pd.Index(df[colname]).is_monotonic_increasing)
if not is_sorted(self.data, colname=self.data.columns[0]):
logger.warning("First column should contain ascending values.")
@@ -328,7 +328,7 @@ def _create_sbml(self) -> None:
Interpolation.add_interpolator_to_model(interpolator, self.model)
# validation of SBML document
- validate_doc(self.doc, units_consistency=False)
+ validate_doc(self.doc, options=ValidationOptions(units_consistency=False))
def _init_sbml_model(self) -> None:
"""Create and initialize the SBML model."""
@@ -346,13 +346,13 @@ def _init_sbml_model(self) -> None:
self.model = model
@staticmethod
- def create_interpolators(data: pd.DataFrame, method: str) -> List[Interpolator]:
+ def create_interpolators(data: pd.DataFrame, method: str) -> list[Interpolator]:
"""Create all interpolators for the given data set.
The columns 1, ... (Ncol-1) are interpolated against
column 0.
"""
- interpolators: List[Interpolator] = []
+ interpolators: list[Interpolator] = []
columns = data.columns
time = data[columns[0]]
for k in range(1, len(columns)):
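The `is_sorted` change in interpolation.py above tracks a pandas API change: `Index.is_monotonic` was deprecated and removed in pandas 2.0 in favor of the explicit `is_monotonic_increasing`. A runnable sketch of the updated helper (the sample frame is illustrative):

```python
import pandas as pd

def is_sorted(df: pd.DataFrame, colname: str) -> bool:
    # `Index.is_monotonic` no longer exists in pandas >= 2.0;
    # `is_monotonic_increasing` is the explicit replacement.
    return bool(pd.Index(df[colname]).is_monotonic_increasing)

df = pd.DataFrame({"time": [0.0, 1.0, 2.0], "y": [5.0, 3.0, 4.0]})
assert is_sorted(df, "time")
assert not is_sorted(df, "y")
```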
diff --git a/src/sbmlsim/model/model.py b/src/sbmlsim/model/model.py
index b6dac0e2..d3e5f063 100644
--- a/src/sbmlsim/model/model.py
+++ b/src/sbmlsim/model/model.py
@@ -5,9 +5,10 @@
Other formats could be supported like CellML or NeuroML.
"""
+
from enum import Enum
from pathlib import Path
-from typing import Dict, List, Optional, Union
+from typing import Optional, Union
from pymetadata import log
@@ -49,8 +50,8 @@ def __init__(
language: Optional[str] = None,
language_type: LanguageType = LanguageType.SBML,
base_path: Optional[Path] = None,
- changes: Dict = None,
- selections: List[str] = None,
+ changes: dict = None,
+ selections: list[str] = None,
):
"""Initialize SourceType."""
diff --git a/src/sbmlsim/model/model_resources.py b/src/sbmlsim/model/model_resources.py
index e7df4778..425f3395 100644
--- a/src/sbmlsim/model/model_resources.py
+++ b/src/sbmlsim/model/model_resources.py
@@ -3,10 +3,11 @@
Interacting with model resources to retrieve models.
This currently includes BioModels, but can easily be extended to other models.
"""
+
import re
from dataclasses import dataclass
from pathlib import Path
-from typing import Dict, Optional, Union
+from typing import Optional, Union
import requests
from pymetadata import log
@@ -31,7 +32,7 @@ def is_content(self) -> bool:
"""Check if the source is Content."""
return self.content is not None
- def to_dict(self) -> Dict[str, Optional[str]]:
+ def to_dict(self) -> dict[str, Optional[str]]:
"""Convert to dict.
Used for serialization.
diff --git a/src/sbmlsim/model/model_roadrunner.py b/src/sbmlsim/model/model_roadrunner.py
index bf014722..2cc006af 100644
--- a/src/sbmlsim/model/model_roadrunner.py
+++ b/src/sbmlsim/model/model_roadrunner.py
@@ -2,7 +2,7 @@
import tempfile
from pathlib import Path
-from typing import Dict, List, Optional, Union
+from typing import Optional, Union
import libsbml
import numpy as np
@@ -33,12 +33,12 @@ def __init__(
self,
source: Union[str, Path],
base_path: Path = None,
- changes: Dict = None,
+ changes: dict = None,
sid: str = None,
name: str = None,
- selections: List[str] = None,
+ selections: list[str] = None,
ureg: UnitRegistry = None,
- settings: Dict = None,
+ settings: dict = None,
):
super(RoadrunnerSBMLModel, self).__init__(
source=source,
@@ -84,9 +84,9 @@ def Q_(self) -> Quantity:
@staticmethod
def from_abstract_model(
abstract_model: AbstractModel,
- selections: List[str] = None,
+ selections: list[str] = None,
ureg: UnitRegistry = None,
- settings: Dict = None,
+ settings: dict = None,
):
"""Create from AbstractModel."""
logger.debug("RoadrunnerSBMLModel from AbstractModel")
@@ -174,8 +174,8 @@ def parse_units(self, ureg: UnitRegistry) -> UnitsInformation:
@classmethod
def set_timecourse_selections(
- cls, r: roadrunner.RoadRunner, selections: List[str] = None
- ) -> List[str]:
+ cls, r: roadrunner.RoadRunner, selections: list[str] = None
+ ) -> list[str]:
"""Set the model selections for timecourse simulation."""
if selections is None:
r_model: roadrunner.ExecutableModel = r.model
@@ -252,7 +252,7 @@ def parameter_df(r: roadrunner.RoadRunner) -> pd.DataFrame:
doc: libsbml.SBMLDocument = libsbml.readSBMLFromString(r.getCurrentSBML())
model: libsbml.Model = doc.getModel()
sids = r_model.getGlobalParameterIds()
- parameters: List[libsbml.Parameter] = [model.getParameter(sid) for sid in sids]
+ parameters: list[libsbml.Parameter] = [model.getParameter(sid) for sid in sids]
data = {
"sid": sids,
"value": r_model.getGlobalParameterValues(),
@@ -276,7 +276,7 @@ def species_df(r: roadrunner.RoadRunner) -> pd.DataFrame:
model: libsbml.Model = doc.getModel()
sids = r_model.getFloatingSpeciesIds() + r_model.getBoundarySpeciesIds()
- species: List[libsbml.Species] = [model.getSpecies(sid) for sid in sids]
+ species: list[libsbml.Species] = [model.getSpecies(sid) for sid in sids]
data = {
"sid": sids,
diff --git a/src/sbmlsim/plot/__init__.py b/src/sbmlsim/plot/__init__.py
index 8e1dde9e..248cca34 100644
--- a/src/sbmlsim/plot/__init__.py
+++ b/src/sbmlsim/plot/__init__.py
@@ -1,2 +1,5 @@
"""Plotting in sbmlsim."""
+
from .plotting import Figure, Plot, Axis, SubPlot, Curve, ColorType, MarkerType
+
+__all__ = ["Figure", "Plot", "Axis", "SubPlot", "Curve", "ColorType", "MarkerType"]
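The added `__all__` controls what a star import re-exports. A throwaway demonstration (the module built here is synthetic, not the real `sbmlsim.plot` package):

```python
import sys
import types

# build a throwaway module with one public and one private name
mod = types.ModuleType("plotmod")
exec("Figure = 'Figure'; _helper = 'hidden'; __all__ = ['Figure']", mod.__dict__)
sys.modules["plotmod"] = mod

namespace: dict = {}
exec("from plotmod import *", namespace)
# only names listed in __all__ survive the star import
public = sorted(n for n in namespace if not n.startswith("__"))
```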
diff --git a/src/sbmlsim/plot/plotting.py b/src/sbmlsim/plot/plotting.py
index 603ab00b..60ecefca 100644
--- a/src/sbmlsim/plot/plotting.py
+++ b/src/sbmlsim/plot/plotting.py
@@ -13,18 +13,19 @@
E.g. over which dimensions should an error be calculated and which
dimensions should be plotted individually.
"""
+
from __future__ import annotations
import copy
from copy import deepcopy
from dataclasses import dataclass
from enum import Enum
-from typing import Any, Dict, List, Optional, Union
+from typing import Any, Optional, Union
import numpy as np
from matplotlib.colors import to_hex, to_rgba
from pymetadata import log
-from sbmlsim.data import Data, DataSet
+from sbmlsim.data import Data
logger = log.get_logger(__name__)
@@ -156,7 +157,7 @@ class Line:
color: ColorType = None
thickness: float = 2.0
- def to_dict(self) -> Dict[str, Any]:
+ def to_dict(self) -> dict[str, Any]:
"""Convert to dictionary for serialization."""
return {
"type": self.type,
@@ -175,7 +176,7 @@ class Marker:
line_color: ColorType = None
line_thickness: float = 1.0
- def to_dict(self) -> Dict[str, Any]:
+ def to_dict(self) -> dict[str, Any]:
"""Convert to dictionary for serialization."""
return {
"size": self.size,
@@ -193,7 +194,7 @@ class Fill:
color: ColorType = None
second_color: ColorType = None
- def to_dict(self) -> Dict[str, Any]:
+ def to_dict(self) -> dict[str, Any]:
"""Convert to dictionary for serialization."""
return {
"color": self.color,
@@ -294,7 +295,6 @@ def __copy__(self) -> "Style":
# https://matplotlib.org/3.1.0/gallery/lines_bars_and_markers/linestyles.html
MPL2SEDML_LINESTYLE_MAPPING = {
-
"": LineType.NONE,
"-": LineType.SOLID,
"solid": LineType.SOLID,
@@ -328,9 +328,9 @@ def __copy__(self) -> "Style":
}
SEDML2MPL_MARKER_MAPPING = {v: k for (k, v) in MPL2SEDML_MARKER_MAPPING.items()}
- def to_mpl_curve_kwargs(self) -> Dict:
+ def to_mpl_curve_kwargs(self) -> dict:
"""Convert to matplotlib curve keyword arguments."""
- kwargs: Dict[str, Any] = {}
+ kwargs: dict[str, Any] = {}
if self.line:
if self.line.color:
kwargs["color"] = self.line.color.color
@@ -358,7 +358,7 @@ def to_mpl_curve_kwargs(self) -> Dict:
return kwargs
- def _mpl_error_kwargs(self) -> Dict[str, Any]:
+ def _mpl_error_kwargs(self) -> dict[str, Any]:
"""Define keywords for error bars."""
error_kwargs = {
"error_kw": {
@@ -368,7 +368,7 @@ def _mpl_error_kwargs(self) -> Dict[str, Any]:
}
return error_kwargs
- def to_mpl_points_kwargs(self) -> Dict[str, Any]:
+ def to_mpl_points_kwargs(self) -> dict[str, Any]:
"""Convert to matplotlib point curve keyword arguments."""
points_kwargs = self.to_mpl_curve_kwargs()
for key in ["fill.color", "fill.second_color"]:
@@ -405,9 +405,9 @@ def to_mpl_bar_kwargs(self):
**self._mpl_error_kwargs(),
}
- def to_mpl_area_kwargs(self) -> Dict[str, Any]:
+ def to_mpl_area_kwargs(self) -> dict[str, Any]:
"""Define keyword dictionary for a shaded area."""
- kwargs: Dict[str, Any] = {}
+ kwargs: dict[str, Any] = {}
if self.line:
if self.line.color:
@@ -443,7 +443,6 @@ def from_mpl_kwargs(**kwargs) -> "Style":
color=kwargs.get("markeredgecolor", None),
)
-
# Line
linestyle = Style.MPL2SEDML_LINESTYLE_MAPPING[kwargs.get("linestyle", "-")]
line = Line(color=color, type=linestyle, thickness=kwargs.get("linewidth", 1.0))
@@ -695,7 +694,7 @@ def __str__(self) -> str:
return "\n".join(info)
@staticmethod
- def _add_default_style_kwargs(d: Dict, dtype: str) -> Dict:
+ def _add_default_style_kwargs(d: dict, dtype: str) -> dict:
"""Add the default plotting style arguments."""
if dtype == Data.Types.TASK:
@@ -762,7 +761,7 @@ def __init__(
if "name" in kwargs:
self.name = kwargs["name"]
- self.kwargs: Dict[str, Any] = kwargs
+ self.kwargs: dict[str, Any] = kwargs
def __repr__(self) -> str:
"""Get representation string."""
@@ -816,8 +815,8 @@ def __init__(
xaxis: Axis = None,
yaxis: Axis = None,
yaxis_right: Axis = None,
- curves: List[Curve] = None,
- areas: List[ShadedArea] = None,
+ curves: list[Curve] = None,
+ areas: list[ShadedArea] = None,
legend: bool = True,
facecolor: ColorType = None,
title_visible: bool = True,
@@ -856,15 +855,15 @@ def __init__(
self._xaxis: Axis = None
self._yaxis: Axis = None
self._yaxis_right: Axis = None
- self._curves: List[Curve] = None
- self._areas: List[ShadedArea] = None
+ self._curves: list[Curve] = None
+ self._areas: list[ShadedArea] = None
self._figure: Figure = None
self.xaxis: Axis = xaxis
self.yaxis: Axis = yaxis
self.yaxis_right: Axis = yaxis_right
- self.curves: List[Curve] = curves
- self.areas: List[ShadedArea] = areas
+ self.curves: list[Curve] = curves
+ self.areas: list[ShadedArea] = areas
self.legend: bool = legend
self.facecolor: ColorType = facecolor
@@ -1069,12 +1068,12 @@ def add_area(self, area: ShadedArea):
self.areas.append(area)
@property
- def curves(self) -> List[Curve]:
+ def curves(self) -> list[Curve]:
"""Get curves."""
return self._curves
@curves.setter
- def curves(self, value: List[Curve]):
+ def curves(self, value: list[Curve]):
"""Set curves."""
self._curves = list()
if value is not None:
@@ -1082,12 +1081,12 @@ def curves(self, value: List[Curve]):
self.add_curve(curve)
@property
- def areas(self) -> List[ShadedArea]:
+ def areas(self) -> list[ShadedArea]:
"""Get areas."""
return self._areas
@areas.setter
- def areas(self, value: List[ShadedArea]) -> None:
+ def areas(self, value: list[ShadedArea]) -> None:
"""Set areas."""
self._areas = list()
if value is not None:
@@ -1173,7 +1172,7 @@ def add_data(
"No label provided on curve, using default label 'yid'. "
"To not plot a label use 'label=None'"
)
- if 'markeredgecolor' not in kwargs:
+ if "markeredgecolor" not in kwargs:
kwargs["markeredgecolor"] = "black"
# xerr data
@@ -1192,6 +1191,8 @@ def add_data(
xerr_label = "±SE"
xerr = Data(xid_se, dataset=dataset, task=task)
+ _ = xerr_label
+
# yerr data
yerr = None
yerr_label = ""
@@ -1309,7 +1310,7 @@ def __init__(
experiment: "SimulationExperiment", # noqa: F821
sid: str,
name: str = None,
- subplots: List[SubPlot] = None,
+ subplots: list[SubPlot] = None,
height: float = None,
width: float = None,
num_rows: int = 1,
@@ -1320,7 +1321,7 @@ def __init__(
self.experiment: "SimulationExperiment" = experiment # noqa: F821
if subplots is None:
subplots = list()
- self.subplots: List[SubPlot] = subplots
+ self.subplots: list[SubPlot] = subplots
self.num_rows: int = num_rows
self.num_cols: int = num_cols
self._height: float = height
@@ -1330,7 +1331,6 @@ def __init__(
# print(f"[{self.num_rows}, {self.num_cols}], ({self.height}, {self.width})")
# print(f"Figure: [{self.panel_height}, {self.panel_width}]")
-
def __repr__(self) -> str:
"""Get representation string."""
return (
@@ -1380,7 +1380,7 @@ def set_title(self, title):
def create_plots(
self, xaxis: Axis = None, yaxis: Axis = None, legend: bool = True
- ) -> List[Plot]:
+ ) -> list[Plot]:
"""Create plots in the figure.
Settings are applied to all generated plots. E.g. if an xaxis is provided
@@ -1399,11 +1399,11 @@ def create_plots(
return plots
@property
- def plots(self) -> List[Plot]:
+ def plots(self) -> list[Plot]:
"""Get plots in this figure."""
return self.get_plots()
- def get_plots(self) -> List[Plot]:
+ def get_plots(self) -> list[Plot]:
"""Get plots in this figure."""
return [subplot.plot for subplot in self.subplots]
@@ -1450,7 +1450,7 @@ def add_subplot(
)
return plot
- def add_plots(self, plots: List[Plot], copy_plots: bool = False) -> None:
+ def add_plots(self, plots: list[Plot], copy_plots: bool = False) -> None:
"""Add plots to figure.
For every plot a subplot is generated.
@@ -1481,7 +1481,7 @@ def add_plots(self, plots: List[Plot], copy_plots: bool = False) -> None:
plot.figure = self
@staticmethod
- def from_plots(sid, plots: List[Plot]) -> "Figure":
+ def from_plots(sid, plots: list[Plot]) -> "Figure":
"""Create figure object from list of plots."""
num_plots = len(plots)
return Figure(
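`plotting.py` builds the SED-ML ↔ matplotlib style maps by inverting a dict with a comprehension, as in the `SEDML2MPL_MARKER_MAPPING` line above. A toy sketch (values shortened) showing the caveat that when several keys share a value, the inverse keeps the last key seen:

```python
# forward map with aliases: "-" and "solid" both denote a solid line
MPL2SEDML_LINESTYLE = {"": "none", "-": "solid", "solid": "solid", "--": "dash"}

# inverted map: duplicate values collapse, last key wins
SEDML2MPL_LINESTYLE = {v: k for (k, v) in MPL2SEDML_LINESTYLE.items()}
```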
diff --git a/src/sbmlsim/plot/serialization_matplotlib.py b/src/sbmlsim/plot/serialization_matplotlib.py
index e475e3e7..02ffea5a 100644
--- a/src/sbmlsim/plot/serialization_matplotlib.py
+++ b/src/sbmlsim/plot/serialization_matplotlib.py
@@ -1,14 +1,13 @@
"""Serialization of Figure object to matplotlib."""
-from typing import Any, Dict, List, Optional
+from __future__ import annotations
+from typing import Any, Optional
import numpy as np
from matplotlib import pyplot as plt
-from matplotlib.axis import Axis as AxisMPL
from matplotlib.figure import Figure as FigureMPL
from matplotlib.gridspec import GridSpec
from pymetadata import log
-from pymetadata.console import console
from sbmlsim.plot import Axis, Curve, Figure, SubPlot
from sbmlsim.plot.plotting import (
@@ -60,7 +59,9 @@ def _get_scale(cls, axis: Axis) -> str:
@classmethod
def to_figure(
- cls, experiment: "SimulationExperiment", figure: Figure # noqa: F821
+ cls,
+ experiment, # "SimulationExperiment",
+ figure: Figure, # noqa: F821
) -> FigureMPL:
"""Convert sbmlsim.Figure to matplotlib figure."""
@@ -102,7 +103,7 @@ def to_figure(
)
# secondary axis
ax2: Optional[plt.Axes] = None
- axes: List[plt.Axes] = [ax1]
+ axes: list[plt.Axes] = [ax1]
if yax_right:
for curve in plot.curves:
if (
@@ -142,11 +143,10 @@ def to_figure(
barhstack_y = None
# plot ordered curves
- abstract_curves: List[AbstractCurve] = sorted(
+ abstract_curves: list[AbstractCurve] = sorted(
plot.curves + plot.areas, key=lambda x: x.order
)
for abstract_curve in abstract_curves:
-
if (
abstract_curve.yaxis_position
and abstract_curve.yaxis_position == YAxisPosition.RIGHT
@@ -207,7 +207,7 @@ def to_figure(
else yerr.magnitude
)
- kwargs: Dict[str, Any] = {}
+ kwargs: dict[str, Any] = {}
if curve.style:
style: Style = curve.style.resolve_style()
if curve.type == CurveType.POINTS:
@@ -295,7 +295,7 @@ def to_figure(
yto_data = yto.magnitude[:, 0] if yto is not None else None
label = area.name if area.name else "__nolabel__"
- kwargs: Dict[str, Any] = {}
+ kwargs: dict[str, Any] = {}
if area.style:
style: Style = area.style.resolve_style()
kwargs = style.to_mpl_area_kwargs()
@@ -402,13 +402,13 @@ def apply_axis_settings(sax: Axis, ax: plt.Axes, axis_type: str):
# hide none-existing axes
if plot.xaxis is None:
- ax.spines['right'].set_visible(False)
- ax.spines['left'].set_visible(False)
+ ax.spines["right"].set_visible(False)
+ ax.spines["left"].set_visible(False)
ax1.xaxis.set_visible(False)
if plot.yaxis is None:
- ax.spines['top'].set_visible(False)
- ax.spines['bottom'].set_visible(False)
+ ax.spines["top"].set_visible(False)
+ ax.spines["bottom"].set_visible(False)
ax1.yaxis.set_visible(False)
xgrid = xax.grid if xax else None
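The hunk above merges curves and shaded areas into one draw list ordered by their `order` attribute. A minimal stand-in (class and field names simplified from `AbstractCurve`):

```python
from dataclasses import dataclass


@dataclass
class DrawItem:
    """Stand-in for AbstractCurve: curves and areas share an 'order' field."""
    name: str
    order: int


curves = [DrawItem("glucose", 2), DrawItem("insulin", 1)]
areas = [DrawItem("confidence_band", 0)]

# one z-ordered draw list, mirroring sorted(plot.curves + plot.areas, key=...)
draw_list = sorted(curves + areas, key=lambda item: item.order)
```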
diff --git a/src/sbmlsim/report/experiment_report.py b/src/sbmlsim/report/experiment_report.py
index 93c3b5b1..154544c5 100644
--- a/src/sbmlsim/report/experiment_report.py
+++ b/src/sbmlsim/report/experiment_report.py
@@ -1,15 +1,15 @@
"""Create report of simulation experiments."""
+
import json
import os
import shutil
import sys
from enum import Enum
from pathlib import Path
-from typing import Dict, Optional
+from typing import Optional
import jinja2
from pymetadata import log
-from pymetadata.console import console
from sbmlsim import RESOURCES_DIR, __version__
from sbmlsim.experiment import ExperimentResult, SimulationExperiment
@@ -25,7 +25,7 @@ class ReportResults:
def __init__(self):
"""Construct ReportResults."""
- self.data: Dict[str, Dict] = {}
+ self.data: dict[str, dict] = {}
def to_json(self, json_path: Path):
"""Write to JSON."""
@@ -94,7 +94,7 @@ class ReportType(Enum):
LATEX = 3
def __init__(
- self, results: ReportResults, metadata: Dict = None, template_path=TEMPLATE_PATH
+ self, results: ReportResults, metadata: dict = None, template_path=TEMPLATE_PATH
):
"""Construct an ExperimentReport."""
if isinstance(results, list):
@@ -116,7 +116,7 @@ def create_report(
output_path: Path,
filename: Optional[str] = None,
report_type: ReportType = ReportType.HTML,
- f_filter_context: Optional[Dict] = None,
+ f_filter_context: Optional[dict] = None,
**kwargs,
) -> Path:
"""Create report of SimulationExperiments.
@@ -134,7 +134,7 @@ def create_report(
lstrip_blocks=True,
)
- def write_report(filename: str, context: Dict, template_str: str) -> Path:
+ def write_report(filename: str, context: dict, template_str: str) -> Path:
"""Write the report file from given context and template."""
template = env.get_template(template_str)
text = template.render(context)
@@ -191,6 +191,6 @@ def write_report(filename: str, context: Dict, template_str: str) -> Path:
output_path = write_report(
filename=filename, context=context, template_str=f"index.{suffix}"
)
- report_path_str: str = str(output_path).replace('\\', '/')
+ report_path_str: str = str(output_path).replace("\\", "/")
logger.info(f"report created: file://{report_path_str}")
return output_path
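The final hunk normalizes backslashes so the logged `file://` link also works for Windows paths. A sketch of that normalization in isolation (the helper name is illustrative):

```python
from pathlib import PurePath, PureWindowsPath


def report_uri(output_path: PurePath) -> str:
    """Build the logged file:// link; Windows backslashes become forward slashes."""
    return "file://" + str(output_path).replace("\\", "/")
```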
diff --git a/src/sbmlsim/result/datagenerator.py b/src/sbmlsim/result/datagenerator.py
index 9d2551ce..f698bb30 100644
--- a/src/sbmlsim/result/datagenerator.py
+++ b/src/sbmlsim/result/datagenerator.py
@@ -1,7 +1,5 @@
"""DataGenerator."""
-from typing import Dict
-
from sbmlsim.data import DataSet
from sbmlsim.result import XResult
@@ -10,8 +8,8 @@ class DataGeneratorFunction:
"""DataGeneratorFunction."""
def __call__(
- self, xresults: Dict[str, XResult], dsets: Dict[str, DataSet] = None
- ) -> Dict[str, XResult]:
+ self, xresults: dict[str, XResult], dsets: dict[str, DataSet] = None
+ ) -> dict[str, XResult]:
"""Call the function."""
raise NotImplementedError
@@ -23,7 +21,7 @@ def __init__(self, index: int, dimension: str = "_time"):
self.index = index
self.dimension = dimension
- def __call__(self, xresults: Dict[str, XResult], dsets=None) -> Dict[str, XResult]:
+ def __call__(self, xresults: dict[str, XResult], dsets=None) -> dict[str, XResult]:
"""Reduce based on '_time' dimension with given index."""
results = {}
for key, xres in xresults.items():
@@ -49,8 +47,8 @@ class DataGenerator:
def __init__(
self,
f: DataGeneratorFunction,
- xresults: Dict[str, XResult],
- dsets: Dict[str, DataSet] = None,
+ xresults: dict[str, XResult],
+ dsets: dict[str, DataSet] = None,
):
self.xresults = xresults
self.dsets = dsets
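`DataGeneratorFunction` subclasses are callables that reduce result sets, e.g. cutting out one index along a dimension. A simplified stand-in that works on plain lists instead of `XResult` objects (an assumption for illustration):

```python
class CutDataGeneratorFunction:
    """Simplified stand-in: reduce each result series to the value at one index."""

    def __init__(self, index: int):
        self.index = index

    def __call__(self, xresults: dict[str, list[float]]) -> dict[str, float]:
        """Pick the value at self.index from every series."""
        return {key: series[self.index] for key, series in xresults.items()}
```

With `index=-1` this picks the end-of-timecourse value of every series.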
diff --git a/src/sbmlsim/result/report.py b/src/sbmlsim/result/report.py
index af5afd7e..d9b2ddf7 100644
--- a/src/sbmlsim/result/report.py
+++ b/src/sbmlsim/result/report.py
@@ -1,7 +1,5 @@
"""Reports."""
-from typing import Dict
-
from pymetadata import log
@@ -14,7 +12,7 @@ class Report:
Collections of data generators.
"""
- def __init__(self, sid: str, name: str = None, datasets: Dict[str, str] = None):
+ def __init__(self, sid: str, name: str = None, datasets: dict[str, str] = None):
"""Construct report."""
self.sid: str = sid
self.name: str = name
@@ -22,7 +20,7 @@ def __init__(self, sid: str, name: str = None, datasets: Dict[str, str] = None):
if datasets is None:
self.datasets = {}
- self.datasets: Dict[str, str] = datasets
+ self.datasets: dict[str, str] = datasets
def add_dataset(self, label: str, data_id: str) -> None:
"""Add dataset for given label."""
diff --git a/src/sbmlsim/sensitivity/analysis.py b/src/sbmlsim/sensitivity/analysis.py
index 440c94ae..be12f7c7 100644
--- a/src/sbmlsim/sensitivity/analysis.py
+++ b/src/sbmlsim/sensitivity/analysis.py
@@ -1,7 +1,5 @@
-"""Sensitivity analysis.
+"""Sensitivity analysis."""
-
-"""
import multiprocessing
import os
import time
@@ -24,6 +22,7 @@
@dataclass
class SensitivityOutput:
"""Output measurement for SensitivityAnalysis."""
+
uid: str
name: str
unit: Optional[str]
@@ -47,9 +46,13 @@ class SensitivitySimulation:
This function is called repeatedly during the sensitivity calculation.
"""
- def __init__(self, model_path: Path, selections: list[str],
- changes_simulation: dict[str, float],
- outputs: list[SensitivityOutput]):
+ def __init__(
+ self,
+ model_path: Path,
+ selections: list[str],
+ changes_simulation: dict[str, float],
+ outputs: list[SensitivityOutput],
+ ):
self.model_path = model_path
self.selections = selections
self.changes_simulation = changes_simulation
@@ -65,7 +68,8 @@ def __init__(self, model_path: Path, selections: list[str],
for key in y:
if key not in outputs_dict:
raise ValueError(
- f"Key '{key}' missing in outputs dictionary: '{outputs_dict}")
+ f"Key '{key}' missing in outputs dictionary: '{outputs_dict}"
+ )
@staticmethod
def load_model(model_path: Path, selections: list[str]) -> roadrunner.RoadRunner:
@@ -77,8 +81,9 @@ def load_model(model_path: Path, selections: list[str]) -> roadrunner.RoadRunner
return rr
@staticmethod
- def apply_changes(r: roadrunner.RoadRunner, changes: dict[str, float],
- reset_all: bool = True) -> None:
+ def apply_changes(
+ r: roadrunner.RoadRunner, changes: dict[str, float], reset_all: bool = True
+ ) -> None:
"""Apply changes after possible reset of the model."""
if reset_all:
r.resetAll()
@@ -87,17 +92,20 @@ def apply_changes(r: roadrunner.RoadRunner, changes: dict[str, float],
# print(f"{key=} {value=}")
r.setValue(key, value)
- def simulate(self, r: roadrunner.RoadRunner, changes: dict[str, float]) -> dict[
- str, float]:
+ def simulate(
+ self, r: roadrunner.RoadRunner, changes: dict[str, float]
+ ) -> dict[str, float]:
"""Run a model simulation and return scalar results dictionary."""
- raise NotImplemented
+ raise NotImplementedError
@classmethod
- def parameter_values(cls, r: roadrunner.RoadRunner,
- parameters: list[SensitivityParameter],
- changes: dict[str, float]
- ) -> dict[str, float]:
+ def parameter_values(
+ cls,
+ r: roadrunner.RoadRunner,
+ parameters: list[SensitivityParameter],
+ changes: dict[str, float],
+ ) -> dict[str, float]:
"""Get the parameter values for a given set of changes."""
cls.apply_changes(r, changes, reset_all=True)
@@ -111,21 +119,22 @@ def parameter_values(cls, r: roadrunner.RoadRunner,
def plot(self) -> None:
"""Plot the model simulation."""
- raise NotImplemented
+ raise NotImplementedError
class SensitivityAnalysis:
"""Parent class for all sensitivity analysis."""
- def __init__(self,
- sensitivity_simulation: SensitivitySimulation,
- parameters: list[SensitivityParameter],
- groups: list[AnalysisGroup],
- results_path: Path,
- seed: Optional[int] = None,
- n_cores: Optional[int] = None,
- cache_results: bool = False,
- ) -> None:
+ def __init__(
+ self,
+ sensitivity_simulation: SensitivitySimulation,
+ parameters: list[SensitivityParameter],
+ groups: list[AnalysisGroup],
+ results_path: Path,
+ seed: Optional[int] = None,
+ n_cores: Optional[int] = None,
+ cache_results: bool = False,
+ ) -> None:
"""Create a sensitivity analysis for given parameter ids.
Based on the results matrix the sensitivity is calculated.
@@ -179,8 +188,9 @@ def __init__(self,
# multiple sensitivities are stored
# sensitivity matrix; shape: (num_parameters x num_outputs); could be multiple
- self.sensitivity: dict[str, dict[str, xr.DataArray]] = {g.uid: {} for g in
- self.groups}
+ self.sensitivity: dict[str, dict[str, xr.DataArray]] = {
+ g.uid: {} for g in self.groups
+ }
@property
def output_ids(self) -> list[str]:
@@ -233,7 +243,7 @@ def execute(self):
def create_samples(self) -> None:
"""Create and set parameter samples."""
- raise NotImplemented
+ raise NotImplementedError
@property
def num_samples(self) -> int:
@@ -245,8 +255,9 @@ def num_samples(self) -> int:
samples = self.samples[self.group_ids[0]]
return samples.shape[0]
- def simulate_samples(self, cache_filename: Optional[str] = None,
- cache: bool = False) -> None:
+ def simulate_samples(
+ self, cache_filename: Optional[str] = None, cache: bool = False
+ ) -> None:
"""Simulate all samples in parallel.
:param cache_filename: Path to the cache path.
@@ -267,7 +278,7 @@ def simulate_samples(self, cache_filename: Optional[str] = None,
np.full((self.num_samples, self.num_outputs), np.nan),
dims=["sample", "output"],
coords={"sample": range(self.num_samples), "output": self.outputs},
- name="results"
+ name="results",
)
# load model
@@ -284,14 +295,17 @@ def split_into_chunks(items, n):
m = len(items)
k, r = divmod(m, n)
chunks = [
- items[i * k + min(i, r):(i + 1) * k + min(i + 1, r)]
+ items[i * k + min(i, r) : (i + 1) * k + min(i + 1, r)]
for i in range(n)
]
chunked_samples = [
- [{
- **group.changes,
- **dict(zip(self.parameter_ids, samples[k, :].values))
- } for k in chunk]
+ [
+ {
+ **group.changes,
+ **dict(zip(self.parameter_ids, samples[k, :].values)),
+ }
+ for k in chunk
+ ]
for chunk in chunks
]
return chunks, chunked_samples
@@ -317,11 +331,12 @@ def split_into_chunks(items, n):
# write to cache
self.write_cache(data=self.results, cache_filename=cache_filename, cache=cache)
- def calculate_sensitivity(self, cache_filename: Optional[str] = None,
- cache: bool = False):
+ def calculate_sensitivity(
+ self, cache_filename: Optional[str] = None, cache: bool = False
+ ):
"""Calculate the sensitivity matrices."""
- raise NotImplemented
+ raise NotImplementedError
def samples_table(self) -> pd.DataFrame:
return self._data_table(d=self.samples)
@@ -334,7 +349,7 @@ def _data_table(self, d: dict[str, xr.DataArray]) -> pd.DataFrame:
for group in self.groups:
da: xr.DataArray = d[group.uid]
item = {
- 'group': group.uid,
+ "group": group.uid,
# 'group_name': group.name,
**da.sizes,
}
@@ -342,14 +357,15 @@ def _data_table(self, d: dict[str, xr.DataArray]) -> pd.DataFrame:
return pd.DataFrame(items)
def read_cache(self, cache_filename: str, cache: bool) -> Optional[Any]:
- cache_path: Optional[
- Path] = self.results_path / cache_filename if cache_filename else None
+ cache_path: Optional[Path] = (
+ self.results_path / cache_filename if cache_filename else None
+ )
if cache and not cache_path:
raise ValueError("Cache path is required for caching.")
# retrieve from cache
if cache and cache_path.exists():
- with open(cache_path, 'rb') as f:
+ with open(cache_path, "rb") as f:
data = dill.load(f)
console.print(f"Simulated samples loaded from cache: '{cache_path}'")
return data
@@ -357,10 +373,11 @@ def read_cache(self, cache_filename: str, cache: bool) -> Optional[Any]:
return None
def write_cache(self, data: Any, cache_filename: str, cache: bool) -> Optional[Any]:
- cache_path: Optional[
- Path] = self.results_path / cache_filename if cache_filename else None
+ cache_path: Optional[Path] = (
+ self.results_path / cache_filename if cache_filename else None
+ )
if cache_path:
- with open(cache_path, 'wb') as f:
+ with open(cache_path, "wb") as f:
console.print(f"Simulated samples written to cache: '{cache_path}'")
dill.dump(data, f)
@@ -371,7 +388,7 @@ def sensitivity_df(self, group_id: str, key: str) -> pd.DataFrame:
return pd.DataFrame(
sensitivity.values,
columns=sensitivity.coords["output"],
- index=sensitivity.coords["parameter"]
+ index=sensitivity.coords["parameter"],
)
def plot(self, **kwargs):
@@ -387,9 +404,8 @@ def plot_sensitivity(
title: Optional[str] = None,
cmap: str = "seismic",
fig_path: Optional[Path] = None,
- **kwargs
+ **kwargs,
) -> None:
-
df = self.sensitivity_df(group_id=group_id, key=sensitivity_key)
heatmap(
df=df,
@@ -400,24 +416,20 @@ def plot_sensitivity(
title=title,
cmap=cmap,
fig_path=fig_path,
- **kwargs
+ **kwargs,
)
-def run_simulation(
- params_tuple
-):
+def run_simulation(params_tuple):
"""Pass all required arguments as parameter tuple."""
sensitivity_simulation, r, chunked_changes = params_tuple
outputs = []
- for kc in track(range(len(chunked_changes)),
- description=f"Simulate samples PID={os.getpid()}"):
+ for kc in track(
+ range(len(chunked_changes)), description=f"Simulate samples PID={os.getpid()}"
+ ):
changes = chunked_changes[kc]
# console.print(f"PID={os.getpid()} | k={kc}")
- Y = sensitivity_simulation.simulate(
- r=r,
- changes=changes
- )
+ Y = sensitivity_simulation.simulate(r=r, changes=changes)
outputs.append(Y)
return outputs
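The `split_into_chunks` helper reformatted in this file divides the samples across worker processes. Extracted as a standalone function (same slicing arithmetic as in the hunk):

```python
def split_into_chunks(items: list, n: int) -> list[list]:
    """Split items into n chunks whose sizes differ by at most one element."""
    m = len(items)
    k, r = divmod(m, n)  # k items per chunk; the first r chunks get one extra
    return [
        items[i * k + min(i, r) : (i + 1) * k + min(i + 1, r)]
        for i in range(n)
    ]
```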
diff --git a/src/sbmlsim/sensitivity/sensitivity_fast.py b/src/sbmlsim/sensitivity/sensitivity_fast.py
index ff9cfaa7..1ab421e6 100644
--- a/src/sbmlsim/sensitivity/sensitivity_fast.py
+++ b/src/sbmlsim/sensitivity/sensitivity_fast.py
@@ -39,7 +39,6 @@
import numpy as np
import xarray as xr
from SALib import ProblemSpec
-from SALib.analyze import fast
from SALib.sample import fast_sampler
from sbmlsim.sensitivity import (
@@ -95,12 +94,14 @@ def __init__(
# define the problem specification
self.ssa_problems: dict[str, ProblemSpec] = {}
for group in self.groups:
- self.ssa_problems[group.uid] = ProblemSpec({
- 'num_vars': self.num_parameters,
- 'names': self.parameter_ids,
- 'bounds': [[p.lower_bound, p.upper_bound] for p in self.parameters],
- "outputs": self.output_ids,
- })
+ self.ssa_problems[group.uid] = ProblemSpec(
+ {
+ "num_vars": self.num_parameters,
+ "names": self.parameter_ids,
+ "bounds": [[p.lower_bound, p.upper_bound] for p in self.parameters],
+ "outputs": self.output_ids,
+ }
+ )
def create_samples(self) -> None:
"""Create samples for FAST."""
@@ -111,21 +112,23 @@ def create_samples(self) -> None:
for gid in self.group_ids:
# libssa samples based on definition
ssa_samples = fast_sampler.sample(
- self.ssa_problems[gid], N=self.N, M=self.M,
+ self.ssa_problems[gid],
+ N=self.N,
+ M=self.M,
)
self.ssa_problems[gid].set_samples(ssa_samples)
self.samples[gid] = xr.DataArray(
ssa_samples,
dims=["sample", "parameter"],
- coords={"sample": range(num_samples),
- "parameter": self.parameter_ids},
- name="samples"
+ coords={"sample": range(num_samples), "parameter": self.parameter_ids},
+ name="samples",
)
- def calculate_sensitivity(self, cache_filename: Optional[str] = None,
- cache: bool = False):
- """ Perform extended Fourier Amplitude Sensitivity Test on model outputs.
+ def calculate_sensitivity(
+ self, cache_filename: Optional[str] = None, cache: bool = False
+ ):
+ """Perform extended Fourier Amplitude Sensitivity Test on model outputs.
Returns a dictionary with keys 'S1' and 'ST', where each entry is a list of
size D (the number of parameters) containing the indices in the same order
@@ -146,16 +149,16 @@ def calculate_sensitivity(self, cache_filename: Optional[str] = None,
self.sensitivity[gid][key] = xr.DataArray(
np.full((self.num_parameters, self.num_outputs), np.nan),
dims=["parameter", "output"],
- coords={"parameter": self.parameter_ids,
- "output": self.output_ids},
- name=key
+ coords={"parameter": self.parameter_ids, "output": self.output_ids},
+ name=key,
)
# Calculate FAST indices
for ko in range(self.num_outputs):
Yo = Y[:, ko]
Si = SALib.analyze.fast.analyze(
- self.ssa_problems[gid], Yo,
+ self.ssa_problems[gid],
+ Yo,
M=self.M,
num_resamples=100,
conf_level=0.95,
@@ -165,8 +168,9 @@ def calculate_sensitivity(self, cache_filename: Optional[str] = None,
self.sensitivity[gid][key][:, ko] = Si[key]
# write to cache
- self.write_cache(data=self.sensitivity, cache_filename=cache_filename,
- cache=cache)
+ self.write_cache(
+ data=self.sensitivity, cache_filename=cache_filename, cache=cache
+ )
def plot(self):
super().plot()
@@ -183,11 +187,13 @@ def plot(self):
vcenter=0.5,
vmin=0.0,
vmax=1.0,
- fig_path=self.results_path / f"{self.prefix}_sensitivity_{kg:>02}_{group.uid}_{key}.png"
+ fig_path=self.results_path
+ / f"{self.prefix}_sensitivity_{kg:>02}_{group.uid}_{key}.png",
)
# barplots
plot_S1_ST_indices(
sa=self,
- fig_path=self.results_path / f"{self.prefix}_sensitivity_{kg:>02}_{group.uid}.png",
+ fig_path=self.results_path
+ / f"{self.prefix}_sensitivity_{kg:>02}_{group.uid}.png",
)
diff --git a/src/sbmlsim/sensitivity/sensitivity_sobol.py b/src/sbmlsim/sensitivity/sensitivity_sobol.py
index a439a481..6ca09ad8 100644
--- a/src/sbmlsim/sensitivity/sensitivity_sobol.py
+++ b/src/sbmlsim/sensitivity/sensitivity_sobol.py
@@ -53,7 +53,6 @@
import numpy as np
import xarray as xr
from SALib import ProblemSpec
-from SALib.analyze import sobol
from SALib.sample import saltelli
from sbmlsim.sensitivity import (
@@ -103,12 +102,14 @@ def __init__(
# define the problem specification
self.ssa_problems: dict[str, ProblemSpec] = {}
for group in self.groups:
- self.ssa_problems[group.uid] = ProblemSpec({
- 'num_vars': self.num_parameters,
- 'names': self.parameter_ids,
- 'bounds': [[p.lower_bound, p.upper_bound] for p in self.parameters],
- "outputs": self.output_ids,
- })
+ self.ssa_problems[group.uid] = ProblemSpec(
+ {
+ "num_vars": self.num_parameters,
+ "names": self.parameter_ids,
+ "bounds": [[p.lower_bound, p.upper_bound] for p in self.parameters],
+ "outputs": self.output_ids,
+ }
+ )
def create_samples(self) -> None:
"""Create samples for sobol.
@@ -124,20 +125,21 @@ def create_samples(self) -> None:
for gid in self.group_ids:
# libsa samples based on definition
- ssa_samples = saltelli.sample(self.ssa_problems[gid], N=self.N,
- calc_second_order=True)
+ ssa_samples = saltelli.sample(
+ self.ssa_problems[gid], N=self.N, calc_second_order=True
+ )
self.ssa_problems[gid].set_samples(ssa_samples)
self.samples[gid] = xr.DataArray(
ssa_samples,
dims=["sample", "parameter"],
- coords={"sample": range(num_samples),
- "parameter": self.parameter_ids},
- name="samples"
+ coords={"sample": range(num_samples), "parameter": self.parameter_ids},
+ name="samples",
)
- def calculate_sensitivity(self, cache_filename: Optional[str] = None,
- cache: bool = False):
+ def calculate_sensitivity(
+ self, cache_filename: Optional[str] = None, cache: bool = False
+ ):
"""Calculate the sensitivity matrices for SOBOL analysis."""
data = self.read_cache(cache_filename, cache)
@@ -154,9 +156,8 @@ def calculate_sensitivity(self, cache_filename: Optional[str] = None,
self.sensitivity[gid][key] = xr.DataArray(
np.full((self.num_parameters, self.num_outputs), np.nan),
dims=["parameter", "output"],
- coords={"parameter": self.parameter_ids,
- "output": self.output_ids},
- name=key
+ coords={"parameter": self.parameter_ids, "output": self.output_ids},
+ name=key,
)
# Calculate Sobol indices for every output, typically with a confidence
@@ -164,7 +165,8 @@ def calculate_sensitivity(self, cache_filename: Optional[str] = None,
for ko in range(self.num_outputs):
Yo = Y[:, ko]
Si = SALib.analyze.sobol.analyze(
- self.ssa_problems[gid], Yo,
+ self.ssa_problems[gid],
+ Yo,
calc_second_order=True,
num_resamples=100,
conf_level=0.95,
@@ -175,8 +177,9 @@ def calculate_sensitivity(self, cache_filename: Optional[str] = None,
self.sensitivity[gid][key][:, ko] = Si[key]
# write to cache
- self.write_cache(data=self.sensitivity, cache_filename=cache_filename,
- cache=cache)
+ self.write_cache(
+ data=self.sensitivity, cache_filename=cache_filename, cache=cache
+ )
def plot(self):
super().plot()
@@ -193,11 +196,13 @@ def plot(self):
vcenter=0.5,
vmin=0.0,
vmax=1.0,
- fig_path=self.results_path / f"{self.prefix}_sensitivity_{kg:>02}_{group.uid}_{key}.png"
+ fig_path=self.results_path
+ / f"{self.prefix}_sensitivity_{kg:>02}_{group.uid}_{key}.png",
)
# barplots
plot_S1_ST_indices(
sa=self,
- fig_path=self.results_path / f"{self.prefix}_sensitivity_{kg:>02}_{group.uid}.png",
+ fig_path=self.results_path
+ / f"{self.prefix}_sensitivity_{kg:>02}_{group.uid}.png",
)
diff --git a/src/sbmlsim/serialization.py b/src/sbmlsim/serialization.py
index 39638fc7..7e82869e 100644
--- a/src/sbmlsim/serialization.py
+++ b/src/sbmlsim/serialization.py
@@ -1,17 +1,18 @@
"""Helpers for JSON serialization of experiments."""
+
import json
from enum import Enum
from json import JSONEncoder
from pathlib import Path
-from typing import Any, Dict, Optional, Union
+from typing import Any, Optional, Union
from matplotlib.pyplot import Figure as MPLFigure
from numpy import ndarray
-def from_json(json_info: Union[str, Path]) -> Dict[Any, Any]:
+def from_json(json_info: Union[str, Path]) -> dict[Any, Any]:
"""Load data from JSON."""
- d: Dict[Any, Any]
+ d: dict[Any, Any]
if isinstance(json_info, Path):
with open(json_info, "r") as f_json:
d = json.load(f_json)
diff --git a/src/sbmlsim/simulation/__init__.py b/src/sbmlsim/simulation/__init__.py
index d5782cb9..dd47e5fd 100644
--- a/src/sbmlsim/simulation/__init__.py
+++ b/src/sbmlsim/simulation/__init__.py
@@ -1,4 +1,13 @@
"""Package for simulation."""
+
from .simulation import AbstractSim, Dimension
from .timecourse import TimecourseSim, Timecourse
from .scan import ScanSim
+
+__all__ = [
+ "AbstractSim",
+ "Dimension",
+ "TimecourseSim",
+ "Timecourse",
+ "ScanSim",
+]
diff --git a/src/sbmlsim/simulation/algorithm.py b/src/sbmlsim/simulation/algorithm.py
index 47eb64fe..b22eb004 100644
--- a/src/sbmlsim/simulation/algorithm.py
+++ b/src/sbmlsim/simulation/algorithm.py
@@ -1,6 +1,6 @@
"""Handling of algorithms and algorithm parameters."""
-from typing import List, Optional, Union
+from typing import Optional, Union
from pymetadata.metadata import KISAO, KISAOType
from pymetadata import log
@@ -51,7 +51,7 @@ class Algorithm(BaseObject):
def __init__(
self,
kisao: KISAOType,
- parameters: Optional[List[AlgorithmParameter]] = None,
+ parameters: Optional[list[AlgorithmParameter]] = None,
sid: Optional[str] = None,
name: Optional[str] = None,
):
@@ -66,7 +66,7 @@ def __init__(
super(Algorithm, self).__init__(sid, name)
self.kisao: KISAO = kisao
- self.parameters: Optional[List[AlgorithmParameter]] = parameters
+ self.parameters: Optional[list[AlgorithmParameter]] = parameters
def __repr__(self) -> str:
"""Get string representation."""
diff --git a/src/sbmlsim/simulation/calculation.py b/src/sbmlsim/simulation/calculation.py
index c6fde4fa..bfff1c8d 100644
--- a/src/sbmlsim/simulation/calculation.py
+++ b/src/sbmlsim/simulation/calculation.py
@@ -1,8 +1,6 @@
"""Module for performing all the Calculations."""
-
-from abc import abstractmethod
-from typing import List, Optional
+from typing import Optional
from sbmlsim.simulation.base import BaseObject, BaseObjectSIdRequired, Symbol, Target
@@ -86,7 +84,7 @@ def __init__(
unit: Optional[str] = None,
name: Optional[str] = None,
term: Optional[str] = None,
- applied_dimensions: Optional[List[AppliedDimension]] = None,
+ applied_dimensions: Optional[list[AppliedDimension]] = None,
):
"""Construct Variable."""
super(Variable, self).__init__(sid=sid, name=name)
@@ -96,7 +94,7 @@ def __init__(
self.symbol: Optional[Symbol] = symbol
self.unit: Optional[str] = unit
self.term: Optional[str] = term
- self.applied_dimensions: Optional[List[AppliedDimension]] = applied_dimensions
+ self.applied_dimensions: Optional[list[AppliedDimension]] = applied_dimensions
def __repr__(self) -> str:
"""Get string representation."""
@@ -123,7 +121,7 @@ def __init__(
unit: Optional[str] = None,
name: Optional[str] = None,
term: Optional[str] = None,
- applied_dimensions: Optional[List[AppliedDimension]] = None,
+ applied_dimensions: Optional[list[AppliedDimension]] = None,
):
"""Construct DependentVariable."""
super(DependentVariable, self).__init__(
@@ -150,15 +148,15 @@ class Calculation(BaseObjectSIdRequired):
def __init__(
self,
sid: str,
- variables: List[Variable],
- parameters: List[Parameter],
+ variables: list[Variable],
+ parameters: list[Parameter],
math: str,
name: Optional[str] = None,
):
"""Construct Calculation."""
super(Calculation, self).__init__(sid=sid, name=name)
- self.variables: List[Variable] = variables
- self.parameters: List[Parameter] = pars
+ self.variables: list[Variable] = variables
+ self.parameters: list[Parameter] = parameters
self.math: str = math
# @abstractmethod
@@ -206,18 +204,18 @@ class FunctionalRange(Calculation):
if __name__ == "__main__":
from pymetadata.console import console
- pars: List[Parameter] = [
+ pars: list[Parameter] = [
Parameter(sid="p1", value=10.0, unit="mM"),
Parameter(sid="p2", value=0),
]
console.log(pars)
- dims: List[AppliedDimension] = [
+ dims: list[AppliedDimension] = [
AppliedDimension(sid="dim1", target="repeated_task1")
]
console.log(dims)
- vars: List[Variable] = [
+ vars: list[Variable] = [
Variable(
sid="S1_model1",
target="S1",
diff --git a/src/sbmlsim/simulation/range.py b/src/sbmlsim/simulation/range.py
index 08a8f192..47be1723 100644
--- a/src/sbmlsim/simulation/range.py
+++ b/src/sbmlsim/simulation/range.py
@@ -1,8 +1,9 @@
"""Module handling ranges."""
+
import itertools
-from abc import ABC, abstractmethod
-from enum import Enum, auto, unique
-from typing import Dict, Iterable, List, Tuple, Union
+from abc import abstractmethod
+from enum import Enum, auto
+from typing import Iterable, Tuple, Union
import numpy as np
@@ -52,7 +53,7 @@ class VectorRange(Range):
def __init__(
self,
sid: str,
- values: Union[List, Tuple, np.ndarray],
+ values: Union[list, Tuple, np.ndarray],
name: str = None,
):
"""Construct VectorRange."""
@@ -192,8 +193,8 @@ class FunctionalRange(Calculation, Range):
def __init__(
self,
sid: str,
- variables: List[Variable],
- parameters: List[Parameter],
+ variables: list[Variable],
+ parameters: list[Parameter],
math: str,
range: str,
name: str = None,
@@ -227,7 +228,7 @@ class Dimension:
the index is the corresponding index of the dimension.
"""
- def __init__(self, dimension: str, index: np.ndarray = None, changes: Dict = None):
+ def __init__(self, dimension: str, index: np.ndarray = None, changes: dict = None):
"""Dimension.
If no index is provided the index is calculated from the changes.
@@ -268,7 +269,7 @@ def __len__(self) -> int:
return len(self.index)
@staticmethod
- def indices_from_dimensions(dimensions: List["Dimension"]):
+ def indices_from_dimensions(dimensions: list["Dimension"]):
"""Get indices of all combinations of dimensions."""
index_vecs = [dim.index for dim in dimensions]
return list(itertools.product(*index_vecs))
diff --git a/src/sbmlsim/simulation/scan.py b/src/sbmlsim/simulation/scan.py
index 2a065ad4..1bc7f24e 100644
--- a/src/sbmlsim/simulation/scan.py
+++ b/src/sbmlsim/simulation/scan.py
@@ -2,8 +2,8 @@
Allows scans over other simulations.
"""
+
from copy import deepcopy
-from typing import Dict, List
import numpy as np
from pymetadata import log
@@ -24,8 +24,8 @@ class ScanSim(AbstractSim):
def __init__(
self,
simulation: AbstractSim,
- dimensions: List[Dimension] = None,
- mapping: Dict[str, int] = None,
+ dimensions: list[Dimension] = None,
+ mapping: dict[str, int] = None,
):
"""Scan a simulation.
@@ -69,7 +69,7 @@ def __repr__(self) -> str:
f"[{', '.join([str(d) for d in self.dimensions])}])"
)
- def dimensions(self) -> List[Dimension]:
+ def dimensions(self) -> list[Dimension]:
"""Get dimensions."""
return self.dimensions
@@ -84,7 +84,7 @@ def indices(self):
"""Get indices of all combinations."""
return Dimension.indices_from_dimensions(self.dimensions)
- def add_model_changes(self, model_changes: Dict) -> None:
+ def add_model_changes(self, model_changes: dict) -> None:
"""Add model changes to first timecourse."""
# import here to avoid circular import
from sbmlsim.simulation import TimecourseSim
@@ -144,7 +144,7 @@ def to_simulations(self):
from sbmlsim.simulation import Timecourse, TimecourseSim
from sbmlsim.units import UnitRegistry
- ureg = UnitRegistry(on_redefinition='ignore')
+ ureg = UnitRegistry(on_redefinition="ignore")
Q_ = ureg.Quantity
uinfo = UnitsInformation(
udict={k: "dimensionless" for k in ["X", "[X]", "n", "Y"]}, ureg=ureg
diff --git a/src/sbmlsim/simulation/sensitivity.py b/src/sbmlsim/simulation/sensitivity.py
index a6ff0e3f..7be13dc4 100644
--- a/src/sbmlsim/simulation/sensitivity.py
+++ b/src/sbmlsim/simulation/sensitivity.py
@@ -4,13 +4,15 @@
"""
from enum import Enum
-from typing import Dict, Iterable
+from typing import Iterable
import libsbml
import numpy as np
from pymetadata import log
from pymetadata.console import console
import roadrunner
+
+from sbmlsim.model import RoadrunnerSBMLModel
from sbmlsim.simulation import Dimension, ScanSim, TimecourseSim
@@ -114,7 +116,7 @@ def distribution_sensitivity_scan(
@staticmethod
def create_sampling_dimension(
model: roadrunner.RoadRunner,
- changes: Dict = None,
+ changes: dict = None,
cv: float = 0.1,
size: int = 10,
distribution: DistributionType = DistributionType.NORMAL_DISTRIBUTION,
@@ -155,7 +157,7 @@ def create_sampling_dimension(
@staticmethod
def create_difference_dimension(
model: roadrunner.RoadRunner,
- changes: Dict = None,
+ changes: dict = None,
difference: float = 0.1,
stype: SensitivityType = SensitivityType.PARAMETER_SENSITIVITY,
exclude_filter=None,
@@ -193,13 +195,13 @@ def create_difference_dimension(
@staticmethod
def reference_dict(
- model: roadrunner.RoadRunner,
- changes: Dict = None,
+ model: RoadrunnerSBMLModel,
+ changes: dict = None,
stype: SensitivityType = SensitivityType.PARAMETER_SENSITIVITY,
exclude_filter=None,
exclude_zero: bool = True,
zero_eps: float = 1e-8,
- ) -> Dict:
+ ) -> dict:
"""Get key:value dict for sensitivity analysis.
Values are based on the reference state of the model with the applied
@@ -212,14 +214,14 @@ def reference_dict(
:return:
"""
# reset model
- model.resetAll()
+ model.r.resetAll()
# apply normalized model changes
if changes is None:
changes = {}
for key, item in changes.items():
try:
- model[key] = item.magnitude
+ model.r[key] = item.magnitude
except AttributeError as err:
logger.error(
f"Change is not a Quantity with unit: '{key} = {item}'. "
@@ -227,7 +229,7 @@ def reference_dict(
)
raise err
- doc: libsbml.SBMLDocument = libsbml.readSBMLFromString(model.getSBML())
+ doc: libsbml.SBMLDocument = libsbml.readSBMLFromString(model.r.getSBML())
sbml_model: libsbml.Model = doc.getModel()
ids = []
@@ -250,17 +252,17 @@ def reference_dict(
for s in sbml_model.getListOfSpecies():
ids.append(s.getId())
- def value_dict(ids: Iterable[str]) -> Dict[str, float]:
+ def value_dict(ids: Iterable[str]) -> dict[str, float]:
"""Key: value dict from current model state.
Non-zero and exclude filtering is applied.
"""
- d: Dict[str, float] = {}
+ d: dict[str, float] = {}
for key in sorted(ids):
if exclude_filter and exclude_filter(key):
continue
- value = model[key]
+ value = model.r[key]
if exclude_zero:
if np.abs(value) < zero_eps:
continue
diff --git a/src/sbmlsim/simulation/simulation.py b/src/sbmlsim/simulation/simulation.py
index 824646af..798dc01f 100644
--- a/src/sbmlsim/simulation/simulation.py
+++ b/src/sbmlsim/simulation/simulation.py
@@ -1,7 +1,7 @@
"""Abstract base simulation."""
+
import abc
from abc import ABC
-from typing import Dict, List
from pymetadata import log
@@ -117,7 +117,7 @@ class AbstractSim(ABC):
"""
@abc.abstractmethod
- def dimensions(self) -> List[Dimension]:
+ def dimensions(self) -> list[Dimension]:
"""Get dimension of the simulation."""
raise NotImplementedError
@@ -127,11 +127,11 @@ def normalize(self, uinfo: UnitsInformation) -> None:
raise NotImplementedError
@abc.abstractmethod
- def add_model_changes(self, changes: Dict) -> None:
+ def add_model_changes(self, changes: dict) -> None:
"""Add model changes to model."""
raise NotImplementedError
- def to_dict(self) -> Dict[str, str]:
+ def to_dict(self) -> dict[str, str]:
"""Convert to dictionary."""
d = {
"type": self.__class__.__name__,
diff --git a/src/sbmlsim/simulation/timecourse.py b/src/sbmlsim/simulation/timecourse.py
index 543174b3..6559a8b8 100644
--- a/src/sbmlsim/simulation/timecourse.py
+++ b/src/sbmlsim/simulation/timecourse.py
@@ -1,8 +1,9 @@
"""Definition of timecourses and timecourse simulations."""
+
import json
from copy import deepcopy
from pathlib import Path
-from typing import Any, Dict, List, Optional, Union
+from typing import Any, Optional, Union
import numpy as np
from pint import Quantity
@@ -10,7 +11,7 @@
from sbmlsim.serialization import ObjectJSONEncoder
from sbmlsim.simulation import AbstractSim, Dimension
-from sbmlsim.units import Units, UnitsInformation
+from sbmlsim.units import UnitsInformation
logger = log.get_logger(__name__)
@@ -33,8 +34,8 @@ def __init__(
start: float,
end: float,
steps: int,
- changes: Dict[str, Quantity] = None,
- model_changes: Dict[str, Quantity] = None,
+ changes: dict[str, Quantity] = None,
+ model_changes: dict[str, Quantity] = None,
model_manipulations: dict = None,
discard: bool = False,
):
@@ -71,7 +72,7 @@ def __repr__(self) -> str:
"""Get string representation."""
return f"Timecourse([{self.start}:{self.end}])"
- def to_dict(self) -> Dict[str, Any]:
+ def to_dict(self) -> dict[str, Any]:
"""Convert to dictionary."""
d = dict()
for key in self.__dict__:
@@ -90,7 +91,7 @@ def add_model_change(self, sid: str, change) -> None:
"""Add model change."""
self.model_changes[sid] = change
- def add_model_changes(self, model_changes: Dict[str, str]) -> None:
+ def add_model_changes(self, model_changes: dict[str, str]) -> None:
"""Add model changes."""
self.model_changes.update(model_changes)
@@ -124,8 +125,8 @@ class TimecourseSim(AbstractSim):
def __init__(
self,
- timecourses: Union[List[Timecourse], Timecourse],
- selections: Optional[List[str]] = None,
+ timecourses: Union[list[Timecourse], Timecourse],
+ selections: Optional[list[str]] = None,
reset: bool = True,
time_offset: float = 0.0,
):
@@ -181,11 +182,11 @@ def _time(self) -> np.ndarray:
res: np.ndarray = np.concatenate(time_vecs)
return res
- def dimensions(self) -> List[Dimension]:
+ def dimensions(self) -> list[Dimension]:
"""Get dimensions."""
return [Dimension(dimension="time", index=self.time)]
- def add_model_changes(self, model_changes: Dict) -> None:
+ def add_model_changes(self, model_changes: dict) -> None:
"""Add model changes to given simulation."""
if self.timecourses:
tc = self.timecourses[0] # type: Timecourse
@@ -202,7 +203,7 @@ def strip_units(self) -> None:
for tc in self.timecourses:
tc.strip_units()
- def to_dict(self) -> Dict[str, Any]:
+ def to_dict(self) -> dict[str, Any]:
"""Convert to dictionary."""
d = {
"type": self.__class__.__name__,
diff --git a/src/sbmlsim/simulator/__init__.py b/src/sbmlsim/simulator/__init__.py
index b68fa2e8..e181cb72 100644
--- a/src/sbmlsim/simulator/__init__.py
+++ b/src/sbmlsim/simulator/__init__.py
@@ -1,2 +1,7 @@
"""Package for simulator."""
+
from .simulation_serial import SimulatorSerial
+
+__all__ = [
+ "SimulatorSerial",
+]
diff --git a/src/sbmlsim/simulator/simulation_serial.py b/src/sbmlsim/simulator/simulation_serial.py
index 78a9c488..76e9e099 100644
--- a/src/sbmlsim/simulator/simulation_serial.py
+++ b/src/sbmlsim/simulator/simulation_serial.py
@@ -1,5 +1,6 @@
"""Serial simulator."""
-from typing import List, Optional, Union
+
+from typing import Optional, Union
from pathlib import Path
import roadrunner
import pandas as pd
@@ -23,7 +24,11 @@ class SimulatorSerial:
cores.
"""
- def __init__(self, model: Union[str|Path|RoadrunnerSBMLModel|AbstractModel] = None, **kwargs):
+ def __init__(
+ self,
+ model: str | Path | RoadrunnerSBMLModel | AbstractModel | None = None,
+ **kwargs,
+ ):
"""Initialize serial simulator.
:param model: Path to model or model
@@ -36,13 +41,13 @@ def __init__(self, model: Union[str|Path|RoadrunnerSBMLModel|AbstractModel] = No
self.integrator_settings = {
"absolute_tolerance": 1e-10,
"relative_tolerance": 1e-10,
- **kwargs
+ **kwargs,
}
# set model
self.set_model(model)
- def set_model(self, model: Union[str|Path|RoadrunnerSBMLModel|AbstractModel]):
+ def set_model(self, model: str | Path | RoadrunnerSBMLModel | AbstractModel):
"""Set model for simulator and updates the integrator settings."""
# logger.info("SimulatorSerial.set_model")
self.model = None
@@ -67,7 +72,6 @@ def set_model(self, model: Union[str|Path|RoadrunnerSBMLModel|AbstractModel]):
self.set_integrator_settings(**self.integrator_settings)
# logger.info("model loading finished")
-
def set_integrator_settings(self, **kwargs):
"""Set settings in the integrator."""
RoadrunnerSBMLModel.set_integrator_settings(self.r, **kwargs)
@@ -109,7 +113,7 @@ def run_scan(self, scan: ScanSim) -> XResult:
# based on the indices the result structure must be created
return XResult.from_dfs(dfs=dfs, scan=scan, uinfo=self.uinfo)
- def _timecourses(self, simulations: List[TimecourseSim]) -> List[pd.DataFrame]:
+ def _timecourses(self, simulations: list[TimecourseSim]) -> list[pd.DataFrame]:
return [self._timecourse(sim) for sim in simulations]
def _timecourse(self, simulation: TimecourseSim) -> pd.DataFrame:
@@ -132,7 +136,6 @@ def _timecourse(self, simulation: TimecourseSim) -> pd.DataFrame:
frames = []
t_offset = simulation.time_offset
for k, tc in enumerate(simulation.timecourses):
-
if k == 0 and tc.model_changes:
# [1] apply model changes of first simulation
logger.debug("Applying model changes")
@@ -188,7 +191,6 @@ def _timecourse(self, simulation: TimecourseSim) -> pd.DataFrame:
if tc.changes:
logger.debug("Applying simulation changes")
for key, item in tc.changes.items():
-
# FIXME: handle concentrations/amounts/default
# TODO: Figure out the hasOnlySubstanceUnit flag! (roadrunner)
# r: roadrunner.ExecutableModel = self.r
@@ -199,7 +201,6 @@ def _timecourse(self, simulation: TimecourseSim) -> pd.DataFrame:
self.r[key] = float(item)
logger.debug(f"\t{key} = {item}")
-
# run simulation
integrator = self.r.integrator
# FIXME: support simulation by times
diff --git a/src/sbmlsim/task/__init__.py b/src/sbmlsim/task/__init__.py
index af77c33a..b6fa68c2 100644
--- a/src/sbmlsim/task/__init__.py
+++ b/src/sbmlsim/task/__init__.py
@@ -1,2 +1,7 @@
"""Package for tasks."""
+
from .task import Task
+
+__all__ = [
+ "Task",
+]
diff --git a/src/sbmlsim/task/task.py b/src/sbmlsim/task/task.py
index feadf46c..319c9e98 100644
--- a/src/sbmlsim/task/task.py
+++ b/src/sbmlsim/task/task.py
@@ -1,5 +1,4 @@
"""Tasks."""
-from typing import Dict
class Task:
@@ -34,7 +33,7 @@ def __repr__(self) -> str:
"""Get representation."""
return f"Task(model={self.model_id} simulation={self.simulation_id})"
- def to_dict(self) -> Dict[str, str]:
+ def to_dict(self) -> dict[str, str]:
"""Convert to dictionary."""
d = {
"model": self.model_id,
diff --git a/src/sbmlsim/task/task_new.py b/src/sbmlsim/task/task_new.py
index ef37dc5d..c8b99fd0 100644
--- a/src/sbmlsim/task/task_new.py
+++ b/src/sbmlsim/task/task_new.py
@@ -19,7 +19,6 @@
"""
from dataclasses import dataclass
-from typing import List
from sbmlsim.simulation import Dimension
@@ -32,8 +31,8 @@ class Change:
target: str
symbol: str
- variables: List # current values
- parameters: List
+ variables: list # current values
+ parameters: list
math: str
range: str # this is precalculated
@@ -68,8 +67,8 @@ class SubTask:
model: str
simulation: str
- changes: List[str]
- model_changes: List[str]
- model_manipulations: List[str]
+ changes: list[str]
+ model_changes: list[str]
+ model_manipulations: list[str]
order: int
discard: bool = False
diff --git a/src/sbmlsim/units.py b/src/sbmlsim/units.py
index a1410b93..52fe5d27 100644
--- a/src/sbmlsim/units.py
+++ b/src/sbmlsim/units.py
@@ -2,12 +2,13 @@
Used for model and data unit conversions.
"""
+
from __future__ import annotations
import os
import warnings
from collections.abc import MutableMapping
from pathlib import Path
-from typing import Dict, Iterator, Optional, Union
+from typing import Iterator, Optional, Union
import libsbml
import numpy as np
@@ -30,7 +31,7 @@
Quantity([])
logger = log.get_logger(__name__)
-UdictType = Dict[str, str]
+UdictType = dict[str, str]
class UnitsInformation(MutableMapping):
@@ -128,10 +129,10 @@ def from_sbml(
]
@staticmethod
- def model_uid_dict(model: libsbml.Model, ureg: UnitRegistry) -> Dict[str, str]:
+ def model_uid_dict(model: libsbml.Model, ureg: UnitRegistry) -> dict[str, str]:
"""Populate the model uid dict for lookup."""
- uid_dict: Dict[str, str] = {}
+ uid_dict: dict[str, str] = {}
# add SBML definitions
for key in UnitsInformation.sbml_uids:
@@ -195,10 +196,10 @@ def from_sbml_doc(
if not model:
ValueError(f"No model found in SBMLDocument: {doc}")
- uid_dict: Dict[str, str] = UnitsInformation.model_uid_dict(model, ureg=ureg)
+ uid_dict: dict[str, str] = UnitsInformation.model_uid_dict(model, ureg=ureg)
# add additional units
- udict: Dict[str, str] = {}
+ udict: dict[str, str] = {}
# add time unit
time_uid: str = model.getTimeUnits()
@@ -290,7 +291,7 @@ def from_sbml_doc(
@staticmethod
def _default_ureg() -> pint.UnitRegistry:
"""Get default unit registry."""
- ureg = pint.UnitRegistry(on_redefinition='ignore')
+ ureg = pint.UnitRegistry(on_redefinition="ignore")
ureg.define("none = count")
ureg.define("item = count")
ureg.define("percent = 0.01*count")
@@ -306,8 +307,8 @@ def _default_ureg() -> pint.UnitRegistry:
@staticmethod
def normalize_changes(
- changes: Dict[str, Quantity], uinfo: "UnitsInformation"
- ) -> Dict[str, Quantity]:
+ changes: dict[str, Quantity], uinfo: "UnitsInformation"
+ ) -> dict[str, Quantity]:
"""Normalize all changes to units in given units dictionary.
This is a major helper function allowing to convert changes
@@ -343,9 +344,7 @@ def normalize_changes(
item = Q_(item, uinfo[key])
except DimensionalityError as err:
logger.error(
- f"DimensionalityError "
- f"'{key} = {item}'."
- f"\n{err}"
+ f"DimensionalityError '{key} = {item}'.\n{err}"
)
changes_normed[key] = item
@@ -450,7 +449,6 @@ def udef_to_str(cls, udef: libsbml.UnitDefinition) -> str:
if __name__ == "__main__":
from sbmlsim.resources import DEMO_SBML
- model_path = MODEL_DEMO
- ureg = UnitRegistry(on_redefinition='ignore')
- uinfo = UnitsInformation.from_sbml(model_path, ureg=ureg)
+ ureg = UnitRegistry(on_redefinition="ignore")
+ uinfo = UnitsInformation.from_sbml(DEMO_SBML, ureg=ureg)
console.log(uinfo.udict)
diff --git a/tests/comparison/test_diff.py b/tests/comparison/test_diff.py
index aa6c2293..63ff1160 100644
--- a/tests/comparison/test_diff.py
+++ b/tests/comparison/test_diff.py
@@ -1,7 +1,8 @@
"""Test difference."""
+
import pytest
-from tests.data.diff import simulate_examples
+from data.diff import simulate_examples
@pytest.mark.skip(reason="no diff support")
diff --git a/tests/data/combine/omex/henkel_biomodels/create_omex.py b/tests/data/combine/omex/henkel_biomodels/create_omex.py
index a5cb94e5..27edac08 100755
--- a/tests/data/combine/omex/henkel_biomodels/create_omex.py
+++ b/tests/data/combine/omex/henkel_biomodels/create_omex.py
@@ -1,11 +1,11 @@
"""Create omex files from SED-ML files."""
from pathlib import Path
-from typing import List
from pymetadata import omex as pyomex
from pymetadata import log
+
logger = log.get_logger(__name__)
@@ -29,7 +29,7 @@ def create_omex_from_sedml(sedml_path: Path, omex_path: Path) -> None:
def create_all_omex() -> None:
"""Create all omex from the SED-ML file."""
- sedml_paths: List[Path] = []
+ sedml_paths: list[Path] = []
for p in SEDML_DIR.rglob("*"):
sedml_suffixes = {".xml", ".sedml"}
if p.is_file() and p.suffix in sedml_suffixes:
diff --git a/tests/fit/test_fit.py b/tests/fit/test_fit.py
index ac6ba9ec..c061cc7e 100644
--- a/tests/fit/test_fit.py
+++ b/tests/fit/test_fit.py
@@ -1,6 +1,7 @@
"""Test fit."""
+
from pathlib import Path
-from typing import Any, Dict
+from typing import Any
import pytest
@@ -47,7 +48,7 @@
@pytest.mark.skip(reason="no fit support")
@pytest.mark.parametrize("fit_kwargs", fit_kwargs_testdata)
-def test_fit_settings(fit_kwargs: Dict[str, Any]) -> None:
+def test_fit_settings(fit_kwargs: dict[str, Any]) -> None:
"""Test various arguments to optimization problem."""
op = op_mid1oh_iv()
opt_result: OptimizationResult = run_optimization(
@@ -56,7 +57,7 @@ def test_fit_settings(fit_kwargs: Dict[str, Any]) -> None:
size=1,
n_cores=1,
serial=True,
- **fit_kwargs
+ **fit_kwargs,
)
assert opt_result is not None
@@ -83,7 +84,7 @@ def test_optimization_analysis(tmp_path: Path) -> None:
algorithm=OptimizationAlgorithmType.LEAST_SQUARE,
size=1,
n_cores=1,
- **fit_kwargs_default
+ **fit_kwargs_default,
)
op_analysis = OptimizationAnalysis(
opt_result=opt_result,
@@ -91,7 +92,7 @@ def test_optimization_analysis(tmp_path: Path) -> None:
output_name="tests",
op=op,
show_plots=False,
- **fit_kwargs_default
+ **fit_kwargs_default,
)
op_analysis.run()
@@ -116,7 +117,7 @@ def test_loss_function(loss_function: LossFunctionType) -> None:
size=1,
n_cores=1,
serial=True,
- **fit_kwargs_default
+ **fit_kwargs_default,
)
assert opt_result
assert op.loss_function == loss_function
@@ -131,7 +132,7 @@ def test_fit_lsq_serial() -> None:
size=1,
n_cores=1,
serial=True,
- **fit_kwargs_default
+ **fit_kwargs_default,
)
assert opt_result is not None
@@ -145,7 +146,7 @@ def test_fit_de_serial() -> None:
size=1,
n_cores=1,
serial=True,
- **fit_kwargs_default
+ **fit_kwargs_default,
)
assert opt_result is not None
@@ -159,7 +160,7 @@ def test_fit_lsq_parallel() -> None:
size=1,
n_cores=1,
serial=False,
- **fit_kwargs_default
+ **fit_kwargs_default,
)
assert opt_result is not None
@@ -173,6 +174,6 @@ def test_fit_de_parallel():
size=1,
n_cores=1,
serial=False,
- **fit_kwargs_default
+ **fit_kwargs_default,
)
assert opt_result is not None
diff --git a/tests/simulation/test_simulation.py b/tests/simulation/test_simulation.py
index d9632d3c..48437edc 100644
--- a/tests/simulation/test_simulation.py
+++ b/tests/simulation/test_simulation.py
@@ -1,14 +1,15 @@
"""Test simulations."""
-from sbmlsim.model import ModelChange
+from sbmlsim.model import RoadrunnerSBMLModel
from sbmlsim.simulation import Timecourse, TimecourseSim
from sbmlsim.simulator import SimulatorSerial
+from sbmlsim.resources import REPRESSILATOR_SBML
-def test_timecourse_simulation(repressilator_model_state: str) -> None:
+def test_timecourse_simulation() -> None:
"""Run timecourse simulation."""
- simulator = SimulatorSerial()
- simulator.set_model(repressilator_model_state)
+ model = RoadrunnerSBMLModel(REPRESSILATOR_SBML)
+ simulator = SimulatorSerial(model)
tc = Timecourse(start=0, end=100, steps=100)
s = simulator.run_timecourse(TimecourseSim(tc))
@@ -28,38 +29,10 @@ def test_timecourse_simulation(repressilator_model_state: str) -> None:
assert xres is not None
-def test_timecourse_combined(repressilator_model_state: str) -> None:
- """Test timecourse combination."""
- simulator = SimulatorSerial()
- simulator.set_model(repressilator_model_state)
-
- xres = simulator.run_timecourse(
- simulation=TimecourseSim(
- [
- Timecourse(start=0, end=100, steps=100),
- Timecourse(
- start=0,
- end=50,
- steps=100,
- model_changes={ModelChange.CLAMP_SPECIES: {"X": True}},
- ),
- Timecourse(
- start=0,
- end=100,
- steps=100,
- model_changes={ModelChange.CLAMP_SPECIES: {"X": False}},
- ),
- ]
- )
- )
-
- assert xres._time.values[-1] == 250.0
-
-
-def test_timecourse_concat(repressilator_model_state: str) -> None:
+def test_timecourse_concat() -> None:
"""Reuse of timecourses."""
- simulator = SimulatorSerial()
- simulator.set_model(repressilator_model_state)
+ model = RoadrunnerSBMLModel(REPRESSILATOR_SBML)
+ simulator = SimulatorSerial(model)
tc = Timecourse(start=0, end=50, steps=100, changes={"X": 10})
xres = simulator.run_timecourse(simulation=TimecourseSim([tc] * 3))
@@ -70,10 +43,10 @@ def test_timecourse_concat(repressilator_model_state: str) -> None:
assert xres["[X]"].values[202] == 10.0
-def test_timecourse_empty(repressilator_model_state: str) -> None:
+def test_timecourse_empty() -> None:
"""Reuse of timecourses."""
- simulator = SimulatorSerial()
- simulator.set_model(repressilator_model_state)
+ model = RoadrunnerSBMLModel(REPRESSILATOR_SBML)
+ simulator = SimulatorSerial(model)
tc = Timecourse(start=0, end=50, steps=100, changes={"X": 10})
tcsim = TimecourseSim([None, tc, None])
@@ -85,10 +58,10 @@ def test_timecourse_empty(repressilator_model_state: str) -> None:
assert len(xres._time) == 101
-def test_timecourse_discard(repressilator_model_state: str) -> None:
+def test_timecourse_discard() -> None:
"""Test discarding pre-simulation."""
- simulator = SimulatorSerial()
- simulator.set_model(repressilator_model_state)
+ model = RoadrunnerSBMLModel(REPRESSILATOR_SBML)
+ simulator = SimulatorSerial(model)
xres = simulator.run_timecourse(
simulation=TimecourseSim(
diff --git a/tests/test_sensitivity.py b/tests/test_sensitivity.py
index c59d23f5..ac456e95 100644
--- a/tests/test_sensitivity.py
+++ b/tests/test_sensitivity.py
@@ -1,4 +1,5 @@
"""Test sensitivity simulations."""
+
import pytest
from sbmlsim.examples import example_sensitivity
@@ -52,5 +53,5 @@ def test_sensitivity_change() -> None:
plus = ModelSensitivity.apply_change_to_dict(p_ref, change=0.1)
minus = ModelSensitivity.apply_change_to_dict(p_ref, change=-0.1)
for key in ["KM", "eff", "n", "ps_0", "ps_a", "tau_mRNA", "tau_prot"]:
- assert pytest.approx(1.1 * p_ref[key].magnitude) == plus[key].magnitude
- assert pytest.approx(0.9 * p_ref[key].magnitude) == minus[key].magnitude
+ assert pytest.approx(1.1 * p_ref[key]) == plus[key]
+ assert pytest.approx(0.9 * p_ref[key]) == minus[key]
diff --git a/tests/test_units.py b/tests/test_units.py
index bd940e52..97f8099b 100644
--- a/tests/test_units.py
+++ b/tests/test_units.py
@@ -1,6 +1,7 @@
"""Test units."""
+
from pathlib import Path
-from typing import List, Tuple
+from typing import Tuple
import libsbml
import pytest
@@ -10,7 +11,7 @@
from sbmlsim.units import UnitRegistry, Units, UnitsInformation
-sbml_paths: List[Path] = [
+sbml_paths: list[Path] = [
DEMO_SBML,
MIDAZOLAM_SBML,
REPRESSILATOR_SBML,
@@ -54,7 +55,7 @@ def test_example_units() -> None:
example_units.run_demo_example()
-def create_udef_examples() -> List[Tuple[libsbml.UnitDefinition, str]]:
+def create_udef_examples() -> list[Tuple[libsbml.UnitDefinition, str]]:
"""Create example UnitDefinitions for testing."""
udef0 = libsbml.UnitDefinition(3, 1)
@@ -90,6 +91,6 @@ def create_udef_examples() -> List[Tuple[libsbml.UnitDefinition, str]]:
@pytest.mark.parametrize("udef, s", udef_examples)
def test_udef_to_str(udef: libsbml.UnitDefinition, s: str) -> None:
"""Test UnitDefinition to string."""
- _ = libsbml.UnitDefinition_printUnits(udef)
+ _ = libsbml.UnitDefinition.printUnits(udef)
s2 = Units.udef_to_str(udef)
assert s2 == s