Simulation-related changes required for M4.5.7 #125
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Changes from 18 commits
c8680fc
caa7150
53e53cb
a8a4970
04d926d
300d00f
333f502
0ee08b4
ee06716
7cbffc3
2ac972f
84f25d2
7219d1a
ff85f35
511c951
3b49afd
132ac98
3219e04
da8ca4d
dbc99db
File filter
Filter by extension
Conversations
Jump to
Diff view
Diff view
There are no files selected for viewing
Original file line number | Diff line number | Diff line change |
---|---|---|
@@ -1,71 +1,159 @@

```python
import json
import osparc
import tempfile

from app.config import Config
from flask import abort
from osparc.rest import ApiException
from time import sleep


OPENCOR_SOLVER = "simcore/services/comp/opencor"
DATASET_4_SOLVER = "simcore/services/comp/rabbit-ss-0d-cardiac-model"
DATASET_17_SOLVER = "simcore/services/comp/human-gb-0d-cardiac-model"
```
Comment: Would switching to a dictionary remove some of the `if` statements? Something like: `SOLVER_LOOKUP = { ... }`

Reply: I can't see how it would. Here, we really want to distinguish between CellML-based simulation datasets (which require …)
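For reference, the dictionary idea floated above might look something like the following sketch. The names are illustrative, and, as the reply notes, it only validates solver names against their expected settings key; it does not by itself remove the CellML-vs-oSPARC branching:

```python
# Hypothetical sketch of the suggested SOLVER_LOOKUP: it maps a solver name
# to the settings key ("opencor" or "osparc") its request payload must carry.
SOLVER_LOOKUP = {
    "simcore/services/comp/opencor": "opencor",
    "simcore/services/comp/rabbit-ss-0d-cardiac-model": "osparc",
    "simcore/services/comp/human-gb-0d-cardiac-model": "osparc",
    "simcore/services/comp/kember-cardiac-model": "osparc",
}


def required_settings_key(solver_name):
    # Returns "opencor" or "osparc", or None for an unknown solver.
    return SOLVER_LOOKUP.get(solver_name)
```

A caller could then reject a request whenever `required_settings_key(...)` returns `None` or names a key missing from the payload.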
```python
DATASET_78_SOLVER = "simcore/services/comp/kember-cardiac-model"


class SimulationException(Exception):
    pass


def start_simulation(data):
```
Comment: Is it worth splitting up `start_simulation` into a few more functions? The flow is kind of hard for me to follow with all of the `if` statements.

Reply: I tried to think of a way to split this function but couldn't come up with a simple solution, not least since it relies on the use of exceptions, which would need to be propagated.
```python
    # Determine the type of simulation.

    solver_name = data["solver"]["name"]

    if solver_name == OPENCOR_SOLVER:
        if "opencor" not in data:
            abort(400, description="Missing OpenCOR settings")
    else:
        if "osparc" in data:
            if ((solver_name != DATASET_4_SOLVER)
                    and (solver_name != DATASET_17_SOLVER)
                    and (solver_name != DATASET_78_SOLVER)):
                abort(400, description="Unknown oSPARC solver")
        else:
            abort(400, description="Missing oSPARC settings")

    # Start the simulation.

    try:
        api_client = osparc.ApiClient(osparc.Configuration(
            host=Config.OSPARC_API_URL,
            username=Config.OSPARC_API_KEY,
            password=Config.OSPARC_API_SECRET
        ))

        # Upload the configuration file, in the case of an OpenCOR simulation,
        # or the input file, in the case of an oSPARC simulation.

        has_solver_input = "input" in data["solver"]

        if solver_name == OPENCOR_SOLVER:
            temp_config_file = tempfile.NamedTemporaryFile(mode="w+")

            json.dump(data["opencor"]["json_config"], temp_config_file)

            temp_config_file.seek(0)

            try:
                files_api = osparc.FilesApi(api_client)

                config_file = files_api.upload_file(temp_config_file.name)
            except ApiException as e:
                raise SimulationException(
                    f"the simulation configuration file could not be uploaded ({e})")

            temp_config_file.close()
        elif has_solver_input:
            temp_input_file = tempfile.NamedTemporaryFile(mode="w+")

            temp_input_file.write(data["solver"]["input"]["value"])
            temp_input_file.seek(0)

            try:
                files_api = osparc.FilesApi(api_client)

                input_file = files_api.upload_file(temp_input_file.name)
            except ApiException as e:
                raise SimulationException(
                    f"the solver input file could not be uploaded ({e})")

            temp_input_file.close()

        # Create the simulation job with the job inputs that match our
        # simulation type.

        solvers_api = osparc.SolversApi(api_client)

        try:
            solver = solvers_api.get_solver_release(
                solver_name, data["solver"]["version"])
        except ApiException as e:
            raise SimulationException(
                f"the requested solver could not be retrieved ({e})")

        if solver_name == OPENCOR_SOLVER:
            job_inputs = {
                "model_url": data["opencor"]["model_url"],
                "config_file": config_file
            }
        else:
            if has_solver_input:
                data["osparc"]["job_inputs"][data["solver"]["input"]["name"]] = input_file

            job_inputs = data["osparc"]["job_inputs"]

        job = solvers_api.create_job(
            solver.id,
            solver.version,
            osparc.JobInputs(job_inputs)
        )

        # Start the simulation job.

        status = solvers_api.start_job(solver.id, solver.version, job.id)

        if status.state != "PUBLISHED":
            raise SimulationException(
                "the simulation job could not be submitted")

        res = {
            "status": "ok",
            "data": {
                "job_id": job.id,
                "solver": {
                    "name": solver.id,
                    "version": solver.version
                }
            }
        }
    except SimulationException as e:
        res = {
            "status": "nok",
            "description": e.args[0] if len(e.args) > 0 else "unknown"
        }

    return res


def check_simulation(data):
    try:
        # Check whether the simulation has completed (or failed).

        api_client = osparc.ApiClient(osparc.Configuration(
            host=Config.OSPARC_API_URL,
            username=Config.OSPARC_API_KEY,
            password=Config.OSPARC_API_SECRET
        ))
        solvers_api = osparc.SolversApi(api_client)
        job_id = data["job_id"]
```
Comment: Do we need to check the keys from `data`? Or are they always correct?

Reply: The data is checked on line 1145 in main.py.

Reply: Oh yes, I had forgotten that I had done that check. :)

Reply: Thanks @alan-wu! I missed that.
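As the thread concludes, the keys are validated upstream in main.py before this function is called. If an in-function guard were ever wanted, a minimal sketch (a hypothetical helper, not part of this PR) could check the nested keys that `check_simulation` reads:

```python
def has_required_keys(data):
    # Hypothetical guard: verifies the nested keys that check_simulation
    # reads ("job_id", "solver"."name" and "solver"."version").
    solver = data.get("solver", {})
    return (
        "job_id" in data
        and "name" in solver
        and "version" in solver
    )
```

A route handler could call this and `abort(400, ...)` when it returns `False`, mirroring the checks already done in main.py.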
```python
        solver_name = data["solver"]["name"]
        solver_version = data["solver"]["version"]
        status = solvers_api.inspect_job(solver_name, solver_version, job_id)

        if status.progress == 100:
            # The simulation has completed, but was it successful?

            if status.state != "SUCCESS":
                raise SimulationException("the simulation failed")
```

@@ -74,33 +162,44 @@ def run_simulation(model_url, json_config):

```python
            try:
                outputs = solvers_api.get_job_outputs(
                    solver_name, solver_version, job_id)
            except ApiException as e:
                raise SimulationException(
                    f"the simulation job outputs could not be retrieved ({e})")

            # Download the simulation results.

            try:
                files_api = osparc.FilesApi(api_client)

                results_filename = files_api.download_file(
                    outputs.results[list(outputs.results.keys())[0]].id)
            except ApiException as e:
                raise SimulationException(
                    f"the simulation results could not be retrieved ({e})")

            results_file = open(results_filename, "r")

            res = {
                "status": "ok"
            }

            if solver_name == OPENCOR_SOLVER:
                res["results"] = json.load(results_file)
            else:
                res["results"] = results_file.read()

            results_file.close()
        else:
            # The simulation is not complete yet.

            res = {
                "status": "ok"
            }
    except SimulationException as e:
        res = {
            "status": "nok",
            "description": e.args[0] if len(e.args) > 0 else "unknown"
        }

    return res
```
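A caller is expected to start a job and then poll `check_simulation` until its `"ok"` response carries `"results"` (or until a `"nok"` response reports a failure). A minimal polling loop might look like the following sketch; the check function is stubbed by the caller here, since a real caller would pass `check_simulation` together with the solver/job data returned by `start_simulation`:

```python
from time import sleep


def poll_until_done(check, data, interval=1, max_tries=60):
    # Calls a check_simulation-style function until its "ok" response
    # contains "results", it reports "nok", or max_tries is exhausted.
    for _ in range(max_tries):
        res = check(data)
        if res["status"] == "nok" or "results" in res:
            return res
        sleep(interval)
    return {"status": "nok", "description": "timed out"}
```

The `max_tries` cap and timeout response are assumptions for the sketch; the API itself leaves the polling policy to the client.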
Comment: Do you think using `DEPLOY_ENV` would be better than having a route for debugging? i.e.: …

Reply: I don't like having a route for debugging, BUT that warning (and also the error further down the code) is only relevant for SIM-Core based simulation datasets (depending on what you do with the `/sim/dataset/<id_>` endpoint), not CellML-based simulation datasets. Here, I am trying to reuse the `/sim/dataset/<id_>` endpoint, and that warning (and error) is not relevant to me, no matter whether it's a SIM-Core based simulation or a CellML-based one. So, I just don't want to see that message at all, be it in production or in my development environment; it clutters my console for no good reason. I could have copied/pasted most of that code, but that would have been a bit silly, hence the debugging route approach.

Comment: A debug flag could be set up using an environment variable instead. The debug route is not useful to the person calling it unless they are also running the server.

Reply: Sure, although such an environment variable would normally be set IF we want to get debug information, while, here, debug information is provided by default (which is wrong in the first place). Anyway, what should such an environment variable be called? What about `SPARC_API_DEBUGGING`? If it's not set or set to `1` (or `ON`?) then debugging would be on, and if it is set to `0` (or `OFF`?) then debugging would be off.

Comment: `SPARC_API_DEBUGGING` is good. Having it default to on is consistent with `DEPLOY_ENV = development`, so that is fine. `TRUE` or `FALSE` is probably more consistent with standard practice, although it is going to be a string comparison regardless.

Reply: OK, going to use `SPARC_API_DEBUGGING` with unset meaning `TRUE`.
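The convention agreed above (`SPARC_API_DEBUGGING` unset meaning `TRUE`, with `TRUE`/`FALSE` values compared as strings) might be read along these lines; this is a sketch, and the actual wiring into `Config` is up to the follow-up commit:

```python
import os


def debugging_enabled(environ=os.environ):
    # SPARC_API_DEBUGGING defaults to on when unset; only an explicit
    # "FALSE" (compared case-insensitively) turns debugging off.
    return environ.get("SPARC_API_DEBUGGING", "TRUE").upper() != "FALSE"
```

Passing the environment mapping as a parameter keeps the helper testable; production code would simply call `debugging_enabled()`.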