Merged

Commits (showing changes from 19 of 26 commits)
0f58837
update to openstudio 3.8, e+ 24.1 and pyfmi 2.11
TShapinsky May 9, 2024
2698801
first pass at migration to pyenergyplus
TShapinsky May 21, 2024
0f98b0b
first version of pyenergyplus migration passing integration tests
TShapinsky Jan 6, 2025
dbc3978
fix formatting, bump min python to 3.11
TShapinsky Jan 7, 2025
9b56504
fix exception import stack
TShapinsky Jan 7, 2025
67f2f57
fix api tests, import changes, point id changes
TShapinsky Jan 7, 2025
8e63da1
add all the new stuff that didn't get committed earlier
TShapinsky Jan 7, 2025
ff98040
remove goaws from simulation ci test
TShapinsky Jan 7, 2025
6696232
fix simulation test
TShapinsky Jan 7, 2025
7c4e45d
fix arguments for timescale test
TShapinsky Jan 7, 2025
96bc99a
starting fixing mock step job
TShapinsky Jan 7, 2025
978d947
implement initialize_simulation
TShapinsky Jan 7, 2025
2129555
correct increment of time in mock job
TShapinsky Jan 7, 2025
24559ad
remove deprecated models from the scaling tests
TShapinsky Jan 7, 2025
5c792ff
reduce scale test models to current set
TShapinsky Jan 8, 2025
8cd08ad
fix simulation model testing
TShapinsky Jan 8, 2025
d0bc4d2
fix influxdb reporting
TShapinsky Jan 8, 2025
de7c73c
add sleep to allow mock step run to fall behind
TShapinsky Jan 8, 2025
d5e9098
fix step duration
TShapinsky Jan 8, 2025
fdbcaa5
Remove more redundant variables in modelica step_run.py
TShapinsky Jan 31, 2025
52bc234
bump bake-action to v5
TShapinsky Jan 31, 2025
8637193
remove files which are no longer needed
TShapinsky Jan 31, 2025
415aebc
Cleanup and document StepRunBase
TShapinsky Feb 3, 2025
9fad399
clean up StepRunProcess and begin cleaning openstudio StepRun
TShapinsky Feb 3, 2025
9fb7bc0
refactor handling of points for openstudio
TShapinsky Feb 4, 2025
aad0f83
fix unit tests
TShapinsky Feb 4, 2025
8 changes: 4 additions & 4 deletions .github/workflows/ci.yml
@@ -21,7 +21,7 @@ jobs:
- name: Install Python
uses: actions/setup-python@v5
with:
python-version: "3.8"
python-version: "3.11"

- name: Run pre-commit
uses: pre-commit/[email protected]
@@ -39,7 +39,7 @@ jobs:
- name: Install Python
uses: actions/setup-python@v5
with:
python-version: "3.8"
python-version: "3.11"

- name: Install poetry
uses: abatilo/actions-poetry@v3
@@ -92,7 +92,7 @@ jobs:
GIT_COMMIT: ${{ github.sha }}
run: |
printenv
docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d worker mongo redis minio mc goaws
docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d worker mongo redis minio mc

- name: Dump docker logs before tests
uses: jwalton/gh-docker-logs@v2
@@ -117,7 +117,7 @@ jobs:
- name: Install Python
uses: actions/setup-python@v5
with:
python-version: "3.8"
python-version: "3.11"

- name: Install poetry
uses: abatilo/actions-poetry@v3
4 changes: 2 additions & 2 deletions .pre-commit-config.yaml
@@ -26,8 +26,8 @@ repos:
- id: requirements-txt-fixer
- id: mixed-line-ending
args: ["--fix=auto"]
- repo: https://github.com/pre-commit/mirrors-autopep8
rev: v2.0.1
- repo: https://github.com/hhatto/autopep8
rev: v2.3.1
hooks:
- id: autopep8
args:
9 changes: 8 additions & 1 deletion alfalfa_web/server/api-v2.js
@@ -106,7 +106,7 @@ router.param("pointId", (req, res, next, id) => {
const error = validate(
{ id },
{
id: "required|uuid"
id: "required|string"
}
);
if (error) return res.status(400).json({ message: error });
@@ -164,6 +164,13 @@ router.get("/runs/:runId/time", async (req, res, next) => {
.catch(next);
});

router.get("/runs/:runId/log", async (req, res, next) => {
Member Author (TShapinsky):
This would allow job/run logs to be accessed from the web api.
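For reference, a minimal sketch of reading the new endpoint from a client. Only the route and response envelope come from this diff; the base URL and run id are placeholders, and `requests` is just an assumed convenience:

```python
import requests

# Hypothetical host and run id; the route and payload shape come from this diff.
BASE_URL = "http://localhost/api/v2"
run_id = "<run-ref-id>"

resp = requests.get(f"{BASE_URL}/runs/{run_id}/log")
resp.raise_for_status()

# The handler wraps the joined log lines as {"payload": {"log": "..."}}.
print(resp.json()["payload"]["log"])
```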

api
.getRunLog(req.run)
.then((log) => res.json({ payload: { log } }))
.catch(next);
});

router.get("/runs/:runId/points", (req, res, next) => {
api
.getPointsByRun(req.run)
14 changes: 11 additions & 3 deletions alfalfa_web/server/api.js
@@ -92,6 +92,11 @@ class AlfalfaAPI {
return await getHashValue(this.redis, run.ref_id, "sim_time");
};

getRunLog = async (run) => {
const log_lines = await this.redis.lRange(`run:${run.ref_id}:log`, -100, -1);
Collaborator:
Nice. I like the simple design of streaming the log into Redis. Is this new, or is only the API endpoint new? I guess I'll find out as I go through this PR.

The only question that comes to mind is what the load might look like as the scale gets big. Redis is powerful, but I sense these logs could get verbose. Nevertheless, good stuff, and if there is a performance impact I'm sure it can be mitigated by filtering the logs or something.

Member Author (TShapinsky):
Good question. LRANGE is O(S+N), where S is the start offset and N is the number of elements, so right now the read cost grows linearly with the log length. However, if we are only interested in a few of the most recent lines, we could switch from RPUSH to LPUSH; that way we wouldn't need any offset to retrieve the log. This would make both writes and reads constant-complexity operations, so there wouldn't be any issue with big logs beyond storage space.
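A small Python sketch of the two access patterns being compared here (redis-py, with a hypothetical key and lines; the Node client in this diff behaves the same way):

```python
import redis

r = redis.Redis(decode_responses=True)
key = "run:example-ref-id:log"  # same key shape as api.js uses

# Current approach: append with RPUSH and tail-read with negative offsets.
# Redis resolves -100 against the list length, so the start offset S grows
# as the log grows.
r.rpush(key, "first line", "second line")
tail = r.lrange(key, -100, -1)

# Alternative from the comment above: LPUSH keeps the newest line at the
# head, so the latest 100 lines always live at a constant offset
# (returned newest-first).
r.lpush(key, "newest line")
recent = r.lrange(key, 0, 99)
```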

return log_lines.join("\n");
};

getPointsByRun = async (run) => {
const pointsCursor = this.points.find({ run: run._id });
return Promise.resolve(pointsCursor.toArray());
@@ -126,7 +131,8 @@ class AlfalfaAPI {
const pointDict = {
id: point.ref_id,
name: point.name,
type: point.point_type
type: point.point_type,
units: point.units
};
return pointDict;
};
@@ -197,7 +203,7 @@ class AlfalfaAPI {

const { startDatetime, endDatetime, timescale, realtime, externalClock } = data;

const job = `alfalfa_worker.jobs.${sim_type === "MODELICA" ? "modelica" : "openstudio"}.StepRun`;
const job = `alfalfa_worker.jobs.${sim_type === "MODELICA" ? "modelica" : "openstudio"}.step_run.StepRun`;
Collaborator:
`.step_run.StepRun`: I'm sure there is a motivation here. Why did you add the seemingly redundant job name?

Collaborator:
Oh, I see, now that I'm further along in the review. This corresponds to your module structure.
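For anyone following along, a dotted job path like this can be resolved with a module import plus an attribute lookup. A minimal sketch (illustrative only, not necessarily how Alfalfa's dispatcher implements it):

```python
from importlib import import_module

def resolve_job(job_path: str):
    # Split "package.module.ClassName" into a module import and a
    # class lookup on that module.
    module_path, class_name = job_path.rsplit(".", 1)
    return getattr(import_module(module_path), class_name)

# e.g. resolve_job("alfalfa_worker.jobs.openstudio.step_run.StepRun")
```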

const params = {
run_id: run.ref_id,
start_datetime: startDatetime,
@@ -297,7 +303,9 @@ class AlfalfaAPI {

createRunFromModel = async (model) => {
const runId = uuidv1();
const job = `alfalfa_worker.jobs.${model.model_name.endsWith(".fmu") ? "modelica" : "openstudio"}.CreateRun`;
const job = `alfalfa_worker.jobs.${
model.model_name.endsWith(".fmu") ? "modelica" : "openstudio"
}.create_run.CreateRun`;
const params = {
model_id: model.ref_id,
run_id: runId
7 changes: 2 additions & 5 deletions alfalfa_worker/Dockerfile
@@ -1,4 +1,4 @@
FROM ghcr.io/nrel/alfalfa-dependencies:3.1.0 AS base
FROM ghcr.io/nrel/alfalfa-dependencies:prepare_080 AS base

ENV HOME=/alfalfa

@@ -21,10 +21,7 @@ ENV PYTHONPATH="${HOME}:${PYTHONPATH}"

COPY ./alfalfa_worker ${HOME}/alfalfa_worker

RUN pip3.8 install virtualenv \
&& pip3.8 install \
scipy \
symfit
Member Author (TShapinsky):
The worker no longer provides built-in Python dependencies beyond what base Alfalfa needs.

Member Author (TShapinsky):
This is partially due to removing the refrig_case test, which required symfit. However, symfit doesn't support Python 3.12 and the project in general seems semi-abandoned, so I removed the test case and replaced it with a small-office-based test model.

Collaborator:
So, in other words, if someone has an EnergyPlus model with Python EMS that uses third-party Python modules, they're out of luck with Alfalfa. Is that right? It seems reasonable, as Alfalfa cannot anticipate every module that someone might want to use.

Can we say that Alfalfa supports any module that comes bundled with EnergyPlus?

Member Author (TShapinsky):
Not quite. Per https://github.com/NREL/alfalfa/wiki/How-to-Migrate-EnergyPlus-Python-Plugins, you can provide a requirements.txt, which results in a virtual environment being created for your model with your requirements installed.

Member (@anyaelena), May 1, 2025:
Yes, this is staying the same, correct? The E+ model with Python EMS plus third-party Python modules just also needs to take responsibility for that requirements.txt. This makes sense to me, and I think it's important to continue supporting this in Alfalfa.
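As a rough illustration of the mechanism described in this thread (a sketch of the idea only, not Alfalfa's actual implementation; the paths and function name are assumptions):

```python
import subprocess
import sys
from pathlib import Path

def build_model_env(model_dir: Path) -> Path:
    """Create a per-model virtualenv and install the model's
    requirements.txt into it, if one was bundled with the model."""
    env_dir = model_dir / ".venv"
    subprocess.run([sys.executable, "-m", "venv", str(env_dir)], check=True)
    requirements = model_dir / "requirements.txt"
    if requirements.exists():
        pip = env_dir / "bin" / "pip"  # POSIX layout; Scripts\pip.exe on Windows
        subprocess.run([str(pip), "install", "-r", str(requirements)], check=True)
    return env_dir
```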

COPY ./alfalfa_worker /alfalfa/alfalfa_worker

COPY ./deploy /alfalfa/deploy
COPY ./deploy/wait-for-it.sh /usr/local/wait-for-it.sh
8 changes: 8 additions & 0 deletions alfalfa_worker/__main__.py
@@ -3,14 +3,22 @@
import os
import sys
import traceback
from logging import StreamHandler, basicConfig
from pathlib import Path

# Determine which worker to load based on the QUEUE.
# This may be temporary for now, not sure on how else
# to determine which worker gets launched
from alfalfa_worker.dispatcher import Dispatcher
from alfalfa_worker.lib.constants import DATETIME_FORMAT

if __name__ == '__main__':

basicConfig(level=os.environ.get("LOGLEVEL", "INFO"),
Member Author (TShapinsky):
Output logs at LOGLEVEL and above to stdout.
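For example, launching the worker with LOGLEVEL=DEBUG in the environment should also surface debug-level messages, such as the dispatcher's per-message log that this PR demotes from info to debug.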

handlers=[StreamHandler(sys.stdout)],
format='%(asctime)s - %(name)s - %(levelname)s: %(message)s',
datefmt=DATETIME_FORMAT)

try:
workdir = Path(os.environ.get('RUN_DIR', '/runs'))
dispatcher = Dispatcher(workdir)
2 changes: 1 addition & 1 deletion alfalfa_worker/dispatcher.py
@@ -53,7 +53,7 @@ def process_message(self, message):
"""
try:
message_body = json.loads(message)
self.logger.info(f"Processing message of {message_body}")
self.logger.debug(f"Processing message of {message_body}")
job = message_body.get('job')
if job:
params = message_body.get('params', {})
94 changes: 0 additions & 94 deletions alfalfa_worker/jobs/modelica/create_run.py
@@ -1,9 +1,4 @@
import json
import os
from pathlib import Path
from uuid import uuid4

from pyfmi import load_fmu

from alfalfa_worker.lib.enums import RunStatus, SimType
from alfalfa_worker.lib.job import Job
@@ -19,7 +14,6 @@ def __init__(self, model_id, run_id=None):
# Define FMU specific attributes
self.upload_fmu: Path = self.dir / model_name
self.fmu_path = self.dir / 'model.fmu'
self.fmu_json = self.dir / 'tags.json'
self.model_name = model_name

# Needs to be set after files are uploaded / parsed.
@@ -34,99 +28,11 @@ def exec(self):
"""
self.logger.info("add_fmu for {}".format(self.run.ref_id))

# Create the FMU tags (no longer external now that python2 is deprecated)
self.create_tags()
Collaborator:
So the key change with all of these deleted lines of code is that the FMU is supposed to come with the tags metadata baked into the FMU's resources directory. Is that correct? This seems to be aligned with the BOPTEST convention.
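If that reading is right, pulling the metadata back out is straightforward, since an FMU is a zip archive with a resources/ directory per the FMI standard. A minimal sketch (the tags.json member path is a guess, not something this diff confirms):

```python
import json
import zipfile

def read_fmu_tags(fmu_path: str, member: str = "resources/tags.json"):
    # An FMU is a zip archive; the exact tags file name and location
    # inside resources/ are hypothetical here.
    with zipfile.ZipFile(fmu_path) as fmu:
        with fmu.open(member) as f:
            return json.load(f)
```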

# insert tags into db
self.insert_fmu_tags()
self.upload_fmu.rename(self.fmu_path)

def validate(self) -> None:
assert (self.dir / 'model.fmu').exists(), "model file not created"
assert (self.dir / 'tags.json').exists(), "tags file not created"

def cleanup(self) -> None:
super().cleanup()
self.set_run_status(RunStatus.READY)

def get_site_ref(self, haystack_json):
"""
Find the site given the haystack JSON file. Remove 'r:' from string.
:param haystack_json: json serialized Haystack document
:return: site_ref: id of site
"""
site_ref = ''
with open(haystack_json) as json_file:
data = json.load(json_file)
for entity in data:
if 'site' in entity:
if entity['site'] == 'm:':
site_ref = entity['id'].replace('r:', '')
break
return site_ref

def insert_fmu_tags(self):
with open(self.fmu_json, 'r') as f:
data = f.read()
points_json = json.loads(data)

self.run_manager.add_site_to_mongo(points_json, self.run)

def create_tags(self):
# 1.0 setup the inputs
fmu = load_fmu(self.upload_fmu)

# 2.0 get input/output variables from the FMU
# causality = 1 is parameter, 2 is input, 3 is output
input_names = fmu.get_model_variables(causality=2).keys()
output_names = fmu.get_model_variables(causality=3).keys()

# 3.0 add site tagging
tags = []

fmu_upload_name = os.path.basename(self.model_name) # without directories
fmu_upload_name = os.path.splitext(fmu_upload_name)[0] # without extension

# TODO: Figure out how to find geo_city
sitetag = {
"dis": "s:%s" % fmu_upload_name,
"id": "r:%s" % self.run.ref_id,
"site": "m:",
"datetime": "s:",
"simStatus": "s:Stopped",
"simType": "s:fmu",
"siteRef": "r:%s" % self.run.ref_id
}
tags.append(sitetag)

# 4.0 add input tagging
for var_input in input_names:
if not var_input.endswith("_activate"):
tag_input = {
"id": "r:%s" % uuid4(),
"dis": "s:%s" % var_input,
"siteRef": "r:%s" % self.run.ref_id,
"point": "m:",
"writable": "m:",
"writeStatus": "s:disabled",
"kind": "s:Number",
}
tags.append(tag_input)
tag_input = {}

# 5.0 add output tagging
for var_output in output_names:
tag_output = {
"id": "r:%s" % uuid4(),
"dis": "s:%s" % var_output,
"siteRef": "r:%s" % self.run.ref_id,
"point": "m:",
"cur": "m:",
"curVal": "n:",
"curStatus": "s:disabled",
"kind": "s:Number",
}
tags.append(tag_output)

# 6.0 write tags to the json file
with open(self.fmu_json, 'w') as outfile:
json.dump(tags, outfile)