Added simple pre and post processing functionality #361
base: main
Conversation
**Codecov Report**

```diff
@@            Coverage Diff             @@
##             main     #361      +/-   ##
==========================================
+ Coverage   88.86%   89.19%   +0.33%     
==========================================
  Files          15       22       +7     
  Lines        1706     1999     +293     
==========================================
+ Hits         1516     1783     +267     
- Misses        190      216      +26     
```
**General concept:** The goal is to collect postprocessing data from parametric studies into a unified format. With a given output_file (a lightweight abstraction over a file path), all cases are gathered into a single DataFrame:

```python
file = OutputFile(file_name="force.dat", folder="forces")
file.add_time(0)
table = load_tables(output_file=file, dir_name="tests/test_postprocessing/Cases")
```

To simplify usage further, it's also possible to list all output_files in a folder containing multiple cases:
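As a purely hypothetical sketch of what such a listing helper could look like (the name `list_output_files` and its behaviour are assumptions for illustration, not necessarily the PR's actual API):

```python
from pathlib import Path
from typing import List


# Hypothetical sketch only; the real helper's name and signature are not
# shown in this thread. Walk a directory of cases and collect the relative
# paths of every file found under each case's postProcessing/ folder.
def list_output_files(dir_name: str) -> List[str]:
    found = set()
    for case in Path(dir_name).iterdir():
        post = case / "postProcessing"
        if post.is_dir():
            found.update(
                str(p.relative_to(post)) for p in post.rglob("*") if p.is_file()
            )
    return sorted(found)


print(list_output_files("tests/test_postprocessing/Cases"))
```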
Planned features:
**Status:** The implementation definitely needs to be polished, but the open question is how to present the postprocessing functionality to the user:

```python
file = OutputFile(file_name="xxxxxxx", folder="xxxx")
file.add_time(0)
table = load_tables(output_file=file, dir_name="tests/test_postprocessing/Cases")
```

What interface would you choose, or do you have another idea?
Hey @HenningScheufler, this is a great idea. A quick comment: it's probably a good idea to use multi-indexing on the DataFrame side for each case, to make the data easier to manipulate.
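A minimal pandas sketch of that suggestion, using made-up case data (this is not foamlib's API, just an illustration of a (case, time) MultiIndex):

```python
import pandas as pd

# Illustrative data only: index each case's table by (case, time) so a
# parametric study can be sliced per case or compared across cases.
frames = {
    "case_0": pd.DataFrame({"time": [0.0, 0.1], "force": [1.2, 1.3]}),
    "case_1": pd.DataFrame({"time": [0.0, 0.1], "force": [2.1, 2.4]}),
}
combined = pd.concat(
    {name: df.set_index("time") for name, df in frames.items()},
    names=["case", "time"],
)

print(combined.loc[("case_1", 0.1)])   # one case at one time
print(combined.xs(0.1, level="time"))  # all cases at t = 0.1
```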
@HenningScheufler nice work so far. I think I have a better idea now of what you intended for the […]. No hard opinion on the API right now, except that I'd make […]. What I'm not that sold on is whether it is necessary to add a dependency on […].
This PR aims to rapidly develop a proof of concept that demonstrates the core idea: extending foamlib with additional functionality to simplify both preprocessing and postprocessing in OpenFOAM workflows. To test the implementation, navigate to example/parametricStudy and run:

```sh
python createStudy.py
python runStudy.py
python gatherResults.py
```

This will generate a results/ folder containing three CSV files, each representing the collected and combined results of the parametric study.

**Next steps**
**Discussion**

The general idea behind this PR is to provide a higher-level interface within foamlib that lowers the barrier of entry for new OpenFOAM users and helps them perform parametric studies more efficiently. This would make foamlib not just a utility library, but a framework that enables non-expert users to:

To support this, the library will likely need dedicated preprocessing and postprocessing modules, as well as optional visualization components. These additions should aim to reduce the complexity typically associated with OpenFOAM workflows, especially for researchers, students, or engineers unfamiliar with Python. In this discussion, the goal is to focus on the direction and scope of the library:

Technical details (including code structure, dependencies, and implementation specifics) are out of scope for now and could be tackled at a later stage.
The long table format makes it possible to combine tables with different sampling rates and is also the preferred format for plotting (plotly express and seaborn). But it is possible to switch between the representations with pd.pivot and pd.melt.
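A small sketch of that round trip with pandas (the column names here are illustrative, not the output of the PR's loaders):

```python
import pandas as pd

# Long format: one row per (time, case) observation.
long_df = pd.DataFrame({
    "time":  [0.0, 0.0, 0.1, 0.1],
    "case":  ["case_0", "case_1", "case_0", "case_1"],
    "force": [1.2, 2.1, 1.3, 2.4],
})

# Long -> wide: one column per case, convenient for tabular inspection.
wide_df = long_df.pivot(index="time", columns="case", values="force")

# Wide -> long again: the layout preferred by plotly express and seaborn.
back_to_long = wide_df.reset_index().melt(
    id_vars="time", var_name="case", value_name="force"
)
```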
@gerlero I cannot reproduce why Python 3.7 and 3.8 crash. Do you have any ideas?
Likely missing […]
I wrote the documentation; could you please review it?
I'm happy to develop 'meshing.py' & 'dataAnalysis.py' for my particular Lagrangian-focussed case. At this stage I'm thinking to abstract […]. If you folks wanted, I could keep a […].
Thanks Henning! I was thinking: during runs, some particular cases diverge (throw an error). If […]
Older versions of pydantic need typing.List.
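For illustration, a minimal model written in the Python 3.7/3.8-compatible style (the field names are assumptions loosely based on the OutputFile usage above, not the PR's actual definition):

```python
from typing import List, Optional

from pydantic import BaseModel, Field


class OutputFile(BaseModel):
    # Assumed fields for illustration only. On Python 3.7/3.8 (and older
    # pydantic releases) annotations must come from typing (List, Optional);
    # built-in generics such as list[float] raise errors on those versions.
    file_name: str
    folder: str
    times: List[float] = Field(default_factory=list)
    dimension: Optional[int] = None
```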
Could you elaborate on this? I don't completely understand your goals. Maybe you can create a new issue where you describe them.
This will be quite challenging to implement, especially considering that OpenFOAM is typically run on large computing clusters using a job scheduling system. Moreover, simulations involving very large meshes can cost several thousand euros per run.
@gerlero ready for review.
@HenningScheufler Thanks! I’ll take a look over the next week.
I'm looking to reduce the setup needed for complete CFD novices to compute jet-like flows for health-related innovations (more information on our GitHub). Basically a 'turnkey' app (currently in Docker) that allows higher horizontal scaling (following folding@home) using a citizen science approach. If that's not clear enough, let me know.
I'm hearing that I'm probably an atypical user, so it's probably better not to listen to me. Thanks Henning.
@HenningScheufler I've checked out the branch and built the new docs. The new functionality seems fine to me. Right now I'm not able to review everything more thoroughly, but I trust that you've taken the overall design into careful consideration (and that you're using it yourself and it provides good value); in any case, if there's any particular design decision you want to discuss, I'm happy to review that/those specific part(s) of the code.
Let me know if I'm missing something you want me to comment on (and thanks again for all the work).
Agreed. Do you plan to add any major features in the future (1.x.0), or are you mainly looking at fixes at the moment (1.0.x)?
This could be tackled later once the package gets larger, but currently I would ship it as one package.
The additional dependencies can be installed with (name TBD).
The documentation needs more work but is currently in a sufficient state (further improvements should be tackled in a separate PR). Going forward, a dashboard and the CLI should be added. What is the merge strategy?
The goal is to have an initial implementation that serves as a discussion platform for pre- and post-processing functionality.