-
There are several papers that explore different sampling strategies for super droplets, e.g.:
-
Hey Jason, Sylwester pointed me to your discussion thread. We've been doing some work lately looking at a different part of the SDM algorithm, but looking at results across different sampling methods, and he thought I might find this interesting (he's right).

I'm very curious what you mean by constant expected collision rate: how are you flattening that from a value calculated in pairs to define a single super-droplet value? How does kernel choice play into that? How does the resampling handle other attributes (such as dry mass), or other attributes that potentially have interesting effects on the way the size spectrum behaves?

I don't know if you're familiar with PyPartMC, another particle-based model developed by Sylwester and the open-atmos community, geared towards aerosol representation. It has a different coalescence algorithm, the Weighted Flow Algorithm (WFA), as opposed to SDM, which you might find interesting (described in DeVille et al., 2011). Instead of having multiplicities, weights are a function of size and are not an attribute of the droplets. This changes the update process in many ways, of course, but the key thing that "constant expected collision rate" reminded me of is that in WFA, collision rates of particles of a certain size can be precomputed, because the multiplicity/weighting there is constant for a given size rather than particle-specific. This allows for some other interesting optimizations as well: because of it, they are able to do an optimized linear sampling rather than the random pairs. Whether or not it's similar to what you've done here, I think there are some interesting analogies there!

Secondly, I'm interested in your resampling method for other reasons: I'm trying to do analysis on a coagulation scheme in the 2D environment. The model runs for a set "SpinUp" time to equilibrate with just condensation and advection before processes like coagulation/sedimentation are turned on. The idea is that this gives a chance for the Köhler stiffness / unrealistic prescribed initial supersaturation to even out, but I would love to sanity-check that the differences between sampling methods arise through the coagulation process alone (as expected), and not from anything happening in the SpinUp.

Love the visualizations! Such a cool way to look at droplet growth evolution.
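To make that precomputation point concrete, here is a minimal numpy sketch, assuming a Golovin kernel and a purely size-dependent weighting; it is my own toy reading of the idea, not code from PyPartMC or DeVille et al. (2011), and the names in it (`golovin_kernel`, `r_grid`, `kernel_table`) are made up for illustration:

```python
import numpy as np

# Toy sketch of "precompute because the weighting depends only on size":
# if the weighting is a function of radius alone (not a per-particle
# attribute), the kernel can be tabulated once on a fixed size grid, and
# per-pair rates become table lookups instead of per-pair evaluations.

def golovin_kernel(r_i, r_j, b=1500.0):
    """Golovin additive kernel K = b * (v_i + v_j); the b value is illustrative."""
    volume = lambda r: (4.0 / 3.0) * np.pi * r**3
    return b * (volume(r_i) + volume(r_j))

r_grid = np.logspace(-7, -3, 64)  # radius grid [m]

# precomputed once, reused for every pair drawn during the simulation
kernel_table = golovin_kernel(r_grid[:, None], r_grid[None, :])

# at run time, a pair of sizes reduces to index lookups
i = np.searchsorted(r_grid, 2e-6)
j = np.searchsorted(r_grid, 5e-5)
pair_rate = kernel_table[i, j]
```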
-
Very interesting, thanks for the explanation of the algorithm! I'll have to let this sink in for a while. I think I mostly understand what you are doing with creating the collision-probability pdf, but I don't have much intuition for the implications beyond that. How unwieldy does the matrix multiplication get at large system sizes N? It seems pretty analogous to the way binned models can optimize how to move mass. I'm still a little confused by the resampling step: does it only inherit non-radius attributes, with the radius chosen based on equal pdf areas, or is the value of the pdf taken at the radius of super-droplet i, with everything inherited, including size attributes (which might not give equal collision rates)? New questions are:
The analogy to WFA is pretty strong, now that I understand it a little better: different algorithm, but they are able to choose and scale pairs so that everything tested has the same collision rate (within a small range), to maximize the ratio of successful events to pairs tested. I got very excited about this when I learned about it and the overlap it might have with my sublinear project, but I haven't been able to find a way to apply it to SDM with the uneven multiplicity spread, and the need to know the whole system before being able to make optimized sampling choices. Very interested to hear your thoughts on these!
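For what it's worth, one concrete way to read "equal pdf areas" is inverse-CDF sampling at equal-probability quantiles. The sketch below is just my own illustration of that reading, not PySDM code or Jason's method, and the names (`collision_pdf`, `r_grid`, `resample_equal_area`) are hypothetical:

```python
import numpy as np

# One possible reading of "equal pdf areas": build the CDF of the
# collision-probability density over radius, then place one super-droplet
# at each equal-probability quantile (inverse-CDF sampling).

def resample_equal_area(r_grid, collision_pdf, n_sd):
    """Return n_sd radii such that each carries equal area under the pdf."""
    cdf = np.cumsum(collision_pdf)
    cdf /= cdf[-1]
    quantiles = (np.arange(n_sd) + 0.5) / n_sd  # midpoints of equal slabs
    return np.interp(quantiles, cdf, r_grid)

# toy example: lognormal-shaped pdf on a log-spaced radius grid
r_grid = np.logspace(-7, -4, 512)
pdf = np.exp(-0.5 * (np.log(r_grid / 1e-6) / 0.8) ** 2)
radii = resample_equal_area(r_grid, pdf, n_sd=2**12)
```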
-
Ahh, I was misunderstanding before; I see what you are saying about the unique set, and about point 3: the number of super-droplets assigned is the issue, not multiplicity < 1. I'd love to try it out; I feel like that would give me more intuition for the algorithm!
-
I started a conversation with Clare Singer the other day and was curious if anyone else had any thoughts.
In short, what is the best way to initialize super droplet sizes and multiplicities?
I am using between 2^12 and 2^14 super droplets in the parcel environment, dv = 1000 m^3. I had been using constant multiplicity as implemented in PySDM the whole time, until recently experimenting with constant multiplicity * r^p, i.e. a constant p-th moment (p=1: constant length, p=2: constant area, etc.). It seems to really change things for p=3, for example, when using just a single lognormal mode around ~0.12 micron: the collision sampling frequency increases on the large tail of the distribution and decreases on the small tail, and the maximum and minimum resolved radii both increase, all leading, I think, to faster droplet growth over time.
Am I simply not using enough super droplets?
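For anyone wanting to reproduce the comparison, here is a minimal numpy sketch of the two initializations as I understand them: sampling radii at equal quantiles of a weighted spectrum w(r) ~ n(r) * r^p, so p=0 gives constant multiplicity and p=3 a constant third moment. This is not PySDM's spectral sampling code, and the spectrum parameters are placeholders:

```python
import numpy as np

def sample(r_grid, dn_dlnr, n_sd, n_total, p=0):
    """Sample n_sd radii at equal quantiles of the r^p-weighted spectrum."""
    w = dn_dlnr * r_grid**p          # weighted density
    cdf = np.cumsum(w)
    cdf /= cdf[-1]
    quantiles = (np.arange(n_sd) + 0.5) / n_sd
    radii = np.interp(quantiles, cdf, r_grid)
    # multiplicities such that n_i * r_i^p is the same for every
    # super-droplet, normalized to conserve the prescribed total number
    n = radii**(-float(p))
    n *= n_total / n.sum()
    return radii, n

# single lognormal mode around ~0.12 micron, as in the question above
r_grid = np.logspace(-8, -5, 1024)
dn_dlnr = np.exp(-0.5 * (np.log(r_grid / 0.12e-6) / 0.4) ** 2)

r0, n0 = sample(r_grid, dn_dlnr, n_sd=2**12, n_total=1e8, p=0)
r3, n3 = sample(r_grid, dn_dlnr, n_sd=2**12, n_total=1e8, p=3)
print(r0.min(), r0.max())
print(r3.min(), r3.max())  # p=3 shifts coverage toward the large tail
```

Comparing the printed ranges shows the effect described above: the p=3 sampling resolves larger maximum (and minimum) radii at the cost of coarser coverage of the small end of the spectrum.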