particle injection implementation + refactoring #183
base: master
Conversation
Codecov Report ❌
@@ Coverage Diff @@
## master #183 +/- ##
==========================================
- Coverage 97.22% 96.89% -0.34%
==========================================
Files 44 48 +4
Lines 3826 4215 +389
==========================================
+ Hits 3720 4084 +364
- Misses      106      131      +25
View full report in Codecov by Sentry.
    sed_nu_fnu = synch.sed_flux(sed_x * ev_per_gamma.to("Hz", equivalencies=u.spectral()))
else:
    # for the last iteration, the default interpolated picks large part of the peak at the last bean,
    # causing SED much higher; so we must use exactly the same interpolation points as the reference data
...SED to be much ...
Also, I am not sure I understand the problem here. You need to interpolate either way, because the energy bins are shifting.
Does the approach for i==4 also work for i<4? If so, you could just use it for testing all the time moments.
i==4 is the moment when we start to have a high peak at the last bin, where the acceleration and loss rates become equal, and the interpolation error is high in this single point, so it needs some sort of hack/workaround to make the sed_flux method work correctly. I'm not sure it makes sense to use the same workaround for earlier steps, when it's not needed. Using it in the last step only, we don't need to reset the n_e on the blob back to the original distribution.
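As a minimal, generic illustration of the effect described above (not the PR's code; the grids and the peaked function are made up), interpolating a sharply peaked array onto a slightly different grid can pick up a large part of the peak and overestimate the flux near the last bin, while evaluating on exactly the reference points avoids the issue:

```python
import numpy as np

def sharp_peak(x):
    # narrow feature (in log-space) sitting exactly at the last reference point
    return np.exp(-((np.log10(x) - 3.0) ** 2) / 2e-4)

x_ref = np.logspace(0, 3, 40)        # grid on which the reference data are tabulated
y_ref = sharp_peak(x_ref)            # the last bin holds almost the whole peak

x_new = np.logspace(0.0, 2.96, 40)   # a freshly generated, slightly shifted grid
y_interp = np.interp(x_new, x_ref, y_ref)

# near the upper edge, linear interpolation "picks up" a large part of the peak,
# overestimating the true value by orders of magnitude
print(y_interp[-1], sharp_peak(x_new[-1]))

# evaluating on exactly the reference points leaves nothing to interpolate
print(np.abs(np.interp(x_ref, x_ref, y_ref) - y_ref).max())   # 0.0
```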
    })
    my_data.append(df)

ignore_pos = [8, 13, 29]  # positions where sharp density changes occur
Strange to have it in 3 points, or do those problematic points show up for different times?
Yes, they are for different times... I think for just 3 points it's not worth the effort to use different points for different times. Added the explanation.
    step_time = self._step_duration
else:
    max_times = self._calc_max_time_per_bin(
        en_bins, density, subgroups_density, en_chg_rates_grouped, rel_injection_rates_grouped, abs_injection_rates_grouped)
If you have a function with so many parameters, it is clearer and less error prone not to use positional arguments but to call it in this style: ...(AAA=aaa, BBB=bbb, ...),
especially since some of the functions take rel_injection first and some abs_injection.
I cleaned it up to have the same order (rel first, abs second) everywhere.
I also started converting it to the AAA=aaa, BBB=bbb, ... style, but because the parameter names are quite long, the code started getting quite obscure and difficult to read with these repetitions. So I think I would keep it as is, especially since every parameter uses a different unit, so the chances of an unnoticed error are rather low.
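For reference, a small sketch of the two call styles being weighed here (the function and argument names are simplified stand-ins, not the PR's actual signature):

```python
import numpy as np

# Stand-in for a function with a long parameter list; the body is a dummy.
def calc_max_time(density, rel_injection_rate, abs_injection_rate):
    return density / (np.abs(rel_injection_rate) + np.abs(abs_injection_rate) + 1e-30)

density = np.ones(3)
rel_rate = np.full(3, 0.1)
abs_rate = np.full(3, 0.2)

# positional call: accidentally swapping rel/abs would go unnoticed
t_pos = calc_max_time(density, rel_rate, abs_rate)

# keyword call: order-independent, but every (long) name is spelled out twice
t_kw = calc_max_time(density=density,
                     rel_injection_rate=rel_rate,
                     abs_injection_rate=abs_rate)

assert np.allclose(t_pos, t_kw)
```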
max_times = np.repeat(np.inf, len(density))
for i in (range(subgroups_density.shape[0])):
    if np.any((density_grouped[i] == 0) & (abs_injection_rates_grouped[i] < 0)):
        raise ValueError("Particles cannot escape when density is zero")
With 'absolute' escape you might also get a negative density, which would be unphysical as well, so you can change the condition from == 0 to <= 0.
Correct, but in such a case I'd prefer to catch it earlier, for easier debugging. So I just added a check in the place where the density is calculated. So here, it will not be possible to get a negative value.
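A minimal sketch of the kind of early check described above (the placement and names are assumptions, not the PR's actual code): validating the density where it is updated reports the problem at its origin rather than downstream.

```python
import numpy as np

def update_density(density, net_change):
    """Apply one step of injection/escape and fail fast on unphysical results."""
    new_density = density + net_change
    if np.any(new_density < 0):
        raise ValueError(
            "Particle density became negative; escape exceeds the available particles"
        )
    return new_density

# escape larger than the remaining particles triggers the check immediately
try:
    update_density(np.array([1.0, 0.5, 0.0]), np.array([-0.2, -1.0, 0.0]))
except ValueError as err:
    print(err)
```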
…, removed "unit" parameter, prevent negative densities
No description provided.