PROBLEM:
=======
Throughput files containing filter throughput curves that decay to a
very small number, but not exactly zero (i.e., that are not "pinned at
zero" at the ends), cause problems for Pysynphot that they did not
cause for SYNPHOT.

WHY?
====
Differences in waveset handling.
- SYNPHOT takes the intersection of all wavelength sets defined for all
elements of a calculation, and discards all data falling outside the
defined range of the intersection.
- Pysynphot takes the union of all wavelength sets defined for all
elements of a calculation, and preserves all data.

Thus, if the last data point in a throughput file is at lambda=20000
with throughput=1e-9:
- SYNPHOT discards all data longward of lambda=20000
- Pysynphot extrapolates a constant throughput of 1e-9 instead
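
As a rough numpy-only illustration of the difference (the numbers are
made up, and this is not the actual Pysynphot code path):

    import numpy as np

    # Toy filter table whose last point is not pinned at zero.
    filt_wave = np.array([15000., 16000., 18000., 20000.])
    filt_thru = np.array([0.0,    0.6,    0.3,    1e-9])

    # Waveset contributed by another element of the calculation.
    spec_wave = np.arange(15000., 30001., 1000.)

    # SYNPHOT-like: intersect the wavesets, discard data outside the overlap.
    syn_wave = spec_wave[(spec_wave >= filt_wave[0]) &
                         (spec_wave <= filt_wave[-1])]
    syn_thru = np.interp(syn_wave, filt_wave, filt_thru)

    # Pysynphot-like: take the union and keep everything; points beyond
    # the table pick up the last tabulated value (1e-9) rather than zero.
    union_wave = np.union1d(filt_wave, spec_wave)
    pysyn_thru = np.interp(union_wave, filt_wave, filt_thru,
                           left=filt_thru[0], right=filt_thru[-1])

    print(pysyn_thru[union_wave > 20000.])   # constant 1e-9 out to 30000A
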
HOW DOES THE PROBLEM MANIFEST?
=============================
We have identified two areas in which the problem manifests:
- countrate calculations will integrate over the full wavelength domain
of the spectrum, so the resulting countrate will be larger.
- renormalization calculations will be performed over the full
wavelength domain of the spectrum being renormalized, so the
renormalization constant will be smaller.
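
Schematically (plain numpy; a real countrate or renormalization in
Pysynphot involves unit handling and instrument terms omitted here),
both effects follow from the larger band integral:

    import numpy as np

    dw = 100.0
    wave = np.arange(15000.0, 30000.0 + dw, dw)    # Angstroms
    flux = np.full_like(wave, 1e-15)               # flat toy source

    # Effective throughput with and without a spurious 1e-9 tail
    # past 20000 A.
    thru_tail = np.where(wave <= 20000.0, 0.5, 1e-9)
    thru_cut  = np.where(wave <= 20000.0, 0.5, 0.0)

    def band_integral(thru):
        # Stand-in for a countrate-style integral over the band.
        return np.sum(flux * thru * wave) * dw

    rate_tail = band_integral(thru_tail)   # union/extrapolation behavior
    rate_cut  = band_integral(thru_cut)    # truncation behavior

    target = 1.0
    print(rate_tail > rate_cut)                    # countrate is larger
    print(target / rate_tail < target / rate_cut)  # renorm constant is smaller
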
However, there may be more manifestations. The fundamental problem is
that the throughput curves for these filters are shaped differently for
Pysynphot than they were for SYNPHOT. This is likely to ripple
throughout the system.

WHICH INSTRUMENTS ARE AFFECTED?
===============================
Only a preliminary estimate has been made by checking for the
existence of throughput files that are not precisely equal to zero at
both endpoints. This is an overestimate, because some files, such as
longpass filters and internal optics, should not in fact fall to zero.
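
The check itself is simply a test of the first and last throughput
values in each table; a minimal sketch (assuming the throughput column
has already been read into an array, by whatever means the file format
requires):

    import numpy as np

    def is_pinned_at_zero(throughput):
        """True if a throughput curve is exactly zero at both ends."""
        thru = np.asarray(throughput)
        return thru[0] == 0.0 and thru[-1] == 0.0

    # A curve that decays to 1e-9 but is never pinned at zero:
    print(is_pinned_at_zero([1e-9, 0.3, 0.6, 1e-9]))   # False -> flagged
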
According to this preliminary estimate, ACS, NICMOS, and STIS have many
files that are affected; COS has a few; WFC3 has only 2 aperture files.
WFPC2 has many, as do FOC, FOS, HRS, and HSP.
All supported non-HST modes (Johnson, etc.) are unaffected.

WHY SHOULD WE CHANGE THE FILES?
WHY DON'T YOU JUST CHANGE THE CODE INSTEAD?
===========================================
- Taking the union, rather than the intersection, of the wavesets is a
fundamental design feature of Pysynphot that was introduced in order to
solve other problems with SYNPHOT.
- The throughput files in their current state are not well defined.
If the throughput goes to 1e-9 at 20000A, what does it do at 25000?
- You, as the instrument teams, will have better control over what
Pysynphot should do outside the currently defined ranges if you
actually tell it (i.e., modify the files so that the behavior is
well defined) than if you let it guess (i.e., let the software
extrapolate).

WHAT ARE THE OPTIONS FOR MODIFICATION?
======================================
Option 1: Update the throughput tables so they are well defined.
**This option is preferred by the NICMOS team.

Pros:
- gives you complete control over the throughput behavior
- requires no code changes for us
- is "the right thing to do"

Cons:
- requires thinking, editing, testing, and delivering a lot of files
  - thinking: what is the correct extrapolation behavior?
  - editing: updating the tables to include them
  - testing: testing new throughput tables (how?)

How can we minimize the work required?
--------------------------------------
The NICMOS team recommends the following actions to minimize the work
required by the teams:
- In changing the files:
  - define a standard modification, e.g., "add an extra point 0.001A
    from the current end of the table with throughput=0". Then SSB
    could write a script that would apply this change, and the teams
    would only need to provide a list of affected files (a sketch of
    what such a script might do is shown after this list).
    It would still be possible to selectively hand-edit files that
    warrant more sophisticated extrapolation.
- In reducing the testing/validation load:
  - The algorithm described above, by essentially making a step
    function just past the current end of the defined data, would come
    closest to the current SYNPHOT behavior. This would in turn
    minimize the expected change, making the test results easier to
    validate.
  - Using the ETC regression tests (i.e., the last public release of
    the ETC software, using SYNPHOT) as the pre-delivery tests would
    preclude the need to invent new tests.
  - Establishing an appropriately small difference threshold, below
    which the tests would be considered validated, would make the
    validation stage easier still.
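
A hypothetical sketch of the core of such a script (array manipulation
only; reading and writing the actual throughput tables, and assembling
the list of affected files, are deliberately left out, and the function
name and default offset are illustrative):

    import numpy as np

    def pin_at_zero(wave, thru, offset=0.001):
        """Append a zero-throughput point just outside each end of the
        table whose throughput is not already exactly zero."""
        wave = np.asarray(wave, dtype=float)
        thru = np.asarray(thru, dtype=float)
        if thru[0] != 0.0:
            wave = np.insert(wave, 0, wave[0] - offset)
            thru = np.insert(thru, 0, 0.0)
        if thru[-1] != 0.0:
            wave = np.append(wave, wave[-1] + offset)
            thru = np.append(thru, 0.0)
        return wave, thru

    # Example: the last tabulated point is at 20000 A with throughput 1e-9.
    w, t = pin_at_zero([15000., 18000., 20000.], [0.0, 0.5, 1e-9])
    print(w[-1], t[-1])   # 20000.001 0.0
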
Option 2: Add a header keyword that will tell Pysynphot to taper the
throughput curve to zero when the file is read in.
**This option has been rejected by the NICMOS team.

Pros:
- requires less thinking, editing, and testing:
  - thinking: which files should do this?
  - editing: identical for all affected files => scripted
  - testing: easier to test files with a new header keyword that most
    software ignores, than to test files with actual data table changes

Cons:
- gives you less control over the behavior
- requires some well-localized code changes for us
- is a bit of a hack

How would the tapering be done?
-------------------------------
One extra point would be added to each end of the table.
The wavelengths of these extra points would be calculated using the
same ratio as the two adjacent interior points:

    wcopy[0]  = wcopy[1]*wcopy[1]/wcopy[2]
    wcopy[-1] = wcopy[-2]*wcopy[-2]/wcopy[-3]

The throughput of the new points would be set to zero.
This change would be implemented when the file was read in, so that
it would be sure to propagate through the entire system.
When the resulting throughput curve was used in any calculation, the
throughput would be linearly interpolated between the last defined
point in the file and zero at the extra point; longward/shortward of
the extra points it would then be extrapolated at the constant value
of zero.
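
In code, the proposed taper might look roughly like the following
sketch, which simply applies the formulas above to standalone arrays
(the real hook inside the Pysynphot file-reading code would differ in
detail):

    import numpy as np

    def taper_to_zero(wave, thru):
        """Add one zero-throughput point at each end of the table, at a
        wavelength given by the same ratio as the adjacent points."""
        w = np.asarray(wave, dtype=float)
        t = np.asarray(thru, dtype=float)
        low  = w[0] * w[0] / w[1]      # wcopy[0]  = wcopy[1]*wcopy[1]/wcopy[2]
        high = w[-1] * w[-1] / w[-2]   # wcopy[-1] = wcopy[-2]*wcopy[-2]/wcopy[-3]
        new_wave = np.concatenate(([low], w, [high]))
        new_thru = np.concatenate(([0.0], t, [0.0]))
        return new_wave, new_thru

    # Example: the taper pulls the curve to zero just outside the table.
    w, t = taper_to_zero([15000., 18000., 20000.], [1e-9, 0.5, 1e-9])
    print(w[0], w[-1])   # 12500.0 22222.2...

Linear interpolation between the last tabulated point and the new zero
point then gives the step-like falloff described above, and a constant
value of zero beyond it.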