
Updated dichotomous multiplicative model #103

Open

alibow wants to merge 8 commits into main from alibow/mult_model_dichotomous

Conversation

@alibow (Contributor) commented Feb 4, 2021

No description provided.

@aflaxman (Member) left a comment

I like it. Does the terminology here correspond to Nathaniel's LaTeX write-up?

What is up with the 0 DALYs averted in one location?
[image: screenshot of DALYs averted results by location]

@alibow (Contributor, Author) commented Feb 4, 2021

@aflaxman

The terminology largely aligns with Nathaniel's write-up, but I will update to completely align - I think that is a good idea.

The location with zero coverage has exactly the same values for eats_fortified and eats_fortifiable in our coverage data. Both of these values are slightly less than eats_vehicle, so when we layer in that additional piece of the scale-up, there will be non-zero values, but they will be very small.
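As a rough numeric sketch of why this yields (near-)zero DALYs averted (illustrative values only, not our actual coverage data):

    # Illustrative: baseline vs. counterfactual coverage in that location
    eats_fortified = 0.42    # proportion currently eating the fortified vehicle
    eats_fortifiable = 0.42  # identical here, so the intervention shifts no one
    eats_vehicle = 0.45      # slightly higher, so the later scale-up step adds a small effect

    print(eats_fortifiable - eats_fortified)  # 0.0  -> no coverage gained, ~0 DALYs averted
    print(eats_vehicle - eats_fortifiable)    # 0.03 -> small but non-zero once layered in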

@NathanielBlairStahn (Collaborator) commented

> The terminology largely aligns with Nathaniel's write-up, but I will update to completely align - I think that is a good idea.

Rather than aligning it with my math notation, it might be better to use more descriptive variable names, e.g. "p_counterfactual" or "p_intervention" instead of "p_star." Or even spell out "prevalence" or "coverage" instead of "p" and "alpha." In my view, the difference is that math notation should ideally be compact and easy to visually digest, whereas code should ideally be self-explanatory.
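For instance (hypothetical names only, not code from this PR), something along these lines:

    # Hypothetical renaming of the math-notation variables to self-explanatory names:
    coverage_fortified = alpha         # alpha: proportion eating the fortified vehicle (baseline)
    coverage_fortifiable = alpha_star  # alpha*: proportion eating the fortifiable vehicle (counterfactual)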

@NathanielBlairStahn (Collaborator) commented

    # generate draws
    """This currently relies on two major assumptions:
    1. Normal distribution (not truncated)
    2. The same percentile from the eats_fortified and eats_fortifiable distributions sampled for each draw

    Assumption number two is likely overly restrictive, but was chosen such that eats_fortified will
    always be less than eats_fortifiable at the draw level (this is consistent with methodology described
    in 2017 concept model, but is achieved by setting the same random seed to sample each of these
    parameters)"""

    for data in [alpha, alpha_star]:
        np.random.seed(1246)
        for i in range(1000):
            data[f'draw_{i}'] = np.random.normal(
                data.value_mean,
                (data.value_975_percentile - data.value_025_percentile) / 2 / 1.96) / 100

Two comments on this, both of which can be addressed by adding assert statements (though if they fail, it might require more work):

> 1. Normal distribution (not truncated)

Since you're not truncating, I think you should add an assert statement to verify that all draws are between 0 and 1.

> 2. The same percentile from the eats_fortified and eats_fortifiable distributions sampled for each draw...
>    is achieved by setting the same random seed to sample each of these parameters

Are you sure that setting the random seed will cause the percentile ranks to be the same? That's plausible, but it seems like a numpy implementation detail that I wouldn't want to dig into.
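If it helps, here is a quick way to check (a sketch, not code from this PR): np.random.normal returns loc + scale * z, where z is a standard-normal variate from the global state, so reseeding before each call replays the same z values and hence the same percentile ranks:

    import numpy as np
    from scipy import stats

    np.random.seed(1246)
    x = np.random.normal(10.0, 2.0, size=5)   # arbitrary illustrative mean/std
    np.random.seed(1246)
    y = np.random.normal(50.0, 7.0, size=5)   # different distribution, same seed

    # Identical percentile ranks, because both calls reuse the same standard-normal draws
    assert np.allclose(stats.norm.cdf(x, 10.0, 2.0), stats.norm.cdf(y, 50.0, 7.0))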

I think you should add an assert statement after the for loop to verify that alpha_star >= alpha in each draw.
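Something like the following sketch could work (assuming alpha and alpha_star are row-aligned DataFrames with the draw columns from the snippet above):

    draw_cols = [f'draw_{i}' for i in range(1000)]

    for data in [alpha, alpha_star]:
        # untruncated normal draws must still be valid proportions
        assert ((data[draw_cols] >= 0) & (data[draw_cols] <= 1)).all().all()

    # counterfactual coverage should never fall below baseline coverage in any draw
    assert (alpha_star[draw_cols].values >= alpha[draw_cols].values).all()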

@NathanielBlairStahn (Collaborator) commented

Why is the minimum DALYs averted negative in some locations but not others? Is this due to the relative risks, the effect size, or both? I think I would expect to get a negative value for all locations since the relative risks should be the same across locations...

[image: screenshot of minimum DALYs averted by location]

@alibow (Contributor, Author) commented Feb 6, 2021

@NathanielBlairStahn I think you are looking at an old version of the notebook from before I updated a few things -- the minimum value is now negative for all locations, and the coverage parameters now use a truncated normal assumption. I am not sure why the value you highlighted is the way it is, but I can check out the old commit and see whether that issue was silently propagated forward. I will add an assert statement to ensure alpha <= alpha_star and also update the terminology :)

@NathanielBlairStahn (Collaborator) commented

    # generate draws
    """This currently relies on two major assumptions:
    1. Truncated normal distribution
    2. The same percentile from the eats_fortified and eats_fortifiable distributions sampled for each draw

    Assumption number two is likely overly restrictive, but was chosen such that eats_fortified will
    always be less than eats_fortifiable at the draw level (this is consistent with methodology described
    in 2017 concept model, but is achieved by setting the same random seed to sample each of these
    parameters)"""

    for data in [alpha, alpha_star]:
        data['value_std'] = (data.value_975_percentile - data.value_025_percentile) / 2 / 1.96
        data['a'] = (data.value_025_percentile - data.value_mean) / data.value_std
        data['b'] = (data.value_975_percentile - data.value_mean) / data.value_std
        np.random.seed(11)
        for i in range(1000):
            data[f'draw_{i}'] = scipy.stats.truncnorm.rvs(data.a, data.b, data.value_mean, data.value_std) / 100

@alibow Oh ok, I don't know why the version I was looking at was out of date. Is the above code the most current? I still have one comment:

I haven't used a scipy truncnorm distribution before, but if I'm understanding the documentation correctly, it looks like you're truncating values at the endpoints of the confidence intervals for our data (i.e. the 2.5th and 97.5th percentiles of the estimate). But we really just need to make sure the values are between 0% and 100% because they're proportions, so I think it should be:

        data['a'] = (0 - data.value_mean) / data.value_std
        data['b'] = (100 - data.value_mean) / data.value_std

Does that look right?
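For reference, a quick sanity check of those bounds (a sketch with illustrative numbers, not the notebook's data):

    import scipy.stats

    value_mean, value_std = 95.0, 10.0  # illustrative: the CI would spill past 100 without truncation
    a = (0 - value_mean) / value_std    # truncate at 0%
    b = (100 - value_mean) / value_std  # truncate at 100%
    draws = scipy.stats.truncnorm.rvs(a, b, loc=value_mean, scale=value_std, size=1000)
    assert 0 <= draws.min() and draws.max() <= 100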
