Alzheimer's: Update testing model #1888
Conversation
docs/source/models/intervention_models/alzheimers/testing_diagnosis.rst
> that most people will get retested within 5 years (Lilly requested that
> tests occur every 3-5 years). Specifically, the probability that a
> simulant *doesn't* get retested between 3 and 5 years (i.e., on one of
> the 5 time steps at 3, 3.5, 4, 4.5, 5) is :math:`(1-0.5)^5 = 3.125\%`.
Hmm, this works, I suppose. I had done it differently in the MSLT and would need to math it out more to compare the relative distribution of testing across years. The MSLT will also account for the vast majority of retesting, so it probably won't make a huge difference what we do here.
I was thinking, especially since it is the same simulants being retested every time, that when a simulant tested negative we would assign them a "re-test date" uniformly selected from 3-5 years in the future. This would take more data storage, but since everyone in the sim is positive we shouldn't have all that many people testing negative and needing future dates stored.
I suppose I could also update the MSLT to reflect this new method if we prefer it.
Let's have Nathaniel update this to request that the time until retest be uniformly distributed between 3 and 5 years, as we discussed this morning. How to accomplish this can be a detail left to the engineers, but I expect that having a retest_by column of pd.Timestamp data will be a straightforward approach for them.
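For illustration, the `retest_by` column suggested above might look something like this sketch. The column name and the 3-5 year window come from the comment; the function name, the 365.25 days/year conversion, and the rest are assumptions, not the engineers' actual implementation:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(12345)

def assign_retest_by(test_dates: pd.Series) -> pd.Series:
    """For simulants who just tested negative, draw a retest date
    uniformly between 3 and 5 years after the negative test date."""
    years_until_retest = rng.uniform(3.0, 5.0, size=len(test_dates))
    # Convert fractional years to timedeltas (365.25 days/year assumed).
    offsets = pd.to_timedelta(years_until_retest * 365.25, unit="D")
    return test_dates + offsets

# Hypothetical negative-test dates for two simulants.
test_dates = pd.Series(pd.to_datetime(["2026-01-15", "2026-07-01"]))
retest_by = assign_retest_by(test_dates)
```

Since only simulants who test negative need a stored date, the extra `pd.Timestamp` column should stay small, as noted above.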
Ok, instead of introducing an explicit "re-test date," I updated my hazard function to be non-constant so that it results in a uniform distribution for the waiting time instead of a geometric distribution.
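For the record, a non-constant hazard that makes the discrete waiting time uniform over the candidate retest steps is the standard construction :math:`h_k = 1/(n-k)` at the k-th candidate step (0-indexed). A minimal sketch, using the 5 candidate steps from the quoted passage (the formula itself is a standard result, not quoted from the doc):

```python
def uniform_waiting_time_hazards(n_steps: int) -> list[float]:
    """Hazard at candidate step k, i.e. P(retest at step k | not yet
    retested). With h_k = 1/(n - k), the unconditional probability of
    retesting at step k works out to exactly 1/n for every k."""
    return [1.0 / (n_steps - k) for k in range(n_steps)]

# Candidate steps at 3, 3.5, 4, 4.5, 5 years -> hazards 1/5, 1/4, 1/3, 1/2, 1.
hazards = uniform_waiting_time_hazards(5)

# Unconditional probability of each step: survive earlier steps, fire now.
probs = []
survival = 1.0
for h in hazards:
    probs.append(survival * h)
    survival *= 1.0 - h
```

Each step ends up with probability 1/5, replacing the geometric distribution a constant hazard of 0.5 would give, and the final hazard of 1 guarantees everyone is retested by the 5-year step.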
> upon entering the simulation, we will assign a BBBM testing history to
> each initialized simulant who is eligible for a BBBM test. Since
> simulants are only eligible for testing every three years (more
> precisely, every 6 time steps), we will assign a random test date within
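The quoted step-counting initialization could be sketched as follows. The 6-step window is from the quote; the roughly-half-year step size and the reading "last test occurred uniformly 0-5 steps ago" are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
STEP_DAYS = 183          # assumed time-step size of roughly half a year
STEPS_PER_CYCLE = 6      # testing eligibility recurs every 6 time steps

def initialize_steps_since_last_test(n_simulants: int) -> np.ndarray:
    """Uniformly choose how many time steps ago each eligible simulant
    last had a BBBM test, so initial test dates spread evenly over one
    6-step cycle instead of everyone testing immediately."""
    return rng.integers(0, STEPS_PER_CYCLE, size=n_simulants)

steps_ago = initialize_steps_since_last_test(10_000)
days_since_last_test = steps_ago * STEP_DAYS
```

Counting whole time steps sidesteps the bias concern raised below about rounding a continuous draw when steps are not exactly half a year.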
@aflaxman I wrote this section post hoc to describe what the engineers actually did, which involves counting time steps explicitly. I think they do it that way to ensure uniformity in sampling since the time steps are not exactly half a year (though really they should be if we wanted to do things better...), so maybe if we just rounded from a continuous time, it would tend to be biased in one direction or the other (though I'm not totally sure). Now that we're re-doing this piece, do you think I should rewrite this in terms of continuous time to avoid locking us into a particular time step, or is it fine to leave it in terms of the number of time steps? Rewriting it will require an explicit strategy for mapping continuous time to a discrete time step -- I'm not sure what the best strategy is or what strategies are compatible with how the engineers think about time steps. I think we are slated to have a wider team meeting about time-step-related issues like this at some point, but I'm not sure when.
Fine to keep to minimal change and not rewrite
Fine to leave in terms of number of time steps. But I don't want to make it a habit for future projects. :yuck face:
…e 'on timestep' instructions
docs/source/models/intervention_models/alzheimers/testing_diagnosis.rst
> - Simulant is not in MCI or AD dementia state (they can only be in
>   susceptible or preclinical)
> - Simulant age is :math:`\ge 65` and :math:`< 80`
Note this change: >= 65 now, at CSU client's request. (It was different before)
Albrja/mic 6782/update testing: Update testing model

- *Category*: Feature
- *JIRA issue*: https://jira.ihme.washington.edu/browse/MIC-6782
- *Research reference*: ihmeuw/vivarium_research#1888

Changes and notes

- Update retesting period so simulants are retested uniformly from 3-5 years instead of every 3 years
- Update initialization to reflect updates to testing history sampling

Verification and Testing

Looked at simulants in an interactive simulation and saw the correct sampling for simulants' test history and future test dates.
Note: There are some things left to confirm or update, so I marked this as a draft pull request: