Commit ac68d32

change recommendation from scipy.optimize.minimize to skopt.optimizer.gp_minimize -- works much better (and faster)!
1 parent e703724 commit ac68d32

4 files changed: +11, -21 lines changed

content/assignments/Assignment_2:Search_of_Associative_Memory_Model/README.md

Lines changed: 2 additions & 2 deletions
@@ -110,7 +110,7 @@ You will fit **eight parameters** to optimize the match to human recall data:
 7. $m_{1_{\text{max}}}$: maximum number of *contextual* association cueing failures
 8. $m_{2_{\text{max}}}$: maximum number of *episodic* association cueing failures

-You can choose any approach you wish to fit these parameters. My "recommended" approach is to use [scipy.optimize.minimize](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html) to minimize the mean squared error between the point-by-point observed vs. model-predicted values for the following behavioral curves:
+You can choose any approach you wish to fit these parameters. My "recommended" approach is to use [skopt.optimizer.gp_minimize](https://scikit-optimize.github.io/stable/modules/generated/skopt.optimizer.gp_minimize.html#skopt.optimizer.gp_minimize) to minimize the mean squared error between the point-by-point observed vs. model-predicted values for the following behavioral curves:
 - $p(\text{first recall})$: probability of recalling each item **first** as a function of its *presentation position*
 - $p(\textit{recall})$: probability of recalling each item at *any* output position as a function of its presentation position
 - lag-CRP: probability of recalling item $i$ given that item $j$ was the previous recall, as a function of $lag = i - j$.
@@ -139,7 +139,7 @@ You can use the [example notebook](https://contextlab.github.io/memory-models-co
 - To help with computing mean squared error, it will be useful to have a function that takes in a dataset as input and returns a vector comprising each of these curves, for each list length and presentation rate, concatenated together into a single vector.

 ### **Step 4: Fit Model Parameters**
-- To compute mean squared error for a given set of model parameters, use the function you wrote above to compute the concatenated behavioral curves for the *observed recalls* and the *model-predicted recalls*. The average squared point-by-point difference between the vectors is the mean squared error. You'll want to set up [scipy.optimize.minimize](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html) to find the set of model parameters that minimizes the mean squared error between the observed and predicted curves, using only the training dataset.
+- To compute mean squared error for a given set of model parameters, use the function you wrote above to compute the concatenated behavioral curves for the *observed recalls* and the *model-predicted recalls*. The average squared point-by-point difference between the vectors is the mean squared error. You'll want to set up [skopt.optimizer.gp_minimize](https://scikit-optimize.github.io/stable/modules/generated/skopt.optimizer.gp_minimize.html#skopt.optimizer.gp_minimize) to find the set of model parameters that minimizes the mean squared error between the observed and predicted curves, using only the training dataset.
 - Importantly, you should use the same parameters across all trials and experimental conditions. You're fitting the *average* performance, not data from individual trials or participants.

 ### **Step 5: Generate Key Plots**
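A minimal sketch of how the recommended `gp_minimize` setup might look for this step. The helpers `behavioral_curves` (returns the concatenated curves for a dataset), `simulate_recalls` (runs the model with a candidate parameter set), and `training_data`, along with the unit-interval bounds, are illustrative placeholders rather than code from this repository:

```python
# Hypothetical sketch -- `behavioral_curves`, `simulate_recalls`, `training_data`,
# and the (0, 1) bounds are placeholders, not part of this repository.
import numpy as np
from skopt import gp_minimize

observed = behavioral_curves(training_data)  # concatenated observed curves

def mse_loss(params):
    # run the model with the candidate parameters, then compare curves point by point
    predicted = behavioral_curves(simulate_recalls(params, training_data))
    return float(np.mean((np.asarray(observed) - np.asarray(predicted)) ** 2))

bounds = [(0.0, 1.0)] * 8  # one (low, high) pair per free parameter
result = gp_minimize(mse_loss, bounds, n_calls=100, random_state=0)
best_params, best_mse = result.x, result.fun
```

`gp_minimize` expects an objective that takes a list of parameter values and returns a scalar; the best parameter vector is returned in `result.x` and the corresponding loss in `result.fun`.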

content/assignments/Assignment_2:Search_of_Associative_Memory_Model/sam_assignment_template.ipynb

Lines changed: 1 addition & 1 deletion
@@ -203,7 +203,7 @@
 "Other tasks:\n",
 " - Fit params to [Murdock (1962) dataset](https://github.com/ContextLab/memory-models-course/tree/main/content/assignments/Assignment_2%3ASearch_of_Associative_Memory_Model/Murd62%20data) that you downloaded with the `load_data` function.\n",
 " - You'll need to define a \"loss\" function. I suggest computing MSE for one or more behavioral curves, computed for a subset of the Murdock (1962) participants/lists\n",
-" - I suggest using [scipy.optimize.minimize](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html) to estimate the model parameters.\n",
+" - I suggest using [skopt.optimizer.gp_minimize](https://scikit-optimize.github.io/stable/modules/generated/skopt.optimizer.gp_minimize.html#skopt.optimizer.gp_minimize) to estimate the model parameters.\n",
 " - Create observed/predicted plots for held-out data:\n",
 " - p(first recall)\n",
 " - p(recall)\n",

content/assignments/Assignment_3:Context_Maintenance_and_Retrieval_Model/README.md

Lines changed: 1 addition & 1 deletion
@@ -120,7 +120,7 @@ Fit the model to the following curves and measures from the Polyn et al. (2009)
 There are several possible ways to accomplish this. My recommended approach is:
 1. Split the dataset into a training set and a test set
 2. Compute the above curves/measures for the training set and concatenate them into a single vector
-3. Use [scipy.optimize.minimize](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html#scipy.optimize.minimize) to find the set of model parameters that minimizes the mean squared error between the observed curves and the CMR-estimated curves (using the given parameters).
+3. Use [skopt.optimizer.gp_minimize](https://scikit-optimize.github.io/stable/modules/generated/skopt.optimizer.gp_minimize.html#skopt.optimizer.gp_minimize) to find the set of model parameters that minimizes the mean squared error between the observed curves and the CMR-estimated curves (using the given parameters).
 4. Compare the observed performance vs. CMR-estimated performance (using the best-fitting parameters) for the test data

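As a companion to the sketch above, here is one way the surrounding split/concatenate/evaluate recipe (steps 1, 2, and 4) might be organized; `all_lists`, `spc`, `pfr`, `lag_crp`, `simulate_cmr`, and `best_params` are placeholder names, not functions defined in this repository:

```python
# Hypothetical sketch -- every undefined name below is a placeholder to implement yourself.
import numpy as np
from sklearn.model_selection import train_test_split

# 1. split the dataset into training and test lists
train_lists, test_lists = train_test_split(all_lists, test_size=0.5, random_state=0)

# 2. concatenate the behavioral curves/measures into a single vector
def concatenated_curves(lists):
    return np.concatenate([spc(lists), pfr(lists), lag_crp(lists)])

# 3. fit the parameters with gp_minimize against concatenated_curves(train_lists),
#    as in the earlier sketch, yielding best_params

# 4. compare observed vs. CMR-estimated performance on the held-out test lists
observed_test = concatenated_curves(test_lists)
predicted_test = concatenated_curves(simulate_cmr(best_params, test_lists))
test_mse = float(np.mean((observed_test - predicted_test) ** 2))
```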

content/assignments/Assignment_3:Context_Maintenance_and_Retrieval_Model/cmr_assignment_template.ipynb

Lines changed: 7 additions & 17 deletions
@@ -3,8 +3,8 @@
 {
 "cell_type": "markdown",
 "metadata": {
-"id": "view-in-github",
-"colab_type": "text"
+"colab_type": "text",
+"id": "view-in-github"
 },
 "source": [
 "<a href=\"https://colab.research.google.com/github/ContextLab/memory-models-course/blob/main/content/assignments/Assignment_3%3AContext_Maintenance_and_Retrieval_Model/cmr_assignment_template.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
@@ -113,19 +113,13 @@
 "\n",
 "data = load_data()"
 ]
-},
-{
-"cell_type": "code",
-"source": [],
-"metadata": {
-"id": "IjVqOOsZEM4q"
-},
-"id": "IjVqOOsZEM4q",
-"execution_count": null,
-"outputs": []
 }
 ],
 "metadata": {
+"colab": {
+"include_colab_link": true,
+"provenance": []
+},
 "kernelspec": {
 "display_name": "memory-course",
 "language": "python",
@@ -142,12 +136,8 @@
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
 "version": "3.11.0"
-},
-"colab": {
-"provenance": [],
-"include_colab_link": true
 }
 },
 "nbformat": 4,
 "nbformat_minor": 5
-}
+}
