notebooks/03 Training and Testing.ipynb
14 additions & 14 deletions
@@ -413,7 +413,7 @@
 "id": "eebb2ef2",
 "metadata": {},
 "source": [
-"The core capability of PyEPO is to build optimization models with GurobiPy, Pyomo, or any other solvers and algorithms, then embed the optimization model into an artificial neural network for the end-to-end training. For this purpose, PyEPO implements **SPO+ loss** and **differentiable Black-Box optimizer**, **differentiable perturbed optimizer**, and **Fenchel-Young loss with Perturbation** as PyTorch autograd modules.\n",
+"The core capability of PyEPO is to build optimization models with GurobiPy, Pyomo, or any other solvers and algorithms, then embed the optimization model into an artificial neural network for end-to-end training. For this purpose, PyEPO implements **SPO+ loss**, **differentiable Black-Box optimizer**, **differentiable perturbed optimizer**, **Fenchel-Young loss with Perturbation**, **Noise Contrastive Estimation**, and **Learning to Rank** as PyTorch autograd modules.\n",
 "\n",
 "We will train and test the above approaches."
 ]
@@ -1436,7 +1436,7 @@
 "id": "7eaab10b",
 "metadata": {},
 "source": [
-"it uses a Noise Contrastive approach to motivate a family of surrogate loss functions, based on viewing non-optimal solutions as negative examples."
+"It uses a noise contrastive approach to motivate a family of surrogate loss functions, based on viewing non-optimal solutions as negative examples. For NCE, the cost vector is predicted from contextual data, and the loss maximizes the separation between the objective value of the optimal solution and those of the negative examples."
 ]
},
{
@@ -1473,8 +1473,8 @@
 "``pyepo.func.NCE`` allows us to use a noise contrastive estimation loss for training, which requires parameters:\n",
 "- ``optmodel``: a PyEPO optimization model\n",
 "- ``processes``: number of processors for multi-threading, 1 for a single core, 0 for all cores\n",
-"- ``solve-ratio``: a ratio between 0 and 1 that denotes for what proportion of cost vectors predicted during training the instantiated optimization problem should be solved. Whenever the optimization problem is solved, the obtained solution is added to the solution pool which is ranked over.\n",
-"- ``dataset``: a dataset to initialize the solution pool with. Usually this is simply the training set."
+"- ``solve_ratio``: the ratio of predicted cost vectors for which the optimization problem is solved during training; each obtained solution is added to the solution pool\n",
+"- ``dataset``: a dataset to initialize the solution pool with; usually this is simply the training set"
 ]
},
{
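The solution pool maintained via ``solve_ratio`` and ``dataset`` is what the NCE loss contrasts against. As a rough plain-Python illustration of the idea only (PyEPO's actual module is a PyTorch autograd function over tensors; all names here are hypothetical):

```python
# Hypothetical sketch of an NCE-style loss over a fixed solution pool for a
# linear minimization objective; `pred_cost`, `true_sol`, and `pool` are
# illustrative names, NOT PyEPO's API.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def nce_loss(pred_cost, true_sol, pool):
    # Treat every non-optimal solution in the pool as a negative example:
    # push the predicted objective of the optimum below those of the
    # negatives, so a more negative loss means a better ranking.
    negatives = [s for s in pool if s != true_sol]
    obj_opt = dot(pred_cost, true_sol)
    return sum(obj_opt - dot(pred_cost, s) for s in negatives) / len(negatives)

# Tiny example: binary solutions of a 2-item problem; the optimum picks item 0.
pool = [[1, 0], [0, 1], [1, 1]]            # solution pool (e.g. seeded from the dataset)
loss = nce_loss([1.0, 3.0], [1, 0], pool)  # -2.5: the optimum is cheapest, as desired
```

Training then backpropagates through the predicted cost vector; here the pool is fixed, whereas ``solve_ratio`` controls how often PyEPO grows it with newly solved solutions.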
@@ -1661,7 +1661,7 @@
 "id": "2327a93b",
 "metadata": {},
 "source": [
-"The listwise learning to rank loss measures the difference in how the predicted cost vector and the true cost vector rank a pool of feasible solutions, where listwise ranking measures the scores of the whole ranked lists."
+"An autograd module for listwise learning to rank, where the goal is to learn an objective function that ranks a pool of feasible solutions correctly. For listwise LTR, the cost vector is predicted from contextual data, and the loss compares the scores of the whole ranked lists."
 ]
},
{
@@ -1703,8 +1703,8 @@
 "``pyepo.func.listwiseLTR`` allows us to use a listwise learning to rank loss for training, which requires parameters:\n",
 "- ``optmodel``: a PyEPO optimization model\n",
 "- ``processes``: number of processors for multi-threading, 1 for a single core, 0 for all cores\n",
-"- ``solve-ratio``: a ratio between 0 and 1 that denotes for what proportion of cost vectors predicted during training the instantiated optimization problem should be solved. Whenever the optimization problem is solved, the obtained solution is added to the solution pool which is ranked over.\n",
-"- ``dataset``: a dataset to initialize the solution pool with. Usually this is simply the training set."
+"- ``solve_ratio``: the ratio of predicted cost vectors for which the optimization problem is solved during training; each obtained solution is added to the solution pool\n",
+"- ``dataset``: a dataset to initialize the solution pool with; usually this is simply the training set"
 ]
},
{
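To make the listwise idea concrete, here is a ListNet-style sketch under the assumption of a linear objective and a fixed solution pool (hypothetical names, not PyEPO's internals):

```python
# Hypothetical listwise learning-to-rank sketch: compare the ranking
# distributions that the predicted and true cost vectors induce over the
# solution pool. Illustrative only, not PyEPO's implementation.
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def listwise_ltr_loss(pred_cost, true_cost, pool):
    # For minimization, a lower objective value should mean a higher
    # ranking score, hence the negated objectives; the loss is the
    # cross-entropy between the two list distributions.
    p_true = softmax([-dot(true_cost, s) for s in pool])
    p_pred = softmax([-dot(pred_cost, s) for s in pool])
    return -sum(t * math.log(p) for t, p in zip(p_true, p_pred))

pool = [[1, 0], [0, 1], [1, 1]]
loss = listwise_ltr_loss([1.0, 3.0], [1.0, 2.0], pool)  # minimal when the costs agree
```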
@@ -1911,7 +1911,7 @@
 "id": "7477432c",
 "metadata": {},
 "source": [
-"The pairwise learning to rank loss measures the difference in how the predicted cost vector and the true cost vector rank a pool of feasible solutions, where pairwise ranking aim to learn the relative ordering of pairs of items."
+"An autograd module for pairwise learning to rank, where the goal is to learn an objective function that ranks a pool of feasible solutions correctly. For pairwise LTR, the cost vector is predicted from contextual data, and the loss learns the relative ordering of pairs of items."
 ]
},
{
@@ -1950,10 +1950,10 @@
 "id": "27d7caf5",
 "metadata": {},
 "source": [
-"``pyepo.func.listwiseLTR`` allows us to use a listwise learning to rank loss for training, which requires parameters:\n",
+"``pyepo.func.pairwiseLTR`` allows us to use a pairwise learning to rank loss for training, which requires parameters:\n",
 "- ``optmodel``: a PyEPO optimization model\n",
 "- ``processes``: number of processors for multi-threading, 1 for a single core, 0 for all cores\n",
-"- ``solve-ratio``: a ratio between 0 and 1 that denotes for what proportion of cost vectors predicted during training the instantiated optimization problem should be solved. Whenever the optimization problem is solved, the obtained solution is added to the solution pool which is ranked over.\n",
+"- ``solve_ratio``: the ratio of predicted cost vectors for which the optimization problem is solved during training; each obtained solution is added to the solution pool\n",
 "- ``dataset``: a dataset to initialize the solution pool with. Usually this is simply the training set."
 ]
},
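A minimal sketch of the pairwise idea, assuming a linear objective, a fixed pool, and a hinge margin (all names and the ``margin`` parameter are hypothetical, not PyEPO's internals):

```python
# Hypothetical pairwise hinge-loss sketch for ranking a solution pool;
# illustrative only, not PyEPO's implementation.
from itertools import combinations

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pairwise_ltr_loss(pred_cost, true_cost, pool, margin=0.1):
    # For each pair the true cost orders strictly, require the predicted
    # objectives to respect the same order by at least `margin`.
    total, n = 0.0, 0
    for si, sj in combinations(pool, 2):
        gap_true = dot(true_cost, sj) - dot(true_cost, si)
        if gap_true == 0:
            continue  # true cost ties this pair; no ordering to learn
        better, worse = (si, sj) if gap_true > 0 else (sj, si)
        gap_pred = dot(pred_cost, worse) - dot(pred_cost, better)
        total += max(0.0, margin - gap_pred)
        n += 1
    return total / n

pool = [[1, 0], [0, 1], [1, 1]]
loss_good = pairwise_ltr_loss([1.0, 3.0], [1.0, 2.0], pool)  # 0.0: order preserved
loss_bad = pairwise_ltr_loss([3.0, 1.0], [1.0, 2.0], pool)   # > 0: a pair is flipped
```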
@@ -2162,7 +2162,7 @@
 "id": "ab4fef25",
 "metadata": {},
 "source": [
-"The pointwise learning to rank loss measures the difference in how the predicted cost vector and the true cost vector rank a pool of feasible solutions, where pointwise ranking calculates the ranking scores of the items."
+"An autograd module for pointwise learning to rank, where the goal is to learn an objective function that ranks a pool of feasible solutions correctly. For pointwise LTR, the cost vector is predicted from contextual data, and the loss calculates a ranking score for each item."
 ]
},
{
@@ -2201,11 +2201,11 @@
 "id": "05dea9c7",
 "metadata": {},
 "source": [
-"``pyepo.func.listwiseLTR`` allows us to use a listwise learning to rank loss for training, which requires parameters:\n",
+"``pyepo.func.pointwiseLTR`` allows us to use a pointwise learning to rank loss for training, which requires parameters:\n",
 "- ``optmodel``: a PyEPO optimization model\n",
 "- ``processes``: number of processors for multi-threading, 1 for a single core, 0 for all cores\n",
-"- ``solve-ratio``: a ratio between 0 and 1 that denotes for what proportion of cost vectors predicted during training the instantiated optimization problem should be solved. Whenever the optimization problem is solved, the obtained solution is added to the solution pool which is ranked over.\n",
-"- ``dataset``: a dataset to initialize the solution pool with. Usually this is simply the training set."
+"- ``solve_ratio``: the ratio of predicted cost vectors for which the optimization problem is solved during training; each obtained solution is added to the solution pool\n",
+"- ``dataset``: a dataset to initialize the solution pool with; usually this is simply the training set"
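A minimal sketch of the pointwise idea under the same assumptions (linear objective, fixed solution pool; hypothetical names, not PyEPO's internals):

```python
# Hypothetical pointwise learning-to-rank sketch: score each pooled solution
# by its objective value and regress predicted scores onto true scores.
# Illustrative only, not PyEPO's implementation.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pointwise_ltr_loss(pred_cost, true_cost, pool):
    # Mean squared error between predicted and true objective values,
    # treating each solution in the pool as one ranked item.
    return sum((dot(pred_cost, s) - dot(true_cost, s)) ** 2 for s in pool) / len(pool)

pool = [[1, 0], [0, 1], [1, 1]]
loss = pointwise_ltr_loss([1.0, 3.0], [1.0, 2.0], pool)  # (0 + 1 + 1) / 3
```

Unlike the pairwise and listwise variants, this penalizes absolute score errors rather than ordering errors, which is the simplest of the three surrogates.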