
Commit dcb384e

rm mentions to deepset public mlflow (#334)
1 parent c98babd commit dcb384e

1 file changed: +9 −17 lines

tutorials/05_Evaluation.ipynb (+9 −17)
@@ -732,23 +732,17 @@
 "metadata": {},
 "source": [
 "## Storing results in MLflow\n",
-"Storing evaluation results in CSVs is fine but not enough if you want to compare and track multiple evaluation runs. MLflow is a handy tool when it comes to tracking experiments. So we decided to use it to track all of `Pipeline.eval()` with reproducability of your experiments in mind."
+"Storing evaluation results in CSVs is fine but not enough if you want to compare and track multiple evaluation runs. MLflow is a handy tool when it comes to tracking experiments. So we decided to use it to track all of `Pipeline.eval()` with reproducibility of your experiments in mind."
 ]
 },
 {
 "attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Host your own MLflow or use deepset's public MLflow"
-]
-},
-{
-"attachments": {},
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"If you don't want to use deepset's public MLflow instance under https://public-mlflow.deepset.ai, you can easily host it yourself."
+"### MLflow setup\n",
+"\n",
+"Uncomment the following cell to install and run MLflow locally (does not work in Colab). For other options, refer to the [MLflow documentation](https://www.mlflow.org/docs/latest/index.html)."
 ]
 },
 {
@@ -907,8 +901,8 @@
 " evaluation_set_meta={\"name\": \"nq_dev_subset_v2.json\"},\n",
 " pipeline_meta={\"name\": \"sparse-pipeline\"},\n",
 " add_isolated_node_eval=True,\n",
-" experiment_tracking_tool=\"mlflow\",\n",
-" experiment_tracking_uri=\"https://public-mlflow.deepset.ai\",\n",
+" # experiment_tracking_tool=\"mlflow\", # UNCOMMENT TO USE MLFLOW\n",
+" # experiment_tracking_uri=\"YOUR-MLFLOW-TRACKING-URI\", # UNCOMMENT TO USE MLFLOW\n",
 " reuse_index=True,\n",
 ")"
 ]
@@ -948,8 +942,8 @@
 " evaluation_set_meta={\"name\": \"nq_dev_subset_v2.json\"},\n",
 " pipeline_meta={\"name\": \"embedding-pipeline\"},\n",
 " add_isolated_node_eval=True,\n",
-" experiment_tracking_tool=\"mlflow\",\n",
-" experiment_tracking_uri=\"https://public-mlflow.deepset.ai\",\n",
+" # experiment_tracking_tool=\"mlflow\", # UNCOMMENT TO USE MLFLOW\n",
+" # experiment_tracking_uri=\"YOUR-MLFLOW-TRACKING-URI\", # UNCOMMENT TO USE MLFLOW\n",
 " reuse_index=True,\n",
 " answer_scope=\"context\",\n",
 ")"
@@ -960,9 +954,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"You can now open MLflow (e.g. https://public-mlflow.deepset.ai/ if you used the public one hosted by deepset) and look for the haystack-eval-experiment experiment. Try out mlflow's compare function and have fun...\n",
-"\n",
-"Note that on our public mlflow instance we are not able to log artifacts like the evaluation results or the piplines.yaml file."
+"You can now open MLflow and look for the haystack-eval-experiment experiment. Try out mlflow's compare function and have fun..."
 ]
 },
 {
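The removed cells pointed readers at deepset's public MLflow instance; the replacement "MLflow setup" cell instead asks them to run MLflow themselves. A minimal sketch of what such a setup cell can look like, assuming a local server on port 8000 (the host and port are illustrative choices, not part of this commit; the exact cell contents are not shown in the diff):

    # Install MLflow and start a local tracking server (not supported on Colab).
    # !pip install mlflow
    # !mlflow server --host 0.0.0.0 --port 8000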
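With the tracking arguments commented out, anyone who wants experiment tracking now has to substitute their own server for the "YOUR-MLFLOW-TRACKING-URI" placeholder. A hedged sketch of how one might verify such a server with the plain mlflow client before launching a long evaluation run (the URI, run name, parameter, and metric are assumptions for illustration; the experiment name is the one the tutorial uses):

    import mlflow

    # Stand-in for the "YOUR-MLFLOW-TRACKING-URI" placeholder in the diff.
    mlflow.set_tracking_uri("http://localhost:8000")

    # Log a throwaway run to confirm the server is reachable.
    mlflow.set_experiment("haystack-eval-experiment")
    with mlflow.start_run(run_name="smoke-test"):
        mlflow.log_param("pipeline", "sparse-pipeline")
        mlflow.log_metric("dummy_metric", 0.0)

If the run appears in the MLflow UI, the same URI can be passed as experiment_tracking_uri in the tutorial's eval calls.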
