From 5c982a36d6d6ede0112feff1750c979a6cebfee7 Mon Sep 17 00:00:00 2001 From: "Daniel J. Beutel" Date: Sun, 8 Mar 2026 20:28:26 +0100 Subject: [PATCH 1/5] docs(framework): Document local SuperLink usage --- .../source/how-to-configure-audit-logging.rst | 4 +- .../docs/source/how-to-run-flower-locally.rst | 195 +++++++++++++++ ...w-to-run-flower-with-deployment-engine.rst | 14 +- .../docs/source/how-to-run-simulations.rst | 13 +- .../source/how-to-use-cli-json-output.rst | 224 +++++++++--------- .../docs/source/ref-flower-configuration.rst | 16 +- .../source/ref-flower-runtime-comparison.rst | 10 +- framework/docs/source/simulate.rst | 1 + .../source/tutorial-quickstart-fastai.rst | 64 ++--- .../tutorial-quickstart-huggingface.rst | 69 ++---- .../docs/source/tutorial-quickstart-jax.rst | 89 ++----- .../docs/source/tutorial-quickstart-mlx.rst | 69 ++---- .../tutorial-quickstart-pytorch-lightning.rst | 68 ++---- .../source/tutorial-quickstart-pytorch.rst | 69 ++---- .../tutorial-quickstart-scikitlearn.rst | 79 ++---- .../source/tutorial-quickstart-tensorflow.rst | 66 ++---- .../source/tutorial-quickstart-xgboost.rst | 65 ++--- ...-build-a-strategy-from-scratch-pytorch.rst | 5 +- ...al-series-customize-the-client-pytorch.rst | 5 +- ...series-get-started-with-flower-pytorch.rst | 74 ++---- ...-a-federated-learning-strategy-pytorch.rst | 8 +- 21 files changed, 514 insertions(+), 693 deletions(-) create mode 100644 framework/docs/source/how-to-run-flower-locally.rst diff --git a/framework/docs/source/how-to-configure-audit-logging.rst b/framework/docs/source/how-to-configure-audit-logging.rst index cf83ae2a0c5c..2bd3ea26ddee 100644 --- a/framework/docs/source/how-to-configure-audit-logging.rst +++ b/framework/docs/source/how-to-configure-audit-logging.rst @@ -102,12 +102,12 @@ Here is an example output when a user runs ``flwr run`` (note the ``"action": INFO : ControlServicer.StartRun INFO : [AUDIT] {"timestamp": "2025-07-12T10:24:21Z", "actor": {"actor_id": "...", 
"description": "...", "ip_address": "..."}, "event": {"action": "ControlServicer.StartRun", "run_id": "...", "fab_hash": "..."}, "status": "completed"} -Here is another example output when a user runs ``flwr ls``: +Here is another example output when a user runs ``flwr list``: .. code-block:: shell INFO : [AUDIT] {"timestamp": "2025-07-12T10:26:35Z", "actor": {"actor_id": "...", "description": "...", "ip_address": "..."}, "event": {"action": "ControlServicer.ListRuns", "run_id": null, "fab_hash": null}, "status": "started"} - INFO : ControlServicer.List + INFO : ControlServicer.ListRuns INFO : [AUDIT] {"timestamp": "2025-07-12T10:26:35Z", "actor": {"actor_id": "...", "description": "...", "ip_address": "..."}, "event": {"action": "ControlServicer.ListRuns", "run_id": null, "fab_hash": null}, "status": "completed"} And here is an example when a SuperNode pulls a message from the SuperLink: diff --git a/framework/docs/source/how-to-run-flower-locally.rst b/framework/docs/source/how-to-run-flower-locally.rst new file mode 100644 index 000000000000..ce175ee82735 --- /dev/null +++ b/framework/docs/source/how-to-run-flower-locally.rst @@ -0,0 +1,195 @@ +:og:description: Learn how local `flwr run` uses a managed local SuperLink, how to inspect runs, stream logs, stop runs, and stop the background local SuperLink process. +.. meta:: + :description: Learn how local `flwr run` uses a managed local SuperLink, how to inspect runs, stream logs, stop runs, and stop the background local SuperLink process. + +=========================================== +Run Flower Locally with a Managed SuperLink +=========================================== + +When you use a local Flower configuration profile with ``options.*`` and no explicit +``address``, ``flwr`` does not call the simulation runtime directly. Instead, Flower +starts a managed local ``flower-superlink`` on demand, submits the run through the +Control API, and the local SuperLink executes the run with the simulation runtime. 
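The dispatch rule above can be sketched in a few lines of Python. This is illustrative only — the helper name and dict shape are hypothetical, not Flower's actual implementation; it only mirrors the documented decision: a profile with ``options.*`` and no explicit ``address`` goes to the managed local SuperLink.

```python
# Hypothetical sketch of the documented dispatch rule; not part of the
# Flower API.
def uses_managed_local_superlink(profile: dict) -> bool:
    # A parsed TOML profile carries "options.*" as a nested "options"
    # table (or as flat dotted keys, depending on the parser).
    has_options = "options" in profile or any(
        key.startswith("options.") for key in profile
    )
    # No explicit "address" means Flower manages a local SuperLink.
    return has_options and "address" not in profile


local_profile = {"options": {"num-supernodes": 10}}
remote_profile = {"address": "superlink.example.com:9093"}
```

Profiles that do set an ``address`` are instead treated as connections to an existing SuperLink, as described in the deployment guide.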
+ +This is the default experience for a profile like the one created automatically in your +Flower configuration: + +.. code-block:: toml + + [superlink.local] + options.num-supernodes = 10 + options.backend.client-resources.num-cpus = 1 + options.backend.client-resources.num-gpus = 0 + +If ``FLWR_HOME`` is unset, Flower stores this managed local runtime under +``$HOME/.flwr/local-superlink``. + +**************************** + What Flower starts for you +**************************** + +On the first command that needs the local Control API, Flower starts a local +``flower-superlink`` process automatically. That process: + +- listens on ``127.0.0.1:39093`` for the Control API +- listens on ``127.0.0.1:39094`` for SimulationIO +- keeps running in the background after your command finishes +- is reused by later ``flwr run``, ``flwr list``, ``flwr log``, and ``flwr stop`` + commands + +You can override those default ports with the environment variables +``FLWR_LOCAL_CONTROL_API_PORT`` and ``FLWR_LOCAL_SIMULATIONIO_API_PORT``. + +***************** + Submit a run +***************** + +From your Flower App directory, submit a run as usual: + +.. code-block:: shell + + $ flwr run . + +Representative output: + +.. code-block:: text + + Successfully built flwrlabs.myapp.1-0-0.014c8eb3.fab + Starting local SuperLink on 127.0.0.1:39093... + Successfully started run 1859953118041441032 + +Plain ``flwr run .`` submits the run, prints the run ID, and returns. If you want to +submit the run and immediately follow the logs in the same terminal, use: + +.. code-block:: shell + + $ flwr run . --stream + +************ + List runs +************ + +To see all runs known to the local SuperLink: + +.. code-block:: shell + + $ flwr list + +To inspect one run in detail: + +.. code-block:: shell + + $ flwr list --run-id 1859953118041441032 + +*********** + View logs +*********** + +To stream logs continuously: + +.. 
code-block:: shell + + $ flwr log 1859953118041441032 --stream + +To fetch the currently available logs once and return: + +.. code-block:: shell + + $ flwr log 1859953118041441032 --show + +Representative streamed output: + +.. code-block:: text + + INFO : Starting FedAvg strategy: + INFO : Number of rounds: 3 + INFO : [ROUND 1/3] + INFO : configure_train: Sampled 5 nodes (out of 10) + INFO : aggregate_train: Received 5 results and 0 failures + ... + +************ + Stop a run +************ + +To stop a submitted or running run: + +.. code-block:: shell + + $ flwr stop 1859953118041441032 + +This stops the run only. It does **not** stop the background local SuperLink process. + +********************************* + Local runtime files and state +********************************* + +The managed local SuperLink keeps its files in ``$FLWR_HOME/local-superlink/``: + +- ``state.db`` stores the local SuperLink state +- ``ffs/`` stores SuperLink file artifacts +- ``superlink.log`` stores the local SuperLink process output + +These files persist across local runs until you remove them yourself. + +*************************************** + Stop the background local SuperLink +*************************************** + +There is currently no dedicated ``flwr`` command to stop the managed local SuperLink +process. To stop it, first inspect the matching process and then terminate it. + +macOS/Linux +=========== + +Inspect the process: + +.. code-block:: shell + + $ ps aux | grep '[f]lower-superlink.*--control-api-address 127.0.0.1:39093' + +Stop the process: + +.. code-block:: shell + + $ pkill -f 'flower-superlink.*--control-api-address 127.0.0.1:39093' + +Windows PowerShell +================== + +Inspect the process: + +.. code-block:: powershell + + PS> Get-CimInstance Win32_Process | + >> Where-Object { + >> $_.CommandLine -like '*flower-superlink*--control-api-address 127.0.0.1:39093*' + >> } | + >> Select-Object ProcessId, CommandLine + +Stop the process: + +.. 
code-block:: powershell + + PS> Get-CimInstance Win32_Process | + >> Where-Object { + >> $_.CommandLine -like '*flower-superlink*--control-api-address 127.0.0.1:39093*' + >> } | + >> ForEach-Object { Stop-Process -Id $_.ProcessId } + +If you changed the local Control API port with ``FLWR_LOCAL_CONTROL_API_PORT``, replace +``39093`` in the commands above. + +******************* + Troubleshooting +******************* + +If a local run fails before it starts, or if the managed local SuperLink does not come +up correctly, inspect: + +.. code-block:: text + + $FLWR_HOME/local-superlink/superlink.log + +That log contains the output of the background ``flower-superlink`` process and is the +first place to check for startup errors, port conflicts, or runtime failures. diff --git a/framework/docs/source/how-to-run-flower-with-deployment-engine.rst b/framework/docs/source/how-to-run-flower-with-deployment-engine.rst index b3c91409b1fc..2eaa139b2fd9 100644 --- a/framework/docs/source/how-to-run-flower-with-deployment-engine.rst +++ b/framework/docs/source/how-to-run-flower-with-deployment-engine.rst @@ -76,8 +76,10 @@ executing ``flwr new``: .. note:: - If you decide to run the project with ``flwr run .``, the Simulation Engine will be - used. Continue to Step 2 to know how to instead use the Deployment Engine. + If you decide to run the project with ``flwr run .`` against the default local + profile, Flower submits the run to a managed local SuperLink, which then executes + it with the Simulation Runtime. Continue to Step 2 to instead point ``flwr run`` at + a named SuperLink connection for the Deployment Runtime. .. tip:: @@ -176,10 +178,10 @@ At this point, you have launched two SuperNodes that are connected to the same SuperLink. The system is idling waiting for a ``Run`` to be submitted. 
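Before walking through the configuration steps, here is a minimal sketch of the kind of connection profile this section builds up to. The profile name, address, and port are illustrative assumptions, and the insecure flag is for local testing only — real deployments require TLS:

```toml
# Hypothetical connection profile for a SuperLink started locally;
# adjust the profile name and address to your deployment.
[superlink.local-deployment]
address = "127.0.0.1:9093"
insecure = true
```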
Before you can run your Flower App through the federation, you need a way to tell ``flwr run`` that the App is to be executed via the SuperLink we just started, instead of using the local -Simulation Engine (the default). Doing this is easy: define a new SuperLink connection -in the **Flower Configuration** file, indicate the address of the SuperLink and pass a -certificate (if any) or set the insecure flag (only when testing locally, real -deployments require TLS). +managed local SuperLink workflow used by the default local profile. Doing this is easy: +define a new SuperLink connection in the **Flower Configuration** file, indicate the +address of the SuperLink and pass a certificate (if any) or set the insecure flag (only +when testing locally, real deployments require TLS). 1. Find the Flower Configuration TOML file on your machine. This file is automatically created for you when you first use a Flower CLI command. Use ``flwr config list`` to diff --git a/framework/docs/source/how-to-run-simulations.rst b/framework/docs/source/how-to-run-simulations.rst index 37fbaae43042..6ab74d084148 100644 --- a/framework/docs/source/how-to-run-simulations.rst +++ b/framework/docs/source/how-to-run-simulations.rst @@ -109,11 +109,14 @@ multiple apps to choose from. The example below uses the ``PyTorch`` quickstart flwr new @flwrlabs/quickstart-pytorch Then, follow the instructions shown after completing the |flwr_new_link|_ command. When -you execute |flwr_run_link|_, you'll be using the ``Simulation Engine``. - -For local simulation profiles, ``flwr run`` submits the run to a local SuperLink via the -Control API. If the profile has ``options.*`` and no explicit ``address``, Flower starts -a local SuperLink automatically when needed. +you execute |flwr_run_link|_, the run will execute with the ``Simulation Runtime``. +For local simulation profiles, ``flwr run`` submits the run to a managed local +SuperLink via the Control API.
If the profile has ``options.*`` and no explicit +``address``, Flower starts a local SuperLink automatically when needed, keeps it +running in the background, and reuses it for ``flwr list``, ``flwr log``, and +``flwr stop``. See :doc:`how-to-run-flower-locally` for the full local workflow and +runtime lifecycle. Simulation examples =================== diff --git a/framework/docs/source/how-to-use-cli-json-output.rst b/framework/docs/source/how-to-use-cli-json-output.rst index a7b54aa4b568..c2756432e9a9 100644 --- a/framework/docs/source/how-to-use-cli-json-output.rst +++ b/framework/docs/source/how-to-use-cli-json-output.rst @@ -2,28 +2,28 @@ Use CLI JSON output ##################### -The `Flower CLIs `_ come with a built-in JSON output mode. This mode -is useful when you want to consume the output of a Flower CLI programmatically. For -example, you might want to use the output of the ``flwr`` CLI in a script or a -continuous integration pipeline. +The `Flower CLI `_ can return JSON output for automation and +integration with other tools. .. note:: - The JSON output mode is currently only available when using the Flower CLIs with a - `SuperLink `_. Learn more about the `SuperLink` - in the `Flower Architecture Overview `_ page. + JSON output is available for the commands documented here because they operate + through the SuperLink Control API. This includes remote SuperLinks as well as the + managed local SuperLink used by local simulation profiles. -In this guide, we'll show you how to specify a JSON output with the ``flwr run``, ``flwr -ls``, and ``flwr stop`` commands. We will also provide examples of the JSON output for -each of these commands. +This guide shows JSON output for: + +- |flwr_run| +- |flwr_list| +- |flwr_stop| .. |flwr_run| replace:: ``flwr run`` -.. |flwr_ls| replace:: ``flwr ls`` +.. |flwr_list| replace:: ``flwr list`` .. |flwr_stop| replace:: ``flwr stop`` -.. _flwr_ls: ref-api-cli.html#flwr-ls +.. _flwr_list: ref-api-cli.html#flwr-list .. 
_flwr_run: ref-api-cli.html#flwr-run @@ -33,27 +33,27 @@ each of these commands. ``flwr run`` JSON output ************************** -The |flwr_run|_ command runs a Flower app from a provided directory. Note that if the -app path argument is not passed to ``flwr run``, the current working directory is used -as the default Flower app directory. By default, executing the ``flwr run`` command -prints the status of the app build and run process as follows: +The |flwr_run| command submits a Flower App run. For a local app, the CLI first builds +a FAB and then starts the run through the Control API. + +Representative default output: .. code-block:: bash - $ flwr run - Loading project configuration... - Success - 🎊 Successfully built flwrlabs.myawesomeapp.1-0-0.014c8eb3.fab - 🎊 Successfully started run 1859953118041441032 + $ flwr run . local --stream + Successfully built flwrlabs.myawesomeapp.1-0-0.014c8eb3.fab + Starting local SuperLink on 127.0.0.1:39093... + Successfully started run 1859953118041441032 + ... -To get the output in JSON format, pass an additional ``--format json`` flag: +To return structured JSON instead, use ``--format json``: .. code-block:: bash - $ flwr run --format json + $ flwr run . local --format json { "success": true, - "run-id": 1859953118041441032, + "run-id": "1859953118041441032", "fab-id": "flwrlabs/myawesomeapp", "fab-name": "myawesomeapp", "fab-version": "1.0.0", @@ -61,138 +61,142 @@ To get the output in JSON format, pass an additional ``--format json`` flag: "fab-filename": "flwrlabs.myawesomeapp.1-0-0.014c8eb3.fab" } -The JSON output for ``flwr run`` contains the following fields: - -- ``success``: A boolean indicating whether the command was successful. -- ``run-id``: The ID of the run. -- ``fab-id``: The ID of the Flower app. -- ``fab-name``: The name of the Flower app. -- ``fab-version``: The version of the Flower app. -- ``fab-hash``: The short hash of the Flower app. -- ``fab-filename``: The filename of the Flower app. 
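A script can consume this structure directly with a JSON parser. In the sketch below, the payload is a hardcoded sample mirroring the fields shown above with illustrative values; a real script would read it from the command's standard output instead.

```python
import json

# Sample payload mirroring the documented ``flwr run --format json``
# fields; values are illustrative.
payload = """
{
  "success": true,
  "run-id": "1859953118041441032",
  "fab-id": "flwrlabs/myawesomeapp",
  "fab-name": "myawesomeapp",
  "fab-version": "1.0.0",
  "fab-hash": "014c8eb3",
  "fab-filename": "flwrlabs.myawesomeapp.1-0-0.014c8eb3.fab"
}
"""

result = json.loads(payload)
# Only read the run ID when the command reported success.
run_id = result["run-id"] if result["success"] else None
```

The same pattern works for the other commands documented here, since each returns a top-level ``success`` field alongside its command-specific keys.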
- -If the command fails, the JSON output will contain two fields, ``success`` with the -value of ``false`` and ``error-message``. For example, if the command fails to find the -name of the federation on the SuperLink, the output will look like this: +The |flwr_run| JSON output contains: -.. _json_error_output: +- ``success``: ``true`` if the command succeeded +- ``run-id``: the submitted run ID +- ``fab-id``: the Flower App identifier +- ``fab-name``: the Flower App name +- ``fab-version``: the Flower App version +- ``fab-hash``: the short FAB hash +- ``fab-filename``: the built FAB filename -.. code-block:: bash +If the command fails, the JSON output contains ``success: false`` and +``error-message``. - $ flwr run --format json - { - "success": false, - "error-message": "Loading project configuration... \nSuccess\n There is no `[missing]` federation declared in the `pyproject.toml`.\n The following federations were found:\n\nfed-existing-1\nfed-existing-2\n\n" - } +*************************** + ``flwr list`` JSON output +*************************** -************************* - ``flwr ls`` JSON output -************************* +The |flwr_list| command queries runs from the current SuperLink connection. -The |flwr_ls|_ command lists all the runs in the current project. Similar to ``flwr -run``, if the app path argument is not passed to ``flwr ls``, the current working -directory is used as the Flower app directory. By default, the command list the details -of all runs in a Flower federation in a tabular format: +Representative default output: .. code-block:: bash - $ flwr ls - Loading project configuration... - Success - πŸ“„ Listing all runs... 
- ┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┓ - ┃ Run ID ┃ FAB ┃ Status ┃ Elapsed ┃ Created At ┃ Running At ┃ Finished At ┃ - ┑━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━┩ - β”‚ 185995311804 β”‚ flwrlabs/my… β”‚ finished:co… β”‚ 00:00:55 β”‚ 2024-12-16 β”‚ 2024-12-16 β”‚ 2024-12-16 β”‚ - β”‚ 1441032 β”‚ (v1.0.0) β”‚ β”‚ β”‚ 11:12:33Z β”‚ 11:12:33Z β”‚ 11:13:29Z β”‚ - β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ - β”‚ 142007406570 β”‚ flwrlabs/my… β”‚ running β”‚ 00:00:05 β”‚ 2024-12-16 β”‚ 2024-12-16 β”‚ N/A β”‚ - β”‚ 11601420 β”‚ (v1.0.0) β”‚ β”‚ β”‚ 12:18:39Z β”‚ 12:18:39Z β”‚ β”‚ - β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ - -To get the output in JSON format, simply pass the ``--format json`` flag: + $ flwr list + Listing all runs... 
+ ┏━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━┓ + ┃ Run ID ┃ FAB ┃ Status ┃ Elapsed ┃ Pending At ┃ Running At ┃ Finished At ┃ + ┑━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━┩ + β”‚ 1859953118041441032 β”‚ flwrlabs/myawes… β”‚ finished:completed β”‚ 00:00:55 β”‚ 2024-12-16 β”‚ 2024-12-16 β”‚ 2024-12-16 β”‚ + β”‚ β”‚ (v1.0.0) β”‚ β”‚ β”‚ 11:12:33Z β”‚ 11:12:33Z β”‚ 11:13:28Z β”‚ + β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ + β”‚ 1420074065701160142 β”‚ flwrlabs/myawes… β”‚ running β”‚ 00:00:09 β”‚ 2024-12-16 β”‚ 2024-12-16 β”‚ N/A β”‚ + β”‚ 0 β”‚ (v1.0.0) β”‚ β”‚ β”‚ 12:18:39Z β”‚ 12:18:39Z β”‚ β”‚ + β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ + +To return structured JSON instead: .. 
code-block:: bash - $ flwr ls --format json + $ flwr list --format json { "success": true, "runs": [ { - "run-id": 1859953118041441032, - "fab-id": "flwrlabs/myawesomeapp1", - "fab-name": "myawesomeapp1", + "run-id": "1859953118041441032", + "federation": "", + "fab-id": "flwrlabs/myawesomeapp", + "fab-name": "myawesomeapp", "fab-version": "1.0.0", "fab-hash": "014c8eb3", "status": "finished:completed", - "elapsed": "00:00:55", - "created-at": "2024-12-16 11:12:33Z", + "elapsed": 55.0, + "pending-at": "2024-12-16 11:12:33Z", + "starting-at": "2024-12-16 11:12:33Z", "running-at": "2024-12-16 11:12:33Z", - "finished-at": "2024-12-16 11:13:29Z" + "finished-at": "2024-12-16 11:13:28Z", + "network-traffic": { + "inbound-bytes": 12345, + "outbound-bytes": 6789, + "total-bytes": 19134 + }, + "compute-time": { + "serverapp-seconds": 5.2, + "clientapp-seconds": 42.7, + "total-seconds": 47.9 + } }, { - "run-id": 14200740657011601420, - "fab-id": "flwrlabs/myawesomeapp2", - "fab-name": "myawesomeapp2", + "run-id": "14200740657011601420", + "federation": "", + "fab-id": "flwrlabs/myawesomeapp", + "fab-name": "myawesomeapp", "fab-version": "1.0.0", "fab-hash": "014c8eb3", "status": "running", - "elapsed": "00:00:09", - "created-at": "2024-12-16 12:18:39Z", + "elapsed": 9.0, + "pending-at": "2024-12-16 12:18:39Z", + "starting-at": "2024-12-16 12:18:39Z", "running-at": "2024-12-16 12:18:39Z", - "finished-at": "N/A" - }, + "finished-at": "N/A", + "network-traffic": { + "inbound-bytes": 4567, + "outbound-bytes": 2345, + "total-bytes": 6912 + }, + "compute-time": { + "serverapp-seconds": 0.6, + "clientapp-seconds": 8.1, + "total-seconds": 8.7 + } + } ] } -The JSON output for ``flwr ls`` contains similar fields as ``flwr run`` with the -addition of the ``status``, ``elapsed``, ``created-at``, ``running-at``, and -``finished-at`` fields. The ``runs`` key contains a list of dictionaries, each -representing a run. 
The additional fields are: +Each entry under ``runs`` contains: -- ``status``: The status of the run, either pending, starting, running, or finished. -- ``elapsed``: The time elapsed since the run started, formatted as ``HH:MM:SS``. -- ``created-at``: The time the run was created. -- ``running-at``: The time the run started running. -- ``finished-at``: The time the run finished. +- ``run-id``: the run ID +- ``federation``: the federation identifier, if any +- ``fab-id`` / ``fab-name`` / ``fab-version`` / ``fab-hash``: Flower App metadata +- ``status``: the current run status +- ``elapsed``: elapsed run time in seconds +- ``pending-at`` / ``starting-at`` / ``running-at`` / ``finished-at``: run timestamps +- ``network-traffic``: inbound, outbound, and total bytes +- ``compute-time``: ServerApp, ClientApp, and total compute time in seconds -All timestamps adhere to ISO 8601, UTC and are formatted as ``YYYY-MM-DD HH:MM:SSZ``. +To return the detail view for a single run, use: -You can also use the ``--run-id`` flag to list the details for one run. In this case, -the JSON output will have the same structure as above with only one entry in the -``runs`` key. For more details of this command, see the |flwr_ls|_ documentation. If the -command fails, the JSON output will return two fields, ``success`` and -``error-message``, as shown in :ref:`the above example `. Note that -the content of the error message will be different depending on the error that occurred. +.. code-block:: bash + + $ flwr list --run-id 1859953118041441032 --format json + +This returns the same top-level structure with one entry in ``runs``. *************************** ``flwr stop`` JSON output *************************** -The |flwr_stop|_ command stops a running Flower app for a provided run ID. Similar to -``flwr run``, if the app path argument is not passed to ``flwr stop``, the current -working directory is used as the Flower app directory. 
By default, the command prints -the status of the stop process as follows: +The |flwr_stop| command stops a submitted or running run by run ID. + +Representative default output: .. code-block:: bash $ flwr stop 1859953118041441032 - Loading project configuration... - Success - βœ‹ Stopping run ID 1859953118041441032... - βœ… Run 1859953118041441032 successfully stopped. + Stopping run ID 1859953118041441032... + Run 1859953118041441032 successfully stopped. -To get the output in JSON format, simply pass the ``--format json`` flag: +To return structured JSON instead: .. code-block:: bash $ flwr stop 1859953118041441032 --format json { "success": true, - "run-id": 1859953118041441032, + "run-id": "1859953118041441032" } -If the command fails, the JSON output will contain two fields ``success`` with the value -of ``false`` and ``error-message``, as shown in :ref:`the above example -`. Note that the content of the error message will be different -depending on the error that occurred. +If the command fails, the JSON output contains ``success: false`` and +``error-message``. diff --git a/framework/docs/source/ref-flower-configuration.rst b/framework/docs/source/ref-flower-configuration.rst index a1a9fcc85023..f29388b42db1 100644 --- a/framework/docs/source/ref-flower-configuration.rst +++ b/framework/docs/source/ref-flower-configuration.rst @@ -28,7 +28,7 @@ you can reference by name when running ``flwr`` commands. For example, you can s configurations for local testing, staging servers, and production deployments, then easily switch between them. -Most ``flwr`` commands (like ``flwr log``, ``flwr ls``, and ``flwr stop``) can use the +Most ``flwr`` commands (like ``flwr log``, ``flwr list``, and ``flwr stop``) can use the Flower Configuration from anywhere on your system. The exception is ``flwr run``, which must be executed from within a Flower App directory to access the app code. @@ -122,7 +122,8 @@ testing before deploying to real distributed environments. 
[superlink.local] options.num-supernodes = 10 -This creates a simulation connection configuration with 10 virtual SuperNodes. +This creates a managed local SuperLink profile that runs 10 virtual SuperNodes through +the simulation runtime. **Simulation with custom resources** @@ -133,9 +134,9 @@ This creates a simulation connection configuration with 10 virtual SuperNodes. options.backend.client-resources.num-cpus = 1 options.backend.client-resources.num-gpus = 0.1 -This creates a simulation connection configuration with 100 virtual SuperNodes, where -each is allocated 1 CPU and 10% of a GPU. This is useful when you want to control -resource distribution or simulate resource-constrained environments. +This creates a managed local SuperLink profile with 100 virtual SuperNodes, where each +is allocated 1 CPU and 10% of a GPU. This is useful when you want to control resource +distribution or simulate resource-constrained environments. **When to use each** @@ -150,7 +151,10 @@ optional parameters you can use to configure your local simulation. When you use a local simulation profile (``options.*``), Flower CLI commands that communicate with SuperLink use the Control API. If the profile has no explicit -``address``, Flower starts a local SuperLink automatically when needed. +``address``, Flower starts a managed local SuperLink automatically when needed and +reuses it across ``flwr run``, ``flwr list``, ``flwr log``, and ``flwr stop``. See +:doc:`how-to-run-flower-locally` for the full local workflow, background process +behavior, and runtime file locations. *************************** Remote Deployment Example diff --git a/framework/docs/source/ref-flower-runtime-comparison.rst b/framework/docs/source/ref-flower-runtime-comparison.rst index 5a4a04d662c7..fb581ca9147b 100644 --- a/framework/docs/source/ref-flower-runtime-comparison.rst +++ b/framework/docs/source/ref-flower-runtime-comparison.rst @@ -49,9 +49,10 @@ deployment runtime. - In-memory communication. 
- TLS-enabled gRPC. - - **Server-side Infrastructure** - - Simulation runtime coordinates the spawning of multiple workers (Python process) - which act as `simulated` SuperNodes. The simulation runtime can be started with - or without the `SuperLink `_. + - In the standard local CLI workflow, ``flwr run`` submits the run to a managed + local SuperLink, which then coordinates the simulation runtime and the workers + acting as `simulated` SuperNodes. The simulation runtime itself can still be + started with or without the `SuperLink `_. - The SuperLink awaits for SuperNodes to connect. User interface with the SuperLink using the `Flower CLI `_. - - **Server-side App execution** @@ -61,7 +62,8 @@ deployment runtime. runs independently from the SuperLink and communicates with it over gRPC via the ServerAppIO API. - - **Client-side Infrastructure** - - None. The simulation runtime is self-contained. + - No user-managed client-side infrastructure is required. For local CLI workflows, + the managed local SuperLink and simulation runtime remain self-contained. - SuperNodes connect to the SuperLink via TLS-enabled gRPC using the Fleet API. Node authentication can be enabled. - - **Client-side App execution** diff --git a/framework/docs/source/simulate.rst b/framework/docs/source/simulate.rst index 902456e346d0..6d07cf6c10e7 100644 --- a/framework/docs/source/simulate.rst +++ b/framework/docs/source/simulate.rst @@ -15,4 +15,5 @@ Problem-oriented how-to guides show step-by-step how to achieve a specific goal. .. toctree:: :titlesonly: + how-to-run-flower-locally how-to-run-simulations diff --git a/framework/docs/source/tutorial-quickstart-fastai.rst b/framework/docs/source/tutorial-quickstart-fastai.rst index 216d6311b3b4..952dd83c3dfa 100644 --- a/framework/docs/source/tutorial-quickstart-fastai.rst +++ b/framework/docs/source/tutorial-quickstart-fastai.rst @@ -43,8 +43,9 @@ Next, activate your environment, then run: # Install project and dependencies $ pip install -e . 
-This example by default runs the Flower Simulation Engine, creating a federation of 10 -nodes using `FedAvg +This example uses a local simulation profile that ``flwr run`` submits to a managed +local SuperLink, which then executes the run with the Flower Simulation Runtime, +creating a federation of 10 nodes using `FedAvg `_ as the aggregation strategy. The dataset will be partitioned using Flower Dataset's `IidPartitioner @@ -53,30 +54,21 @@ Let's run the project: .. code-block:: shell - # Run with default arguments - $ flwr run . + # Run with default arguments and stream logs + $ flwr run . --stream -With default arguments you will see an output like this one: +Plain ``flwr run .`` submits the run, prints the run ID, and returns without streaming +logs. For the full local workflow, see :doc:`how-to-run-flower-locally`. + +With default arguments you will see streamed output like this: .. code-block:: shell - Loading project configuration... - Success + Successfully built flwrlabs.quickstart-fastai.1-0-0.014c8eb3.fab + Starting local SuperLink on 127.0.0.1:39093... + Successfully started run 1859953118041441032 INFO : Starting FedAvg strategy: INFO : β”œβ”€β”€ Number of rounds: 3 - INFO : β”œβ”€β”€ ArrayRecord (4.72 MB) - INFO : β”œβ”€β”€ ConfigRecord (train): (empty!) - INFO : β”œβ”€β”€ ConfigRecord (evaluate): (empty!) 
- INFO : β”œβ”€β”€> Sampling: - INFO : β”‚ β”œβ”€β”€Fraction: train (0.50) | evaluate ( 1.00) - INFO : β”‚ β”œβ”€β”€Minimum nodes: train (2) | evaluate (2) - INFO : β”‚ └──Minimum available nodes: 2 - INFO : └──> Keys in records: - INFO : β”œβ”€β”€ Weighted by: 'num-examples' - INFO : β”œβ”€β”€ ArrayRecord key: 'arrays' - INFO : └── ConfigRecord key: 'config' - INFO : - INFO : INFO : [ROUND 1/3] INFO : configure_train: Sampled 5 nodes (out of 10) INFO : aggregate_train: Received 5 results and 0 failures @@ -84,42 +76,14 @@ With default arguments you will see an output like this one: INFO : configure_evaluate: Sampled 10 nodes (out of 10) INFO : aggregate_evaluate: Received 10 results and 0 failures INFO : └──> Aggregated MetricRecord: {'eval_loss': 3.1197, 'eval_acc': 0.14874} - INFO : INFO : [ROUND 2/3] - INFO : configure_train: Sampled 5 nodes (out of 10) - INFO : aggregate_train: Received 5 results and 0 failures - INFO : └──> Aggregated MetricRecord: {} - INFO : configure_evaluate: Sampled 10 nodes (out of 10) - INFO : aggregate_evaluate: Received 10 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'eval_loss': 0.8071, 'eval_acc': 0.7488} - INFO : + INFO : ... INFO : [ROUND 3/3] - INFO : configure_train: Sampled 5 nodes (out of 10) - INFO : aggregate_train: Received 5 results and 0 failures - INFO : └──> Aggregated MetricRecord: {} - INFO : configure_evaluate: Sampled 10 nodes (out of 10) - INFO : aggregate_evaluate: Received 10 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'eval_loss': 0.5015, 'eval_acc': 0.8547} - INFO : + INFO : ... 
INFO : Strategy execution finished in 72.84s - INFO : INFO : Final results: - INFO : - INFO : Global Arrays: - INFO : ArrayRecord (4.719 MB) - INFO : - INFO : Aggregated ClientApp-side Train Metrics: - INFO : {1: {}, 2: {}, 3: {}} - INFO : - INFO : Aggregated ClientApp-side Evaluate Metrics: - INFO : { 1: {'eval_acc': '1.4875e-01', 'eval_loss': '3.1197e+00'}, - INFO : 2: {'eval_acc': '7.4883e-01', 'eval_loss': '8.0705e-01'}, - INFO : 3: {'eval_acc': '8.5467e-01', 'eval_loss': '5.0145e-01'}} - INFO : INFO : ServerApp-side Evaluate Metrics: INFO : {} - INFO : - Saving final model to disk... You can also override the parameters defined in the ``[tool.flwr.app.config]`` section diff --git a/framework/docs/source/tutorial-quickstart-huggingface.rst b/framework/docs/source/tutorial-quickstart-huggingface.rst index d126b8bb4239..2374c89577d6 100644 --- a/framework/docs/source/tutorial-quickstart-huggingface.rst +++ b/framework/docs/source/tutorial-quickstart-huggingface.rst @@ -14,8 +14,10 @@ virtual environment and run everything within a :doc:`virtualenv `. Let's use ``flwr new`` to create a complete Flower+πŸ€— Hugging Face project. It will -generate all the files needed to run, by default with the Flower Simulation Engine, a -federation of 10 nodes using |fedavg|_ The dataset will be partitioned using +generate all the files needed to run a federation of 10 nodes using |fedavg|_. By +default, the generated app uses a local simulation profile that ``flwr run`` submits to +a managed local SuperLink, which then executes the run with the Flower Simulation +Runtime. The dataset will be partitioned using |flowerdatasets|_'s |iidpartitioner|_. Now that we have a rough idea of what this example is about, let's get started. First, @@ -57,30 +59,21 @@ To run the project, do: .. code-block:: shell - # Run with default arguments - $ flwr run . + # Run with default arguments and stream logs + $ flwr run . 
--stream + +Plain ``flwr run .`` submits the run, prints the run ID, and returns without streaming +logs. For the full local workflow, see :doc:`how-to-run-flower-locally`. -With default arguments you will see an output like this one: +With default arguments you will see streamed output like this: .. code-block:: shell - Loading project configuration... - Success + Successfully built flwrlabs.quickstart-huggingface.1-0-0.014c8eb3.fab + Starting local SuperLink on 127.0.0.1:39093... + Successfully started run 1859953118041441032 INFO : Starting FedAvg strategy: INFO : β”œβ”€β”€ Number of rounds: 3 - INFO : β”œβ”€β”€ ArrayRecord (16.74 MB) - INFO : β”œβ”€β”€ ConfigRecord (train): (empty!) - INFO : β”œβ”€β”€ ConfigRecord (evaluate): (empty!) - INFO : β”œβ”€β”€> Sampling: - INFO : β”‚ β”œβ”€β”€Fraction: train (0.50) | evaluate ( 1.00) - INFO : β”‚ β”œβ”€β”€Minimum nodes: train (2) | evaluate (2) - INFO : β”‚ └──Minimum available nodes: 2 - INFO : └──> Keys in records: - INFO : β”œβ”€β”€ Weighted by: 'num-examples' - INFO : β”œβ”€β”€ ArrayRecord key: 'arrays' - INFO : └── ConfigRecord key: 'config' - INFO : - INFO : INFO : [ROUND 1/3] INFO : configure_train: Sampled 5 nodes (out of 10) INFO : aggregate_train: Received 5 results and 0 failures @@ -88,44 +81,14 @@ With default arguments you will see an output like this one: INFO : configure_evaluate: Sampled 10 nodes (out of 10) INFO : aggregate_evaluate: Received 10 results and 0 failures INFO : └──> Aggregated MetricRecord: {'val_loss': 0.0223, 'val_accuracy': 0.5024} - INFO : INFO : [ROUND 2/3] - INFO : configure_train: Sampled 5 nodes (out of 10) - INFO : aggregate_train: Received 5 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'train_loss': 0.7019} - INFO : configure_evaluate: Sampled 10 nodes (out of 10) - INFO : aggregate_evaluate: Received 10 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'val_loss': 0.0221, 'val_accuracy': 0.5176} - INFO : + INFO : ... 
INFO : [ROUND 3/3] - INFO : configure_train: Sampled 5 nodes (out of 10) - INFO : aggregate_train: Received 5 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'train_loss': 0.6845} - INFO : configure_evaluate: Sampled 10 nodes (out of 10) - INFO : aggregate_evaluate: Received 10 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'val_loss': 0.0221, 'val_accuracy': 0.5042} - INFO : + INFO : ... INFO : Strategy execution finished in 151.02s - INFO : INFO : Final results: - INFO : - INFO : Global Arrays: - INFO : ArrayRecord (16.737 MB) - INFO : - INFO : Aggregated ClientApp-side Train Metrics: - INFO : { 1: {'train_loss': '6.9738e-01'}, - INFO : 2: {'train_loss': '7.0191e-01'}, - INFO : 3: {'train_loss': '6.8449e-01'}} - INFO : - INFO : Aggregated ClientApp-side Evaluate Metrics: - INFO : { 1: {'val_accuracy': '5.0240e-01', 'val_loss': '2.2265e-02'}, - INFO : 2: {'val_accuracy': '5.1760e-01', 'val_loss': '2.2134e-02'}, - INFO : 3: {'val_accuracy': '5.0420e-01', 'val_loss': '2.2124e-02'}} - INFO : INFO : ServerApp-side Evaluate Metrics: INFO : {} - INFO : - Saving final model to disk... You can also run the project with GPU as follows: @@ -133,7 +96,7 @@ You can also run the project with GPU as follows: .. code-block:: shell # Run with default arguments - $ flwr run . localhost-gpu + $ flwr run . localhost-gpu --stream This will use the default arguments where each ``ClientApp`` will use 4 CPUs and at most 4 ``ClientApp``\s will run in a given GPU. diff --git a/framework/docs/source/tutorial-quickstart-jax.rst b/framework/docs/source/tutorial-quickstart-jax.rst index cc51e486c152..1bcf83c75193 100644 --- a/framework/docs/source/tutorial-quickstart-jax.rst +++ b/framework/docs/source/tutorial-quickstart-jax.rst @@ -14,9 +14,11 @@ dataset using Flower and `JAX `_ with the create a virtual environment and run everything within a :doc:`virtualenv `. -Let's use ``flwr new`` to create a complete Flower+JAX project. 
It will generate all the -files needed to run, by default with the Flower Simulation Engine, a federation of 50 -nodes using |fedavg|_. The MNIST dataset will be partitioned using |flowerdatasets|_'s +Let's use ``flwr new`` to create a complete Flower+JAX project. It will generate all +the files needed to run a federation of 50 nodes using |fedavg|_. By default, the +generated app uses a local simulation profile that ``flwr run`` submits to a managed +local SuperLink, which then executes the run with the Flower Simulation Runtime. The +MNIST dataset will be partitioned using |flowerdatasets|_'s |iidpartitioner|_. Now that we have a rough idea of what this example is about, let's get started. First, @@ -58,30 +60,21 @@ To run the project, do: .. code-block:: shell - # Run with default arguments - $ flwr run . + # Run with default arguments and stream logs + $ flwr run . --stream -With default arguments you will see an output like this one: +Plain ``flwr run .`` submits the run, prints the run ID, and returns without streaming +logs. For the full local workflow, see :doc:`how-to-run-flower-locally`. + +With default arguments you will see streamed output like this: .. code-block:: shell - Loading project configuration... - Success + Successfully built flwrlabs.quickstart-jax.1-0-0.014c8eb3.fab + Starting local SuperLink on 127.0.0.1:39093... + Successfully started run 1859953118041441032 INFO : Starting FedAvg strategy: INFO : β”œβ”€β”€ Number of rounds: 5 - INFO : β”œβ”€β”€ ArrayRecord (0.41 MB) - INFO : β”œβ”€β”€ ConfigRecord (train): {'lr': 0.1} - INFO : β”œβ”€β”€ ConfigRecord (evaluate): (empty!) 
- INFO : β”œβ”€β”€> Sampling: - INFO : β”‚ β”œβ”€β”€Fraction: train (0.40) | evaluate ( 0.40) - INFO : β”‚ β”œβ”€β”€Minimum nodes: train (2) | evaluate (2) - INFO : β”‚ └──Minimum available nodes: 2 - INFO : └──> Keys in records: - INFO : β”œβ”€β”€ Weighted by: 'num-examples' - INFO : β”œβ”€β”€ ArrayRecord key: 'arrays' - INFO : └── ConfigRecord key: 'config' - INFO : - INFO : INFO : [ROUND 1/5] INFO : configure_train: Sampled 20 nodes (out of 50) INFO : aggregate_train: Received 20 results and 0 failures @@ -89,64 +82,14 @@ With default arguments you will see an output like this one: INFO : configure_evaluate: Sampled 20 nodes (out of 50) INFO : aggregate_evaluate: Received 20 results and 0 failures INFO : └──> Aggregated MetricRecord: {'eval_loss': 1.3394, 'eval_acc': 0.4984} - INFO : INFO : [ROUND 2/5] - INFO : configure_train: Sampled 20 nodes (out of 50) - INFO : aggregate_train: Received 20 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'train_loss': 1.4135, 'train_acc': 0.5531} - INFO : configure_evaluate: Sampled 20 nodes (out of 50) - INFO : aggregate_evaluate: Received 20 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'eval_loss': 1.1782, 'eval_acc': 0.6906} - INFO : - INFO : [ROUND 3/5] - INFO : configure_train: Sampled 20 nodes (out of 50) - INFO : aggregate_train: Received 20 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'train_loss': 0.9190, 'train_acc': 0.7186} - INFO : configure_evaluate: Sampled 20 nodes (out of 50) - INFO : aggregate_evaluate: Received 20 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'eval_loss': 0.7702, 'eval_acc': 0.8094} - INFO : - INFO : [ROUND 4/5] - INFO : configure_train: Sampled 20 nodes (out of 50) - INFO : aggregate_train: Received 20 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'train_loss': 0.5969, 'train_acc': 0.8295} - INFO : configure_evaluate: Sampled 20 nodes (out of 50) - INFO : aggregate_evaluate: Received 20 results and 0 
failures - INFO : └──> Aggregated MetricRecord: {'eval_loss': 0.3409, 'eval_acc': 0.916} - INFO : + INFO : ... INFO : [ROUND 5/5] - INFO : configure_train: Sampled 20 nodes (out of 50) - INFO : aggregate_train: Received 20 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'train_loss': 0.3680, 'train_acc': 0.8902} - INFO : configure_evaluate: Sampled 20 nodes (out of 50) - INFO : aggregate_evaluate: Received 20 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'eval_loss': 0.2366, 'eval_acc': 0.9359} - INFO : + INFO : ... INFO : Strategy execution finished in 60.58s - INFO : INFO : Final results: - INFO : - INFO : Global Arrays: - INFO : ArrayRecord (0.412 MB) - INFO : - INFO : Aggregated ClientApp-side Train Metrics: - INFO : { 1: {'train_acc': '2.8214e-01', 'train_loss': '2.1116e+00'}, - INFO : 2: {'train_acc': '5.5307e-01', 'train_loss': '1.4135e+00'}, - INFO : 3: {'train_acc': '7.1858e-01', 'train_loss': '9.1897e-01'}, - INFO : 4: {'train_acc': '8.2946e-01', 'train_loss': '5.9692e-01'}, - INFO : 5: {'train_acc': '8.9023e-01', 'train_loss': '3.6800e-01'}} - INFO : - INFO : Aggregated ClientApp-side Evaluate Metrics: - INFO : { 1: {'eval_acc': '4.9844e-01', 'eval_loss': '1.3394e+00'}, - INFO : 2: {'eval_acc': '6.9062e-01', 'eval_loss': '1.1782e+00'}, - INFO : 3: {'eval_acc': '8.0938e-01', 'eval_loss': '7.7016e-01'}, - INFO : 4: {'eval_acc': '9.1602e-01', 'eval_loss': '3.4092e-01'}, - INFO : 5: {'eval_acc': '9.3594e-01', 'eval_loss': '2.3663e-01'}} - INFO : INFO : ServerApp-side Evaluate Metrics: INFO : {} - INFO : - Saving final model to disk... 
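If you run plain ``flwr run .`` without ``--stream``, you can attach to the run's logs
later. A sketch of that workflow, using the ``flwr log`` command with a placeholder run
ID (the actual ID is printed when the run is submitted):

```shell
# Submit the run without streaming; note the printed run ID
$ flwr run .

# Later, attach to the log stream of that run
$ flwr log <RUN_ID>
```

Pressing ``Ctrl+C`` detaches from the log stream; the run keeps executing on the local
SuperLink.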
You can also override the parameters defined in the ``[tool.flwr.app.config]`` section diff --git a/framework/docs/source/tutorial-quickstart-mlx.rst b/framework/docs/source/tutorial-quickstart-mlx.rst index 3eb72e45c6d7..322895fc579c 100644 --- a/framework/docs/source/tutorial-quickstart-mlx.rst +++ b/framework/docs/source/tutorial-quickstart-mlx.rst @@ -48,9 +48,11 @@ In this federated learning tutorial, we will learn how to train a simple MLP on using Flower and MLX. It is recommended to create a virtual environment and run everything within a :doc:`virtualenv `. -Let's use ``flwr new`` to create a complete Flower+MLX project. It will generate all the -files needed to run, by default with the Simulation Engine, a federation of 10 nodes -using |fedavg_link|_. The dataset will be partitioned using Flower Dataset's +Let's use ``flwr new`` to create a complete Flower+MLX project. It will generate all +the files needed to run a federation of 10 nodes using |fedavg_link|_. By default, the +generated app uses a local simulation profile that ``flwr run`` submits to a managed +local SuperLink, which then executes the run with the Flower Simulation Runtime. The +dataset will be partitioned using Flower Dataset's `IidPartitioner `_. @@ -93,30 +95,21 @@ To run the project do: .. code-block:: shell - # Run with default arguments - $ flwr run . + # Run with default arguments and stream logs + $ flwr run . --stream -With default arguments, you will see output like this: +Plain ``flwr run .`` submits the run, prints the run ID, and returns without streaming +logs. For the full local workflow, see :doc:`how-to-run-flower-locally`. + +With default arguments, you will see streamed output like this: .. code-block:: shell - Loading project configuration... - Success + Successfully built flwrlabs.quickstart-mlx.1-0-0.014c8eb3.fab + Starting local SuperLink on 127.0.0.1:39093... 
+ Successfully started run 1859953118041441032 INFO : Starting FedAvg strategy: INFO : β”œβ”€β”€ Number of rounds: 3 - INFO : β”œβ”€β”€ ArrayRecord (0.10 MB) - INFO : β”œβ”€β”€ ConfigRecord (train): (empty!) - INFO : β”œβ”€β”€ ConfigRecord (evaluate): (empty!) - INFO : β”œβ”€β”€> Sampling: - INFO : β”‚ β”œβ”€β”€Fraction: train (1.00) | evaluate ( 1.00) - INFO : β”‚ β”œβ”€β”€Minimum nodes: train (2) | evaluate (2) - INFO : β”‚ └──Minimum available nodes: 2 - INFO : └──> Keys in records: - INFO : β”œβ”€β”€ Weighted by: 'num-examples' - INFO : β”œβ”€β”€ ArrayRecord key: 'arrays' - INFO : └── ConfigRecord key: 'config' - INFO : - INFO : INFO : [ROUND 1/3] INFO : configure_train: Sampled 10 nodes (out of 10) INFO : aggregate_train: Received 10 results and 0 failures @@ -124,44 +117,14 @@ With default arguments, you will see output like this: INFO : configure_evaluate: Sampled 10 nodes (out of 10) INFO : aggregate_evaluate: Received 10 results and 0 failures INFO : └──> Aggregated MetricRecord: {'accuracy': 0.2720000118017197, 'loss': 2.24028} - INFO : INFO : [ROUND 2/3] - INFO : configure_train: Sampled 10 nodes (out of 10) - INFO : aggregate_train: Received 10 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'accuracy': 0.38191667497158055, 'loss': 2.076018} - INFO : configure_evaluate: Sampled 10 nodes (out of 10) - INFO : aggregate_evaluate: Received 10 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'accuracy': 0.38441667854785927, 'loss': 2.078289} - INFO : + INFO : ... INFO : [ROUND 3/3] - INFO : configure_train: Sampled 10 nodes (out of 10) - INFO : aggregate_train: Received 10 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'accuracy': 0.5058750063180925, 'loss': 1.80676848} - INFO : configure_evaluate: Sampled 10 nodes (out of 10) - INFO : aggregate_evaluate: Received 10 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'accuracy': 0.5099166750907898, 'loss': 1.80801609} - INFO : + INFO : ... 
INFO : Strategy execution finished in 9.96s - INFO : INFO : Final results: - INFO : - INFO : Global Arrays: - INFO : ArrayRecord (0.102 MB) - INFO : - INFO : Aggregated ClientApp-side Train Metrics: - INFO : { 1: {'accuracy': '2.7038e-01', 'loss': '2.2391e+00'}, - INFO : 2: {'accuracy': '3.8192e-01', 'loss': '2.0760e+00'}, - INFO : 3: {'accuracy': '5.0588e-01', 'loss': '1.8068e+00'}} - INFO : - INFO : Aggregated ClientApp-side Evaluate Metrics: - INFO : { 1: {'accuracy': '2.7200e-01', 'loss': '2.2403e+00'}, - INFO : 2: {'accuracy': '3.8442e-01', 'loss': '2.0783e+00'}, - INFO : 3: {'accuracy': '5.0992e-01', 'loss': '1.8080e+00'}} - INFO : INFO : ServerApp-side Evaluate Metrics: INFO : {} - INFO : - Saving final model to disk... You can also override the parameters defined in the ``[tool.flwr.app.config]`` section diff --git a/framework/docs/source/tutorial-quickstart-pytorch-lightning.rst b/framework/docs/source/tutorial-quickstart-pytorch-lightning.rst index e72f198a23e2..f39bab44bd68 100644 --- a/framework/docs/source/tutorial-quickstart-pytorch-lightning.rst +++ b/framework/docs/source/tutorial-quickstart-pytorch-lightning.rst @@ -44,36 +44,29 @@ Next, activate your environment, then run: # Install project and dependencies $ pip install -e . -By default, Flower Simulation Engine will be started and it will create a federation of -4 nodes using |fedavg|_ as the aggregation strategy. The dataset will be partitioned -using Flower Dataset's |iidpartitioner|_. To run the project, do: +By default, this project uses a local simulation profile that ``flwr run`` submits to a +managed local SuperLink, which then executes the run with the Flower Simulation Runtime. +It creates a federation of 4 nodes using |fedavg|_ as the aggregation strategy. The +dataset will be partitioned using Flower Dataset's |iidpartitioner|_. To run the +project, do: .. code-block:: shell - # Run with default arguments - $ flwr run . + # Run with default arguments and stream logs + $ flwr run . 
--stream -With default arguments you will see an output like this one: +Plain ``flwr run .`` submits the run, prints the run ID, and returns without streaming +logs. For the full local workflow, see :doc:`how-to-run-flower-locally`. + +With default arguments you will see streamed output like this: .. code-block:: shell - Loading project configuration... - Success + Successfully built flwrlabs.quickstart-pytorch-lightning.1-0-0.014c8eb3.fab + Starting local SuperLink on 127.0.0.1:39093... + Successfully started run 1859953118041441032 INFO : Starting FedAvg strategy: INFO : β”œβ”€β”€ Number of rounds: 3 - INFO : β”œβ”€β”€ ArrayRecord (0.39 MB) - INFO : β”œβ”€β”€ ConfigRecord (train): (empty!) - INFO : β”œβ”€β”€ ConfigRecord (evaluate): (empty!) - INFO : β”œβ”€β”€> Sampling: - INFO : β”‚ β”œβ”€β”€Fraction: train (0.50) | evaluate ( 0.50) - INFO : β”‚ β”œβ”€β”€Minimum nodes: train (2) | evaluate (2) - INFO : β”‚ └──Minimum available nodes: 2 - INFO : └──> Keys in records: - INFO : β”œβ”€β”€ Weighted by: 'num-examples' - INFO : β”œβ”€β”€ ArrayRecord key: 'arrays' - INFO : └── ConfigRecord key: 'config' - INFO : - INFO : INFO : [ROUND 1/3] INFO : configure_train: Sampled 2 nodes (out of 4) INFO : aggregate_train: Received 2 results and 0 failures @@ -81,43 +74,14 @@ With default arguments you will see an output like this one: INFO : configure_evaluate: Sampled 2 nodes (out of 4) INFO : aggregate_evaluate: Received 2 results and 0 failures INFO : └──> Aggregated MetricRecord: {'eval_loss': 0.0495} - INFO : INFO : [ROUND 2/3] - INFO : configure_train: Sampled 2 nodes (out of 4) - INFO : aggregate_train: Received 2 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'train_loss': 0.0420} - INFO : configure_evaluate: Sampled 2 nodes (out of 4) - INFO : aggregate_evaluate: Received 2 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'eval_loss': 0.0455} - INFO : + INFO : ... 
INFO : [ROUND 3/3] - INFO : configure_train: Sampled 2 nodes (out of 4) - INFO : aggregate_train: Received 2 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'train_loss': 0.05082} - INFO : configure_evaluate: Sampled 2 nodes (out of 4) - INFO : aggregate_evaluate: Received 2 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'eval_loss': 0.0441} - INFO : + INFO : ... INFO : Strategy execution finished in 159.24s - INFO : INFO : Final results: - INFO : - INFO : Global Arrays: - INFO : ArrayRecord (0.389 MB) - INFO : - INFO : Aggregated ClientApp-side Train Metrics: - INFO : { 1: {'train_loss': '4.8696e-02'}, - INFO : 2: {'train_loss': '4.1957e-02'}, - INFO : 3: {'train_loss': '5.0818e-02'}} - INFO : - INFO : Aggregated ClientApp-side Evaluate Metrics: - INFO : { 1: {'eval_loss': '4.9516e-02'}, - INFO : 2: {'eval_loss': '4.5510e-02'}, - INFO : 3: {'eval_loss': '4.4052e-02'}} - INFO : INFO : ServerApp-side Evaluate Metrics: INFO : {} - INFO : Each simulated `ClientApp` (two per round) will also log a summary of their local training process. Expect this output to be similar to: diff --git a/framework/docs/source/tutorial-quickstart-pytorch.rst b/framework/docs/source/tutorial-quickstart-pytorch.rst index b8e75ea0c081..a57b432c3dde 100644 --- a/framework/docs/source/tutorial-quickstart-pytorch.rst +++ b/framework/docs/source/tutorial-quickstart-pytorch.rst @@ -50,8 +50,10 @@ environment and run everything within a :doc:`virtualenv `. Let's use ``flwr new`` to create a complete Flower+PyTorch project. It will generate all -the files needed to run, by default with the Flower Simulation Engine, a federation of -10 nodes using |fedavg_link|_. The dataset will be partitioned using Flower Dataset's +the files needed to run a federation of 10 nodes using |fedavg_link|_. By default, the +generated app uses a local simulation profile that ``flwr run`` submits to a managed +local SuperLink, which then executes the run with the Flower Simulation Runtime. 
The
+dataset will be partitioned using Flower Dataset's
`IidPartitioner
`_.

@@ -61,7 +63,7 @@ install Flower in your new environment:

.. code-block:: shell

# In a new Python environment
- $ pip install flwr
+ $ pip install "flwr[simulation]"

Then, run the command below:

@@ -94,30 +96,21 @@ To run the project, do:

.. code-block:: shell

- # Run with default arguments
- $ flwr run .
+ # Run with default arguments and stream logs
+ $ flwr run . --stream

-With default arguments you will see an output like this one:
+Plain ``flwr run .`` submits the run, prints the run ID, and returns without streaming
+logs. For the full local workflow, see :doc:`how-to-run-flower-locally`.
+
+With default arguments you will see streamed output like this:

.. code-block:: shell

- Loading project configuration...
- Success
+ Successfully built flwrlabs.quickstart-pytorch.1-0-0.014c8eb3.fab
+ Starting local SuperLink on 127.0.0.1:39093...
+ Successfully started run 1859953118041441032
INFO : Starting FedAvg strategy:
INFO : β”œβ”€β”€ Number of rounds: 3
- INFO : β”œβ”€β”€ ArrayRecord (0.24 MB)
- INFO : β”œβ”€β”€ ConfigRecord (train): {'lr': 0.01}
- INFO : β”œβ”€β”€ ConfigRecord (evaluate): (empty!)
- INFO : β”œβ”€β”€> Sampling: - INFO : β”‚ β”œβ”€β”€Fraction: train (0.50) | evaluate ( 1.00) - INFO : β”‚ β”œβ”€β”€Minimum nodes: train (2) | evaluate (2) - INFO : β”‚ └──Minimum available nodes: 2 - INFO : └──> Keys in records: - INFO : β”œβ”€β”€ Weighted by: 'num-examples' - INFO : β”œβ”€β”€ ArrayRecord key: 'arrays' - INFO : └── ConfigRecord key: 'config' - INFO : - INFO : INFO : [ROUND 1/3] INFO : configure_train: Sampled 5 nodes (out of 10) INFO : aggregate_train: Received 5 results and 0 failures @@ -125,44 +118,14 @@ With default arguments you will see an output like this one: INFO : configure_evaluate: Sampled 10 nodes (out of 10) INFO : aggregate_evaluate: Received 10 results and 0 failures INFO : └──> Aggregated MetricRecord: {'eval_loss': 2.31319, 'eval_acc': 0.10004} - INFO : INFO : [ROUND 2/3] - INFO : configure_train: Sampled 5 nodes (out of 10) - INFO : aggregate_train: Received 5 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'train_loss': 2.1097401} - INFO : configure_evaluate: Sampled 10 nodes (out of 10) - INFO : aggregate_evaluate: Received 10 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'eval_loss': 2.2529, 'eval_acc': 0.142002} - INFO : + INFO : ... INFO : [ROUND 3/3] - INFO : configure_train: Sampled 5 nodes (out of 10) - INFO : aggregate_train: Received 5 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'train_loss': 1.9476833} - INFO : configure_evaluate: Sampled 10 nodes (out of 10) - INFO : aggregate_evaluate: Received 10 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'eval_loss': 1.9190, 'eval_acc': 0.2974005} - INFO : + INFO : ... 
INFO : Strategy execution finished in 16.56s - INFO : INFO : Final results: - INFO : - INFO : Global Arrays: - INFO : ArrayRecord (0.238 MB) - INFO : - INFO : Aggregated Client-side Train Metrics: - INFO : { 1: {'train_loss': '2.1839e+00'}, - INFO : 2: {'train_loss': '2.0512e+00'}, - INFO : 3: {'train_loss': '1.9784e+00'}} - INFO : - INFO : Aggregated Client-side Evaluate Metrics: - INFO : { 1: {'eval_acc': '1.0770e-01', 'eval_loss': '2.2858e+00'}, - INFO : 2: {'eval_acc': '2.1810e-01', 'eval_loss': '1.9734e+00'}, - INFO : 3: {'eval_acc': '2.7140e-01', 'eval_loss': '1.9069e+00'}} - INFO : INFO : Server-side Evaluate Metrics: INFO : {} - INFO : - Saving final model to disk... You can also override the parameters defined in the ``[tool.flwr.app.config]`` section diff --git a/framework/docs/source/tutorial-quickstart-scikitlearn.rst b/framework/docs/source/tutorial-quickstart-scikitlearn.rst index 81ad0d90e146..5af9180c6355 100644 --- a/framework/docs/source/tutorial-quickstart-scikitlearn.rst +++ b/framework/docs/source/tutorial-quickstart-scikitlearn.rst @@ -42,8 +42,10 @@ environment and run everything within a :doc:`virtualenv `. Let's use ``flwr new`` to create a complete Flower+scikit-learn project. It will -generate all the files needed to run, by default with the Flower Simulation Engine, a -federation of 10 nodes using |fedavg_link|_ The dataset will be partitioned using +generate all the files needed to run a federation of 10 nodes using |fedavg_link|_. By +default, the generated app uses a local simulation profile that ``flwr run`` submits to +a managed local SuperLink, which then executes the run with the Flower Simulation +Runtime. The dataset will be partitioned using |flowerdatasets|_'s |iidpartitioner|_ Now that we have a rough idea of what this example is about, let's get started. First, @@ -85,30 +87,21 @@ To run the project, do: .. code-block:: shell - # Run with default arguments - $ flwr run . 
+ # Run with default arguments and stream logs + $ flwr run . --stream -With default arguments you will see an output like this one: +Plain ``flwr run .`` submits the run, prints the run ID, and returns without streaming +logs. For the full local workflow, see :doc:`how-to-run-flower-locally`. + +With default arguments you will see streamed output like this: .. code-block:: shell - Loading project configuration... - Success + Successfully built flwrlabs.quickstart-sklearn.1-0-0.014c8eb3.fab + Starting local SuperLink on 127.0.0.1:39093... + Successfully started run 1859953118041441032 INFO : Starting FedAvg strategy: INFO : β”œβ”€β”€ Number of rounds: 3 - INFO : β”œβ”€β”€ ArrayRecord (0.06 MB) - INFO : β”œβ”€β”€ ConfigRecord (train): (empty!) - INFO : β”œβ”€β”€ ConfigRecord (evaluate): (empty!) - INFO : β”œβ”€β”€> Sampling: - INFO : β”‚ β”œβ”€β”€Fraction: train (1.00) | evaluate ( 1.00) - INFO : β”‚ β”œβ”€β”€Minimum nodes: train (2) | evaluate (2) - INFO : β”‚ └──Minimum available nodes: 2 - INFO : └──> Keys in records: - INFO : β”œβ”€β”€ Weighted by: 'num-examples' - INFO : β”œβ”€β”€ ArrayRecord key: 'arrays' - INFO : └── ConfigRecord key: 'config' - INFO : - INFO : INFO : [ROUND 1/3] INFO : configure_train: Sampled 10 nodes (out of 10) INFO : aggregate_train: Received 10 results and 0 failures @@ -116,56 +109,14 @@ With default arguments you will see an output like this one: INFO : configure_evaluate: Sampled 10 nodes (out of 10) INFO : aggregate_evaluate: Received 10 results and 0 failures INFO : └──> Aggregated MetricRecord: {'test_logloss': 1.23306, 'accuracy': 0.69154, 'precision': 0.68659, 'recall': 0.68046, 'f1': 0.65752} - INFO : INFO : [ROUND 2/3] - INFO : configure_train: Sampled 10 nodes (out of 10) - INFO : aggregate_train: Received 10 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'train_logloss': 0.8565170774432291} - INFO : configure_evaluate: Sampled 10 nodes (out of 10) - INFO : aggregate_evaluate: Received 10 results and 0 
failures - INFO : └──> Aggregated MetricRecord: {'test_logloss': 0.8805, 'accuracy': 0.73425, 'precision': 0.792371, 'recall': 0.7329, 'f1': 0.70438} - INFO : + INFO : ... INFO : [ROUND 3/3] - INFO : configure_train: Sampled 10 nodes (out of 10) - INFO : aggregate_train: Received 10 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'train_logloss': 0.703260769576} - INFO : configure_evaluate: Sampled 10 nodes (out of 10) - INFO : aggregate_evaluate: Received 10 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'test_logloss': 0.70207, 'accuracy': 0.77250, 'precision': 0.82201, 'recall': 0.76348, 'f1': 0.75069} - INFO : + INFO : ... INFO : Strategy execution finished in 17.87s - INFO : INFO : Final results: - INFO : - INFO : Global Arrays: - INFO : ArrayRecord (0.060 MB) - INFO : - INFO : Aggregated ClientApp-side Train Metrics: - INFO : { 1: {'train_logloss': '1.3937e+00'}, - INFO : 2: {'train_logloss': '8.5652e-01'}, - INFO : 3: {'train_logloss': '7.0326e-01'}} - INFO : - INFO : Aggregated ClientApp-side Evaluate Metrics: - INFO : { 1: { 'accuracy': '6.9158e-01', - INFO : 'f1': '6.5752e-01', - INFO : 'precision': '6.8659e-01', - INFO : 'recall': '6.8046e-01', - INFO : 'test_logloss': '1.2331e+00'}, - INFO : 2: { 'accuracy': '7.3425e-01', - INFO : 'f1': '7.0439e-01', - INFO : 'precision': '7.9237e-01', - INFO : 'recall': '7.3295e-01', - INFO : 'test_logloss': '8.8056e-01'}, - INFO : 3: { 'accuracy': '7.7250e-01', - INFO : 'f1': '7.5069e-01', - INFO : 'precision': '8.2201e-01', - INFO : 'recall': '7.6348e-01', - INFO : 'test_logloss': '7.0208e-01'}} - INFO : INFO : ServerApp-side Evaluate Metrics: INFO : {} - INFO : - Saving final model to disk... 
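Run parameters for the project live in the ``[tool.flwr.app.config]`` table of its
``pyproject.toml``. As a sketch only (the exact key names vary by template, so check the
file generated by ``flwr new``), the section may look like this:

```toml
[tool.flwr.app.config]
# Number of federated rounds; key name is illustrative — verify it in the
# pyproject.toml generated by `flwr new`
num-server-rounds = 3
```

Apps typically read these values through the run context, e.g.
``context.run_config["num-server-rounds"]``.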
You can also override the parameters defined in the ``[tool.flwr.app.config]`` section diff --git a/framework/docs/source/tutorial-quickstart-tensorflow.rst b/framework/docs/source/tutorial-quickstart-tensorflow.rst index 8219b8d00265..1b9f1f083a94 100644 --- a/framework/docs/source/tutorial-quickstart-tensorflow.rst +++ b/framework/docs/source/tutorial-quickstart-tensorflow.rst @@ -42,8 +42,10 @@ virtual environment and run everything within a :doc:`virtualenv `. Let's use ``flwr new`` to create a complete Flower+TensorFlow project. It will generate -all the files needed to run, by default with the Flower Simulation Engine, a federation -of 10 nodes using |fedavg_link|_. The dataset will be partitioned using Flower Dataset's +all the files needed to run a federation of 10 nodes using |fedavg_link|_. By default, +the generated app uses a local simulation profile that ``flwr run`` submits to a +managed local SuperLink, which then executes the run with the Flower Simulation Runtime. +The dataset will be partitioned using Flower Dataset's `IidPartitioner `_. @@ -86,30 +88,21 @@ To run the project, do: .. code-block:: shell - # Run with default arguments - $ flwr run . + # Run with default arguments and stream logs + $ flwr run . --stream -With default arguments you will see an output like this one: +Plain ``flwr run .`` submits the run, prints the run ID, and returns without streaming +logs. For the full local workflow, see :doc:`how-to-run-flower-locally`. + +With default arguments you will see streamed output like this: .. code-block:: shell - Loading project configuration... - Success + Successfully built flwrlabs.quickstart-tensorflow.1-0-0.014c8eb3.fab + Starting local SuperLink on 127.0.0.1:39093... + Successfully started run 1859953118041441032 INFO : Starting FedAvg strategy: INFO : β”œβ”€β”€ Number of rounds: 3 - INFO : β”œβ”€β”€ ArrayRecord (0.16 MB) - INFO : β”œβ”€β”€ ConfigRecord (train): (empty!) - INFO : β”œβ”€β”€ ConfigRecord (evaluate): (empty!) 
- INFO : β”œβ”€β”€> Sampling: - INFO : β”‚ β”œβ”€β”€Fraction: train (0.50) | evaluate ( 1.00) - INFO : β”‚ β”œβ”€β”€Minimum nodes: train (2) | evaluate (2) - INFO : β”‚ └──Minimum available nodes: 2 - INFO : └──> Keys in records: - INFO : β”œβ”€β”€ Weighted by: 'num-examples' - INFO : β”œβ”€β”€ ArrayRecord key: 'arrays' - INFO : └── ConfigRecord key: 'config' - INFO : - INFO : INFO : [ROUND 1/3] INFO : configure_train: Sampled 5 nodes (out of 10) INFO : aggregate_train: Received 5 results and 0 failures @@ -117,43 +110,14 @@ With default arguments you will see an output like this one: INFO : configure_evaluate: Sampled 10 nodes (out of 10) INFO : aggregate_evaluate: Received 10 results and 0 failures INFO : └──> Aggregated MetricRecord: {'eval_acc': 0.1216, 'eval_loss': 2.2686} - INFO : INFO : [ROUND 2/3] - INFO : configure_train: Sampled 5 nodes (out of 10) - INFO : aggregate_train: Received 5 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'train_loss': 1.8099, 'train_acc': 0.3373} - INFO : configure_evaluate: Sampled 10 nodes (out of 10) - INFO : aggregate_evaluate: Received 10 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'eval_acc': 0.4273, 'eval_loss': 1.6684} - INFO : + INFO : ... INFO : [ROUND 3/3] - INFO : configure_train: Sampled 5 nodes (out of 10) - INFO : aggregate_train: Received 5 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'train_loss': 1.6749, 'train_acc': 0.3965} - INFO : configure_evaluate: Sampled 10 nodes (out of 10) - INFO : aggregate_evaluate: Received 10 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'eval_acc': 0.4281, 'eval_loss': 1.5807} - INFO : + INFO : ... 
INFO : Strategy execution finished in 16.60s - INFO : INFO : Final results: - INFO : - INFO : Global Arrays: - INFO : ArrayRecord (0.163 MB) - INFO : - INFO : Aggregated ClientApp-side Train Metrics: - INFO : { 1: {'train_acc': '2.6240e-01', 'train_loss': '2.0014e+00'}, - INFO : 2: {'train_acc': '3.3725e-01', 'train_loss': '1.8099e+00'}, - INFO : 3: {'train_acc': '3.9655e-01', 'train_loss': '1.6750e+00'}} - INFO : - INFO : Aggregated ClientApp-side Evaluate Metrics: - INFO : { 1: {'eval_acc': '1.2160e-01', 'eval_loss': '2.2686e+00'}, - INFO : 2: {'eval_acc': '4.2730e-01', 'eval_loss': '1.6684e+00'}, - INFO : 3: {'eval_acc': '4.2810e-01', 'eval_loss': '1.5807e+00'}} - INFO : INFO : ServerApp-side Evaluate Metrics: INFO : {} - INFO : Saving final model to disk as final_model.keras... You can also override the parameters defined in the ``[tool.flwr.app.config]`` section diff --git a/framework/docs/source/tutorial-quickstart-xgboost.rst b/framework/docs/source/tutorial-quickstart-xgboost.rst index 37ec70c06164..d9421aff2138 100644 --- a/framework/docs/source/tutorial-quickstart-xgboost.rst +++ b/framework/docs/source/tutorial-quickstart-xgboost.rst @@ -50,8 +50,10 @@ virtual environment and run everything within a :doc:`virtualenv `. Let's use ``flwr new`` to create a complete Flower+XGBoost project. It will generate all -the files needed to run, by default with the Simulation Engine, a federation of 10 nodes -using |fedxgbbagging_link|_ strategy. The dataset will be partitioned using Flower +the files needed to run a federation of 10 nodes using |fedxgbbagging_link|_ strategy. +By default, the generated app uses a local simulation profile that ``flwr run`` +submits to a managed local SuperLink, which then executes the run with the Flower +Simulation Runtime. The dataset will be partitioned using Flower Dataset's `IidPartitioner `_. @@ -99,30 +101,21 @@ To run the project do: .. code-block:: shell - # Run with default arguments - $ flwr run . 
+ # Run with default arguments and stream logs + $ flwr run . --stream -With default arguments, you will see output like this: +Plain ``flwr run .`` submits the run, prints the run ID, and returns without streaming +logs. For the full local workflow, see :doc:`how-to-run-flower-locally`. + +With default arguments, you will see streamed output like this: .. code-block:: shell - Loading project configuration... - Success + Successfully built flwrlabs.quickstart-xgboost.1-0-0.014c8eb3.fab + Starting local SuperLink on 127.0.0.1:39093... + Successfully started run 1859953118041441032 INFO : Starting FedXgbBagging strategy: INFO : β”œβ”€β”€ Number of rounds: 3 - INFO : β”œβ”€β”€ ArrayRecord (0.00 MB) - INFO : β”œβ”€β”€ ConfigRecord (train): (empty!) - INFO : β”œβ”€β”€ ConfigRecord (evaluate): (empty!) - INFO : β”œβ”€β”€> Sampling: - INFO : β”‚ β”œβ”€β”€Fraction: train (0.10) | evaluate ( 0.10) - INFO : β”‚ β”œβ”€β”€Minimum nodes: train (2) | evaluate (2) - INFO : β”‚ └──Minimum available nodes: 2 - INFO : └──> Keys in records: - INFO : β”œβ”€β”€ Weighted by: 'num-examples' - INFO : β”œβ”€β”€ ArrayRecord key: 'arrays' - INFO : └── ConfigRecord key: 'config' - INFO : - INFO : INFO : [ROUND 1/3] INFO : configure_train: Sampled 2 nodes (out of 10) INFO : aggregate_train: Received 2 results and 0 failures @@ -130,42 +123,14 @@ With default arguments, you will see output like this: INFO : configure_evaluate: Sampled 2 nodes (out of 10) INFO : aggregate_evaluate: Received 2 results and 0 failures INFO : └──> Aggregated MetricRecord: {'auc': 0.7677505289821278} - INFO : INFO : [ROUND 2/3] - INFO : configure_train: Sampled 2 nodes (out of 10) - INFO : aggregate_train: Received 2 results and 0 failures - INFO : └──> Aggregated MetricRecord: {} - INFO : configure_evaluate: Sampled 2 nodes (out of 10) - INFO : aggregate_evaluate: Received 2 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'auc': 0.7758267351298489} - INFO : + INFO : ... 
INFO : [ROUND 3/3] - INFO : configure_train: Sampled 2 nodes (out of 10) - INFO : aggregate_train: Received 2 results and 0 failures - INFO : └──> Aggregated MetricRecord: {} - INFO : configure_evaluate: Sampled 2 nodes (out of 10) - INFO : aggregate_evaluate: Received 2 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'auc': 0.7811659285552999} - INFO : + INFO : ... INFO : Strategy execution finished in 132.88s - INFO : INFO : Final results: - INFO : - INFO : Global Arrays: - INFO : ArrayRecord (0.195 MB) - INFO : - INFO : Aggregated ClientApp-side Train Metrics: - INFO : {1: {}, 2: {}, 3: {}} - INFO : - INFO : Aggregated ClientApp-side Evaluate Metrics: - INFO : { 1: {'auc': '7.6775e-01'}, - INFO : 2: {'auc': '7.7583e-01'}, - INFO : 3: {'auc': '7.8117e-01'}} - INFO : INFO : ServerApp-side Evaluate Metrics: INFO : {} - INFO : - Saving final model to disk... You can also override the parameters defined in the ``[tool.flwr.app.config]`` section diff --git a/framework/docs/source/tutorial-series-build-a-strategy-from-scratch-pytorch.rst b/framework/docs/source/tutorial-series-build-a-strategy-from-scratch-pytorch.rst index f53678929d03..1b6feef8229b 100644 --- a/framework/docs/source/tutorial-series-build-a-strategy-from-scratch-pytorch.rst +++ b/framework/docs/source/tutorial-series-build-a-strategy-from-scratch-pytorch.rst @@ -366,7 +366,10 @@ Finally, let's run the ``FlowerApp``: .. code-block:: shell - $ flwr run . + $ flwr run . --stream + +Plain ``flwr run .`` submits the run, prints the run ID, and returns without streaming +logs. See :doc:`how-to-run-flower-locally` for the full local workflow. 
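The submit-then-inspect workflow that these edits describe can be sketched as a short session. This is an illustrative sketch only: the run ID is a placeholder, and the ``flwr list`` and ``flwr stop`` subcommands are assumed from the local-SuperLink guide added in this patch series.

```shell
# Submit the run; the run ID is printed and the command returns immediately
$ flwr run .

# List runs known to the managed local SuperLink
$ flwr list

# Stop a submitted or running run (run ID below is a placeholder)
$ flwr stop 1859953118041441032

# Or submit and stream logs in one step
$ flwr run . --stream
```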
After starting the run you will notice two things: diff --git a/framework/docs/source/tutorial-series-customize-the-client-pytorch.rst b/framework/docs/source/tutorial-series-customize-the-client-pytorch.rst index 97625eb6e79d..e354fc610358 100644 --- a/framework/docs/source/tutorial-series-customize-the-client-pytorch.rst +++ b/framework/docs/source/tutorial-series-customize-the-client-pytorch.rst @@ -302,7 +302,10 @@ Finally, we run the Flower App. .. code-block:: shell - $ flwr run . + $ flwr run . --stream + +Plain ``flwr run .`` submits the run, prints the run ID, and returns without streaming +logs. See :doc:`how-to-run-flower-locally` for the full local workflow. You will observe that the training metadata from each client is logged to the console of the ``ServerApp``. If you finish embedding the creation of the ``TrainProcessMetadata`` diff --git a/framework/docs/source/tutorial-series-get-started-with-flower-pytorch.rst b/framework/docs/source/tutorial-series-get-started-with-flower-pytorch.rst index 599f6f203e28..dc4d4c120eb3 100644 --- a/framework/docs/source/tutorial-series-get-started-with-flower-pytorch.rst +++ b/framework/docs/source/tutorial-series-get-started-with-flower-pytorch.rst @@ -559,31 +559,23 @@ with Flower! The last step is to run our simulation in the command line, as foll .. code-block:: shell - $ flwr run . + $ flwr run . --stream -This will execute the federated learning simulation with 10 clients, or SuperNodes, -defined in the ``[superlink.local]`` section in your Flower Configuration file. You -should expect an output log similar to this: +This submits the run to the managed local SuperLink for the ``[superlink.local]`` +profile, which then executes the federated learning simulation with 10 clients, or +SuperNodes, using the Flower Simulation Runtime. Plain ``flwr run .`` submits the run, +prints the run ID, and returns without streaming logs. For the full local workflow, see +:doc:`how-to-run-flower-locally`. 
+ +You should expect streamed output similar to this: .. code-block:: shell - Loading project configuration... - Success + Successfully built flwrlabs.quickstart-pytorch.1-0-0.014c8eb3.fab + Starting local SuperLink on 127.0.0.1:39093... + Successfully started run 1859953118041441032 INFO : Starting FedAvg strategy: INFO : β”œβ”€β”€ Number of rounds: 3 - INFO : β”œβ”€β”€ ArrayRecord (0.24 MB) - INFO : β”œβ”€β”€ ConfigRecord (train): {'lr': 0.01} - INFO : β”œβ”€β”€ ConfigRecord (evaluate): (empty!) - INFO : β”œβ”€β”€> Sampling: - INFO : β”‚ β”œβ”€β”€Fraction: train (0.50) | evaluate ( 1.00) - INFO : β”‚ β”œβ”€β”€Minimum nodes: train (2) | evaluate (2) - INFO : β”‚ └──Minimum available nodes: 2 - INFO : └──> Keys in records: - INFO : β”œβ”€β”€ Weighted by: 'num-examples' - INFO : β”œβ”€β”€ ArrayRecord key: 'arrays' - INFO : └── ConfigRecord key: 'config' - INFO : - INFO : INFO : [ROUND 1/3] INFO : configure_train: Sampled 5 nodes (out of 10) INFO : aggregate_train: Received 5 results and 0 failures @@ -591,44 +583,14 @@ should expect an output log similar to this: INFO : configure_evaluate: Sampled 10 nodes (out of 10) INFO : aggregate_evaluate: Received 10 results and 0 failures INFO : └──> Aggregated MetricRecord: {'eval_loss': 2.304821, 'eval_acc': 0.0965} - INFO : INFO : [ROUND 2/3] - INFO : configure_train: Sampled 5 nodes (out of 10) - INFO : aggregate_train: Received 5 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'train_loss': 2.17333} - INFO : configure_evaluate: Sampled 10 nodes (out of 10) - INFO : aggregate_evaluate: Received 10 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'eval_loss': 2.304577, 'eval_acc': 0.10030} - INFO : + INFO : ... 
INFO : [ROUND 3/3] - INFO : configure_train: Sampled 5 nodes (out of 10) - INFO : aggregate_train: Received 5 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'train_loss': 2.16953} - INFO : configure_evaluate: Sampled 10 nodes (out of 10) - INFO : aggregate_evaluate: Received 10 results and 0 failures - INFO : └──> Aggregated MetricRecord: {'eval_loss': 2.29976, 'eval_acc': 0.1015} - INFO : + INFO : ... INFO : Strategy execution finished in 17.18s - INFO : INFO : Final results: - INFO : - INFO : Global Arrays: - INFO : ArrayRecord (0.238 MB) - INFO : - INFO : Aggregated ClientApp-side Train Metrics: - INFO : { 1: {'train_loss': '2.2581e+00'}, - INFO : 2: {'train_loss': '2.1733e+00'}, - INFO : 3: {'train_loss': '2.1695e+00'}} - INFO : - INFO : Aggregated ClientApp-side Evaluate Metrics: - INFO : { 1: {'eval_acc': '9.6500e-02', 'eval_loss': '2.3048e+00'}, - INFO : 2: {'eval_acc': '1.0030e-01', 'eval_loss': '2.3046e+00'}, - INFO : 3: {'eval_acc': '1.0150e-01', 'eval_loss': '2.2998e+00'}} - INFO : INFO : ServerApp-side Evaluate Metrics: INFO : {} - INFO : - Saving final model to disk... You can also override the parameters defined in the ``[tool.flwr.app.config]`` section @@ -637,7 +599,7 @@ in ``pyproject.toml`` like this: .. code-block:: shell # Run the simulation with 5 server rounds and 3 local epochs - $ flwr run . --run-config "num-server-rounds=5 local-epochs=3" + $ flwr run . --stream --run-config "num-server-rounds=5 local-epochs=3" .. tip:: @@ -649,11 +611,13 @@ Behind the scenes So how does this work? How does Flower execute this simulation? -When we execute ``flwr run``, we tell Flower that there are 10 clients +When we execute ``flwr run`` against the default local profile, Flower submits the run +to the managed local SuperLink and tells it that there are 10 clients (``options.num-supernodes = 10``, where each SuperNode launches one ``ClientApp``). 
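The profile wiring described above can be sketched as a minimal configuration fragment. The section and key names follow what this tutorial references (``[superlink.local]`` and ``options.num-supernodes``); the exact layout of a generated app's Flower Configuration may differ.

```toml
# Local profile sketch: no explicit address is set, so `flwr run` starts a
# managed local SuperLink and executes the run with the Simulation Runtime
[superlink.local]
options.num-supernodes = 10
```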
-Flower then asks the ``ServerApp`` to issue instructions to those nodes using the -``FedAvg`` strategy. In this example, ``FedAvg`` is configured with two key parameters: +The local SuperLink then starts the ``ServerApp`` and asks it to issue instructions to +those nodes using the ``FedAvg`` strategy. In this example, ``FedAvg`` is configured +with two key parameters: - ``fraction-train=0.5`` → select 50% of the available clients for training - ``fraction-evaluate=1.0`` → select 100% of the available clients for evaluation diff --git a/framework/docs/source/tutorial-series-use-a-federated-learning-strategy-pytorch.rst b/framework/docs/source/tutorial-series-use-a-federated-learning-strategy-pytorch.rst index c545812906bb..27fc3204e5f5 100644 --- a/framework/docs/source/tutorial-series-use-a-federated-learning-strategy-pytorch.rst +++ b/framework/docs/source/tutorial-series-use-a-federated-learning-strategy-pytorch.rst @@ -155,7 +155,7 @@ Next, run the training with the following command: .. code-block:: shell - $ flwr run . + $ flwr run . --stream ************************************** Server-side parameter **evaluation** ************************************** @@ -253,7 +253,7 @@ Finally, we run the simulation. .. code-block:: shell - $ flwr run . + $ flwr run . --stream You'll note that the server logs the metrics returned by the callback after each round. Also, at the end of the run, note the ``ServerApp-side Evaluate Metrics`` shown: @@ -317,7 +317,7 @@ rounds to 15 to see the learning rate decay in action. .. code-block:: shell - $ flwr run . --run-config="num-server-rounds=15" + $ flwr run . --stream --run-config="num-server-rounds=15" You'll note that in the ``configure_train`` stage of rounds 5 and 10, the learning rate is decreased by a factor of 0.5 and the new learning rate is printed to the terminal. @@ -412,7 +412,7 @@ Finally, run the simulation with the following command: .. code-block:: shell - $ flwr run . + $ flwr run .
--stream ******* Recap From 5220ae8af6eb7b8f97d2c4524f0adf88acea56b2 Mon Sep 17 00:00:00 2001 From: "Daniel J. Beutel" Date: Sun, 8 Mar 2026 20:35:08 +0100 Subject: [PATCH 2/5] Add [simulation] to pip install flwr --- framework/docs/source/tutorial-quickstart-huggingface.rst | 2 +- framework/docs/source/tutorial-quickstart-ios.rst | 2 +- framework/docs/source/tutorial-quickstart-jax.rst | 2 +- framework/docs/source/tutorial-quickstart-mlx.rst | 2 +- framework/docs/source/tutorial-quickstart-scikitlearn.rst | 2 +- framework/docs/source/tutorial-quickstart-tensorflow.rst | 2 +- framework/docs/source/tutorial-quickstart-xgboost.rst | 2 +- 7 files changed, 7 insertions(+), 7 deletions(-) diff --git a/framework/docs/source/tutorial-quickstart-huggingface.rst b/framework/docs/source/tutorial-quickstart-huggingface.rst index 2374c89577d6..9fbedfbccbbe 100644 --- a/framework/docs/source/tutorial-quickstart-huggingface.rst +++ b/framework/docs/source/tutorial-quickstart-huggingface.rst @@ -26,7 +26,7 @@ install Flower in your new environment: .. code-block:: shell # In a new Python environment - $ pip install flwr + $ pip install flwr[simulation] Then, run the command below: diff --git a/framework/docs/source/tutorial-quickstart-ios.rst b/framework/docs/source/tutorial-quickstart-ios.rst index 6d709606fde1..35daeaa7bc80 100644 --- a/framework/docs/source/tutorial-quickstart-ios.rst +++ b/framework/docs/source/tutorial-quickstart-ios.rst @@ -37,7 +37,7 @@ server environment. We first need to install Flower. You can do this by using pi .. code-block:: shell - $ pip install flwr + $ pip install flwr[simulation] Or Poetry: diff --git a/framework/docs/source/tutorial-quickstart-jax.rst b/framework/docs/source/tutorial-quickstart-jax.rst index 1bcf83c75193..ce726eb16276 100644 --- a/framework/docs/source/tutorial-quickstart-jax.rst +++ b/framework/docs/source/tutorial-quickstart-jax.rst @@ -27,7 +27,7 @@ install Flower in your new environment: .. 
code-block:: shell # In a new Python environment - $ pip install flwr + $ pip install flwr[simulation] Then, run the command below: diff --git a/framework/docs/source/tutorial-quickstart-mlx.rst b/framework/docs/source/tutorial-quickstart-mlx.rst index 322895fc579c..f6ff157aac52 100644 --- a/framework/docs/source/tutorial-quickstart-mlx.rst +++ b/framework/docs/source/tutorial-quickstart-mlx.rst @@ -62,7 +62,7 @@ install Flower in your new environment: .. code-block:: shell # In a new Python environment - $ pip install flwr + $ pip install flwr[simulation] Then, run the command below: diff --git a/framework/docs/source/tutorial-quickstart-scikitlearn.rst b/framework/docs/source/tutorial-quickstart-scikitlearn.rst index 5af9180c6355..69d399c5d670 100644 --- a/framework/docs/source/tutorial-quickstart-scikitlearn.rst +++ b/framework/docs/source/tutorial-quickstart-scikitlearn.rst @@ -54,7 +54,7 @@ install Flower in your new environment: .. code-block:: shell # In a new Python environment - $ pip install flwr + $ pip install flwr[simulation] Then, run the command below: diff --git a/framework/docs/source/tutorial-quickstart-tensorflow.rst b/framework/docs/source/tutorial-quickstart-tensorflow.rst index 1b9f1f083a94..01fb8b93581e 100644 --- a/framework/docs/source/tutorial-quickstart-tensorflow.rst +++ b/framework/docs/source/tutorial-quickstart-tensorflow.rst @@ -55,7 +55,7 @@ install Flower in your new environment: .. code-block:: shell # In a new Python environment - $ pip install flwr + $ pip install flwr[simulation] Then, run the command below: diff --git a/framework/docs/source/tutorial-quickstart-xgboost.rst b/framework/docs/source/tutorial-quickstart-xgboost.rst index d9421aff2138..a064b15a6e93 100644 --- a/framework/docs/source/tutorial-quickstart-xgboost.rst +++ b/framework/docs/source/tutorial-quickstart-xgboost.rst @@ -68,7 +68,7 @@ install Flower in your new environment: .. 
code-block:: shell # In a new Python environment - $ pip install flwr + $ pip install flwr[simulation] Then, run the command below: From 823f3ff4cf6fbb857cbe3c6bd91aea9dc7b7043e Mon Sep 17 00:00:00 2001 From: "Daniel J. Beutel" Date: Sun, 8 Mar 2026 20:57:17 +0100 Subject: [PATCH 3/5] Replace Simulation/Deployment "Engine" with "Runtime" --- ...run-quickstart-examples-docker-compose.rst | 8 ++--- .../source/how-to-design-stateful-clients.rst | 2 +- .../docs/source/how-to-install-flower.rst | 8 ++--- .../how-to-manage-flower-federations.rst | 4 +-- ...w-to-run-flower-with-deployment-engine.rst | 8 ++--- .../docs/source/how-to-run-simulations.rst | 36 +++++++++---------- ...-a-federated-learning-strategy-pytorch.rst | 4 +-- 7 files changed, 35 insertions(+), 35 deletions(-) diff --git a/framework/docs/source/docker/run-quickstart-examples-docker-compose.rst b/framework/docs/source/docker/run-quickstart-examples-docker-compose.rst index 9f1e4f95f944..c7a1ba3e0ea8 100644 --- a/framework/docs/source/docker/run-quickstart-examples-docker-compose.rst +++ b/framework/docs/source/docker/run-quickstart-examples-docker-compose.rst @@ -1,6 +1,6 @@ -:og:description: Beginner’s guide to running Flower quickstart examples with the Deployment Engine using Docker Compose, showcasing its powerful federated learning capabilities. +:og:description: Beginner’s guide to running Flower quickstart examples with the Deployment Runtime using Docker Compose, showcasing its powerful federated learning capabilities. .. meta:: - :description: Beginner’s guide to running Flower quickstart examples with the Deployment Engine using Docker Compose, showcasing its powerful federated learning capabilities. + :description: Beginner’s guide to running Flower quickstart examples with the Deployment Runtime using Docker Compose, showcasing its powerful federated learning capabilities. 
#################################################### Run Flower Quickstart Examples with Docker Compose @@ -9,8 +9,8 @@ Flower provides a set of `quickstart examples `_ to help you get started with the framework. These examples are designed to demonstrate the capabilities of Flower and by -default run using the Simulation Engine. This guide demonstrates how to run them using -Flower's Deployment Engine via Docker Compose. +default run using the Simulation Runtime. This guide demonstrates how to run them using +Flower's Deployment Runtime via Docker Compose. .. important:: diff --git a/framework/docs/source/how-to-design-stateful-clients.rst b/framework/docs/source/how-to-design-stateful-clients.rst index 5ab41149a7bb..a5c6d9fe6f58 100644 --- a/framework/docs/source/how-to-design-stateful-clients.rst +++ b/framework/docs/source/how-to-design-stateful-clients.rst @@ -24,7 +24,7 @@ By design, ClientApp_ objects are stateless. This means that the ``ClientApp`` object is recreated each time a new ``Message`` is to be processed. This behavior is identical -with Flower's Simulation Engine and Deployment Engine. For the former, it allows us to +with Flower's Simulation Runtime and Deployment Runtime. For the former, it allows us to simulate the running of a large number of nodes on a single machine or across multiple machines. For the latter, it enables each ``SuperNode`` to be part of multiple runs, each running a different ``ClientApp``. 
diff --git a/framework/docs/source/how-to-install-flower.rst b/framework/docs/source/how-to-install-flower.rst index 750dbd448743..785be774a278 100644 --- a/framework/docs/source/how-to-install-flower.rst +++ b/framework/docs/source/how-to-install-flower.rst @@ -25,8 +25,8 @@ Stable releases are available on `PyPI `_: python -m pip install flwr -For simulations that use the Virtual Client Engine, ``flwr`` should be installed with -the ``simulation`` extra: +For simulations that use the Simulation Runtime, ``flwr`` should be installed with the +``simulation`` extra: :: @@ -95,7 +95,7 @@ versions (alpha, beta, release candidate) before the stable release happens: python -m pip install -U --pre flwr -For simulations that use the Virtual Client Engine, ``flwr`` pre-releases should be +For simulations that use the Simulation Runtime, ``flwr`` pre-releases should be installed with the ``simulation`` extra: :: @@ -111,7 +111,7 @@ The latest (potentially unstable) changes in Flower are available as nightly rel python -m pip install -U flwr-nightly -For simulations that use the Virtual Client Engine, ``flwr-nightly`` should be installed +For simulations that use the Simulation Runtime, ``flwr-nightly`` should be installed with the ``simulation`` extra: :: diff --git a/framework/docs/source/how-to-manage-flower-federations.rst b/framework/docs/source/how-to-manage-flower-federations.rst index b134fe6a3d56..4d3f4af2603a 100644 --- a/framework/docs/source/how-to-manage-flower-federations.rst +++ b/framework/docs/source/how-to-manage-flower-federations.rst @@ -1,6 +1,6 @@ -:og:description: Guide to manage Flower federations using the Deployment Engine. +:og:description: Guide to manage Flower federations using the Deployment Runtime. .. meta:: - :description: Guide to manage Flower federations using the Deployment Engine. + :description: Guide to manage Flower federations using the Deployment Runtime. .. 
|flower_cli_federation_link| replace:: ``Flower CLI`` diff --git a/framework/docs/source/how-to-run-flower-with-deployment-engine.rst b/framework/docs/source/how-to-run-flower-with-deployment-engine.rst index 2eaa139b2fd9..3cee775dfbef 100644 --- a/framework/docs/source/how-to-run-flower-with-deployment-engine.rst +++ b/framework/docs/source/how-to-run-flower-with-deployment-engine.rst @@ -1,12 +1,12 @@ -:og:description: Guide to use Flower's Deployment Engine and run a Flower App trough a federation consisting of a SuperLink and two SuperNodes. +:og:description: Guide to use Flower's Deployment Runtime and run a Flower App through a federation consisting of a SuperLink and two SuperNodes. .. meta:: - :description: Guide to use Flower's Deployment Engine and run a Flower App trough a federation consisting of a SuperLink and two SuperNodes. + :description: Guide to use Flower's Deployment Runtime and run a Flower App through a federation consisting of a SuperLink and two SuperNodes. ####################################### - Run Flower with the Deployment Engine + Run Flower with the Deployment Runtime ####################################### -This how-to guide demonstrates how to set up and run Flower with the Deployment Engine +This how-to guide demonstrates how to set up and run Flower with the Deployment Runtime using minimal configurations to illustrate the workflow. This is a complementary guide to the :doc:`docker/index` guides. diff --git a/framework/docs/source/how-to-run-simulations.rst b/framework/docs/source/how-to-run-simulations.rst index 6ab74d084148..11e16e5b8c6f 100644 --- a/framework/docs/source/how-to-run-simulations.rst +++ b/framework/docs/source/how-to-run-simulations.rst @@ -1,6 +1,6 @@ -:og:description: Run federated learning simulations in Flower using the VirtualClientEngine for scalable, resource-aware, and multi-node simulations on any system configuration. 
+:og:description: Run federated learning simulations in Flower using the Simulation Runtime for scalable, resource-aware, and multi-node simulations on any system configuration. .. meta:: - :description: Run federated learning simulations in Flower using the VirtualClientEngine for scalable, resource-aware, and multi-node simulations on any system configuration. + :description: Run federated learning simulations in Flower using the Simulation Runtime for scalable, resource-aware, and multi-node simulations on any system configuration. .. |clientapp_link| replace:: ``ClientApp`` @@ -37,7 +37,7 @@ workloads makes sense. .. note:: - Flower's ``Simulation Engine`` is built on top of `Ray `_, an + Flower's ``Simulation Runtime`` is built on top of `Ray `_, an open-source framework for scalable Python workloads. Flower fully supports Linux and macOS. On Windows, Ray support remains experimental, and while you can run simulations directly from the `PowerShell @@ -49,13 +49,13 @@ workloads makes sense. If you're on Windows and see unexpected terminal output (e.g.: ``οΏ½ β–‘[32mβ–‘[1m``), check :ref:`this FAQ entry `. -Flower's ``Simulation Engine`` schedules, launches, and manages |clientapp_link|_ +Flower's ``Simulation Runtime`` schedules, launches, and manages |clientapp_link|_ instances. It does so through a ``Backend``, which contains several workers (i.e., Python processes) that can execute a ``ClientApp`` by passing it a |context_link|_ and a |message_link|_. These ``ClientApp`` objects are identical to those used by Flower's -`Deployment Engine `_, making alternating +`Deployment Runtime `_, making alternating between *simulation* and *deployment* an effortless process. The execution of -``ClientApp`` objects through Flower's ``Simulation Engine`` is: +``ClientApp`` objects through Flower's ``Simulation Runtime`` is: - **Resource-aware**: Each backend worker executing ``ClientApp``\s gets assigned a portion of the compute and memory on your system. 
You can define these at the @@ -67,8 +67,8 @@ between *simulation* and *deployment* an effortless process. The execution of ``ClientApps`` are typically executed in batches of N, where N is the number of backend workers. - **Self-managed**: This means that you, as a user, do not need to launch ``ClientApps`` - manually; instead, the ``Simulation Engine``'s internals orchestrates the execution of - all ``ClientApp``\s. + manually; instead, the ``Simulation Runtime`` orchestrates the execution of all + ``ClientApp``\s. - **Ephemeral**: This means that a ``ClientApp`` is only materialized when it is required by the application (e.g., to do `fit() `_). The object is destroyed afterward, @@ -81,7 +81,7 @@ between *simulation* and *deployment* an effortless process. The execution of `Designing Stateful Clients `_ guide for a complete walkthrough. -The ``Simulation Engine`` delegates to a ``Backend`` the role of spawning and managing +The ``Simulation Runtime`` delegates to a ``Backend`` the role of spawning and managing ``ClientApps``. The default backend is the ``RayBackend``, which uses `Ray `_, an open-source framework for scalable Python workloads. In particular, each worker is an `Actor @@ -144,7 +144,7 @@ The complete list of examples can be found in `the Flower GitHub Defining ``ClientApp`` resources ********************************** -By default, the ``Simulation Engine`` assigns two CPU cores to each backend worker. This +By default, the ``Simulation Runtime`` assigns two CPU cores to each backend worker. This means that if your system has 10 CPU cores, five backend workers can be running in parallel, each executing a different ``ClientApp`` instance. @@ -224,14 +224,14 @@ concurrency in your simulations, this does not stop you from running hundreds or thousands of clients in the same round and having orders of magnitude more *dormant* (i.e., not participating in a round) clients. 
Let's say you want to have 100 clients per round but your system can only accommodate 8 clients concurrently. The ``Simulation -Engine`` will schedule 100 ``ClientApps`` to run and then will execute them in a +Runtime`` will schedule 100 ``ClientApps`` to run and then will execute them in a resource-aware manner in batches of 8. ***************************** - Simulation Engine resources + Simulation Runtime resources ***************************** -By default, the ``Simulation Engine`` has **access to all system resources** (i.e., all +By default, the ``Simulation Runtime`` has **access to all system resources** (i.e., all CPUs, all GPUs). However, in some settings, you might want to limit how many of your system resources are used for simulation. You can do this in the :doc:`Flower Configuration ` by setting the ``options.backend.init-args`` @@ -261,7 +261,7 @@ For the highest performance, do not set ``options.backend.init-args``. ***************************** The preferred way of running simulations should always be |flwr_run_link|_. However, the -core functionality of the ``Simulation Engine`` can be used from within a Google Colab +core functionality of the ``Simulation Runtime`` can be used from within a Google Colab or Jupyter environment by means of `run_simulation `_. @@ -302,7 +302,7 @@ for a complete example on how to run Flower Simulations in Colab. Multi-node Flower simulations ******************************* -Flower's ``Simulation Engine`` allows you to run FL simulations across multiple compute +Flower's ``Simulation Runtime`` allows you to run FL simulations across multiple compute nodes so that you're not restricted to running simulations on a _single_ machine. Before starting your multi-node simulation, ensure that you: @@ -334,7 +334,7 @@ need to run the command ``ray stop`` in each node's terminal (including the head .. note:: When attaching a new node to the head, all its resources (i.e., all CPUs, all GPUs) - will be visible by the head node. 
This means that the ``Simulation Engine`` can + will be visible by the head node. This means that the ``Simulation Runtime`` can schedule as many ``ClientApp`` instances as that node can possibly run. In some settings, you might want to exclude certain resources from the simulation. You can do this by appending ``--num-cpus=`` and/or @@ -383,9 +383,9 @@ need to run the command ``ray stop`` in each node's terminal (including the head Yes. If you are using the ``RayBackend`` (the *default* backend) you can first interconnect your nodes through Ray's cli and then launch the simulation. Refer to :ref:`multinodesimulations` for a step-by-step guide. -.. dropdown:: My ``ServerApp`` also needs to make use of the GPU (e.g., to do evaluation of the *global model* after aggregation). Is this GPU usage taken into account by the ``Simulation Engine``? +.. dropdown:: My ``ServerApp`` also needs to make use of the GPU (e.g., to do evaluation of the *global model* after aggregation). Is this GPU usage taken into account by the ``Simulation Runtime``? - No. The ``Simulation Engine`` only manages ``ClientApps`` and therefore is only aware of the system resources they require. If your ``ServerApp`` makes use of substantial compute or memory resources, factor that into account when setting ``num_cpus`` and ``num_gpus``. + No. The ``Simulation Runtime`` only manages ``ClientApps`` and therefore is only aware of the system resources they require. If your ``ServerApp`` makes use of substantial compute or memory resources, factor that into account when setting ``num_cpus`` and ``num_gpus``. .. dropdown:: Can I indicate on what resource a specific instance of a ``ClientApp`` should run? Can I do resource placement? 
diff --git a/framework/docs/source/tutorial-series-use-a-federated-learning-strategy-pytorch.rst b/framework/docs/source/tutorial-series-use-a-federated-learning-strategy-pytorch.rst index 27fc3204e5f5..74cd9baed63b 100644 --- a/framework/docs/source/tutorial-series-use-a-federated-learning-strategy-pytorch.rst +++ b/framework/docs/source/tutorial-series-use-a-federated-learning-strategy-pytorch.rst @@ -425,8 +425,8 @@ so little code, right? In the later sections, we've seen how we can communicate arbitrary values between server and clients to fully customize client-side execution. With that capability, we built a -large-scale Federated Learning simulation using the Flower Virtual Client Engine and ran -an experiment involving 1000 clients in the same workload — all in the same Flower +large-scale Federated Learning simulation using the Flower Simulation Runtime and ran an +experiment involving 1000 clients in the same workload — all in the same Flower project! ************ From c68ad5e2b1638fbb8b6bda68606140c697149b34 Mon Sep 17 00:00:00 2001 From: "Daniel J.
Beutel" Date: Sun, 8 Mar 2026 20:58:02 +0100 Subject: [PATCH 4/5] Format --- .../docs/source/how-to-run-flower-locally.rst | 26 +++++++++---------- ...w-to-run-flower-with-deployment-engine.rst | 10 +++---- .../docs/source/how-to-run-simulations.rst | 19 +++++++------- .../source/how-to-use-cli-json-output.rst | 10 +++---- .../docs/source/tutorial-quickstart-jax.rst | 11 ++++---- .../docs/source/tutorial-quickstart-mlx.rst | 7 +++-- .../source/tutorial-quickstart-pytorch.rst | 3 +-- .../tutorial-quickstart-scikitlearn.rst | 3 +-- .../source/tutorial-quickstart-tensorflow.rst | 7 +++-- .../source/tutorial-quickstart-xgboost.rst | 7 +++-- ...-a-federated-learning-strategy-pytorch.rst | 3 +-- 11 files changed, 48 insertions(+), 58 deletions(-) diff --git a/framework/docs/source/how-to-run-flower-locally.rst b/framework/docs/source/how-to-run-flower-locally.rst index ce175ee82735..8f02e93af1c0 100644 --- a/framework/docs/source/how-to-run-flower-locally.rst +++ b/framework/docs/source/how-to-run-flower-locally.rst @@ -2,9 +2,9 @@ .. meta:: :description: Learn how local `flwr run` uses a managed local SuperLink, how to inspect runs, stream logs, stop runs, and stop the background local SuperLink process. -=========================================== -Run Flower Locally with a Managed SuperLink -=========================================== +############################################# + Run Flower Locally with a Managed SuperLink +############################################# When you use a local Flower configuration profile with ``options.*`` and no explicit ``address``, ``flwr`` does not call the simulation runtime directly. Instead, Flower @@ -40,9 +40,9 @@ On the first command that needs the local Control API, Flower starts a local You can override those default ports with the environment variables ``FLWR_LOCAL_CONTROL_API_PORT`` and ``FLWR_LOCAL_SIMULATIONIO_API_PORT``. 
-*****************
+**************
 Submit a run
-*****************
+**************
 
 From your Flower App directory, submit a run as usual:
 
@@ -65,9 +65,9 @@ submit the run and immediately follow the logs in the same terminal, use:
 
    $ flwr run . --stream
 
-************
+***********
 List runs
-************
+***********
 
 To see all runs known to the local SuperLink:
 
@@ -120,9 +120,9 @@ To stop a submitted or running run:
 
 This stops the run only. It does **not** stop the background local SuperLink process.
 
-*********************************
+*******************************
 Local runtime files and state
-*********************************
+*******************************
 
 The managed local SuperLink keeps its files in ``$FLWR_HOME/local-superlink/``:
 
@@ -132,9 +132,9 @@ The managed local SuperLink keeps its files in ``$FLWR_HOME/local-superlink/``:
 
 These files persist across local runs until you remove them yourself.
 
-***************************************
+*************************************
 Stop the background local SuperLink
-***************************************
+*************************************
 
 There is currently no dedicated ``flwr`` command to stop the managed local SuperLink
 process. To stop it, first inspect the matching process and then terminate it.
@@ -180,9 +180,9 @@ Stop the process:
 
 If you changed the local Control API port with ``FLWR_LOCAL_CONTROL_API_PORT``, replace
 ``39093`` in the commands above.
 
-*******************
+*****************
 Troubleshooting
-*******************
+*****************
 
 If a local run fails before it starts, or if the managed local SuperLink does not come
 up correctly, inspect:

diff --git a/framework/docs/source/how-to-run-flower-with-deployment-engine.rst b/framework/docs/source/how-to-run-flower-with-deployment-engine.rst
index 3cee775dfbef..5553439037a3 100644
--- a/framework/docs/source/how-to-run-flower-with-deployment-engine.rst
+++ b/framework/docs/source/how-to-run-flower-with-deployment-engine.rst
@@ -2,9 +2,9 @@
 .. meta::
    :description: Guide to use Flower's Deployment Runtime and run a Flower App through a federation consisting of a SuperLink and two SuperNodes.
 
-#######################################
+########################################
 Run Flower with the Deployment Runtime
-#######################################
+########################################
 
 This how-to guide demonstrates how to set up and run Flower with the Deployment Runtime
 using minimal configurations to illustrate the workflow. This is a complementary guide
@@ -77,9 +77,9 @@ executing ``flwr new``:
 .. note::
 
    If you decide to run the project with ``flwr run .`` against the default local
-   profile, Flower submits the run to a managed local SuperLink, which then executes
-   it with the Simulation Runtime. Continue to Step 2 to instead point ``flwr run`` at
-   a named SuperLink connection for the Deployment Runtime.
+   profile, Flower submits the run to a managed local SuperLink, which then executes it
+   with the Simulation Runtime. Continue to Step 2 to instead point ``flwr run`` at a
+   named SuperLink connection for the Deployment Runtime.
 
 .. tip::

diff --git a/framework/docs/source/how-to-run-simulations.rst b/framework/docs/source/how-to-run-simulations.rst
index 11e16e5b8c6f..f0452113b91c 100644
--- a/framework/docs/source/how-to-run-simulations.rst
+++ b/framework/docs/source/how-to-run-simulations.rst
@@ -111,12 +111,11 @@ multiple apps to choose from. The example below uses the ``PyTorch`` quickstart
 Then, follow the instructions shown after completing the |flwr_new_link|_ command. When
 you execute |flwr_run_link|_, the run will execute with the ``Simulation Runtime``.
 
-For local simulation profiles, ``flwr run`` submits the run to a managed local
-SuperLink via the Control API. If the profile has ``options.*`` and no explicit
-``address``, Flower starts a local SuperLink automatically when needed, keeps it
-running in the background, and reuses it for ``flwr list``, ``flwr log``, and
-``flwr stop``. See :doc:`how-to-run-flower-locally` for the full local workflow and
-runtime lifecycle.
+For local simulation profiles, ``flwr run`` submits the run to a managed local SuperLink
+via the Control API. If the profile has ``options.*`` and no explicit ``address``,
+Flower starts a local SuperLink automatically when needed, keeps it running in the
+background, and reuses it for ``flwr list``, ``flwr log``, and ``flwr stop``. See
+:doc:`how-to-run-flower-locally` for the full local workflow and runtime lifecycle.
 
 Simulation examples
 ===================
@@ -144,8 +143,8 @@ The complete list of examples can be found in `the Flower GitHub
 Defining ``ClientApp`` resources
 **********************************
 
-By default, the ``Simulation Runtime`` assigns two CPU cores to each backend worker. This
-means that if your system has 10 CPU cores, five backend workers can be running in
+By default, the ``Simulation Runtime`` assigns two CPU cores to each backend worker.
+This means that if your system has 10 CPU cores, five backend workers can be running in
 parallel, each executing a different ``ClientApp`` instance.
 
 More often than not, you would probably like to adjust the resources your ``ClientApp``
@@ -227,9 +226,9 @@ round but your system can only accommodate 8 clients concurrently. The ``Simulat
 Runtime`` will schedule 100 ``ClientApps`` to run and then will execute them in a
 resource-aware manner in batches of 8.
 
-*****************************
+******************************
 Simulation Runtime resources
-*****************************
+******************************
 
 By default, the ``Simulation Runtime`` has **access to all system resources** (i.e.,
 all CPUs, all GPUs). However, in some settings, you might want to limit how many of your

diff --git a/framework/docs/source/how-to-use-cli-json-output.rst b/framework/docs/source/how-to-use-cli-json-output.rst
index c2756432e9a9..b6c742735afb 100644
--- a/framework/docs/source/how-to-use-cli-json-output.rst
+++ b/framework/docs/source/how-to-use-cli-json-output.rst
@@ -33,8 +33,8 @@ This guide shows JSON output for:
 ``flwr run`` JSON output
 **************************
 
-The |flwr_run| command submits a Flower App run. For a local app, the CLI first builds
-a FAB and then starts the run through the Control API.
+The |flwr_run| command submits a Flower App run. For a local app, the CLI first builds a
+FAB and then starts the run through the Control API.
 
 Representative default output:
 
@@ -71,8 +71,7 @@ The |flwr_run| JSON output contains:
 - ``fab-hash``: the short FAB hash
 - ``fab-filename``: the built FAB filename
 
-If the command fails, the JSON output contains ``success: false`` and
-``error-message``.
+If the command fails, the JSON output contains ``success: false`` and ``error-message``.
 
 ***************************
 ``flwr list`` JSON output
@@ -198,5 +197,4 @@ To return structured JSON instead:
 
        "run-id": "1859953118041441032"
    }
 
-If the command fails, the JSON output contains ``success: false`` and
-``error-message``.
+If the command fails, the JSON output contains ``success: false`` and ``error-message``.

diff --git a/framework/docs/source/tutorial-quickstart-jax.rst b/framework/docs/source/tutorial-quickstart-jax.rst
index ce726eb16276..015b14ad532c 100644
--- a/framework/docs/source/tutorial-quickstart-jax.rst
+++ b/framework/docs/source/tutorial-quickstart-jax.rst
@@ -14,12 +14,11 @@ dataset using Flower and `JAX `_ with the
 create a virtual environment and run everything within a :doc:`virtualenv
 `.
 
-Let's use ``flwr new`` to create a complete Flower+JAX project. It will generate all
-the files needed to run a federation of 50 nodes using |fedavg|_. By default, the
-generated app uses a local simulation profile that ``flwr run`` submits to a managed
-local SuperLink, which then executes the run with the Flower Simulation Runtime. The
-MNIST dataset will be partitioned using |flowerdatasets|_'s
-|iidpartitioner|_.
+Let's use ``flwr new`` to create a complete Flower+JAX project. It will generate all the
+files needed to run a federation of 50 nodes using |fedavg|_. By default, the generated
+app uses a local simulation profile that ``flwr run`` submits to a managed local
+SuperLink, which then executes the run with the Flower Simulation Runtime. The MNIST
+dataset will be partitioned using |flowerdatasets|_'s |iidpartitioner|_.
 
 Now that we have a rough idea of what this example is about, let's get started. First,
 install Flower in your new environment:

diff --git a/framework/docs/source/tutorial-quickstart-mlx.rst b/framework/docs/source/tutorial-quickstart-mlx.rst
index f6ff157aac52..6e9d99052dea 100644
--- a/framework/docs/source/tutorial-quickstart-mlx.rst
+++ b/framework/docs/source/tutorial-quickstart-mlx.rst
@@ -48,12 +48,11 @@ In this federated learning tutorial, we will learn how to train a simple MLP on
 using Flower and MLX. It is recommended to create a virtual environment and run
 everything within a :doc:`virtualenv `.
 
-Let's use ``flwr new`` to create a complete Flower+MLX project. It will generate all
-the files needed to run a federation of 10 nodes using |fedavg_link|_. By default, the
+Let's use ``flwr new`` to create a complete Flower+MLX project. It will generate all the
+files needed to run a federation of 10 nodes using |fedavg_link|_. By default, the
 generated app uses a local simulation profile that ``flwr run`` submits to a managed
 local SuperLink, which then executes the run with the Flower Simulation Runtime. The
-dataset will be partitioned using Flower Dataset's
-`IidPartitioner
+dataset will be partitioned using Flower Dataset's `IidPartitioner
 `_.
 
 Now that we have a rough idea of what this example is about, let's get started. First,

diff --git a/framework/docs/source/tutorial-quickstart-pytorch.rst b/framework/docs/source/tutorial-quickstart-pytorch.rst
index a57b432c3dde..a0172dadaa3a 100644
--- a/framework/docs/source/tutorial-quickstart-pytorch.rst
+++ b/framework/docs/source/tutorial-quickstart-pytorch.rst
@@ -53,8 +53,7 @@ Let's use ``flwr new`` to create a complete Flower+PyTorch project. It will gene
 the files needed to run a federation of 10 nodes using |fedavg_link|_. By default, the
 generated app uses a local simulation profile that ``flwr run`` submits to a managed
 local SuperLink, which then executes the run with the Flower Simulation Runtime. The
-dataset will be partitioned using Flower Dataset's
-`IidPartitioner
+dataset will be partitioned using Flower Dataset's `IidPartitioner
 `_.
 
 Now that we have a rough idea of what this example is about, let's get started. First,

diff --git a/framework/docs/source/tutorial-quickstart-scikitlearn.rst b/framework/docs/source/tutorial-quickstart-scikitlearn.rst
index 69d399c5d670..9c211131abcf 100644
--- a/framework/docs/source/tutorial-quickstart-scikitlearn.rst
+++ b/framework/docs/source/tutorial-quickstart-scikitlearn.rst
@@ -45,8 +45,7 @@ Let's use ``flwr new`` to create a complete Flower+scikit-learn project. It will
 generate all the files needed to run a federation of 10 nodes using |fedavg_link|_. By
 default, the generated app uses a local simulation profile that ``flwr run`` submits
 to a managed local SuperLink, which then executes the run with the Flower Simulation
-Runtime. The dataset will be partitioned using
-|flowerdatasets|_'s |iidpartitioner|_
+Runtime. The dataset will be partitioned using |flowerdatasets|_'s |iidpartitioner|_
 
 Now that we have a rough idea of what this example is about, let's get started. First,
 install Flower in your new environment:

diff --git a/framework/docs/source/tutorial-quickstart-tensorflow.rst b/framework/docs/source/tutorial-quickstart-tensorflow.rst
index 01fb8b93581e..fe7a7ddfe4c1 100644
--- a/framework/docs/source/tutorial-quickstart-tensorflow.rst
+++ b/framework/docs/source/tutorial-quickstart-tensorflow.rst
@@ -43,10 +43,9 @@ virtual environment and run everything within a :doc:`virtualenv
 
 Let's use ``flwr new`` to create a complete Flower+TensorFlow project. It will generate
 all the files needed to run a federation of 10 nodes using |fedavg_link|_. By default,
-the generated app uses a local simulation profile that ``flwr run`` submits to a
-managed local SuperLink, which then executes the run with the Flower Simulation Runtime.
-The dataset will be partitioned using Flower Dataset's
-`IidPartitioner
+the generated app uses a local simulation profile that ``flwr run`` submits to a managed
+local SuperLink, which then executes the run with the Flower Simulation Runtime. The
+dataset will be partitioned using Flower Dataset's `IidPartitioner
 `_.
 
 Now that we have a rough idea of what this example is about, let's get started. First,

diff --git a/framework/docs/source/tutorial-quickstart-xgboost.rst b/framework/docs/source/tutorial-quickstart-xgboost.rst
index a064b15a6e93..070db9d3f6d3 100644
--- a/framework/docs/source/tutorial-quickstart-xgboost.rst
+++ b/framework/docs/source/tutorial-quickstart-xgboost.rst
@@ -51,10 +51,9 @@ virtual environment and run everything within a :doc:`virtualenv
 
 Let's use ``flwr new`` to create a complete Flower+XGBoost project. It will generate all
 the files needed to run a federation of 10 nodes using |fedxgbbagging_link|_ strategy.
-By default, the generated app uses a local simulation profile that ``flwr run``
-submits to a managed local SuperLink, which then executes the run with the Flower
-Simulation Runtime. The dataset will be partitioned using Flower
-Dataset's `IidPartitioner
+By default, the generated app uses a local simulation profile that ``flwr run`` submits
+to a managed local SuperLink, which then executes the run with the Flower Simulation
+Runtime. The dataset will be partitioned using Flower Dataset's `IidPartitioner
 `_.
 
 |fedxgbbagging_link|_ (bootstrap aggregation) is an ensemble method that improves

diff --git a/framework/docs/source/tutorial-series-use-a-federated-learning-strategy-pytorch.rst b/framework/docs/source/tutorial-series-use-a-federated-learning-strategy-pytorch.rst
index 74cd9baed63b..66c8bac7adfa 100644
--- a/framework/docs/source/tutorial-series-use-a-federated-learning-strategy-pytorch.rst
+++ b/framework/docs/source/tutorial-series-use-a-federated-learning-strategy-pytorch.rst
@@ -426,8 +426,7 @@ so little code, right?
 
 In the later sections, we've seen how we can communicate arbitrary values between server
 and clients to fully customize client-side execution. With that capability, we built a
 large-scale Federated Learning simulation using the Flower Simulation Runtime and ran an
-experiment involving 1000 clients in the same workload — all in the same Flower
-project!
+experiment involving 1000 clients in the same workload — all in the same Flower project!
 
 ************
 Next steps

From a533d5ef2a8ec246f6b776aad4587dc7622f5898 Mon Sep 17 00:00:00 2001
From: "Daniel J. Beutel"
Date: Mon, 9 Mar 2026 19:10:08 +0100
Subject: [PATCH 5/5] Add Flower configuration link

---
 framework/docs/source/how-to-run-flower-locally.rst | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/framework/docs/source/how-to-run-flower-locally.rst b/framework/docs/source/how-to-run-flower-locally.rst
index 8f02e93af1c0..c4db753b355d 100644
--- a/framework/docs/source/how-to-run-flower-locally.rst
+++ b/framework/docs/source/how-to-run-flower-locally.rst
@@ -6,10 +6,11 @@
 Run Flower Locally with a Managed SuperLink
 #############################################
 
-When you use a local Flower configuration profile with ``options.*`` and no explicit
-``address``, ``flwr`` does not call the simulation runtime directly. Instead, Flower
-starts a managed local ``flower-superlink`` on demand, submits the run through the
-Control API, and the local SuperLink executes the run with the simulation runtime.
+When you use a local profile in the :doc:`Flower configuration
+` with ``options.*`` and no explicit ``address``, ``flwr``
+does not call the simulation runtime directly. Instead, Flower starts a managed local
+``flower-superlink`` on demand, submits the run through the Control API, and the local
+SuperLink executes the run with the simulation runtime.
 
 This is the default experience for a profile like the one created automatically in your
 Flower configuration: