
Commit f32302b

Merge pull request #181 from NeLy-EPFL/dev-v1.0.0-pre.3
Release: v1.0.0-pre.3
2 parents 4bc6d5e + eabc0e5 commit f32302b


63 files changed: +3229 −1963 lines

README.md
Lines changed: 2 additions & 2 deletions

@@ -1,4 +1,4 @@
-## Simulating embodied sensorimotor control with NeuroMechFly 2.0
+## Simulating embodied sensorimotor control with NeuroMechFly v2

 ![](https://github.com/NeLy-EPFL/_media/blob/main/flygym/banner_large.jpg?raw=true)

@@ -11,7 +11,7 @@
 ![overview_video](https://github.com/NeLy-EPFL/_media/blob/main/flygym/overview_video.gif?raw=true)

-This repository contains the source code for FlyGym, the Python library for NeuroMechFly 2.0, a digital twin of the adult fruit fly *Drosophila melanogaster* that can see, smell, walk over challenging terrain, and interact with the environment (see our [NeuroMechFly 2.0 paper](https://www.biorxiv.org/content/10.1101/2023.09.18.556649)).
+This repository contains the source code for FlyGym, the Python library for NeuroMechFly v2, a digital twin of the adult fruit fly *Drosophila melanogaster* that can see, smell, walk over challenging terrain, and interact with the environment (see our [NeuroMechFly v2 paper](https://www.biorxiv.org/content/10.1101/2023.09.18.556649)).

 NeuroMechFly consists of the following components:

 - **Biomechanical model:** The biomechanical model is based on a micro-CT scan of a real adult female fly (see our original NeuroMechFly publication). We have adjusted several body segments (in particular in the antennae) to better reflect biological reality.

doc/source/api_ref/examples/locomotion.rst
Lines changed: 9 additions & 0 deletions

@@ -36,3 +36,12 @@ Hybrid turning controller
    :undoc-members:
    :show-inheritance:
    :inherited-members:
+
+Hybrid turning fly
+-------------------------
+
+.. autoclass:: flygym.examples.locomotion.HybridTurningFly
+   :members:
+   :undoc-members:
+   :show-inheritance:
+   :inherited-members:

doc/source/api_ref/examples/vision.rst
Lines changed: 1 addition & 1 deletion

@@ -54,7 +54,7 @@ Connectome-constrained vision model
    :show-inheritance:
    :inherited-members:

-.. autoclass:: flygym.examples.vision.RealisticVisionController
+.. autoclass:: flygym.examples.vision.RealisticVisionFly
    :members:
    :undoc-members:
    :show-inheritance:

doc/source/api_ref/index.rst
Lines changed: 1 addition & 0 deletions

@@ -7,6 +7,7 @@ This section of the documentation provides the complete API reference for the Fl
 .. toctree::
    :maxdepth: 2

+   mdp_specs
    fly
    arena
    camera

doc/source/api_ref/mdp_specs.rst
Lines changed: 94 additions & 0 deletions (new file)

MDP Task Specifications
=======================

As discussed in the `"Interacting with NeuroMechFly" tutorial <https://neuromechfly.org/tutorials/gym_basics_and_kinematic_replay.html>`_, we formulate the control problem as a "task," or "environment," which receives an *action* from the controller and returns (i) an *observation* of the state of the fly and (ii) an optional reward. The task also returns an *info* dictionary, which can be used to provide additional information about the task; a *terminated* flag indicating whether the task has ended due to certain conditions being met; and a *truncated* flag indicating whether the task has been cut short for technical reasons (e.g., physics errors or timeout). On this page, we specify the content of the *action* and the *observation*, and the conditions under which *terminated* and *truncated* return True.

Default ``Simulation``
----------------------

**Action:** The action space is a `Dict space <https://gymnasium.farama.org/api/spaces/composite/#dict>`_ with the following keys:

* "joints": The control signal for the actuated DoFs (e.g., if ``Fly.control == "position"``, this is the target joint angle). This is a NumPy array of shape (num_actuated_joints,). The order of the DoFs is the same as in ``Fly.actuated_joints``.
* "adhesion" (if ``Fly.enable_adhesion`` is True): The on/off signal of leg adhesion as a NumPy array of shape (6,), one entry per leg. The order of the legs is: LF, LM, LH, RF, RM, RH (L/R = left/right, F/M/H = front/middle/hind).

**Observation:** The observation space is a Dict space with the following keys:

* "joints": The joint states as a NumPy array of shape (3, num_actuated_joints). The three rows are the angle, angular velocity, and force at each DoF. The order of the DoFs is the same as in ``Fly.actuated_joints``.
* "fly": The fly state as a NumPy array of shape (4, 3). 0th row: x, y, z position of the fly in the arena. 1st row: x, y, z velocity of the fly in the arena. 2nd row: orientation of the fly around the x, y, z axes. 3rd row: rate of change of the fly's orientation.
* "contact_forces": Readings of the touch contact sensors, one placed on each of the body segments specified in ``Fly.contact_sensor_placements``. This is a NumPy array of shape (num_contact_sensor_placements, 3).
* "end_effectors": The positions of the end effectors (most distal tarsal links) of the legs as a NumPy array of shape (6, 3). The order of the legs is: LF, LM, LH, RF, RM, RH (L/R = left/right, F/M/H = front/middle/hind).
* "fly_orientation": NumPy array of shape (3,). This is the vector (x, y, z) pointing in the direction the fly is facing.
* "vision" (if ``Fly.enable_vision`` is True): The light intensities sensed by the ommatidia on the compound eyes. This is a NumPy array of shape (2, num_ommatidia_per_eye, 2), where the 0th dimension is the side (left, right in that order), the 1st dimension specifies the ommatidium, and the last dimension is the spectral channel (yellow-type, pale-type in that order). Each ommatidium has only one channel with a nonzero reading. The intensities are given on a [0, 1] scale.
* "odor_intensity" (if ``Fly.enable_olfaction`` is True): The odor intensities sensed by the odor sensors (by default, 2 antennae and 2 maxillary palps). This is a NumPy array of shape (odor_space_dimension, num_sensors).

**Info:** The info dictionary contains the following:

* "vision_updated" (if ``Fly.enable_vision`` is True): A boolean indicating whether the visual input has been updated in the current step. This is useful because the visual input is usually updated at a much lower frequency than the physics simulation.
* "flip" (if ``Fly.detect_flip`` is True): A boolean indicating whether the fly has flipped upside down.
* "flip_counter" (if ``Fly.detect_flip`` is True): The number of simulation steps during which all legs of the fly have been off the ground (detected using a threshold on ground contact forces). Useful for debugging.
* "contact_forces" (if ``Fly.detect_flip`` is True): The contact forces sensed by the legs. Useful for debugging.
* "neck_actuation" (if ``Fly.head_stabilization_model`` is specified): The neck actuation applied.

**Reward, termination, and truncation:** By default, the task always returns False for the terminated and truncated flags and 0 for the reward. The user is expected to modify this behavior by extending the ``Simulation`` or ``Fly`` classes as needed.
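As a concrete sketch of assembling such an action dictionary, the snippet below uses only NumPy. The number of actuated DoFs is an assumption here (7 leg DoFs × 6 legs = 42); in practice, check ``len(Fly.actuated_joints)`` for your configuration.

```python
import numpy as np

# Hypothetical dimensionality: 7 actuated DoFs per leg x 6 legs = 42.
# This is an assumption for illustration; query len(Fly.actuated_joints)
# in your own setup rather than hard-coding it.
NUM_ACTUATED_JOINTS = 7 * 6

action = {
    # Target joint angles (radians) under position control; all zeros here.
    "joints": np.zeros(NUM_ACTUATED_JOINTS),
    # Adhesion on/off per leg, in the order LF, LM, LH, RF, RM, RH.
    "adhesion": np.ones(6, dtype=int),
}

# The shapes must match the Dict action space described above.
assert action["joints"].shape == (NUM_ACTUATED_JOINTS,)
assert action["adhesion"].shape == (6,)
```

A dictionary of this form would then be passed to the simulation's ``step`` method, which returns the observation, reward, flags, and info dictionary specified above.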
Examples under ``flygym/examples``
----------------------------------

Hybrid turning controller (``HybridTurningController``)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**Action:** The ``flygym.examples.locomotion.HybridTurningController`` class expects a single NumPy array of shape (2,) as its action. The values are the descending walking drives on the left and right sides of the fly. See the `tutorial on the hybrid turning controller <https://neuromechfly.org/tutorials/turning.html>`_ for more details.

**Observation, reward, termination, and truncation:** The ``flygym.examples.locomotion.HybridTurningController`` class returns the same observation, reward, "terminated" flag, and "truncated" flag as the default ``Simulation`` class.

**Info:** In addition to what is provided by the default ``Simulation``, the ``flygym.examples.locomotion.HybridTurningController`` class includes the following in the "info" dictionary:

* "joints", "adhesion": The hybrid turning controller computes the appropriate joint angles and adhesion signals based on the descending inputs, CPG states, and mechanosensory feedback. These values are the computed low-level motor commands applied to the underlying base ``Simulation``.
* "net_corrections": The net correction amounts applied to the legs as a NumPy array of shape (6,). Refer to the `tutorial on the hybrid turning controller <https://neuromechfly.org/tutorials/hybrid_controller.html>`__ for more details. The order of the legs is: LF, LM, LH, RF, RM, RH (L/R = left/right, F/M/H = front/middle/hind).
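One hypothetical way to compose such a two-element action is to map a signed turn bias onto asymmetric left/right drives. The helper below is illustrative, not part of FlyGym, and the steering sign convention is an assumption; consult the turning tutorial for the actual drive-to-turn relationship.

```python
import numpy as np

def turning_action(turn_bias: float, base_drive: float = 1.0) -> np.ndarray:
    """Map a signed turn bias in [-1, 1] to left/right descending drives.

    Hypothetical helper (not FlyGym's API). Assumption: reducing one
    side's drive steers the fly toward that side, so a positive bias
    (turn right) lowers the right-side drive and vice versa.
    """
    left = base_drive * (1.0 + min(turn_bias, 0.0))
    right = base_drive * (1.0 - max(turn_bias, 0.0))
    return np.array([left, right])

# Symmetric drives for straight walking; asymmetric drives for turning.
straight = turning_action(0.0)
right_turn = turning_action(0.6)
```

The resulting (2,) array is the action expected by ``HybridTurningController`` and by the example tasks below that reuse its action space.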
Simple object following (``VisualTaxis``)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**Action:** The ``flygym.examples.vision.VisualTaxis`` class expects the same action as ``HybridTurningController``.

**Observation:** The ``flygym.examples.vision.VisualTaxis`` class returns an array of shape (2, 3) as the observation. The two rows of the array correspond to the left and right eyes (in that order). The three columns are the azimuth (left-right) position, the elevation (top-down) position, and the size of the object in the visual field. All values are normalized to the range [0, 1], either by the width/height of the visual field or by its total size.

**Reward, termination, truncation, and info:** The ``flygym.examples.vision.VisualTaxis`` class always returns 0 for the reward, False for the "terminated" and "truncated" flags, and an empty dictionary for the "info" dictionary.
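To make the normalization concrete, here is a hypothetical sketch. The field dimensions and the raw feature extraction are illustrative assumptions, not FlyGym's actual values or API; only the output shape and [0, 1] scaling mirror the specification above.

```python
import numpy as np

# Illustrative visual-field dimensions (assumptions, not FlyGym's values).
FIELD_WIDTH, FIELD_HEIGHT = 450, 512
FIELD_SIZE = FIELD_WIDTH * FIELD_HEIGHT

def normalize_features(raw_per_eye):
    """Build a (2, 3) VisualTaxis-style observation.

    raw_per_eye: two (azimuth, elevation, area) tuples in raw field
    units, one for the left eye and one for the right eye.
    """
    obs = np.zeros((2, 3))
    for i, (az, el, area) in enumerate(raw_per_eye):
        # Normalize by width, height, and total field size respectively.
        obs[i] = [az / FIELD_WIDTH, el / FIELD_HEIGHT, area / FIELD_SIZE]
    return obs

obs = normalize_features([(225, 256, 1000), (90, 128, 400)])
assert obs.shape == (2, 3)
assert np.all((obs >= 0) & (obs <= 1))
```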
Path integration task (``PathIntegrationController``)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**Action, reward, termination, truncation, and info:** The ``flygym.examples.path_integration.PathIntegrationController`` class expects the same action and returns the same reward, "terminated" flag, "truncated" flag, and "info" dictionary as ``HybridTurningController``.

**Observation:** In addition to what is returned by ``HybridTurningController``, ``flygym.examples.path_integration.PathIntegrationController`` also provides the following in the observation dictionary:

* "stride_diff_unmasked": The relative shift of the tips of the legs from one simulation step to the next, computed in the reference frame of the fly and presented as a NumPy array of shape (6, 3). The order of the legs (0th axis) is: LF, LM, LH, RF, RM, RH (L/R = left/right, F/M/H = front/middle/hind). The 1st axis contains the x, y, z components of the shift, obtained by comparing the positions of the leg tips in the current step with those in the previous step. The shift is not masked by the legs' contact with the ground.
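As a hypothetical sketch of how such per-step stride differences could feed a path integrator (illustrative only, not the library's estimator): during stance, leg tips move backward in the fly's frame as the body moves forward, so accumulating the negated shifts of legs in ground contact yields a displacement estimate.

```python
import numpy as np

def integrate_strides(stride_diffs, contact_masks):
    """Estimate fly displacement from per-step leg-tip shifts.

    stride_diffs: sequence of (6, 3) arrays ("stride_diff_unmasked").
    contact_masks: sequence of (6,) boolean arrays marking legs in
    ground contact (an assumed input; the observation itself is unmasked).
    """
    displacement = np.zeros(3)
    for diff, mask in zip(stride_diffs, contact_masks):
        if mask.any():
            # Average shift of stance legs, sign-flipped: tips moving
            # backward in the fly frame imply forward body motion.
            displacement -= diff[mask].mean(axis=0)
    return displacement

# Synthetic data: all legs in stance, tips shifting back 0.01 units/step.
diffs = [np.tile([-0.01, 0.0, 0.0], (6, 1)) for _ in range(100)]
masks = [np.ones(6, dtype=bool)] * 100
est = integrate_strides(diffs, masks)
```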
Plume tracking task (``PlumeNavigationTask``)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**Action, observation, reward, termination, and info:** The ``flygym.examples.olfaction.PlumeNavigationTask`` class expects the same action and returns the same observation, reward, "terminated" flag, and "info" dictionary as ``HybridTurningController``.

**Truncation:** The ``flygym.examples.olfaction.PlumeNavigationTask`` class returns True for the "truncated" flag if and only if the fly has left the area of the arena where the plume is simulated.
NeuroMechFly with connectome-constrained vision network (``RealisticVisionController``)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**Action, reward, termination, and truncation:** The ``flygym.examples.realistic_vision.RealisticVisionController`` class expects the same action and returns the same reward, "terminated" flag, and "truncated" flag as ``HybridTurningController``.

**Observation:** In addition to what is returned by ``HybridTurningController``, the ``flygym.examples.realistic_vision.RealisticVisionController`` class also provides the following in the observation dictionary:

* "nn_activities_arr": The activities of the visual system neurons, represented as a NumPy array of shape (2, num_cells_per_eye). The 0th dimension corresponds to the eyes in the order (left, right).

**Info:** In addition to what is returned by ``HybridTurningController``, the ``flygym.examples.realistic_vision.RealisticVisionController`` class also provides the following in the "info" dictionary:

* "nn_activities": Activities of the visual system neurons as a ``flyvision.LayerActivity`` object. This carries the same information as ``obs["nn_activities_arr"]``, but in the form of a structured ``flyvision.LayerActivity`` object rather than a plain array.
doc/source/changelog.rst
Lines changed: 16 additions & 1 deletion

@@ -1,11 +1,26 @@
 Change Log
 ==========

+* **1.0.0:** In spring 2024, NeuroMechFly was used, for the second time, in the EPFL course "`Controlling behavior in animals and robots <https://edu.epfl.ch/coursebook/en/controlling-behavior-in-animals-and-robots-BIOENG-456>`_". At the same time, we revised the NeuroMechFly v2 manuscript. In the process, we significantly improved the FlyGym package, added new functionality, and incorporated changes in response to student feedback. These enhancements are released as FlyGym version 1.0.0. This release is not backward compatible; please refer to the `tutorials <https://neuromechfly.org/tutorials/index.html>`_ and `API references <https://neuromechfly.org/api_ref/index.html>`_ for more information. The main changes are:
+
+  * Major API changes:
+
+    * The ``NeuroMechFly`` class is split into ``Fly``, a class that represents the fly, and ``Simulation``, a class that represents the simulation and can contain multiple flies.
+    * The ``Parameters`` class is deprecated. Parameters related to the fly (such as joint parameters and actuated DoFs) should be set directly on the ``Fly`` object; parameters related to the simulation (such as the time step and the render cameras) should be set directly on the ``Simulation`` object.
+    * A new ``Camera`` class is introduced. A simulation can contain multiple cameras.
+
+  * New `examples <https://github.com/NeLy-EPFL/flygym/tree/main/flygym/examples>`_:
+
+    * Path integration based on ascending mechanosensory feedback.
+    * Head stabilization based on ascending mechanosensory feedback.
+    * Navigating a complex plume, simulated separately in a fluid mechanics simulator.
+    * Following another fly using a realistic, connectome-constrained neural network that processes visual inputs.
+
 * **0.2.5:** Modify model file to make it compatible with MuJoCo 3.1.1. Disable Python 3.7 support accordingly.
 * **0.2.4:** Set MuJoCo version to 2.3.7. Documentation updates.
 * **0.2.3:** Various bug fixes. Improved placement of the spherical treadmill in the tethered environment.
 * **0.2.2:** Changed default joint kp and adhesion forces to those used in the controller comparison task. Various minor bug fixes. Documentation updates.
 * **0.2.1:** Simplified class names: ``NeuroMechFlyMuJoCo`` → ``NeuroMechFly``, ``MuJoCoParameters`` → ``Parameters``. Minor documentation updates.
 * **0.2.0:** The current base version — major API change from 0.1.x.
-* **0.1.x** The version used during the development of NeuroMechFly 2.0.
+* **0.1.x:** Versions used during the initial development of NeuroMechFly v2.
 * **Unversioned:** Version used for the Spring 2023 offering of the BIOENG-456 "Controlling Behavior in Animals and Robots" course at EPFL.

doc/source/gallery/index.rst
Lines changed: 13 additions & 0 deletions

@@ -1,6 +1,19 @@
 Gallery
 =======

+.. toctree::
+   :hidden:
+
+   video_3_forces
+   video_4_climbing
+   video_8_controller_comparison
+   video_9_visual_taxis
+   video_10_odour_taxis
+   video_11_head_stabilization
+   video_12_multimodal_navigation
+   video_13_plume_navigation
+   video_14_fly_follow_fly
+
 NeuroMechFly can be used to emulate a wide range of behaviours and scenarios. Here are some examples of the experiments that can be conducted using FlyGym.

 .. raw:: html

doc/source/gallery/video_10_odour_taxis.rst
Lines changed: 2 additions & 1 deletion

@@ -1,5 +1,6 @@
 Odour Taxis
-=======
+===========

 Our simulated fly walks toward an attractive odour source (in orange) while avoiding two aversive odour sources (in blue).

 .. raw:: html

doc/source/gallery/video_11_head_stabilization.rst
Lines changed: 2 additions & 1 deletion

@@ -1,5 +1,6 @@
 Head stabilization on complex terrain
-=======
+=====================================

 Here we incorporate ascending proprioceptive signals to stabilize the head of NeuroMechFly while it traverses complex terrain. This effectively reduces perceived self-induced motion in the visual input.

 .. raw:: html

doc/source/gallery/video_12_multimodal_navigation.rst
Lines changed: 2 additions & 1 deletion

@@ -1,5 +1,6 @@
 Multimodal navigation
-=======
+=====================

 Here we demonstrate how a high-level agent trained with Reinforcement Learning can avoid a pillar using vision while walking toward an odour source.

 .. raw:: html
