[Question] Fluidity Index: Next Generation Super-Intelligence Benchmarks #4315
Replies: 12 comments 1 reply
-
Thank you for posting this. I will move this post to our Discussions for follow-up.
-
We're part of NVIDIA Inception too, so there's lots of room to get this done!
-
I've lately been wondering how to program the reward state and transfer protocols for the RL-Game. I suppose energy transfer from tasking onto the robot can be simulated relative to token output quality as observed by the environment. A card is a good example of something that encapsulates this mechanism: it can read from and write to doors or direct charging outlets.
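To make that concrete, here is a rough sketch of one way such a mechanism could be simulated. The ChargeCard class, the reward-to-energy rate, and the swipe interface are all hypothetical illustrations, not APIs from Isaac Lab or anywhere else:

```python
# Hypothetical sketch: converting task reward ("earnings") into battery
# energy through a card that mediates access to charging outlets. All names
# here are illustrative assumptions, not part of any existing library.

class ChargeCard:
    """Stores credit earned from task rewards and spends it at outlets."""

    def __init__(self, reward_to_energy: float = 0.1):
        self.credit = 0.0  # accumulated earnings, in reward units
        self.reward_to_energy = reward_to_energy  # Wh of charge per reward unit

    def deposit(self, task_reward: float) -> None:
        # Earnings scale with reward, a proxy for token output quality
        # as observed by the environment.
        self.credit += max(task_reward, 0.0)

    def swipe(self, battery_level_wh: float, battery_capacity_wh: float) -> float:
        """Spend all credit at an outlet; returns the new battery level."""
        energy = self.credit * self.reward_to_energy
        self.credit = 0.0
        return min(battery_level_wh + energy, battery_capacity_wh)
```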
-
https://github.com/robotlearning123/awesome-isaac-gym?tab=readme-ov-file#tutorials Here is some more material on Isaac Lab.
-
https://nvlabs.github.io/SONIC/ These robot demos seem relevant to our research. @Shay-7278
-
Overview of Implementing Your RL Benchmark Game in Isaac Lab

Based on the diagram you provided, which appears to depict the architecture of NVIDIA's Isaac Lab Arena (a benchmark evaluation framework built on Isaac Lab), this setup is highly applicable to your scenario. Isaac Lab is a modular, GPU-accelerated simulation framework for robot learning, particularly reinforcement learning (RL), and it supports creating custom benchmarks for humanoid robots. It includes tools for task definitions, environment configurations, evaluation, and integration with partner benchmarks like RoboCasa or GROOT Bench. While the arXiv paper you linked (2510.20636v1) discusses general AI adaptability benchmarks (the Fluidity Index), it doesn't directly address robotics; perhaps you intended a related paper like 2511.04831v1 on Isaac Lab itself, which focuses on multi-task robot learning in simulation.

Isaac Lab uses Gymnasium-compatible environments, making it straightforward to create an "RL-Game" (i.e., a custom RL task/environment) with wrapped layers for added functionality. For your humanoid non-stationary robot that earns "jobs" to charge its batteries and optimizes mechanical activity for efficiency, you can extend existing humanoid tasks (e.g., locomotion or manipulation) to include battery dynamics. This aligns well with Isaac Lab's support for observability of internal states (like battery levels), reward shaping, and non-stationary elements via randomization. I'll break this down step by step: applicability, environment setup, reward and transfer protocols, observability, and policy options (including transformers).

Applicability of RL-Game and Wrapped Environments
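Since tasks register with Gymnasium, creating an "RL-Game" amounts to gym.make on a registered task ID. A creation sketch, assuming Isaac Sim has already been launched (e.g., via Isaac Lab's AppLauncher), Isaac Lab 2.x module naming (older releases use the omni.isaac.lab_tasks namespace), and an illustrative task ID that varies by release:

```python
# Sketch: creating a pre-built humanoid task through the Gymnasium registry.
# Assumes an Isaac Sim app is already running and Isaac Lab 2.x naming;
# the task ID is illustrative and version-dependent.
import gymnasium as gym

import isaaclab_tasks  # noqa: F401  # importing this registers the tasks
from isaaclab_tasks.utils import parse_env_cfg

env_cfg = parse_env_cfg("Isaac-Humanoid-v0", num_envs=64)
env = gym.make("Isaac-Humanoid-v0", cfg=env_cfg)
obs, info = env.reset()
```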
Humanoid support is strong: pre-built envs include generic humanoids and specific models like the Unitree G1 or Agibot A2D for tasks like walking, picking, or stacking. These can be extended for your energy-optimized benchmark.

Setting Up the Environment

Use Isaac Lab's Manager-Based workflow (recommended for benchmarks, as per your diagram) for high-level orchestration. Here's how to implement it:
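As a rough configuration sketch only: the module paths follow Isaac Lab 2.x (older releases use the omni.isaac.lab namespace), and the per-env battery buffer (env.battery_level) plus both term functions are my assumptions, since Isaac Lab ships no built-in battery model:

```python
# Rough sketch of battery-aware observation/reward terms for a
# manager-based Isaac Lab task. Assumptions: Isaac Lab 2.x module paths,
# and a custom per-env battery buffer `env.battery_level` maintained by
# your environment subclass.
import torch

from isaaclab.managers import ObservationGroupCfg as ObsGroup
from isaaclab.managers import ObservationTermCfg as ObsTerm
from isaaclab.managers import RewardTermCfg as RewTerm
from isaaclab.utils import configclass


def battery_level(env) -> torch.Tensor:
    """Custom observation term: normalized charge, shape (num_envs, 1)."""
    return env.battery_level.unsqueeze(-1)


def energy_cost(env) -> torch.Tensor:
    """Custom reward term: total joint-torque magnitude, shape (num_envs,)."""
    return torch.sum(env.scene["robot"].data.applied_torque.abs(), dim=-1)


@configclass
class ObservationsCfg:
    @configclass
    class PolicyCfg(ObsGroup):
        battery = ObsTerm(func=battery_level)

    policy: PolicyCfg = PolicyCfg()


@configclass
class RewardsCfg:
    # A negative weight turns the energy cost into a penalty, steering the
    # policy toward mechanically efficient behavior that charges less often.
    effort = RewTerm(func=energy_cost, weight=-0.01)
```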
For wrapping, Isaac Lab environments are Gymnasium-compatible, so you can layer standard gymnasium.Wrapper subclasses on top.

Programming Reward State and Transfer Protocols
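A minimal sketch of both ideas at once, as a Gymnasium wrapper. The battery constants, the drain model, and the "paycheck" transfer rule are my assumptions, and the sketch uses the single-env Gymnasium interface; Isaac Lab's vectorized envs would want the batched torch equivalent:

```python
# Hedged sketch of a battery budget layered over a Gymnasium-compatible
# env: actuation drains charge, earned reward replenishes it.
import gymnasium as gym
import numpy as np


class BatteryBudgetWrapper(gym.Wrapper):
    """Drains charge with actuation, credits charge from earned reward."""

    def __init__(self, env, capacity=1.0, drain_per_torque=1e-4, pay_rate=0.01):
        super().__init__(env)
        self.capacity = capacity
        self.drain_per_torque = drain_per_torque
        self.pay_rate = pay_rate
        self.charge = capacity

    def reset(self, **kwargs):
        self.charge = self.capacity
        obs, info = self.env.reset(**kwargs)
        info["battery"] = self.charge
        return obs, info

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        # Transfer protocol: actuation effort drains the battery,
        # task earnings ("pay") replenish it.
        drain = self.drain_per_torque * float(np.sum(np.abs(action)))
        pay = self.pay_rate * max(float(reward), 0.0)
        self.charge = float(np.clip(self.charge - drain + pay, 0.0, self.capacity))
        info["battery"] = self.charge
        # A dead battery ends the episode, so wasteful motion is costly.
        terminated = terminated or self.charge <= 0.0
        return obs, reward, terminated, truncated, info
```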
This setup optimizes for less frequent charging by rewarding sustained activity.

Internal Battery Humanoid Observability
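For observability, the battery state has to be visible to the policy. In a manager-based env that is an extra observation term (as in the config sketch above); at the wrapper level, here is a sketch assuming a flat Box observation space and the BatteryBudgetWrapper above (Isaac Lab envs typically use dict observation spaces, so adapt accordingly):

```python
# Sketch: exposing the internal battery level to the policy as one extra
# observation dimension. Assumes a flat Box observation space and a
# wrapped env that maintains a `charge` attribute; both are assumptions.
import gymnasium as gym
import numpy as np


class BatteryObservationWrapper(gym.ObservationWrapper):
    def __init__(self, env):
        super().__init__(env)
        low = np.append(env.observation_space.low, 0.0)
        high = np.append(env.observation_space.high, 1.0)
        self.observation_space = gym.spaces.Box(low=low, high=high,
                                                dtype=np.float32)

    def observation(self, obs):
        # Append normalized charge so the policy can plan around its budget.
        charge = getattr(self.env, "charge", 1.0)
        return np.append(obs, np.float32(charge)).astype(np.float32)
```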
Transformer-Based Internal Tooling RL Policies
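Isaac Lab doesn't ship transformer policies as such; its RL integrations (RSL-RL, skrl, Stable-Baselines3, RL-Games) default to MLP or recurrent models, and a transformer policy is something you would register as a custom model through those libraries' extension points. A minimal PyTorch sketch over a fixed observation-history window, with all sizes arbitrary:

```python
# Minimal sketch of a transformer policy over a short observation history.
# Sizes are arbitrary; wiring this into an RL library as a custom model is
# left to that library's API.
import torch
import torch.nn as nn


class TransformerPolicy(nn.Module):
    def __init__(self, obs_dim, act_dim, history_len=16, d_model=128,
                 nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Linear(obs_dim, d_model)
        self.pos = nn.Parameter(torch.zeros(1, history_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, act_dim)

    def forward(self, obs_history):
        # obs_history: (batch, history_len, obs_dim), battery level included.
        x = self.embed(obs_history) + self.pos
        x = self.encoder(x)
        # Act from the most recent timestep's encoding.
        return torch.tanh(self.head(x[:, -1]))
```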
This framework should get you started; it's flexible enough for your energy-optimized humanoid. If you share more details (e.g., a specific robot model or code snippets), I can refine further. For hands-on material, check the Isaac Lab docs at isaac-sim.github.io/IsaacLab/.
-
https://www.youtube.com/watch?v=rrUHZKlrxms Here is an example of Boston Dynamics' recent self-swap mechanism, covering four hours of the humanoid's work across two batteries. Atlas already seems to self-replenish before a battery swap; however, the work Atlas does is not correlated with its replenishing/swap/charging costs.
-
I've found https://github.com/isaac-sim/IsaacLabEvalTasks for task-specific benchmarks. The project's challenges are finding pre-existing battery-specific policies or configurations and generalizing them across task-specific evaluations. These are necessary to coordinate with the whole-body control mechanisms instructed by the vision-language-action model.
-
https://www.youtube.com/shorts/yFaQvjIWfPg Here is a real-life experiment that could be used to benchmark humanoids.
-
Question
I'd like to implement a benchmark RL-Game with a wrapped environment, specifically for a humanoid non-stationary robot that can charge its own batteries from earnings on the job, optimizing its mechanical activity to charge less often. This is based on our lab research, found here:
https://arxiv.org/abs/2510.20636v1
I'm curious about RL-Game and wrapped-environment applicability in this scenario, relative to the internal-battery humanoid observability capabilities in the simulation configurations. Perhaps there are transformer-based internal tooling RL policies that can be accessed from this software.
Build Info
Describe the versions that you are currently using: