
Conversation

@kevinmingtarja (Collaborator) commented Dec 20, 2025

Previously, autodown would error with:

# skylet.log
AutostopEvent error: SSH private key not found: /root/.ssh/id_rsa
FileNotFoundError: SSH private key not found: /root/.ssh/id_rsa

This is because we don't copy the SSH private key file to the remote cluster for Slurm, which is intentional for the time being, as it isn't necessary to do so.

# Check if we are running inside a Slurm job (only happens with autodown,
# where the skylet will invoke terminate_instances on the remote cluster),
# where we assume SSH between nodes has been set up in each node's
# ssh config.
# TODO(kevin): Validate this assumption. Another way would be to
# mount the private key to the remote cluster, like we do with
# other clouds' API keys.
if slurm_utils.is_inside_slurm_job():
    logger.debug('Running inside a Slurm job, using machine\'s ssh config')
    ssh_private_key = None

We have this detection specifically for when the skylet tries to autodown itself: we check whether this code is being executed on the remote Slurm cluster, and if so, we don't pass the private key path to SSHCommandRunner (because the private key only exists on the API server, so the path is only valid in that context).

But the way we were checking was wrong. This PR fixes it by using a marker file instead of checking the SLURM_JOB_ID environment variable. Since SlurmCommandRunner uses SSH (not srun, for now at least) to connect to compute nodes, Slurm environment variables aren't available to the skylet, so the check always failed and autodown never happened.

The sbatch provision script now creates a .sky_slurm_cluster marker file in the cluster's home directory during provisioning.
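
For illustration, the new check could look roughly like the sketch below; the module layout and constant name are assumptions, while the marker filename and the is_inside_slurm_cluster name follow this PR.

# Sketch only: the constant name and exact location are assumed, not the PR's code.
import os

_SLURM_CLUSTER_MARKER = os.path.expanduser('~/.sky_slurm_cluster')


def is_inside_slurm_cluster() -> bool:
    """Returns True if this process runs on a provisioned Slurm cluster.

    Unlike checking SLURM_JOB_ID, the marker file is visible to the skylet
    even when it is reached over SSH and therefore has no Slurm environment
    variables set.
    """
    return os.path.exists(_SLURM_CLUSTER_MARKER)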

On top of that, we now use LocalProcessCommandRunner for the autodown case, since it is being run from inside the Slurm cluster.

# skylet.log
D 12-21 12:09:53.739 PID=1143650 instance.py:499] Running inside a Slurm cluster, using local execution
D 12-21 12:09:53.739 PID=1143650 command_runner.py:1421] Running command locally: /bin/bash -c 'true && export OMP_NUM_THREADS=1 PYTHONWARNINGS=ignore && (squeue -h --name sky-53ea-kevin-7a2eebbf -o "%T")'
D 12-21 12:09:53.759 PID=1143650 command_runner.py:1421] Running command locally: /bin/bash -c 'true && export OMP_NUM_THREADS=1 PYTHONWARNINGS=ignore && (scancel --name sky-53ea-kevin-7a2eebbf --signal TERM --full)'
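
As a rough sketch of how that selection could be wired (the class names follow SkyPilot's command_runner module, but this helper, its signature, and the import paths are illustrative, not the PR's actual code):

# Illustrative only: run Slurm CLI commands locally during autodown,
# otherwise SSH in from the API server as before.
from typing import Optional

from sky.provision.slurm import slurm_utils  # assumed import path
from sky.utils import command_runner


def _get_runner(
        node_ip: str, ssh_user: str,
        ssh_private_key: Optional[str]) -> command_runner.CommandRunner:
    if slurm_utils.is_inside_slurm_cluster():
        # Autodown path: the skylet is already on the cluster, so squeue/
        # scancel can run locally and no SSH key is needed.
        return command_runner.LocalProcessCommandRunner()
    return command_runner.SSHCommandRunner((node_ip, 22),
                                           ssh_user=ssh_user,
                                           ssh_private_key=ssh_private_key)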

Tested (run the relevant ones):

  • Code formatting: install pre-commit (auto-check on commit) or bash format.sh
  • Any manual or new tests for this PR (please specify below)
  • All smoke tests: /smoke-test (CI) or pytest tests/test_smoke.py (local)
  • Relevant individual tests: /smoke-test -k test_name (CI) or pytest tests/test_smoke.py::test_name (local)
  • Backward compatibility: /quicktest-core (CI) or pytest tests/smoke_tests/test_backward_compat.py (local)

@gemini-code-assist (Contributor)

Summary of Changes

Hello @kevinmingtarja, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical issue preventing skylet from properly performing autodown operations on Slurm clusters. The problem stemmed from an unreliable method of detecting the Slurm environment, which caused skylet to incorrectly search for an SSH private key that is intentionally not present on remote Slurm nodes. The solution involves implementing a robust marker file system to accurately determine the Slurm cluster context, thereby enabling autodown to function as expected without errors.

Highlights

  • Autodown Error Fix: Resolved an AutostopEvent error where skylet failed to autodown on Slurm clusters due to an SSH private key not being found, which is intentionally not copied to remote Slurm nodes.
  • Incorrect Slurm Context Detection: The previous method of detecting if skylet was running inside a Slurm job, using the SLURM_JOB_ID environment variable, was flawed because Slurm environment variables are not available to skylet when SlurmCommandRunner uses SSH.
  • Marker File Implementation: Introduced a new mechanism using a .sky_slurm_cluster marker file, created during provisioning, to reliably identify when skylet is operating within a Slurm cluster.
  • Refactored Detection Logic: The is_inside_slurm_job function was renamed to is_inside_slurm_cluster and updated to check for the presence of the new marker file, ensuring correct behavior for autodown.


@gemini-code-assist (Contributor) left a comment

Code Review

This pull request addresses a bug in the autodown functionality for Slurm clusters, which was failing due to an incorrect check for whether the code was executing within a Slurm job. The previous method of checking the SLURM_JOB_ID environment variable was unreliable. The new approach, which uses a marker file created during provisioning, is a more robust solution. The changes are well-contained and the accompanying test updates are appropriate. My review includes a suggestion to further improve the robustness of the marker file check and clarify its behavior in the code comments.

@kevinmingtarja (Collaborator, Author) commented Dec 21, 2025

@kevinmingtarja (Collaborator, Author) commented Dec 21, 2025

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@kevinmingtarja (Collaborator, Author) commented:

/gemini review

@gemini-code-assist (Contributor) left a comment

Code Review

This pull request addresses a bug in the Slurm autodown feature, which was failing due to an incorrect check for whether it's running inside a Slurm cluster. The fix is well-implemented, replacing the environment variable check with a more robust marker file mechanism. The introduction of a local execution mode for SlurmClient using LocalProcessCommandRunner is a clean solution for the autodown case. The code changes are clear, correct, and the accompanying test updates are appropriate. I have one suggestion to improve maintainability by refactoring duplicated code for SlurmClient instantiation.

@kevinmingtarja marked this pull request as ready for review December 21, 2025 12:46
class SlurmClient:
    """Client for Slurm control plane operations."""

    def __init__(
A Collaborator commented on the diff above:

What do you think about just adding a boolean argument is_local_execution_mode: bool = False and passing that in from sky/provision/slurm/instance.py to make the intent more explicit?
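
For reference, the suggested signature might look roughly like the sketch below; everything other than the is_local_execution_mode flag is a placeholder.

class SlurmClient:
    """Client for Slurm control plane operations."""

    def __init__(self,
                 cluster_name: str,
                 is_local_execution_mode: bool = False) -> None:
        # When True (set by the autodown path in sky/provision/slurm/
        # instance.py), use LocalProcessCommandRunner instead of SSH,
        # since the caller already runs on the Slurm cluster.
        self._cluster_name = cluster_name
        self._is_local_execution_mode = is_local_execution_mode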

@SeungjinYang (Collaborator) left a comment

One comment, take it or leave it
