Follow Me: https://x.com/WhalePiz
Telegram Group: https://t.me/Nexgenexplore
| Requirement | Details |
|---|---|
| CPU Architecture | arm64 or amd64 |
| Recommended RAM | 25 GB |
| CUDA Devices (Recommended) | RTX 3090, RTX 4090, A100, H100 |
| Python Version | Python >= 3.10 (For Mac, you may need to upgrade) |
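Before installing anything, you can quickly confirm the machine meets these requirements. A minimal check, assuming an Ubuntu/Debian host (the `nvidia-smi` line only works once NVIDIA drivers are installed):

```bash
# CPU architecture: expect x86_64 (amd64) or aarch64 (arm64)
uname -m
# Total RAM
free -h
# Python version: needs to be >= 3.10
python3 --version
# GPU model and VRAM (requires NVIDIA drivers)
nvidia-smi --query-gpu=name,memory.total --format=csv
```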
1. Install `sudo`
```bash
apt update && apt upgrade -y
apt install sudo -y
```
2. Install other dependencies
```bash
apt install screen curl iptables build-essential git wget lz4 jq make gcc nano automake autoconf tmux htop nvme-cli libgbm1 pkg-config libssl-dev libleveldb-dev tar clang bsdmainutils ncdu unzip -y
```
3. Install Python
```bash
apt install python3 python3-pip python3-venv python3-dev -y
```
4. Install Node.js and Yarn
```bash
apt update
curl -fsSL https://deb.nodesource.com/setup_22.x | bash -
apt install -y nodejs
node -v
npm install -g yarn
yarn -v
```
5. Install Yarn
```bash
curl -o- -L https://yarnpkg.com/install.sh | bash
export PATH="$HOME/.yarn/bin:$HOME/.config/yarn/global/node_modules/.bin:$PATH"
source ~/.bashrc
```
6. Create a screen session (see the note after this list for detaching and reattaching)
```bash
screen -S gensyn
```
7. Clone the repository and run the script
```bash
cd $HOME && rm -rf gensyn-testnet && git clone https://github.com/whalepiz/gensyn-testnet.git && chmod +x gensyn-testnet/gensyn.sh && ./gensyn-testnet/gensyn.sh
```
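If your SSH connection drops, the node keeps running inside the `screen` session. These are the standard `screen` key bindings and commands for getting back to it, nothing specific to this guide:

```bash
# Detach without stopping the node: press Ctrl+A, then D (inside the session)
# List existing sessions
screen -ls
# Reattach to the gensyn session
screen -r gensyn
```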
- You will get a link that looks like `slick-cars-shake.loca.lt` in the terminal output.
- Open that link in your browser and enter your VPS public IP address when prompted (see the sketch below if you need to look it up).
- Then log in with your email and return to the original terminal to continue the process.
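If you are unsure of your VPS public IP (the value the tunnel page asks for), one quick way to look it up from the server, assuming outbound HTTPS access and that these lookup services are reachable:

```bash
# Print the server's public IPv4 address
curl -4 https://ifconfig.me
# Alternative service, in case the first one is unavailable
curl -4 https://api.ipify.org
```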
Answer the prompts:
- `Would you like to push models you train in the RL swarm to the Hugging Face Hub? [y/N]` >>> Press `N` to simply join the testnet. Hugging Face needs roughly 2 GB of upload bandwidth for each model you train; if that is acceptable, press `Y` and enter your access token (see the token sketch after this list).
- `Enter the name of the model you want to use in huggingface repo/name format, or press [Enter] to use the default model.` >>> Press `Enter` for the default model, or choose one of these (models with more parameters (B) need more vRAM):
  - Gensyn/Qwen2.5-0.5B-Instruct
  - Qwen/Qwen3-0.6B
  - nvidia/AceInstruct-1.5B
  - dnotitia/Smoothie-Qwen3-1.7B
  - Gensyn/Qwen2.5-1.5B-Instruct
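If you choose to push models to the Hugging Face Hub, you will need a write-scope access token from https://huggingface.co/settings/tokens. The script asks for the token directly, but you can verify it beforehand with the Hugging Face CLI; a minimal sketch, assuming you install the `huggingface_hub` package for the CLI:

```bash
# Install the Hugging Face CLI (shipped with huggingface_hub)
pip install -U huggingface_hub
# Paste your access token when prompted
huggingface-cli login
# Confirm the token is valid and shows your username
huggingface-cli whoami
```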
- Use MobaXterm to establish a connection to your VPS.
- Once connected, transfer the `swarm.pem` file from the following directory to your local machine: `/root/rl-swarm/swarm.pem` (a command-line alternative is sketched below).
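If you prefer the command line over MobaXterm's file browser, the same copy can be done with `scp` from your local machine. A sketch with placeholder values; replace `PORT`, `USER`, and `SERVER_IP` with your VPS connection details:

```bash
# Copy swarm.pem from the VPS into the current local directory
scp -P PORT USER@SERVER_IP:/root/rl-swarm/swarm.pem .
```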
- Open Windows Explorer and search for `\\wsl.localhost` to access your Ubuntu directories.
- The primary directories are:
  - If installed under a specific username: `\\wsl.localhost\Ubuntu\home\<your_username>\rl-swarm`
  - If installed under root: `\\wsl.localhost\Ubuntu\root\rl-swarm`
- Locate the `swarm.pem` file within the `rl-swarm` folder (you can also copy it from inside WSL, as sketched below).
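From a WSL shell you can also copy the file straight to the Windows filesystem, since Windows drives are mounted under `/mnt`. A minimal sketch, assuming the root install path and a hypothetical Windows user folder; adjust both paths to match your setup:

```bash
# Copy swarm.pem from WSL to the Windows Desktop
cp /root/rl-swarm/swarm.pem /mnt/c/Users/<your_pc_username>/Desktop/
```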
1. Open Windows PowerShell and execute the following command to connect to your GPU server:
   ```
   sftp -P PORT USER@HOSTNAME
   ```
   - Replace `HOSTNAME` with your GPU server's hostname or IP address.
   - Replace `PORT` with the specific port from your server's SSH connection details.
   - The username (`USER`) might vary (e.g., `root`), depending on your server configuration.
2. After establishing the connection, you will see the `sftp>` prompt.
3. To navigate to the folder containing `swarm.pem`, use the `cd` command:
   ```
   cd /home/ubuntu/rl-swarm
   ```
4. To download the file, use the `get` command:
   ```
   get swarm.pem
   ```
   The file will be saved in the directory where you ran the `sftp` command, typically:
   - If you executed the command in PowerShell, it will be stored in `C:\Users\<your_pc_username>`.
5. Once the download is complete, type `exit` to close the connection.
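Put together, a complete transfer session looks roughly like this. The host, port, and path below are placeholders for illustration; your values and prompt output will differ:

```
sftp -P 22 root@203.0.113.10    # connect (example port and documentation IP)
sftp> cd /root/rl-swarm         # or /home/ubuntu/rl-swarm, depending on the install user
sftp> get swarm.pem             # lands in the folder where you started sftp
sftp> exit
```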
You've successfully backed up the `swarm.pem` file from your VPS, WSL, or GPU server.