diff --git a/.ipynb_checkpoints/MiXLab-checkpoint.ipynb b/.ipynb_checkpoints/MiXLab-checkpoint.ipynb
new file mode 100644
index 0000000..dd14ab9
--- /dev/null
+++ b/.ipynb_checkpoints/MiXLab-checkpoint.ipynb
@@ -0,0 +1,12730 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "colab_type": "text",
+ "id": "view-in-github"
+ },
+ "source": [
+ " "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ygDyFQvR5Gci"
+ },
+ "source": [
+ "\n",
+ " \n",
+ "\n",
+ "#
**Welcome to Mi XL ab ** "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "XU_IUOV6owRg"
+ },
+ "source": [
+ "## About MiXLab "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "lkyLut0_ntrJ"
+ },
+ "source": [
+ "MiXLab is a mix of multiple amazing colab notebooks found on the internet (mostly from github).\n",
+ "\n",
+ "The name MiXLab is inspired from this awesome 3rd party Android file manager app called MiXplorer and combined with (Google) Colab at the end, resulting in MiXLab.\n",
+ "\n",
+ "What is the aim of MiXLab, you might ask?\n",
+ "Well... educational purpose, I guess..."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "uzuVvbfSo16m"
+ },
+ "source": [
+ "## Features "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "N1eYiTlGoqaA"
+ },
+ "source": [
+ "Here's what you can do with MiXLab\n",
+ "* Mount/unmount remote storage (Google Drive / rclone).\n",
+ "* Hosted/P2P downloader.\n",
+ "* Some other useful tools such as File Manager, Remote Connection and System Monitor to monitor the VM's state."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "e-0yDs4C0HkB"
+ },
+ "source": [
+ "# ✦ *Change Log* ✦ \n",
+ "\n",
+ "Last modified: 2021-09-29 "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "3_30gF8Am-PQ"
+ },
+ "source": [
+ "2021-09-29 \n",
+ " \n",
+ "Added cell on Real-ESRGAN to download the results. \n",
+ "Changed back the default runtime type CPU only (no hardware accelerator). \n",
+ "Added a lot more options to Real-ESRGAN . \n",
+ "Removed \"custom_command\" field from Real-ESRGAN . \n",
+ "Added a temporary field \"custom_command\" to Real-ESRGAN ."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "eA2hvW0ZYn2u"
+ },
+ "source": [
+ "2021-09-28 \n",
+ " \n",
+ "Added a simple implementation of Real-ESRGAN ."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Be03jPf-0L0F"
+ },
+ "source": [
+ "2021-09-28 \n",
+ " \n",
+ "MiXLab is now using VueTorrent for the qBittorrent alternate web interface.\n",
+ "\n",
+ ">Note: there seem to be something wrong with VueTorrent not automatically redirecting user to the main page, serving the login page instead, while there is no need to login. You simply have to click on the login button and then it should take you to the main page."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "21-Wb8ywqQeJ"
+ },
+ "source": [
+ "# ✦ *Colab Stay Alive* ✦ "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "nYEj5CeCqbTY"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] Colab Stay Alive \n",
+ "# @markdown This cell runs a JS code that will automatically press the reconnect button when you got disconnected due to idle.\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import IPython\n",
+ "from IPython.display import clear_output\n",
+ "from google.colab import output\n",
+ "\n",
+ "display(IPython.display.Javascript('''\n",
+ " function ClickConnect(){\n",
+ " btn = document.querySelector(\"colab-connect-button\")\n",
+ " if (btn != null){\n",
+ " console.log(\"Clicked on the connect button\"); \n",
+ " btn.click() \n",
+ " }\n",
+ " \n",
+ " btn = document.getElementById('connect')\n",
+ " if (btn != null){\n",
+ " console.log(\"Clicked on the reconnect button\"); \n",
+ " btn.click() \n",
+ " }\n",
+ " }\n",
+ " \n",
+ "setInterval(ClickConnect,60000)\n",
+ "'''))\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "cRwNEZJmUFMg"
+ },
+ "source": [
+ "If the cell above doesn't work, try to run one of these codes below on your browser's developer tool/console.\n",
+ "\n",
+ "\n",
+ "\n",
+ ">Code 1(credit to rockyourcode)\n",
+ "function ClickConnect() {\n",
+ " console.log('Working')\n",
+ " document\n",
+ " .querySelector('#top-toolbar > colab-connect-button')\n",
+ " .shadowRoot.querySelector('#connect')\n",
+ " .click()\n",
+ "}\n",
+ "\n",
+ "setInterval(ClickConnect, 60000)
\n",
+ "\n",
+ "\n",
+ "\n",
+ "> Code 2(credit to Kavyajeet Bora on stack overflow)\n",
+ "function ClickConnect(){\n",
+ " console.log(\"Working\"); \n",
+ " document.querySelector(\"colab-toolbar-button#connect\").click() \n",
+ "}\n",
+ "setInterval(ClickConnect,60000)
\n",
+ "\n",
+ "\n",
+ "\n",
+ "> Code 3\n",
+ "function ClickConnect(){\n",
+ " console.log(\"Connnect Clicked - Start\"); \n",
+ " document.querySelector(\"#top-toolbar > colab-connect-button\").shadowRoot.querySelector(\"#connect\").click();\n",
+ " console.log(\"Connnect Clicked - End\"); \n",
+ "};\n",
+ "setInterval(ClickConnect, 60000)
\n",
+ "\n",
+ "\n",
+ "\n",
+ "> Code 4(credit to Stephane Belemkoabga on stack overflow)\n",
+ "function ClickConnect(){\n",
+ " console.log(\"Working\"); \n",
+ " document.querySelector(\"colab-connect-button\").click() \n",
+ "}\n",
+ "setInterval(ClickConnect,60000)
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "GaegjvHPPW9q"
+ },
+ "source": [
+ "# ✦ *Mount/Unmount Storage* ✦ \n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "4sXeh7Tdx1v-"
+ },
+ "source": [
+ "## Google Drive "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "LkGoo1n9PNgj"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Mount/Unmount Google Drive \n",
+ "# @markdown This cell will mount/unmount Google Drive to /content/drive/
\n",
+ "MODE = \"MOUNT\" #@param [\"MOUNT\", \"UNMOUNT\"]\n",
+ "# ================================================================ #\n",
+ "\n",
+ "from IPython.display import clear_output\n",
+ "from google.colab import drive\n",
+ "drive.mount._DEBUG = False\n",
+ "if MODE == \"MOUNT\":\n",
+ " drive.mount('/content/drive', force_remount=True)\n",
+ "elif MODE == \"UNMOUNT\":\n",
+ " try:\n",
+ " drive.flush_and_unmount()\n",
+ " except ValueError:\n",
+ " pass\n",
+ " get_ipython().system_raw(\"rm -rf /root/.config/Google/DriveFS\")\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "EgMPgxmrTCvF"
+ },
+ "outputs": [],
+ "source": [
+ "# @markdown ← Force re-mount Google Drive \n",
+ "\n",
+ "drive.mount(\"/content/drive\", force_remount=True)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "nPuXzMyawnzo"
+ },
+ "outputs": [],
+ "source": [
+ "# @markdown This cell is not needed (won't do anything if you run it and here just for reference).\n",
+ "\n",
+ "## ============================= FORM ============================= #\n",
+ "## @markdown ← Mount Google Drive (Cloud SDK) \n",
+ "## @markdown This cell will mount Google Drive to /content/downloads/
\n",
+ "## @markdown > currently there is no way to unmount the drive.\n",
+ "## ================================================================ #\n",
+ "\n",
+ "#!apt-get install -y -qq software-properties-common python-software-properties module-init-tools\n",
+ "#!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null\n",
+ "#!apt-get update -qq 2>&1 > /dev/null\n",
+ "#!apt-get -y install -qq google-drive-ocamlfuse fuse\n",
+ "#from google.colab import auth\n",
+ "#auth.authenticate_user()\n",
+ "#from oauth2client.client import GoogleCredentials\n",
+ "#creds = GoogleCredentials.get_application_default()\n",
+ "#import getpass\n",
+ "#!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL\n",
+ "#vcode = getpass.getpass()\n",
+ "#!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}\n",
+ "\n",
+ "#!mkdir -p downloads\n",
+ "#!google-drive-ocamlfuse drive downloads\n",
+ "\n",
+ "#from IPython.display import HTML, clear_output\n",
+ "\n",
+ "#clear_output()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "477G4hACPgqM"
+ },
+ "source": [
+ "## rclone "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "0VJ4VO1X8YE6"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "#@markdown ← Install rclone \n",
+ "build_version = \"stable\" #@param [\"stable\", \"beta\"]\n",
+ "\n",
+ "#@markdown ---\n",
+ "automatically_clear_cell_output = True # @param{type: \"boolean\"}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import IPython\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "\n",
+ "if build_version == \"stable\":\n",
+ "\t!curl https://rclone.org/install.sh | sudo bash\n",
+ "else:\n",
+ "\t!curl https://rclone.org/install.sh | sudo bash -s beta\n",
+ "\n",
+ "\n",
+ "try:\n",
+ "\tos.makedirs(\"/root/.config/rclone\", exist_ok=True)\n",
+ "except OSError as error:\n",
+ "\tpass\n",
+ "\n",
+ "\n",
+ "if automatically_clear_cell_output is True:\n",
+ "\tclear_output()\n",
+ "else:\n",
+ "\tpass"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "KTXERiVMIKgw"
+ },
+ "source": [
+ "### rclone 1 "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "2db3MpgeQdT9"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] rclone \n",
+ "Mode = \"Copy\" # @param [\"Move\", \"Copy\", \"Sync\", \"Verify\", \"Dedupe\", \"Clean Empty Dirs\", \"Empty Trash\"]\n",
+ "Source = \"\" # @param {type:\"string\"}\n",
+ "Destination = \"\" # @param {type:\"string\"}\n",
+ "\n",
+ "#@markdown ---\n",
+ "Extra_Arguments = \"--local-no-check-updated\" # @param {type:\"string\"}\n",
+ "COPY_SHARED_FILES = False # @param{type: \"boolean\"}\n",
+ "Compare = \"Size & Checksum\"\n",
+ "TRANSFERS, CHECKERS = 20, 20\n",
+ "THROTTLE_TPS = True\n",
+ "BRIDGE_TRANSFER = False # @param{type: \"boolean\"}\n",
+ "FAST_LIST = False # @param{type: \"boolean\"}\n",
+ "OPTIMIZE_GDRIVE = True\n",
+ "SIMPLE_LOG = True\n",
+ "RECORD_LOGFILE = False # @param{type: \"boolean\"}\n",
+ "SKIP_NEWER_FILE = False\n",
+ "SKIP_EXISTED = False\n",
+ "SKIP_UPDATE_MODTIME = False\n",
+ "ONE_FILE_SYSTEM = False\n",
+ "LOG_LEVEL = \"DEBUG\"\n",
+ "SYNC_MODE = \"Delete after transfering\"\n",
+ "SYNC_TRACK_RENAME = True\n",
+ "DEDUPE_MODE = \"Largest\"\n",
+ "USE_TRASH = True\n",
+ "DRY_RUN = False # @param{type: \"boolean\"}\n",
+ "\n",
+ "#@markdown ---\n",
+ "automatically_clear_cell_output = False # @param{type: \"boolean\"}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "\n",
+ "import IPython\n",
+ "from IPython.display import clear_output\n",
+ "from os import path as _p\n",
+ "\n",
+ "\n",
+ "if not _p.exists(\"/root/.ipython/mixlab.py\"):\n",
+ " from shlex import split as _spl\n",
+ " from subprocess import run\n",
+ " \n",
+ " shellCmd = \"wget -qq https://shirooo39.github.io/MiXLab/resources/mixlab.py \\\n",
+ " -O /root/.ipython/mixlab.py\"\n",
+ " run(_spl(shellCmd))\n",
+ "\n",
+ "\n",
+ "from datetime import datetime as _dt\n",
+ "from mixlab import (\n",
+ " displayOutput,\n",
+ " checkAvailable,\n",
+ " runSh,\n",
+ " prepareSession,\n",
+ " rcloneConfigurationPath,\n",
+ " accessSettingFile,\n",
+ " memGiB,\n",
+ ")\n",
+ "\n",
+ "\n",
+ "def populateActionArg():\n",
+ " if Mode == \"Copy\":\n",
+ " actionArg = \"copy\"\n",
+ " elif Mode == \"Sync\":\n",
+ " actionArg = \"sync\"\n",
+ " elif Mode == \"Verify\":\n",
+ " actionArg = \"check\"\n",
+ " elif Mode == \"Dedupe\":\n",
+ " actionArg = \"dedupe largest\"\n",
+ " elif Mode == \"Clean Empty Dirs\":\n",
+ " actionArg = \"rmdirs\"\n",
+ " elif Mode == \"Empty Trash\":\n",
+ " actionArg = \"delete\"\n",
+ " else:\n",
+ " actionArg = \"move\"\n",
+ "\n",
+ " return actionArg\n",
+ "\n",
+ "\n",
+ "def populateCompareArg():\n",
+ " if Compare == \"Mod-Time\":\n",
+ " compareArg = \"--ignore-size\"\n",
+ " elif Compare == \"Size\":\n",
+ " compareArg = \"--size-only\"\n",
+ " elif Compare == \"Checksum\":\n",
+ " compareArg = \"-c --ignore-size\"\n",
+ " else:\n",
+ " compareArg = \"-c\"\n",
+ "\n",
+ " return compareArg\n",
+ "\n",
+ "\n",
+ "def populateOptimizeGDriveArg():\n",
+ " return (\n",
+ " \"--buffer-size 256M \\\n",
+ " --drive-chunk-size 256M \\\n",
+ " --drive-upload-cutoff 256M \\\n",
+ " --drive-acknowledge-abuse \\\n",
+ " --drive-keep-revision-forever\"\n",
+ "\n",
+ " if OPTIMIZE_GDRIVE\n",
+ " else \"--buffer-size 128M\"\n",
+ " )\n",
+ "\n",
+ "\n",
+ "def populateGDriveCopyArg():\n",
+ " if BRIDGE_TRANSFER and memGiB() < 13:\n",
+ " global TRANSFERS, CHECKERS\n",
+ " TRANSFERS, CHECKERS = 10, 80\n",
+ " else:\n",
+ " pass\n",
+ " return \"--disable copy\" if BRIDGE_TRANSFER else \"--drive-server-side-across-configs\"\n",
+ "\n",
+ "\n",
+ "def populateStatsArg():\n",
+ " statsArg = \"--stats-one-line --stats=5s\" if SIMPLE_LOG else \"--stats=5s -P\"\n",
+ " if LOG_LEVEL != \"OFF\":\n",
+ " statsArg += \" -v\" if SIMPLE_LOG else \"-vv\"\n",
+ " elif LOG_LEVEL == \"INFO\":\n",
+ " statsArg += \" --log-level INFO\"\n",
+ " elif LOG_LEVEL == \"ERROR\":\n",
+ " statsArg += \" --log-level ERROR\"\n",
+ " else:\n",
+ " statsArg += \" --log-level DEBUG\"\n",
+ " return statsArg\n",
+ "\n",
+ "\n",
+ "def populateSyncModeArg():\n",
+ " if Mode != \"Sync\":\n",
+ " return \"\"\n",
+ " elif SYNC_MODE == \"Delete before transfering\":\n",
+ " syncModeArg = \"--delete-before\"\n",
+ " elif SYNC_MODE == \"Delete after transfering\":\n",
+ " syncModeArg = \"--delete-after\"\n",
+ " else:\n",
+ " syncModeArg = \"--delete-during\"\n",
+ " if SYNC_TRACK_RENAME:\n",
+ " syncModeArg += \" --track-renames\"\n",
+ " return syncModeArg\n",
+ "\n",
+ "\n",
+ "def populateDedupeModeArg():\n",
+ " if DEDUPE_MODE == \"Interactive\":\n",
+ " dedupeModeArg = \"--dedupe-mode interactive\"\n",
+ " elif DEDUPE_MODE == \"Skip\":\n",
+ " dedupeModeArg = \"--dedupe-mode skip\"\n",
+ " elif DEDUPE_MODE == \"First\":\n",
+ " dedupeModeArg = \"--dedupe-mode first\"\n",
+ " elif DEDUPE_MODE == \"Newest\":\n",
+ " dedupeModeArg = \"--dedupe-mode newest\"\n",
+ " elif DEDUPE_MODE == \"Oldest\":\n",
+ " dedupeModeArg = \"--dedupe-mode oldest\"\n",
+ " elif DEDUPE_MODE == \"Rename\":\n",
+ " dedupeModeArg = \"--dedupe-mode rename\"\n",
+ " else:\n",
+ " dedupeModeArg = \"--dedupe-mode largest\"\n",
+ "\n",
+ " return dedupeModeArg\n",
+ "\n",
+ "\n",
+ "def generateCmd():\n",
+ " sharedFilesArgs = (\n",
+ " \"--drive-shared-with-me --files-from /content/upload.txt --no-traverse\"\n",
+ " if COPY_SHARED_FILES\n",
+ " else \"\"\n",
+ " )\n",
+ "\n",
+ " logFileArg = f\"--log-file /content/rclone_log.txt -vv -P\"\n",
+ "\n",
+ " args = [\n",
+ " \"rclone\",\n",
+ " f\"--config {rcloneConfigurationPath}/rclone.conf\",\n",
+ " '--user-agent \"Mozilla\"',\n",
+ " populateActionArg(),\n",
+ " f'\"{Source}\"',\n",
+ " f'\"{Destination}\"' if Mode in (\"Move\", \"Copy\", \"Sync\") else \"\",\n",
+ " f\"--transfers {str(TRANSFERS)}\",\n",
+ " f\"--checkers {str(CHECKERS)}\",\n",
+ " ]\n",
+ "\n",
+ " if Mode == \"Verify\":\n",
+ " args.append(\"--one-way\")\n",
+ " elif Mode == \"Empty Trash\":\n",
+ " args.append(\"--drive-trashed-only --drive-use-trash=false\")\n",
+ " else:\n",
+ " args.extend(\n",
+ " [\n",
+ " populateGDriveCopyArg(),\n",
+ " populateSyncModeArg(),\n",
+ " populateCompareArg(),\n",
+ " populateOptimizeGDriveArg(),\n",
+ " \"-u\" if SKIP_NEWER_FILE else \"\",\n",
+ " \"--ignore-existing\" if SKIP_EXISTED else \"\",\n",
+ " \"--no-update-modtime\" if SKIP_UPDATE_MODTIME else \"\",\n",
+ " \"--one-file-system\" if ONE_FILE_SYSTEM else \"\",\n",
+ " \"--tpslimit 95 --tpslimit-burst 40\" if THROTTLE_TPS else \"\",\n",
+ " \"--fast-list\" if FAST_LIST else \"\",\n",
+ " \"--delete-empty-src-dirs\" if Mode == \"Move\" else \"\",\n",
+ " ]\n",
+ " )\n",
+ " args.extend(\n",
+ " [\n",
+ " \"-n\" if DRY_RUN else \"\",\n",
+ " populateStatsArg() if not RECORD_LOGFILE else logFileArg,\n",
+ " sharedFilesArgs,\n",
+ " Extra_Arguments,\n",
+ " ]\n",
+ " )\n",
+ "\n",
+ " return args\n",
+ "\n",
+ "\n",
+ "def executeRclone():\n",
+ " prepareSession()\n",
+ " if Source.strip() == \"\":\n",
+ " displayOutput(\"❌ The source field is empty!\")\n",
+ " return\n",
+ " if checkAvailable(\"/content/rclone_log.txt\"):\n",
+ " if not checkAvailable(\"/content/logfiles\"):\n",
+ " runSh(\"mkdir -p -m 666 /content/logfiles\")\n",
+ " job = accessSettingFile(\"job.txt\")\n",
+ " runSh(\n",
+ " f'mv /content/rclone_log.txt /content/logfiles/{job[\"title\"]}_{job[\"status\"]}_logfile.txt'\n",
+ " )\n",
+ "\n",
+ " onGoingJob = {\n",
+ " \"title\": f'{Mode}_{Source}_{Destination}_{_dt.now().strftime(\"%a-%H-%M-%S\")}',\n",
+ " \"status\": \"ongoing\",\n",
+ " }\n",
+ " accessSettingFile(\"job.txt\", onGoingJob)\n",
+ "\n",
+ " cmd = \" \".join(generateCmd())\n",
+ " runSh(cmd, output=True)\n",
+ " displayOutput(Mode, \"success\")\n",
+ "\n",
+ " onGoingJob[\"status\"] = \"finished\"\n",
+ " accessSettingFile(\"job.txt\", onGoingJob)\n",
+ "\n",
+ "executeRclone()\n",
+ "\n",
+ "\n",
+ "if automatically_clear_cell_output is True:\n",
+ "\tclear_output()\n",
+ "else:\n",
+ "\tpass"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "wkc0wCvPIUFh"
+ },
+ "source": [
+ "### rclone 2 "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "t03ZdwQ-IvPv"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] rclone \n",
+ "Mode = \"Copy\" #@param [\"Copy\", \"Move\", \"Sync\", \"Checker\", \"Deduplicate\", \"Remove Empty Directories\", \"Empty Trash\"]\n",
+ "Source = \"\" #@param {type:\"string\"}\n",
+ "Destination = \"\" #@param {type:\"string\"}\n",
+ "\n",
+ "#@markdown ---\n",
+ "#@markdown ⚙️ Global Configuration ⚙️ \n",
+ "Extra_Arguments = \"--local-no-check-updated\" #@param {type:\"string\"}\n",
+ "Compare = \"Size & Mod-Time\" #@param [\"Size & Mod-Time\", \"Size & Checksum\", \"Only Mod-Time\", \"Only Size\", \"Only Checksum\"]\n",
+ "Checkers = 10 #@param {type:\"slider\", min:1, max:40, step:1}\n",
+ "Transfers = 10 #@param {type:\"slider\", min:1, max:20, step:1}\n",
+ "Dry_Run = False #@param {type:\"boolean\"}\n",
+ "Do_not_cross_filesystem_boundaries = False\n",
+ "Do_not_update_modtime_if_files_are_identical = False #@param {type:\"boolean\"}\n",
+ "Google_Drive_optimization = False #@param {type:\"boolean\"}\n",
+ "Large_amount_of_files_optimization = False #@param {type:\"boolean\"}\n",
+ "Simple_Ouput = True #@param {type:\"boolean\"}\n",
+ "Skip_all_files_that_exist = False #@param {type:\"boolean\"}\n",
+ "Skip_files_that_are_newer_on_the_destination = False #@param {type:\"boolean\"}\n",
+ "Output_Log_File = \"OFF\" #@param [\"OFF\", \"NOTICE\", \"INFO\", \"ERROR\", \"DEBUG\"]\n",
+ "\n",
+ "#@markdown ↪️ Sync Configuration ↩️ \n",
+ "Sync_Mode = \"Delete during transfer\" #@param [\"Delete during transfer\", \"Delete before transfering\", \"Delete after transfering\"]\n",
+ "Track_Renames = False #@param {type:\"boolean\"}\n",
+ "\n",
+ "#@markdown 💞 Deduplicate Configuration 💞 \n",
+ "Deduplicate_Mode = \"Interactive\" #@param [\"Interactive\", \"Skip\", \"First\", \"Newest\", \"Oldest\", \"Largest\", \"Rename\"]\n",
+ "Deduplicate_Use_Trash = True #@param {type:\"boolean\"}\n",
+ "\n",
+ "#@markdown ---\n",
+ "automatically_clear_cell_output = False # @param{type: \"boolean\"}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "##### Importing the needed modules\n",
+ "import os\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "\n",
+ "##### Variable Declaration\n",
+ "# Optimized for Google Colaboratory\n",
+ "os.environ[\"bufferC\"] = \"--buffer-size 96M\"\n",
+ "\n",
+ "if Compare == \"Size & Checksum\":\n",
+ " os.environ[\"compareC\"] = \"-c\"\n",
+ "elif Compare == \"Only Mod-Time\":\n",
+ " os.environ[\"compareC\"] = \"--ignore-size\"\n",
+ "elif Compare == \"Only Size\":\n",
+ " os.environ[\"compareC\"] = \"--size-only\"\n",
+ "elif Compare == \"Only Checksum\":\n",
+ " os.environ[\"compareC\"] = \"-c --ignore-size\"\n",
+ "else:\n",
+ " os.environ[\"compareC\"] = \"\"\n",
+ "\n",
+ "os.environ[\"sourceC\"] = Source\n",
+ "os.environ[\"destinationC\"] = Destination\n",
+ "os.environ[\"transfersC\"] = \"--transfers \"+str(Transfers)\n",
+ "os.environ[\"checkersC\"] = \"--checkers \"+str(Checkers)\n",
+ "\n",
+ "if Skip_files_that_are_newer_on_the_destination == True:\n",
+ " os.environ[\"skipnewC\"] = \"-u\"\n",
+ "else:\n",
+ " os.environ[\"skipnewC\"] = \"\"\n",
+ " \n",
+ "if Skip_all_files_that_exist == True:\n",
+ " os.environ[\"skipexistC\"] = \"--ignore-existing\"\n",
+ "else:\n",
+ " os.environ[\"skipexistC\"] = \"\"\n",
+ " \n",
+ "if Do_not_cross_filesystem_boundaries == True:\n",
+ " os.environ[\"nocrossfilesystemC\"] = \"--one-file-system\"\n",
+ "else:\n",
+ " os.environ[\"nocrossfilesystemC\"] = \"\"\n",
+ " \n",
+ "if Do_not_update_modtime_if_files_are_identical == True:\n",
+ " os.environ[\"noupdatemodtimeC\"] = \"--no-update-modtime\"\n",
+ "else:\n",
+ " os.environ[\"noupdatemodtimeC\"] = \"\"\n",
+ "\n",
+ "if Large_amount_of_files_optimization == True:\n",
+ " os.environ[\"filesoptimizeC\"] = \"--fast-list\"\n",
+ "else:\n",
+ " os.environ[\"filesoptimizeC\"] = \"\"\n",
+ " \n",
+ "if Google_Drive_optimization == True:\n",
+ " os.environ[\"driveoptimizeC\"] = \"--drive-chunk-size 32M --drive-acknowledge-abuse --drive-keep-revision-forever\"\n",
+ "else:\n",
+ " os.environ[\"driveoptimizeC\"] = \"\"\n",
+ " \n",
+ "if Dry_Run == True:\n",
+ " os.environ[\"dryrunC\"] = \"-n\"\n",
+ "else:\n",
+ " os.environ[\"dryrunC\"] = \"\"\n",
+ " \n",
+ "if Output_Log_File != \"OFF\":\n",
+ " os.environ[\"statsC\"] = \"--log-file=/root/.rclone_log/rclone_log.txt\"\n",
+ "else:\n",
+ " if Simple_Ouput == True:\n",
+ " os.environ[\"statsC\"] = \"-v --stats-one-line --stats=5s\"\n",
+ " else:\n",
+ " os.environ[\"statsC\"] = \"-v --stats=5s\"\n",
+ " \n",
+ "if Output_Log_File == \"INFO\":\n",
+ " os.environ[\"loglevelC\"] = \"--log-level INFO\"\n",
+ "elif Output_Log_File == \"ERROR\":\n",
+ " os.environ[\"loglevelC\"] = \"--log-level ERROR\"\n",
+ "elif Output_Log_File == \"DEBUG\":\n",
+ " os.environ[\"loglevelC\"] = \"--log-level DEBUG\"\n",
+ "else:\n",
+ " os.environ[\"loglevelC\"] = \"\"\n",
+ "\n",
+ "os.environ[\"extraC\"] = Extra_Arguments\n",
+ "\n",
+ "if Sync_Mode == \"Delete during transfer\":\n",
+ " os.environ[\"syncmodeC\"] = \"--delete-during\"\n",
+ "elif Sync_Mode == \"Delete before transfering\":\n",
+ " os.environ[\"syncmodeC\"] = \"--delete-before\"\n",
+ "elif Sync_Mode == \"Delete after transfering\":\n",
+ " os.environ[\"syncmodeC\"] = \"--delete-after\"\n",
+ " \n",
+ "if Track_Renames == True:\n",
+ " os.environ[\"trackrenamesC\"] = \"--track-renames\"\n",
+ "else:\n",
+ " os.environ[\"trackrenamesC\"] = \"\"\n",
+ " \n",
+ "if Deduplicate_Mode == \"Interactive\":\n",
+ " os.environ[\"deduplicateC\"] = \"interactive\"\n",
+ "elif Deduplicate_Mode == \"Skip\":\n",
+ " os.environ[\"deduplicateC\"] = \"skip\"\n",
+ "elif Deduplicate_Mode == \"First\":\n",
+ " os.environ[\"deduplicateC\"] = \"first\"\n",
+ "elif Deduplicate_Mode == \"Newest\":\n",
+ " os.environ[\"deduplicateC\"] = \"newest\"\n",
+ "elif Deduplicate_Mode == \"Oldest\":\n",
+ " os.environ[\"deduplicateC\"] = \"oldest\"\n",
+ "elif Deduplicate_Mode == \"Largest\":\n",
+ " os.environ[\"deduplicateC\"] = \"largest\"\n",
+ "elif Deduplicate_Mode == \"Rename\":\n",
+ " os.environ[\"deduplicateC\"] = \"rename\"\n",
+ " \n",
+ "if Deduplicate_Use_Trash == True:\n",
+ " os.environ[\"deduplicatetrashC\"] = \"\"\n",
+ "else:\n",
+ " os.environ[\"deduplicatetrashC\"] = \"--drive-use-trash=false\"\n",
+ "\n",
+ "\n",
+ "##### rclone Execution\n",
+ "if Output_Log_File != \"OFF\" and Mode != \"Config\":\n",
+ " !mkdir -p -m 666 /root/.rclone_log/\n",
+ " display(HTML(\"Logging enabled, rclone will no longer display any output on the terminal. Please wait until the cell stop by itself. \"))\n",
+ "\n",
+ "if Mode == \"Copy\":\n",
+ " !rclone --config=/root/.config/rclone/rclone.conf copy \"$sourceC\" \"$destinationC\" $transfersC $checkersC $statsC $loglevelC $compareC $skipnewC $skipexistC $nocrossfilesystemC $noupdatemodtimeC $bufferC $filesoptimizeC $driveoptimizeC $dryrunC $extraC\n",
+ "elif Mode == \"Move\":\n",
+ " !rclone --config=/root/.config/rclone/rclone.conf move \"$sourceC\" \"$destinationC\" $transfersC $checkersC $statsC $loglevelC --delete-empty-src-dirs $compareC $skipnewC $skipexistC $nocrossfilesystemC $noupdatemodtimeC $bufferC $filesoptimizeC $driveoptimizeC $dryrunC $extraC\n",
+ "elif Mode == \"Sync\":\n",
+ " !rclone --config=/root/.config/rclone/rclone.conf sync \"$sourceC\" \"$destinationC\" $transfersC $checkersC $statsC $loglevelC $syncmodeC $trackrenamesC $compareC $skipnewC $skipexistC $nocrossfilesystemC $noupdatemodtimeC $bufferC $filesoptimizeC $driveoptimizeC $dryrunC $extraC\n",
+ "elif Mode == \"Checker\":\n",
+ " !rclone --config=/root/.config/rclone/rclone.conf check \"$sourceC\" \"$destinationC\" $checkersC $statsC $loglevelC $compareC $skipnewC $skipexistC $nocrossfilesystemC $noupdatemodtimeC $bufferC $filesoptimizeC $driveoptimizeC $dryrunC $extraC\n",
+ "elif Mode == \"Deduplicate\":\n",
+ " !rclone --config=/root/.config/rclone/rclone.conf dedupe \"$sourceC\" $checkersC $statsC $loglevelC --dedupe-mode $deduplicateC $deduplicatetrashC $compareC $skipnewC $skipexistC $nocrossfilesystemC $noupdatemodtimeC $bufferC $filesoptimizeC $driveoptimizeC $dryrunC $extraC\n",
+ "elif Mode == \"Remove Empty Directories\":\n",
+ " !rclone --config=/root/.config/rclone/rclone.conf rmdirs \"$sourceC\" $statsC $loglevelC $dryrunC $extraC\n",
+ "elif Mode == \"Empty Trash\":\n",
+ " !rclone --config=/root/.config/rclone/rclone.conf cleanup \"$sourceC\" $statsC $loglevelC $dryrunC $extraC\n",
+ "\n",
+ "\n",
+ "##### Log Output\n",
+ "if Output_Log_File != \"OFF\" and Mode != \"Config\":\n",
+ "\n",
+ " ##### Rename log file and output settings.\n",
+ " !mv /root/.rclone_log/rclone_log.txt /root/.rclone_log/rclone_log_$(date +%Y-%m-%d_%H.%M.%S).txt\n",
+ " with open(\"/root/.rclone_log/\" + Mode + \"_settings.txt\", \"w\") as f:\n",
+ " f.write(\"Mode: \" + Mode + \\\n",
+ " \"\\nCompare: \" + Compare + \\\n",
+ " \"\\nSource: \\\"\" + Source + \\\n",
+ " \"\\\"\\nDestination: \\\"\" + Destination + \\\n",
+ " \"\\\"\\nTransfers: \" + str(Transfers) + \\\n",
+ " \"\\nCheckers: \" + str(Checkers) + \\\n",
+ " \"\\nSkip files that are newer on the destination: \" + str(Skip_files_that_are_newer_on_the_destination) + \\\n",
+ " \"\\nSkip all files that exist: \" + str(Skip_all_files_that_exist) + \\\n",
+ " \"\\nDo not cross filesystem boundaries: \" + str(Do_not_cross_filesystem_boundaries) + \\\n",
+ " \"\\nDo not update modtime if files are identical: \" + str(Do_not_update_modtime_if_files_are_identical) + \\\n",
+ " \"\\nDry-Run: \" + str(Dry_Run) + \\\n",
+ " \"\\nOutput Log Level: \" + Output_Log_File + \\\n",
+ " \"\\nExtra Arguments: \\\"\" + Extra_Arguments + \\\n",
+ " \"\\\"\\nSync Moden: \" + Sync_Mode + \\\n",
+ " \"\\nTrack Renames: \" + str(Track_Renames) + \\\n",
+ " \"\\nDeduplicate Mode: \" + Deduplicate_Mode + \\\n",
+ " \"\\nDeduplicate Use Trash: \" + str(Deduplicate_Use_Trash))\n",
+ "\n",
+ " ##### Compressing log file.\n",
+ " !rm -f /root/rclone_log.zip\n",
+ " !zip -r -q -j -9 /root/rclone_log.zip /root/.rclone_log/\n",
+ " !rm -rf /root/.rclone_log/\n",
+ " !mkdir -p -m 666 /root/.rclone_log/\n",
+ "\n",
+ " ##### Send Log\n",
+ " if os.path.isfile(\"/root/rclone_log.zip\") == True:\n",
+ " try:\n",
+ " files.download(\"/root/rclone_log.zip\")\n",
+ " !rm -f /root/rclone_log.zip\n",
+ " display(HTML(\"Sending log to your browser... \"))\n",
+ " except:\n",
+ " !mv /root/rclone_log.zip /content/rclone_log_$(date +%Y-%m-%d_%H.%M.%S).zip\n",
+ " display(HTML(\"You can use file explorer to download the log file. \"))\n",
+ " else:\n",
+ " clear_output()\n",
+ " display(HTML(\"There is no log file. \"))\n",
+ " \n",
+ "\n",
+ "### Operation has been successfully completed.\n",
+ "if Mode != \"Config\":\n",
+ " display(HTML(\"✅ Operation has been successfully completed. \"))\n",
+ "\n",
+ "\n",
+ "##### Automatically clear terminal output if the checkbox's value on the top is set to True.\n",
+ "if automatically_clear_cell_output is True:\n",
+ " clear_output()\n",
+ "else:\n",
+ "\tpass##### Automatically clear terminal output if the checkbox's value on the top is set to True.\n",
+ "if automatically_clear_cell_output is True:\n",
+ " clear_output()\n",
+ "else:\n",
+ "\tpass"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "YSEyWbWfY9qx"
+ },
+ "source": [
+ "### Google Drive 750GB Upload Bandwidth Bypass "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Qvwz8vtgjSLM"
+ },
+ "source": [
+ "\n",
+ "Still work in progress! Use at your own risk! \n",
+ "Be sure to read everything in this block carefully. No, seriously. Read carefully. \n",
+ " "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "HBy4qgMQNm7Q"
+ },
+ "source": [
+ "**Always remember to install rclone first!** "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "UI9NTz-typuf"
+ },
+ "outputs": [],
+ "source": [
+ "#============================= FORM ============================= #\n",
+ "#@markdown ← [Clone] AutorRclone \n",
+ "#================================================================ #\n",
+ "\n",
+ "import IPython\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "!git clone https://github.com/xyou365/AutoRclone /content/tools/AutoRclone\n",
+ "!sudo pip3 install -r /content/tools/AutoRclone/requirements.txt\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Y28rhXs2a7QV"
+ },
+ "source": [
+ "\n",
+ "Since Google has removed the ability to automatically enable the GDrive API from the good old \"Quickstart\" (as of 2021-04-15), you will have to manually create a project by yourself, to get the credentials.json.\n",
+ " \n",
+ "(This means that you have to do the initial job all by yourself. This includes creating a project on the Google Cloud Platform, enabling the GDrive API, setting up the OAuth 2.0, setting up the OAuth Screen, all that stuff.)\n",
+ " \n",
+ "Click here (opens in new tab) and follow along the tutorial there.\n",
+ " "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "0AR8nQi2w9_K"
+ },
+ "outputs": [],
+ "source": [
+ "#============================= FORM ============================= #\n",
+ "#@markdown ← Upload the \"credentials.json\" File \n",
+ "#================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "\n",
+ "AutoRclone_path = \"/content/tools/AutoRclone\"\n",
+ "\n",
+ "\n",
+ "if not os.path.exists(AutoRclone_path):\n",
+ " display(HTML(\"❌ Make sure you have already run the first cell first! \"))\n",
+ "else:\n",
+ " %cd \"$AutoRclone_path\"\n",
+ "\n",
+ " from google.colab import files\n",
+ " uploaded = files.upload()\n",
+ "\n",
+ " %cd /content\n",
+ "\n",
+ " clear_output()"
+ ]
+ },
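+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The cell below is an optional, minimal sanity check on the uploaded file. It assumes credentials.json is a standard OAuth client file from the Google Cloud Console, i.e. JSON with a top-level \"installed\" or \"web\" object containing \"client_id\" and \"client_secret\"; adjust the check if your file differs. It only reads the file and reports what it finds."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form"
+ },
+ "outputs": [],
+ "source": [
+ "#============================= FORM ============================= #\n",
+ "#@markdown ← Verify the Uploaded \"credentials.json\" (Optional) \n",
+ "#@markdown > A sketch only: it assumes the file is a standard OAuth client JSON with an \"installed\" or \"web\" section.\n",
+ "#================================================================ #\n",
+ "\n",
+ "import json\n",
+ "import os\n",
+ "\n",
+ "json_path = \"/content/tools/AutoRclone/credentials.json\"\n",
+ "\n",
+ "if not os.path.exists(json_path):\n",
+ "    print(\"❌ credentials.json was not found! Upload it with the cell above first.\")\n",
+ "else:\n",
+ "    creds = None\n",
+ "    try:\n",
+ "        with open(json_path) as f:\n",
+ "            creds = json.load(f)\n",
+ "    except ValueError:\n",
+ "        pass\n",
+ "\n",
+ "    if not isinstance(creds, dict):\n",
+ "        print(\"❌ The file exists but is not a valid JSON object. Re-download it from the Google Cloud Console.\")\n",
+ "    else:\n",
+ "        # Standard OAuth client files keep the client under \"installed\" or \"web\".\n",
+ "        client = creds.get(\"installed\") or creds.get(\"web\") or {}\n",
+ "        if \"client_id\" in client and \"client_secret\" in client:\n",
+ "            print(\"✅ credentials.json looks like a valid OAuth client file.\")\n",
+ "        else:\n",
+ "            print(\"❌ The JSON is missing the expected client section. Double-check the file.\")"
+ ]
+ },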
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "pqe_u-WjESWe"
+ },
+ "source": [
+ "TO DO: Add \"remove token\" to be able to re-authorize with different account."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "UFQoYxRKAclf"
+ },
+ "source": [
+ "#### Generate Project(s) and Service Account(s) "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "dQsFZnNa8qN4"
+ },
+ "outputs": [],
+ "source": [
+ "#============================= FORM ============================= #\n",
+ "#@markdown ← Generate Service Account(s) on Existing Project(s) \n",
+ "#@markdown > This cell will generate the Service Accounts on ALL existing project(s)! Let's say you currenly have 2 projects, then the number of service accounts will be created is 200 (100 per project). To avoid any unwanted things like messing up your current project, it is highly recommended to run the cell below instead.\n",
+ "#================================================================ #\n",
+ "\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "\n",
+ "AutoRclone_path = \"/content/tools/AutoRclone\"\n",
+ "json_path = \"/content/tools/AutoRclone/credentials.json\"\n",
+ "\n",
+ "\n",
+ "if not os.path.exists(AutoRclone_path):\n",
+ " display(HTML(\"❌ Make sure you have already run the first cell first! \"))\n",
+ "elif os.path.exists(AutoRclone_path) and not os.path.exists(json_path):\n",
+ " display(HTML(\"❌ Unable to locate the credentials.json file! Please upload it first! \"))\n",
+ "else:\n",
+ " %cd /content/tools/AutoRclone\n",
+ " !python3 gen_sa_accounts.py --quick-setup -1\n",
+ " %cd /content\n",
+ "\n",
+ " clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "n1OhWkE8Flds"
+ },
+ "outputs": [],
+ "source": [
+ "#============================= FORM ============================= #\n",
+ "#@markdown ← Generate New Project(s) and Service Account(s) \n",
+ "\n",
+ "the_amount_of_project_to_generate = 1 #@param {type:\"slider\", min:1, max:10, step:1}\n",
+ "#@markdown > To avoid any unwanted things like messing up your current project, this cell will generate a NEW project instead, on the Google Cloud Platform, based on the number specified by the slider. It will also (trying to) enable the needed API(s) and create the Service Accounts. The number of Service Account created per project is 100. That is a lot. So the calculation here is 100 x 750GB = 7500GB or 7.5TB worth of upload bandwidth. There could be a chance that Google will notice your action. You obviously don't want that, right? Well... just don't be a glutton and slide the slider all the way to the right and you should be safe to go. (Realistically speaking though, 7.5TB is a lot of upload bandwidth. Even 750 x 5 should be sufficient enough... not to mention the limitation is just a day and will recharge after 24 hours).\n",
+ "#================================================================ #\n",
+ "\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "\n",
+ "AutoRclone_path = \"/content/tools/AutoRclone\"\n",
+ "json_path = \"/content/tools/AutoRclone/credentials.json\"\n",
+ "\n",
+ "\n",
+ "if not os.path.exists(AutoRclone_path):\n",
+ " display(HTML(\"❌ Make sure you have already run the first cell first! \"))\n",
+ "elif os.path.exists(AutoRclone_path) and not os.path.exists(json_path):\n",
+ " display(HTML(\"❌ Unable to locate the credentials.json file! Please upload it first! \"))\n",
+ "else:\n",
+ " %cd /content/tools/AutoRclone\n",
+ " !python3 gen_sa_accounts.py --quick-setup \"$the_amount_of_project_to_generate\" --new-only\n",
+ " %cd /content\n",
+ "\n",
+ " clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "k8UlN_AeTZqs"
+ },
+ "outputs": [],
+ "source": [
+ "#============================= FORM ============================= #\n",
+ "#@markdown ← Download the Service Account Keys (Optional) \n",
+ "Project_ID = \"\" #@param {type:\"string\"}\n",
+ "#@markdown > After you have generated the project(s) and the service account(s) using one one the cell above, the service account keys should be automatically downloaded. You can still run this cell to manually do it yourself, or if you want to download keys from a specific project.\n",
+ "#================================================================ #\n",
+ "\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "\n",
+ "AutoRclone_path = \"/content/tools/AutoRclone\"\n",
+ "json_path = \"/content/tools/AutoRclone/credentials.json\"\n",
+ "\n",
+ "\n",
+ "if not os.path.exists(AutoRclone_path):\n",
+ " display(HTML(\"❌ Make sure you have already run the first cell first! \"))\n",
+ "elif os.path.exists(AutoRclone_path) and not os.path.exists(json_path):\n",
+ " display(HTML(\"❌ Unable to locate the credentials.json file! Please upload it first! \"))\n",
+ "else:\n",
+ " %cd /content/tools/AutoRclone\n",
+ " !python3 gen_sa_accounts.py --download-keys \"$Project_ID\"\n",
+ " %cd /content\n",
+ "\n",
+ " clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "8TsnaCxSV-9G"
+ },
+ "outputs": [],
+ "source": [
+ "#============================= FORM ============================= #\n",
+ "#@markdown ← Clear the \"accounts\" Folder (Optional) \n",
+ "#@markdown > If you think the \"accounts\" folder is cluttered, feel free to run this cell and then run the cell above this to re-download the service account keys.\n",
+ "#================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import shutil\n",
+ "\n",
+ "accounts_path = \"/content/tools/AutoRclone/accounts\"\n",
+ "\n",
+ "if os.path.exists(accounts_path) and os.path.isdir(accounts_path):\n",
+ " shutil.rmtree(accounts_path)\n",
+ " os.makedirs(accounts_path)\n",
+ "else:\n",
+ " os.makedirs(accounts_path)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "mOjIsl60XBvw"
+ },
+ "outputs": [],
+ "source": [
+ "#============================= FORM ============================= #\n",
+ "#@markdown ← Export the Email Addresses from the JSON Files to a Text File \n",
+ "input_path = \"/content/tools/AutoRclone/accounts\" #@param {type:\"string\"}\n",
+ "#@markdown > Path to the folder which contain the Service Account JSON files.\n",
+ "#output_name = \"\" #@param {type:\"string\"}\n",
+ "#output_path = \"\" #@param {type:\"string\"}\n",
+ "##@markdown > If both fields are empty, the default name and path for the output file will be used. Name = service-account-emails.txt Path = /content\n",
+ "#================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "\n",
+ "#if output_name and output_path == \"\":\n",
+ "# output_name = \"service-account-emails\"\n",
+ "# output_path = \"/content\"\n",
+ "#elif output_name == \"\" and not output_path == \"\":\n",
+ "# output_name = \"service-account-emails\"\n",
+ "#elif not output_name == \"\" and output_path == \"\":\n",
+ "# output_path = \"/content\"\n",
+ "\n",
+ "\n",
+ "if input_path == \"\":\n",
+ " display(HTML(\"❌ The input_path field is empty! \"))\n",
+ "else:\n",
+ " if not os.path.exists(input_path):\n",
+ " display(HTML(\"❌ The path you have entered does not exist! \"))\n",
+ " elif os.path.exists(input_path) and os.path.isfile(input_path):\n",
+ " display(HTML(\"❌ The input_path is not a folder! \"))\n",
+ " elif os.path.exists(input_path) and os.path.isdir(input_path):\n",
+ " %cd \"$input_path\"\n",
+ " !grep -oPh '\"client_email\": \"\\K[^\"]+' *.json > /content/service_account_emails.txt\n",
+ " #!grep -oPh '\"client_email\": \"\\K[^\"]+' *.json > \"$output_path\"/\"$output_name\".txt\n",
+ " %cd /content\n",
+ "\n",
+ " clear_output()\n",
+ "\n",
+ " display(HTML(\"✅ The output is saved in /content/service_account_emails.txt \"))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "l-Sbt9djBtpe"
+ },
+ "outputs": [],
+ "source": [
+ "#============================= FORM ============================= #\n",
+ "#@markdown ← Bulk Rename the Service Account Keys (Optional) \n",
+ "service_account_keys_path = \"/content/tools/AutoRclone/accounts\" #@param {type:\"string\"}\n",
+ "rename_prefix = \"\" #@param {type:\"string\"}\n",
+ "#@markdown > If the rename_prefix field is empty, the default prefix will be given: service_account_0 to 100.\n",
+ "#================================================================ #\n",
+ "\n",
+ "import os\n",
+ "\n",
+ "if rename_prefix == \"\":\n",
+ " rename_prefix = \"service_account_\"\n",
+ "else:\n",
+ " rename_prefix = rename_prefix\n",
+ "\n",
+ "def main():\n",
+ " for count, filename in enumerate(os.listdir(service_account_keys_path)):\n",
+ " destination = rename_prefix + str(count) + \".json\"\n",
+ " source = service_account_keys_path + \"/\" + filename\n",
+ " destination = service_account_keys_path + \"/\" + destination\n",
+ " \n",
+ " # rename() function will\n",
+ " # rename all the files\n",
+ " os.rename(source, destination)\n",
+ " \n",
+ "if __name__ == '__main__':\n",
+ " main()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "g5HCVqRNaj4Q"
+ },
+ "outputs": [],
+ "source": [
+ "#============================= FORM ============================= #\n",
+ "#@markdown ← Bulk Add the Service Accounts into a Team Drive (Optional) \n",
+ "Team_Drive_ID = \"\" #@param {type:\"string\"}\n",
+ "#@markdown > If this cell does not work or maybe not doing anything, simply create a Google Group (click here (opens in new tab)) and add all, if not, a number of the service accounts into that group and then on the Team Drive, just invite over the group's email into the Team Drive. The group's email should look something like this: group-name@googlegroups.com\n",
+ "#================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "if not os.path.exists(\"/content/tools/AutoRclone/add_to_team_drive.py\"):\n",
+ " display(HTML(\"❌ Unable to locate the required script! Make sure you have already run the cell [Clone] AutoRclone first! \"))\n",
+ "else:\n",
+ " if Team_Drive_ID == \"\":\n",
+ " display(HTML(\"❌ The Team_Drive_ID field is empty! \"))\n",
+ " elif not Team_Drive_ID == \"\":\n",
+ " %cd /content/tools/AutoRclone\n",
+ " !python3 add_to_team_drive.py -d \"Team_Drive_ID\"\n",
+ " %cd /content\n",
+ "\n",
+ " clear_output()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "c1N141ZcEdwd"
+ },
+ "source": [
+ "#### Perform the Task "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "eF7Wmr7unSD5"
+ },
+ "outputs": [],
+ "source": [
+ "#============================= FORM ============================= #\n",
+ "#@markdown ← [Start] Method 1 \n",
+ "Source = \"\" #@param {type:\"string\"}\n",
+ "Destination = \"\" #@param {type:\"string\"}\n",
+ "#@markdown > I'm pretty sure this only works between Team Drive to Team Drive, but your mileage may vary.\n",
+ "#================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "if not os.path.exists(\"/content/tools/AutoRclone/rclone_sa_magic.py\"):\n",
+ " display(HTML(\"❌ Unable to locate the required script! Make sure you have already run the cell [Clone] AutoRclone first! \"))\n",
+ "else:\n",
+ " if Source is \"\" and not Destination is \"\":\n",
+ " display(HTML(\"❌ The Source field is empty! \"))\n",
+ " elif not Source is \"\" and Destination is \"\":\n",
+ " display(HTML(\"❌ The Destination field is empty! \"))\n",
+ " elif Source is \"\" and Destination is \"\":\n",
+ " display(HTML(\"❌ Both of the fields above are empty! \"))\n",
+ " else:\n",
+ " %cd /content/tools/AutoRclone\n",
+ " !python3 rclone_sa_magic.py -s \"$Source\" -d \"$Destination\" -b 1 -e 600\n",
+ " %cd /content\n",
+ "\n",
+ " clear_output()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "O0NHwsI_-d3W"
+ },
+ "source": [
+ "### rclone Configuration "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "PDc8KdYNQ2s-"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] rclone WebUI Configuration \n",
+ "# @markdown >rclone WebUI Default CredentialUsername: userPassword: pass\n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, signal, random, string, urllib.request, time\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "runW = get_ipython()\n",
+ "\n",
+ "if not os.path.exists(\"/root/.ipython/mixlab.py\"):\n",
+ " from shlex import split as _spl\n",
+ " from subprocess import run\n",
+ "\n",
+ " shellCmd = \"wget -qq https://shirooo39.github.io/MiXLab/resources/mixlab.py \\\n",
+ " -O /root/.ipython/mixlab.py\"\n",
+ " run(_spl(shellCmd))\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " displayUrl,\n",
+ " findProcess,\n",
+ " CWD,\n",
+ " textAn,\n",
+ " checkAvailable,\n",
+ " displayOutput,\n",
+ " prepareSession,\n",
+ " rcloneConfigurationPath,\n",
+ " accessSettingFile,\n",
+ " memGiB,\n",
+ ")\n",
+ "\n",
+ "loadingAn()\n",
+ "prepareSession()\n",
+ "\n",
+ "pid = findProcess(\"rclone\", \"rcd\", isPid=True)\n",
+ "\n",
+ "try:\n",
+ " os.kill(int(pid), signal.SIGTERM)\n",
+ "except TypeError:\n",
+ " pass\n",
+ " \n",
+ "cmd = \"rclone rcd --rc-web-gui --rc-addr :5572\" \\\n",
+ " \" --rc-serve\" \\\n",
+ " \" --rc-user=user --rc-pass=pass\" \\\n",
+ " \" --rc-no-auth\" \\\n",
+ " rf\" --config {rcloneConfigurationPath}/rclone.conf\" \\\n",
+ " ' --user-agent \"Mozilla\"' \\\n",
+ " ' --transfers 16' \\\n",
+ " \" &\"\n",
+ "\n",
+ "runSh(cmd, shell=True)\n",
+ "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['rcloneWebUI', 5572, 'http']], 'REGION.lower', [f\"{HOME}/.ngrok2/rcloneWebUI.yml\", 4099]).start('rcloneWebUI', displayB=False)\n",
+ "clear_output()\n",
+ "displayUrl(Server, pNamU='rclone WebUI : ')"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "5HURZQEZQ6pT"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] rclone CLI Configuration \n",
+ "# @markdown Run this cell to create and/or edit an rclone configuration.
\n",
+ "# @markdown > After you have created a configuration, download the configuration file.In the next time you want to mount an rclone drive, simply import the configuration file.\n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\" #\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "# @markdown ---\n",
+ "automatically_clear_cell_output = True # @param{type: \"boolean\"}\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, urllib.request, IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "runW = get_ipython()\n",
+ "\n",
+ "if not os.path.exists(\"/root/.ipython/mixlab.py\"):\n",
+ " from shlex import split as _spl\n",
+ " from subprocess import run\n",
+ "\n",
+ " shellCmd = \"wget -qq https://shirooo39.github.io/MiXLab/resources/mixlab.py \\\n",
+ " -O /root/.ipython/mixlab.py\"\n",
+ " run(_spl(shellCmd))\n",
+ "\n",
+ "from mixlab import (\n",
+ " prepareSession,\n",
+ " rcloneConfigurationPath,\n",
+ " runSh,\n",
+ " PortForward_wrapper\n",
+ ")\n",
+ "\n",
+ "import codecs, contextlib, locale, os, pty, select, signal, subprocess, sys, termios, time\n",
+ "from IPython.utils import text\n",
+ "import six\n",
+ "from google.colab import _ipython\n",
+ "from google.colab import _message\n",
+ "from google.colab.output import _tags\n",
+ "\n",
+ "# Linux read(2) limits to 0x7ffff000 so stay under that for clarity.\n",
+ "_PTY_READ_MAX_BYTES_FOR_TEST = 2**20 # 1MB\n",
+ "\n",
+ "_ENCODING = 'UTF-8'\n",
+ "\n",
+ "class ShellResult(object):\n",
+ " \"\"\"Result of an invocation of the shell magic.\n",
+ "\n",
+ " Note: This is intended to mimic subprocess.CompletedProcess, but has slightly\n",
+ " different characteristics, including:\n",
+ " * CompletedProcess has separate stdout/stderr properties. A ShellResult\n",
+ " has a single property containing the merged stdout/stderr stream,\n",
+ " providing compatibility with the existing \"!\" shell magic (which this is\n",
+ " intended to provide an alternative to).\n",
+ " * A custom __repr__ method that returns output. When the magic is invoked as\n",
+ " the only statement in the cell, Python prints the string representation by\n",
+ " default. The existing \"!\" shell magic also returns output.\n",
+ " \"\"\"\n",
+ "\n",
+ " def __init__(self, args, returncode, command_output):\n",
+ " self.args = args\n",
+ " self.returncode = returncode\n",
+ " self.output = command_output\n",
+ "\n",
+ " def check_returncode(self):\n",
+ " if self.returncode:\n",
+ " raise subprocess.CalledProcessError(\n",
+ " returncode=self.returncode, cmd=self.args, output=self.output)\n",
+ "\n",
+ " def _repr_pretty_(self, p, cycle): # pylint:disable=unused-argument\n",
+ " # Note: When invoking the magic and not assigning the result\n",
+ " # (e.g. %shell echo \"foo\"), Python's default semantics will be used and\n",
+ " # print the string representation of the object. By default, this will\n",
+ " # display the __repr__ of ShellResult. Suppress this representation since\n",
+ " # the output of the command has already been displayed to the output window.\n",
+ " if cycle:\n",
+ " raise NotImplementedError\n",
+ "\n",
+ "\n",
+ "def _configure_term_settings(pty_fd):\n",
+ " term_settings = termios.tcgetattr(pty_fd)\n",
+ " # ONLCR transforms NL to CR-NL, which is undesirable. Ensure this is disabled.\n",
+ " # http://man7.org/linux/man-pages/man3/termios.3.html\n",
+ " term_settings[1] &= ~termios.ONLCR\n",
+ "\n",
+ " # ECHOCTL echoes control characters, which is undesirable.\n",
+ " term_settings[3] &= ~termios.ECHOCTL\n",
+ "\n",
+ " termios.tcsetattr(pty_fd, termios.TCSANOW, term_settings)\n",
+ "\n",
+ "\n",
+ "def _run_command(cmd, clear_streamed_output):\n",
+ " \"\"\"Calls the shell command, forwarding input received on the stdin_socket.\"\"\"\n",
+ " locale_encoding = locale.getpreferredencoding()\n",
+ " if locale_encoding != _ENCODING:\n",
+ " raise NotImplementedError(\n",
+ " 'A UTF-8 locale is required. Got {}'.format(locale_encoding))\n",
+ "\n",
+ " parent_pty, child_pty = pty.openpty()\n",
+ " _configure_term_settings(child_pty)\n",
+ "\n",
+ " epoll = select.epoll()\n",
+ " epoll.register(\n",
+ " parent_pty,\n",
+ " (select.EPOLLIN | select.EPOLLOUT | select.EPOLLHUP | select.EPOLLERR))\n",
+ "\n",
+ " try:\n",
+ " temporary_clearer = _tags.temporary if clear_streamed_output else _no_op\n",
+ "\n",
+ " with temporary_clearer(), _display_stdin_widget(\n",
+ " delay_millis=500) as update_stdin_widget:\n",
+ " # TODO(b/115531839): Ensure that subprocesses are terminated upon\n",
+ " # interrupt.\n",
+ " p = subprocess.Popen(\n",
+ " cmd,\n",
+ " shell=True,\n",
+ " executable='/bin/bash',\n",
+ " stdout=child_pty,\n",
+ " stdin=child_pty,\n",
+ " stderr=child_pty,\n",
+ " close_fds=True)\n",
+ " # The child PTY is only needed by the spawned process.\n",
+ " os.close(child_pty)\n",
+ "\n",
+ " return _monitor_process(parent_pty, epoll, p, cmd, update_stdin_widget)\n",
+ " finally:\n",
+ " epoll.close()\n",
+ " os.close(parent_pty)\n",
+ "\n",
+ "\n",
+ "class _MonitorProcessState(object):\n",
+ "\n",
+ " def __init__(self):\n",
+ " self.process_output = six.StringIO()\n",
+ " self.is_pty_still_connected = True\n",
+ "\n",
+ "\n",
+ "def _monitor_process(parent_pty, epoll, p, cmd, update_stdin_widget):\n",
+ " \"\"\"Monitors the given subprocess until it terminates.\"\"\"\n",
+ " state = _MonitorProcessState()\n",
+ "\n",
+ " # A single UTF-8 character can span multiple bytes. os.read returns bytes and\n",
+ " # could return a partial byte sequence for a UTF-8 character. Using an\n",
+ " # incremental decoder is incrementally fed input bytes and emits UTF-8\n",
+ " # characters.\n",
+ " decoder = codecs.getincrementaldecoder(_ENCODING)()\n",
+ "\n",
+ " num_interrupts = 0\n",
+ " echo_status = None\n",
+ " while True:\n",
+ " try:\n",
+ " result = _poll_process(parent_pty, epoll, p, cmd, decoder, state)\n",
+ " if result is not None:\n",
+ " return result\n",
+ " term_settings = termios.tcgetattr(parent_pty)\n",
+ " new_echo_status = bool(term_settings[3] & termios.ECHO)\n",
+ " if echo_status != new_echo_status:\n",
+ " update_stdin_widget(new_echo_status)\n",
+ " echo_status = new_echo_status\n",
+ " except KeyboardInterrupt:\n",
+ " try:\n",
+ " num_interrupts += 1\n",
+ " if num_interrupts == 1:\n",
+ " p.send_signal(signal.SIGINT)\n",
+ " elif num_interrupts == 2:\n",
+ " # Process isn't responding to SIGINT and user requested another\n",
+ " # interrupt. Attempt to send SIGTERM followed by a SIGKILL if the\n",
+ " # process doesn't respond.\n",
+ " p.send_signal(signal.SIGTERM)\n",
+ " time.sleep(0.5)\n",
+ " if p.poll() is None:\n",
+ " p.send_signal(signal.SIGKILL)\n",
+ " except KeyboardInterrupt:\n",
+ " # Any interrupts that occur during shutdown should not propagate.\n",
+ " pass\n",
+ "\n",
+ " if num_interrupts > 2:\n",
+ " # In practice, this shouldn't be possible since\n",
+ " # SIGKILL is quite effective.\n",
+ " raise\n",
+ "\n",
+ "\n",
+ "def _poll_process(parent_pty, epoll, p, cmd, decoder, state):\n",
+ " \"\"\"Polls the process and captures / forwards input and output.\"\"\"\n",
+ "\n",
+ " terminated = p.poll() is not None\n",
+ " if terminated:\n",
+ " termios.tcdrain(parent_pty)\n",
+ " # We're no longer interested in write events and only want to consume any\n",
+ " # remaining output from the terminated process. Continuing to watch write\n",
+ " # events may cause early termination of the loop if no output was\n",
+ " # available but the pty was ready for writing.\n",
+ " epoll.modify(parent_pty,\n",
+ " (select.EPOLLIN | select.EPOLLHUP | select.EPOLLERR))\n",
+ "\n",
+ " output_available = False\n",
+ "\n",
+ " events = epoll.poll()\n",
+ " input_events = []\n",
+ " for _, event in events:\n",
+ " if event & select.EPOLLIN:\n",
+ " output_available = True\n",
+ " raw_contents = os.read(parent_pty, _PTY_READ_MAX_BYTES_FOR_TEST)\n",
+ " import re\n",
+ " decoded_contents = re.sub(r\"http:\\/\\/127.0.0.1:53682\", Server[\"url\"], \n",
+ " decoder.decode(raw_contents))\n",
+ " sys.stdout.write(decoded_contents)\n",
+ " state.process_output.write(decoded_contents)\n",
+ "\n",
+ " if event & select.EPOLLOUT:\n",
+ " # Queue polling for inputs behind processing output events.\n",
+ " input_events.append(event)\n",
+ "\n",
+ " # PTY was disconnected or encountered a connection error. In either case,\n",
+ " # no new output should be made available.\n",
+ " if (event & select.EPOLLHUP) or (event & select.EPOLLERR):\n",
+ " state.is_pty_still_connected = False\n",
+ "\n",
+ " for event in input_events:\n",
+ " # Check to see if there is any input on the stdin socket.\n",
+ " # pylint: disable=protected-access\n",
+ " input_line = _message._read_stdin_message()\n",
+ " # pylint: enable=protected-access\n",
+ " if input_line is not None:\n",
+ " # If a very large input or sequence of inputs is available, it's\n",
+ " # possible that the PTY buffer could be filled and this write call\n",
+ " # would block. To work around this, non-blocking writes and keeping\n",
+ " # a list of to-be-written inputs could be used. Empirically, the\n",
+ " # buffer limit is ~12K, which shouldn't be a problem in most\n",
+ " # scenarios. As such, optimizing for simplicity.\n",
+ " input_bytes = bytes(input_line.encode(_ENCODING))\n",
+ " os.write(parent_pty, input_bytes)\n",
+ "\n",
+ " # Once the process is terminated, there still may be output to be read from\n",
+ " # the PTY. Wait until the PTY has been disconnected and no more data is\n",
+ " # available for read. Simply waiting for disconnect may be insufficient if\n",
+ " # there is more data made available on the PTY than we consume in a single\n",
+ " # read call.\n",
+ " if terminated and not state.is_pty_still_connected and not output_available:\n",
+ " sys.stdout.flush()\n",
+ " command_output = state.process_output.getvalue()\n",
+ " return ShellResult(cmd, p.returncode, command_output)\n",
+ "\n",
+ " if not output_available:\n",
+ " # The PTY is almost continuously available for reading input to provide\n",
+ " # to the underlying subprocess. This means that the polling loop could\n",
+ " # effectively become a tight loop and use a large amount of CPU. Add a\n",
+ " # slight delay to give resources back to the system while monitoring the\n",
+ " # process.\n",
+ " # Skip this delay if we read output in the previous loop so that a partial\n",
+ " # read doesn't unnecessarily sleep before reading more output.\n",
+ " # TODO(b/115527726): Rather than sleep, poll for incoming messages from\n",
+ " # the frontend in the same poll as for the output.\n",
+ " time.sleep(0.1)\n",
+ "\n",
+ "\n",
+ "@contextlib.contextmanager\n",
+ "def _display_stdin_widget(delay_millis=0):\n",
+ " \"\"\"Context manager that displays a stdin UI widget and hides it upon exit.\n",
+ "\n",
+ " Args:\n",
+ " delay_millis: Duration (in milliseconds) to delay showing the widget within\n",
+ " the UI.\n",
+ "\n",
+ " Yields:\n",
+ " A callback that can be invoked with a single argument indicating whether\n",
+ " echo is enabled.\n",
+ " \"\"\"\n",
+ " shell = _ipython.get_ipython()\n",
+ " display_args = ['cell_display_stdin', {'delayMillis': delay_millis}]\n",
+ " _message.blocking_request(*display_args, parent=shell.parent_header)\n",
+ "\n",
+ " def echo_updater(new_echo_status):\n",
+ " # Note: Updating the echo status uses colab_request / colab_reply on the\n",
+ " # stdin socket. Input provided by the user also sends messages on this\n",
+ " # socket. If user input is provided while the blocking_request call is still\n",
+ " # waiting for a colab_reply, the input will be dropped per\n",
+ " # https://github.com/googlecolab/colabtools/blob/56e4dbec7c4fa09fad51b60feb5c786c69d688c6/google/colab/_message.py#L100.\n",
+ " update_args = ['cell_update_stdin', {'echo': new_echo_status}]\n",
+ " _message.blocking_request(*update_args, parent=shell.parent_header)\n",
+ "\n",
+ " yield echo_updater\n",
+ "\n",
+ " hide_args = ['cell_remove_stdin', {}]\n",
+ " _message.blocking_request(*hide_args, parent=shell.parent_header)\n",
+ "\n",
+ "\n",
+ "@contextlib.contextmanager\n",
+ "def _no_op():\n",
+ " yield\n",
+ "\n",
+ "prepareSession()\n",
+ "\n",
+ "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['rcloneConfiguration', 53682, 'http']], 'REGION.lower', [f\"{HOME}/.ngrok2/rcloneConfiguration.yml\", 4074]).start('rcloneConfiguration', displayB=False, v=False)\n",
+ "\n",
+ "printData = \"\"\"\n",
+ "Before finishing the configuration, you will be redirected to an address.\n",
+ "Replace the address http://127.0.0.0:53682 with {}\"\"\".format(Server['url'])\n",
+ "print(printData)\n",
+ "display(HTML('(Click here to see how to do it)'))\n",
+ "print(f\"{Server['url']}\", end=\"\\n\\n\")\n",
+ "_run_command(f\"rclone config --config {rcloneConfigurationPath}/rclone.conf\", False)\n",
+ "\n",
+ "\n",
+ "if automatically_clear_cell_output is True:\n",
+ "\tclear_output()\n",
+ "else:\n",
+ "\tpass"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "qakuMVVjQlGU"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Mount/Unmount rclone Drive (Optional) \n",
+ "# @markdown Mount a remote drive as a local drive on a mountpoint.\n",
+ "# @markdown ---\n",
+ "Cache_Directory = \"DISK\" #@param [\"RAM\", \"DISK\"]\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "from IPython.display import HTML, clear_output\n",
+ "import uuid\n",
+ "import ipywidgets as widgets\n",
+ "from google.colab import output\n",
+ "import re\n",
+ "\n",
+ "if not os.path.exists(\"/root/.ipython/mixlab.py\"):\n",
+ " from shlex import split as _spl\n",
+ " from subprocess import run\n",
+ "\n",
+ " shellCmd = \"wget -qq https://shirooo39.github.io/MiXLab/resources/mixlab.py \\\n",
+ " -O /root/.ipython/mixlab.py\"\n",
+ " run(_spl(shellCmd))\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " prepareSession,\n",
+ " rcloneConfigurationPath,\n",
+ ")\n",
+ "\n",
+ "class MakeButton(object):\n",
+ " def __init__(self, title, callback, style):\n",
+ " self._title = title\n",
+ " self._callback = callback\n",
+ " self._style = style\n",
+ " def _repr_html_(self):\n",
+ " callback_id = 'button-' + str(uuid.uuid4())\n",
+ " output.register_callback(callback_id, self._callback)\n",
+ " if self._style != \"\":\n",
+ " style_html = \"p-Widget jupyter-widgets jupyter-button widget-button mod-\" + self._style\n",
+ " else:\n",
+ " style_html = \"p-Widget jupyter-widgets jupyter-button widget-button\"\n",
+ " template = \"\"\"{title} \n",
+ " \"\"\"\n",
+ " html = template.format(title=self._title, callback_id=callback_id, style_html=style_html)\n",
+ " return html\n",
+ " \n",
+ "def ShowAC():\n",
+ " clear_output(wait=True)\n",
+ " display(\n",
+ " widgets.HBox(\n",
+ " [widgets.VBox(\n",
+ " [widgets.HTML(\n",
+ " '''\n",
+ " Available drive to mount/unmount: \n",
+ " '''\n",
+ " ),\n",
+ " mountNam]\n",
+ " )\n",
+ " ]\n",
+ " )\n",
+ " )\n",
+ " \n",
+ " display(HTML(\" \"), MakeButton(\"Mount\", MountCMD, \"primary\"),\n",
+ " MakeButton(\"Unmount\", unmountCMD, \"danger\"))\n",
+ "\n",
+ "prepareSession()\n",
+ "content = open(f\"{rcloneConfigurationPath}/rclone.conf\").read()\n",
+ "avCon = re.findall(r\"^\\[(.+)\\]$\", content, re.M)\n",
+ "mountNam = widgets.Dropdown(options=avCon)\n",
+ "\n",
+ "if Cache_Directory == 'RAM':\n",
+ " cache_path = '/dev/shm'\n",
+ "elif Cache_Directory == 'DISK':\n",
+ " os.makedirs('/tmp', exist_ok=True)\n",
+ " cache_path = '/tmp'\n",
+ "\n",
+ "def MountCMD():\n",
+ " mPoint = f\"/content/drives/{mountNam.value}\"\n",
+ " os.makedirs(mPoint, exist_ok=True)\n",
+ " cmd = rf\"rclone mount {mountNam.value}: {mPoint}\" \\\n",
+ " rf\" --config {rcloneConfigurationPath}/rclone.conf\" \\\n",
+ " ' --user-agent \"Mozilla\"' \\\n",
+ " ' --buffer-size 256M' \\\n",
+ " ' --transfers 10' \\\n",
+ " ' --vfs-cache-mode full' \\\n",
+ " ' --vfs-cache-max-age 0h0m1s' \\\n",
+ " ' --vfs-cache-poll-interval 0m1s' \\\n",
+ " f' --cache-dir {cache_path}' \\\n",
+ " ' --allow-other' \\\n",
+ " ' --daemon'\n",
+ "\n",
+ " if runSh(cmd, shell=True) == 0:\n",
+ " print(f\"The drive have been successfully mounted! - \\t{mPoint}\")\n",
+ " else:\n",
+ " print(f\"Failed to mount the drive! - \\t{mPoint}\")\n",
+ "\n",
+ "def unmountCMD():\n",
+ " mPoint = f\"/content/drives/{mountNam.value}\"\n",
+ " if os.system(f\"fusermount -uz {mPoint}\") == 0:\n",
+ " runSh(f\"rm -r {mPoint}\")\n",
+ " print(f\"The drive have been successfully unmounted! - \\t{mPoint}\")\n",
+ " else:\n",
+ " runSh(f\"fusermount -uz {mPoint}\", output=True)\n",
+ "\n",
+ "ShowAC()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "G3rr1OuFRApD"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Upload Configuration File \n",
+ "# @markdown If you already have an rclone configuration file, you can upload it by running this cell.
\n",
+ "\n",
+ "# @markdown ---\n",
+ "MODE = \"RCONFIG\" # @param ['UTILS', 'RCONFIG', 'RCONFIG_append', \"GENERATELIST\"]\n",
+ "REMOTE = \"mnc\" # @param {type:\"string\"}\n",
+ "QUERY_PATTERN = \"\" # @param {type:\"string\"}\n",
+ "# @markdown > For those who are unable to upload local file: StackOverflow
\n",
+ "# ================================================================ #\n",
+ "\n",
+ "from os import path as _p\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "if not _p.exists(\"/root/.ipython/mixlab.py\"):\n",
+ " from shlex import split as _spl\n",
+ " from subprocess import run # nosec\n",
+ "\n",
+ " shellCmd = \"wget -qq https://shirooo39.github.io/MiXLab/resources/mixlab.py \\\n",
+ " -O /root/.ipython/mixlab.py\"\n",
+ " run(_spl(shellCmd)) # nosec\n",
+ "\n",
+ "import importlib, mixlab\n",
+ "from google.colab import files # pylint: disable=import-error #nosec\n",
+ "from mixlab import checkAvailable, runSh, rcloneConfigurationPath, prepareSession\n",
+ "\n",
+ "\n",
+ "def generateUploadList():\n",
+ " prepareSession()\n",
+ " if checkAvailable(\"/content/upload.txt\"):\n",
+ " runSh(\"rm -f upload.txt\")\n",
+ " runSh(\n",
+ " f\"rclone --config {rcloneConfigurationPath}/rclone.conf lsf {REMOTE}: --include '{QUERY_PATTERN}' --drive-shared-with-me --files-only --max-depth 1 > /content/upload.txt\",\n",
+ " shell=True, # nosec\n",
+ " )\n",
+ "\n",
+ "\n",
+ "def uploadLocalFiles():\n",
+ " prepareSession()\n",
+ " if MODE == \"UTILS\":\n",
+ " filePath = \"/root/.ipython/mixlab.py\"\n",
+ " elif MODE in (\"RCONFIG\", \"RCONFIG_append\"):\n",
+ " filePath = f\"{rcloneConfigurationPath}/rclone.conf\"\n",
+ " else:\n",
+ " pass\n",
+ "\n",
+ " try:\n",
+ " if checkAvailable(filePath):\n",
+ " runSh(f\"rm -f {filePath}\")\n",
+ " display(HTML(\"Upload rclone.conf from your local machine. \"))\n",
+ " uploadedFile = files.upload()\n",
+ " fileNameDictKeys = uploadedFile.keys()\n",
+ " fileNo = len(fileNameDictKeys)\n",
+ " if fileNo > 1:\n",
+ " for fn in fileNameDictKeys:\n",
+ " runSh(f'rm -f \"/content/{fn}\"')\n",
+ " return print(\"\\nOnly upload one configuration file!\")\n",
+ " elif fileNo == 0:\n",
+ " return print(\"\\nFile upload cancelled.\")\n",
+ " elif fileNo == 1:\n",
+ " for fn in fileNameDictKeys:\n",
+ " if checkAvailable(f\"/content/{fn}\"):\n",
+ " if MODE == \"RCONFIG_append\":\n",
+ " import urllib\n",
+ " urllib.request.urlretrieve(\"https://shirooo39.github.io/MiXLab/resources/configurations/rclone/rclone.conf\",\n",
+ " \"/usr/local/sessionSettings/rclone.conf\")\n",
+ " with open(f\"/content/{fn}\", 'r+') as r:\n",
+ " new_data = r.read()\n",
+ " runSh(f'rm -f \"/content/{fn}\"')\n",
+ " with open(filePath, 'r+') as f:\n",
+ " old_data = f.read()\n",
+ " f.seek(0)\n",
+ " f.truncate(0)\n",
+ " f.write(old_data + new_data)\n",
+ " print(\"\\nUpdate completed.\")\n",
+ " else:\n",
+ " runSh(f'mv -f \"/content/{fn}\" {filePath}')\n",
+ " runSh(f\"chmod 666 {filePath}\")\n",
+ " runSh(f'rm -f \"/content/{fn}\"')\n",
+ " importlib.reload(mixlab)\n",
+ " !rm /content/upload.txt\n",
+ " clear_output()\n",
+ " print(\"rclone.conf has been uploaded to Colab!\")\n",
+ " return\n",
+ " else:\n",
+ " print(\"\\nNo file is chosen!\")\n",
+ " return\n",
+ " except:\n",
+ " return print(\"\\nFailed to upload!\")\n",
+ "\n",
+ "\n",
+ "if MODE == \"GENERATELIST\":\n",
+ " generateUploadList()\n",
+ "else:\n",
+ " uploadLocalFiles()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "BucL21B4RIGJ"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Download Configuration File \n",
+ "# @markdown Download configuration file from the VM into your local machine.
\n",
+ "\n",
+ "# @markdown ---\n",
+ "MODE = \"RCONFIG\" # @param ['UTILS', 'RCONFIG']\n",
+ "# ================================================================ #\n",
+ "\n",
+ "from IPython.display import clear_output\n",
+ "from google.colab import files\n",
+ "\n",
+ "def downloadFile():\n",
+ " if MODE == \"UTILS\":\n",
+ " filePath = \"/root/.ipython/mixlab.py\"\n",
+ " elif MODE == \"RCONFIG\":\n",
+ " filePath = f\"{rcloneConfigurationPath}/rclone.conf\"\n",
+ " else:\n",
+ " pass\n",
+ " try:\n",
+ " files.download(filePath)\n",
+ " except FileNotFoundError:\n",
+ " print(\"File not found!\")\n",
+ "\n",
+ "if __name__ == \"__main__\":\n",
+ " downloadFile()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "_NGsTyR3Ra5N"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "## @markdown ← Sync Backup \n",
+ "# @markdown \n",
+ "#FAST_LIST = True\n",
+ "# ================================================================ #\n",
+ "\n",
+ "#from os import path as _p\n",
+ "\n",
+ "#if not _p.exists(\"/root/.ipython/rlab_utils.py\"):\n",
+ "# from shlex import split as _spl\n",
+ "# from subprocess import run # nosec\n",
+ "\n",
+ "# shellCmd = \"wget -qq https://biplobsd.github.io/RLabClone/res/rlab_utils.py \\\n",
+ "# -O /root/.ipython/rlab_utils.py\"\n",
+ "# run(_spl(shellCmd)) # nosec\n",
+ "\n",
+ "#from rlab_utils import (\n",
+ "# runSh,\n",
+ "# prepareSession,\n",
+ "# PATH_RClone_Config,\n",
+ "#)\n",
+ "\n",
+ "\n",
+ "#def generateCmd(src, dst):\n",
+ "# block=f\"{'':=<117}\"\n",
+ "# title=f\"\"\"+{f'Now Synchronizing... \"{src}\" > \"{dst}\" Fast List : {\"ON\" if FAST_LIST else \"OFF\"}':^{len(block)-2}}+\"\"\"\n",
+ "# print(f\"{block}\\n{title}\\n{block}\")\n",
+ "# cmd = f'rclone sync \"{src}\" \"{dst}\" --config {PATH_RClone_Config}/rclone.conf {\"--fast-list\" if FAST_LIST else \"\"} --user-agent \"Mozilla\" --transfers 20 --checkers 20 --drive-server-side-across-configs -c --buffer-size 256M --drive-chunk-size 256M --drive-upload-cutoff 256M --drive-acknowledge-abuse --drive-keep-revision-forever --tpslimit 95 --tpslimit-burst 40 --stats-one-line --stats=5s -v'\n",
+ "# return cmd\n",
+ "\n",
+ "\n",
+ "#def executeSync():\n",
+ "# prepareSession()\n",
+ "# runSh(generateCmd(\"tdTdnMov:Movies\",\"tdMovRa4:\"), output=True)\n",
+ "# runSh(generateCmd(\"tdTdnTvs:TV Shows\",\"tdTvsRa5:\"), output=True)\n",
+ "# runSh(generateCmd(\"tdTdnRa6:Games\",\"tdGamRa7:\"), output=True)\n",
+ "# runSh(generateCmd(\"tdTdnRa8:Software\",\"tdSofRa9:\"), output=True)\n",
+ "# runSh(generateCmd(\"tdTdnR11:Tutorials\",\"tdTutR12:\"), output=True)\n",
+ "# runSh(generateCmd(\"tdTdnR13:Anime\",\"tdAniR14:\"), output=True)\n",
+ "# runSh(generateCmd(\"tdTdn14:Music\",\"tdMusR15:\"), output=True)\n",
+ "\n",
+ "\n",
+ "#executeSync()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "4DdRcv08fzTG"
+ },
+ "source": [
+ "# ✦ *Download Manager* ✦ "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Sjvzf5WLsJya"
+ },
+ "source": [
+ "> It is recommended to download the file(s) into the VM's local disk first and then use rclone to upload (move/copy)to remote Drive, to avoid possible file corruption."
+ ]
+ },
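+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Example] Copy Downloaded Files to a Remote With rclone \n",
+ "# @markdown A minimal example of the recommended flow: download to the VM's local disk first, then copy to a remote with rclone. REMOTE_NAME and REMOTE_PATH are placeholders; use a remote that already exists in your rclone.conf.\n",
+ "REMOTE_NAME = \"\" #@param {type:\"string\"}\n",
+ "REMOTE_PATH = \"\" #@param {type:\"string\"}\n",
+ "LOCAL_PATH = \"/content/downloads\" #@param {type:\"string\"}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "from os import path as _p\n",
+ "\n",
+ "if not _p.exists(\"/root/.ipython/mixlab.py\"):\n",
+ "    from shlex import split as _spl\n",
+ "    from subprocess import run\n",
+ "\n",
+ "    shellCmd = \"wget -qq https://shirooo39.github.io/MiXLab/resources/mixlab.py \\\n",
+ "    -O /root/.ipython/mixlab.py\"\n",
+ "    run(_spl(shellCmd))\n",
+ "\n",
+ "from mixlab import prepareSession, rcloneConfigurationPath\n",
+ "\n",
+ "prepareSession()\n",
+ "rcloneConfig = f\"{rcloneConfigurationPath}/rclone.conf\"\n",
+ "\n",
+ "if REMOTE_NAME == \"\":\n",
+ "    print(\"The REMOTE_NAME field is empty!\")\n",
+ "elif not os.path.exists(LOCAL_PATH):\n",
+ "    print(\"Unable to find the local path!\")\n",
+ "else:\n",
+ "    # Copy (not move) so the local files are kept until the upload is verified.\n",
+ "    !rclone copy \"$LOCAL_PATH\" \"$REMOTE_NAME:$REMOTE_PATH\" --config \"$rcloneConfig\" --transfers 10 --stats-one-line --stats 5s -v"
+ ]
+ },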
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "o_uCXhC1S0GZ"
+ },
+ "source": [
+ "## ✧ *Hosted-File Downloader* ✧ "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "nGKbLp4P8MXi"
+ },
+ "source": [
+ "### aria2 "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "l8uIsoVrC6to"
+ },
+ "source": [
+ "#### aria2 "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "Z3fpZQeJ8N80"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] aria2 \n",
+ "Aria2_rpc = True\n",
+ "Ariang_WEBUI = True\n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "# OUTPUT_DIR = \"\" # @param {type:\"string\"}\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, pathlib, zipfile, re\n",
+ "import urllib.request, requests\n",
+ "from IPython.display import HTML, clear_output\n",
+ "from urllib.parse import urlparse\n",
+ "\n",
+ "PORT = 8221\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " CWD,\n",
+ " displayUrl,\n",
+ " findProcess,\n",
+ " findPackageR\n",
+ ")\n",
+ "\n",
+ "loadingAn()\n",
+ "\n",
+ "# Setting up aria2\n",
+ "runSh('apt install -y aria2')\n",
+ "pathlib.Path('ariang').mkdir(mode=0o777, exist_ok=True)\n",
+ "pathlib.Path('downloads').mkdir(mode=0o777, exist_ok=True)\n",
+ "\n",
+ "# Defining Github latest release tag\n",
+ "def latestTag(link):\n",
+ " import re\n",
+ " from urllib.request import urlopen\n",
+ " htmlF = urlopen(link+\"/releases/latest\").read().decode('UTF-8')\n",
+ " return re.findall(r'.+\\/tag\\/([.0-9A-Za-z]+)\".+/', htmlF)[0]\n",
+ "\n",
+ "# Downloading the latest version of ariaNg\n",
+ "if not os.path.exists(\"ariang/index.html\"):\n",
+ " # BASE_URL = r\"https://github.com/mayswind/AriaNg\"\n",
+ " # LATEST_TAG = latestTag(BASE_URL)\n",
+ " # urlF = f'{BASE_URL}/releases/download/{LATEST_TAG}/' \\\n",
+ " # f'AriaNg-{LATEST_TAG}-AllInOne.zip'\n",
+ " urllib.request.urlretrieve(findPackageR('mayswind/AriaNg', 'AllInOne.zip'), 'ariang/new.zip')\n",
+ " with zipfile.ZipFile('ariang/new.zip', 'r') as zip_ref: zip_ref.extractall('ariang')\n",
+ " try:\n",
+ " pathlib.Path('ariang/new.zip').unlink()\n",
+ " except FileNotFoundError:\n",
+ " pass\n",
+ "\n",
+ "# Starting up aria2 RPC and the WebUI (ariaNg)\n",
+ "try:\n",
+ " if not OUTPUT_DIR:\n",
+ " OUTPUT_DIR = f\"downloads/\"\n",
+ " elif not os.path.exists(OUTPUT_DIR):\n",
+ " \n",
+ " clear_output()\n",
+ " \n",
+ " print(\"Unable to find the defined path!\")\n",
+ " exx()\n",
+ "except:\n",
+ " OUTPUT_DIR = f\"{CWD}/downloads/\"\n",
+ "\n",
+ "if Aria2_rpc:\n",
+ " if not findProcess(\"aria2c\", \"--enable-rpc\"):\n",
+ " try:\n",
+ " trackers = requests.get(\"https://trackerslist.com/best_aria2.txt\").text\n",
+ " cmdC = r\"aria2c --enable-rpc --rpc-listen-port=6800 -D \" \\\n",
+ " fr\"-d {OUTPUT_DIR} \" \\\n",
+ " r\"-j 20 \" \\\n",
+ " r\"-c \" \\\n",
+ " fr\"--bt-tracker={trackers} \" \\\n",
+ " r\"--bt-request-peer-speed-limit=0 \" \\\n",
+ " r\"--bt-max-peers=0 \" \\\n",
+ " r\"--seed-ratio=0.0 \" \\\n",
+ " r\"--max-connection-per-server=10 \" \\\n",
+ " r\"--min-split-size=10M \" \\\n",
+ " r\"--follow-torrent=mem \" \\\n",
+ " r\"--disable-ipv6=true \" \\\n",
+ " r\" &\"\n",
+ " runSh(cmdC, shell=True)\n",
+ " except:\n",
+ " print(\"aria2 RPC is not enabled! Please enable the RPC first!\")\n",
+ "\n",
+ "# Configuring port forwarding\n",
+ "clear_output()\n",
+ "\n",
+ "if Aria2_rpc:\n",
+ " Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['Aria2_rpc', 6800, 'http']], REGION.lower(), [f\"{HOME}/.ngrok2/aria2.yml\", 5042])\n",
+ " data = Server.start('Aria2_rpc', displayB=False)\n",
+ " Host = urlparse(data['url']).hostname\n",
+ " port = \"80\"\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "if Ariang_WEBUI:\n",
+ " if Aria2_rpc:\n",
+ " filePath = 'ariang/index.html'\n",
+ " with open(filePath, 'r+') as f:\n",
+ " read_data = f.read()\n",
+ " f.seek(0)\n",
+ " f.truncate(0)\n",
+ " read_data = re.sub(r'(rpcHost:\"\\w+.\")|rpcHost:\"\"', f'rpcHost:\"{Host}\"', read_data)\n",
+ " read_data = re.sub(r'protocol:\"\\w+.\"', r'protocol:\"ws\"', read_data)\n",
+ " read_data = re.sub(r'rpcPort:\"\\d+.\"', f'rpcPort:\"{port}\"', read_data)\n",
+ " f.write(read_data)\n",
+ " try:\n",
+ " urllib.request.urlopen(f\"http://localhost:{PORT}\")\n",
+ " except:\n",
+ " runSh(f\"python3 -m http.server {PORT} &\", shell=True, cd=\"ariang/\")\n",
+ " \n",
+ "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['Ariang', PORT, 'http']], REGION.lower(), [f\"{HOME}/.ngrok2/ariang.yml\", 5043])\n",
+ "data2 = Server.start('Ariang', displayB=False)\n",
+ "data2['url'] = urlparse(data2['url'])._replace(scheme='http').geturl()\n",
+ "displayUrl(data2, pNamU='AriaNg : ')\n",
+ "\n",
+ "if Aria2_rpc:\n",
+ " display(HTML(\"\"\"aria2 RPC Configuration
Protocol Host Port
WebSocket \"\"\"+Host+\"\"\" \"\"\"+port+\"\"\"
Click HERE to see how to use the configuration.
\"\"\"))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "YMSrqjUm_bDN"
+ },
+ "source": [
+ "#### aria2 > "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "xa483vhL_d0X"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] aria2 > \n",
+ "URL = \"\" #@param {type:\"string\"}\n",
+ "OUTPUT_PATH = \"\" #@param {type:\"string\"}\n",
+ "# @markdown > If OUTPUT_PATH is blank, the file will be downloaded into the default location.Default download location is /content/downloads\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import pathlib\n",
+ "import shutil\n",
+ "import hashlib\n",
+ "import requests\n",
+ "from urllib.parse import urlparse\n",
+ "from os import path, mkdir\n",
+ "if not path.exists(\"/root/.ipython/mixlab.py\"): \n",
+ " from subprocess import run\n",
+ " from shlex import split\n",
+ "\n",
+ " shellCmd = \"wget -qq https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py \\\n",
+ " -O /root/.ipython/mixlab.py\"\n",
+ " run(split(shellCmd))\n",
+ "\n",
+ "from mixlab import runSh\n",
+ "\n",
+ "def youtubedlInstall():\n",
+ " if not path.isfile(\"/usr/local/bin/youtube-dl\"):\n",
+ " cmdC = \"rm -rf /content/sample_data/ && \" \\\n",
+ " \" mkdir -p -m 666 /root/.YouTube-DL/ &&\" \\\n",
+ " \" apt-get install atomicparsley &&\" \\\n",
+ " \" curl -L https://yt-dl.org/downloads/latest/youtube-dl \" \\\n",
+ " \"-o /usr/local/bin/youtube-dl &&\" \\\n",
+ " \" chmod a+rx /usr/local/bin/youtube-dl\"\n",
+ " get_ipython().system_raw(cmdC)\n",
+ "\n",
+ "def aria2Install():\n",
+ " runSh('apt install -y aria2')\n",
+ "\n",
+ "def istmd(URL): \n",
+ " link = urlparse(URL)\n",
+ " \n",
+ " #YandexDisk\n",
+ " if link.netloc == \"yadi.sk\":\n",
+ " API_ENDPOINT = 'https://cloud-api.yandex.net/v1/disk/public/resources/' \\\n",
+ " '?public_key={}&path=/{}&offset={}'\n",
+ " dry = False\n",
+ " def md5sum(file_path):\n",
+ " md5 = hashlib.md5()\n",
+ " with open(file_path, 'rb') as f:\n",
+ " for chunk in iter(lambda: f.read(128 * md5.block_size), b''):\n",
+ " md5.update(chunk)\n",
+ " return md5.hexdigest()\n",
+ "\n",
+ "\n",
+ " def check_and_download_file(target_path, url, size, checksum):\n",
+ " if path.isfile(target_path):\n",
+ " if size == path.getsize(target_path):\n",
+ " if checksum == md5sum(target_path):\n",
+ " print('URL {}'.format(url))\n",
+ " print('skipping correct {}'.format(target_path))\n",
+ " return\n",
+ " if not dry:\n",
+ " print('URL {}'.format(url))\n",
+ " print('downloading {}'.format(target_path))\n",
+ " runSh(f'aria2c -x 16 -s 16 -k 1M -d {OUTPUT_PATH} {url}', output=True)\n",
+ " # r = requests.get(url, stream=True)\n",
+ " # with open(target_path, 'wb') as f:\n",
+ " # shutil.copyfileobj(r.raw, f)\n",
+ "\n",
+ " def download_path(target_path, public_key, source_path, offset=0):\n",
+ " print('getting \"{}\" at offset {}'.format(source_path, offset))\n",
+ " current_path = path.join(target_path, source_path)\n",
+ " pathlib.Path(current_path).mkdir(parents=True, exist_ok=True)\n",
+ " jsn = requests.get(API_ENDPOINT.format(public_key, source_path, offset)).json()\n",
+ " def try_as_file(j):\n",
+ " if 'file' in j:\n",
+ " file_save_path = path.join(current_path, j['name'])\n",
+ " check_and_download_file(file_save_path, j['file'], j['size'], j['md5'])\n",
+ " return True\n",
+ " return False\n",
+ "\n",
+ " # first try to treat the actual json as a single file description\n",
+ " if try_as_file(jsn):\n",
+ " return\n",
+ "\n",
+ " # otherwise treat it as a directory\n",
+ " emb = jsn['_embedded']\n",
+ " items = emb['items']\n",
+ " for i in items:\n",
+ " # each item can be a file...\n",
+ " if try_as_file(i):\n",
+ " continue\n",
+ " # ... or a directory\n",
+ " else:\n",
+ " subdir_path = path.join(source_path, i['name'])\n",
+ " download_path(target_path, public_key, subdir_path)\n",
+ "\n",
+ " # check if current directory has more items\n",
+ " last = offset + emb['limit']\n",
+ " if last < emb['total']:\n",
+ " download_path(target_path, public_key, source_path, last)\n",
+ " download_path(OUTPUT_PATH, URL, '')\n",
+ " return False \n",
+ " return URL\n",
+ "\n",
+ "if not OUTPUT_PATH:\n",
+ " OUTPUT_PATH = \"/content/downloads/\"\n",
+ " \n",
+ "if not URL == \"\":\n",
+ " aria2Install()\n",
+ " youtubedlInstall()\n",
+ " try:\n",
+ " mkdir(\"downloads\")\n",
+ " except FileExistsError:\n",
+ " pass\n",
+ " url = istmd(URL)\n",
+ " if url != False:\n",
+ " print('URL {}'.format(URL))\n",
+ " cmdC = f'youtube-dl -o \"{OUTPUT_PATH}/%(title)s\" {URL} ' \\\n",
+ " '--external-downloader aria2c ' \\\n",
+ " '--external-downloader-args \"-x 16 -s 16 -k 1M\"'\n",
+ " runSh(cmdC, output=True)\n",
+ "else:\n",
+ " print(\"The URL field is emtpy!\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "N09EnjlB6wuV"
+ },
+ "source": [
+ "### bandcamp-dl "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "0jLuWp0C604l"
+ },
+ "outputs": [],
+ "source": [
+ "#============================= FORM =============================#\n",
+ "#@markdown ← [Install] bandcamp-dl \n",
+ "#@markdown Make sure to run this cell first! \n",
+ "#================================================================#\n",
+ "\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "!pip3 install bandcamp-downloader\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "LxU70FqH62an"
+ },
+ "outputs": [],
+ "source": [
+ "#============================= FORM =============================#\n",
+ "#@markdown ← [Run] bandcamp-dl \n",
+ "URL = \"\" #@param {type:\"string\"}\n",
+ "Download_location = \"\" #@param {type:\"string\"}\n",
+ "#@markdown > If the \"Download_location\" field is left empty, downloads will be stored in: /content/downloads/bandcamp\n",
+ "\n",
+ "#@markdown ---\n",
+ "#@markdown ⚙️ Download Options ⚙️ \n",
+ "Download_only_if_all_tracks_are_available = False #@param {type:\"boolean\"}\n",
+ "Overwrite_tracks_that_already_exist = False #@param {type:\"boolean\"}\n",
+ "Skip_grabbing_album_art = False #@param {type:\"boolean\"}\n",
+ "Embed_track_lyrics_If_available = False #@param {type:\"boolean\"}\n",
+ "Use_album_or_track_Label_as_iTunes_grouping = False #@param {type:\"boolean\"}\n",
+ "Embed_album_art_If_available = False #@param {type:\"boolean\"}\n",
+ "\n",
+ "#@markdown ---\n",
+ "#@markdown ⚙️ Advanced Options ⚙️ \n",
+ "Enable_verbose_logging = False #@param {type:\"boolean\"}\n",
+ "Disable_slugification_of_track_album_and_artist_names = False #@param {type:\"boolean\"}\n",
+ "Only_allow_ASCII_characters = False #@param {type:\"boolean\"}\n",
+ "Retain_whitespace_in_filenames = False #@param {type:\"boolean\"}\n",
+ "Retain_uppercase_letters_in_filenames = False #@param {type:\"boolean\"}\n",
+ "Specify_allowed_characters_in_slugify = \"-_~\" #@param {type:\"string\"}\n",
+ "Specify_the_character_to_use_in_place_of_spaces = \"-\" #@param {type:\"string\"}\n",
+ "#================================================================#\n",
+ "\n",
+ "\n",
+ "import os\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "\n",
+ "default_download_location = \"/content/downloads/bandcamp\"\n",
+ "custom_download_location = Download_location\n",
+ "\n",
+ "if Download_location is \"\":\n",
+ " Download_location = \"--base-dir=\" + default_download_location\n",
+ " \n",
+ " if os.path.exists(default_download_location):\n",
+ " pass\n",
+ " else:\n",
+ " os.makedirs(default_download_location)\n",
+ "else:\n",
+ " Download_location = \"--base-dir=\" + Download_location\n",
+ " \n",
+ " if os.path.exists(custom_download_location):\n",
+ " pass\n",
+ " else:\n",
+ " os.makedirs(custom_download_location)\n",
+ "\n",
+ "if Download_only_if_all_tracks_are_available is True:\n",
+ " full_album = \"-f\"\n",
+ "else:\n",
+ " full_album = \"\"\n",
+ "\n",
+ "if Overwrite_tracks_that_already_exist is True:\n",
+ " overwrite = \"-o\"\n",
+ "else:\n",
+ " overwrite = \"\"\n",
+ "\n",
+ "if Skip_grabbing_album_art is True:\n",
+ " no_art = \"-n\"\n",
+ "else:\n",
+ " no_art = \"\"\n",
+ "\n",
+ "if Embed_track_lyrics_If_available is True:\n",
+ " embed_lyrics = \"-e\"\n",
+ "else:\n",
+ " embed_lyrics = \"\"\n",
+ "\n",
+ "if Use_album_or_track_Label_as_iTunes_grouping is True:\n",
+ " group = \"-g\"\n",
+ "else:\n",
+ " group = \"\"\n",
+ "\n",
+ "if Embed_album_art_If_available is True:\n",
+ " embed_art = \"-r\"\n",
+ "else:\n",
+ " embed_art = \"\"\n",
+ "\n",
+ "if Enable_verbose_logging is True:\n",
+ " verbose_logging = \"-d\"\n",
+ "else:\n",
+ " verbose_logging = \"\"\n",
+ "\n",
+ "if Disable_slugification_of_track_album_and_artist_names is True:\n",
+ " no_slugify = \"-y\"\n",
+ "else:\n",
+ " no_slugify = \"\"\n",
+ "\n",
+ "if Only_allow_ASCII_characters is True:\n",
+ " ascii_only = \"-a\"\n",
+ "else:\n",
+ " ascii_only = \"\"\n",
+ "\n",
+ "if Retain_whitespace_in_filenames is True:\n",
+ " keep_spaces = \"-k\"\n",
+ "else:\n",
+ " keep_spaces = \"\"\n",
+ "\n",
+ "if Retain_uppercase_letters_in_filenames is True:\n",
+ " keep_upper = \"-u\"\n",
+ "else:\n",
+ " keep_upper = \"\"\n",
+ "\n",
+ "\n",
+ "if not URL is \"\":\n",
+ " !bandcamp-dl $full_album $overwrite $no_art $embed_lyrics $group $embed_art $verbose_logging $no_slugify $ascii_only $keep_spaces $keep_upper \"$Download_location\" \"$URL\"\n",
+ " \n",
+ " display(HTML(\"✅ bandcamp-dl has finished performing its task! \"))\n",
+ "else:\n",
+ " display(HTML(\"❌ The URL field is empty! \"))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "bZ-Z0cUdz7IL"
+ },
+ "source": [
+ "### FunKiiU "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "yRmvnl090JmZ"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "#@markdown ← [Start] FunKiiU \n",
+ "#@markdown FunKiiU is a python tool for downloading Nintendo Wii U content from Nintendo's CDN. (Click here to check out the github repository)
\n",
+ "\n",
+ "#@markdown ---\n",
+ "title_id = \"\" #@param {type:\"string\"}\n",
+ "title_key = \"\" #@param {type:\"string\"}\n",
+ "#download_path = \"\" #@param {type:\"string\"}\n",
+ "run_in_simulated_mode = False #@param{type: \"boolean\"}\n",
+ "#@markdown > Download(s) are stored in (/content/install).\n",
+ "\n",
+ "# @markdown ---\n",
+ "automatically_clear_cell_output = False #@param{type: \"boolean\"}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "#import subprocess\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "\n",
+ "FunKiiU_clone_path = \"/content/tools/FunKiiU\"\n",
+ "FunKiiU_path = \"/content/tools/FunKiiU/FunKiiU.py\"\n",
+ "FunKiiU_download_path = \"/content/install\"\n",
+ "\n",
+ "\n",
+ "# Checks whether FunKiiU exists or not.\n",
+ "# If FunKiiU does not exist, it will be downloaded/pulled from its github repository.\n",
+ "if os.path.exists(FunKiiU_path):\n",
+ "\tpass\n",
+ "else:\n",
+ " os.system(\"git clone https://github.com/llakssz/FunKiiU \" + FunKiiU_clone_path)\n",
+ " \n",
+ " # This block here is not actually necessery as FunKiiU is able automatically create the \"install\" folder but, well...\n",
+ " try:\n",
+ " os.makedirs(FunKiiU_download_path, exist_ok=True)\n",
+ " except OSError as error:\n",
+ " pass\n",
+ "\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "\n",
+ "# Fields checking.\n",
+ "# If both fields or one of them are empty, a message will be shown.\n",
+ "if title_id == \"\" and title_key == \"\":\n",
+ " display(HTML(\"❌ Both fields are empty! \"))\n",
+ "elif title_id == \"\" and not title_key == \"\":\n",
+ " display(HTML(\"❌ The title_id field is empty! \"))\n",
+ "elif not title_id == \"\" and title_key == \"\":\n",
+ " display(HTML(\"❌ The title_key field is empty! \"))\n",
+ "else:\n",
+ " # Passing the -simulate argument to run in simulated mode, if the above checkbox's value is True\n",
+ " if run_in_simulated_mode is True:\n",
+ " simulate = \" -simulate\"\n",
+ " else:\n",
+ " simulate = \"\"\n",
+ " \n",
+ " # The actual piece of command that runs FunKiiU\n",
+ " # ----- Downloading by running the command directly as the OS,\n",
+ " !python \"/content/tools/FunKiiU/FunKiiU.py\" -title \"$title_id\" -key \"$title_key\" $simulate\n",
+ " \n",
+ " # ----- Downloading the python way but still as the OS (does not show any output),\n",
+ " #os.system(\"python \" + FunKiiU_path + \" -title \" + title_id + \" -key \" + title_key + simulate)\n",
+ " \n",
+ " # ----- Downloading as subprocess and capture the output.\n",
+ " #FunKiiU_process = subprocess.Popen(\"python \" + FunKiiU_path + \" -title \" + title_id + \" -key \" + title_key + simulate, shell = True, stdout = subprocess.PIPE).stdout\n",
+ " #FunKiiU = FunKiiU_process.read()\n",
+ " #\n",
+ " #print(FunKiiU.decode())\n",
+ "\n",
+ " # Printing different message for regular download mode or simulated mode.\n",
+ " if run_in_simulated_mode is True:\n",
+ " display(HTML(\"✅ FunKiiU has finished doing the simulation. \"))\n",
+ " else:\n",
+ " display(HTML(\"✅ Download(s) are stored in: /content/install \"))\n",
+ " \n",
+ " # Will automatically clear console output if the above checkbox's value is True\n",
+ " # With this enabled, user won't be able to see anything, though.\n",
+ " if automatically_clear_cell_output is True:\n",
+ " clear_output()\n",
+ " else:\n",
+ " pass"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "0XaXh7Ix0VFu"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Clear \"install\" Folder \n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import IPython\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "FunKiiU_download_path = \"/content/install\"\n",
+ "\n",
+ "if os.path.exists(FunKiiU_download_path):\n",
+ " os.system(\"rm -rf \" + FunKiiU_download_path)\n",
+ " os.makedirs(FunKiiU_download_path)\n",
+ "elif not os.path.exists(FunKiiU_download_path):\n",
+ " os.makedirs(FunKiiU_download_path)\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "s7IbnEdkYBkY"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Remove FunKiiU \n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import IPython\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "FunKiiU_path = \"/content/tools/FunKiiU\"\n",
+ "\n",
+ "if os.path.exists(FunKiiU_download_path):\n",
+ " os.system(\"rm -rf \" + FunKiiU_path)\n",
+ "elif not os.path.exists(FunKiiU_path):\n",
+ " pass\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ERBVA5aIERou"
+ },
+ "source": [
+ "### Google Drive CLI "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "Qs0bcnzAFDZq"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Clone] Google Drive CLI \n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import IPython\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "\n",
+ "GoogleDriveCLI_path1 = \"/content/tools/GoogleDriveCLI\"\n",
+ "GoogleDriveCLI_path2 = GoogleDriveCLI_path1 + \"/gdrive\"\n",
+ "\n",
+ "\n",
+ "def cloneGoogleDriveCLI():\n",
+ " if os.path.exists(GoogleDriveCLI_path1 + \"/gdrive\"):\n",
+ " pass\n",
+ " else:\n",
+ " # Big thanks to github user GrowtopiaJaw for providing a pre-compiled binary of Google Drive CLI.\n",
+ " # https://github.com/GrowtopiaJaw/gdrive\n",
+ " os.system(\"wget https://github.com/GrowtopiaJaw/gdrive/releases/download/v2.1.1/gdrive-linux-amd64\")\n",
+ " \n",
+ " if not os.path.exists(GoogleDriveCLI_path1):\n",
+ " # Big thanks to github user prasmussen for creating such an awesome tool.\n",
+ " # https://github.com/prasmussen/gdrive\n",
+ " os.makedirs(\"/content/tools/GoogleDriveCLI\")\n",
+ "\n",
+ " os.system(\"mv /content/gdrive-linux-amd64 \" + GoogleDriveCLI_path1 + \"/gdrive\")\n",
+ " os.system(\"chmod +x \" + GoogleDriveCLI_path1 + \"/gdrive\")\n",
+ "\n",
+ "\n",
+ "def initializeGoogleDriveCLI():\n",
+ " if not os.path.exists(GoogleDriveCLI_path2):\n",
+ " cloneGoogleDriveCLI()\n",
+ " initializeGoogleDriveCLI()\n",
+ " else:\n",
+ " !\"$GoogleDriveCLI_path2\" \"about\"\n",
+ " #clear_output(wait = True)\n",
+ "\n",
+ "\n",
+ "initializeGoogleDriveCLI()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "V6fwq8QcF77j"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "#@markdown ← [Start] Google Drive CLI \n",
+ "download_id = \"\" #@param{type:\"string\"}\n",
+ "#@markdown > Currently only support downloading a publicly shared file (a file, NOT a folder).\n",
+ "download_path = \"\" #@param{type:\"string\"}\n",
+ "#@markdown > If left empty, the default download path will be used (/content/downloads).\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "\n",
+ "download_path_default = \"/content/downloads\"\n",
+ "GoogleDriveCLI_path1 = \"/content/tools/GoogleDriveCLI\"\n",
+ "GoogleDriveCLI_path2 = GoogleDriveCLI_path1 + \"/gdrive\"\n",
+ "\n",
+ "\n",
+ "if not os.path.exists(GoogleDriveCLI_path2):\n",
+ " display(HTML(\"❌ Unable to locate the required binary! Make sure you have already run the cell above first! \"))\n",
+ "else:\n",
+ " if download_id == \"\":\n",
+ " display(HTML(\"❌ The download_id field is empty! \"))\n",
+ " else:\n",
+ " if download_path == \"\":\n",
+ " download_path = download_path_default\n",
+ " if not os.path.exists(download_path):\n",
+ " os.makedirs(download_path)\n",
+ " else:\n",
+ " pass\n",
+ " elif not os.path.exists(download_path):\n",
+ " os.makedirs(download_path)\n",
+ " else:\n",
+ " pass\n",
+ " \n",
+ " !\"/content/tools/GoogleDriveCLI/gdrive\" download --path \"$download_path\" \"$download_id\"\n",
+ " \n",
+ " if download_path is download_path_default:\n",
+ " display(HTML(\"The download_path field is empty. Download(s) are stored into the default download path (/content/downloads). \"))\n",
+ " else:\n",
+ " display(HTML(\"Download(s) are stored into (\" + download_path + \"). \"))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "bEYznPNQ61sm"
+ },
+ "source": [
+ "### JDownloader "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "LP35vcdpw2Vd"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] JDownloader \n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "from os import path as _p\n",
+ "\n",
+ "NEW_Account = True\n",
+ "\n",
+ "if not _p.exists(\"/root/.ipython/mixlab.py\"):\n",
+ " from shlex import split as _spl\n",
+ " from subprocess import run # nosec\n",
+ "\n",
+ " shellCmd = \"wget -qq https://shirooo39.github.io/MiXLab/resources/mixlab.py \\\n",
+ " -O /root/.ipython/mixlab.py\"\n",
+ " run(_spl(shellCmd)) # nosec\n",
+ "\n",
+ "from mixlab import handleJDLogin\n",
+ "\n",
+ "handleJDLogin(NEW_Account)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "1mctlRk1TTrc"
+ },
+ "source": [
+ "### MEGA "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "AelSL7BeTcJA"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← MEGA Login \n",
+ "# @markdown Please log in to MEGA first (only needed to use the Uploader).
\n",
+ "# ================================================================ #\n",
+ "\n",
+ "from functools import wraps\n",
+ "import errno\n",
+ "import os\n",
+ "import signal\n",
+ "import subprocess\n",
+ "import shlex\n",
+ "\n",
+ "class TimeoutError(Exception):\n",
+ " pass\n",
+ "\n",
+ "\n",
+ "def timeout(seconds=10, error_message=os.strerror(errno.ETIME)):\n",
+ " def decorator(func):\n",
+ " def _handle_timeout(signum, frame):\n",
+ " raise TimeoutError(error_message)\n",
+ "\n",
+ " def wrapper(*args, **kwargs):\n",
+ " signal.signal(signal.SIGALRM, _handle_timeout)\n",
+ " signal.alarm(seconds)\n",
+ " try:\n",
+ " result = func(*args, **kwargs)\n",
+ " finally:\n",
+ " signal.alarm(0)\n",
+ " return result\n",
+ "\n",
+ " return wraps(func)(wrapper)\n",
+ "\n",
+ " return decorator\n",
+ "\n",
+ "if not os.path.exists(\"/root/.ipython/mixlab.py\"):\n",
+ " from subprocess import run\n",
+ " from shlex import split\n",
+ "\n",
+ " shellCmd = \"wget -qq https://raw.githubusercontent.com/shirooo39/MiXLab/master/resources/mixlab.py \\\n",
+ " -O /root/.ipython/mixlab.py\"\n",
+ " run(split(shellCmd))\n",
+ "from mixlab import runSh\n",
+ "\n",
+ "@timeout(10)\n",
+ "def runShT(args):\n",
+ " return runSh(args, output=True)\n",
+ "\n",
+ "# Installing MEGAcmd\n",
+ "if not os.path.exists(\"/usr/bin/mega-cmd\"):\n",
+ " print(\"Installing MEGA ...\")\n",
+ " runSh('sudo apt-get -y update')\n",
+ " runSh('sudo apt-get -y install libmms0 libc-ares2 libc6 libcrypto++6 libgcc1 libmediainfo0v5 libpcre3 libpcrecpp0v5 libssl1.1 libstdc++6 libzen0v5 zlib1g apt-transport-https')\n",
+ " runSh('sudo curl -sL -o /var/cache/apt/archives/MEGAcmd.deb https://mega.nz/linux/MEGAsync/Debian_9.0/amd64/megacmd-Debian_9.0_amd64.deb', output=True)\n",
+ " runSh('sudo dpkg -i /var/cache/apt/archives/MEGAcmd.deb', output=True)\n",
+ " print(\"MEGA is installed.\")\n",
+ "else:\n",
+ " !pkill mega-cmd\n",
+ "\n",
+ "# Enter MEGA credential\n",
+ "USERNAME = \"\" # @param {type:\"string\"}\n",
+ "PASSWORD = \"\" # @param {type:\"string\"}\n",
+ "if not (USERNAME == \"\" or PASSWORD == \"\"):\n",
+ " try:\n",
+ " runShT(f\"mega-login {USERNAME} {PASSWORD}\")\n",
+ " except TimeoutError:\n",
+ " runSh('mega-whoami', output=True)\n",
+ "else:\n",
+ " print(\"Please enter your MEGA credential.\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "p0Wg4seDVseV"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] MEGA Downloader \n",
+ "URL = \"\" #@param {type:\"string\"}\n",
+ "OUTPUT_PATH = \"\" #@param {type:\"string\"}\n",
+ "# @markdown > URL: is the MEGA link you want to download (ex: mega.nz/file/file_link#decryption_key)OUTPUT_PATH: is where to store the downloaded file(s) (ex: /content/downloads/)\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import sys, os, urllib.request\n",
+ "import time\n",
+ "import subprocess\n",
+ "import contextlib\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " loadingAn,\n",
+ ")\n",
+ "\n",
+ "if not OUTPUT_PATH:\n",
+ " os.makedirs(\"downloads\", exist_ok=True)\n",
+ " OUTPUT_PATH = \"downloads\"\n",
+ "# Installing MEGAcmd\n",
+ "if not os.path.exists(\"/usr/bin/mega-cmd\"):\n",
+ " loadingAn()\n",
+ " print(\"Installing MEGA ...\")\n",
+ " runSh('sudo apt-get -y update')\n",
+ " runSh('sudo apt-get -y install libmms0 libc-ares2 libc6 libcrypto++6 libgcc1 libmediainfo0v5 libpcre3 libpcrecpp0v5 libssl1.1 libstdc++6 libzen0v5 zlib1g apt-transport-https')\n",
+ " runSh('sudo curl -sL -o /var/cache/apt/archives/MEGAcmd.deb https://mega.nz/linux/MEGAsync/Debian_9.0/amd64/megacmd-Debian_9.0_amd64.deb', output=True)\n",
+ " runSh('sudo dpkg -i /var/cache/apt/archives/MEGAcmd.deb', output=True)\n",
+ " print(\"MEGA is installed.\")\n",
+ " clear_output()\n",
+ "\n",
+ "# Unix, Windows and old Macintosh end-of-line\n",
+ "newlines = ['\\n', '\\r\\n', '\\r']\n",
+ "\n",
+ "def unbuffered(proc, stream='stdout'):\n",
+ " stream = getattr(proc, stream)\n",
+ " with contextlib.closing(stream):\n",
+ " while True:\n",
+ " out = []\n",
+ " last = stream.read(1)\n",
+ " # Don't loop forever\n",
+ " if last == '' and proc.poll() is not None:\n",
+ " break\n",
+ " while last not in newlines:\n",
+ " # Don't loop forever\n",
+ " if last == '' and proc.poll() is not None:\n",
+ " break\n",
+ " out.append(last)\n",
+ " last = stream.read(1)\n",
+ " out = ''.join(out)\n",
+ " yield out\n",
+ "\n",
+ "def transfare():\n",
+ " import codecs\n",
+ " decoder = codecs.getincrementaldecoder(\"UTF-8\")()\n",
+ " cmd = [\"mega-get\", URL, OUTPUT_PATH]\n",
+ " proc = subprocess.Popen(\n",
+ " cmd,\n",
+ " stdout=subprocess.PIPE,\n",
+ " stderr=subprocess.STDOUT,\n",
+ " # Make all end-of-lines '\\n'\n",
+ " universal_newlines=True,\n",
+ " )\n",
+ " for line in unbuffered(proc):\n",
+ " print(line)\n",
+ " \n",
+ "transfare()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "3GKtYuBbUP-c"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] MEGA Uploader \n",
+ "# Simple_torrent = False # @param{type: \"boolean\"}\n",
+ "# Peerflix = False # @param{type: \"boolean\"}\n",
+ "PATH_TO_FILE = \"\" # @param {type:\"string\"}\n",
+ "# @markdown > PATH_TO_FILE is the location of the file you want to upload located at. (ex: /content/downloads/file-to-upload.zip)\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import time\n",
+ "import subprocess\n",
+ "import contextlib\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "# Unix, Windows and old Macintosh end-of-line\n",
+ "newlines = ['\\n', '\\r\\n', '\\r']\n",
+ "\n",
+ "def unbuffered(proc, stream='stdout'):\n",
+ " stream = getattr(proc, stream)\n",
+ " with contextlib.closing(stream):\n",
+ " while True:\n",
+ " out = []\n",
+ " last = stream.read(1)\n",
+ " # Don't loop forever\n",
+ " if last == '' and proc.poll() is not None:\n",
+ " break\n",
+ " while last not in newlines:\n",
+ " # Don't loop forever\n",
+ " if last == '' and proc.poll() is not None:\n",
+ " break\n",
+ " out.append(last)\n",
+ " last = stream.read(1)\n",
+ " out = ''.join(out)\n",
+ " yield out\n",
+ "\n",
+ "def transfare():\n",
+ " cmd = \"\"\n",
+ " if Simple_torrent:\n",
+ " cmd = ['mega-put', 'downloads', '/colab']\n",
+ " elif Peerflix:\n",
+ " cmd = ['mega-put', 'peerflix', '/colab']\n",
+ " else:\n",
+ " cmd = ['mega-put', PATH_TO_FILE, '/colab']\n",
+ " proc = subprocess.Popen(\n",
+ " cmd,\n",
+ " stdout=subprocess.PIPE,\n",
+ " stderr=subprocess.STDOUT,\n",
+ " # Make all end-of-lines '\\n'\n",
+ " universal_newlines=True,\n",
+ " )\n",
+ " for line in unbuffered(proc):\n",
+ " clear_output(wait=True)\n",
+ " print(line)\n",
+ "\n",
+ "try:\n",
+ " transfare()\n",
+ "except FileNotFoundError:\n",
+ " print(\"Please log into your MEGA account first!\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "dEq11jIB5oee"
+ },
+ "source": [
+ "### pyLoad "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "a08IDWFG5rm1"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] pyLoad \n",
+ "# @markdown pyLoad is a free and open-source download manager written in pure python.\n",
+ "# @markdown > pyLoad Default CredentialUsername: adminPassword: admin\n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "PORT_FORWARD = \"argo_tunnel\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, pathlib, zipfile, re\n",
+ "import urllib.request\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " displayUrl,\n",
+ " findProcess\n",
+ ")\n",
+ "\n",
+ "\n",
+ "clear_output()\n",
+ "loadingAn()\n",
+ "\n",
+ "os.makedirs('downloads', exist_ok=True)\n",
+ "os.makedirs('tools/pyload', exist_ok=True)\n",
+ "\n",
+ "# Downloading latest version of pyload\n",
+ "if not os.path.exists(\"tools/pyload/pyload-stable\"):\n",
+ " urlF = 'https://github.com/pyload/pyload/archive/stable.zip'\n",
+ " conf = 'https://raw.githubusercontent.com/shirooo39/' \\\n",
+ " 'MiXLab/master/resources/configurations/pyload/pyload.conf'\n",
+ " db = 'https://github.com/shirooo39/MiXLab/raw/master/' \\\n",
+ " 'resources/configurations/pyload/files.db'\n",
+ " urllib.request.urlretrieve(urlF, 'tools/pyload.zip')\n",
+ " urllib.request.urlretrieve(conf, 'tools/pyload/pyload.conf')\n",
+ " urllib.request.urlretrieve(db, 'tools/pyload/files.db')\n",
+ " with zipfile.ZipFile('tools/pyload.zip', 'r') as zip_ref: zip_ref.extractall('tools/pyload')\n",
+ " try:\n",
+ " pathlib.Path('tools/pyload.zip').unlink()\n",
+ " except FileNotFoundError:\n",
+ " pass\n",
+ "\n",
+ " runSh(\"apt install python-pycurl python-qt4 tesseract-ocr libtesseract-dev\")\n",
+ " runSh(\"pip2 install pycrypto pyOpenSSL Jinja2 tesseract tesseract-ocr\")\n",
+ "\n",
+ "if not findProcess(\"python2.7\", \"pyLoadCore.py\"):\n",
+ " runCmd = \"python2.7 /content/tools/pyload/pyload-stable/pyLoadCore.py\" \\\n",
+ " \" --configdir=/content/tools/pyload\" \\\n",
+ " \" --no-remote\" \\\n",
+ " \" --daemon\"\n",
+ " runSh(runCmd, shell=True)\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['pyload', 8000, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/pyLoad.yml\", 4074]).start('pyload')\n",
+ "displayUrl(Server, pNamU='pyLoad : ')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Ci0HTN9Xyxze"
+ },
+ "source": [
+ "### Pornhub Downloader "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "cD9BrjIoAbF7"
+ },
+ "source": [
+ "> Recommended to use YouTube-DL instead."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "jRrvPBr5y19U"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Install the Required Module(s) \n",
+ "# ================================================================ #\n",
+ "\n",
+ "#@title ← ឵឵Upgrade FFmpeg to v4.2.2 { vertical-output: true }\n",
+ "from IPython.display import clear_output\n",
+ "import os, urllib.request\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "pathDoneCMD = f'{HOME}/doneCMD.sh'\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " loadingAn,\n",
+ " textAn,\n",
+ ")\n",
+ "\n",
+ "loadingAn(name=\"lds\")\n",
+ "textAn(\"Installing dependencies...\", ty='twg')\n",
+ "os.system('pip3 install youtube-dl')\n",
+ "os.system('pip3 install prettytable')\n",
+ "os.system('pip3 install bs4')\n",
+ "os.system('pip3 install requests')\n",
+ "%cd /content\n",
+ "os.system('git clone https://github.com/mariosemes/PornHub-downloader-python.git')\n",
+ "\n",
+ "clear_output()\n",
+ "print(\"The module(s) has been successfully installed.\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "OLj2mj4lzcOp"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] PornHub Downloader \n",
+ "pornhub_url = '' #@param {type: \"string\"}\n",
+ "option = \"single_download\" #@param [\"single_download\", \"batch_download\",\"add\",\"delete\"]\n",
+ "# @markdown > - Single Download link Eg: https://www.pornhub.com/view_video.php?viewkey=ph5d69a2093729e\n",
+ "#@markdown > - The batch option will ask you for the full path of your .txt file where you can import multiple URLs at once.Take care that every single URL in the .txt file is in his own row.\n",
+ "# ================================================================ #\n",
+ "\n",
+ "%cd PornHub-downloader-python\n",
+ "\n",
+ "if option == 'single_download':\n",
+ " !python3 phdler.py custom \"$pornhub_url\"\n",
+ "\n",
+ "elif option == 'add':\n",
+ " !python3 phdler.py add \"$pornhub_url\"\n",
+ "\n",
+ "elif option == 'delete':\n",
+ " !python3 phdler.py delete \"$pornhub_url\"\n",
+ "\n",
+ "else:\n",
+ " !python3 phdler.py custom batch "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "tL-ilxH0N_B9"
+ },
+ "source": [
+ "### Spotify Downloader "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "JTAKDpp9OCEs"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] Spotify Downloader \n",
+ "# @markdown Download Spotify playlists from YouTube with album-art and meta-tags
\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, uuid, urllib.parse, re\n",
+ "import ipywidgets as widgets\n",
+ "\n",
+ "from glob import glob\n",
+ "from urllib.parse import urlparse, parse_qs\n",
+ "from IPython.display import HTML, clear_output, YouTubeVideo\n",
+ "from IPython.utils.io import ask_yes_no\n",
+ "from google.colab import output, files\n",
+ "\n",
+ "\n",
+ "os.makedirs('tools/spotify-downloader/', exist_ok=True)\n",
+ "os.makedirs('downloads', exist_ok=True)\n",
+ "\n",
+ "# # Config files\n",
+ "# data = \"\"\"spotify-downloader:\n",
+ "# avconv: false\n",
+ "# download-only-metadata: false\n",
+ "# dry-run: false\n",
+ "# file-format: '{artist} - {track_name}'\n",
+ "# folder: /home/user/Music\n",
+ "# input-ext: .m4a\n",
+ "# log-level: INFO\n",
+ "# manual: false\n",
+ "# music-videos-only: false\n",
+ "# no-fallback-metadata: false\n",
+ "# no-metadata: false\n",
+ "# no-spaces: false\n",
+ "# output-ext: .mp3\n",
+ "# overwrite: prompt\n",
+ "# search-format: '{artist} - {track_name} lyrics'\n",
+ "# skip: null\n",
+ "# spotify_client_id: 4fe3fecfe5334023a1472516cc99d805\n",
+ "# spotify_client_secret: 0f02b7c483c04257984695007a4a8d5c\n",
+ "# trim-silence: false\n",
+ "# write-successful: null\n",
+ "# write-to: null\n",
+ "# youtube-api-key: null\n",
+ "# \"\"\"\n",
+ "# with open('tools/spotify-downloader/config.yml', 'w') as wnow:\n",
+ "# wnow.write(data)\n",
+ "\n",
+ "Links = widgets.Textarea(placeholder='''Link list\n",
+ "(one link per line)''')\n",
+ "\n",
+ "fileFormat = widgets.Text(\n",
+ " value='{artist} - {track_name}',\n",
+ " placeholder='File name format',\n",
+ " description=\"\"\"File Name : file format to save the downloaded track with, each\n",
+ " tag is surrounded by curly braces. Possible formats:\n",
+ " ['track_name', 'artist', 'album', 'album_artist',\n",
+ " 'genre', 'disc_number', 'duration', 'year',\n",
+ " 'original_date', 'track_number', 'total_tracks',\n",
+ " 'isrc']\"\"\",\n",
+ " disabled=False\n",
+ ")\n",
+ "\n",
+ "searchFormat = widgets.Text(\n",
+ " value='{artist} - {track_name} lyrics',\n",
+ " placeholder='Search format',\n",
+ " description=\"\"\"Search Format : search format to search for on YouTube, each tag is\n",
+ " surrounded by curly braces. Possible formats:\n",
+ " ['track_name', 'artist', 'album', 'album_artist',\n",
+ " 'genre', 'disc_number', 'duration', 'year',\n",
+ " 'original_date', 'track_number', 'total_tracks',\n",
+ " 'isrc']\"\"\",\n",
+ " disabled=False\n",
+ ")\n",
+ "\n",
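+    "# Illustrative example only (not part of the original form): setting File Name to\n",
+    "# '{artist} - {album} - {track_name}' would save a track as 'Artist - Album - Title'\n",
+    "# plus the selected output extension.\n",
+    "\n",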
+ "tab = widgets.Tab()\n",
+ "\n",
+ "LinksType = widgets.RadioButtons(\n",
+ " options=['Songs', 'Playlist', 'Album', 'Username', 'Artist'],\n",
+ " value='Songs',\n",
+ " layout={'width': 'max-content'},\n",
+ " description='Links type:',\n",
+ " disabled=False,\n",
+ ")\n",
+ "\n",
+ "SavePathYT = widgets.Dropdown(options=[\"/content/downloads\", \"/content\"])\n",
+ "\n",
+ "Extension = widgets.Select(options=[\"aac\", \"flac\", \"mp3\", \"m4a\", \"opus\", \"vorbis\", \"wav\"], value=\"mp3\")\n",
+ "\n",
+ "TrimSilence = widgets.ToggleButton(\n",
+ " value=False,\n",
+ " description='Trim silence',\n",
+ " disabled=False,\n",
+ " button_style='',\n",
+ " tooltip='remove silence from the start of the audio',\n",
+ " icon='check'\n",
+ ")\n",
+ "\n",
+ "writeM3u = widgets.ToggleButton(\n",
+ " value=False,\n",
+ " description='Write .m3u playlist',\n",
+ " disabled=False,\n",
+ " button_style='',\n",
+ " tooltip='''generate an .m3u playlist file with youtube links\n",
+ " given a text file containing tracks''',\n",
+ " icon='check'\n",
+ ")\n",
+ "\n",
+ "noMeta = widgets.ToggleButton(\n",
+ " value=False,\n",
+ " description='No metadata',\n",
+ " disabled=False,\n",
+ " button_style='',\n",
+ " tooltip='do not embed metadata in tracks',\n",
+ " icon='check'\n",
+ ")\n",
+ "\n",
+ "nf = widgets.ToggleButton(\n",
+ " value=False,\n",
+ " description='No fallback metadata',\n",
+ " disabled=False,\n",
+ " button_style='',\n",
+ " tooltip='''do not use YouTube as fallback for metadata if track\n",
+ " not found on Spotify''',\n",
+ " icon='check'\n",
+ ")\n",
+ "\n",
+ "dryRun = widgets.ToggleButton(\n",
+ " value=False,\n",
+ " description='Dry run',\n",
+ " disabled=False,\n",
+ " button_style='',\n",
+ " tooltip=''' show only track title and YouTube URL, and then skip\n",
+ " to the next track (if any)''',\n",
+ " icon='check'\n",
+ ")\n",
+ "\n",
+ "MusicVidOnly = widgets.ToggleButton(\n",
+ " value=False,\n",
+ " description='Music Videos Only',\n",
+ " disabled=False,\n",
+ " button_style='',\n",
+    "    tooltip='''search only for music videos on YouTube (works only\n",
+    "    when a YouTube API key is set)''',\n",
+ " icon='check'\n",
+ ")\n",
+ "\n",
+ "NoSpaces = widgets.ToggleButton(\n",
+ " value=False,\n",
+ " description='No Spaces',\n",
+ " disabled=False,\n",
+ " button_style='',\n",
+ " tooltip='''replace spaces with underscores in file names''',\n",
+ " icon='check'\n",
+ ")\n",
+ "\n",
+ "manual = widgets.ToggleButton(\n",
+ " value=False,\n",
+ " description='manually',\n",
+ " disabled=False,\n",
+ " button_style='',\n",
+ " tooltip='''choose the track to download manually from a list of\n",
+ " matching tracks''',\n",
+ " icon='check'\n",
+ ")\n",
+ "\n",
+ "nr = widgets.ToggleButton(\n",
+ " value=False,\n",
+ " description='Keep original',\n",
+ " disabled=False,\n",
+ " button_style='',\n",
+ " tooltip='''do not remove the original file after conversion''',\n",
+ " icon='check'\n",
+ ")\n",
+ "\n",
+ "ExtraArg = widgets.Text(placeholder=\"Extra Arguments\")\n",
+ "\n",
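+    "# MakeButton renders an HTML button in the cell output and registers a Colab output\n",
+    "# callback under a unique id, so clicking the button runs the given Python function.\n",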
+ "class MakeButton(object):\n",
+ " def __init__(self, title, callback, style):\n",
+ " self._title = title\n",
+ " self._callback = callback\n",
+ " self._style = style\n",
+ " def _repr_html_(self):\n",
+    "        callback_id = 'button-' + str(uuid.uuid4())\n",
+    "        output.register_callback(callback_id, self._callback)\n",
+    "        if self._style != \"\":\n",
+    "            style_html = \"p-Widget jupyter-widgets jupyter-button widget-button mod-\" + self._style\n",
+    "        else:\n",
+    "            style_html = \"p-Widget jupyter-widgets jupyter-button widget-button\"\n",
+    "        # HTML button that invokes the registered Colab callback when clicked\n",
+    "        template = \"\"\"<button class=\"{style_html}\" id=\"{callback_id}\"\n",
+    "        onclick=\"google.colab.kernel.invokeFunction('{callback_id}', [], {{}})\">{title}</button>\"\"\"\n",
+    "        html = template.format(title=self._title, callback_id=callback_id, style_html=style_html)\n",
+    "        return html\n",
+ " \n",
+ "def MakeLabel(description, button_style):\n",
+ " return widgets.Button(description=description, disabled=True, button_style=button_style)\n",
+ "\n",
+ "def RefreshPathYT():\n",
+ " if os.path.exists(\"/content/drive/\"):\n",
+ " if os.path.exists(\"/content/drive/Shared drives/\"):\n",
+ " SavePathYT.options = [\"/content\", \"/content/downloads\", \"/content/drive/My Drive\"] + glob(\"/content/drive/My Drive/*/\") + glob(\"/content/drive/Shared drives/*/\")\n",
+ " else:\n",
+ " SavePathYT.options = [\"/content\", \"/content/downloads\", \"/content/drive/My Drive\"] + glob(\"/content/drive/My Drive/*/\")\n",
+ " else:\n",
+ " SavePathYT.options = [\"/content/downloads\", \"/content\"]\n",
+ "\n",
+ "\n",
+ "def ShowYT():\n",
+ " clear_output(wait=True)\n",
+ " RefreshPathYT()\n",
+ " mainTab = widgets.Box([widgets.HBox([widgets.VBox([widgets.HTML(\"Link: \"), Links,\n",
+ " LinksType, searchFormat, fileFormat, widgets.HBox([TrimSilence, writeM3u, noMeta]), widgets.HBox([nf, dryRun, MusicVidOnly]),widgets.HBox([NoSpaces, manual, nr])]),\n",
+ " widgets.VBox([widgets.HTML(\"Extension: \"), Extension,\n",
+ " widgets.HTML(\"Extra Arguments: \"), ExtraArg])])])\n",
+ " tab.children = [mainTab]\n",
+ " tab.set_title(0, 'spotify-downloader')\n",
+ " display(tab)\n",
+ " display(HTML(\"Save Location: \"), SavePathYT, MakeButton(\"Refresh\", RefreshPathYT, \"\"))\n",
+ " if not os.path.exists(\"/content/drive/\"):\n",
+ " display(HTML(\"*If you want to save in Google Drive please run the cell below.\"))\n",
+ " display(HTML(\" \"), MakeButton(\"Download\", DownloadYT, \"info\"))\n",
+ "\n",
+ "def DownloadYT():\n",
+ " if Links.value.strip():\n",
+ " Count = 0\n",
+ " Total = str(len(Links.value.splitlines()))\n",
+ " if writeM3u.value:\n",
+ " M3u = '--write-m3u'\n",
+ " else:\n",
+ " M3u = ''\n",
+ " if TrimSilence.value:\n",
+ " trmS = '--trim-silence'\n",
+ " else:\n",
+ " trmS = ''\n",
+ " if noMeta.value:\n",
+ " noM = '--no-metadata'\n",
+ " else:\n",
+ " noM = ''\n",
+ " if nf.value:\n",
+ " nfv = '--no-fallback-metadata'\n",
+ " else:\n",
+ " nfv = ''\n",
+ " if dryRun.value:\n",
+ " drR = '--dry-run'\n",
+ " else:\n",
+ " drR = ''\n",
+ " if MusicVidOnly.value:\n",
+ " MsV = '--music-videos-only'\n",
+ " else:\n",
+ " MsV = ''\n",
+ " if NoSpaces.value:\n",
+ " NoS = '--no-spaces'\n",
+ " else:\n",
+ " NoS = ''\n",
+ " if manual.value:\n",
+ " mal = '--manual'\n",
+ " else:\n",
+ " mal = ''\n",
+ " if nr.value:\n",
+ " nro = '--no-remove-original' \n",
+ " else:\n",
+ " nro = ''\n",
+ " if not searchFormat.value == '{artist} - {track_name} lyrics':\n",
+ " seFor = f'--search-format \"{searchFormat.value}\"'\n",
+ " else:\n",
+ " seFor = ''\n",
+ " if not fileFormat.value == '{artist} - {track_name}':\n",
+ " fiFor = f'--file-format \"{fileFormat.value}\"'\n",
+ " else:\n",
+ " fiFor = ''\n",
+ " \n",
+ " if not LinksType.value == 'Songs':\n",
+ " with open('tools/spotify-downloader/finish.txt', 'a+') as master:\n",
+ " for Link in Links.value.splitlines():\n",
+ " if LinksType.value == 'Playlist':\n",
+ " outFileName = !spotdl --playlist $Link\n",
+ " elif LinksType.value == 'Album':\n",
+ " outFileName = !spotdl --album $Link\n",
+ " elif LinksType.value == 'Username':\n",
+ " outFileName = !spotdl -u $Link\n",
+ " elif LinksType.value == 'Artist':\n",
+ " outFileName = !spotdl --all-albums $Link\n",
+ " filename = re.search(r\"to\\s(.+\\.txt)\", outFileName[-1]).group(1)\n",
+ " with open(filename, 'r') as r:\n",
+ " master.write(r.read())\n",
+ " else:\n",
+ " for Link in Links.value.splitlines():\n",
+ " with open('tools/spotify-downloader/finish.txt', 'w') as master:\n",
+ " master.write(Link)\n",
+ " # Extra Arguments\n",
+ " \n",
+ " extraargC = ExtraArg.value\n",
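+    "    # Example of the assembled command with default options (illustrative only):\n",
+    "    #   spotdl -l 'tools/spotify-downloader/finish.txt' -f /content/downloads -o .mp3 --overwrite skip\n",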
+ " cmd = r\"spotdl -l 'tools/spotify-downloader/finish.txt' \" \\\n",
+ " fr\"-f {SavePathYT.value} \" \\\n",
+ " fr\"-o .{Extension.value} \" \\\n",
+ " f\"--overwrite skip \" \\\n",
+ " f\"{seFor} {fiFor} \" \\\n",
+    "    f\"{M3u} {trmS} {noM} {nfv} {drR} {MsV} {NoS} {mal} {nro} {extraargC}\" \n",
+ " !$cmd\n",
+ " ShowYT()\n",
+ "\n",
+ "if not os.path.isfile(\"/usr/local/bin/spotdl\"):\n",
+ " get_ipython().system_raw(\"pip3 install spotdl && apt-get install ffmpeg\")\n",
+ "\n",
+ "ShowYT()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "QOyo5zf4suod"
+ },
+ "source": [
+ "### YouTube-DL "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "mYCRR-yWSuyi"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] YouTube-DL \n",
+ "Archive = False\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, uuid, urllib.parse\n",
+ "import ipywidgets as widgets\n",
+ "\n",
+ "from glob import glob\n",
+ "from urllib.parse import urlparse, parse_qs\n",
+ "from IPython.display import HTML, clear_output, YouTubeVideo\n",
+ "from IPython.utils.io import ask_yes_no\n",
+ "from google.colab import output, files\n",
+ "\n",
+ "Links = widgets.Textarea(placeholder='''Video/Playlist Link\n",
+ "(one link per line)''')\n",
+ "\n",
+ "VideoQ = widgets.Dropdown(options=[\"Best Quality (VP9 upto 4K)\", \"Best Compatibility (H.264 upto 1080p)\"])\n",
+ "\n",
+ "AudioQ = widgets.Dropdown(options=[\"Best Quality (Opus)\", \"Best Compatibility (M4A)\"])\n",
+ "\n",
+ "Subtitle = widgets.ToggleButton(value=True, description=\"Subtitle\", button_style=\"info\", tooltip=\"Subtitle\")\n",
+ "\n",
+ "SavePathYT = widgets.Dropdown(options=[\"/content\", \"/content/downloads\"])\n",
+ "\n",
+ "AudioOnly = widgets.ToggleButton(value=False, description=\"Audio Only\", button_style=\"\", tooltip=\"Audio Only\")\n",
+ "\n",
+ "Resolution = widgets.Select(options=[\"Highest\", \"4K\", \"1440p\", \"1080p\", \"720p\", \"480p\", \"360p\", \"240p\", \"144p\"], value=\"Highest\")\n",
+ "\n",
+ "Extension = widgets.Select(options=[\"mkv\", \"webm\"], value=\"mkv\")\n",
+ "\n",
+ "UsernameYT = widgets.Text(placeholder=\"Username\")\n",
+ "\n",
+ "PasswordYT = widgets.Text(placeholder=\"Password\")\n",
+ "\n",
+ "SecAuth = widgets.Text(placeholder=\"2nd Factor Authentication\")\n",
+ "\n",
+ "VideoPW = widgets.Text(placeholder=\"Video Password\")\n",
+ "\n",
+ "GEOBypass = widgets.Dropdown(options=[\"Disable\", \"Hide\", \"AD\", \"AE\", \"AF\", \"AG\", \"AI\", \"AL\", \"AM\", \"AO\", \"AQ\", \"AR\", \"AS\", \"AT\", \"AU\", \"AW\", \"AX\", \"AZ\", \"BA\", \"BB\", \"BD\", \"BE\", \"BF\", \"BG\", \"BH\", \"BI\", \"BJ\", \"BL\", \"BM\", \"BN\", \"BO\", \"BQ\", \"BR\", \"BS\", \"BT\", \"BV\", \"BW\", \"BY\", \"BZ\", \"CA\", \"CC\", \"CD\", \"CF\", \"CG\", \"CH\", \"CI\", \"CK\", \"CL\", \"CM\", \"CN\", \"CO\", \"CR\", \"CU\", \"CV\", \"CW\", \"CX\", \"CY\", \"CZ\", \"DE\", \"DJ\", \"DK\", \"DM\", \"DO\", \"DZ\", \"EC\", \"EE\", \"EG\", \"EH\", \"ER\", \"ES\", \"ET\", \"FI\", \"FJ\", \"FK\", \"FM\", \"FO\", \"FR\", \"GA\", \"GB\", \"GD\", \"GE\", \"GF\", \"GG\", \"GH\", \"GI\", \"GL\", \"GM\", \"GN\", \"GP\", \"GQ\", \"GR\", \"GS\", \"GT\", \"GU\", \"GW\", \"GY\", \"HK\", \"HM\", \"HN\", \"HR\", \"HT\", \"HU\", \"ID\", \"IE\", \"IL\", \"IM\", \"IN\", \"IO\", \"IQ\", \"IR\", \"IS\", \"IT\", \"JE\", \"JM\", \"JO\", \"JP\", \"KE\", \"KG\", \"KH\", \"KI\", \"KM\", \"KN\", \"KP\", \"KR\", \"KW\", \"KY\", \"KZ\", \"LA\", \"LB\", \"LC\", \"LI\", \"LK\", \"LR\", \"LS\", \"LT\", \"LU\", \"LV\", \"LY\", \"MA\", \"MC\", \"MD\", \"ME\", \"MF\", \"MG\", \"MH\", \"MK\", \"ML\", \"MM\", \"MN\", \"MO\", \"MP\", \"MQ\", \"MR\", \"MS\", \"MT\", \"MU\", \"MV\", \"MW\", \"MX\", \"MY\", \"MZ\", \"NA\", \"NC\", \"NE\", \"NF\", \"NG\", \"NI\", \"NL\", \"NO\", \"NP\", \"NR\", \"NU\", \"NZ\", \"OM\", \"PA\", \"PE\", \"PF\", \"PG\", \"PH\", \"PK\", \"PL\", \"PM\", \"PN\", \"PR\", \"PS\", \"PT\", \"PW\", \"PY\", \"QA\", \"RE\", \"RO\", \"RS\", \"RU\", \"RW\", \"SA\", \"SB\", \"SC\", \"SD\", \"SE\", \"SG\", \"SH\", \"SI\", \"SJ\", \"SK\", \"SL\", \"SM\", \"SN\", \"SO\", \"SR\", \"SS\", \"ST\", \"SV\", \"SX\", \"SY\", \"SZ\", \"TC\", \"TD\", \"TF\", \"TG\", \"TH\", \"TJ\", \"TK\", \"TL\", \"TM\", \"TN\", \"TO\", \"TR\", \"TT\", \"TV\", \"TW\", \"TZ\", \"UA\", \"UG\", \"UM\", \"US\", \"UY\", \"UZ\", \"VA\", \"VC\", \"VE\", \"VG\", \"VI\", \"VN\", \"VU\", \"WF\", \"WS\", \"YE\", \"YT\", \"ZA\", \"ZM\", \"ZW\"])\n",
+ "\n",
+ "ProxyYT = widgets.Text(placeholder=\"Proxy URL\")\n",
+ "\n",
+ "MinSleep = widgets.BoundedIntText(value=0, min=0, max=300, step=1, description=\"Min:\")\n",
+ "\n",
+ "MaxSleep = widgets.BoundedIntText(value=0, min=0, max=300, step=1, description=\"Max:\")\n",
+ "\n",
+ "ExtraArg = widgets.Text(placeholder=\"Extra Arguments\")\n",
+ "\n",
+ "class MakeButton(object):\n",
+ " def __init__(self, title, callback, style):\n",
+ " self._title = title\n",
+ " self._callback = callback\n",
+ " self._style = style\n",
+ " def _repr_html_(self):\n",
+    "        callback_id = 'button-' + str(uuid.uuid4())\n",
+    "        output.register_callback(callback_id, self._callback)\n",
+    "        if self._style != \"\":\n",
+    "            style_html = \"p-Widget jupyter-widgets jupyter-button widget-button mod-\" + self._style\n",
+    "        else:\n",
+    "            style_html = \"p-Widget jupyter-widgets jupyter-button widget-button\"\n",
+    "        # HTML button that invokes the registered Colab callback when clicked\n",
+    "        template = \"\"\"<button class=\"{style_html}\" id=\"{callback_id}\"\n",
+    "        onclick=\"google.colab.kernel.invokeFunction('{callback_id}', [], {{}})\">{title}</button>\"\"\"\n",
+    "        html = template.format(title=self._title, callback_id=callback_id, style_html=style_html)\n",
+    "        return html\n",
+ " \n",
+ "def MakeLabel(description, button_style):\n",
+ " return widgets.Button(description=description, disabled=True, button_style=button_style)\n",
+ "\n",
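+    "# upload_archive lets you reuse an existing youtube-dl download archive so already\n",
+    "# downloaded videos are skipped; if you have none, an empty archive file is created.\n",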
+ "def upload_archive():\n",
+ " if ask_yes_no(\"Do you already have an archive file? (y/n)\", default=\"\", interrupt=\"\"):\n",
+ " try:\n",
+ " display(HTML(\"Please upload an archive from your computer. \"))\n",
+ " UploadConfig = files.upload().keys()\n",
+ " clear_output(wait=True)\n",
+ " if len(UploadConfig) == 0:\n",
+ " return display(HTML(\"File upload has been cancelled during upload file. \"))\n",
+ " elif len(UploadConfig) == 1:\n",
+ " for fn in UploadConfig:\n",
+ " if os.path.isfile(\"/content/\" + fn):\n",
+ " get_ipython().system_raw(\"mv -f \" + \"\\\"\" + fn + \"\\\" /root/.youtube-dl.txt && chmod 666 /root/.youtube-dl.txt\")\n",
+ " AudioOnly.observe(AudioOnlyChange)\n",
+ " Subtitle.observe(SubtitleChange)\n",
+ " AudioQ.observe(AudioQChange)\n",
+ " ShowYT()\n",
+ " else:\n",
+ " return display(HTML(\"File upload has been failed during upload file. \"))\n",
+ " else:\n",
+ " for fn in UploadConfig:\n",
+ " get_ipython().system_raw(\"rm -f \" + \"\\\"\" + fn + \"\\\"\")\n",
+ " return display(HTML(\"Please uploading only one file at a time. \"))\n",
+ " except:\n",
+ " clear_output(wait=True)\n",
+ " return display(HTML(\"Error occurred during upload file. \"))\n",
+ " else:\n",
+ " get_ipython().system_raw(\"touch '/root/.youtube-dl.txt'\")\n",
+ " AudioOnly.observe(AudioOnlyChange)\n",
+ " Subtitle.observe(SubtitleChange)\n",
+ " AudioQ.observe(AudioQChange)\n",
+ " ShowYT()\n",
+ "\n",
+ "def RefreshPathYT():\n",
+ " if os.path.exists(\"/content/drive/\"):\n",
+ " if os.path.exists(\"/content/drive/Shared drives/\"):\n",
+ " SavePathYT.options = [\"/content\", \"/content/downloads\", \"/content/drive/My Drive\"] + glob(\"/content/drive/My Drive/*/\") + glob(\"/content/drive/Shared drives/*/\")\n",
+ " else:\n",
+ " SavePathYT.options = [\"/content\", \"/content/downloads\", \"/content/drive/My Drive\"] + glob(\"/content/drive/My Drive/*/\")\n",
+ " else:\n",
+ " SavePathYT.options = [\"/content\", \"/content/downloads\"]\n",
+ "\n",
+ "def AudioOnlyChange(change):\n",
+ " if change[\"type\"] == \"change\" and change[\"new\"]:\n",
+ " VideoQ.disabled = True\n",
+ " Subtitle.disabled = True\n",
+ " if Subtitle.value:\n",
+ " Subtitle.button_style = \"info\"\n",
+ " else:\n",
+ " Subtitle.button_style = \"\"\n",
+ " Resolution.disabled = True\n",
+ " Extension.options = [\"best\", \"aac\", \"flac\", \"mp3\", \"m4a\", \"opus\", \"vorbis\", \"wav\"]\n",
+ " Extension.value = \"best\"\n",
+ " AudioOnly.button_style = \"info\"\n",
+ " elif change[\"type\"] == \"change\" and change[\"new\"] == False:\n",
+ " VideoQ.disabled = False\n",
+ " Subtitle.disabled = False\n",
+ " if Subtitle.value:\n",
+ " Subtitle.button_style = \"info\"\n",
+ " else:\n",
+ " Subtitle.button_style = \"\"\n",
+ " Resolution.disabled = False\n",
+ " if AudioQ.value == \"Best Quality (Opus)\":\n",
+ " Extension.options = [\"mkv\", \"webm\"]\n",
+ " else:\n",
+ " Extension.options = [\"mkv\", \"mp4\", \"webm\"]\n",
+ " Extension.value = \"mkv\"\n",
+ " AudioOnly.button_style = \"\"\n",
+ "\n",
+ "def SubtitleChange(change):\n",
+ " if change[\"type\"] == \"change\" and change[\"new\"]:\n",
+ " Subtitle.button_style = \"info\"\n",
+ " elif change[\"type\"] == \"change\" and change[\"new\"] == False:\n",
+ " Subtitle.button_style = \"\"\n",
+ "\n",
+ "def AudioQChange(change):\n",
+ " if change[\"type\"] == \"change\" and change[\"new\"] == \"Best Quality (Opus)\":\n",
+ " Extension.options = [\"mkv\", \"webm\"]\n",
+ " Extension.value = \"mkv\"\n",
+ " elif change[\"type\"] == \"change\" and change[\"new\"] == \"Best Compatibility (M4A)\":\n",
+ " Extension.options = [\"mkv\", \"mp4\", \"webm\"]\n",
+ " Extension.value = \"mkv\"\n",
+ "\n",
+ "def ShowYT():\n",
+ " clear_output(wait=True)\n",
+ " RefreshPathYT()\n",
+ " display(widgets.HBox([widgets.VBox([widgets.HTML(\"Link: \"), Links,\n",
+ " widgets.HTML(\"For website that require an account: \"), UsernameYT, PasswordYT, SecAuth, VideoPW,\n",
+ " widgets.HTML(\"GEO Bypass Country: \"), GEOBypass,\n",
+ " widgets.HTML(\"Proxy: \"), ProxyYT,\n",
+ " widgets.HTML(\"Sleep Interval (second): \"), MinSleep, MaxSleep]),\n",
+ " widgets.VBox([widgets.HTML(\"Video Quality: \"), VideoQ, widgets.HTML(\"Resolution: \"), Resolution,\n",
+ " widgets.HTML(\"Audio Quality: \"), AudioQ, widgets.HTML(\"Extension: \"), Extension,\n",
+ " widgets.HTML(\"Extra Options: \"), widgets.HBox([Subtitle, AudioOnly]),\n",
+ " widgets.HTML(\"Extra Arguments: \"), ExtraArg])]), HTML(\"Save Location: \"),\n",
+ " SavePathYT, MakeButton(\"Refresh\", RefreshPathYT, \"\"))\n",
+    "  # if not os.path.exists(\"/content/drive/\"):\n",
+    "  #     display(HTML(\"*If you want to save in Google Drive please run the cell below.\"))\n",
+ " display(HTML(\" \"), MakeButton(\"Download\", DownloadYT, \"info\"))\n",
+ "\n",
+ "def DownloadYT():\n",
+ " if Links.value.strip():\n",
+ " Count = 0\n",
+ " Total = str(len(Links.value.splitlines()))\n",
+ " # Account Check\n",
+ " if UsernameYT.value.strip() and PasswordYT.value.strip():\n",
+ " accountC = \"--username \\\"\" + UsernameYT.value + \"\\\" --password \\\"\" + PasswordYT.value + \"\\\"\"\n",
+ " else:\n",
+ " accountC = \"\"\n",
+ " if SecAuth.value.strip():\n",
+ " secauthC = \"-2 \" + SecAuth.value\n",
+ " else:\n",
+ " secauthC = \"\"\n",
+ " if VideoPW.value.strip():\n",
+ " videopwC = \"--video-password \" + VideoPW.value\n",
+ " else:\n",
+ " videopwC = \"\"\n",
+ " # Proxy\n",
+ " if ProxyYT.value.strip():\n",
+ " proxyytC = \"--proxy \" + ProxyYT.value\n",
+ " else:\n",
+ " proxyytC = \"\"\n",
+ " # GEO Bypass\n",
+ " if GEOBypass.value == \"Disable\":\n",
+ " geobypass = \"\"\n",
+ " elif GEOBypass.value == \"Hide\":\n",
+ " geobypass = \"--geo-bypass\"\n",
+ " else:\n",
+ " geobypass = \"--geo-bypass-country \" + GEOBypass.value\n",
+ " # Video Quality\n",
+ " if VideoQ.value == \"Best Quality (VP9 upto 4K)\":\n",
+ " videoqC = \"webm\"\n",
+ " else:\n",
+ " videoqC = \"mp4\"\n",
+ " # Audio Quality\n",
+ " if AudioQ.value == \"Best Quality (Opus)\":\n",
+ " audioqC = \"webm\"\n",
+ " else:\n",
+ " audioqC = \"m4a\"\n",
+ " # Audio Only Check\n",
+ " if AudioOnly.value:\n",
+ " subtitleC = \"\"\n",
+ " thumbnailC = \"\"\n",
+ " extC = \"-x --audio-quality 0 --audio-format \" + Extension.value\n",
+ " codecC = \"bestaudio[ext=\" + audioqC + \"]/bestaudio/best\"\n",
+ " else:\n",
+ " if Subtitle.value:\n",
+ " subtitleC = \"--all-subs --convert-subs srt --embed-subs\"\n",
+ " else:\n",
+ " subtitleC = \"\"\n",
+ " if Extension.value == \"mp4\":\n",
+ " thumbnailC = \"--embed-thumbnail\"\n",
+ " else:\n",
+ " thumbnailC = \"\"\n",
+ " extC = \"--merge-output-format \" + Extension.value\n",
+ " if Resolution.value == \"Highest\":\n",
+ " codecC = \"bestvideo[ext=\" + videoqC + \"]+bestaudio[ext=\" + audioqC + \"]/bestvideo+bestaudio/best\"\n",
+ " else:\n",
+ " codecC = \"bestvideo[ext=\" + videoqC + \",height<=\" + Resolution.value.replace(\"4K\", \"2160\").replace(\"p\", \"\") + \"]+bestaudio[ext=\" + audioqC + \"]/bestvideo[height<=\" + Resolution.value.replace(\"4K\", \"2160\").replace(\"p\", \"\") + \"]+bestaudio/bestvideo+bestaudio/best\"\n",
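+    "    # Illustrative example: with 1080p + VP9/Opus selected, codecC becomes\n",
+    "    #   bestvideo[ext=webm,height<=1080]+bestaudio[ext=webm]/bestvideo[height<=1080]+bestaudio/bestvideo+bestaudio/best\n",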
+ " # Archive\n",
+ " if os.path.isfile(\"/root/.youtube-dl.txt\"):\n",
+ " archiveC = \"--download-archive \\\"/root/.youtube-dl.txt\\\"\"\n",
+ " else:\n",
+ " archiveC = \"\"\n",
+ " # Sleep Interval\n",
+ " if MinSleep.value > 0 and MaxSleep.value > 0:\n",
+    "          minsleepC = \"--min-sleep-interval \" + str(MinSleep.value)\n",
+    "          maxsleepC = \"--max-sleep-interval \" + str(MaxSleep.value)\n",
+ " else:\n",
+ " minsleepC = \"\"\n",
+ " maxsleepC = \"\"\n",
+ " # Extra Arguments\n",
+ " extraargC = ExtraArg.value\n",
+ " for Link in Links.value.splitlines():\n",
+ " clear_output(wait=True)\n",
+ " Count += 1\n",
+ " display(HTML(\"Processing link \" + str(Count) + \" out of \" + Total + \" \"))\n",
+ " if \"youtube.com\" in Link or \"youtu.be\" in Link:\n",
+ " display(HTML(\"Currently downloading... \"), YouTubeVideo(Link, width=640, height=360), HTML(\" \"))\n",
+ " else:\n",
+ " display(HTML(\" \"))\n",
+ " if (\"youtube.com\" in Link or \"youtu.be\" in Link) and \"list=\" in Link:\n",
+ " !youtube-dl -i --no-warnings --yes-playlist --add-metadata $accountC $secauthC $videopwC $minsleepC $maxsleepC $geobypass $proxyytC $extC $thumbnailC $subtitleC $archiveC $extraargC -f \"$codecC\" -o \"/root/.YouTube-DL/%(playlist)s/%(playlist_index)s - %(title)s.%(ext)s\" \"$Link\"\n",
+ " else:\n",
+ " !youtube-dl -i --no-warnings --yes-playlist --add-metadata $accountC $secauthC $videopwC $minsleepC $maxsleepC $geobypass $proxyytC $extC $thumbnailC $subtitleC $archiveC $extraargC -f \"$codecC\" -o \"/root/.YouTube-DL/%(title)s.%(ext)s\" \"$Link\"\n",
+ " if not os.path.exists(SavePathYT.value):\n",
+ " get_ipython().system_raw(\"mkdir -p -m 666 \" + SavePathYT.value)\n",
+ " get_ipython().system_raw(\"mv /root/.YouTube-DL/* '\" + SavePathYT.value + \"/'\")\n",
+ " # Archive Download\n",
+ " if os.path.isfile(\"/root/.youtube-dl.txt\"):\n",
+ " files.download(\"/root/.youtube-dl.txt\")\n",
+ " ShowYT()\n",
+ "\n",
+ "if not os.path.isfile(\"/usr/local/bin/youtube-dl\"):\n",
+ " get_ipython().system_raw(\"rm -rf /content/sample_data/ && mkdir -p -m 666 /root/.YouTube-DL/ && apt-get install atomicparsley && curl -L https://yt-dl.org/downloads/latest/youtube-dl -o /usr/local/bin/youtube-dl && chmod a+rx /usr/local/bin/youtube-dl\")\n",
+ "if Archive:\n",
+ " upload_archive()\n",
+ "else:\n",
+ " AudioOnly.observe(AudioOnlyChange)\n",
+ " Subtitle.observe(SubtitleChange)\n",
+ " AudioQ.observe(AudioQChange)\n",
+ " ShowYT()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "FejGUkxPhDmE"
+ },
+ "source": [
+ "## ✧ *P2P-File Downloader* ✧ "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "_GVSJ9jdn6lW"
+ },
+ "source": [
+ "### Deluge "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "z1IqkfEXn-eu"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] Deluge \n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, urllib.request, pathlib\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " findProcess,\n",
+ " loadingAn,\n",
+ " displayUrl,\n",
+ " PortForward_wrapper\n",
+ ")\n",
+ "\n",
+ "clear_output()\n",
+ "loadingAn()\n",
+ "\n",
+ "pathlib.Path('downloads').mkdir(exist_ok=True)\n",
+ "pathlib.Path(f\"{HOME}/.config/deluge/\").mkdir(parents=True, exist_ok=True)\n",
+ "\n",
+ "if not (findProcess(\"/usr/bin/python\", \"deluged\") or findProcess(\"/usr/bin/python\", \"deluge-web\")):\n",
+ " runSh('sudo apt install -y deluged deluge-console deluge-webui')\n",
+ " runSh(\n",
+ " f\"wget -qq https://raw.githubusercontent.com/shirooo39/MiXLab/master/resources/configurations/deluge/core.conf \\\n",
+ " -O {HOME}/.config/deluge/core.conf\"\n",
+ " )\n",
+ " runSh(\n",
+ " f\"wget -qq https://raw.githubusercontent.com/shirooo39/MiXLab/master/resources/configurations/deluge/web.conf \\\n",
+ " -O {HOME}/.config/deluge/web.conf\"\n",
+ " )\n",
+ " runSh('deluged &> /dev/null &', shell=True)\n",
+ " runSh('deluge-web --fork', shell=True)\n",
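+    "  # The two sed patches below disable the WebUI password check and auto-submit the\n",
+    "  # login dialog, so the Deluge web interface opens without asking for a password.\n",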
+ " runSh(\"\"\"sed -i 's/if s.hexdigest() == config\\[\"pwd_sha1\"\\]:/if True:/' /usr/lib/python2.7/dist-packages/deluge/ui/web/auth.py\"\"\")\n",
+ " runSh(\"sed -i 's/onShow:function(){this.passwordField.focus(.*)}/onShow:function(){this.onLogin();}/' /usr/lib/python2.7/dist-packages/deluge/ui/web/js/deluge-all.js\")\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+    "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['deluge', 8112, 'http']], REGION.lower(), [f\"{HOME}/.ngrok2/deluge.yml\", 4042]).start('deluge')\n",
+ "displayUrl(Server, pNamU='Deluge : ')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "OJBVlUw-kKyt"
+ },
+ "source": [
+ "### libtorrent "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "NZgOIKJ3kOL9"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Install libtorrent \n",
+ "# ================================================================ #\n",
+ "\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "!apt install python3-libtorrent\n",
+ "\n",
+ "import libtorrent as lt\n",
+ "\n",
+ "ses = lt.session()\n",
+ "ses.listen_on(6881, 6891)\n",
+ "downloads = []\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "wOroL1PJns93"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Add Torrent from File \n",
+    "# @markdown How to change the download location: (1) double-click the cell to show its code, (2) find the line \"save_path\": \"/content/downloads\", (3) change /content/downloads to your path.\n",
+    "# @markdown > You can run this cell as many times as you want.\n",
+ "# ================================================================ #\n",
+ "\n",
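+    "# Note: this cell reuses the libtorrent session 'ses' and the 'downloads' list\n",
+    "# created by the \"Install libtorrent\" cell above, so run that cell first.\n",
+    "\n",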
+ "import os\n",
+ "from IPython.display import clear_output\n",
+ "from google.colab import files\n",
+ "\n",
+ "if os.path.exists(\"/content/downloads\"):\n",
+ " pass\n",
+ "else:\n",
+ " os.mkdir(\"/content/downloads\")\n",
+ "\n",
+ "source = files.upload()\n",
+ "params = {\n",
+ " \"save_path\": \"/content/downloads\",\n",
+ " \"ti\": lt.torrent_info(list(source.keys())[0]),\n",
+ "}\n",
+ "downloads.append(ses.add_torrent(params))\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "nOQBAsoenwLb"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Add Torrent from Magnet Link \n",
+    "# @markdown How to change the download location: (1) double-click the cell to show its code, (2) find the line params = {\"save_path\": \"/content/downloads\"}, (3) change /content/downloads to your path.\n",
+    "# @markdown > You can run this cell as many times as you want.\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "if os.path.exists(\"/content/downloads\"):\n",
+ " pass\n",
+ "else:\n",
+ " os.mkdir(\"/content/downloads\")\n",
+ "\n",
+ "params = {\"save_path\": \"/content/downloads\"}\n",
+ "\n",
+ "while True:\n",
+ " magnet_link = input(\"Paste the magnet link here or type exit to stop:\\n\")\n",
+ " if magnet_link.lower() == \"exit\":\n",
+ " break\n",
+ " downloads.append(\n",
+ " lt.add_magnet_uri(ses, magnet_link, params)\n",
+ " )\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "vY4-WX3FmMBB"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] libtorrent \n",
+ "# ================================================================ #\n",
+ "\n",
+ "import time\n",
+ "from IPython.display import display\n",
+ "import ipywidgets as widgets\n",
+ "\n",
+ "state_str = [\n",
+ " \"queued\",\n",
+ " \"checking\",\n",
+ " \"downloading metadata\",\n",
+ " \"downloading\",\n",
+ " \"finished\",\n",
+ " \"seeding\",\n",
+ " \"allocating\",\n",
+ " \"checking fastresume\",\n",
+ "]\n",
+ "\n",
+ "layout = widgets.Layout(width=\"auto\")\n",
+ "style = {\"description_width\": \"initial\"}\n",
+ "download_bars = [\n",
+ " widgets.FloatSlider(\n",
+ " step=0.01, disabled=True, layout=layout, style=style\n",
+ " )\n",
+ " for _ in downloads\n",
+ "]\n",
+ "display(*download_bars)\n",
+ "\n",
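+    "# Poll each active torrent roughly once per second: update its slider with the name,\n",
+    "# download rate (kB/s) and state, and remove it from the session once it starts seeding.\n",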
+ "while downloads:\n",
+ " next_shift = 0\n",
+ " for index, download in enumerate(downloads[:]):\n",
+ " bar = download_bars[index + next_shift]\n",
+ " if not download.is_seed():\n",
+ " s = download.status()\n",
+ "\n",
+ " bar.description = \" \".join(\n",
+ " [\n",
+ " download.name(),\n",
+ " str(s.download_rate / 1000),\n",
+ " \"kB/s\",\n",
+ " state_str[s.state],\n",
+ " ]\n",
+ " )\n",
+ " bar.value = s.progress * 100\n",
+ " else:\n",
+ " next_shift -= 1\n",
+ " ses.remove_torrent(download)\n",
+ " downloads.remove(download)\n",
+ " bar.close() # Seems to be not working in Colab (see https://github.com/googlecolab/colabtools/issues/726#issue-486731758)\n",
+ " download_bars.remove(bar)\n",
+ " print(download.name(), \"complete\")\n",
+ " time.sleep(1)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "yqY0BtjuGS78"
+ },
+ "source": [
+ "### qBittorrent "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "Yk8cbx3EdKaK"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] qBittorrent \n",
+    "# @markdown MiXLab is now using VueTorrent as the default qBittorrent WebUI.\n",
+ "#QBITTORRENT_VARIANT = \"official\" #@param [\"official\", \"unofficial\"]\n",
+ "## @markdown ---\n",
+    "## @markdown qBittorrent Default Credential\n",
+    "## @markdown > Username: admin | Password: adminadmin\n",
+ "## @markdown ---\n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, psutil, time, urllib.request\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "\n",
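+    "# Fetch the bundled VueTorrent build and place it at /content/qBittorrent/WebUI so it\n",
+    "# can be served as qBittorrent's alternate WebUI.\n",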
+ "!wget -P /content/qBittorrent/tmp https://raw.githubusercontent.com/shirooo39/MiXLab/master/resources/packages/qbittorrent/vuetorrent.zip\n",
+ "!unzip /content/qBittorrent/tmp/vuetorrent.zip -d /content/qBittorrent/tmp\n",
+ "!mv /content/qBittorrent/tmp/vuetorrent/ /content/qBittorrent/WebUI\n",
+ "clear_output()\n",
+ "\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " checkAvailable,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " runSh,\n",
+ " displayUrl,\n",
+ " findProcess\n",
+ ")\n",
+ "\n",
+ "#Note: need to locate where the WebUI is extracted into and then remove it\n",
+ "# in order to use the proper WebUI for the Official or Unofficial version of qBittorrent\n",
+ "#runSh(\"rm -f /etc/apt/sources.list.d/qbittorrent-team-ubuntu-qbittorrent-stable-bionic.list\")\n",
+ "#runSh(\"rm -f /etc/apt/sources.list.d/qbittorrent-team-ubuntu-qbittorrent-stable-bionic.list.save\")\n",
+ "#runSh(\"rm -f /etc/apt/sources.list.d/poplite-ubuntu-qbittorrent-enhanced-bionic.list\")\n",
+ "#runSh(\"rm -f /etc/apt/sources.list.d/poplite-ubuntu-qbittorrent-enhanced-bionic.list.save\")\n",
+ "#runSh(\"rm -f /usr/bin/qbittorrent\")\n",
+ "#runSh(\"rm -f /usr/bin/qbittorrent-nox\")\n",
+ "#runSh(\"sudo apt-get purge --auto-remove qbittorrent-nox \")\n",
+ "#clear_output()\n",
+ "\n",
+ "def addUtils():\n",
+ " if not checkAvailable(\"/usr/local/sessionSettings\"):\n",
+ " runSh(\"mkdir -p -m 777 /usr/local/sessionSettings\")\n",
+ " if not checkAvailable(\"/content/upload.txt\"):\n",
+ " runSh(\"touch /content/upload.txt\")\n",
+ " if not checkAvailable(\"checkAptUpdate.txt\", userPath=True):\n",
+ " runSh(\"apt update -qq -y\")\n",
+ " runSh(\"apt-get install -y iputils-ping\")\n",
+ "\n",
+ "def configTimezone(auto=True):\n",
+ " if checkAvailable(\"timezone.txt\", userPath=True):\n",
+ " return\n",
+ " if not auto:\n",
+ " runSh(\"sudo dpkg-reconfigure tzdata\")\n",
+ " else:\n",
+ " runSh(\"sudo ln -fs /usr/share/zoneinfo/Asia/Ho_Chi_Minh /etc/localtime\")\n",
+ " runSh(\"sudo dpkg-reconfigure -f noninteractive tzdata\")\n",
+ "\n",
+ "def uploadQBittorrentConfig():\n",
+ " if checkAvailable(\"updatedQBSettings.txt\", userPath=True):\n",
+ " return\n",
+ " runSh(\n",
+ " \"mkdir -p -m 666 /content/qBittorrent /root/.qBittorrent_temp /root/.config/qBittorrent\"\n",
+ " )\n",
+ " runSh(\n",
+ " \"wget -qq https://raw.githubusercontent.com/shirooo39/MiXLab/master/resources/configurations/qbittorrent/qBittorrent.conf \\\n",
+ " -O /root/.config/qBittorrent/qBittorrent.conf\"\n",
+ " )\n",
+ "\n",
+ "def prepareSession():\n",
+ " if checkAvailable(\"ready.txt\", userPath=True):\n",
+ " return\n",
+ " else:\n",
+ " addUtils()\n",
+ " configTimezone()\n",
+ " uploadQBittorrentConfig()\n",
+ "\n",
+ "def installQBittorrent():\n",
+ " if checkAvailable(\"/usr/bin/qbittorrent-nox\"):\n",
+ " return\n",
+ " else:\n",
+ "# if QBITTORRENT_VARIANT == \"official\":\n",
+ " try:\n",
+ "# if checkAvailable(\"/etc/apt/sources.list.d/poplite-ubuntu-qbittorrent-enhanced-bionic.list\"):\n",
+ "# runSh(\"rm /etc/apt/sources.list.d/poplite-ubuntu-qbittorrent-enhanced-bionic.list\")\n",
+ "# elif checkAvailable(\"/etc/apt/sources.list.d/poplite-ubuntu-qbittorrent-enhanced-bionic.list.save\"):\n",
+ "# runSh(\"rm /etc/apt/sources.list.d/poplite-ubuntu-qbittorrent-enhanced-bionic.list.save\")\n",
+ "# else:\n",
+ " runSh(\"sudo add-apt-repository ppa:qbittorrent-team/qbittorrent-stable\")\n",
+ " runSh(\"sudo apt-get update\")\n",
+ " runSh(\"sudo apt install qbittorrent-nox\")\n",
+ " except:\n",
+ " raise Exception('Failed to install qBittorrent!')\n",
+ "# elif QBITTORRENT_VARIANT == \"unofficial\":\n",
+ "# try:\n",
+ "# if checkAvailable(\"/etc/apt/sources.list.d/qbittorrent-team-ubuntu-qbittorrent-stable-bionic.list\"):\n",
+ "# runSh(\"rm /etc/apt/sources.list.d/qbittorrent-team-ubuntu-qbittorrent-stable-bionic.list\")\n",
+ "# elif checkAvailable(\"/etc/apt/sources.list.d/qbittorrent-team-ubuntu-qbittorrent-stable-bionic.list.save\"):\n",
+ "# runSh(\"rm /etc/apt/sources.list.d/qbittorrent-team-ubuntu-qbittorrent-stable-bionic.list.save\")\n",
+ "# else:\n",
+ "# runSh(\"sudo add-apt-repository ppa:poplite/qbittorrent-enhanced\")\n",
+ "# runSh(\"sudo apt-get update\")\n",
+ "# runSh(\"sudo apt-get install qbittorrent-enhanced qbittorrent-enhanced-nox\")\n",
+ "# except:\n",
+ "# raise Exception('Failed to install qBittorrent!')\n",
+ "\n",
+ "def startQBService():\n",
+ " prepareSession()\n",
+ " installQBittorrent()\n",
+ " if not findProcess(\"qbittorrent-nox\", \"-d --webui-port\"):\n",
+ " runSh(f\"qbittorrent-nox -d --webui-port={QB_Port}\")\n",
+ " time.sleep(1)\n",
+ "\n",
+ "QB_Port = 10001\n",
+ "loadingAn()\n",
+ "startQBService()\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+    "server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['qbittorrent', QB_Port, 'http']], REGION.lower(), [f\"{HOME}/.ngrok2/qbittorrent.yml\", 4088]).start('qbittorrent', displayB=False)\n",
+ "displayUrl(server, pNamU='qBittorrent : ')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "nFrxKe_52fSj"
+ },
+ "source": [
+ "### rTorrent "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "cN8mVNe52cYu"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] rTorrent \n",
+    "# @markdown > rTorrent Default Credential: Username: admin | Password: admin\n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, pathlib, zipfile, re, urllib.request\n",
+ "from shutil import copyfile\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " findProcess,\n",
+ " displayUrl,\n",
+ " textAn\n",
+ ")\n",
+ "\n",
+ "clear_output()\n",
+ "loadingAn()\n",
+ "\n",
+ "os.makedirs('tools/', exist_ok=True)\n",
+ "os.makedirs(\"/content/downloads\", mode=0o775, exist_ok=True)\n",
+ "os.makedirs(\"/content/tools/rtorrent/session\", mode=0o775, exist_ok=True)\n",
+ "\n",
+ "configData = \"\"\"\n",
+ "# Where rTorrent saves the downloaded files\n",
+ "directory = /content/downloads\n",
+ "\n",
+ "# Where rTorrent saves the session\n",
+ "session = /content/tools/rtorrent/session\n",
+ "\n",
+ "# Which ports rTorrent can use (Make sure to open them in your router)\n",
+ "port_range = 50000-50000\n",
+ "port_random = no\n",
+ "\n",
+ "# Check the hash after the end of the download\n",
+ "check_hash = yes\n",
+ "\n",
+ "# Enable DHT (for torrents without trackers)\n",
+ "dht = auto\n",
+ "dht_port = 6881\n",
+ "peer_exchange = yes\n",
+ "\n",
+ "# Authorize UDP trackers\n",
+ "use_udp_trackers = yes\n",
+ "\n",
+ "# Enable encryption when possible\n",
+ "encryption = allow_incoming,try_outgoing,enable_retry\n",
+ "\n",
+ "# SCGI port, used to communicate with Flood\n",
+ "scgi_port = 127.0.0.1:5000\n",
+ "\"\"\"\n",
+ "with open(\"/root/.rtorrent.rc\", 'w') as rC:\n",
+ " rC.write(configData)\n",
+ "\n",
+ "if not os.path.exists(\"/content/tools/flood/config.js\"):\n",
+ " runSh(\"apt install rtorrent screen mediainfo -y\")\n",
+ " runSh(\"git clone --depth 1 https://github.com/jfurrow/flood.git tools/flood\", shell=True)\n",
+ " copyfile(\"tools/flood/config.template.js\", \"tools/flood/config.js\")\n",
+ " runSh(\"npm install\", shell=True, cd=\"tools/flood/\")\n",
+ " runSh(\"npm install pm2 -g\", shell=True, cd=\"tools/flood/\")\n",
+ " runSh(\"npm run build\", shell=True, cd=\"tools/flood/\")\n",
+ "\n",
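+    "    # Pre-seed Flood's user and settings databases so the WebUI accepts the default\n",
+    "    # admin credentials noted above and saves new torrents to /content/downloads.\n",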
+ " userDB = r\"\"\"{\"username\":\"admin\",\"password\":\"$argon2i$v=19$m=4096,t=3,p=1$3hJdjMSgwdUnJ86uYBhOnA$dud5j5/IokJ3hyb+v5aqmDK0jwP9X5W2pz6Qqek++Tk\",\"host\":\"127.0.0.1\",\"port\":\"5000\",\"isAdmin\":true,\"_id\":\"jLJcPySMAEgp35uB\"}\n",
+ "{\"$$indexCreated\":{\"fieldName\":\"username\",\"unique\":true,\"sparse\":false}}\n",
+ "\"\"\"\n",
+ " userSettingsDB = r\"\"\"{\"id\":\"startTorrentsOnLoad\",\"data\":true,\"_id\":\"5leeeHwIN9rKLgG9\"}\n",
+ "{\"id\":\"torrentListColumnWidths\",\"data\":{\"sizeBytes\":61,\"ratio\":56,\"peers\":62},\"_id\":\"PnB52rZSPg5fLEN9\"}\n",
+ "{\"id\":\"torrentDestination\",\"data\":\"/content/downloads\",\"_id\":\"YcGroeyigKYWM8Ol\"}\n",
+ "{\"id\":\"mountPoints\",\"data\":[\"/\"],\"_id\":\"gJlGwWqOsyPfkLyJ\"}\n",
+ "{\"id\":\"torrentListViewSize\",\"data\":\"expanded\",\"_id\":\"q0CmirE9c0KnDGV3\"}\n",
+ "\"\"\"\n",
+ "\n",
+ " os.makedirs(\"tools/flood/server/db/jLJcPySMAEgp35uB/settings\", exist_ok=True)\n",
+ " with open(\"tools/flood/server/db/users.db\", 'w') as wDB:\n",
+ " wDB.write(userDB)\n",
+ " with open(\"tools/flood/server/db/jLJcPySMAEgp35uB/settings/settings.db\", 'w') as wDB:\n",
+ " wDB.write(userSettingsDB)\n",
+ "\n",
+ "if not findProcess(\"rtorrent\", \"\"):\n",
+ " runSh(\"screen -d -m -fa -S rtorrent rtorrent\", shell=True)\n",
+ "if not findProcess(\"node\", \"start.js\"): \n",
+ " runSh(\"pm2 start server/bin/start.js\", shell=True, cd=\"tools/flood/\")\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+    "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['rTorrent', 3000, 'http']], REGION.lower(), [f\"{HOME}/.ngrok2/rTorrent.yml\", 1463]).start('rTorrent', btc='b', displayB=True)\n",
+ "displayUrl(Server, pNamU='rTorrent : ')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Ssn-ZMNcv5UQ"
+ },
+ "source": [
+ "### SimpleTorrent "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "zb3hWwWE1Us8"
+ },
+ "source": [
+    "NOT WORKING! USE ANOTHER TORRENT DOWNLOADER! \n",
+ "(I'm... probably not going to fix this...) "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "lrCc585SD2f7"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] SimpleTorrent \n",
+ "Install_old_version = False\n",
+ "Auto_UP_Gdrive = False\n",
+ "AUTO_MOVE_PATH = \"/content/drive/MyDrive\"\n",
+ "force_change_version = \"\"\n",
+ "rclone_DestinationPath = \"\"\n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, time, pathlib, urllib.request, requests, tarfile\n",
+ "from subprocess import Popen\n",
+ "from IPython.display import clear_output\n",
+ " \n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "pathDoneCMD = f'{HOME}/doneCMD.sh'\n",
+ " \n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " findProcess,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " displayUrl\n",
+ ")\n",
+ "\n",
+    "# Define the directories SimpleTorrent needs\n",
+ "os.makedirs('downloads', exist_ok=True)\n",
+ "os.makedirs('torrents', exist_ok=True)\n",
+ "os.makedirs('tools/simple-torrent', exist_ok=True)\n",
+ " \n",
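+    "# generateCmd builds an 'rclone move' command that ships a finished download to the\n",
+    "# configured rclone destination, using the rclone.conf stored under /usr/local/sessionSettings.\n",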
+ "def generateCmd(src, dst):\n",
+ " FAST_LIST = True\n",
+ " PATH_RClone_Config = \"/usr/local/sessionSettings\"\n",
+ " cmd = f'rclone move \"{src}\" \"{dst}\" ' \\\n",
+ " f'--config {PATH_RClone_Config}/rclone.conf ' \\\n",
+ " f'{\"--fast-list\" if FAST_LIST else \"\"} --user-agent \"Mozilla\" ' \\\n",
+ " '--transfers 20 --checkers 20 --drive-server-side-across-configs ' \\\n",
+ " '-c --buffer-size 256M --drive-chunk-size 256M ' \\\n",
+ " '--drive-upload-cutoff 256M --drive-acknowledge-abuse ' \\\n",
+ " '--drive-keep-revision-forever --tpslimit 95 --tpslimit-burst 40 ' \\\n",
+ " '--stats-one-line --stats=5s -v'\n",
+ " return cmd\n",
+ "\n",
+ "\n",
+ "if Auto_UP_Gdrive:\n",
+ " data = \"\"\"#!/bin/bash\n",
+ " dir=${CLD_DIR}\n",
+ " path=${CLD_PATH}\n",
+ " abp=\"${dir}/${path}\"\n",
+ " type=${CLD_TYPE}\n",
+ " if [[ ${type} == \"torrent\" ]]; then\n",
+ " \"\"\"\n",
+ "\n",
+ " nUpload = \"\"\" \n",
+ " #Upload to Gdrive\n",
+ " #mkdir -p \"%s/$(dirname \"${path}\")\"\n",
+ " mv \"${abp}\" \"%s/${path}\"\n",
+ " \"\"\" % (AUTO_MOVE_PATH, AUTO_MOVE_PATH)\n",
+ "\n",
+ " rcloneUpload = \"\"\"\n",
+ " #You can also use rcone move file to remote\n",
+ " %s\n",
+ " \"\"\" % generateCmd(r\"${abp}\", rclone_DestinationPath)\n",
+ "\n",
+ " end = \"\"\"\n",
+ " fi\n",
+ " \"\"\"\n",
+ " \n",
+ " data = data + (rcloneUpload if rclone_DestinationPath else nUpload) + end\n",
+ " with open(pathDoneCMD, 'w') as w:\n",
+ " w.write(data)\n",
+ " os.chmod(pathDoneCMD, 0o755)\n",
+ "else:\n",
+ " try:\n",
+ " os.unlink(pathDoneCMD)\n",
+ " except FileNotFoundError:\n",
+ " pass\n",
+ " \n",
+ "configPath = pathlib.Path('tools/simple-torrent/cloud-torrent.json')\n",
+ "configsdata = r\"\"\"\n",
+ "{{\n",
+ " \"AutoStart\": true,\n",
+ " \"EngineDebug\": false,\n",
+ " \"MuteEngineLog\": true,\n",
+ " \"ObfsPreferred\": true,\n",
+ " \"ObfsRequirePreferred\": false,\n",
+ " \"DisableTrackers\": false,\n",
+ " \"DisableIPv6\": false,\n",
+ " \"DownloadDirectory\": \"/content/downloads/\",\n",
+ " \"WatchDirectory\": \"torrents/\",\n",
+ " \"EnableUpload\": true,\n",
+ " \"EnableSeeding\": false,\n",
+ " \"IncomingPort\": 50007,\n",
+ " \"DoneCmd\": \"{}/doneCMD.sh\",\n",
+ " \"SeedRatio\": 1.5,\n",
+ " \"UploadRate\": \"High\",\n",
+ " \"DownloadRate\": \"Unlimited\",\n",
+ " \"TrackerListURL\": \"https://trackerslist.com/best.txt\",\n",
+ " \"AlwaysAddTrackers\": true,\n",
+ " \"ProxyURL\": \"\"\n",
+ "}}\n",
+ "\"\"\".format(HOME)\n",
+ "with open(configPath, \"w+\") as configFile:\n",
+ " configFile.write(configsdata)\n",
+ " \n",
+ "loadingAn()\n",
+ "\n",
+ "if not os.path.isfile(\"tools/simple-torrent/cloud-torrent\"):\n",
+ " filename = 'tools/simple-torrent/cloud-torrent_linux_amd64.gz'\n",
+ " if Install_old_version:\n",
+ " latestTag = '1.2.3'\n",
+ " else:\n",
+ " latestTag = requests.get(\"https://api.github.com/repos/boypt/simple-torrent/releases/latest\").json()['tag_name']\n",
+ " url = \"https://github.com/boypt/simple-torrent/releases/download/\" \\\n",
+ " f\"{latestTag}/{filename[21:]}\"\n",
+ " \n",
+ " urllib.request.urlretrieve(url, filename)\n",
+ " import gzip, shutil\n",
+ " with gzip.open(filename, 'rb') as f_in:\n",
+ " with open('tools/simple-torrent/cloud-torrent', 'wb') as f_out: shutil.copyfileobj(f_in, f_out)\n",
+ " os.chmod('tools/simple-torrent/cloud-torrent', 0o775)\n",
+ " os.remove(filename)\n",
+ " \n",
+ "# Launching SimpleTorrent in background\n",
+ "if not findProcess(\"cloud-torrent\", \"SimpleTorrent\"):\n",
+ " PORT = 4444\n",
+ " try:\n",
+ " urllib.request.urlopen(f\"http://localhost:{PORT}\")\n",
+ " except:\n",
+ " cmdC = f'./cloud-torrent --port {PORT} ' \\\n",
+ " '-t Simple-Torrent ' \\\n",
+ " '-c cloud-torrent.json ' \\\n",
+ " '--host 0.0.0.0'\n",
+ " for run in range(10): \n",
+ " Popen(cmdC.split(), cwd='tools/simple-torrent')\n",
+ " time.sleep(3)\n",
+ " try:\n",
+ " urllib.request.urlopen(f\"http://localhost:{PORT}\")\n",
+ " break\n",
+ " except:\n",
+ " print(\"Unable to start SimpleTorrent! Retrying...\")\n",
+ " \n",
+ "clear_output()\n",
+ "\n",
+    "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['SimpleTorrent', 4444, 'http']], REGION.lower(), [f\"{HOME}/.ngrok2/SimpleTorrent.yml\", 4040]).start('SimpleTorrent')\n",
+ "displayUrl(Server, pNamU='SimpleTorrent : ')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "iLcAVtWT4NTC"
+ },
+ "source": [
+ "### Transmission "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "CePVeFVG4QFz"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] Transmission \n",
+    "# @markdown > Transmission Default Credential: Username: admin | Password: admin\n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, psutil, time, urllib.request, pathlib\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " checkAvailable,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " runSh,\n",
+ " displayUrl,\n",
+ " findProcess\n",
+ ")\n",
+ "\n",
+ "loadingAn()\n",
+ "\n",
+ "if not os.path.exists('/usr/bin/transmission-daemon'):\n",
+ " os.makedirs('downloads', exist_ok=True)\n",
+ " os.makedirs('tools/transmission/', exist_ok=True)\n",
+ " runSh('apt install transmission-daemon')\n",
+ " nTWC = \"https://raw.githubusercontent.com/ronggang/\" \\\n",
+ " \"transmission-web-control/master/release/install-tr-control.sh\"\n",
+ " urllib.request.urlretrieve(nTWC, 'tools/transmission/trInstall.sh')\n",
+ " runSh('bash tools/transmission/trInstall.sh auto')\n",
+ " \n",
+ " try:\n",
+ " pathlib.Path('tools/transmission/trInstall.sh').unlink()\n",
+ " except FileNotFoundError:\n",
+ " pass\n",
+ "\n",
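+    "# Start transmission-daemon with RPC authentication enabled; the credentials passed\n",
+    "# here match the admin/admin default noted in the form above.\n",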
+ "if not findProcess('transmission-daemon', '--no-watch-dir'):\n",
+ " !transmission-daemon --no-watch-dir --config-dir tools/transmission \\\n",
+ " --port 9091 --download-dir /content/downloads/ --dht --utp --no-portmap \\\n",
+ " --peerlimit-global 9999 --peerlimit-torrent 9999 --no-global-seedratio \\\n",
+ " -u admin -v admin --auth\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+    "server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['vuze', 9595, 'http'], ['transmission', 9091, 'http']], REGION.lower(), [f\"{HOME}/.ngrok2/transmission.yml\", 4058]).start('transmission', displayB=False)\n",
+ "displayUrl(server, pNamU='Transmission : ', btc='r')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "bQ73mxqlpNjb"
+ },
+ "source": [
+ "### µTorrent "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "unIq2GEJpLzG"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] µTorrent \n",
+    "# @markdown > µTorrent Default Credential: Username: admin | Password: admin\n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "\n",
+ "# OUTPUT_DIR = \"\" # @param {type:\"string\"}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, pathlib, zipfile, re\n",
+ "import urllib.request\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "r = get_ipython()\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " findProcess,\n",
+ " displayUrl\n",
+ ")\n",
+ "\n",
+ "clear_output()\n",
+ "loadingAn()\n",
+ "\n",
+ "# Installing µTorrent\n",
+ "if not os.path.exists(\"/usr/bin/utserver\"):\n",
+ " os.makedirs(\"downloads\", exist_ok=True)\n",
+ " r.system_raw(\"apt install libssl1.0.0 libssl-dev\")\n",
+ " r.system_raw(r\"wget http://download-new.utorrent.com/endpoint/utserver/os/linux-x64-ubuntu-13-04/track/beta/ -O utserver.tar.gz\")\n",
+ " r.system_raw(r\"tar -zxvf utserver.tar.gz -C /opt/\")\n",
+ " r.system_raw(\"rm -f utserver.tar.gz\")\n",
+ " r.system_raw(\"mv /opt/utorrent-server-* /opt/utorrent\")\n",
+ " os.chmod(\"/opt/utorrent\", 0o777)\n",
+ " r.system_raw(\"ln -s /opt/utorrent/utserver /usr/bin/utserver\")\n",
+ " urllib.request.urlretrieve(\"https://raw.githubusercontent.com/shirooo39/MiXLab/master/resources/configurations/utorrent/utserver.conf\", \"/opt/utorrent/utserver.conf\")\n",
+ "\n",
+ "if not findProcess(\"utserver\", \"-settingspath\"):\n",
+ " cmd = \"utserver -settingspath /opt/utorrent/\" \\\n",
+ " \" -configfile /opt/utorrent/utserver.conf\" \\\n",
+ " \" -daemon\"\n",
+ " runSh(cmd, shell=True)\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+    "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['utorrent', 5454, 'http']], REGION.lower(), [f\"{HOME}/.ngrok2/uTorrent.yml\", 4042]).start('utorrent', displayB=False)\n",
+ "displayUrl(Server, pNamU='µTorrent : ', ExUrl=fr\"http://admin:admin@{Server['url'][7:]}/gui\", btc=\"g\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "UU-y9pOU4sRB"
+ },
+ "source": [
+ "### vuze "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "Uxp5DDkJ4ue1"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] vuze \n",
+ "# @markdown > viuze Default CredentialUsername: rootPassword: yesme\n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, psutil, time, urllib.request, pathlib\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " checkAvailable,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " runSh,\n",
+ " displayUrl,\n",
+ " findProcess\n",
+ ")\n",
+ "\n",
+ "def latestTag():\n",
+ " import re\n",
+ " from urllib.request import urlopen\n",
+ " htmlF = urlopen(\"http://dev.vuze.com/\").read().decode('UTF-8')\n",
+ " return re.findall(r'\\sVuze_(\\d{4})\\sRelease\\s', htmlF)[0]\n",
+ "\n",
+ "\n",
+ "loadingAn()\n",
+ "if not os.path.exists('tools/vuze/Vuze.jar'):\n",
+ " os.makedirs('downloads', exist_ok=True)\n",
+ " os.makedirs('tools/vuze/', exist_ok=True)\n",
+ " runSh('wget -r --level=1 -np -nH -R index.html -nd -k http://svn.vuze.com/public/client/trunk/uis/lib/', cd='tools/vuze/')\n",
+ " rv = latestTag()\n",
+ " dlink = f\"https://netcologne.dl.sourceforge.net/project/azureus/vuze/Vuze_{rv}/Vuze_{rv}.jar\"\n",
+ " urllib.request.urlretrieve(dlink, 'tools/vuze/Vuze.jar') \n",
+ "\n",
+ " # All command found in set command ex: java -jar Vuze.jar --ui=console -c set\n",
+ " runScript = \"\"\"plugin install xmwebui\n",
+ "pair enable\n",
+ "set \"Plugin.xmwebui.Port\" 9595 int\n",
+ "set \"Plugin.xmwebui.Password Enable\" true boolean\n",
+ "set \"Plugin.xmwebui.Pairing Enable\" false boolean\n",
+ "set \"Plugin.xmwebui.User\" \"root\" string\n",
+ "set \"Plugin.xmwebui.Password\" \"yesme\" password\n",
+ "set \"Completed Files Directory\" \"/content/downloads/\" string\n",
+ "set \"General_sDefaultSave_Directory\" \"/content/downloads/\" string\n",
+ "set \"General_sDefaultTorrent_Directory\" \"/content/downloads/\" string\n",
+ "\"\"\"\n",
+ " with open('tools/vuze/Rscript.sh', 'w') as w: w.write(runScript)\n",
+ "\n",
+ "if not findProcess('java', '-jar Vuze.jar'):\n",
+ " runSh('java -jar Vuze.jar --ui=console -e Rscript.sh &', cd='tools/vuze/', shell=True)\n",
+ " time.sleep(7)\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['vuze', 9595, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/vuze.yml\", 4058]).start('vuze', displayB=False)\n",
+ "displayUrl(server, pNamU='vuze : ', ExUrl=fr\"http://root:yesme@{server['url'][7:]}\", btc='b')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "EpwNYbcfRvcl"
+ },
+ "source": [
+ "# ✦ *Utility* ✦ "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "5CWw65NugcjI"
+ },
+ "source": [
+ "## ✧ Checksum Tool ✧ "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "3AbFcLJr5PHk"
+ },
+ "source": [
+ "### MD5 + SHA-1 + SHA-256 "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "OQTQwwFm5PH1"
+ },
+ "source": [
+ "TO DO (later...):\n",
+ "\n",
+ "1. Add some kind of checking to make sure file_name does exist.\n",
+ "2. Add some kind of checking to make sure file_name is not a directory.\n",
+ "3. Add some kind of checking to make sure file_path does exist.\n",
+ "4. Add some kind of checking to make sure file_path is not a file.\n",
+ "5. Add whether the hash file does exist or not. If not, skip."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "ovjsyICM5PH5"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] Generate
\n",
+ "file_path = \"/content/\" #@param {type:\"string\"}\n",
+ "file_name = \"loremipsum.txt\" #@param {type:\"string\"}\n",
+ "\n",
+ "generate_md5 = True #@param {type:\"boolean\"}\n",
+ "generate_sha1 = True #@param {type:\"boolean\"}\n",
+ "generate_sha256 = True #@param {type:\"boolean\"}\n",
+ "\n",
+ "# @markdown > Do NOT forget to add the end slash on the file_path field or it would not \"cd\" properly.\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import IPython\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "%cd \"$file_path\"\n",
+ "clear_output()\n",
+ "\n",
+ "if generate_md5 is True:\n",
+ " print(\"Generating MD5 hash...\")\n",
+ " !md5sum \"$file_name\" > \"$file_name\".md5\n",
+ "else:\n",
+ " pass\n",
+ "\n",
+ "if generate_sha1 is True:\n",
+ " print(\"Generating SHA-1 hash...\")\n",
+ " !sha1sum \"$file_name\" > \"$file_name\".sha1\n",
+ "else:\n",
+ " pass\n",
+ "\n",
+ "if generate_sha256 is True:\n",
+ " print(\"Generating SHA-256 hash...\")\n",
+ " !sha256sum \"$file_name\" > \"$file_name\".sha256\n",
+ "else:\n",
+ " pass\n",
+ "\n",
+ "print(\"\\nAll hashes has been generated.\\n\\n\")\n",
+ "\n",
+ "%cd \"/content\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "O8m9DgFb5PH8"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] Compare
\n",
+ "file_path = \"/content/\" #@param {type:\"string\"}\n",
+ "file_name = \"loremipsum.txt\" #@param {type:\"string\"}\n",
+ "\n",
+ "compare_md5 = True #@param {type:\"boolean\"}\n",
+ "compare_sha1 = True #@param {type:\"boolean\"}\n",
+ "compare_sha256 = True #@param {type:\"boolean\"}\n",
+ "\n",
+ "# @markdown > Do NOT forget to add the end slash on the file_path field or it would not \"cd\" properly.
\n",
+ "# @markdown > If the result shows \"OK\", that means the file matches 100%.
\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import IPython\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "%cd \"$file_path\"\n",
+ "clear_output()\n",
+ "\n",
+ "if compare_md5 is True:\n",
+ " print(\"Comparing MD5 hash...\")\n",
+ " !md5sum -c \"$file_name\".md5\n",
+ "else:\n",
+ " pass\n",
+ "\n",
+ "if compare_sha1 is True:\n",
+ " print(\"\\nComparing SHA-1 hash...\")\n",
+ " !sha1sum -c \"$file_name\".sha1\n",
+ "else:\n",
+ " pass\n",
+ "\n",
+ "if compare_sha256 is True:\n",
+ " print(\"\\nComparing SHA-256 hash...\")\n",
+ " !sha256sum -c \"$file_name\".sha256\n",
+ "else:\n",
+ " pass\n",
+ "\n",
+ "print(\"\\n\")\n",
+ "%cd \"/content\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "pIk3H6xUic8a"
+ },
+ "source": [
+ "## ✧ Files Uploader ✧ "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "LOmbPf7Tihne"
+ },
+ "source": [
+ "### anonfiles "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "BIMRKjTrinOM"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] Upload to anonfiles
\n",
+ "file_path = \"\" # @param {type: \"string\"}\n",
+ "\n",
+ "url = \"https://api.anonfiles.com/upload\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import requests\n",
+ "\n",
+ "x = requests.post(url, files = {'file': open(file_path,'rb')},)\n",
+ "\n",
+ "print(\"Download link: \" + x.json()[\"data\"][\"file\"][\"url\"][\"full\"])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "paeY4yX7jNd1"
+ },
+ "source": [
+ "### BayFiles "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "b5hRr0CmjSI2"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] Upload to BayFiles
\n",
+ "file_path = \"\" # @param {type: \"string\"}\n",
+ "\n",
+ "url = \"https://api.bayfiles.com/upload\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import requests\n",
+ "\n",
+ "x = requests.post(url, files = {'file': open(file_path,'rb')},)\n",
+ "\n",
+ "print(\"Download link: \" + x.json()[\"data\"][\"file\"][\"url\"][\"full\"])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "j-PgCLYrZFbm"
+ },
+ "source": [
+ "## ✧ File Manager ✧ "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "TgwoGxAitg0y"
+ },
+ "source": [
+ "### Cloud Commander "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "sWTkCBV0ZHtJ"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] Cloud Commander \n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, psutil, time, urllib.request\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " loadingAn,\n",
+ " displayUrl,\n",
+ " PortForward_wrapper,\n",
+ ")\n",
+ "\n",
+ "loadingAn()\n",
+ "\n",
+ "if os.path.isfile(\"/tools/node/bin/cloudcmd\") == False:\n",
+ " get_ipython().system_raw(\"npm cache clean -f && npm install -g n && n stable && npm i cloudcmd -g --force\")\n",
+ "\n",
+ "try:\n",
+ " urllib.request.urlopen('http://localhost:7007')\n",
+ "except urllib.error.URLError:\n",
+ " !nohup cloudcmd --online --no-auth --show-config --show-file-name \\\n",
+ " --editor 'deepword' --packer 'tar' --port 7007 \\\n",
+ " --no-confirm-copy --confirm-move --name 'File Manager' \\\n",
+ " --keys-panel --no-contact --console --sync-console-path \\\n",
+ " --no-terminal --no-vim --columns 'name-size-date' --no-log &\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['cloudcmd', 7007, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/CloudCommander.yml\", 7044]).start('cloudcmd')\n",
+ "displayUrl(server, pNamU='Cloud Commander : ')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "xmq_9AJCtvlV"
+ },
+ "source": [
+ "### File Browser "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "Cs_DPqJaabw3"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] File Browser \n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "\n",
+ "# OUTPUT_DIR = \"\" # @param {type:\"string\"}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, pathlib, zipfile, re, urllib.request\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " displayUrl,\n",
+ " findProcess\n",
+ ")\n",
+ "\n",
+ "clear_output()\n",
+ "loadingAn()\n",
+ "\n",
+ "os.makedirs('tools/filebrowser/', exist_ok=True)\n",
+ "\n",
+ "get_ipython().system_raw(r\"curl -fsSL https://filebrowser.xyz/get.sh | bash\")\n",
+ "if not findProcess(\"filebrowser\", \"--noauth\"):\n",
+ " runSh(\"filebrowser --noauth -r /content/ -p 4000 -d tools/filebrowser/filebrowser.db &\", shell=True)\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['filebrowser', 4000, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/FileBrowser.yml\", 4099]).start('filebrowser')\n",
+ "displayUrl(server, pNamU='File Browser : ')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "nUI7G8OSSXbM"
+ },
+ "source": [
+ "### Go HTTP File Server "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "evBFe60vSfxW"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] Go HTTP File Server \n",
+ "HOME_DIRECTORY = \"/content\" #@param {type:\"string\"}\n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, urllib.request, requests\n",
+ "from zipfile import ZipFile as ZZ\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " findProcess,\n",
+ " CWD,\n",
+ "\tdisplayUrl\n",
+ ")\n",
+ "\n",
+ "loadingAn()\n",
+ "\n",
+ "toolLocation = 'tools/ghfs'\n",
+ "binaryF = f\"{toolLocation}/ghfs\"\n",
+ "if not HOME_DIRECTORY:\n",
+ " HOME_DIRECTORY = CWD\n",
+ "\n",
+ "try:\n",
+ " if HOME_DIRECTORY != OldP:\n",
+ " os.system(\"pkill ghfs\")\n",
+ "except NameError:\n",
+ " pass\n",
+ " \n",
+ "OldP = HOME_DIRECTORY\n",
+ "os.makedirs(toolLocation, exist_ok=True)\n",
+ "\n",
+ "if not os.path.exists(binaryF):\n",
+ " ownerProjet = \"mjpclab/go-http-file-server\"\n",
+ " DZipBL = f\"{toolLocation}/Zipghfs.zip\"\n",
+ " latest_tag = requests.get(f\"https://api.github.com/repos/{ownerProjet}/releases/latest\").json()['tag_name']\n",
+ " dBinaryL = f\"https://github.com/{ownerProjet}/releases/download/{latest_tag}/ghfs-{latest_tag}-linux-amd64.zip\"\n",
+ " urllib.request.urlretrieve(dBinaryL, DZipBL)\n",
+ " with ZZ(DZipBL, 'r') as zip_ref:zip_ref.extractall(toolLocation)\n",
+ " os.remove(DZipBL)\n",
+ " os.chmod(binaryF, 0o777)\n",
+ "\n",
+ "if not findProcess(\"ghfs\", \"--listen-plain\"):\n",
+ " runSh(f'./ghfs --listen-plain 1717 -R \\\n",
+ " -a \":/:{HOME_DIRECTORY}\" \\\n",
+ " --global-upload \\\n",
+ " --global-mkdir \\\n",
+ " --global-delete \\\n",
+ " --global-archive \\\n",
+ " --global-archive \\\n",
+ " &', \n",
+ " shell=True,\n",
+ " cd=\"tools/ghfs\")\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['ghfs', 1717, 'http']], REGION.lower(), [f\"{HOME}/.ngrok2/GoHTTPFileServer.yml\", 41717]).start('ghfs')\n",
+ "displayUrl(server, pNamU='Go HTTP File Server : ')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "aStiEPlnDoeY"
+ },
+ "source": [
+ "### Create / Extract Archive "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "88JkX_J3EXWC"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Install the Tools
\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import IPython\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "os.system(\"sudo apt update\")\n",
+ "os.system(\"apt install p7zip-full p7zip-rar unrar rar\")\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "RMy0TxzHzCR9"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "#@markdown ← Create archive \n",
+ "source_path = \"\" #@param {type:\"string\"}\n",
+ "archive_type = \"zip\" #@param [\"zip\", \"7z\", \"rar\", \"tar\", \"tar.gz\"]\n",
+ "archive_name = \"\" #@param {type:\"string\"}\n",
+ "#@markdown > If left empty, the default name will be used (archive)\n",
+ "archive_password = \"\" #@param {type:\"string\"}\n",
+ "#@markdown > Leave this field empty if you do not want to protect the archive with password.\n",
+ "compression_level = \"no_compression\" #@param [\"no_compression\", \"fastest\", \"fast\", \"normal\", \"maximum\", \"ultra\"]\n",
+ "output_path = \"\" #@param {type:\"string\"}\n",
+ "#@markdown > If left empty, the default path will be used (/content)\n",
+ "\n",
+ "#@markdown ---\n",
+ "automatically_clear_cell_output = False # @param{type: \"boolean\"}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "from IPython.display import HTML, clear_output\n",
+ "import os, sys, re\n",
+ "\n",
+ "\n",
+ "if archive_name == \"\":\n",
+ " archive_name = \"archive\"\n",
+ "else:\n",
+ " pass\n",
+ "\n",
+ "if archive_password == \"\":\n",
+ " pass\n",
+ "else:\n",
+ " archive_password = \"-p\" + archive_password\n",
+ "\n",
+ "if compression_level == \"no_compression\":\n",
+ " compression_level = \"-mx=0\"\n",
+ "elif compression_level == \"fastest\":\n",
+ " compression_level = \"-mx=1\"\n",
+ "elif compression_level == \"fast\":\n",
+ " compression_level = \"-mx=3\"\n",
+ "elif compression_level == \"normal\":\n",
+ " compression_level = \"-mx=5\"\n",
+ "elif compression_level == \"maximum\":\n",
+ " compression_level = \"-mx=7\"\n",
+ "elif compression_level == \"ultra\":\n",
+ " compression_level = \"-mx=9\"\n",
+ "\n",
+ "if output_path == \"\":\n",
+ " output_path = \"/content\"\n",
+ "else:\n",
+ " pass\n",
+ "\n",
+ "\n",
+ "if archive_type == \"zip\":\n",
+ " if source_path == \"\":\n",
+ " display(HTML(\"❌ The source_path field is empty! \"))\n",
+ " else:\n",
+ " #output_file_path = re.search(\"^[\\/].+\\/\", source_path)\n",
+ " #output_file_path_raw = output_file_path.group(0)\n",
+ " #delsplit = re.search(\"\\/(?:.(?!\\/))+$\", source_path)\n",
+ " #folder_name = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "\n",
+ " #os.environ['inputDir'] = source_path\n",
+ " #os.environ['outputPath'] = output_file_path_raw\n",
+ " #os.environ['folderName'] = folder_name\n",
+ " #os.environ['archiveLevel'] = compression_level\n",
+ " #os.environ['archivePassword'] = archive_password\n",
+ "\n",
+ " #!7z a -tzip \"$archiveLevel\" \"$archivePassword\" \"$outputPath\"/\"$folderName\".zip \"$inputDirectory\"\n",
+ " !7z a -tzip \"$compression_level\" \"$archive_password\" \"$output_path\"/\"$archive_name\".zip \"$source_path\"\n",
+ "else:\n",
+ " display(HTML(\"❌ More archive format will be added in the future. \"))\n",
+ "\n",
+ "\n",
+ "if automatically_clear_cell_output is True:\n",
+ "\tclear_output()\n",
+ "else:\n",
+ "\tpass"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "Mbmf5lk0zF1q"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "#@markdown ← Extract archive \n",
+ "archive_path = \"\" #@param {type:\"string\"}\n",
+ "archive_type = \"zip\" #@param [\"zip\", \"7z\", \"rar\", \"tar\", \"gzip\", \"iso\"]\n",
+ "archive_password = \"\" #@param {type:\"string\"}\n",
+ "#@markdown > Leave the archive_password field empty if archive is not password protected.\n",
+ "output_path = \"\" #@param {type:\"string\"}\n",
+ "#@markdown > Leave the output_path field empty to use default extraction path (/content).\n",
+ "\n",
+ "#@markdown ---\n",
+ "automatically_clear_cell_output = False # @param{type: \"boolean\"}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, sys, re\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "\n",
+ "if archive_password == \"\":\n",
+ " pass\n",
+ "elif not archive_password == \"\":\n",
+ " archive_password = \"-p\" + archive_password\n",
+ "\n",
+ "if output_path == \"\":\n",
+ " output_path = \"-o/content\"\n",
+ "elif output_path == \"/content\":\n",
+ " output_path = \"-o/content\"\n",
+ "else:\n",
+ " output_path = \"-o\" + output_path\n",
+ "\n",
+ "\n",
+ "os.environ['inputFile'] = archive_path\n",
+ "os.environ['inputPassword'] = archive_password\n",
+ "os.environ['outputFile'] = output_path\n",
+ "\n",
+ "\n",
+ "if archive_path == \"\":\n",
+ " display(HTML(\"❌ The archive_path field is empty! \"))\n",
+ "else:\n",
+ " if archive_type == \"zip\":\n",
+ " !7z x \"$inputFile\" \"$inputPassword\" \"$outputFile\"\n",
+ " elif archive_type == \"iso\":\n",
+ " !7z x \"$inputFile\" \"$outputFile\"\n",
+ " else:\n",
+ " display(HTML(\"❌ More archive format will be added in the future. \"))\n",
+ "\n",
+ "\n",
+ "if automatically_clear_cell_output is True:\n",
+ "\tclear_output()\n",
+ "else:\n",
+ "\tpass"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "d7hdxEjc-ynr"
+ },
+ "source": [
+ "## ✧ Image Manipulation ✧ "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "qs8R07vxhuo2"
+ },
+ "source": [
+ "Some of these cells might require GPU runtime. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Jbw2QIUB6JKR"
+ },
+ "source": [
+ "### Real-ESRGAN "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "JdnKplLq61kb"
+ },
+ "source": [
+ "GPU runtime is required! "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "5kUMYIALO6yI"
+ },
+ "source": [
+ "This is my own simple Google Colab implementation of xinntao 's amazing Real-ESRGAN project.\n",
+ "\n",
+ " \n",
+ "\n",
+ "Image credit: Real-ESRGAN "
+ ]
+ },
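+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "#============================== [FORM] ==============================#\n",
+ "#@markdown ##← (Optional) Check that a GPU is attached\n",
+ "#====================================================================#\n",
+ "\n",
+ "# A minimal sanity-check sketch (not part of the original Real-ESRGAN setup):\n",
+ "# it simply runs nvidia-smi. If no GPU is reported, switch the runtime type\n",
+ "# to GPU via \"Runtime > Change runtime type\" before installing.\n",
+ "import subprocess\n",
+ "\n",
+ "try:\n",
+ " result = subprocess.run(['nvidia-smi'], capture_output=True, text=True)\n",
+ " print(result.stdout if result.returncode == 0 else 'No GPU detected. Change the runtime type to GPU before continuing.')\n",
+ "except FileNotFoundError:\n",
+ " print('No GPU detected. Change the runtime type to GPU before continuing.')"
+ ]
+ },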
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "OW-WSLlS6S3m"
+ },
+ "outputs": [],
+ "source": [
+ "#============================== [FORM] ==============================#\n",
+ "#@markdown ##← [Install] Real-ESRGAN\n",
+ "#@markdown You MUST run this cell first! \n",
+ "#====================================================================#\n",
+ "\n",
+ "import subprocess, pathlib, shutil\n",
+ "\n",
+ "\n",
+ "main_path = '/content/Real-ESRGAN'\n",
+ "input_path = main_path + '/inputs'\n",
+ "cmd = [\n",
+ " 'apt get update',\n",
+ " 'git clone https://github.com/xinntao/Real-ESRGAN.git',\n",
+ " 'pip install basicsr',\n",
+ " 'pip install facexlib',\n",
+ " 'pip install gfpgan',\n",
+ " 'pip install -r requirements.txt',\n",
+ " 'python setup.py develop'\n",
+ " ]\n",
+ "mdl = [\n",
+ " 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth',\n",
+ " 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.3/RealESRGAN_x2plus_netD.pth',\n",
+ " 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/RealESRNet_x4plus.pth',\n",
+ " 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.3/RealESRGAN_x4plus_netD.pth',\n",
+ " 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth',\n",
+ " 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B_netD.pth'\n",
+ " ]\n",
+ "\n",
+ "\n",
+ "for x in cmd[0:2:]:\n",
+ " subprocess.run(x, shell=True)\n",
+ "for y in cmd[2:]:\n",
+ " subprocess.run(y, shell=True, cwd=main_path)\n",
+ "for z in mdl:\n",
+ " subprocess.run(['wget ' + z + ' -P experiments/pretrained_models'], shell=True, cwd=main_path)\n",
+ "\n",
+ "\n",
+ "remove_path = pathlib.Path(input_path)\n",
+ "shutil.rmtree(remove_path)\n",
+ "remove_path.mkdir()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "eFcZE1D374Gr"
+ },
+ "outputs": [],
+ "source": [
+ "#============================== [FORM] ==============================#\n",
+ "#@markdown ##← Get image\n",
+ "image_source = \"upload\" #@param [\"upload\", \"url\"]\n",
+ "#====================================================================#\n",
+ "\n",
+ "import os, sys, shutil\n",
+ "from IPython.display import clear_output\n",
+ "from google.colab import files\n",
+ "\n",
+ "\n",
+ "main_path = '/content/Real-ESRGAN'\n",
+ "input_path = main_path + '/inputs'\n",
+ "\n",
+ "\n",
+ "if image_source == 'upload':\n",
+ " uploaded = files.upload()\n",
+ "\n",
+ " for filename in uploaded.keys():\n",
+ " dst_path = os.path.join(input_path, filename)\n",
+ " shutil.move(filename, dst_path)\n",
+ "\n",
+ " print(f'Moved file \"{filename}\" to \"{dst_path}\"') \n",
+ "elif image_source == 'url':\n",
+ " print('Enter ONLY direct url! For example: https://internet.com/image.jpg')\n",
+ " print('Leave the field below blank to cancel.\\n')\n",
+ "\n",
+ " image_url = input('URL: ')\n",
+ "\n",
+ " if image_url == '':\n",
+ " clear_output()\n",
+ " sys.exit('String image_url is empty!')\n",
+ " else:\n",
+ " os.system('wget -q ' + image_url + ' -N -P ' + input_path)\n",
+ "\n",
+ " print(f'\\nImage saved to: \"{input_path}\"')"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "OCxq4YzeQ2It"
+ },
+ "outputs": [],
+ "source": [
+ "#============================== [FORM] ==============================#\n",
+ "#@markdown ##← [Start] Real-ESRGAN\n",
+ "style = \"real_life\" #@param [\"real_life\", \"anime\"]\n",
+ "upscale_ratio = 2 #@param {type:\"slider\", min: 1, max:10, step:1}\n",
+ "#@markdown ---\n",
+ "#@markdown ⚙️ Advanced Options ⚙️ \n",
+ "output_format = \"auto\" #@param [\"auto\", \"jpg\", \"png\"]\n",
+ "alpha_upsampler = \"realesrgan\" #@param [\"realesrgan\", \"bicubic\"]\n",
+ "split_chunk = 256 #@param {type:\"slider\", min:0, max:1024, step:128}\n",
+ "custom_upscale_ratio = \"1.5\" #@param {type:\"string\"}\n",
+ "enable_custom_upscale_ratio = False #@param {type:\"boolean\"}\n",
+ "optimize_face = False #@param {type:\"boolean\"}\n",
+ "half_precision_mode = False #@param {type:\"boolean\"}\n",
+ "#@markdown >This cell is not finished yet!\n",
+ "#====================================================================#\n",
+ "\n",
+ "#\n",
+ "# TO DO: if \"inputs\" is empty, upload some image first\n",
+ "# optimize_face is not for anime model.\n",
+ "# add \"performance mode\" by using the X2 model? since it's faster...\n",
+ "# us os.system or subprocess.run\n",
+ "#\n",
+ "\n",
+ "work_path = '/content/Real-ESRGAN'\n",
+ "input_path = work_path + '/inputs'\n",
+ "output_path = work_path + '/results'\n",
+ "model = [\n",
+ " 'RealESRGAN_x2plus.pth',\n",
+ " 'RealESRGAN_x2plus_netD.pth',\n",
+ " 'RealESRNet_x4plus.pth',\n",
+ " 'RealESRGAN_x4plus_netD.pth',\n",
+ " 'RealESRGAN_x4plus_anime_6B.pth',\n",
+ " 'RealESRGAN_x4plus_anime_6B_netD.pth'\n",
+ " ]\n",
+ "output_format = '--ext ' + output_format\n",
+ "alpha_upsampler = '--alpha_upsampler ' + alpha_upsampler\n",
+ "split_chunk = '--tile ' + str(split_chunk)\n",
+ "\n",
+ "if style == 'anime':\n",
+ " use_model = model[4]\n",
+ "else:\n",
+ " use_model = model[2]\n",
+ "\n",
+ "if enable_custom_upscale_ratio is True:\n",
+ " if custom_upscale_ratio == '':\n",
+ " sys.exit('The custom_upscale_ratio field cannot be empty!')\n",
+ " else:\n",
+ " upscale_ratio = '--outscale ' + custom_upscale_ratio\n",
+ "else:\n",
+ " upscale_ratio = '--outscale ' + str(upscale_ratio)\n",
+ "\n",
+ "if optimize_face is True:\n",
+ " optimize_face = '--face_enhance'\n",
+ "else:\n",
+ " optimize_face = ''\n",
+ "\n",
+ "if half_precision_mode is True:\n",
+ " half_precision_mode = '--half'\n",
+ "else:\n",
+ " half_precision_mode = ''\n",
+ "\n",
+ "\n",
+ "!python \"{work_path}/inference_realesrgan.py\" --model_path \"{work_path}/experiments/pretrained_models/{use_model}\" --input \"{input_path}\" --output \"{output_path}\" {upscale_ratio} {split_chunk} {alpha_upsampler} {split_chunk} {optimize_face} {half_precision_mode} {output_format} --suffix 'realesrgan'\n",
+ "\n",
+ "print('\\nResults are saved in:', output_path)\n",
+ "\n",
+ "\n",
+ "#====================================================================================================\n",
+ "#\n",
+ "# import subprocess\n",
+ "# from subprocess import PIPE\n",
+ "\n",
+ "# work_path = '/content/Real-ESRGAN'\n",
+ "# input_path = work_path + '/inputs'\n",
+ "# output_path = work_path + '/results'\n",
+ "# use_model = 'RealESRGAN_x2plus.pth'\n",
+ "\n",
+ "# cmd = 'python inference_realesrgan.py --model_path experiments/pretrained_models/' + use_model + ' --input inputs'\n",
+ "# process_run = subprocess.run(cmd, shell=True, stdout=PIPE, stderr=PIPE, universal_newlines=True, cwd=work_path)\n",
+ "# print(process_run.stdout, process_run.stderr)\n",
+ "\n",
+ "# print('\\nOutputs are saved in:', output_path)\n",
+ "#\n",
+ "#===================================================================================================="
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "3cfIjdfASyNI"
+ },
+ "outputs": [],
+ "source": [
+ "#============================== [FORM] ==============================#\n",
+ "#@markdown ##← [Start] Visualize comparison (optional)\n",
+ "#====================================================================#\n",
+ "#\n",
+ "# Codes below are from Real-ESRGAN.\n",
+ "# Path variables are of course has been changed.\n",
+ "#\n",
+ "#====================================================================#\n",
+ "\n",
+ "working_directory = '/content/Real-ESRGAN'\n",
+ "input_folder = working_directory + '/inputs'\n",
+ "result_folder = working_directory + '/results'\n",
+ "\n",
+ "# utils for visualization\n",
+ "import cv2\n",
+ "import matplotlib.pyplot as plt\n",
+ "def display(img1, img2):\n",
+ " fig = plt.figure(figsize=(25, 10))\n",
+ " ax1 = fig.add_subplot(1, 2, 1) \n",
+ " plt.title('Input image', fontsize=16)\n",
+ " ax1.axis('off')\n",
+ " ax2 = fig.add_subplot(1, 2, 2)\n",
+ " plt.title('Real-ESRGAN output', fontsize=16)\n",
+ " ax2.axis('off')\n",
+ " ax1.imshow(img1)\n",
+ " ax2.imshow(img2)\n",
+ "def imread(img_path):\n",
+ " img = cv2.imread(img_path)\n",
+ " img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n",
+ " return img\n",
+ "\n",
+ "# display each image in the upload folder\n",
+ "import os\n",
+ "import glob\n",
+ "\n",
+ "input_list = sorted(glob.glob(os.path.join(input_folder, '*')))\n",
+ "output_list = sorted(glob.glob(os.path.join(result_folder, '*')))\n",
+ "for input_path, output_path in zip(input_list, output_list):\n",
+ " img_input = imread(input_path)\n",
+ " img_output = imread(output_path)\n",
+ " display(img_input, img_output)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "2QB4GeGX0Wbr"
+ },
+ "outputs": [],
+ "source": [
+ "#============================== [FORM] ==============================#\n",
+ "#@markdown ##← Download results (archived)\n",
+ "#====================================================================#\n",
+ "\n",
+ "zip_filename = 'Real-ESRGAN_result.zip'\n",
+ "\n",
+ "if os.path.exists(zip_filename):\n",
+ " os.remove(zip_filename)\n",
+ "\n",
+ "os.system(f\"zip -r -j {zip_filename} /content/Real-ESRGAN/results/*\")\n",
+ "\n",
+ "files.download(zip_filename)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "e-OWHJwruE6V"
+ },
+ "source": [
+ "### StyleGAN2 "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "OvwxWoyaUsIL"
+ },
+ "source": [
+ "GPU runtime is required! "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "jqz-1eEnuIer"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Install] StyleGAN2 \n",
+ "# ================================================================ #\n",
+ "\n",
+ "%cd /content\n",
+ "!pip install typeguard;\n",
+ "!pip install psutil\n",
+ "!pip install humanize\n",
+ "!pip install tqdm\n",
+ "!rm -rf stylegan2 && git clone https://github.com/NVlabs/stylegan2.git;\n",
+ "%cd /content/stylegan2\n",
+ "\n",
+ "print(\"Installing\")\n",
+ "\n",
+ "from IPython.display import Image, clear_output\n",
+ "from google.colab import files\n",
+ "import sys\n",
+ "import pickle\n",
+ "import numpy as np\n",
+ "import PIL\n",
+ "import psutil\n",
+ "import humanize\n",
+ "import os\n",
+ "import time\n",
+ "from tqdm import tqdm\n",
+ "\n",
+ "from scipy import ndimage\n",
+ "\n",
+ "%tensorflow_version 1.x\n",
+ "sys.path.append('/content/stylegan2/dnnlib')\n",
+ "import dnnlib\n",
+ "import dnnlib.tflib as tflib\n",
+ "dnnlib.tflib.init_tf()\n",
+ "\n",
+ "entity_to_url = {\n",
+ " 'faces': 'https://drive.google.com/uc?id=1erg93hWnekh57m3cwsAnqJYfYVceVVSe',\n",
+ " 'celebs': 'https://drive.google.com/uc?id=1q8VldTeTbruoh34ih6GftOcybGNA0dcZ',\n",
+ " 'bedrooms': 'https://drive.google.com/uc?id=15EV9JBiQ7ifoi-B-DQAZF4sYPdCAsiCY',\n",
+ " 'cars': 'https://drive.google.com/uc?id=1QzWwIqJITrg5NWG7QyqrArhb_4UhStDy',\n",
+ " 'cats': 'https://drive.google.com/uc?id=1Fz12B8TSPiRtzCqjhFxTH_W-rIZ5rSGr',\n",
+ " 'anime': 'https://drive.google.com/uc?id=1z8N_-xZW9AU45rHYGj1_tDHkIkbnMW-R',\n",
+ " 'chruch': 'https://drive.google.com/uc?id=1-0JMXPdCQLIVxkDE_S9pO8t8mWoEvhHl',\n",
+ " 'horse': 'https://drive.google.com/uc?id=1-1oc3016pUDi2er1zEvjGcFy8FC-QAh3',\n",
+ " 'anime': 'https://drive.google.com/uc?id=1-91fGZSsZJPNlFytg5iHvVLqxKWDLFt8',\n",
+ " 'anime_portrait': 'https://drive.google.com/uc?id=1-Bw24cv9o7qjLtd8yq8bzzz9AjR9QAkL',\n",
+ " 'faces2': 'https://drive.google.com/uc?id=18rJYK9oF6D7C607Be1B_Fu53rjjHUAT1',\n",
+ " 'GOT': 'https://drive.google.com/uc?id=1-0LCuuUxUA0R6gdSd9prn5sP7T01iF0e',\n",
+ "}\n",
+ "\n",
+ "model_cache = {}\n",
+ "synthesis_kwargs = dict(output_transform=dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True), minibatch_size=20)\n",
+ "\n",
+ "def gen_pil_image(latents, zoom=1, psi=0.7):\n",
+ " fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True)\n",
+ " image = Gs.run(latents, None, randomize_noise=True, output_transform=fmt, truncation_psi=psi)\n",
+ " if zoom == 1:\n",
+ " return PIL.Image.fromarray(image[0])\n",
+ " else:\n",
+ " print(image[0].shape)\n",
+ " return PIL.Image.fromarray(ndimage.zoom(image[0],(zoom,zoom,1)))\n",
+ "\n",
+ "import google.colab.output\n",
+ "import random\n",
+ "import io\n",
+ "import base64\n",
+ "\n",
+ "def gen(l=None, psi=1):\n",
+ " if l is None:\n",
+ " l = [random.random()*2-1 for x in range(512)]\n",
+ " pimg = gen_pil_image(np.array(l).reshape(1,512), psi=psi)\n",
+ " bio = io.BytesIO()\n",
+ " pimg.save(bio, \"PNG\")\n",
+ " b = bio.getvalue()\n",
+ " return 'data:image/png;base64,'+str(base64.b64encode(b),encoding='utf-8')\n",
+ "\n",
+ "google.colab.output.register_callback('gen', gen)\n",
+ "\n",
+ "##\n",
+ "def fetch_model(name):\n",
+ " if model_cache.get(name):\n",
+ " return model_cache[name]\n",
+ " url = entity_to_url[name]\n",
+ " with dnnlib.util.open_url(url, cache_dir='cache') as f:\n",
+ " _G, _D, Gs = pickle.load(f)\n",
+ " model_cache[name] = Gs\n",
+ " return model_cache[name]\n",
+ "\n",
+ "def fetch_file(filename):\n",
+ " with open(filename,'rb') as f:\n",
+ " return pickle.load(f)\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "BPdx4NeDu1SX"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Render Model \n",
+ "# ================================================================ #\n",
+ "\n",
+ "#choose model here. default is ffhq\n",
+ "import os\n",
+ "Render_Model = \"anime\" #@param [\"faces\",\"faces2\",\"GOT\",\"celebs\",\"bedrooms\",\"cars\",\"cats\",\"chruch\",\"horse\",\"anime\"]\n",
+ "\n",
+ "\n",
+ "if Render_Model == \"faces\":\n",
+ " curr_model = \"faces\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
+ " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')\n",
+ "\n",
+ "if Render_Model == \"faces2\":\n",
+ " curr_model = \"faces2\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
+ " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')\n",
+ "\n",
+ "if Render_Model == \"celebs\":\n",
+ " curr_model = \"celebs\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
+ " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')\n",
+ "\n",
+ "if Render_Model == \"bedrooms\":\n",
+ " curr_model = \"bedrooms\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
+ " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')\n",
+ "\n",
+ "if Render_Model == \"cars\":\n",
+ " curr_model = \"cars\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
+ " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')\n",
+ "\n",
+ "if Render_Model == \"cats\":\n",
+ " curr_model = \"cats\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
+ " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')\n",
+ "\n",
+ "if Render_Model == \"chruch\":\n",
+ " curr_model = \"chruch\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
+ " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')\n",
+ "\n",
+ "if Render_Model == \"horse\":\n",
+ " curr_model = \"horse\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
+ " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')\n",
+ "\n",
+ "if Render_Model == \"anime\":\n",
+ " curr_model = \"anime\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
+ " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')\n",
+ "\n",
+ "if Render_Model == \"anime_portrait\":\n",
+ " curr_model = \"anime_portrait\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
+ " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')\n",
+ "\n",
+ "if Render_Model == \"GOT\":\n",
+ " curr_model = \"GOT\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
+ " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "xYUOT5SAu_wz"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] StyleGAN2 \n",
+ "# ================================================================ #\n",
+ "\n",
+ "from IPython.display import HTML\n",
+ "\n",
+ "def get_latent_html(i):\n",
+ " return \"\"\"\n",
+ " L%03i: \n",
+ " \n",
+ "
\"\"\" % (i, i, i, (random.random()*2-1))\n",
+ "\n",
+ "def get_latents_html():\n",
+ " return '\\n'.join([get_latent_html(i) for i in range(512)])\n",
+ "\n",
+ "input_form = \"\"\"\n",
+ " \n",
+ " \n",
+ "\n",
+ "\n",
+ "
You have currently loaded %s model
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " %s\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "
\n",
+ "
\n",
+ " Generate from latents \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " psi: \n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " Mutate randomly \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " Mutation strength: \n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " Random image \n",
+ "
\n",
+ "
\n",
+ " Normalize latents \n",
+ "
\n",
+ "
\n",
+ "\n",
+ "
\n",
+ "
\n",
+ " Save latents \n",
+ " Load latents \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "
\n",
+ "\"\"\" % (curr_model, get_latents_html())\n",
+ "\n",
+ "javascript = \"\"\"\n",
+ " \n",
+ "\n",
+ "\"\"\"\n",
+ "\n",
+ "HTML(input_form + javascript)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "AMu9crpy-7yb"
+ },
+ "source": [
+ "### waifu2xLab "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Q1H1wCcM-1Vd"
+ },
+ "source": [
+ "GPU runtime is optional, but waifu2x could perform better on GPU. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "o-L3111Z-2a3"
+ },
+ "source": [
+ "waifu2xLab is a Google Colab implementation of tsurumeso 's waifu2x-chainer \n",
+ "\n",
+ " \n",
+ "\n",
+ "2D character picture (Kagamine Rin) is licensed under CC BY-NC by piapro [2]. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "0IOySews_Ine"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Clone waifu2x-chainer and Install Dependencies \n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "\n",
+ "waifu2x_path_1 = \"/content/tools/waifu2x\"\n",
+ "waifu2x_path_2 = waifu2x_path_1 + \"/waifu2x.py\"\n",
+ "input_path = \"/content/waifu2x/input\"\n",
+ "output_path = \"/content/waifu2x/output\"\n",
+ "\n",
+ "\n",
+ "if os.path.exists(waifu2x_path_1) and os.path.isdir(waifu2x_path_1) and os.path.exists(waifu2x_path_2) and os.path.isfile(waifu2x_path_2):\n",
+ " pass\n",
+ "else:\n",
+ " # Installing the required dependencies\n",
+ " # !pip install -q cupy-cuda100\n",
+ " !pip install -q futures\n",
+ " !pip install -q chainer\n",
+ "\n",
+ " # Cloning waifu2x-chainer from github\n",
+ " !git clone -l -s https://github.com/tsurumeso/waifu2x-chainer.git /content/tools/waifu2x\n",
+ "\n",
+ " # Creating input and output directory for waifu2x-chainer to work with\n",
+ " if os.path.exists(input_path) and os.path.exists(output_path):\n",
+ " pass\n",
+ " elif not os.path.exists(input_path) and os.path.exists(output_path):\n",
+ " os.makedirs(input_path)\n",
+ " elif os.path.exists(input_path) and not os.path.exists(output_path):\n",
+ " os.makedirs(output_path)\n",
+ " else:\n",
+ " os.makedirs(input_path)\n",
+ " os.makedirs(output_path)\n",
+ "\n",
+ " clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "d_OGARyM_L8P"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Upload Image or Get from URL \n",
+ "image_source = \"file_upload\" #@param [\"file_upload\", \"url\"]\n",
+ "url = \"\" # @param {type:\"string\"}\n",
+ "# @markdown > For the url, input a direct link to the file. (e.g: https://domain.moe/saber_waifu.jpg )\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "import google.colab.files\n",
+ "\n",
+ "\n",
+ "waifu2x_path_1 = \"/content/tools/waifu2x\"\n",
+ "waifu2x_path_2 = waifu2x_path_1 + \"/waifu2x.py\"\n",
+ "input_path = \"/content/waifu2x/input\"\n",
+ "output_path = \"/content/waifu2x/output\"\n",
+ "\n",
+ "\n",
+ "def IOFolderCheck():\n",
+ " if os.path.exists(input_path) and os.path.exists(output_path):\n",
+ " pass\n",
+ " elif not os.path.exists(input_path) and os.path.exists(output_path):\n",
+ " os.makedirs(input_path)\n",
+ " elif os.path.exists(input_path) and not os.path.exists(output_path):\n",
+ " os.makedirs(output_path)\n",
+ " elif not os.path.exists(input_path) and not os.path.exists(output_path):\n",
+ " os.makedirs(input_path)\n",
+ " os.makedirs(output_path)\n",
+ "\n",
+ "\n",
+ "if os.path.exists(waifu2x_path_1) and os.path.isdir(waifu2x_path_1) and os.path.exists(waifu2x_path_2) and os.path.isfile(waifu2x_path_2):\n",
+ " IOFolderCheck()\n",
+ "\n",
+ " %cd /content/waifu2x/input\n",
+ " clear_output()\n",
+ "\n",
+ "\n",
+ " if image_source == \"file_upload\":\n",
+ " uploaded = google.colab.files.upload()\n",
+ " else:\n",
+ " if url == \"\":\n",
+ " display(HTML(\"❌ The url field is empty! \"))\n",
+ " else:\n",
+ " !wget -q {url}\n",
+ " \n",
+ "\n",
+ " %cd /content\n",
+ " clear_output()\n",
+ "else:\n",
+ " display(HTML(\"❌ Unable to locate waifu2x! Make sure you have already run the first cell first! \"))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "pZJnNTad_W0I"
+ },
+ "outputs": [],
+ "source": [
+ "#============================= FORM ============================= #\n",
+ "#@markdown ← [Start] waifu2xLab \n",
+ "input = \"\" #@param {type:\"string\"}\n",
+ "#@markdown > If the \"input\" and \"output\" fields are empty, waifu2xLab will look for image(s) in \"/content/waifu2x/input\" and store the processed image(s) into \"/content/waifu2x/output\". By default, waifu2xLab will process anything inside the \"input\" folder.To process a single image, type in the absolute path of the file (e.g: /content/downloads/image.jpg).\n",
+ "output = \"\" #@param {type:\"string\"}\n",
+ "#@markdown > If left empty, the default output path will be used: /content/waifu2x/output\n",
+ "\n",
+ "#@markdown ---\n",
+ "processor = \"CPU\" #@param [\"CPU\", \"GPU\"]\n",
+ "mode = \"De-noise\" #@param [\"De-noise\", \"Upscale\", \"De-noise & Upscale\"]\n",
+ "tta = \"Disabled\" #@param [\"Enabled\", \"Disabled\"]\n",
+ "tta_level = \"8\" #@param [\"2\", \"4\", \"8\"]\n",
+ "# tta_level = 2 #@param {type:\"slider\", min:2, max:8, step:2}\n",
+ "denoise_level = 0 #@param {type:\"slider\", min:0, max:3, step:1}\n",
+ "upscale_ratio = 1 #@param {type:\"slider\", min:1, max:10, step:1}\n",
+ "output_quality = 100 #@param {type:\"slider\", min:1, max:100, step:1}\n",
+ "color_profile = \"RGB\" #@param [\"RGB\", \"YUV\"]\n",
+ "model = \"VGG7\" #@param [\"VGG7\", \"UpConv7\", \"ResNet10\", \"UpResNet10\"]\n",
+ "output_format = \"PNG\" #@param [\"PNG\", \"WEBP\"]\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "import google.colab.files\n",
+ "\n",
+ "\n",
+ "waifu2x_path_1 = \"/content/tools/waifu2x\"\n",
+ "waifu2x_path_2 = waifu2x_path_1 + \"/waifu2x.py\"\n",
+ "input_path = \"/content/waifu2x/input\"\n",
+ "output_path = \"/content/waifu2x/output\"\n",
+ "\n",
+ "\n",
+ "def IOFolderCheck():\n",
+ " if os.path.exists(input_path) and os.path.exists(output_path):\n",
+ " pass\n",
+ " elif not os.path.exists(input_path) and os.path.exists(output_path):\n",
+ " os.makedirs(input_path)\n",
+ " elif os.path.exists(input_path) and not os.path.exists(output_path):\n",
+ " os.makedirs(output_path)\n",
+ " elif not os.path.exists(input_path) and not os.path.exists(output_path):\n",
+ " os.makedirs(input_path)\n",
+ " os.makedirs(output_path)\n",
+ "\n",
+ "\n",
+ "# For now, the CPU core is hardcoded to use 4 cores.\n",
+ "# The same goes for GPU, only GPU = 0 will be used.\n",
+ "if processor == \"CPU\":\n",
+ " processor = \"\"\n",
+ "elif processor == \"GPU\":\n",
+ " processor = \"-g 0\"\n",
+ "\n",
+ "# Checking for which mode is chosen.\n",
+ "if mode == \"De-noise\":\n",
+ " mode = \"noise\"\n",
+ "\n",
+ " upscale_ratio = 1\n",
+ "elif mode == \"Upscale\":\n",
+ " mode = \"scale\"\n",
+ "\n",
+ " denoise_level = 0\n",
+ "elif mode == \"De-noise & Upscale\":\n",
+ " mode = \"noise_scale\"\n",
+ "\n",
+ "# Checking whether TTA is enabled or not.\n",
+ "if tta == \"Enabled\":\n",
+ " tta1 = \"-t\"\n",
+ " tta2 = \"-T\"\n",
+ "elif tta == \"Disabled\":\n",
+ " tta1 = \"\"\n",
+ " tta2 = \"\"\n",
+ " tta_level = \"\"\n",
+ "\n",
+ "# Checking for which arch/model is used and convert it into parameter number.\n",
+ "if model == \"VGG7\":\n",
+ " model = 0\n",
+ "elif model == \"UpConv7\":\n",
+ " model = 1\n",
+ "elif model == \"ResNet10\":\n",
+ " model = 2\n",
+ "elif model == \"UpResNet10\":\n",
+ " model = 3\n",
+ "\n",
+ "# Checking for the chosen color profile and convert it into parameter.\n",
+ "if color_profile == \"YUV\":\n",
+ " color_profile = \"y\"\n",
+ "elif color_profile == \"RGB\":\n",
+ " color_profile = \"rgb\"\n",
+ "\n",
+ "# Checking for which output format is chosen and convert it into parameter.\n",
+ "if output_format == \"PNG\":\n",
+ " output_format = \"png\"\n",
+ "elif output_format == \"WEBP\":\n",
+ " output_format = \"webp\"\n",
+ "\n",
+ "# Checking whether input and output fields are empty or not\n",
+ "# If they are empty, the default storing path will be used (/content/waifu2x/output/)\n",
+ "if input == \"\" and output == \"\":\n",
+ " input = input_path\n",
+ " output = output_path\n",
+ "elif input == \"\" and not output == \"\":\n",
+ " input = inpput_path\n",
+ "elif not input == \"\" and output == \"\":\n",
+ " output = output_path\n",
+ "\n",
+ "\n",
+ "if os.path.exists(waifu2x_path_1) and os.path.isdir(waifu2x_path_1) and os.path.exists(waifu2x_path_2) and os.path.isfile(waifu2x_path_2):\n",
+ " IOFolderCheck()\n",
+ "\n",
+ " %cd \"$waifu2x_path_1\"\n",
+ " clear_output()\n",
+ "\n",
+ " !python waifu2x.py {processor} -m {mode} {tta1} {tta2} {tta_level} -n {denoise_level} -s {upscale_ratio} -c {color_profile} -a {model} -e {output_format} -q {output_quality} -i \"{input}\" -o \"{output}\"\n",
+ "\n",
+ " %cd \"/content\"\n",
+ " clear_output()\n",
+ "else:\n",
+ " display(HTML(\"❌ Unable to locate waifu2x! Make sure you have already run the first cell first! \"))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "uQT6GEq9Na_E"
+ },
+ "source": [
+ "## ✧ Programming ✧ "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "FdDNhzc0NdeS"
+ },
+ "source": [
+ "### Visual Studio Code "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "QaKEKUrRNfHI"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] code-server
\n",
+ "# @markdown VS Code in the browser. Run VS Code on any machine anywhere and access it in the browser.
\n",
+ "# @markdown \n",
+ "# @markdown ⚙️ Install Configuration ⚙️ \n",
+ "TOKEN = \"\" \n",
+ "REGION = \"AP\"\n",
+ "USE_FREE_TOKEN = True #{type:\"boolean\"}\n",
+ "INSTALL_EXTENSION = \"ms-python.python ms-vscode.cpptools ritwickdey.LiveServer sidthesloth.html5-boilerplate tht13.python\" #@param {type:\"string\"}\n",
+ "USER_DATA_DIR = \"/content/tools/code-server/userdata\" #@param {type:\"string\"}\n",
+ "OPEN_FOLDER = \"/content/\" #@param {type: \"string\"} \n",
+ "TAG_NAME = \"3.11.1\" #@param {type: \"string\"}\n",
+ "#@markdown > See HERE to get the tag name.\n",
+ "PACKAGES = \"amd64\" #@param [\"x86_64\", \"amd64\"]\n",
+ "RUN_LATEST = True\n",
+ "PORT_FORWARD = \"argotunnel\" #[\"ngrok\", \"localhost\", \"argotunnel\"]\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os,sys, pathlib, zipfile, re, tarfile, shutil\n",
+ "import urllib.request\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " findProcess,\n",
+ " displayUrl,\n",
+ " findPackageR,\n",
+ " textAn\n",
+ ")\n",
+ "\n",
+ "loadingAn()\n",
+ "\n",
+ "os.makedirs('tools/code-server/', exist_ok=True)\n",
+ "os.makedirs('tools/temp', exist_ok=True)\n",
+ "binFile = ''\n",
+ "\n",
+ "# Downloading code-server\n",
+ "if not os.path.exists(\"tools/code-server/README.md\"):\n",
+ " os.system(\"apt install net-tools -y\")\n",
+ "\n",
+ " BASE_URL = r\"https://github.com/cdr/code-server/\"\n",
+ " rawRdata = findPackageR(\"cdr/code-server\",\n",
+ " f\"linux-{PACKAGES}.tar.gz\",\n",
+ " False if RUN_LATEST else TAG_NAME,\n",
+ " all_=True)\n",
+ " file_name = rawRdata['assets']['name']\n",
+ " urlF = rawRdata['assets']['browser_download_url']\n",
+ " output_file = \"tools/temp/code-server.tar.gz\"\n",
+ "\n",
+ " textAn(f\"Installing code-server {rawRdata['tag_name']} ...\", ty=\"twg\")\n",
+ " \n",
+ " urllib.request.urlretrieve(urlF, output_file)\n",
+ " with tarfile.open(output_file, 'r:gz') as tar_ref:\n",
+ " tar_ref.extractall('tools/temp/')\n",
+ " os.renames(\"tools/temp/\"+file_name[:-7], 'tools/code-server/')\n",
+ " try:\n",
+ " pathlib.Path(output_file).unlink()\n",
+ " except FileNotFoundError:\n",
+ " pass\n",
+ " try:\n",
+ " os.remove('tools/code-server/lib/libstdc++.so.6')\n",
+ " except FileNotFoundError:\n",
+ " pass\n",
+ " \n",
+ " binList = ['bin/code-server',\n",
+ " 'code-server']\n",
+ " for b in binList:\n",
+ " if os.path.exists('tools/code-server/'+b):\n",
+ " binFile = b\n",
+ " break\n",
+ " \n",
+ " # workspace settings\n",
+ " configScript = \"\"\"{\n",
+ " \"workbench.colorTheme\": \"Default Dark+\",\n",
+ " \"editor.minimap.enabled\": false\n",
+ "}\n",
+ "\"\"\"\n",
+ " os.makedirs(f'{OPEN_FOLDER}/.vscode', exist_ok=True)\n",
+ " with open(f'{OPEN_FOLDER}/.vscode/settings.json', 'w') as w:w.write(configScript)\n",
+ "\n",
+ " if INSTALL_EXTENSION:\n",
+ " perExtension = INSTALL_EXTENSION.split(' ')\n",
+ " for l in perExtension:\n",
+ " cmdE = f\"./{binFile} \" \\\n",
+ " f\"--user-data-dir {USER_DATA_DIR}\" \\\n",
+ " f\" --install-extension {l}\"\n",
+ " runSh(cmdE, cd=\"tools/code-server\", shell=True)\n",
+ "\n",
+ "\n",
+ "if not findProcess(\"node\", \"--extensions-dir\"):\n",
+ " cmdDo = f\"./{binFile} --auth none \" \\\n",
+ " f\" --port 5050 --user-data-dir {USER_DATA_DIR}\" \\\n",
+ " \" &\"\n",
+ " runSh(cmdDo, \n",
+ " cd=\"tools/code-server\",\n",
+ " shell=True)\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "server = PortForward_wrapper(\n",
+ " PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['code-server', 5050, 'http']], REGION.lower(), \n",
+ " [f\"{HOME}/.ngrok2/code-server.yml\", 30499]\n",
+ ").start('code-server', displayB=False)\n",
+ "displayUrl(server, EcUrl=f\"/?folder={OPEN_FOLDER}\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "-HjoEvVINmgx"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Port Forwarding
\n",
+ "# @markdown Type in whatever PORT you want and separate them with comma and space. `80, 8080, 4040`
\n",
+ "USE_FREE_TOKEN = True \n",
+ "TOKEN = \"\" \n",
+ "REGION = \"US\" #[\"US\", \"EU\", \"AP\", \"AU\", \"SA\", \"JP\", \"IN\"]\n",
+ "PORT_LIST = \"\" #@param {type:\"string\"}\n",
+ "PORT_FORWARD = \"argotunnel\" #[\"ngrok\", \"localhost\", \"argotunnel\"]\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, pathlib, zipfile, re\n",
+ "import urllib.request\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " findProcess,\n",
+ " displayUrl,\n",
+ " textAn\n",
+ ")\n",
+ "\n",
+ "loadingAn()\n",
+ "\n",
+ "os.makedirs('tools/', exist_ok=True)\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "m = []\n",
+ "splitPortList = PORT_LIST.split(',')\n",
+ "for p in splitPortList:\n",
+ " p = int(p)\n",
+ " m.append([f\"s{p}\", p, 'http'])\n",
+ "\n",
+ "Server = PortForward_wrapper(\n",
+ " PORT_FORWARD, TOKEN, USE_FREE_TOKEN, m, REGION.lower(), \n",
+ " [f\"{HOME}/.ngrok2/randomPortOpen.yml\", 45535]\n",
+ ")\n",
+ "\n",
+ "for l in m:\n",
+ " displayUrl(Server.start(l[0], displayB=False, v=False), \n",
+ " pNamU=f\"{l[0][1:]} -> \", cls=False)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "_wlFbVS6JcSL"
+ },
+ "source": [
+ "## ✧ Remote Connection ✧ "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "KFpBZnkQhQz2"
+ },
+ "source": [
+ "**!! NOT FOR CRYPTOCURRENCY MINING !!** "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "WaSgbPEch7KH"
+ },
+ "source": [
+ "### Chrome Remote Desktop "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "1-hL0LM7vRH8"
+ },
+ "source": [
+ "Original code written by PradyumnaKrishna (modified for MiXLab use)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "t4yNp3KmLtZ6"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Create user \n",
+ "username = \"MiXLab\" #@param {type:\"string\"}\n",
+ "password = \"123456qwerty\" #@param {type:\"string\"}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "\n",
+ "print(\"Creating user and setting it up...\")\n",
+ "\n",
+ "# Creation of user\n",
+ "os.system(f\"useradd -m {username}\")\n",
+ "\n",
+ "# Add user to sudo group\n",
+ "os.system(f\"adduser {username} sudo\")\n",
+ " \n",
+ "# Set password of user to 'root'\n",
+ "os.system(f\"echo '{username}:{password}' | sudo chpasswd\")\n",
+ "\n",
+ "# Change default shell from sh to bash\n",
+ "os.system(\"sed -i 's/\\/bin\\/sh/\\/bin\\/bash/g' /etc/passwd\")\n",
+ "\n",
+ "print(\"User created and configured.\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "Q6bl1b0EifVG"
+ },
+ "outputs": [],
+ "source": [
+ "#============================= FORM ============================= #\n",
+ "#@markdown ← [Start] Remote Desktop \n",
+ "#@markdown \n",
+ "#@markdown \tClick HERE (opens in new tab) and set up a computer first. \n",
+ "#@markdown \tAfter you have done setting up a computer, get the Debian Linux command / authcode and paste it into the field below. \n",
+ "#@markdown \tRun the cell and wait for it to finish. \n",
+ "#@markdown \tNow, go to HERE (opens in new tab) and you should see a machine pops up in there. \n",
+ "#@markdown \tClick on that machine to remote it and enter the pin. \n",
+ "#@markdown \n",
+ "CRP = \"\" #@param {type:\"string\"}\n",
+ "\n",
+ "#@markdown Enter a PIN that is equal to or more than 6 digits\n",
+ "Pin = 123456 #@param {type: \"integer\"}\n",
+ "\n",
+ "#@markdown > It takes about 4 to 5 minutes for the installation process.\n",
+ "#================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import subprocess\n",
+ "\n",
+ "\n",
+ "class CRD:\n",
+ " def __init__(self):\n",
+ " os.system(\"apt update\")\n",
+ " self.installCRD()\n",
+ " self.installDesktopEnvironment()\n",
+ " self.installGoogleChorme()\n",
+ " self.finish()\n",
+ "\n",
+ " @staticmethod\n",
+ " def installCRD():\n",
+ " print(\"Installing Chrome Remote Desktop...\")\n",
+ " subprocess.run(['wget', 'https://dl.google.com/linux/direct/chrome-remote-desktop_current_amd64.deb'], stdout=subprocess.PIPE)\n",
+ " subprocess.run(['dpkg', '--install', 'chrome-remote-desktop_current_amd64.deb'], stdout=subprocess.PIPE)\n",
+ " subprocess.run(['apt', 'install', '--assume-yes', '--fix-broken'], stdout=subprocess.PIPE)\n",
+ "\n",
+ " @staticmethod\n",
+ " def installDesktopEnvironment():\n",
+ " print(\"Installing Desktop Environment...\")\n",
+ " os.system(\"export DEBIAN_FRONTEND=noninteractive\")\n",
+ " os.system(\"apt install --assume-yes xfce4 desktop-base xfce4-terminal\")\n",
+ " os.system(\"bash -c 'echo \\\"exec /etc/X11/Xsession /usr/bin/xfce4-session\\\" > /etc/chrome-remote-desktop-session'\")\n",
+ " os.system(\"apt remove --assume-yes gnome-terminal\")\n",
+ " os.system(\"apt install --assume-yes xscreensaver\")\n",
+ " os.system(\"systemctl disable lightdm.service\")\n",
+ "\n",
+ " @staticmethod\n",
+ " def installGoogleChorme():\n",
+ " print(\"Installing Google Chrome...\")\n",
+ " subprocess.run([\"wget\", \"https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb\"], stdout=subprocess.PIPE)\n",
+ " subprocess.run([\"dpkg\", \"--install\", \"google-chrome-stable_current_amd64.deb\"], stdout=subprocess.PIPE)\n",
+ " subprocess.run(['apt', 'install', '--assume-yes', '--fix-broken'], stdout=subprocess.PIPE)\n",
+ "\n",
+ " @staticmethod\n",
+ " def finish():\n",
+ " print(\"Finalizing...\")\n",
+ " os.system(f\"adduser {username} chrome-remote-desktop\")\n",
+ " command = f\"{CRP} --pin={Pin}\"\n",
+ " os.system(f\"su - {username} -c '{command}'\")\n",
+ " os.system(\"service chrome-remote-desktop start\")\n",
+ " print(\"Finished Succesfully!\")\n",
+ "\n",
+ "\n",
+ "try:\n",
+ " if username:\n",
+ " if CRP == \"\":\n",
+ " print(\"Please enter the authcode from the Chrome Remote Desktop site!\")\n",
+ " elif len(str(Pin)) < 6:\n",
+ " print(\"Enter a PIN that is equal to or more than 6 digits!\")\n",
+ " else:\n",
+ " CRD()\n",
+ "except NameError as e:\n",
+ " print(\"Username variable not found! Create a user first!\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Th3Qyn2uttiW"
+ },
+ "source": [
+ "#### Optionals "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "vk2qtOTGIFsQ"
+ },
+ "outputs": [],
+ "source": [
+ "#@title **Google Drive Mount**\n",
+ "#@markdown Google Drive used as Persistance HDD for files. \n",
+ "#@markdown Mounted at `user` Home directory inside drive folder\n",
+ "#@markdown (If `username` variable not defined then use root as default).\n",
+ "\n",
+ "def MountGDrive():\n",
+ " from google.colab import drive\n",
+ "\n",
+ " ! runuser -l $user -c \"yes | python3 -m pip install --user google-colab\" > /dev/null 2>&1\n",
+ "\n",
+ " mount = \"\"\"from os import environ as env\n",
+ "from google.colab import drive\n",
+ "\n",
+ "env['CLOUDSDK_CONFIG'] = '/content/.config'\n",
+ "drive.mount('{}')\"\"\".format(mountpoint)\n",
+ "\n",
+ " with open('/content/mount.py', 'w') as script:\n",
+ " script.write(mount)\n",
+ "\n",
+ " ! runuser -l $user -c \"python3 /content/mount.py\"\n",
+ "\n",
+ "try:\n",
+ " if username:\n",
+ " mountpoint = \"/home/\"+username+\"/drive\"\n",
+ " user = username\n",
+ "except NameError:\n",
+ " print(\"username variable not found, mounting at `/content/drive' using `root'\")\n",
+ " mountpoint = '/content/drive'\n",
+ " user = 'root'\n",
+ "\n",
+ "MountGDrive()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "8icuQYnyKDLk"
+ },
+ "outputs": [],
+ "source": [
+ "#@title **SSH**\n",
+ "\n",
+ "! pip install colab_ssh --upgrade &> /dev/null\n",
+ "\n",
+ "Ngrok = False #@param {type:'boolean'}\n",
+ "Agro = False #@param {type:'boolean'}\n",
+ "\n",
+ "\n",
+ "#@markdown Copy authtoken from https://dashboard.ngrok.com/auth (only for ngrok)\n",
+ "ngrokToken = \"\" #@param {type:'string'}\n",
+ "\n",
+ "\n",
+ "def runNGROK():\n",
+ " from colab_ssh import launch_ssh\n",
+ " from IPython.display import clear_output\n",
+ " launch_ssh(ngrokToken, password)\n",
+ " clear_output()\n",
+ "\n",
+ " print(\"ssh\", username, end='@')\n",
+ " ! curl -s http://localhost:4040/api/tunnels | python3 -c \\\n",
+ " \"import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'][6:].replace(':', ' -p '))\"\n",
+ "\n",
+ "\n",
+ "def runAgro():\n",
+ " from colab_ssh import launch_ssh_cloudflared\n",
+ " launch_ssh_cloudflared(password=password)\n",
+ "\n",
+ "\n",
+ "try:\n",
+ " if username:\n",
+ " pass\n",
+ " elif password:\n",
+ " pass\n",
+ "except NameError:\n",
+ " print(\"No user found using username and password as 'root'\")\n",
+ " username='root'\n",
+ " password='root'\n",
+ "\n",
+ "\n",
+ "if Agro and Ngrok:\n",
+ " print(\"You can't do that\")\n",
+ " print(\"Select only one of them\")\n",
+ "elif Agro:\n",
+ " runAgro()\n",
+ "elif Ngrok:\n",
+ " if ngrokToken == \"\":\n",
+ " print(\"No ngrokToken Found, Please enter it\")\n",
+ " else:\n",
+ " runNGROK()\n",
+ "else:\n",
+ " print(\"Select one of them\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "OXsG6_pxeEFu"
+ },
+ "outputs": [],
+ "source": [
+ "#@title Package Installer { vertical-output: true }\n",
+ "run = False #@param {type:\"boolean\"}\n",
+ "#@markdown *Package management actions (gasp)*\n",
+ "action = \"Install\" #@param [\"Install\", \"Check Installed\", \"Remove\"] {allow-input: true}\n",
+ "\n",
+ "package = \"wget\" #@param {type:\"string\"}\n",
+ "system = \"apt\" #@param [\"apt\", \"\"]\n",
+ "\n",
+ "def install(package=package, system=system):\n",
+ " if system == \"apt\":\n",
+ " !apt --fix-broken install > /dev/null 2>&1\n",
+ " !killall apt > /dev/null 2>&1\n",
+ " !rm /var/lib/dpkg/lock-frontend\n",
+ " !dpkg --configure -a > /dev/null 2>&1\n",
+ "\n",
+ " !apt-get install -o Dpkg::Options::=\"--force-confold\" --no-install-recommends -y $package\n",
+ " \n",
+ " !dpkg --configure -a > /dev/null 2>&1 \n",
+ " !apt update > /dev/null 2>&1\n",
+ "\n",
+ " !apt install $package > /dev/null 2>&1\n",
+ "\n",
+ "def check_installed(package=package, system=system):\n",
+ " if system == \"apt\":\n",
+ " !apt list --installed | grep $package\n",
+ "\n",
+ "def remove(package=package, system=system):\n",
+ " if system == \"apt\":\n",
+ " !apt remove $package\n",
+ "\n",
+ "if run:\n",
+ " if action == \"Install\":\n",
+ " install()\n",
+ " if action == \"Check Installed\":\n",
+ " check_installed()\n",
+ " if action == \"Remove\":\n",
+ " remove()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "UoeBdz6_KE6a"
+ },
+ "outputs": [],
+ "source": [
+ "#@title **Colab Shutdown**\n",
+ "\n",
+ "#@markdown To Kill NGROK Tunnel\n",
+ "NGROK = False #@param {type:'boolean'}\n",
+ "\n",
+ "#@markdown To Unmount GDrive\n",
+ "GDrive = False #@param {type:'boolean'}\n",
+ "\n",
+ "#@markdown To Sleep Colab\n",
+ "Sleep = True #@param {type:'boolean'}\n",
+ "\n",
+ "if NGROK:\n",
+ " ! killall ngrok\n",
+ "\n",
+ "if GDrive:\n",
+ " with open('/content/unmount.py', 'w') as unmount:\n",
+ " unmount.write(\"\"\"from google.colab import drive\n",
+ "drive.flush_and_unmount()\"\"\")\n",
+ " \n",
+ " try:\n",
+ " if user:\n",
+ " ! runuser $user -c 'python3 /content/unmount.py'\n",
+ " except NameError:\n",
+ " print(\"Google Drive not Mounted\")\n",
+ "\n",
+ "if Sleep:\n",
+ " from time import sleep\n",
+ " sleep(43200)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "CKxGMNKUJloT"
+ },
+ "source": [
+ "### IceMW + noVNC "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "NXhG3KGGJqtf"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] IceWM \n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, random, string, urllib.request, time\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "runW = get_ipython()\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " displayUrl,\n",
+ " findProcess,\n",
+ " CWD,\n",
+ " textAn,\n",
+ ")\n",
+ "\n",
+ "# Defining Github latest tag so the code can fetch the latest release, if there is any\n",
+ "def latestTag(link):\n",
+ " import re\n",
+ " from urllib.request import urlopen\n",
+ " htmlF = urlopen(link+\"/releases/latest\").read().decode('UTF-8')\n",
+ " return re.findall(r'.+\\/tag\\/([.0-9A-Za-z]+)\".+/', htmlF)[0]\n",
+ "\n",
+ "\n",
+ "loadingAn()\n",
+ "\n",
+ "os.makedirs(\"tools/noVnc\", exist_ok=True)\n",
+ "\n",
+ "# Generating the password\n",
+ "try:\n",
+ " print(f\"Found old password! : {password}\")\n",
+ "except:\n",
+ " password = ''.join(random.choice(string.ascii_letters + string.digits) for i in range(20))\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "if not findProcess(\"Xtightvnc\", \":1\"):\n",
+ " textAn(\"Please wait while noVNC is being prepared...\")\n",
+ " os.makedirs(f'{HOME}/.vnc', exist_ok=True)\n",
+ " runW.system_raw('apt update -y')\n",
+ " runW.system_raw('apt install -y icewm firefox tightvncserver autocutsel xterm')\n",
+ " runW.system_raw(rf'echo \"{password}\" | vncpasswd -f > ~/.vnc/passwd')\n",
+ " data = \"\"\"\n",
+ "#!/bin/bash\n",
+ "xrdb $HOME/.Xresources\n",
+ "xsetroot -solid black -cursor_name left_ptr\n",
+ "autocutsel -fork\n",
+ "icewm-session &\n",
+ "\"\"\"\n",
+ " with open(f'{HOME}/.vnc/xstartup', 'w+') as wNow: wNow.write(data)\n",
+ " os.chmod(f'{HOME}/.vnc/xstartup', 0o755)\n",
+ " os.chmod(f'{HOME}/.vnc/passwd', 0o400)\n",
+ " \n",
+ " runSh('sudo vncserver :1 -geometry 1440x870 -economictranslate -dontdisconnect &', shell=True)\n",
+ "\n",
+ " BASE_URL = \"https://github.com/geek1011/easy-novnc\"\n",
+ " LATEST_TAG = latestTag(BASE_URL)\n",
+ " output_file = \"tools/noVnc/easy-noVnc_linux-64bit\"\n",
+ " file_name = f\"easy-novnc_linux-64bit\"\n",
+ " urlF = f\"{BASE_URL}/releases/download/{LATEST_TAG}/{file_name}\"\n",
+ "\n",
+ " try:\n",
+ " urllib.request.urlretrieve(urlF, output_file)\n",
+ " except OSError:\n",
+ " pass\n",
+ "\n",
+ " os.chmod(output_file, 0o755)\n",
+ "\n",
+ "if not findProcess(\"easy-noVnc_linux-64bit\", '--addr \"0.0.0.0:6080\"'):\n",
+ " cmdDo = \"./easy-noVnc_linux-64bit --addr 0.0.0.0:6080 --port 5901\" \\\n",
+ " \" &\"\n",
+ " runSh(cmdDo, cd=\"tools/noVnc/\", shell=True)\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['vnc', 6080, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/noVNC.yml\", 4455])\n",
+ "data = Server.start('vnc', displayB=False)\n",
+ "displayUrl(data, pNamU='noVnc : ', EcUrl=f'/vnc.html?autoconnect=true&password={password}&path=vnc&resize=scale&reconnect=true&show_dot=true')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "COqwo7iH6_vu"
+ },
+ "source": [
+ "### NoMachine "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "eypiLPD8UtD2"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] NoMachine \n",
+ "USE_FREE_TOKEN = False\n",
+ "TOKEN = \"\" # @param {type:\"string\"}\n",
+ "REGION = \"US\"\n",
+ "PORT_FORWARD = \"ngrok\"\n",
+ "# @markdown > You would need to provide your own ngrok Authtoken.Click here to register for a free ngrok account.Click here to copy your ngrok Authtoken.Click here to download NoMachine.\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import tarfile\n",
+ "import urllib.request\n",
+ "import shutil\n",
+ "import time\n",
+ "from IPython.display import HTML, clear_output\n",
+ "from subprocess import Popen\n",
+ "\n",
+ "APT_INSTALL = \"apt install -y \"\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "runW = get_ipython()\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " findProcess,\n",
+ " textAn,\n",
+ " displayUrl\n",
+ ")\n",
+ "\n",
+ "loadingAn()\n",
+ "\n",
+ "os.makedirs(\"tools/nomachine\", exist_ok=True)\n",
+ "os.makedirs(\"/root/.icewm\", exist_ok=True)\n",
+ "\n",
+ "# password ganarate\n",
+ "try:\n",
+ " print(f\"Found the old password! : {password}\")\n",
+ "except:\n",
+ " password = 'nomachine'\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "start = time.time()\n",
+ "if not os.path.exists(\"tools/nomachine/NX/bin/nxserver\"):\n",
+ " textAn(\"Please wait while noMachine is being prepared...\")\n",
+ "\n",
+ " runW.system_raw('apt update --quiet --force-yes')\n",
+ "\n",
+ " # Minimal install \n",
+ " runW.system_raw(\n",
+ " 'apt install --quiet --force-yes --no-install-recommends \\\n",
+ " icewm x11-xserver-utils firefox xterm pcmanfm')\n",
+ "\n",
+ " # icewm theme\n",
+ " with open('/root/.icewm/theme', 'w') as w:\n",
+ " w.write('Theme=\"NanoBlue/default.theme\"')\n",
+ " \n",
+ " # with open('/root/.icewm/toolbar', 'w') as w:\n",
+ " # w.write('prog \"chromium\" ! chromium-browser --no-sandbox')\n",
+ "\n",
+ " # nomachine\n",
+ " \n",
+ " staticUrl = \"https://raw.githubusercontent.com/shirooo39/MiXLab/master/resources/packages/nomachine/nomachine_6.9.2_1_x86_64.tar.gz\"\n",
+ " configUrl = \"https://raw.githubusercontent.com/shirooo39/MiXLab/master/resources/packages/nomachine/NXetc.tar.gz\"\n",
+ " \n",
+ " output_file = 'tools/nomachine/nm.tar.gz'\n",
+ " config_file = 'tools/nomachine/etc.tar.gz'\n",
+ " urllib.request.urlretrieve(staticUrl, output_file)\n",
+ " urllib.request.urlretrieve(configUrl, config_file)\n",
+ " \n",
+ " with tarfile.open(output_file, 'r:gz') as t:t.extractall('tools/nomachine')\n",
+ " runSh('./nxserver --install', cd='tools/nomachine/NX', shell=True)\n",
+ " runSh('./nxserver --stop', cd='tools/nomachine/NX/bin', shell=True)\n",
+ " \n",
+ " shutil.rmtree('tools/nomachine/NX/etc')\n",
+ " with tarfile.open(config_file, 'r:gz') as t:t.extractall('tools/nomachine/NX')\n",
+ " os.remove(config_file)\n",
+ " \n",
+ " os.remove(output_file)\n",
+ " runSh('./nxserver --startup', cd='tools/nomachine/NX/bin', shell=True)\n",
+ " runW.system_raw(\"echo root:$password | chpasswd\")\n",
+ "\n",
+ "end = time.time()\n",
+ "\n",
+ "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['nomachine', 4000, 'tcp']], REGION.lower(), [f\"{HOME}/.ngrok2/nomachine.yml\", 8459])\n",
+ "\n",
+ "data = Server.start('nomachine', displayB=False)\n",
+ "host, port = data['url'][7:].split(':')\n",
+ "user = os.popen('whoami').read()\n",
+ "\n",
+ "# Colors\n",
+ "bttxt = 'hsla(10, 50%, 85%, 1)'\n",
+ "btcolor = 'hsla(10, 86%, 56%, 1)'\n",
+ "btshado = 'hsla(10, 40%, 52%, .4)'\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "display(HTML(\"\"\"NoMachine Configuration
Username Password Protocol Host Port
\"\"\"+user+\"\"\" \"\"\"+password+\"\"\" NX \"\"\"+host+\"\"\" \"\"\"+port+\"\"\"
Click HERE to see how to use the configuration.\"\"\"))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "JM1Do14AKIdF"
+ },
+ "source": [
+ "### SSH + noVNC "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "5-jp3jmlKKk5"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] SSH \n",
+ "CREATE_VNC = True #@param {type:\"boolean\"}\n",
+ "CREATE_SSH = True #@param {type:\"boolean\"}\n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "#TOKEN = \"\" #@param {type:\"string\"}\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, random, string, urllib.request, time\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "runW = get_ipython()\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " displayUrl,\n",
+ " findProcess,\n",
+ " CWD,\n",
+ " textAn,\n",
+ ")\n",
+ "\n",
+ "loadingAn()\n",
+ "\n",
+ "# Generating password\n",
+ "try:\n",
+ " print(f\"Found the old password! : {password}\")\n",
+ "except:\n",
+ " password = ''.join(random.choice(string.ascii_letters + string.digits) for i in range(20))\n",
+ "\n",
+ "if CREATE_SSH:\n",
+ " USE_FREE_TOKEN = False\n",
+ "\n",
+ "# Setting up the root password\n",
+ "if CREATE_SSH and os.path.exists('/var/run/sshd') == False:\n",
+ " # Setting up the SSH Daemon\n",
+ " runSh('apt install -qq -o=Dpkg::Use-Pty=0 openssh-server pwgen')\n",
+ " runW.system_raw(\"echo root:$password | chpasswd\")\n",
+ " os.makedirs(\"/var/run/sshd\", exist_ok=True)\n",
+ " runW.system_raw('echo \"PermitRootLogin yes\" >> /etc/ssh/sshd_config')\n",
+ " runW.system_raw('echo \"PasswordAuthentication yes\" >> /etc/ssh/sshd_config')\n",
+ " runW.system_raw('echo \"LD_LIBRARY_PATH=/usr/lib64-nvidia\" >> /root/.bashrc')\n",
+ " runW.system_raw('echo \"export LD_LIBRARY_PATH\" >> /root/.bashrc')\n",
+ "\n",
+ " # Running the SSH Daemon\n",
+ " if not findProcess(\"/usr/sbin/sshd\", command=\"-D\"):\n",
+ " runSh('/usr/sbin/sshd -D &', shell=True)\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "if CREATE_VNC:\n",
+ " # Start = time.time()\n",
+ " textAn(\"Please wait while noVNC is being prepared...\")\n",
+ " os.makedirs(f'{HOME}/.vnc', exist_ok=True)\n",
+ " runW.system_raw('add-apt-repository -y ppa:apt-fast/stable < /dev/null')\n",
+ " runW.system_raw('echo debconf apt-fast/maxdownloads string 16 | debconf-set-selections')\n",
+ " runW.system_raw('echo debconf apt-fast/dlflag boolean true | debconf-set-selections')\n",
+ " runW.system_raw('echo debconf apt-fast/aptmanager string apt-get | debconf-set-selections')\n",
+ " runW.system_raw('apt install -y apt-fast')\n",
+ " runW.system_raw('apt-fast install -y xfce4 xfce4-goodies firefox tightvncserver autocutsel')\n",
+ " runW.system_raw(rf'echo \"{password}\" | vncpasswd -f > ~/.vnc/passwd')\n",
+ " data = \"\"\"\n",
+ "#!/bin/bash\n",
+ "xrdb $HOME/.Xresources\n",
+ "autocutsel -fork\n",
+ "startxfce4 &\n",
+ "\"\"\"\n",
+ " with open(f'{HOME}/.vnc/xstartup', 'w+') as wNow: wNow.write(data)\n",
+ " os.chmod(f'{HOME}/.vnc/xstartup', 0o755)\n",
+ " os.chmod(f'{HOME}/.vnc/passwd', 0o400)\n",
+ " runSh('sudo vncserver &', shell=True)\n",
+ " runSh(f'git clone https://github.com/novnc/noVNC.git {CWD}/noVNC')\n",
+ " runSh(\"bash noVNC/utils/launch.sh --listen 6080 --vnc localhost:5901 &\", shell=True)\n",
+ " # End = time.time()\n",
+ "\n",
+ "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['ssh', 22, 'tcp'], ['vnc', 6080, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/noVNC_SSH.yml\", 4455])\n",
+ "data = Server.start('ssh', displayB=False)\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "Host,port = data['url'][7:].split(':')\n",
+ "data2 = Server.start('vnc', displayB=False)\n",
+ "\n",
+ "if CREATE_VNC:\n",
+ " displayUrl(data2, pNamU='noVnc : ', EcUrl=f'/vnc.html?autoconnect=true&password={password}')\n",
+ "if CREATE_SSH:\n",
+ " display(HTML(\"\"\"SSH Configuration
Host Port Password
\"\"\"+Host+\"\"\" \"\"\"+port+\"\"\" \"\"\"+password+\"\"\"
Simple SSH Commands Terminal connect ssh root@\"\"\"+Host+\"\"\" -p \"\"\"+port+\"\"\" SOCKS5 proxy ssh -D 8282 -q -C -N root@\"\"\"+Host+\"\"\" -p \"\"\"+port+\"\"\"
Click HERE to see how to use the configuration.
\"\"\"))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "0vHRnizI9BXA"
+ },
+ "source": [
+ "### WeTTY "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "FMd-AFnVYZid"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] WeTTY \n",
+ "# @markdown Terminal access in browser over HTTP / HTTPS.\n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, tarfile, urllib.request\n",
+ "from IPython.display import clear_output\n",
+ "from subprocess import Popen\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " findProcess,\n",
+ " displayUrl\n",
+ ")\n",
+ "\n",
+ "loadingAn()\n",
+ "os.makedirs('tools/temp', exist_ok=True)\n",
+ "\n",
+ "if not os.path.exists(\"tools/wetty/wetty\"):\n",
+ " # Build WeTTy from source\n",
+ " # os.system(\"git clone https://github.com/butlerx/wetty.git tools/wetty\")\n",
+ " # Popen('npm install'.split(), cwd='tools/wetty').wait()\n",
+ " # Popen('npm run-script build'.split(), cwd='tools/wetty').wait()\n",
+ " # Popen('npm i -g'.split(), cwd='tools/wetty').wait()\n",
+ " # --------------------------------------------------\n",
+ " # Download a pre-built WeTTy package from github\n",
+ " wettyBF = 'https://raw.githubusercontent.com/shirooo39/MiXLab/master/resources/packages/wetty/wetty.tar.gz'\n",
+ " fileSN = 'tools/temp/wetty.tar.gz'\n",
+ " urllib.request.urlretrieve(wettyBF, fileSN)\n",
+ " with tarfile.open(fileSN, 'r:gz') as t:t.extractall('tools/')\n",
+ " os.remove(fileSN)\n",
+ "\n",
+ "if not findProcess(\"wetty\", \"--port\"):\n",
+ "# Popen(\n",
+ "# r'wetty --port 4343 --bypasshelmet \\\n",
+ "# -b \"/\" -c \"/bin/bash\"'.split(), \n",
+ "# cwd='/content')\n",
+ " Popen(\n",
+ " r'tools/wetty/wetty --port 4343 --bypasshelmet \\\n",
+ " -b \"/\" -c \"/bin/bash\"'.split(), \n",
+ " cwd='/content')\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['wetty', '4343', 'http']], REGION.lower, [f\"{HOME}/.ngrok2/wetty.yml\", 31199]).start('wetty', displayB=True)\n",
+ "displayUrl(server, pNamU='WeTTy : ')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "9JBIZh3OZBaL"
+ },
+ "source": [
+ "## ✧ System Tools ✧ "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "2zGMePbPQJWI"
+ },
+ "source": [
+ "### Glances "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "vLhOue7XQJWa"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] Glances \n",
+ "# @markdown Glances is a cross-platform system monitoring tool written in Python.
\n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, urllib.request\n",
+ "from IPython.display import clear_output\n",
+ "from subprocess import Popen\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " findProcess,\n",
+ " displayUrl\n",
+ ")\n",
+ "\n",
+ "loadingAn()\n",
+ "\n",
+ "if not os.path.exists(\"/usr/local/bin/glances\"):\n",
+ " os.system(\"pip3 install https://github.com/nicolargo/glances/archive/master.zip\")\n",
+ " os.system('pip3 install Bottle')\n",
+ " os.system(\"pip3 install 'glances[gpu,ip]'\")\n",
+ "\n",
+ "if not findProcess(\"glances\", \"--webserver\"):\n",
+ " Popen(\n",
+ " 'glances --webserver --port 61208 --time 0 --enable-process-extended \\\n",
+ " --byte --diskio-show-ramfs --fs-free-space \\\n",
+ " --disable-check-update'.split()\n",
+ " )\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['glances', '61208', 'http']], REGION.lower(), [f\"{HOME}/.ngrok2/Glances.yml\", 31499]).start('glances', displayB=True)\n",
+ "displayUrl(server, pNamU='Glances : ')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "eaUJNGmju5G6"
+ },
+ "source": [
+ "### netdata "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "WSUUUDXsUOkl"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] netdata \n",
+ "# @markdown netdata is a real-time system performance monitoring utility.
\n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, psutil, subprocess, shlex\n",
+ "from IPython.display import HTML, clear_output\n",
+ "import time\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " shellCmd = \"wget -qq https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\" \\\n",
+ " f\" -O {HOME}/.ipython/mixlab.py\"\n",
+ " subprocess.run(shlex.split(shellCmd))\n",
+ "\n",
+ "from mixlab import (\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " runSh,\n",
+ " displayUrl,\n",
+ " textAn\n",
+ ")\n",
+ "\n",
+ "def CheckProcess(process, command):\n",
+ " for pid in psutil.pids():\n",
+ " try:\n",
+ " p = psutil.Process(pid)\n",
+ " if process in p.name():\n",
+ " for arg in p.cmdline():\n",
+ " if command in str(arg): \n",
+ " return True\n",
+ " else:\n",
+ " pass\n",
+ " else:\n",
+ " pass\n",
+ " except:\n",
+ " continue\n",
+ "\n",
+ "def Start_ServerMT():\n",
+ " if CheckProcess(\"netdata\", \"\") != True:\n",
+ " runSh('/usr/sbin/netdata', shell=True)\n",
+ "\n",
+ "loadingAn() \n",
+ "\n",
+ "if not os.path.isfile(\"/usr/sbin/netdata\"):\n",
+ " clear_output(wait=True)\n",
+ " textAn(\"Installing netdata...\")\n",
+ " # Start = time.time()\n",
+ " get_ipython().system_raw(\"bash <(curl -Ss https://my-netdata.io/kickstart.sh) --dont-wait --dont-start-it\")\n",
+ " # End = time.time()\n",
+ " Start_ServerMT()\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['netdata', 19999, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/netdata.yml\", 7044]).start('netdata', 'g')\n",
+ "displayUrl(server, pNamU='netdata : ')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "xzeZBOnhyKPy"
+ },
+ "source": [
+ "### speedtest "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "Az1Yh9WMyQwB"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] speedtest \n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import re\n",
+ "import csv\n",
+ "import sys\n",
+ "import math\n",
+ "import errno\n",
+ "import signal\n",
+ "import socket\n",
+ "import timeit\n",
+ "import datetime\n",
+ "import platform\n",
+ "import threading\n",
+ "import xml.parsers.expat\n",
+ "\n",
+ "try:\n",
+ " import gzip\n",
+ " GZIP_BASE = gzip.GzipFile\n",
+ "except ImportError:\n",
+ " gzip = None\n",
+ " GZIP_BASE = object\n",
+ "\n",
+ "__version__ = '2.1.1'\n",
+ "\n",
+ "class FakeShutdownEvent(object):\n",
+ " \"\"\"Class to fake a threading.Event.isSet so that users of this module\n",
+ " are not required to register their own threading.Event()\n",
+ " \"\"\"\n",
+ "\n",
+ " @staticmethod\n",
+ " def isSet():\n",
+ " \"Dummy method to always return false\"\"\"\n",
+ " return False\n",
+ "\n",
+ "# Some global variables we use\n",
+ "DEBUG = False\n",
+ "_GLOBAL_DEFAULT_TIMEOUT = object()\n",
+ "\n",
+ "# Begin import game to handle Python 2 and Python 3\n",
+ "try:\n",
+ " import json\n",
+ "except ImportError:\n",
+ " try:\n",
+ " import simplejson as json\n",
+ " except ImportError:\n",
+ " json = None\n",
+ "\n",
+ "try:\n",
+ " import xml.etree.cElementTree as ET\n",
+ "except ImportError:\n",
+ " try:\n",
+ " import xml.etree.ElementTree as ET\n",
+ " except ImportError:\n",
+ " from xml.dom import minidom as DOM\n",
+ " from xml.parsers.expat import ExpatError\n",
+ " ET = None\n",
+ "\n",
+ "try:\n",
+ " from urllib2 import (urlopen, Request, HTTPError, URLError,\n",
+ " AbstractHTTPHandler, ProxyHandler,\n",
+ " HTTPDefaultErrorHandler, HTTPRedirectHandler,\n",
+ " HTTPErrorProcessor, OpenerDirector)\n",
+ "except ImportError:\n",
+ " from urllib.request import (urlopen, Request, HTTPError, URLError,\n",
+ " AbstractHTTPHandler, ProxyHandler,\n",
+ " HTTPDefaultErrorHandler, HTTPRedirectHandler,\n",
+ " HTTPErrorProcessor, OpenerDirector)\n",
+ "\n",
+ "try:\n",
+ " from httplib import HTTPConnection, BadStatusLine\n",
+ "except ImportError:\n",
+ " from http.client import HTTPConnection, BadStatusLine\n",
+ "\n",
+ "try:\n",
+ " from httplib import HTTPSConnection\n",
+ "except ImportError:\n",
+ " try:\n",
+ " from http.client import HTTPSConnection\n",
+ " except ImportError:\n",
+ " HTTPSConnection = None\n",
+ "\n",
+ "try:\n",
+ " from httplib import FakeSocket\n",
+ "except ImportError:\n",
+ " FakeSocket = None\n",
+ "\n",
+ "try:\n",
+ " from Queue import Queue\n",
+ "except ImportError:\n",
+ " from queue import Queue\n",
+ "\n",
+ "try:\n",
+ " from urlparse import urlparse\n",
+ "except ImportError:\n",
+ " from urllib.parse import urlparse\n",
+ "\n",
+ "try:\n",
+ " from urlparse import parse_qs\n",
+ "except ImportError:\n",
+ " try:\n",
+ " from urllib.parse import parse_qs\n",
+ " except ImportError:\n",
+ " from cgi import parse_qs\n",
+ "\n",
+ "try:\n",
+ " from hashlib import md5\n",
+ "except ImportError:\n",
+ " from md5 import md5\n",
+ "\n",
+ "try:\n",
+ " from argparse import ArgumentParser as ArgParser\n",
+ " from argparse import SUPPRESS as ARG_SUPPRESS\n",
+ " PARSER_TYPE_INT = int\n",
+ " PARSER_TYPE_STR = str\n",
+ " PARSER_TYPE_FLOAT = float\n",
+ "except ImportError:\n",
+ " from optparse import OptionParser as ArgParser\n",
+ " from optparse import SUPPRESS_HELP as ARG_SUPPRESS\n",
+ " PARSER_TYPE_INT = 'int'\n",
+ " PARSER_TYPE_STR = 'string'\n",
+ " PARSER_TYPE_FLOAT = 'float'\n",
+ "\n",
+ "try:\n",
+ " from cStringIO import StringIO\n",
+ " BytesIO = None\n",
+ "except ImportError:\n",
+ " try:\n",
+ " from StringIO import StringIO\n",
+ " BytesIO = None\n",
+ " except ImportError:\n",
+ " from io import StringIO, BytesIO\n",
+ "\n",
+ "try:\n",
+ " import __builtin__\n",
+ "except ImportError:\n",
+ " import builtins\n",
+ " from io import TextIOWrapper, FileIO\n",
+ "\n",
+ " class _Py3Utf8Output(TextIOWrapper):\n",
+ " \"\"\"UTF-8 encoded wrapper around stdout for py3, to override\n",
+ " ASCII stdout\n",
+ " \"\"\"\n",
+ " def __init__(self, f, **kwargs):\n",
+ " buf = FileIO(f.fileno(), 'w')\n",
+ " super(_Py3Utf8Output, self).__init__(\n",
+ " buf,\n",
+ " encoding='utf8',\n",
+ " errors='strict'\n",
+ " )\n",
+ "\n",
+ " def write(self, s):\n",
+ " super(_Py3Utf8Output, self).write(s)\n",
+ " self.flush()\n",
+ "\n",
+ " _py3_print = getattr(builtins, 'print')\n",
+ " try:\n",
+ " _py3_utf8_stdout = _Py3Utf8Output(sys.stdout)\n",
+ " _py3_utf8_stderr = _Py3Utf8Output(sys.stderr)\n",
+ " except OSError:\n",
+ " # sys.stdout/sys.stderr is not a compatible stdout/stderr object\n",
+ " # just use it and hope things go ok\n",
+ " _py3_utf8_stdout = sys.stdout\n",
+ " _py3_utf8_stderr = sys.stderr\n",
+ "\n",
+ " def to_utf8(v):\n",
+ " \"\"\"No-op encode to utf-8 for py3\"\"\"\n",
+ " return v\n",
+ "\n",
+ " def print_(*args, **kwargs):\n",
+ " \"\"\"Wrapper function for py3 to print, with a utf-8 encoded stdout\"\"\"\n",
+ " if kwargs.get('file') == sys.stderr:\n",
+ " kwargs['file'] = _py3_utf8_stderr\n",
+ " else:\n",
+ " kwargs['file'] = kwargs.get('file', _py3_utf8_stdout)\n",
+ " _py3_print(*args, **kwargs)\n",
+ "else:\n",
+ " del __builtin__\n",
+ "\n",
+ " def to_utf8(v):\n",
+ " \"\"\"Encode value to utf-8 if possible for py2\"\"\"\n",
+ " try:\n",
+ " return v.encode('utf8', 'strict')\n",
+ " except AttributeError:\n",
+ " return v\n",
+ "\n",
+ " def print_(*args, **kwargs):\n",
+ " \"\"\"The new-style print function for Python 2.4 and 2.5.\n",
+ " Taken from https://pypi.python.org/pypi/six/\n",
+ " Modified to set encoding to UTF-8 always, and to flush after write\n",
+ " \"\"\"\n",
+ " fp = kwargs.pop(\"file\", sys.stdout)\n",
+ " if fp is None:\n",
+ " return\n",
+ "\n",
+ " def write(data):\n",
+ " if not isinstance(data, basestring):\n",
+ " data = str(data)\n",
+ " # If the file has an encoding, encode unicode with it.\n",
+ " encoding = 'utf8' # Always trust UTF-8 for output\n",
+ " if (isinstance(fp, file) and\n",
+ " isinstance(data, unicode) and\n",
+ " encoding is not None):\n",
+ " errors = getattr(fp, \"errors\", None)\n",
+ " if errors is None:\n",
+ " errors = \"strict\"\n",
+ " data = data.encode(encoding, errors)\n",
+ " fp.write(data)\n",
+ " fp.flush()\n",
+ " want_unicode = False\n",
+ " sep = kwargs.pop(\"sep\", None)\n",
+ " if sep is not None:\n",
+ " if isinstance(sep, unicode):\n",
+ " want_unicode = True\n",
+ " elif not isinstance(sep, str):\n",
+ " raise TypeError(\"sep must be None or a string\")\n",
+ " end = kwargs.pop(\"end\", None)\n",
+ " if end is not None:\n",
+ " if isinstance(end, unicode):\n",
+ " want_unicode = True\n",
+ " elif not isinstance(end, str):\n",
+ " raise TypeError(\"end must be None or a string\")\n",
+ " if kwargs:\n",
+ " raise TypeError(\"invalid keyword arguments to print()\")\n",
+ " if not want_unicode:\n",
+ " for arg in args:\n",
+ " if isinstance(arg, unicode):\n",
+ " want_unicode = True\n",
+ " break\n",
+ " if want_unicode:\n",
+ " newline = unicode(\"\\n\")\n",
+ " space = unicode(\" \")\n",
+ " else:\n",
+ " newline = \"\\n\"\n",
+ " space = \" \"\n",
+ " if sep is None:\n",
+ " sep = space\n",
+ " if end is None:\n",
+ " end = newline\n",
+ " for i, arg in enumerate(args):\n",
+ " if i:\n",
+ " write(sep)\n",
+ " write(arg)\n",
+ " write(end)\n",
+ "\n",
+ "\n",
+ "# Exception \"constants\" to support Python 2 through Python 3\n",
+ "try:\n",
+ " import ssl\n",
+ " try:\n",
+ " CERT_ERROR = (ssl.CertificateError,)\n",
+ " except AttributeError:\n",
+ " CERT_ERROR = tuple()\n",
+ "\n",
+ " HTTP_ERRORS = (\n",
+ " (HTTPError, URLError, socket.error, ssl.SSLError, BadStatusLine) +\n",
+ " CERT_ERROR\n",
+ " )\n",
+ "except ImportError:\n",
+ " ssl = None\n",
+ " HTTP_ERRORS = (HTTPError, URLError, socket.error, BadStatusLine)\n",
+ "\n",
+ "\n",
+ "class SpeedtestException(Exception):\n",
+ " \"\"\"Base exception for this module\"\"\"\n",
+ "\n",
+ "\n",
+ "class SpeedtestCLIError(SpeedtestException):\n",
+ " \"\"\"Generic exception for raising errors during CLI operation\"\"\"\n",
+ "\n",
+ "\n",
+ "class SpeedtestHTTPError(SpeedtestException):\n",
+ " \"\"\"Base HTTP exception for this module\"\"\"\n",
+ "\n",
+ "\n",
+ "class SpeedtestConfigError(SpeedtestException):\n",
+ " \"\"\"Configuration XML is invalid\"\"\"\n",
+ "\n",
+ "\n",
+ "class SpeedtestServersError(SpeedtestException):\n",
+ " \"\"\"Servers XML is invalid\"\"\"\n",
+ "\n",
+ "\n",
+ "class ConfigRetrievalError(SpeedtestHTTPError):\n",
+ " \"\"\"Could not retrieve config.php\"\"\"\n",
+ "\n",
+ "\n",
+ "class ServersRetrievalError(SpeedtestHTTPError):\n",
+ " \"\"\"Could not retrieve speedtest-servers.php\"\"\"\n",
+ "\n",
+ "\n",
+ "class InvalidServerIDType(SpeedtestException):\n",
+ " \"\"\"Server ID used for filtering was not an integer\"\"\"\n",
+ "\n",
+ "\n",
+ "class NoMatchedServers(SpeedtestException):\n",
+ " \"\"\"No servers matched when filtering\"\"\"\n",
+ "\n",
+ "\n",
+ "class SpeedtestMiniConnectFailure(SpeedtestException):\n",
+ " \"\"\"Could not connect to the provided speedtest mini server\"\"\"\n",
+ "\n",
+ "\n",
+ "class InvalidSpeedtestMiniServer(SpeedtestException):\n",
+ " \"\"\"Server provided as a speedtest mini server does not actually appear\n",
+ " to be a speedtest mini server\n",
+ " \"\"\"\n",
+ "\n",
+ "\n",
+ "class ShareResultsConnectFailure(SpeedtestException):\n",
+ " \"\"\"Could not connect to speedtest.net API to POST results\"\"\"\n",
+ "\n",
+ "\n",
+ "class ShareResultsSubmitFailure(SpeedtestException):\n",
+ " \"\"\"Unable to successfully POST results to speedtest.net API after\n",
+ " connection\n",
+ " \"\"\"\n",
+ "\n",
+ "\n",
+ "class SpeedtestUploadTimeout(SpeedtestException):\n",
+ " \"\"\"testlength configuration reached during upload\n",
+ " Used to ensure the upload halts when no additional data should be sent\n",
+ " \"\"\"\n",
+ "\n",
+ "\n",
+ "class SpeedtestBestServerFailure(SpeedtestException):\n",
+ " \"\"\"Unable to determine best server\"\"\"\n",
+ "\n",
+ "\n",
+ "class SpeedtestMissingBestServer(SpeedtestException):\n",
+ " \"\"\"get_best_server not called or not able to determine best server\"\"\"\n",
+ "\n",
+ "\n",
+ "def create_connection(address, timeout=_GLOBAL_DEFAULT_TIMEOUT,\n",
+ " source_address=None):\n",
+ " \"\"\"Connect to *address* and return the socket object.\n",
+ " Convenience function. Connect to *address* (a 2-tuple ``(host,\n",
+ " port)``) and return the socket object. Passing the optional\n",
+ " *timeout* parameter will set the timeout on the socket instance\n",
+ " before attempting to connect. If no *timeout* is supplied, the\n",
+ " global default timeout setting returned by :func:`getdefaulttimeout`\n",
+ " is used. If *source_address* is set it must be a tuple of (host, port)\n",
+ " for the socket to bind as a source address before making the connection.\n",
+ " An host of '' or port 0 tells the OS to use the default.\n",
+ " Largely vendored from Python 2.7, modified to work with Python 2.4\n",
+ " \"\"\"\n",
+ "\n",
+ " host, port = address\n",
+ " err = None\n",
+ " for res in socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM):\n",
+ " af, socktype, proto, canonname, sa = res\n",
+ " sock = None\n",
+ " try:\n",
+ " sock = socket.socket(af, socktype, proto)\n",
+ " if timeout is not _GLOBAL_DEFAULT_TIMEOUT:\n",
+ " sock.settimeout(float(timeout))\n",
+ " if source_address:\n",
+ " sock.bind(source_address)\n",
+ " sock.connect(sa)\n",
+ " return sock\n",
+ "\n",
+ " except socket.error:\n",
+ " err = get_exception()\n",
+ " if sock is not None:\n",
+ " sock.close()\n",
+ "\n",
+ " if err is not None:\n",
+ " raise err\n",
+ " else:\n",
+ " raise socket.error(\"getaddrinfo returns an empty list\")\n",
+ "\n",
+ "\n",
+ "class SpeedtestHTTPConnection(HTTPConnection):\n",
+ " \"\"\"Custom HTTPConnection to support source_address across\n",
+ " Python 2.4 - Python 3\n",
+ " \"\"\"\n",
+ " def __init__(self, *args, **kwargs):\n",
+ " source_address = kwargs.pop('source_address', None)\n",
+ " timeout = kwargs.pop('timeout', 10)\n",
+ "\n",
+ " HTTPConnection.__init__(self, *args, **kwargs)\n",
+ "\n",
+ " self.source_address = source_address\n",
+ " self.timeout = timeout\n",
+ "\n",
+ " def connect(self):\n",
+ " \"\"\"Connect to the host and port specified in __init__.\"\"\"\n",
+ " try:\n",
+ " self.sock = socket.create_connection(\n",
+ " (self.host, self.port),\n",
+ " self.timeout,\n",
+ " self.source_address\n",
+ " )\n",
+ " except (AttributeError, TypeError):\n",
+ " self.sock = create_connection(\n",
+ " (self.host, self.port),\n",
+ " self.timeout,\n",
+ " self.source_address\n",
+ " )\n",
+ "\n",
+ "\n",
+ "if HTTPSConnection:\n",
+ " class SpeedtestHTTPSConnection(HTTPSConnection,\n",
+ " SpeedtestHTTPConnection):\n",
+ " \"\"\"Custom HTTPSConnection to support source_address across\n",
+ " Python 2.4 - Python 3\n",
+ " \"\"\"\n",
+ " def __init__(self, *args, **kwargs):\n",
+ " source_address = kwargs.pop('source_address', None)\n",
+ " timeout = kwargs.pop('timeout', 10)\n",
+ "\n",
+ " HTTPSConnection.__init__(self, *args, **kwargs)\n",
+ "\n",
+ " self.timeout = timeout\n",
+ " self.source_address = source_address\n",
+ "\n",
+ " def connect(self):\n",
+ " \"Connect to a host on a given (SSL) port.\"\n",
+ "\n",
+ " SpeedtestHTTPConnection.connect(self)\n",
+ "\n",
+ " if ssl:\n",
+ " try:\n",
+ " kwargs = {}\n",
+ " if hasattr(ssl, 'SSLContext'):\n",
+ " kwargs['server_hostname'] = self.host\n",
+ " self.sock = self._context.wrap_socket(self.sock, **kwargs)\n",
+ " except AttributeError:\n",
+ " self.sock = ssl.wrap_socket(self.sock)\n",
+ " try:\n",
+ " self.sock.server_hostname = self.host\n",
+ " except AttributeError:\n",
+ " pass\n",
+ " elif FakeSocket:\n",
+ " # Python 2.4/2.5 support\n",
+ " try:\n",
+ " self.sock = FakeSocket(self.sock, socket.ssl(self.sock))\n",
+ " except AttributeError:\n",
+ " raise SpeedtestException(\n",
+ " 'This version of Python does not support HTTPS/SSL '\n",
+ " 'functionality'\n",
+ " )\n",
+ " else:\n",
+ " raise SpeedtestException(\n",
+ " 'This version of Python does not support HTTPS/SSL '\n",
+ " 'functionality'\n",
+ " )\n",
+ "\n",
+ "\n",
+ "def _build_connection(connection, source_address, timeout, context=None):\n",
+ " \"\"\"Cross Python 2.4 - Python 3 callable to build an ``HTTPConnection`` or\n",
+ " ``HTTPSConnection`` with the args we need\n",
+ " Called from ``http(s)_open`` methods of ``SpeedtestHTTPHandler`` or\n",
+ " ``SpeedtestHTTPSHandler``\n",
+ " \"\"\"\n",
+ " def inner(host, **kwargs):\n",
+ " kwargs.update({\n",
+ " 'source_address': source_address,\n",
+ " 'timeout': timeout\n",
+ " })\n",
+ " if context:\n",
+ " kwargs['context'] = context\n",
+ " return connection(host, **kwargs)\n",
+ " return inner\n",
+ "\n",
+ "\n",
+ "class SpeedtestHTTPHandler(AbstractHTTPHandler):\n",
+ " \"\"\"Custom ``HTTPHandler`` that can build a ``HTTPConnection`` with the\n",
+ " args we need for ``source_address`` and ``timeout``\n",
+ " \"\"\"\n",
+ " def __init__(self, debuglevel=0, source_address=None, timeout=10):\n",
+ " AbstractHTTPHandler.__init__(self, debuglevel)\n",
+ " self.source_address = source_address\n",
+ " self.timeout = timeout\n",
+ "\n",
+ " def http_open(self, req):\n",
+ " return self.do_open(\n",
+ " _build_connection(\n",
+ " SpeedtestHTTPConnection,\n",
+ " self.source_address,\n",
+ " self.timeout\n",
+ " ),\n",
+ " req\n",
+ " )\n",
+ "\n",
+ " http_request = AbstractHTTPHandler.do_request_\n",
+ "\n",
+ "\n",
+ "class SpeedtestHTTPSHandler(AbstractHTTPHandler):\n",
+ " \"\"\"Custom ``HTTPSHandler`` that can build a ``HTTPSConnection`` with the\n",
+ " args we need for ``source_address`` and ``timeout``\n",
+ " \"\"\"\n",
+ " def __init__(self, debuglevel=0, context=None, source_address=None,\n",
+ " timeout=10):\n",
+ " AbstractHTTPHandler.__init__(self, debuglevel)\n",
+ " self._context = context\n",
+ " self.source_address = source_address\n",
+ " self.timeout = timeout\n",
+ "\n",
+ " def https_open(self, req):\n",
+ " return self.do_open(\n",
+ " _build_connection(\n",
+ " SpeedtestHTTPSConnection,\n",
+ " self.source_address,\n",
+ " self.timeout,\n",
+ " context=self._context,\n",
+ " ),\n",
+ " req\n",
+ " )\n",
+ "\n",
+ " https_request = AbstractHTTPHandler.do_request_\n",
+ "\n",
+ "\n",
+ "def build_opener(source_address=None, timeout=10):\n",
+ " \"\"\"Function similar to ``urllib2.build_opener`` that will build\n",
+ " an ``OpenerDirector`` with the explicit handlers we want,\n",
+ " ``source_address`` for binding, ``timeout`` and our custom\n",
+ " `User-Agent`\n",
+ " \"\"\"\n",
+ "\n",
+ " printer('Timeout set to %d' % timeout, debug=True)\n",
+ "\n",
+ " if source_address:\n",
+ " source_address_tuple = (source_address, 0)\n",
+ " printer('Binding to source address: %r' % (source_address_tuple,),\n",
+ " debug=True)\n",
+ " else:\n",
+ " source_address_tuple = None\n",
+ "\n",
+ " handlers = [\n",
+ " ProxyHandler(),\n",
+ " SpeedtestHTTPHandler(source_address=source_address_tuple,\n",
+ " timeout=timeout),\n",
+ " SpeedtestHTTPSHandler(source_address=source_address_tuple,\n",
+ " timeout=timeout),\n",
+ " HTTPDefaultErrorHandler(),\n",
+ " HTTPRedirectHandler(),\n",
+ " HTTPErrorProcessor()\n",
+ " ]\n",
+ "\n",
+ " opener = OpenerDirector()\n",
+ " opener.addheaders = [('User-agent', build_user_agent())]\n",
+ "\n",
+ " for handler in handlers:\n",
+ " opener.add_handler(handler)\n",
+ "\n",
+ " return opener\n",
+ "\n",
+ "\n",
+ "class GzipDecodedResponse(GZIP_BASE):\n",
+ " \"\"\"A file-like object to decode a response encoded with the gzip\n",
+ " method, as described in RFC 1952.\n",
+ " Largely copied from ``xmlrpclib``/``xmlrpc.client`` and modified\n",
+ " to work for py2.4-py3\n",
+ " \"\"\"\n",
+ " def __init__(self, response):\n",
+ " # response doesn't support tell() and read(), required by\n",
+ " # GzipFile\n",
+ " if not gzip:\n",
+ " raise SpeedtestHTTPError('HTTP response body is gzip encoded, '\n",
+ " 'but gzip support is not available')\n",
+ " IO = BytesIO or StringIO\n",
+ " self.io = IO()\n",
+ " while 1:\n",
+ " chunk = response.read(1024)\n",
+ " if len(chunk) == 0:\n",
+ " break\n",
+ " self.io.write(chunk)\n",
+ " self.io.seek(0)\n",
+ " gzip.GzipFile.__init__(self, mode='rb', fileobj=self.io)\n",
+ "\n",
+ " def close(self):\n",
+ " try:\n",
+ " gzip.GzipFile.close(self)\n",
+ " finally:\n",
+ " self.io.close()\n",
+ "\n",
+ "\n",
+ "def get_exception():\n",
+ " \"\"\"Helper function to work with py2.4-py3 for getting the current\n",
+ " exception in a try/except block\n",
+ " \"\"\"\n",
+ " return sys.exc_info()[1]\n",
+ "\n",
+ "\n",
+ "def distance(origin, destination):\n",
+ " \"\"\"Determine distance between 2 sets of [lat,lon] in km\"\"\"\n",
+ "\n",
+ " lat1, lon1 = origin\n",
+ " lat2, lon2 = destination\n",
+ " radius = 6371 # km\n",
+ "\n",
+ " dlat = math.radians(lat2 - lat1)\n",
+ " dlon = math.radians(lon2 - lon1)\n",
+ " a = (math.sin(dlat / 2) * math.sin(dlat / 2) +\n",
+ " math.cos(math.radians(lat1)) *\n",
+ " math.cos(math.radians(lat2)) * math.sin(dlon / 2) *\n",
+ " math.sin(dlon / 2))\n",
+ " c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))\n",
+ " d = radius * c\n",
+ "\n",
+ " return d\n",
+ "\n",
+ "\n",
+ "def build_user_agent():\n",
+ " \"\"\"Build a Mozilla/5.0 compatible User-Agent string\"\"\"\n",
+ "\n",
+ " ua_tuple = (\n",
+ " 'Mozilla/5.0',\n",
+ " '(%s; U; %s; en-us)' % (platform.platform(),\n",
+ " platform.architecture()[0]),\n",
+ " 'Python/%s' % platform.python_version(),\n",
+ " '(KHTML, like Gecko)',\n",
+ " 'speedtest-cli/%s' % __version__\n",
+ " )\n",
+ " user_agent = ' '.join(ua_tuple)\n",
+ " printer('User-Agent: %s' % user_agent, debug=True)\n",
+ " return user_agent\n",
+ "\n",
+ "\n",
+ "def build_request(url, data=None, headers=None, bump='0', secure=False):\n",
+ " \"\"\"Build a urllib2 request object\n",
+ " This function automatically adds a User-Agent header to all requests\n",
+ " \"\"\"\n",
+ "\n",
+ " if not headers:\n",
+ " headers = {}\n",
+ "\n",
+ " if url[0] == ':':\n",
+ " scheme = ('http', 'https')[bool(secure)]\n",
+ " schemed_url = '%s%s' % (scheme, url)\n",
+ " else:\n",
+ " schemed_url = url\n",
+ "\n",
+ " if '?' in url:\n",
+ " delim = '&'\n",
+ " else:\n",
+ " delim = '?'\n",
+ "\n",
+ " # WHO YOU GONNA CALL? CACHE BUSTERS!\n",
+ " final_url = '%s%sx=%s.%s' % (schemed_url, delim,\n",
+ " int(timeit.time.time() * 1000),\n",
+ " bump)\n",
+ "\n",
+ " headers.update({\n",
+ " 'Cache-Control': 'no-cache',\n",
+ " })\n",
+ "\n",
+ " printer('%s %s' % (('GET', 'POST')[bool(data)], final_url),\n",
+ " debug=True)\n",
+ "\n",
+ " return Request(final_url, data=data, headers=headers)\n",
+ "\n",
+ "\n",
+ "def catch_request(request, opener=None):\n",
+ " \"\"\"Helper function to catch common exceptions encountered when\n",
+ " establishing a connection with a HTTP/HTTPS request\n",
+ " \"\"\"\n",
+ "\n",
+ " if opener:\n",
+ " _open = opener.open\n",
+ " else:\n",
+ " _open = urlopen\n",
+ "\n",
+ " try:\n",
+ " uh = _open(request)\n",
+ " if request.get_full_url() != uh.geturl():\n",
+ " printer('Redirected to %s' % uh.geturl(), debug=True)\n",
+ " return uh, False\n",
+ " except HTTP_ERRORS:\n",
+ " e = get_exception()\n",
+ " return None, e\n",
+ "\n",
+ "\n",
+ "def get_response_stream(response):\n",
+ " \"\"\"Helper function to return either a Gzip reader if\n",
+ " ``Content-Encoding`` is ``gzip`` otherwise the response itself\n",
+ " \"\"\"\n",
+ "\n",
+ " try:\n",
+ " getheader = response.headers.getheader\n",
+ " except AttributeError:\n",
+ " getheader = response.getheader\n",
+ "\n",
+ " if getheader('content-encoding') == 'gzip':\n",
+ " return GzipDecodedResponse(response)\n",
+ "\n",
+ " return response\n",
+ "\n",
+ "\n",
+ "def get_attributes_by_tag_name(dom, tag_name):\n",
+ " \"\"\"Retrieve an attribute from an XML document and return it in a\n",
+ " consistent format\n",
+ " Only used with xml.dom.minidom, which is likely only to be used\n",
+ " with python versions older than 2.5\n",
+ " \"\"\"\n",
+ " elem = dom.getElementsByTagName(tag_name)[0]\n",
+ " return dict(list(elem.attributes.items()))\n",
+ "\n",
+ "\n",
+ "def print_dots(shutdown_event):\n",
+ " \"\"\"Built in callback function used by Thread classes for printing\n",
+ " status\n",
+ " \"\"\"\n",
+ " def inner(current, total, start=False, end=False):\n",
+ " if shutdown_event.isSet():\n",
+ " return\n",
+ "\n",
+ " sys.stdout.write('.')\n",
+ " if current + 1 == total and end is True:\n",
+ " sys.stdout.write('\\n')\n",
+ " sys.stdout.flush()\n",
+ " return inner\n",
+ "\n",
+ "\n",
+ "def do_nothing(*args, **kwargs):\n",
+ " pass\n",
+ "\n",
+ "\n",
+ "class HTTPDownloader(threading.Thread):\n",
+ " \"\"\"Thread class for retrieving a URL\"\"\"\n",
+ "\n",
+ " def __init__(self, i, request, start, timeout, opener=None,\n",
+ " shutdown_event=None):\n",
+ " threading.Thread.__init__(self)\n",
+ " self.request = request\n",
+ " self.result = [0]\n",
+ " self.starttime = start\n",
+ " self.timeout = timeout\n",
+ " self.i = i\n",
+ " if opener:\n",
+ " self._opener = opener.open\n",
+ " else:\n",
+ " self._opener = urlopen\n",
+ "\n",
+ " if shutdown_event:\n",
+ " self._shutdown_event = shutdown_event\n",
+ " else:\n",
+ " self._shutdown_event = FakeShutdownEvent()\n",
+ "\n",
+ " def run(self):\n",
+ " try:\n",
+ " if (timeit.default_timer() - self.starttime) <= self.timeout:\n",
+ " f = self._opener(self.request)\n",
+ " while (not self._shutdown_event.isSet() and\n",
+ " (timeit.default_timer() - self.starttime) <=\n",
+ " self.timeout):\n",
+ " self.result.append(len(f.read(10240)))\n",
+ " if self.result[-1] == 0:\n",
+ " break\n",
+ " f.close()\n",
+ " except IOError:\n",
+ " pass\n",
+ "\n",
+ "\n",
+ "class HTTPUploaderData(object):\n",
+ " \"\"\"File like object to improve cutting off the upload once the timeout\n",
+ " has been reached\n",
+ " \"\"\"\n",
+ "\n",
+ " def __init__(self, length, start, timeout, shutdown_event=None):\n",
+ " self.length = length\n",
+ " self.start = start\n",
+ " self.timeout = timeout\n",
+ "\n",
+ " if shutdown_event:\n",
+ " self._shutdown_event = shutdown_event\n",
+ " else:\n",
+ " self._shutdown_event = FakeShutdownEvent()\n",
+ "\n",
+ " self._data = None\n",
+ "\n",
+ " self.total = [0]\n",
+ "\n",
+ " def pre_allocate(self):\n",
+ " chars = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'\n",
+ " multiplier = int(round(int(self.length) / 36.0))\n",
+ " IO = BytesIO or StringIO\n",
+ " try:\n",
+ " self._data = IO(\n",
+ " ('content1=%s' %\n",
+ " (chars * multiplier)[0:int(self.length) - 9]\n",
+ " ).encode()\n",
+ " )\n",
+ " except MemoryError:\n",
+ " raise SpeedtestCLIError(\n",
+ " 'Insufficient memory to pre-allocate upload data. Please '\n",
+ " 'use --no-pre-allocate'\n",
+ " )\n",
+ "\n",
+ " @property\n",
+ " def data(self):\n",
+ " if not self._data:\n",
+ " self.pre_allocate()\n",
+ " return self._data\n",
+ "\n",
+ " def read(self, n=10240):\n",
+ " if ((timeit.default_timer() - self.start) <= self.timeout and\n",
+ " not self._shutdown_event.isSet()):\n",
+ " chunk = self.data.read(n)\n",
+ " self.total.append(len(chunk))\n",
+ " return chunk\n",
+ " else:\n",
+ " raise SpeedtestUploadTimeout()\n",
+ "\n",
+ " def __len__(self):\n",
+ " return self.length\n",
+ "\n",
+ "\n",
+ "class HTTPUploader(threading.Thread):\n",
+ " \"\"\"Thread class for putting a URL\"\"\"\n",
+ "\n",
+ " def __init__(self, i, request, start, size, timeout, opener=None,\n",
+ " shutdown_event=None):\n",
+ " threading.Thread.__init__(self)\n",
+ " self.request = request\n",
+ " self.request.data.start = self.starttime = start\n",
+ " self.size = size\n",
+ " self.result = None\n",
+ " self.timeout = timeout\n",
+ " self.i = i\n",
+ "\n",
+ " if opener:\n",
+ " self._opener = opener.open\n",
+ " else:\n",
+ " self._opener = urlopen\n",
+ "\n",
+ " if shutdown_event:\n",
+ " self._shutdown_event = shutdown_event\n",
+ " else:\n",
+ " self._shutdown_event = FakeShutdownEvent()\n",
+ "\n",
+ " def run(self):\n",
+ " request = self.request\n",
+ " try:\n",
+ " if ((timeit.default_timer() - self.starttime) <= self.timeout and\n",
+ " not self._shutdown_event.isSet()):\n",
+ " try:\n",
+ " f = self._opener(request)\n",
+ " except TypeError:\n",
+ " # PY24 expects a string or buffer\n",
+ " # This also causes issues with Ctrl-C, but we will concede\n",
+ " # for the moment that Ctrl-C on PY24 isn't immediate\n",
+ " request = build_request(self.request.get_full_url(),\n",
+ " data=request.data.read(self.size))\n",
+ " f = self._opener(request)\n",
+ " f.read(11)\n",
+ " f.close()\n",
+ " self.result = sum(self.request.data.total)\n",
+ " else:\n",
+ " self.result = 0\n",
+ " except (IOError, SpeedtestUploadTimeout):\n",
+ " self.result = sum(self.request.data.total)\n",
+ "\n",
+ "\n",
+ "class SpeedtestResults(object):\n",
+ " \"\"\"Class for holding the results of a speedtest, including:\n",
+ " Download speed\n",
+ " Upload speed\n",
+ " Ping/Latency to test server\n",
+ " Data about server that the test was run against\n",
+ " Additionally this class can return a result data as a dictionary or CSV,\n",
+ " as well as submit a POST of the result data to the speedtest.net API\n",
+ " to get a share results image link.\n",
+ " \"\"\"\n",
+ "\n",
+ " def __init__(self, download=0, upload=0, ping=0, server=None, client=None,\n",
+ " opener=None, secure=False):\n",
+ " self.download = download\n",
+ " self.upload = upload\n",
+ " self.ping = ping\n",
+ " if server is None:\n",
+ " self.server = {}\n",
+ " else:\n",
+ " self.server = server\n",
+ " self.client = client or {}\n",
+ "\n",
+ " self._share = None\n",
+ " self.timestamp = '%sZ' % datetime.datetime.utcnow().isoformat()\n",
+ " self.bytes_received = 0\n",
+ " self.bytes_sent = 0\n",
+ "\n",
+ " if opener:\n",
+ " self._opener = opener\n",
+ " else:\n",
+ " self._opener = build_opener()\n",
+ "\n",
+ " self._secure = secure\n",
+ "\n",
+ " def __repr__(self):\n",
+ " return repr(self.dict())\n",
+ "\n",
+ " def share(self):\n",
+ " \"\"\"POST data to the speedtest.net API to obtain a share results\n",
+ " link\n",
+ " \"\"\"\n",
+ "\n",
+ " if self._share:\n",
+ " return self._share\n",
+ "\n",
+ " download = int(round(self.download / 1000.0, 0))\n",
+ " ping = int(round(self.ping, 0))\n",
+ " upload = int(round(self.upload / 1000.0, 0))\n",
+ "\n",
+ " # Build the request to send results back to speedtest.net\n",
+ " # We use a list instead of a dict because the API expects parameters\n",
+ " # in a certain order\n",
+ " api_data = [\n",
+ " 'recommendedserverid=%s' % self.server['id'],\n",
+ " 'ping=%s' % ping,\n",
+ " 'screenresolution=',\n",
+ " 'promo=',\n",
+ " 'download=%s' % download,\n",
+ " 'screendpi=',\n",
+ " 'upload=%s' % upload,\n",
+ " 'testmethod=http',\n",
+ " 'hash=%s' % md5(('%s-%s-%s-%s' %\n",
+ " (ping, upload, download, '297aae72'))\n",
+ " .encode()).hexdigest(),\n",
+ " 'touchscreen=none',\n",
+ " 'startmode=pingselect',\n",
+ " 'accuracy=1',\n",
+ " 'bytesreceived=%s' % self.bytes_received,\n",
+ " 'bytessent=%s' % self.bytes_sent,\n",
+ " 'serverid=%s' % self.server['id'],\n",
+ " ]\n",
+ "\n",
+ " headers = {'Referer': 'http://c.speedtest.net/flash/speedtest.swf'}\n",
+ " request = build_request('://www.speedtest.net/api/api.php',\n",
+ " data='&'.join(api_data).encode(),\n",
+ " headers=headers, secure=self._secure)\n",
+ " f, e = catch_request(request, opener=self._opener)\n",
+ " if e:\n",
+ " raise ShareResultsConnectFailure(e)\n",
+ "\n",
+ " response = f.read()\n",
+ " code = f.code\n",
+ " f.close()\n",
+ "\n",
+ " if int(code) != 200:\n",
+ " raise ShareResultsSubmitFailure('Could not submit results to '\n",
+ " 'speedtest.net')\n",
+ "\n",
+ " qsargs = parse_qs(response.decode())\n",
+ " resultid = qsargs.get('resultid')\n",
+ " if not resultid or len(resultid) != 1:\n",
+ " raise ShareResultsSubmitFailure('Could not submit results to '\n",
+ " 'speedtest.net')\n",
+ "\n",
+ " self._share = 'http://www.speedtest.net/result/%s.png' % resultid[0]\n",
+ "\n",
+ " return self._share\n",
+ "\n",
+ " def dict(self):\n",
+ " \"\"\"Return dictionary of result data\"\"\"\n",
+ "\n",
+ " return {\n",
+ " 'download': self.download,\n",
+ " 'upload': self.upload,\n",
+ " 'ping': self.ping,\n",
+ " 'server': self.server,\n",
+ " 'timestamp': self.timestamp,\n",
+ " 'bytes_sent': self.bytes_sent,\n",
+ " 'bytes_received': self.bytes_received,\n",
+ " 'share': self._share,\n",
+ " 'client': self.client,\n",
+ " }\n",
+ "\n",
+ " @staticmethod\n",
+ " def csv_header(delimiter=','):\n",
+ " \"\"\"Return CSV Headers\"\"\"\n",
+ "\n",
+ " row = ['Server ID', 'Sponsor', 'Server Name', 'Timestamp', 'Distance',\n",
+ " 'Ping', 'Download', 'Upload', 'Share', 'IP Address']\n",
+ " out = StringIO()\n",
+ " writer = csv.writer(out, delimiter=delimiter, lineterminator='')\n",
+ " writer.writerow([to_utf8(v) for v in row])\n",
+ " return out.getvalue()\n",
+ "\n",
+ " def csv(self, delimiter=','):\n",
+ " \"\"\"Return data in CSV format\"\"\"\n",
+ "\n",
+ " data = self.dict()\n",
+ " out = StringIO()\n",
+ " writer = csv.writer(out, delimiter=delimiter, lineterminator='')\n",
+ " row = [data['server']['id'], data['server']['sponsor'],\n",
+ " data['server']['name'], data['timestamp'],\n",
+ " data['server']['d'], data['ping'], data['download'],\n",
+ " data['upload'], self._share or '', self.client['ip']]\n",
+ " writer.writerow([to_utf8(v) for v in row])\n",
+ " return out.getvalue()\n",
+ "\n",
+ " def json(self, pretty=False):\n",
+ " \"\"\"Return data in JSON format\"\"\"\n",
+ "\n",
+ " kwargs = {}\n",
+ " if pretty:\n",
+ " kwargs.update({\n",
+ " 'indent': 4,\n",
+ " 'sort_keys': True\n",
+ " })\n",
+ " return json.dumps(self.dict(), **kwargs)\n",
+ "\n",
+ "\n",
+ "class Speedtest(object):\n",
+ " \"\"\"Class for performing standard speedtest.net testing operations\"\"\"\n",
+ "\n",
+ " def __init__(self, config=None, source_address=None, timeout=10,\n",
+ " secure=False, shutdown_event=None):\n",
+ " self.config = {}\n",
+ "\n",
+ " self._source_address = source_address\n",
+ " self._timeout = timeout\n",
+ " self._opener = build_opener(source_address, timeout)\n",
+ "\n",
+ " self._secure = secure\n",
+ "\n",
+ " if shutdown_event:\n",
+ " self._shutdown_event = shutdown_event\n",
+ " else:\n",
+ " self._shutdown_event = FakeShutdownEvent()\n",
+ "\n",
+ " self.get_config()\n",
+ " if config is not None:\n",
+ " self.config.update(config)\n",
+ "\n",
+ " self.servers = {}\n",
+ " self.closest = []\n",
+ " self._best = {}\n",
+ "\n",
+ " self.results = SpeedtestResults(\n",
+ " client=self.config['client'],\n",
+ " opener=self._opener,\n",
+ " secure=secure,\n",
+ " )\n",
+ "\n",
+ " @property\n",
+ " def best(self):\n",
+ " if not self._best:\n",
+ " self.get_best_server()\n",
+ " return self._best\n",
+ "\n",
+ " def get_config(self):\n",
+ " \"\"\"Download the speedtest.net configuration and return only the data\n",
+ " we are interested in\n",
+ " \"\"\"\n",
+ "\n",
+ " headers = {}\n",
+ " if gzip:\n",
+ " headers['Accept-Encoding'] = 'gzip'\n",
+ " request = build_request('://www.speedtest.net/speedtest-config.php',\n",
+ " headers=headers, secure=self._secure)\n",
+ " uh, e = catch_request(request, opener=self._opener)\n",
+ " if e:\n",
+ " raise ConfigRetrievalError(e)\n",
+ " configxml_list = []\n",
+ "\n",
+ " stream = get_response_stream(uh)\n",
+ "\n",
+ " while 1:\n",
+ " try:\n",
+ " configxml_list.append(stream.read(1024))\n",
+ " except (OSError, EOFError):\n",
+ " raise ConfigRetrievalError(get_exception())\n",
+ " if len(configxml_list[-1]) == 0:\n",
+ " break\n",
+ " stream.close()\n",
+ " uh.close()\n",
+ "\n",
+ " if int(uh.code) != 200:\n",
+ " return None\n",
+ "\n",
+ " configxml = ''.encode().join(configxml_list)\n",
+ "\n",
+ " printer('Config XML:\\n%s' % configxml, debug=True)\n",
+ "\n",
+ " try:\n",
+ " try:\n",
+ " root = ET.fromstring(configxml)\n",
+ " except ET.ParseError:\n",
+ " e = get_exception()\n",
+ " raise SpeedtestConfigError(\n",
+ " 'Malformed speedtest.net configuration: %s' % e\n",
+ " )\n",
+ " server_config = root.find('server-config').attrib\n",
+ " download = root.find('download').attrib\n",
+ " upload = root.find('upload').attrib\n",
+ " # times = root.find('times').attrib\n",
+ " client = root.find('client').attrib\n",
+ "\n",
+ " except AttributeError:\n",
+ " try:\n",
+ " root = DOM.parseString(configxml)\n",
+ " except ExpatError:\n",
+ " e = get_exception()\n",
+ " raise SpeedtestConfigError(\n",
+ " 'Malformed speedtest.net configuration: %s' % e\n",
+ " )\n",
+ " server_config = get_attributes_by_tag_name(root, 'server-config')\n",
+ " download = get_attributes_by_tag_name(root, 'download')\n",
+ " upload = get_attributes_by_tag_name(root, 'upload')\n",
+ " # times = get_attributes_by_tag_name(root, 'times')\n",
+ " client = get_attributes_by_tag_name(root, 'client')\n",
+ "\n",
+ " ignore_servers = list(\n",
+ " map(int, server_config['ignoreids'].split(','))\n",
+ " )\n",
+ "\n",
+ " ratio = int(upload['ratio'])\n",
+ " upload_max = int(upload['maxchunkcount'])\n",
+ " up_sizes = [32768, 65536, 131072, 262144, 524288, 1048576, 7340032]\n",
+ " sizes = {\n",
+ " 'upload': up_sizes[ratio - 1:],\n",
+ " 'download': [350, 500, 750, 1000, 1500, 2000, 2500,\n",
+ " 3000, 3500, 4000]\n",
+ " }\n",
+ "\n",
+ " size_count = len(sizes['upload'])\n",
+ "\n",
+ " upload_count = int(math.ceil(upload_max / size_count))\n",
+ "\n",
+ " counts = {\n",
+ " 'upload': upload_count,\n",
+ " 'download': int(download['threadsperurl'])\n",
+ " }\n",
+ "\n",
+ " threads = {\n",
+ " 'upload': int(upload['threads']),\n",
+ " 'download': int(server_config['threadcount']) * 2\n",
+ " }\n",
+ "\n",
+ " length = {\n",
+ " 'upload': int(upload['testlength']),\n",
+ " 'download': int(download['testlength'])\n",
+ " }\n",
+ "\n",
+ " self.config.update({\n",
+ " 'client': client,\n",
+ " 'ignore_servers': ignore_servers,\n",
+ " 'sizes': sizes,\n",
+ " 'counts': counts,\n",
+ " 'threads': threads,\n",
+ " 'length': length,\n",
+ " 'upload_max': upload_count * size_count\n",
+ " })\n",
+ "\n",
+ " try:\n",
+ " self.lat_lon = (float(client['lat']), float(client['lon']))\n",
+ " except ValueError:\n",
+ " raise SpeedtestConfigError(\n",
+ " 'Unknown location: lat=%r lon=%r' %\n",
+ " (client.get('lat'), client.get('lon'))\n",
+ " )\n",
+ "\n",
+ " printer('Config:\\n%r' % self.config, debug=True)\n",
+ "\n",
+ " return self.config\n",
+ "\n",
+ " def get_servers(self, servers=None, exclude=None):\n",
+ " \"\"\"Retrieve a the list of speedtest.net servers, optionally filtered\n",
+ " to servers matching those specified in the ``servers`` argument\n",
+ " \"\"\"\n",
+ " if servers is None:\n",
+ " servers = []\n",
+ "\n",
+ " if exclude is None:\n",
+ " exclude = []\n",
+ "\n",
+ " self.servers.clear()\n",
+ "\n",
+ " for server_list in (servers, exclude):\n",
+ " for i, s in enumerate(server_list):\n",
+ " try:\n",
+ " server_list[i] = int(s)\n",
+ " except ValueError:\n",
+ " raise InvalidServerIDType(\n",
+ " '%s is an invalid server type, must be int' % s\n",
+ " )\n",
+ "\n",
+ " urls = [\n",
+ " '://www.speedtest.net/speedtest-servers-static.php',\n",
+ " 'http://c.speedtest.net/speedtest-servers-static.php',\n",
+ " '://www.speedtest.net/speedtest-servers.php',\n",
+ " 'http://c.speedtest.net/speedtest-servers.php',\n",
+ " ]\n",
+ "\n",
+ " headers = {}\n",
+ " if gzip:\n",
+ " headers['Accept-Encoding'] = 'gzip'\n",
+ "\n",
+ " errors = []\n",
+ " for url in urls:\n",
+ " try:\n",
+ " request = build_request(\n",
+ " '%s?threads=%s' % (url,\n",
+ " self.config['threads']['download']),\n",
+ " headers=headers,\n",
+ " secure=self._secure\n",
+ " )\n",
+ " uh, e = catch_request(request, opener=self._opener)\n",
+ " if e:\n",
+ " errors.append('%s' % e)\n",
+ " raise ServersRetrievalError()\n",
+ "\n",
+ " stream = get_response_stream(uh)\n",
+ "\n",
+ " serversxml_list = []\n",
+ " while 1:\n",
+ " try:\n",
+ " serversxml_list.append(stream.read(1024))\n",
+ " except (OSError, EOFError):\n",
+ " raise ServersRetrievalError(get_exception())\n",
+ " if len(serversxml_list[-1]) == 0:\n",
+ " break\n",
+ "\n",
+ " stream.close()\n",
+ " uh.close()\n",
+ "\n",
+ " if int(uh.code) != 200:\n",
+ " raise ServersRetrievalError()\n",
+ "\n",
+ " serversxml = ''.encode().join(serversxml_list)\n",
+ "\n",
+ " printer('Servers XML:\\n%s' % serversxml, debug=True)\n",
+ "\n",
+ " try:\n",
+ " try:\n",
+ " try:\n",
+ " root = ET.fromstring(serversxml)\n",
+ " except ET.ParseError:\n",
+ " e = get_exception()\n",
+ " raise SpeedtestServersError(\n",
+ " 'Malformed speedtest.net server list: %s' % e\n",
+ " )\n",
+ " elements = root.getiterator('server')\n",
+ " except AttributeError:\n",
+ " try:\n",
+ " root = DOM.parseString(serversxml)\n",
+ " except ExpatError:\n",
+ " e = get_exception()\n",
+ " raise SpeedtestServersError(\n",
+ " 'Malformed speedtest.net server list: %s' % e\n",
+ " )\n",
+ " elements = root.getElementsByTagName('server')\n",
+ " except (SyntaxError, xml.parsers.expat.ExpatError):\n",
+ " raise ServersRetrievalError()\n",
+ "\n",
+ " for server in elements:\n",
+ " try:\n",
+ " attrib = server.attrib\n",
+ " except AttributeError:\n",
+ " attrib = dict(list(server.attributes.items()))\n",
+ "\n",
+ " if servers and int(attrib.get('id')) not in servers:\n",
+ " continue\n",
+ "\n",
+ " if (int(attrib.get('id')) in self.config['ignore_servers']\n",
+ " or int(attrib.get('id')) in exclude):\n",
+ " continue\n",
+ "\n",
+ " try:\n",
+ " d = distance(self.lat_lon,\n",
+ " (float(attrib.get('lat')),\n",
+ " float(attrib.get('lon'))))\n",
+ " except Exception:\n",
+ " continue\n",
+ "\n",
+ " attrib['d'] = d\n",
+ "\n",
+ " try:\n",
+ " self.servers[d].append(attrib)\n",
+ " except KeyError:\n",
+ " self.servers[d] = [attrib]\n",
+ "\n",
+ " break\n",
+ "\n",
+ " except ServersRetrievalError:\n",
+ " continue\n",
+ "\n",
+ " if (servers or exclude) and not self.servers:\n",
+ " raise NoMatchedServers()\n",
+ "\n",
+ " return self.servers\n",
+ "\n",
+ " def set_mini_server(self, server):\n",
+ " \"\"\"Instead of querying for a list of servers, set a link to a\n",
+ " speedtest mini server\n",
+ " \"\"\"\n",
+ "\n",
+ " urlparts = urlparse(server)\n",
+ "\n",
+ " name, ext = os.path.splitext(urlparts[2])\n",
+ " if ext:\n",
+ " url = os.path.dirname(server)\n",
+ " else:\n",
+ " url = server\n",
+ "\n",
+ " request = build_request(url)\n",
+ " uh, e = catch_request(request, opener=self._opener)\n",
+ " if e:\n",
+ " raise SpeedtestMiniConnectFailure('Failed to connect to %s' %\n",
+ " server)\n",
+ " else:\n",
+ " text = uh.read()\n",
+ " uh.close()\n",
+ "\n",
+ " extension = re.findall('upload_?[Ee]xtension: \"([^\"]+)\"',\n",
+ " text.decode())\n",
+ " if not extension:\n",
+ " for ext in ['php', 'asp', 'aspx', 'jsp']:\n",
+ " try:\n",
+ " f = self._opener.open(\n",
+ " '%s/speedtest/upload.%s' % (url, ext)\n",
+ " )\n",
+ " except Exception:\n",
+ " pass\n",
+ " else:\n",
+ " data = f.read().strip().decode()\n",
+ " if (f.code == 200 and\n",
+ " len(data.splitlines()) == 1 and\n",
+ " re.match('size=[0-9]', data)):\n",
+ " extension = [ext]\n",
+ " break\n",
+ " if not urlparts or not extension:\n",
+ " raise InvalidSpeedtestMiniServer('Invalid Speedtest Mini Server: '\n",
+ " '%s' % server)\n",
+ "\n",
+ " self.servers = [{\n",
+ " 'sponsor': 'Speedtest Mini',\n",
+ " 'name': urlparts[1],\n",
+ " 'd': 0,\n",
+ " 'url': '%s/speedtest/upload.%s' % (url.rstrip('/'), extension[0]),\n",
+ " 'latency': 0,\n",
+ " 'id': 0\n",
+ " }]\n",
+ "\n",
+ " return self.servers\n",
+ "\n",
+ " def get_closest_servers(self, limit=5):\n",
+ " \"\"\"Limit servers to the closest speedtest.net servers based on\n",
+ " geographic distance\n",
+ " \"\"\"\n",
+ "\n",
+ " if not self.servers:\n",
+ " self.get_servers()\n",
+ "\n",
+ " for d in sorted(self.servers.keys()):\n",
+ " for s in self.servers[d]:\n",
+ " self.closest.append(s)\n",
+ " if len(self.closest) == limit:\n",
+ " break\n",
+ " else:\n",
+ " continue\n",
+ " break\n",
+ "\n",
+ " printer('Closest Servers:\\n%r' % self.closest, debug=True)\n",
+ " return self.closest\n",
+ "\n",
+ " def get_best_server(self, servers=None):\n",
+ " \"\"\"Perform a speedtest.net \"ping\" to determine which speedtest.net\n",
+ " server has the lowest latency\n",
+ " \"\"\"\n",
+ "\n",
+ " if not servers:\n",
+ " if not self.closest:\n",
+ " servers = self.get_closest_servers()\n",
+ " servers = self.closest\n",
+ "\n",
+ " if self._source_address:\n",
+ " source_address_tuple = (self._source_address, 0)\n",
+ " else:\n",
+ " source_address_tuple = None\n",
+ "\n",
+ " user_agent = build_user_agent()\n",
+ "\n",
+ " results = {}\n",
+ " for server in servers:\n",
+ " cum = []\n",
+ " url = os.path.dirname(server['url'])\n",
+ " stamp = int(timeit.time.time() * 1000)\n",
+ " latency_url = '%s/latency.txt?x=%s' % (url, stamp)\n",
+ " for i in range(0, 3):\n",
+ " this_latency_url = '%s.%s' % (latency_url, i)\n",
+ " printer('%s %s' % ('GET', this_latency_url),\n",
+ " debug=True)\n",
+ " urlparts = urlparse(latency_url)\n",
+ " try:\n",
+ " if urlparts[0] == 'https':\n",
+ " h = SpeedtestHTTPSConnection(\n",
+ " urlparts[1],\n",
+ " source_address=source_address_tuple\n",
+ " )\n",
+ " else:\n",
+ " h = SpeedtestHTTPConnection(\n",
+ " urlparts[1],\n",
+ " source_address=source_address_tuple\n",
+ " )\n",
+ " headers = {'User-Agent': user_agent}\n",
+ " path = '%s?%s' % (urlparts[2], urlparts[4])\n",
+ " start = timeit.default_timer()\n",
+ " h.request(\"GET\", path, headers=headers)\n",
+ " r = h.getresponse()\n",
+ " total = (timeit.default_timer() - start)\n",
+ " except HTTP_ERRORS:\n",
+ " e = get_exception()\n",
+ " printer('ERROR: %r' % e, debug=True)\n",
+ " cum.append(3600)\n",
+ " continue\n",
+ "\n",
+ " text = r.read(9)\n",
+ " if int(r.status) == 200 and text == 'test=test'.encode():\n",
+ " cum.append(total)\n",
+ " else:\n",
+ " cum.append(3600)\n",
+ " h.close()\n",
+ "\n",
+ " avg = round((sum(cum) / 6) * 1000.0, 3)\n",
+ " results[avg] = server\n",
+ "\n",
+ " try:\n",
+ " fastest = sorted(results.keys())[0]\n",
+ " except IndexError:\n",
+ " raise SpeedtestBestServerFailure('Unable to connect to servers to '\n",
+ " 'test latency.')\n",
+ " best = results[fastest]\n",
+ " best['latency'] = fastest\n",
+ "\n",
+ " self.results.ping = fastest\n",
+ " self.results.server = best\n",
+ "\n",
+ " self._best.update(best)\n",
+ " printer('Best Server:\\n%r' % best, debug=True)\n",
+ " return best\n",
+ "\n",
+ " def download(self, callback=do_nothing, threads=None):\n",
+ " \"\"\"Test download speed against speedtest.net\n",
+ " A ``threads`` value of ``None`` will fall back to those dictated\n",
+ " by the speedtest.net configuration\n",
+ " \"\"\"\n",
+ "\n",
+ " urls = []\n",
+ " for size in self.config['sizes']['download']:\n",
+ " for _ in range(0, self.config['counts']['download']):\n",
+ " urls.append('%s/random%sx%s.jpg' %\n",
+ " (os.path.dirname(self.best['url']), size, size))\n",
+ "\n",
+ " request_count = len(urls)\n",
+ " requests = []\n",
+ " for i, url in enumerate(urls):\n",
+ " requests.append(\n",
+ " build_request(url, bump=i, secure=self._secure)\n",
+ " )\n",
+ "\n",
+ " def producer(q, requests, request_count):\n",
+ " for i, request in enumerate(requests):\n",
+ " thread = HTTPDownloader(\n",
+ " i,\n",
+ " request,\n",
+ " start,\n",
+ " self.config['length']['download'],\n",
+ " opener=self._opener,\n",
+ " shutdown_event=self._shutdown_event\n",
+ " )\n",
+ " thread.start()\n",
+ " q.put(thread, True)\n",
+ " callback(i, request_count, start=True)\n",
+ "\n",
+ " finished = []\n",
+ "\n",
+ " def consumer(q, request_count):\n",
+ " while len(finished) < request_count:\n",
+ " thread = q.get(True)\n",
+ " while thread.isAlive():\n",
+ " thread.join(timeout=0.1)\n",
+ " finished.append(sum(thread.result))\n",
+ " callback(thread.i, request_count, end=True)\n",
+ "\n",
+ " q = Queue(threads or self.config['threads']['download'])\n",
+ " prod_thread = threading.Thread(target=producer,\n",
+ " args=(q, requests, request_count))\n",
+ " cons_thread = threading.Thread(target=consumer,\n",
+ " args=(q, request_count))\n",
+ " start = timeit.default_timer()\n",
+ " prod_thread.start()\n",
+ " cons_thread.start()\n",
+ " while prod_thread.isAlive():\n",
+ " prod_thread.join(timeout=0.1)\n",
+ " while cons_thread.isAlive():\n",
+ " cons_thread.join(timeout=0.1)\n",
+ "\n",
+ " stop = timeit.default_timer()\n",
+ " self.results.bytes_received = sum(finished)\n",
+ " self.results.download = (\n",
+ " (self.results.bytes_received / (stop - start)) * 8.0\n",
+ " )\n",
+ " if self.results.download > 100000:\n",
+ " self.config['threads']['upload'] = 8\n",
+ " return self.results.download\n",
+ "\n",
+ " def upload(self, callback=do_nothing, pre_allocate=True, threads=None):\n",
+ " \"\"\"Test upload speed against speedtest.net\n",
+ " A ``threads`` value of ``None`` will fall back to those dictated\n",
+ " by the speedtest.net configuration\n",
+ " \"\"\"\n",
+ "\n",
+ " sizes = []\n",
+ "\n",
+ " for size in self.config['sizes']['upload']:\n",
+ " for _ in range(0, self.config['counts']['upload']):\n",
+ " sizes.append(size)\n",
+ "\n",
+ " # request_count = len(sizes)\n",
+ " request_count = self.config['upload_max']\n",
+ "\n",
+ " requests = []\n",
+ " for i, size in enumerate(sizes):\n",
+ " # We set ``0`` for ``start`` and handle setting the actual\n",
+ " # ``start`` in ``HTTPUploader`` to get better measurements\n",
+ " data = HTTPUploaderData(\n",
+ " size,\n",
+ " 0,\n",
+ " self.config['length']['upload'],\n",
+ " shutdown_event=self._shutdown_event\n",
+ " )\n",
+ " if pre_allocate:\n",
+ " data.pre_allocate()\n",
+ "\n",
+ " headers = {'Content-length': size}\n",
+ " requests.append(\n",
+ " (\n",
+ " build_request(self.best['url'], data, secure=self._secure,\n",
+ " headers=headers),\n",
+ " size\n",
+ " )\n",
+ " )\n",
+ "\n",
+ " def producer(q, requests, request_count):\n",
+ " for i, request in enumerate(requests[:request_count]):\n",
+ " thread = HTTPUploader(\n",
+ " i,\n",
+ " request[0],\n",
+ " start,\n",
+ " request[1],\n",
+ " self.config['length']['upload'],\n",
+ " opener=self._opener,\n",
+ " shutdown_event=self._shutdown_event\n",
+ " )\n",
+ " thread.start()\n",
+ " q.put(thread, True)\n",
+ " callback(i, request_count, start=True)\n",
+ "\n",
+ " finished = []\n",
+ "\n",
+ " def consumer(q, request_count):\n",
+ " while len(finished) < request_count:\n",
+ " thread = q.get(True)\n",
+ " while thread.isAlive():\n",
+ " thread.join(timeout=0.1)\n",
+ " finished.append(thread.result)\n",
+ " callback(thread.i, request_count, end=True)\n",
+ "\n",
+ " q = Queue(threads or self.config['threads']['upload'])\n",
+ " prod_thread = threading.Thread(target=producer,\n",
+ " args=(q, requests, request_count))\n",
+ " cons_thread = threading.Thread(target=consumer,\n",
+ " args=(q, request_count))\n",
+ " start = timeit.default_timer()\n",
+ " prod_thread.start()\n",
+ " cons_thread.start()\n",
+ " while prod_thread.isAlive():\n",
+ " prod_thread.join(timeout=0.1)\n",
+ " while cons_thread.isAlive():\n",
+ " cons_thread.join(timeout=0.1)\n",
+ "\n",
+ " stop = timeit.default_timer()\n",
+ " self.results.bytes_sent = sum(finished)\n",
+ " self.results.upload = (\n",
+ " (self.results.bytes_sent / (stop - start)) * 8.0\n",
+ " )\n",
+ " return self.results.upload\n",
+ "\n",
+ "\n",
+ "def ctrl_c(shutdown_event):\n",
+ " \"\"\"Catch Ctrl-C key sequence and set a SHUTDOWN_EVENT for our threaded\n",
+ " operations\n",
+ " \"\"\"\n",
+ " def inner(signum, frame):\n",
+ " shutdown_event.set()\n",
+ " printer('\\nCancelling...', error=True)\n",
+ " sys.exit(0)\n",
+ " return inner\n",
+ "\n",
+ "\n",
+ "def version():\n",
+ " \"\"\"Print the version\"\"\"\n",
+ "\n",
+ " printer('speedtest-cli %s' % __version__)\n",
+ " printer('Python %s' % sys.version.replace('\\n', ''))\n",
+ " sys.exit(0)\n",
+ "\n",
+ "\n",
+ "def csv_header(delimiter=','):\n",
+ " \"\"\"Print the CSV Headers\"\"\"\n",
+ "\n",
+ " printer(SpeedtestResults.csv_header(delimiter=delimiter))\n",
+ " sys.exit(0)\n",
+ "\n",
+ "\n",
+ "def parse_args():\n",
+ " \"\"\"Function to handle building and parsing of command line arguments\"\"\"\n",
+ " description = (\n",
+ " 'Command line interface for testing internet bandwidth using '\n",
+ " 'speedtest.net.\\n'\n",
+ " '------------------------------------------------------------'\n",
+ " '--------------\\n'\n",
+ " 'https://github.com/sivel/speedtest-cli')\n",
+ "\n",
+ " parser = ArgParser(description=description)\n",
+ " # Give optparse.OptionParser an `add_argument` method for\n",
+ " # compatibility with argparse.ArgumentParser\n",
+ " try:\n",
+ " parser.add_argument = parser.add_option\n",
+ " except AttributeError:\n",
+ " pass\n",
+ " parser.add_argument('--no-download', dest='download', default=True,\n",
+ " action='store_const', const=False,\n",
+ " help='Do not perform download test')\n",
+ " parser.add_argument('--no-upload', dest='upload', default=True,\n",
+ " action='store_const', const=False,\n",
+ " help='Do not perform upload test')\n",
+ " parser.add_argument('--single', default=False, action='store_true',\n",
+ " help='Only use a single connection instead of '\n",
+ " 'multiple. This simulates a typical file '\n",
+ " 'transfer.')\n",
+ " parser.add_argument('--bytes', dest='units', action='store_const',\n",
+ " const=('byte', 8), default=('bit', 1),\n",
+ " help='Display values in bytes instead of bits. Does '\n",
+ " 'not affect the image generated by --share, nor '\n",
+ " 'output from --json or --csv')\n",
+ " parser.add_argument('--share', action='store_true',\n",
+ " help='Generate and provide a URL to the speedtest.net '\n",
+ " 'share results image, not displayed with --csv')\n",
+ " parser.add_argument('--simple', action='store_true', default=False,\n",
+ " help='Suppress verbose output, only show basic '\n",
+ " 'information')\n",
+ " parser.add_argument('--csv', action='store_true', default=False,\n",
+ " help='Suppress verbose output, only show basic '\n",
+ " 'information in CSV format. Speeds listed in '\n",
+ " 'bit/s and not affected by --bytes')\n",
+ " parser.add_argument('--csv-delimiter', default=',', type=PARSER_TYPE_STR,\n",
+ " help='Single character delimiter to use in CSV '\n",
+ " 'output. Default \",\"')\n",
+ " parser.add_argument('--csv-header', action='store_true', default=False,\n",
+ " help='Print CSV headers')\n",
+ " parser.add_argument('--json', action='store_true', default=False,\n",
+ " help='Suppress verbose output, only show basic '\n",
+ " 'information in JSON format. Speeds listed in '\n",
+ " 'bit/s and not affected by --bytes')\n",
+ " parser.add_argument('--list', action='store_true',\n",
+ " help='Display a list of speedtest.net servers '\n",
+ " 'sorted by distance')\n",
+ " parser.add_argument('--server', type=PARSER_TYPE_INT, action='append',\n",
+ " help='Specify a server ID to test against. Can be '\n",
+ " 'supplied multiple times')\n",
+ " parser.add_argument('--exclude', type=PARSER_TYPE_INT, action='append',\n",
+ " help='Exclude a server from selection. Can be '\n",
+ " 'supplied multiple times')\n",
+ " parser.add_argument('--mini', help='URL of the Speedtest Mini server')\n",
+ " parser.add_argument('--source', help='Source IP address to bind to')\n",
+ " parser.add_argument('--timeout', default=10, type=PARSER_TYPE_FLOAT,\n",
+ " help='HTTP timeout in seconds. Default 10')\n",
+ " parser.add_argument('--secure', action='store_true',\n",
+ " help='Use HTTPS instead of HTTP when communicating '\n",
+ " 'with speedtest.net operated servers')\n",
+ " parser.add_argument('--no-pre-allocate', dest='pre_allocate',\n",
+ " action='store_const', default=True, const=False,\n",
+ " help='Do not pre allocate upload data. Pre allocation '\n",
+ " 'is enabled by default to improve upload '\n",
+ " 'performance. To support systems with '\n",
+ " 'insufficient memory, use this option to avoid a '\n",
+ " 'MemoryError')\n",
+ " parser.add_argument('--version', action='store_true',\n",
+ " help='Show the version number and exit')\n",
+ " parser.add_argument('--debug', action='store_true',\n",
+ " help=ARG_SUPPRESS, default=ARG_SUPPRESS)\n",
+ "\n",
+ " options = parser.parse_args(args=[])\n",
+ " if isinstance(options, tuple):\n",
+ " args = options[0]\n",
+ " else:\n",
+ " args = options\n",
+ " return args\n",
+ "\n",
+ "\n",
+ "def validate_optional_args(args):\n",
+ " \"\"\"Check if an argument was provided that depends on a module that may\n",
+ " not be part of the Python standard library.\n",
+ " If such an argument is supplied, and the module does not exist, exit\n",
+ " with an error stating which module is missing.\n",
+ " \"\"\"\n",
+ " optional_args = {\n",
+ " 'json': ('json/simplejson python module', json),\n",
+ " 'secure': ('SSL support', HTTPSConnection),\n",
+ " }\n",
+ "\n",
+ " for arg, info in optional_args.items():\n",
+ " if getattr(args, arg, False) and info[1] is None:\n",
+ " raise SystemExit('%s is not installed. --%s is '\n",
+ " 'unavailable' % (info[0], arg))\n",
+ "\n",
+ "\n",
+ "def printer(string, quiet=False, debug=False, error=False, **kwargs):\n",
+ " \"\"\"Helper function print a string with various features\"\"\"\n",
+ "\n",
+ " if debug and not DEBUG:\n",
+ " return\n",
+ "\n",
+ " if debug:\n",
+ " if sys.stdout.isatty():\n",
+ " out = '\\033[1;30mDEBUG: %s\\033[0m' % string\n",
+ " else:\n",
+ " out = 'DEBUG: %s' % string\n",
+ " else:\n",
+ " out = string\n",
+ "\n",
+ " if error:\n",
+ " kwargs['file'] = sys.stderr\n",
+ "\n",
+ " if not quiet:\n",
+ " print_(out, **kwargs)\n",
+ "\n",
+ "\n",
+ "def shell():\n",
+ " \"\"\"Run the full speedtest.net test\"\"\"\n",
+ "\n",
+ " global DEBUG\n",
+ " shutdown_event = threading.Event()\n",
+ "\n",
+ " signal.signal(signal.SIGINT, ctrl_c(shutdown_event))\n",
+ "\n",
+ " args = parse_args()\n",
+ "\n",
+ " # Print the version and exit\n",
+ " if args.version:\n",
+ " version()\n",
+ "\n",
+ " if not args.download and not args.upload:\n",
+ " raise SpeedtestCLIError('Cannot supply both --no-download and '\n",
+ " '--no-upload')\n",
+ "\n",
+ " if len(args.csv_delimiter) != 1:\n",
+ " raise SpeedtestCLIError('--csv-delimiter must be a single character')\n",
+ "\n",
+ " if args.csv_header:\n",
+ " csv_header(args.csv_delimiter)\n",
+ "\n",
+ " validate_optional_args(args)\n",
+ "\n",
+ " debug = getattr(args, 'debug', False)\n",
+ " if debug == 'SUPPRESSHELP':\n",
+ " debug = False\n",
+ " if debug:\n",
+ " DEBUG = True\n",
+ "\n",
+ " if args.simple or args.csv or args.json:\n",
+ " quiet = True\n",
+ " else:\n",
+ " quiet = False\n",
+ "\n",
+ " if args.csv or args.json:\n",
+ " machine_format = True\n",
+ " else:\n",
+ " machine_format = False\n",
+ "\n",
+ " # Don't set a callback if we are running quietly\n",
+ " if quiet or debug:\n",
+ " callback = do_nothing\n",
+ " else:\n",
+ " callback = print_dots(shutdown_event)\n",
+ "\n",
+ " printer('Retrieving speedtest.net configuration...', quiet)\n",
+ " try:\n",
+ " speedtest = Speedtest(\n",
+ " source_address=args.source,\n",
+ " timeout=args.timeout,\n",
+ " secure=args.secure\n",
+ " )\n",
+ " except (ConfigRetrievalError,) + HTTP_ERRORS:\n",
+ " printer('Cannot retrieve speedtest configuration', error=True)\n",
+ " raise SpeedtestCLIError(get_exception())\n",
+ "\n",
+ " if args.list:\n",
+ " try:\n",
+ " speedtest.get_servers()\n",
+ " except (ServersRetrievalError,) + HTTP_ERRORS:\n",
+ " printer('Cannot retrieve speedtest server list', error=True)\n",
+ " raise SpeedtestCLIError(get_exception())\n",
+ "\n",
+ " for _, servers in sorted(speedtest.servers.items()):\n",
+ " for server in servers:\n",
+ " line = ('%(id)5s) %(sponsor)s (%(name)s, %(country)s) '\n",
+ " '[%(d)0.2f km]' % server)\n",
+ " try:\n",
+ " printer(line)\n",
+ " except IOError:\n",
+ " e = get_exception()\n",
+ " if e.errno != errno.EPIPE:\n",
+ " raise\n",
+ " sys.exit(0)\n",
+ "\n",
+ " printer('Testing from %(isp)s (%(ip)s)...' % speedtest.config['client'],\n",
+ " quiet)\n",
+ "\n",
+ " if not args.mini:\n",
+ " printer('Retrieving speedtest.net server list...', quiet)\n",
+ " try:\n",
+ " speedtest.get_servers(servers=args.server, exclude=args.exclude)\n",
+ " except NoMatchedServers:\n",
+ " raise SpeedtestCLIError(\n",
+ " 'No matched servers: %s' %\n",
+ " ', '.join('%s' % s for s in args.server)\n",
+ " )\n",
+ " except (ServersRetrievalError,) + HTTP_ERRORS:\n",
+ " printer('Cannot retrieve speedtest server list', error=True)\n",
+ " raise SpeedtestCLIError(get_exception())\n",
+ " except InvalidServerIDType:\n",
+ " raise SpeedtestCLIError(\n",
+ " '%s is an invalid server type, must '\n",
+ " 'be an int' % ', '.join('%s' % s for s in args.server)\n",
+ " )\n",
+ "\n",
+ " if args.server and len(args.server) == 1:\n",
+ " printer('Retrieving information for the selected server...', quiet)\n",
+ " else:\n",
+ " printer('Selecting best server based on ping...', quiet)\n",
+ " speedtest.get_best_server()\n",
+ " elif args.mini:\n",
+ " speedtest.get_best_server(speedtest.set_mini_server(args.mini))\n",
+ "\n",
+ " results = speedtest.results\n",
+ "\n",
+ " printer('Hosted by %(sponsor)s (%(name)s) [%(d)0.2f km]: '\n",
+ " '%(latency)s ms' % results.server, quiet)\n",
+ "\n",
+ " if args.download:\n",
+ " printer('Testing download speed', quiet,\n",
+ " end=('', '\\n')[bool(debug)])\n",
+ " speedtest.download(\n",
+ " callback=callback,\n",
+ " threads=(None, 1)[args.single]\n",
+ " )\n",
+ " printer('Download: %0.2f M%s/s' %\n",
+ " ((results.download / 1000.0 / 1000.0) / args.units[1],\n",
+ " args.units[0]),\n",
+ " quiet)\n",
+ " else:\n",
+ " printer('Skipping download test', quiet)\n",
+ "\n",
+ " if args.upload:\n",
+ " printer('Testing upload speed', quiet,\n",
+ " end=('', '\\n')[bool(debug)])\n",
+ " speedtest.upload(\n",
+ " callback=callback,\n",
+ " pre_allocate=args.pre_allocate,\n",
+ " threads=(None, 1)[args.single]\n",
+ " )\n",
+ " printer('Upload: %0.2f M%s/s' %\n",
+ " ((results.upload / 1000.0 / 1000.0) / args.units[1],\n",
+ " args.units[0]),\n",
+ " quiet)\n",
+ " else:\n",
+ " printer('Skipping upload test', quiet)\n",
+ "\n",
+ " printer('Results:\\n%r' % results.dict(), debug=True)\n",
+ "\n",
+ " if not args.simple and args.share:\n",
+ " results.share()\n",
+ "\n",
+ " if args.simple:\n",
+ " printer('Ping: %s ms\\nDownload: %0.2f M%s/s\\nUpload: %0.2f M%s/s' %\n",
+ " (results.ping,\n",
+ " (results.download / 1000.0 / 1000.0) / args.units[1],\n",
+ " args.units[0],\n",
+ " (results.upload / 1000.0 / 1000.0) / args.units[1],\n",
+ " args.units[0]))\n",
+ " elif args.csv:\n",
+ " printer(results.csv(delimiter=args.csv_delimiter))\n",
+ " elif args.json:\n",
+ " printer(results.json())\n",
+ "\n",
+ " if args.share and not machine_format:\n",
+ " printer('Share results: %s' % results.share())\n",
+ "\n",
+ "\n",
+ "def main():\n",
+ " try:\n",
+ " shell()\n",
+ " except KeyboardInterrupt:\n",
+ " printer('\\nCancelling...', error=True)\n",
+ " except (SpeedtestException, SystemExit):\n",
+ " e = get_exception()\n",
+ " # Ignore a successful exit, or argparse exit\n",
+ " if getattr(e, 'code', 1) not in (0, 2):\n",
+ " msg = '%s' % e\n",
+ " if not msg:\n",
+ " msg = '%r' % e\n",
+ " raise SystemExit('ERROR: %s' % msg)\n",
+ "\n",
+ "\n",
+ "if __name__ == '__main__':\n",
+ " main()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "NgCsGSiDu1bY"
+ },
+ "source": [
+ "### Virtual Machine "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "qUU2tyDpSAB2"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Ubuntu VM updater \n",
+ "# ================================================================ #\n",
+ "\n",
+ "from IPython.display import HTML\n",
+ "\n",
+ "!apt update -qq -y &> /dev/null\n",
+ "!apt upgrade -qq -y &> /dev/null\n",
+ "!npm i -g npm &> /dev/null\n",
+ "\n",
+ "display(HTML(\"The system has been updated! \"))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "arzz5dBiSEDd"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Check VM status \n",
+ "Check_IP = True #@param {type:\"boolean\"}\n",
+ "Loop_Check = False #@param {type:\"boolean\"}\n",
+ "Loop_Interval = 4 #@param {type:\"slider\", min:1, max:15, step:1}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import time, requests\n",
+ "from IPython.display import clear_output\n",
+ "Loop = True\n",
+ "\n",
+ "try:\n",
+ " while Loop == True:\n",
+ " clear_output(wait=True)\n",
+ " !top -bcn1 -w512\n",
+ " if Check_IP: print(\"\\nYour Public IP: \" + requests.get('http://ip.42.pl/raw').text)\n",
+ " if Loop_Check == False:\n",
+ " Loop = False\n",
+ " else:\n",
+ " time.sleep(Loop_Interval)\n",
+ "except:\n",
+ " clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "YBpux5mNSHhG"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Get VM specification \n",
+ "Output_Format = \"TEXT\" #@param [\"TEXT\", \"HTML\", \"XML\", \"JSON\"]\n",
+ "Short_Output = True #@param {type:\"boolean\"}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "from google.colab import files\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "try:\n",
+ " Output_Format_Ext\n",
+ "except NameError:\n",
+ " get_ipython().system_raw(\"apt install lshw -qq -y\")\n",
+ "\n",
+ "if Short_Output:\n",
+ " Output_Format = \"txt\"\n",
+ " Output_Format2 = \"-short\"\n",
+ " Output_Format_Ext = \"txt\"\n",
+ "elif Output_Format == \"TEXT\":\n",
+ " Output_Format = \"txt\"\n",
+ " Output_Format2 = \"\"\n",
+ " Output_Format_Ext = \"txt\"\n",
+ "else:\n",
+ " Output_Format = Output_Format.lower()\n",
+ " Output_Format2 = \"-\"+Output_Format.lower()\n",
+ " Output_Format_Ext = Output_Format.lower()\n",
+ "\n",
+ "get_ipython().system_raw(\"lshw \" + Output_Format2 + \" > Specification.\" + Output_Format)\n",
+ "files.download(\"/content/Specification.\" + Output_Format_Ext)\n",
+ "get_ipython().system_raw(\"rm -f /content/Specification.$outputformatC\")\n",
+ "display(HTML(\"Sending log to your browser... \"))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "nJlifxF8_yv1"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Check for GPU (GPU runtime is needed) \n",
+ "# @markdown You should never ever connect to GPU runtime if you do not have any use for GPU at all! \n",
+ "# ================================================================ #\n",
+ "\n",
+ "gpu = !nvidia-smi --query-gpu=gpu_name,driver_version,memory.total --format=csv\n",
+ "\n",
+ "print(\"\")\n",
+ "print(gpu[1])\n",
+ "print(\"\")\n",
+ "print(\"(If the output shows nothing, that means you are not connected to GPU runtime)\")\n",
+ "print(\"----------------------------------------------------------------------------------------------------\")\n",
+ "print(\"The Tesla T4 and P100 are fast and support hardware encoding. The K80 and P4 are slower.\")\n",
+ "print(\"Sometimes resetting the instance in the 'runtime' tab will give you a different GPU.\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "6sxlwKm9SLBa"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Crash the VM \n",
+ "# @markdown Run this cell to crash the VM. ONLY when needed!
\n",
+ "# @markdown > You might need to run this cell when the VM is out of disk due to rclone caching.\n",
+ "# ================================================================ #\n",
+ "\n",
+ "some_str = ' ' * 5120000000000"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "OOpAjMjxsNd6"
+ },
+ "source": [
+ "# ✦ *EXPERIMENTAL* ✦ \n",
+ "\n",
+ "**Everything in this section is in EXPERIMENTAL state and/or UNFINISHED and/or LEFT AS IS!\n",
+ "\n",
+ "Any issue regarding this section will be IGNORED!** "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "UdiQLlm5zX3_"
+ },
+ "source": [
+ "## FFMPEG 1 \n",
+ "GPU runtime needed! "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "EFOqhHG6hOVH"
+ },
+ "source": [
+ "### ***Required to use Scripts:*** Install FFmpeg, VCSI & Mkvtoolnix"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "G3JHGE0Jtzme"
+ },
+ "outputs": [],
+ "source": [
+ "#@markdown ← Click Here to Install FFmpeg, VCSI, Mkvtoolnix, Firefox, Furiousmount & Handbrake \n",
+ "\n",
+ "#@title ← ឵឵Upgrade FFmpeg to v4.2.2 { vertical-output: true }\n",
+ "from IPython.display import clear_output\n",
+ "import os, urllib.request\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "pathDoneCMD = f'{HOME}/doneCMD.sh'\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/ttmg.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/totalleecher/\" \\\n",
+ " \"Google-Colab-CloudTorrent/master/res/ttmg.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/ttmg.py\")\n",
+ "\n",
+ "from ttmg import (\n",
+ " loadingAn,\n",
+ " textAn,\n",
+ ")\n",
+ "\n",
+ "loadingAn(name=\"lds\")\n",
+ "textAn(\"Installing Dependencies...\", ty='twg')\n",
+ "#os.system('pip install git+git://github.com/AWConant/jikanpy.git') //GPU Not supported\n",
+ "#os.system('add-apt-repository -y ppa:jonathonf/ffmpeg-4') //GPU Not supported\n",
+ "os.system('apt-get update')\n",
+ "os.system('apt-get install ffmpeg')\n",
+ "os.system('apt-get install mkvtoolnix')\n",
+ "os.system('pip install vcsi')\n",
+ "#os.system('sudo apt-get install synaptic')\n",
+ "#os.system('sudo apt install firefox')\n",
+ "os.system('sudo add-apt-repository ppa:stebbins/handbrake-releases -y')\n",
+ "os.system('sudo apt update -y')\n",
+ "os.system('sudo apt install --install-recommends handbrake-gtk handbrake-cli')\n",
+ "#os.system('sudo apt-get install furiusisomount')\n",
+ "\n",
+ "clear_output()\n",
+ "print(\"Install Finished\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ey6-UveDalxR"
+ },
+ "source": [
+ "### » Re-encode a Video to a Different Resolution (*H265*) - Need GPU - Nvidia Telsa P100 or T4 (Support Both Single & Batch Processing)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "tsY6jhC9SXvF"
+ },
+ "outputs": [],
+ "source": [
+ "#@title Check GPU\n",
+ "#@markdown Run this to connect to a Colab Instance, and see what GPU Google gave you.\n",
+ "\n",
+ "gpu = !nvidia-smi --query-gpu=gpu_name --format=csv\n",
+ "print(gpu[1])\n",
+ "print(\"The Tesla T4 and P100 are fast and support hardware encoding. The K80 and P4 are slower.\")\n",
+ "print(\"Sometimes resetting the instance in the 'runtime' tab will give you a different GPU.\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "Zam_JHPDalxc"
+ },
+ "outputs": [],
+ "source": [
+ "path = \"\" #@param {type:\"string\"}\n",
+ "save_txt = False #@param {type:\"boolean\"}\n",
+ "import os, uuid, re, IPython\n",
+ "import ipywidgets as widgets\n",
+ "import time\n",
+ "\n",
+ "from glob import glob\n",
+ "from IPython.display import HTML, clear_output\n",
+ "from google.colab import output, drive\n",
+ "\n",
+ "def mediainfo():\n",
+ " display(HTML(\" \"))\n",
+ "# print(path.split(\"/\")[::-1][0])\n",
+ " display(HTML(\" \"))\n",
+ "# media = !mediainfo \"$path\"\n",
+ "# media = \"\\n\".join(media).replace(os.path.dirname(path)+\"/\", \"\")\n",
+ " get_ipython().system_raw(\"\"\"mediainfo --LogFile=\"/root/.nfo\" \"$path\" \"\"\")\n",
+ " with open('/root/.nfo', 'r') as file:\n",
+ " media = file.read()\n",
+ " media = media.replace(os.path.dirname(path)+\"/\", \"\")\n",
+ " print(media)\n",
+ " get_ipython().system_raw(\"rm -f '/root/.nfo'\")\n",
+ " \n",
+ " if save_txt:\n",
+ " txt = path.rpartition('.')[0] + \".txt\"\n",
+ " if os.path.exists(txt):\n",
+ " get_ipython().system_raw(\"rm -f '$txt'\")\n",
+ " !curl -s https://pastebin.com/raw/TApKLQfM -o \"$txt\"\n",
+ " with open(txt, 'a+') as file:\n",
+ " file.write(\"\\n\\n\")\n",
+ " file.write(media)\n",
+ "\n",
+ "while not os.path.exists(\"/content/drive\"):\n",
+ " try:\n",
+ " drive.mount(\"/content/drive\")\n",
+ " clear_output(wait=True)\n",
+ " except:\n",
+ " clear_output()\n",
+ " \n",
+ "if not os.path.exists(\"/usr/bin/mediainfo\"):\n",
+ " get_ipython().system_raw(\"apt-get install mediainfo\")\n",
+ " \n",
+ "mediainfo()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "SHBPElqualx6"
+ },
+ "outputs": [],
+ "source": [
+ "import os, sys, re\n",
+ "#@markdown Encoder \n",
+ "Encoder = \"CPU\" #@param [\"GPU\", \"CPU\"]\n",
+ "codec = \"x264\" #@param [\"x264\", \"x265\"]\n",
+ "#@markdown Encoding all videos in folder \n",
+ "video_folder_path = '' #@param {type:\"string\"}\n",
+ "#@markdown ---\n",
+ "#@markdown Encoding selected videos \n",
+ "video_file_path1 = '' #@param {type:\"string\"}\n",
+ "video_file_path2 = '' #@param {type:\"string\"}\n",
+ "video_file_path3 = '' #@param {type:\"string\"}\n",
+ "video_file_path4 = '' #@param {type:\"string\"}\n",
+ "video_file_path5 = '' #@param {type:\"string\"}\n",
+ "\n",
+ "#counting\n",
+ "if video_file_path1 != \"\":\n",
+ " coa = 1\n",
+ "else:\n",
+ " coa = 0\n",
+ "\n",
+ "if video_file_path2 != \"\":\n",
+ " cob = 1\n",
+ "else:\n",
+ " cob = 0\n",
+ "\n",
+ "if video_file_path3 != \"\":\n",
+ " coc = 1\n",
+ "else:\n",
+ " coc = 0\n",
+ "\n",
+ "if video_file_path4 != \"\":\n",
+ " cod = 1\n",
+ "else:\n",
+ " cod = 0\n",
+ "\n",
+ "if video_file_path5 != \"\":\n",
+ " coe = 1\n",
+ "else:\n",
+ " coe = 0\n",
+ "\n",
+ "#@markdown ---\n",
+ "resolution = '360p' #@param [\"2160p\",\"1440p\",\"1080p\", \"720p\", \"480p\", \"360p\", \"240p\", \"same as input\"]\n",
+ "encode_setting = 'Advance' #@param [\"Advance\", \"HEVC\", \"HEVC 10 Bit\"]\n",
+ "file_type = 'mkv' #@param [\"mkv\", \"mp4\"]\n",
+ "rip_audio = False #@param {type:\"boolean\"}\n",
+ "rip_subtitle = False #@param {type:\"boolean\"}\n",
+ "\n",
+ "if rip_audio == False:\n",
+ " rip_audio_string = \"-acodec copy\"\n",
+ "else:\n",
+ " rip_audio_string = \"-an\"\n",
+ "\n",
+ "if rip_subtitle == False:\n",
+ " rip_subtitle_string = \"-scodec copy\"\n",
+ "else:\n",
+ " rip_subtitle_string = \"-sn\"\n",
+ "\n",
+ "\n",
+ "if resolution == '2160p':\n",
+ " w = '3840'\n",
+ "elif resolution == '1440p':\n",
+ " w = '2560'\n",
+ "elif resolution == '1080p':\n",
+ " w = '1980'\n",
+ "elif resolution == '720p':\n",
+ " w = '1280'\n",
+ "elif resolution == '480p':\n",
+ " w = '854'\n",
+ "elif resolution == '360p':\n",
+ " w = '640'\n",
+ "elif resolution == '240p':\n",
+ " w = '426'\n",
+ "else:\n",
+ " w = ''\n",
+ "\n",
+ "if (w == '3840' or w == '2560' or w == '1980' or w == '1280' or w == '854' or w == '640' or w == '426'):\n",
+ " scale_string = \"-vf scale=\"+(w)+\":-1:flags=lanczos\" \n",
+ "else:\n",
+ " scale_string = \"\"\n",
+ "\n",
+ "ext = \".mp4\",\".MP4\",\".MTS\",\".mts\",\".m2ts\",\".mkv\",\".avi\",\".MOV\",\".mov\",\".wmv\",\".WMV\",\".flv\",\".mpg\",\".webm\",\".WEBM\"\n",
+ "# As file at filePath is deleted now, so we should check if file exists or not not before deleting them\n",
+ "filePath = \"ffmpeg.txt\"\n",
+ "if os.path.exists(filePath):\n",
+ " os.remove(filePath)\n",
+ "\n",
+ "if video_folder_path == \"\":\n",
+ " #try:\n",
+ " f = open(\"ffmpeg.txt\", \"+w\")\n",
+ " x = (video_file_path1) + \"\\n\" + (video_file_path2) + \"\\n\" +(video_file_path3) + \"\\n\" +(video_file_path4) +\"\\n\" + (video_file_path5)\n",
+ " f.write(x)\n",
+ " f.close()\n",
+ " count = coa+cob+coc+cod+coe\n",
+ " #except:\n",
+ " #err = 1\n",
+ "\n",
+ "else:\n",
+ "#writing temp file\n",
+ " for file in os.listdir(video_folder_path):\n",
+ " if file.endswith(tuple(ext)):\n",
+ " \n",
+ " x = os.path.join(video_folder_path, file) \n",
+ " #print(x)\n",
+ " print(x, file=open(\"ffmpeg.txt\", \"+a\")) \n",
+ "\n",
+ "#counting line\n",
+ " thefilepath = \"ffmpeg.txt\"\n",
+ " count = len(open(thefilepath).readlines( ))\n",
+ "\n",
+ "#@markdown ---\n",
+ "#@markdown Advance Settings \n",
+ "#@markdown Video Setting \n",
+ "preset = 'slow' #@param [\"slow\", \"medium\", \"fast\", \"hq\", \"hp\", \"bd\", \"ll\", \"llhq\", \"llhp\", \"lossless\", \"losslesshp\"]\n",
+ "level = '5.2' #@param [\"default\",\"4.1\", \"5.1\", \"5.2\", \"6.2\"]\n",
+ "tier = 'main' #@param [\"default\",\"main\", \"high\"]\n",
+ "#@markdown Setting only for GPU Encoding
\n",
+ "profile = 'main' #@param [\"main\", \"main10\", \"rext\"]\n",
+ "pixfmt = 'p010le' #@param [\"nv12\", \"yuv420p\", \"p010le\", \"yuv444p\", \"p016le\", \"yuv444p16le\"]\n",
+ "rc = 'vbr_hq' #@param [\"vbr\", \"cbr\", \"vbr_2pass\", \"ll_2pass_size\", \"vbr_hq\", \"cbr_hq\"]\n",
+ "rcla = '32' #@param [\"8\", \"16\", \"32\", \"64\"]\n",
+ "overall_bitrate = 2500 #@param {type:\"slider\", min:500, max:10000, step:100}\n",
+ "max_bitrate = 20000 #@param {type:\"slider\", min:500, max:50000, step:100}\n",
+ "buffer_size = 60000 #@param {type:\"slider\", min:500, max:90000, step:100}\n",
+ "deblock = -3 #@param {type:\"slider\", min:-6, max:6, step:1}\n",
+ "reframe = 5 #@param {type:\"slider\", min:1, max:6, step:1}\n",
+ "surfaces = 64 #@param {type:\"slider\", min:0, max:64, step:1}\n",
+ "#@markdown Setting only for CPU Encoding
\n",
+ "profile_cpu = 'main10' #@param [\"main10\"]\n",
+ "pixfmt_cpu = 'yuv420p10le' #@param [\"yuv420p\",\"yuv420p10le\",\"yuv444p\",\"yuv444p16le\"]\n",
+ "threads = 16 #@param {type:\"slider\", min:0, max:16, step:1}\n",
+ "crf = 28 #@param {type:\"slider\", min:0, max:30, step:1}\n",
+ "\n",
+ "\n",
+ "if level != \"default\":\n",
+ " l_string = \"-level \"+str(level)\n",
+ "else:\n",
+ " l_string =\"\"\n",
+ "\n",
+ "if tier != \"default\":\n",
+ " t_string = \"-tier \"+str(tier)\n",
+ "else:\n",
+ " t_string = \"\"\n",
+ "\n",
+ "#tp = '1' #@param [\"0\", \"1\"]\n",
+ "#cq = '21' #@param {type:\"string\"}\n",
+ "#qm ='21' #@param {type:\"string\"}\n",
+ "#qmx = '27' #@param {type:\"string\"}\n",
+ "#qp = '23' #@param {type:\"string\"}\n",
+ "#qb = '25' #@param {type:\"string\"}\n",
+ "#qi = '21' #@param {type:\"string\"}\n",
+ "\n",
+ "#@markdown Audio Setting \n",
+ "\n",
+ "audio_output = 'No audio' #@param [\"None\", \"copy\", \"flac\", \"aac\", \"libopus\", \"eac3\", \"No audio\", \"same as input\"]\n",
+ "channel = 'same as input' #@param [\"DownMix 2CH\", \"same as input\"]\n",
+ "\n",
+ "if audio_output == \"same as input\":\n",
+ " audio_string = \"-acodec copy\"\n",
+ "elif audio_output == \"No audio\":\n",
+ " audio_string = \"-an\"\n",
+ "elif audio_output == \"None\":\n",
+ " audio_string = \"\"\n",
+ "else:\n",
+ " audio_string = \"-c:a \"+(audio_output)\n",
+ "\n",
+ "if channel == \"DownMix 2CH\":\n",
+ " channel_string =\"-ac 2\"\n",
+ "else:\n",
+ " channel_string =\"\"\n",
+ "\n",
+ "#@markdown Subtitle Setting \n",
+ "#@markdown Please use ass
file for hardsub \n",
+ "hardsub = False #@param {type:\"boolean\"}\n",
+ "subtitle_option = 'same as input' #@param [\"None\",\"No sub\", \"Add custom sub\",\"same as input\"]\n",
+ "custom_subtitle_path = \"\" #@param {type:\"string\"}\n",
+ "\n",
+ "#@markdown Custom Added Setting \n",
+ "custom_command = \"\" #@param {type:\"string\"}\n",
+ "\n",
+ "\n",
+ "if hardsub == False:\n",
+ "\n",
+ " if subtitle_option == \"No sub\":\n",
+ " subtitle_string = \"-sn\"\n",
+ " elif subtitle_option == \"same as input\":\n",
+ " subtitle_string = \"-scodec copy\"\n",
+ " elif subtitle_option == \"None\":\n",
+ " subtitle_string = \"\"\n",
+ " else:\n",
+ " subtitle_string = \"-i \"+(custom_subtitle_path)\n",
+ "\n",
+ "else:\n",
+ " subtitle_string = \"ass=\"+(custom_subtitle_path)\n",
+ "#=================\n",
+ "if custom_command != \"\":\n",
+ " c_string = custom_command\n",
+ "else:\n",
+ " c_string = \"\"\n",
+ "#=================\n",
+ "\n",
+ "os.environ['ps'] = preset\n",
+ "os.environ['pf'] = profile\n",
+ "os.environ['pf_cpu'] = profile_cpu\n",
+ "os.environ['pfm'] = pixfmt\n",
+ "os.environ['pfmcpu'] = pixfmt_cpu\n",
+ "os.environ['br'] = str(overall_bitrate)\n",
+ "os.environ['max'] = str(max_bitrate)\n",
+ "os.environ['buff'] = str(buffer_size)\n",
+ "os.environ['de'] = str(deblock)\n",
+ "os.environ['ref'] = str(reframe)\n",
+ "os.environ['sur'] = str(surfaces)\n",
+ "os.environ['lv'] = l_string\n",
+ "os.environ['ti'] = t_string\n",
+ "os.environ['rc'] = rc\n",
+ "os.environ['rl'] = rcla\n",
+ "os.environ['thr'] = str(threads)\n",
+ "os.environ['crf'] = str(crf)\n",
+ "os.environ['res'] = resolution\n",
+ "#os.environ['tp'] = tp\n",
+ "#os.environ['cq'] = cq\n",
+ "#os.environ['qP'] = qp\n",
+ "#os.environ['qB'] = qb\n",
+ "#os.environ['qI'] = qi\n",
+ "#os.environ['qm'] = qm\n",
+ "#os.environ['qmx'] = qmx\n",
+ "os.environ['scs'] = str(scale_string)\n",
+ "os.environ['aus'] = audio_string\n",
+ "os.environ['chc'] = channel_string\n",
+ "os.environ['sus'] = subtitle_string\n",
+ "os.environ['cus'] = str(c_string)\n",
+ "#=================\n",
+ "#Batch Encoding\n",
+ "if count != 0:\n",
+ " f=open('ffmpeg.txt')\n",
+ " lines=f.readlines()\n",
+ "\n",
+ " i = 0\n",
+ " while i < count:\n",
+ " video_file_path = lines[i]\n",
+ " video_file_path = video_file_path.rstrip(\"\\n\")\n",
+ " #print(video_file_path)\n",
+ "\n",
+ " delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ " testsplit = video_file_path.split(\"/\")\n",
+ " filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ " filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ " resolution_raw = re.search(\"[^p]{3,4}\", resolution)\n",
+ " output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "\n",
+ " os.environ['inputFile'] = video_file_path\n",
+ " os.environ['outputPath'] = output_file_path.group(0)\n",
+ " os.environ['fileName'] = filename_raw\n",
+ " os.environ['fileType'] = file_type\n",
+ " os.environ['resolutionWidth'] = resolution_raw.group(0)\n",
+ "\n",
+ " if Encoder == \"GPU\":\n",
+ " if codec == \"x265\":\n",
+ " if encode_setting == \"Advance\":\n",
+ "\n",
+ " !ffmpeg -hwaccel cuvid -i \"$inputFile\" -stats -flags +loop -c:v hevc_nvenc -preset:v \"$ps\" -rc \"$rc\" -2pass 1 -b:v \"$br\"k -maxrate \"$max\"k -bufsize \"$buff\"k -profile:v \"$pf\" $lv $ti -pix_fmt \"$pfm\" -rc-lookahead \"$rl\" -no-scenecut 1 -weighted_pred 1 -deblock:v \"$de\":\"$de\" -refs:v \"$ref\" -surfaces \"$sur\" $scs $aus $chs $sus $cus \"$outputPath\"/\"$fileName\"-\"$res\".\"$fileType\" \n",
+ " \n",
+ " elif encode_setting == \"HEVC\":\n",
+ " !ffmpeg -hwaccel cuvid -i \"$inputFile\" -stats -flags +loop -c:v hevc_nvenc -preset:v slow -rc vbr_hq -2pass 1 -b:v 2500k -maxrate 20M -bufsize 60M -cq 1 -forced-idr 1 -nonref_p 1 -pix_fmt p010le -rc-lookahead 32 -no-scenecut 1 -weighted_pred 1 -deblock:v -3:-3 -refs:v 5 -surfaces 64 $scs $aus $sus \"$outputPath\"/\"$fileName\"-\"$res\".\"$fileType\" \n",
+ "\n",
+ " else:\n",
+ " !ffmpeg -hwaccel cuvid -stats -flags +loop -c:v hevc_nvenc -preset:v slow -rc vbr_hq -2pass 1 -b:v 2500k -maxrate 20M -bufsize 60M -cq 1 -forced-idr 1 -nonref_p 1 -profile:v main10 -pix_fmt p010le -rc-lookahead 32 -no-scenecut 1 -weighted_pred 1 -deblock:v -3:-3 -refs:v 5 -surfaces 64 $scs $aus $sus \"$outputPath\"/\"$fileName\"-\"$res\".\"$fileType\" \n",
+ " else:\n",
+ " !ffmpeg -hwaccel cuvid -i \"$inputFile\" -c:v h264_cuvid $scs $aus $sus \"$outputPath\"/\"$fileName\"-\"$res\".\"$fileType\" \n",
+ " \n",
+ " else:\n",
+ " if codec == \"x265\":\n",
+ " if encode_setting == \"Advance\":\n",
+ " !ffmpeg -i \"$inputFile\" -flags +loop -c:v libx265 -profile:v \"$pf_cpu\" $lv $ti -pix_fmt \"$pfmcpu\" -threads \"$thr\" -thread_type frame -preset:v \"$ps\" -crf \"$crf\" -x265-params \"rc-lookahead=40:bframes=4:b-adapt=2:ref=6:aq-mode=0:aq-strength=0:aq-motion=0:me=hex:subme=3:max-merge=3:weightb=1:no-fast-intra=1:tskip-fast=0:rskip=0:strong-intra-smoothing=0:b-intra=1:early-skip=0:sao=0:rd=1:psy-rd=0:deblock=-5,-5\" $scs $aus $chs $sus $cus \"$outputPath\"/\"$fileName\"-\"$res\".\"$fileType\" \n",
+ " \n",
+ " elif encode_setting == \"HEVC\":\n",
+ " !ffmpeg -i \"$inputFile\" -c:v libx265 -crf 28 -threads 6 -thread_type frame $scs $aus $sus \"$outputPath\"/\"$fileName\"-\"$res\".\"$fileType\" \n",
+ "\n",
+ " else:\n",
+ " !ffmpeg -i \"$inputFile\" -c:v libx265 -profile:v main10 -crf 28 -threads 6 -thread_type frame $scs $aus $sus \"$outputPath\"/\"$fileName\"-\"$res\".\"$fileType\" \n",
+ " else:\n",
+ " !ffmpeg -hide_banner -i \"$inputFile\" -c:v libx264 -preset \"$ps\" -crf \"$crf\" -threads \"$thr\" -strict experimental $scs $aus $sus \"$outputPath\"/\"$fileName\"-\"$res\".\"$fileType\" \n",
+ "\n",
+ " i += 1\n",
+ "\n",
+ " else:\n",
+ " print(\"All Finished\")\n",
+ " os.remove(filePath)\n",
+ "else:\n",
+ " print(\"Please input file or folder path\")\n",
+ "#End of Code V1.5 - Codemater - "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "GahMjYf8miNs"
+ },
+ "source": [
+ "### » Generate Thumbnails - Preview from Video "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "0nY7QbDIrnGl"
+ },
+ "outputs": [],
+ "source": [
+ "#@markdown ← Click Here to generate thumbnail for all video in input folder path \n",
+ "\n",
+ "import os\n",
+ "folder_path = \"\" #@param {type:\"string\"}\n",
+ "ext = \".mp4\",\".MP4\",\".MTS\",\".mts\",\".m2ts\",\".mkv\",\".avi\",\".MOV\",\".mov\",\".wmv\",\".WMV\",\".flv\",\".mpg\",\".webm\",\".WEBM\"\n",
+ "video_path = '' #@param {type:\"string\"}\n",
+ "\n",
+ "\n",
+ "#counting\n",
+ "if video_path != \"\":\n",
+ " count = 1\n",
+ "else:\n",
+ " count = 0\n",
+ "\n",
+ "# As file at filePath is deleted now, so we should check if file exists or not not before deleting them\n",
+ "filePath = \"vcsi.txt\"\n",
+ "if os.path.exists(filePath):\n",
+ " os.remove(filePath)\n",
+ "\n",
+ "\n",
+ "\n",
+ "if (folder_path == \"\") and (video_path != \"\"):\n",
+ " #try:\n",
+ " f = open(\"vcsi.txt\", \"+w\")\n",
+ " f.write(video_path)\n",
+ " f.close()\n",
+ " count = 1\n",
+ "\n",
+ "elif (folder_path == \"\") and (video_path == \"\"):\n",
+ " count = 0\n",
+ "\n",
+ "else:\n",
+ "#writing temp file\n",
+ " for file in os.listdir(folder_path):\n",
+ " if file.endswith(tuple(ext)):\n",
+ " \n",
+ " x = os.path.join(folder_path, file) \n",
+ " #print(x)\n",
+ " print(x, file=open(\"vcsi.txt\", \"+a\")) \n",
+ "\n",
+ "#counting line\n",
+ " thefilepath = \"vcsi.txt\"\n",
+ " count = len(open(thefilepath).readlines( ))\n",
+ "\n",
+ "\n",
+ "import os, sys, re\n",
+ "from IPython.display import Image, display\n",
+ "os.makedirs(\"/content/drive/My Drive/Thumbnail\", exist_ok=True)\n",
+ "\n",
+ "output_file_type = 'png' #@param [\"png\", \"jpg\"]\n",
+ "creation_engine = 'vcsi' #@param [\"ffmpeg\", \"vcsi\"]\n",
+ "output_path = 'same folder' #@param [\"same folder\", \"My Drive/Thumbnail\"]\n",
+ "#@markdown Eg : gird 3 = 3x3
\n",
+ "grid = 4 #@param {type:\"slider\", min:1, max:20, step:1}\n",
+ "default_grid = True #@param {type:\"boolean\"}\n",
+ "time_stamp = False #@param {type:\"boolean\"}\n",
+ "\n",
+ "\n",
+ "if time_stamp == True:\n",
+ " t_string = \"-t\"\n",
+ "else:\n",
+ " t_string = \"\"\n",
+ "\n",
+ "if default_grid == False:\n",
+ " g_string = \"-g \" + str(grid) + \"x\" + str(grid) \n",
+ "else:\n",
+ " g_string = \"\"\n",
+ "\n",
+ "os.environ['ts'] = t_string\n",
+ "os.environ['gs'] = g_string\n",
+ "#Batch Encoding\n",
+ "if count != 0:\n",
+ " f=open('vcsi.txt')\n",
+ " lines=f.readlines()\n",
+ "\n",
+ " i = 0\n",
+ " while i < count:\n",
+ " video_file_path = lines[i]\n",
+ " video_file_path = video_file_path.rstrip(\"\\n\")\n",
+ " print(video_file_path)\n",
+ " \n",
+ " output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ " output_file_path_raw = output_file_path.group(0)\n",
+ " delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ " filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ " filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ " file_extension = re.search(\".{3}$\", filename)\n",
+ " file_extension_raw = file_extension.group(0)\n",
+ "\n",
+ " os.environ['inputFile'] = video_file_path\n",
+ " os.environ['outputPath'] = output_file_path_raw\n",
+ " os.environ['outputExtension'] = output_file_type\n",
+ " os.environ['fileName'] = filename_raw\n",
+ " os.environ['fileExtension'] = file_extension_raw\n",
+ "\n",
+ " if output_path == \"same folder\":\n",
+ " if creation_engine == 'ffmpeg':\n",
+ " !ffmpeg -hide_banner -i \"$inputFile\" -vframes 1 -q:v 0 -vf \"select=not(mod(n\\,200)),scale=-1:480,tile=3x2\" -an \"$outputPath\"/\"$fileName\"_thumbnails.\"$outputExtension\"\n",
+ "\n",
+ " if output_path == \"same folder\":\n",
+ " if creation_engine == 'vcsi':\n",
+ " !vcsi $ts $gs \"$inputFile\" -o \"$outputPath\"/\"$fileName\"_thumbnails.\"$outputExtension\"\n",
+ "\n",
+ " if not output_path == \"same folder\":\n",
+ " !vcsi $ts $gs \"$inputFile\" -o \"/content/drive/My Drive/Thumbnail\"/\"$fileName\"_thumbnails.\"$outputExtension\"\n",
+ "\n",
+ " i += 1\n",
+ "\n",
+ " else:\n",
+ " print(\"All Finished\")\n",
+ " os.remove(filePath)\n",
+ "else:\n",
+ " print(\"Please video file or folder path\")\n",
+ "#End of Code V1.2 - Codemater - "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "NQ0TxfKeghR8"
+ },
+ "source": [
+ "### » Misc."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "Ls4O5VLwief-"
+ },
+ "outputs": [],
+ "source": [
+ "#@title Convert *.mkv* ➔ *.mp4* (Lossless)\n",
+ "import os, sys, re\n",
+ "\n",
+ "video_file_path = \"\" #@param {type:\"string\"}\n",
+ "\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "output_file_path_raw = output_file_path.group(0)\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "\n",
+ "os.environ['inputFile'] = video_file_path\n",
+ "os.environ['outputFile'] = filename_raw\n",
+ "os.environ['outputPath'] = output_file_path_raw\n",
+ "\n",
+ "!ffmpeg -hide_banner -i \"$inputFile\" -c copy -strict experimental \"$outputPath\"\"$outputFile\".mp4"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "iFBUeQhn7QTc"
+ },
+ "outputs": [],
+ "source": [
+ "#@title Convert Trim Video File (Lossless)\n",
+ "\n",
+ "import os, sys, re\n",
+ "\n",
+ "video_file_path = \"\" #@param {type:\"string\"}\n",
+ "start_time = \"00:00:00.000\" #@param {type:\"string\"}\n",
+ "end_time = \"00:01:00.000\" #@param {type:\"string\"}\n",
+ "\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "output_file_path_raw = output_file_path.group(0)\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "file_extension = re.search(\".{3}$\", filename)\n",
+ "file_extension_raw = file_extension.group(0)\n",
+ "\n",
+ "os.environ['inputFile'] = video_file_path\n",
+ "os.environ['outputPath'] = output_file_path_raw\n",
+ "os.environ['startTime'] = start_time\n",
+ "os.environ['endTime'] = end_time\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "os.environ['fileExtension'] = file_extension_raw\n",
+ "\n",
+ "!ffmpeg -hide_banner -i \"$inputFile\" -ss \"$startTime\" -to \"$endTime\" -c copy \"$outputPath\"/\"$fileName\"-TRIM.\"$fileExtension\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "nSeO98YQoTJe"
+ },
+ "outputs": [],
+ "source": [
+ "#@title Extract Audio from Video File (Lossless)\n",
+ "\n",
+ "import os, sys, re\n",
+ "\n",
+ "video_file_path = \"\" #@param {type:\"string\"}\n",
+ "output_file_extension = 'm4a' #@param [\"m4a\", \"mp3\", \"opus\", \"flac\", \"wav\"]\n",
+ "\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "\n",
+ "os.environ['inputFile'] = video_file_path\n",
+ "os.environ['outputPath'] = output_file_path.group(0)\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "os.environ['fileType'] = output_file_extension\n",
+ "\n",
+ "!ffmpeg -hide_banner -i \"$inputFile\" -vn -c:a copy \"$outputPath\"/\"$fileName\"-audio.\"$fileType\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "CEHi5EMm9lXG"
+ },
+ "outputs": [],
+ "source": [
+ "#@title Crop Video\n",
+ "import os, sys, re\n",
+ "\n",
+ "video_file_path = \"\" #@param {type:\"string\"}\n",
+ "out_width = \"1280\" #@param {type:\"string\"}\n",
+ "out_height = \"200\" #@param {type:\"string\"}\n",
+ "starting_position_x = \"0\" #@param {type:\"string\"}\n",
+ "starting_position_y = \"300\" #@param {type:\"string\"}\n",
+ "\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "output_file_path_raw = output_file_path.group(0)\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "file_extension = re.search(\".{3}$\", filename)\n",
+ "file_extension_raw = file_extension.group(0)\n",
+ "\n",
+ "os.environ['inputFile'] = video_file_path\n",
+ "os.environ['outputPath'] = output_file_path_raw\n",
+ "os.environ['outWidth'] = out_width\n",
+ "os.environ['outHeight'] = out_height\n",
+ "os.environ['positionX'] = starting_position_x\n",
+ "os.environ['positionY'] = starting_position_y\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "os.environ['fileExtension'] = file_extension_raw\n",
+ "\n",
+ "!ffmpeg -hide_banner -i \"$inputFile\" -filter:v \"crop=$outWidth:$outHeight:$positionX:$positionY\" \"$outputPath\"/\"$fileName\"-CROP.\"$fileExtension\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "ee5omyu53kv0"
+ },
+ "outputs": [],
+ "source": [
+ "#@title Extract Individual Frames from Video (*Lossless*)\n",
+ "import os, sys, re\n",
+ "\n",
+ "video_file_path = \"\" #@param {type:\"string\"}\n",
+ "start_time = \"00:00:00.000\" #@param {type:\"string\"}\n",
+ "end_time = \"00:01:00.000\" #@param {type:\"string\"}\n",
+ "frame_rate = \"23.976\" #@param {type:\"string\"}\n",
+ "\n",
+ "#@markdown This will create a folder in the same directory titled \"`Extracted Frames`\"\n",
+ "#@markdown * [*Example*](https://yuju.pw/y/36pP.png) *of output folder*\n",
+ "\n",
+ "\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "output_file_path_raw = output_file_path.group(0)\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "file_extension = re.search(\".{3}$\", filename)\n",
+ "file_extension_raw = file_extension.group(0)\n",
+ "\n",
+ "os.environ['inputFile'] = video_file_path\n",
+ "os.environ['outputPath'] = output_file_path_raw\n",
+ "os.environ['startTime'] = start_time\n",
+ "os.environ['endTime'] = end_time\n",
+ "os.environ['frameRate'] = frame_rate\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "os.environ['fileExtension'] = file_extension_raw\n",
+ "\n",
+ "!mkdir \"$outputPath\"/\"Extracted Frames\"\n",
+ "!ffmpeg -hide_banner -i \"$inputFile\" -ss \"$startTime\" -to \"$endTime\" -r \"$frameRate\"/1 \"$outputPath\"/\"Extracted Frames\"/frame%04d.png\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "qRVrWJDPFvYY"
+ },
+ "outputs": [],
+ "source": [
+ "#@markdown ← Verify Tracks for Video \n",
+ "import os, sys, re\n",
+ "video_file_path = \"\" #@param {type:\"string\"}\n",
+ "\n",
+ "!mkvmerge -i \"$video_file_path\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "IVoQDyfT06bN"
+ },
+ "outputs": [],
+ "source": [
+ "#@title Extract Subtitle from Video \n",
+ "import os, sys, re\n",
+ "\n",
+ "video_file_path = \"\" #@param {type:\"string\"}\n",
+ "output_file_type = 'idx/sub' #@param [\"srt\", \"ass\", \"idx/sub\"]\n",
+ "\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "output_file_path_raw = output_file_path.group(0)\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "file_extension = re.search(\".{3}$\", filename)\n",
+ "file_extension_raw = file_extension.group(0)\n",
+ "\n",
+ "os.environ['inputFile'] = video_file_path\n",
+ "os.environ['outputPath'] = output_file_path_raw\n",
+ "os.environ['outputExtension'] = output_file_type\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "os.environ['fileExtension'] = file_extension_raw\n",
+ "\n",
+ "if output_file_type == 'srt':\n",
+ " !ffmpeg -hide_banner -i \"$inputFile\" \"$outputPath\"/\"$fileName\".\"$outputExtension\"\n",
+ "\n",
+ "if output_file_type == 'ass':\n",
+ " !ffmpeg -hide_banner -i \"$inputFile\" \"$outputPath\"/\"$fileName\".\"$outputExtension\"\n",
+ "\n",
+ "if output_file_type == 'idx/sub':\n",
+ " !mkvextract \"$inputFile\" tracks 2:\"$outputPath\"/\"$fileName\".idx"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "aURlOf9BC1P3"
+ },
+ "outputs": [],
+ "source": [
+ "#@title Convert Audio Filetype (*mp3, m4a, ogg, flac, etc.*)\n",
+ "import os, sys, re\n",
+ "\n",
+ "audio_file_path = \"\" #@param {type:\"string\"}\n",
+ "output_file_type = \"mp3\" #@param [\"mp3\", \"ogg\", \"m4a\", \"opus\", \"flac\", \"alac\", \"wav\"]\n",
+ "\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", audio_file_path)\n",
+ "output_file_path_raw = output_file_path.group(0)\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", audio_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "file_extension = re.search(\".{3}$\", filename)\n",
+ "file_extension_raw = file_extension.group(0)\n",
+ "\n",
+ "os.environ['inputFile'] = audio_file_path\n",
+ "os.environ['outputPath'] = output_file_path_raw\n",
+ "os.environ['fileExtension'] = output_file_type\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "\n",
+ "!ffmpeg -hide_banner -i \"$inputFile\" \"$outputPath\"/\"$fileName\"converted.\"$fileExtension\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Ja95mvvq8oei"
+ },
+ "source": [
+ "### Extract HardSub (*Code still pending - Require python 3.7*)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "nsT83IDywPFe"
+ },
+ "outputs": [],
+ "source": [
+ "#@title\n",
+ "#@markdown ⬅️ Click Here to START server \n",
+ "\n",
+ "!sudo apt-get update \n",
+ "!sudo apt install tesseract-ocr\n",
+ "!sudo apt install libtesseract-dev\n",
+ "!sudo apt-get install tesseract-ocr-eng-mya\n",
+ "!sudo pip install pytesseract\n",
+ "!pip3 install opencv-python\n",
+ "!sudo apt-get install libopencv-dev\n",
+ "!pip install videocr\n",
+ "\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "print(\"Server Started Successfully\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "EzF2X0m7FIku"
+ },
+ "outputs": [],
+ "source": [
+ "!pip install progressbar2 baidu-aip opencv-python-headless numpy"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "3kabgg9wFmjv"
+ },
+ "outputs": [],
+ "source": [
+ "!git clone https://github.com/fanyange/ocr_video_hardcoded_subtitles.git"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "D313rmPQFrQ3"
+ },
+ "outputs": [],
+ "source": [
+ "%cd /content/ocr_video_hardcoded_subtitles"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "40JsjJCBxWcn"
+ },
+ "outputs": [],
+ "source": [
+ "from videocr import get_subtitles\n",
+ "\n",
+ "if __name__ == '__main__': # This check is mandatory for Windows.\n",
+ " print(get_subtitles('video.mp4', lang='chi_sim+eng', sim_threshold=70, conf_threshold=65))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "KXWYsnPOxJVd"
+ },
+ "outputs": [],
+ "source": [
+ "get_subtitles(\n",
+ " video_path: str, lang='eng', time_start='0:00', time_end='',\n",
+ " conf_threshold=65, sim_threshold=90, use_fullframe=False)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "Dnkbv5UyGzMJ"
+ },
+ "outputs": [],
+ "source": [
+ "%cd /content"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "knnSIyZzG2gs"
+ },
+ "outputs": [],
+ "source": [
+ "!git clone https://github.com/aritra1999/Video-OCR"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "dJYInqIZHAPJ"
+ },
+ "outputs": [],
+ "source": [
+ "%cd /content/Video-OCR"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "anaBbX-VHEwk"
+ },
+ "outputs": [],
+ "source": [
+ "!pip install -r reuirements.txt\n",
+ "!python final.py"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "BKvHp7QUKMGL"
+ },
+ "outputs": [],
+ "source": [
+ "!git clone https://github.com/rflynn/mangold.git"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "Sc5dglFDKU80"
+ },
+ "outputs": [],
+ "source": [
+ "%cd /content/mangold"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "BUm3Yn42KbHD"
+ },
+ "outputs": [],
+ "source": [
+ "!python ocr1.py pitrain.png"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "CD36vcpf2FSb"
+ },
+ "source": [
+ "## FFMPEG 2 \n",
+ "GPU runtime needed! "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "RDHuIkoi6l9a"
+ },
+ "source": [
+ "### » Display Media File Metadata"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "Sv8au_RO6WUs"
+ },
+ "outputs": [],
+ "source": [
+ "import os, sys, re\n",
+ "\n",
+ "media_file_path = \"\" #@param {type:\"string\"}\n",
+ "\n",
+ "os.environ['inputFile'] = media_file_path\n",
+ "\n",
+ "!ffmpeg -i \"$inputFile\" -hide_banner"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "X4yIG_nqYAoH"
+ },
+ "source": [
+ "> *You can ignore the* \"`At least one output file must be specified`\" *error after running this.*\n",
+ "\n",
+ "\n"
+ ]
+ },
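+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form"
+ },
+ "outputs": [],
+ "source": [
+ "# @markdown ← (Optional) Display metadata with ffprobe instead \n",
+ "# @markdown > A minimal sketch of an alternative: ffprobe ships alongside ffmpeg and prints the same stream/container information without the \"At least one output file must be specified\" error.\n",
+ "import os\n",
+ "\n",
+ "media_file_path = \"\" #@param {type:\"string\"}\n",
+ "\n",
+ "os.environ['inputFile'] = media_file_path\n",
+ "\n",
+ "!ffprobe -hide_banner \"$inputFile\""
+ ]
+ },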
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "66I2t2sQ2SMq"
+ },
+ "source": [
+ "### » Convert *Video File* ➔ *.mp4* (*Lossless*)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "o6fcC2wN2SM8"
+ },
+ "outputs": [],
+ "source": [
+ "import os, sys, re\n",
+ "\n",
+ "video_file_path = \"\" #@param {type:\"string\"}\n",
+ "\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "output_file_path_raw = output_file_path.group(0)\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "file_extension = re.search(\".{3}$\", filename)\n",
+ "file_extension_raw = file_extension.group(0)\n",
+ "\n",
+ "os.environ['inputFile'] = video_file_path\n",
+ "os.environ['outputPath'] = output_file_path_raw\n",
+ "os.environ['startTime'] = start_time\n",
+ "os.environ['endTime'] = end_time\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "os.environ['fileExtension'] = file_extension_raw\n",
+ "\n",
+ "!ffmpeg -hide_banner -i \"$inputFile\" -c copy -strict -2 \"$outputPath\"/\"$fileName\".mp4"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "NObEcBWAJoaz"
+ },
+ "source": [
+ "### » Convert *Video File* ➔ *.mkv* (*Lossless*)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "zsx4JFLRJoa0"
+ },
+ "outputs": [],
+ "source": [
+ "import os, sys, re\n",
+ "\n",
+ "video_file_path = \"\" #@param {type:\"string\"}\n",
+ "\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "output_file_path_raw = output_file_path.group(0)\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "file_extension = re.search(\".{3}$\", filename)\n",
+ "file_extension_raw = file_extension.group(0)\n",
+ "\n",
+ "os.environ['inputFile'] = video_file_path\n",
+ "os.environ['outputPath'] = output_file_path_raw\n",
+ "os.environ['startTime'] = start_time\n",
+ "os.environ['endTime'] = end_time\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "os.environ['fileExtension'] = file_extension_raw\n",
+ "\n",
+ "!ffmpeg -hide_banner -i \"$inputFile\" -c copy -strict -2 \"$outputPath\"/\"$fileName\".mkv"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "FpJXJiRl6-gK"
+ },
+ "source": [
+ "### » Trim Video File (*Lossless*)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "8rjW6Fcb2SN0"
+ },
+ "outputs": [],
+ "source": [
+ "import os, sys, re\n",
+ "\n",
+ "video_file_path = \"\" #@param {type:\"string\"}\n",
+ "start_time = \"00:00:00.000\" #@param {type:\"string\"}\n",
+ "end_time = \"00:01:00.000\" #@param {type:\"string\"}\n",
+ "\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "output_file_path_raw = output_file_path.group(0)\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "file_extension = re.search(\".{3}$\", filename)\n",
+ "file_extension_raw = file_extension.group(0)\n",
+ "\n",
+ "os.environ['inputFile'] = video_file_path\n",
+ "os.environ['outputPath'] = output_file_path_raw\n",
+ "os.environ['startTime'] = start_time\n",
+ "os.environ['endTime'] = end_time\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "os.environ['fileExtension'] = file_extension_raw\n",
+ "\n",
+ "!ffmpeg -hide_banner -i \"$inputFile\" -ss \"$startTime\" -to \"$endTime\" -c copy \"$outputPath\"/\"$fileName\"-TRIM.\"$fileExtension\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "SNDGdMRn3PA-"
+ },
+ "source": [
+ "### » Crop Video"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "KFcIThDuBii_"
+ },
+ "source": [
+ " Crop Variables Explanation:\n",
+ "\n",
+ "* `out_width` = The width of your cropped video file.\n",
+ "* `out_height` = The height of your cropped video file.\n",
+ "* `starting_position_x` & `starting_position_y` = These values define the x & y coordinates of the top left corner of your original video to start cropping from.\n",
+ "\n",
+ "###### *Example: For cropping the black bars from a video that looked like* [this](https://yuju.pw/y/312r.png):\n",
+ "* *For your starting coordinates* (`x` , `y`) *you would use* (`0` , `138`).\n",
+ "* *For* `out_width` *you would use* `1920`. *And for* `out_height` *you would use `804`.*\n",
+ "\n",
+ "\n",
+ "\n"
+ ]
+ },
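+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# A quick sanity check of the example values above: for centred black bars,\n",
+ "# the starting y position is simply half of the leftover height.\n",
+ "src_w, src_h = 1920, 1080  # source resolution from the example\n",
+ "out_w, out_h = 1920, 804   # size of the picture to keep\n",
+ "x = (src_w - out_w) // 2   # -> 0\n",
+ "y = (src_h - out_h) // 2   # -> 138\n",
+ "print(f\"crop={out_w}:{out_h}:{x}:{y}\")  # -> crop=1920:804:0:138"
+ ]
+ },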
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "wuMEJdjV2SOT"
+ },
+ "outputs": [],
+ "source": [
+ "import os, sys, re\n",
+ "\n",
+ "video_file_path = \"\" #@param {type:\"string\"}\n",
+ "out_width = \"1920\" #@param {type:\"string\"}\n",
+ "out_height = \"804\" #@param {type:\"string\"}\n",
+ "starting_position_x = \"0\" #@param {type:\"string\"}\n",
+ "starting_position_y = \"138\" #@param {type:\"string\"}\n",
+ "\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "output_file_path_raw = output_file_path.group(0)\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "file_extension = re.search(\".{3}$\", filename)\n",
+ "file_extension_raw = file_extension.group(0)\n",
+ "\n",
+ "os.environ['inputFile'] = video_file_path\n",
+ "os.environ['outputPath'] = output_file_path_raw\n",
+ "os.environ['outWidth'] = out_width\n",
+ "os.environ['outHeight'] = out_height\n",
+ "os.environ['positionX'] = starting_position_x\n",
+ "os.environ['positionY'] = starting_position_y\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "os.environ['fileExtension'] = file_extension_raw\n",
+ "\n",
+ "!ffmpeg -hide_banner -i \"$inputFile\" -filter:v \"crop=$outWidth:$outHeight:$positionX:$positionY\" \"$outputPath\"/\"$fileName\"-CROP.\"$fileExtension\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "2f-THZmDoOaY"
+ },
+ "source": [
+ "### » Extract Audio from Video File (*Lossless*)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "JNckCucf2SOs"
+ },
+ "outputs": [],
+ "source": [
+ "import os, sys, re\n",
+ "\n",
+ "video_file_path = \"\" #@param {type:\"string\"}\n",
+ "output_file_extension = 'm4a' #@param [\"m4a\", \"mp3\", \"opus\", \"flac\", \"wav\"]\n",
+ "\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "\n",
+ "os.environ['inputFile'] = video_file_path\n",
+ "os.environ['outputPath'] = output_file_path.group(0)\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "os.environ['fileType'] = output_file_extension\n",
+ "\n",
+ "!ffmpeg -hide_banner -i \"$inputFile\" -vn -c:a copy \"$outputPath\"/\"$fileName\"-audio.\"$fileType\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "MSUasbRUDP3B"
+ },
+ "source": [
+ "### » Re-encode a Video to a Different Resolution"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "nd2LvSRZCxRe"
+ },
+ "outputs": [],
+ "source": [
+ "import os, sys, re\n",
+ "\n",
+ "video_file_path = '' #@param {type:\"string\"}\n",
+ "resolution = '1080p' #@param [\"2160p\", \"1440p\", \"1080p\", \"720p\", \"480p\", \"360p\", \"240p\"]\n",
+ "file_type = 'mp4' #@param [\"mkv\", \"mp4\"]\n",
+ "\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ "testsplit = video_file_path.split(\"/\")\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "resolution_raw = re.search(\"[^p]{3,4}\", resolution)\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "\n",
+ "os.environ['inputFile'] = video_file_path\n",
+ "os.environ['outputPath'] = output_file_path.group(0)\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "os.environ['fileType'] = file_type\n",
+ "os.environ['resolutionHeight'] = resolution_raw.group(0)\n",
+ "\n",
+ "!ffmpeg -hide_banner -i \"$inputFile\" -vf \"scale=-1:\"$resolutionHeight\"\" -c:a copy -strict experimental \"$outputPath\"/\"$fileName\"-\"$resolutionHeight\"p.\"$fileType\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "9UagRtLPyKoQ"
+ },
+ "source": [
+ "### » Extract Individual Frames from Video"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "jTnByMhAyKoF"
+ },
+ "outputs": [],
+ "source": [
+ "#@markdown This will create a folder in the same directory titled \"`Extracted Frames`\"\n",
+ "* [*Example*](https://yuju.pw/y/36pP.png) *of output folder*\n",
+ "import os, sys, re\n",
+ "\n",
+ "video_file_path = \"\" #@param {type:\"string\"}\n",
+ "start_time = \"00:00:00.000\" #@param {type:\"string\"}\n",
+ "end_time = \"00:01:00.000\" #@param {type:\"string\"}\n",
+ "frame_rate = \"23.976\" #@param {type:\"string\"}\n",
+ "\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "output_file_path_raw = output_file_path.group(0)\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "file_extension = re.search(\".{3}$\", filename)\n",
+ "file_extension_raw = file_extension.group(0)\n",
+ "\n",
+ "os.environ['inputFile'] = video_file_path\n",
+ "os.environ['outputPath'] = output_file_path_raw\n",
+ "os.environ['startTime'] = start_time\n",
+ "os.environ['endTime'] = end_time\n",
+ "os.environ['frameRate'] = frame_rate\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "os.environ['fileExtension'] = file_extension_raw\n",
+ "\n",
+ "!mkdir \"$outputPath\"/\"Extracted Frames\"\n",
+ "!ffmpeg -hide_banner -i \"$inputFile\" -ss \"$startTime\" -to \"$endTime\" -r \"$frameRate\"/1 \"$outputPath\"/\"Extracted Frames\"/frame%04d.png"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "9ZcgdPBT2SQK"
+ },
+ "source": [
+ "### » Generate Thumbnails - Preview from Video (3x2)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "J2u-Rha8miNy"
+ },
+ "outputs": [],
+ "source": [
+ "#@markdown Example of output image: https://yuju.pw/y/39i2.png \n",
+ "import os, sys, re\n",
+ "\n",
+ "video_file_path = \"\" #@param {type:\"string\"}\n",
+ "output_file_type = 'png' #@param [\"png\", \"jpg\"]\n",
+ "\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "output_file_path_raw = output_file_path.group(0)\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "file_extension = re.search(\".{3}$\", filename)\n",
+ "file_extension_raw = file_extension.group(0)\n",
+ "\n",
+ "os.environ['inputFile'] = video_file_path\n",
+ "os.environ['outputPath'] = output_file_path_raw\n",
+ "os.environ['outputExtension'] = output_file_type\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "os.environ['fileExtension'] = file_extension_raw\n",
+ "\n",
+ "!ffmpeg -hide_banner -i \"$inputFile\" -vframes 1 -q:v 2 -vf \"select=not(mod(n\\,200)),scale=-1:480,tile=3x2\" -an \"$outputPath\"/\"$fileName\"_thumbnails.\"$outputExtension\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "7-3O4en4C4IL"
+ },
+ "source": [
+ "### » Convert Audio Filetype (*mp3, m4a, ogg, flac, etc.*)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "2sKzNHSG2SQq"
+ },
+ "outputs": [],
+ "source": [
+ "import os, sys, re\n",
+ "\n",
+ "audio_file_path = \"\" #@param {type:\"string\"}\n",
+ "output_file_type = \"mp3\" #@param [\"mp3\", \"ogg\", \"m4a\", \"opus\", \"flac\", \"alac\", \"wav\"]\n",
+ "\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", audio_file_path)\n",
+ "output_file_path_raw = output_file_path.group(0)\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", audio_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "file_extension = re.search(\".{3}$\", filename)\n",
+ "file_extension_raw = file_extension.group(0)\n",
+ "\n",
+ "os.environ['inputFile'] = audio_file_path\n",
+ "os.environ['outputPath'] = output_file_path_raw\n",
+ "os.environ['fileExtension'] = output_file_type\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "\n",
+ "!ffmpeg -hide_banner -i \"$inputFile\" \"$outputPath\"/\"$fileName\"converted.\"$fileExtension\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "VRk2Ye1exWVA"
+ },
+ "source": [
+ "### » Extract + Upload Frames from Video "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "BIGsgarfxWVI"
+ },
+ "outputs": [],
+ "source": [
+ "import os, re, time, pathlib\n",
+ "import urllib.request\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "Auto_UP_Gdrive = False \n",
+ "AUTO_MOVE_PATH = \"/content\" \n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "pathDoneCMD = f'{HOME}/doneCMD.sh'\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/ttmg.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/biplobsd/\" \\\n",
+ " \"Google-Colab-CloudTorrent/master/res/ttmg.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/ttmg.py\")\n",
+ "\n",
+ "from ttmg import (\n",
+ " runSh,\n",
+ " findProcess,\n",
+ " loadingAn,\n",
+ " updateCheck,\n",
+ " ngrok\n",
+ ")\n",
+ "\n",
+ "video_file_path = \"\" #@param {type:\"string\"}\n",
+ "\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "output_file_path_raw = output_file_path.group(0)\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "file_extension = re.search(\".{3}$\", filename)\n",
+ "file_extension_raw = file_extension.group(0)\n",
+ "\n",
+ "os.environ['inputFile'] = video_file_path\n",
+ "os.environ['outputPath'] = output_file_path_raw\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "os.environ['fileExtension'] = file_extension_raw\n",
+ "\n",
+ "!mkdir -p \"/content/frames\"\n",
+ "\n",
+ "for i in range(10):\n",
+ " clear_output()\n",
+ " loadingAn()\n",
+ " print(\"Uploading Frames...\")\n",
+ "\n",
+ "%cd \"/content/frames\"\n",
+ "!ffmpeg -hide_banner -ss 00:56.0 -i \"$inputFile\" -vframes 1 -q:v 1 -y \"/content/frames/frame1.png\"\n",
+ "!curl --silent -F \"reqtype=fileupload\" -F \"fileToUpload=@frame1.png\" https://catbox.moe/user/api.php -o frame1.txt\n",
+ "f1 = open('frame1.txt', 'r')\n",
+ "%cd \"/content\"\n",
+ "file_content1 = f1.read()\n",
+ "\n",
+ "%cd \"/content/frames\"\n",
+ "!ffmpeg -hide_banner -ss 02:20.0 -i \"$inputFile\" -vframes 1 -q:v 1 -y \"/content/frames/frame2.png\"\n",
+ "!curl --silent -F \"reqtype=fileupload\" -F \"fileToUpload=@frame2.png\" https://catbox.moe/user/api.php -o frame2.txt\n",
+ "%cd \"/content/frames\"\n",
+ "f2 = open('frame2.txt', 'r')\n",
+ "%cd \"/content\"\n",
+ "file_content2 = f2.read()\n",
+ "\n",
+ "clear_output()\n",
+ "print (\"Screenshot URLs:\")\n",
+ "print (\"1. \" + file_content1)\n",
+ "print (\"2. \" + file_content2)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "tozwpAhhnm69"
+ },
+ "source": [
+ "### MediaInfo "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "NTULRguzu0b0"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] MediaInfo \n",
+ "path_to_file = \"\" # @param {type:\"string\"}\n",
+ "save_output_to_file = False # @param {type:\"boolean\"}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, uuid, re, IPython\n",
+ "import ipywidgets as widgets\n",
+ "import time\n",
+ "from glob import glob\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "def mediainfo():\n",
+ " get_ipython().system_raw(\"\"\"mediainfo --LogFile=\"/root/.nfo\" \"$path_to_file\" \"\"\")\n",
+ " with open('/root/.nfo', 'r') as file:\n",
+ " media = file.read()\n",
+ " media = media.replace(os.path.dirname(path_to_file)+\"/\", \"\")\n",
+ " print(media)\n",
+ " get_ipython().system_raw(\"rm -f '/root/.nfo'\")\n",
+ " \n",
+ " if save_output_to_file:\n",
+ " txt = path.rpartition('.')[0] + \".txt\"\n",
+ " if os.path.exists(txt):\n",
+ " get_ipython().system_raw(\"rm -f '$txt'\")\n",
+ " with open(txt, 'a+') as file:\n",
+ " file.write(media)\n",
+ " \n",
+ "if not os.path.exists(\"/usr/bin/mediainfo\"):\n",
+ " get_ipython().system_raw(\"apt-get install mediainfo\")\n",
+ " \n",
+ "mediainfo()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Ts6zYXUdEfrz"
+ },
+ "source": [
+ "## Google Drive Downloader "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "B10h_KlyE_S5"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Install the Module \n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "!pip install googleDriveFileDownloader\n",
+ "\n",
+ "path1 = '/content/downloads'\n",
+ "path2 = '/content/downloads/Google Drive'\n",
+ "\n",
+ "if os.path.exists(path1) == False:\n",
+ " os.makedirs(path1)\n",
+ " if os.path.exists(path2) == False:\n",
+ " os.makedirs(path2)\n",
+ " elif os.path.exists(path2) == True:\n",
+ " None\n",
+ "elif os.path.exists(path1) == True:\n",
+ " if os.path.exists(path2) == False:\n",
+ " os.makedirs(path2)\n",
+ " elif os.path.exists(path2) == True:\n",
+ " None\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "mHTDvjRKEs9n"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] Google Drive Downloader \n",
+ "url = \"\" # @param {type:\"string\"}\n",
+ "output = \"\" # @param {type:\"string\"}\n",
+ "# @markdown > If the \"output\" field is empty, the default download path will be used (/content/downloads/Google Drive).
\n",
+ "# @markdown > This downloader is somewhat working.The only problem (for now) is that the downloaded file is not stored with the same name and appears to not have extension as well.\n",
+ "# ================================================================ #\n",
+ "\n",
+ "from IPython.display import clear_output\n",
+ "from googleDriveFileDownloader import googleDriveFileDownloader\n",
+ "\n",
+ "if url == '':\n",
+ " print(\"The url field is empty!\")\n",
+ "else:\n",
+ " if output == '':\n",
+ " output = '/content/downloads/Google Drive'\n",
+ " %cd \"$output\"\n",
+ " a = googleDriveFileDownloader()\n",
+ " a.downloadFile(url)\n",
+ " else:\n",
+ " %cd \"$output\"\n",
+ " a = googleDriveFileDownloader()\n",
+ " a.downloadFile(url)\n"
+ ]
+ },
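+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] Google Drive Downloader (gdown) \n",
+ "url = \"\" # @param {type:\"string\"}\n",
+ "output = \"\" # @param {type:\"string\"}\n",
+ "# @markdown > A minimal alternative sketch using the gdown package, which keeps the original file name and extension. If the \"output\" field is empty, the default download path will be used (/content/downloads/Google Drive).\n",
+ "# ================================================================ #\n",
+ "\n",
+ "!pip install -q gdown\n",
+ "\n",
+ "if url == '':\n",
+ "    print(\"The url field is empty!\")\n",
+ "else:\n",
+ "    if output == '':\n",
+ "        output = '/content/downloads/Google Drive'\n",
+ "    %cd \"$output\"\n",
+ "    !gdown --fuzzy \"$url\""
+ ]
+ },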
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "FWdEg4H9JlSp"
+ },
+ "source": [
+ "## HandBrake "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "E2seNDqYO8wg"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Install HandBrake \n",
+ "# ================================================================ #\n",
+ "\n",
+ "from IPython.display import clear_output\n",
+ "from os import makedirs\n",
+ "\n",
+ "makedirs(\"/content/temp/HandbrakeTemp\", exist_ok = True)\n",
+ "\n",
+ "!wget -qq https://github.com/vot/ffbinaries-prebuilt/releases/download/v4.2.1/ffmpeg-4.2.1-linux-64.zip \n",
+ "!rm -f ffmpeg-4.2.1-linux-64.zip\n",
+ "!add-apt-repository ppa:stebbins/handbrake-releases -y \n",
+ "!apt-get install -y handbrake-cli\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "CQdjykVdJw0a"
+ },
+ "outputs": [],
+ "source": [
+ "##################################################\n",
+ "#\n",
+ "# Code author: SKGHD\n",
+ "# https://github.com/SKGHD/Handy\n",
+ "#\n",
+ "##################################################\n",
+ "\n",
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] HandBrake \n",
+ "MODE = \"SINGLE\" #@param [\"SINGLE\", \"BATCH\"]\n",
+ "# @markdown > Select mode (batch conversion / single file)\n",
+ "# @markdown ---\n",
+ "SOURCE = \"\" # @param {type:\"string\"}\n",
+ "DESTINATION = \"\" # @param {type:\"string\"}\n",
+ "FORMAT = \"mkv\" # @param [\"mp4\", \"mkv\"]\n",
+ "RESOLUTION = \"480p\" # @param [\"480p\", \"576p\", \"720p\", \"1080p\"]\n",
+ "Encoder = \"x264\" # @param [\"x264\", \"x265\"]\n",
+ "Encoder_Preset = \"ultrafast\" # @param [\"ultrafast\", \"faster\", \"fast\", \"medium\", \"slow\", \"slower\"]\n",
+ "CQ = 30 #@param {type:\"slider\", min:10, max:30, step:1}\n",
+ "# @markdown > Choose Constant Quality Rate (higher quality / smaller file size)\n",
+ "Additional_Flags = \"\" # @param {type:\"string\"}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import smtplib\n",
+ "import os\n",
+ "\n",
+ "formats = ('.mkv','.mp4','.ts','.avi','.mov','.wmv')\n",
+ "\n",
+ "######## Renames the file ########\n",
+ "def fileName(fPath):\n",
+ " tName = fPath.split('/')[-1] \n",
+ " if tName.endswith('ts'):\n",
+ " tName = '[HandBrake] ' + tName[:-3] + f' [{RESOLUTION}] [{Encoder}].{FORMAT}' \n",
+ " else:\n",
+ " tName = '[HandBrake] ' + tName[:-4] + f' [{RESOLUTION}] [{Encoder}].{FORMAT}' \n",
+ " return tName\n",
+ "\n",
+ "def set_resolution():\n",
+ " global w,h,flags\n",
+ " if RESOLUTION == \"480p\":\n",
+ " w, h = \"854\" , \"480\"\n",
+ " if RESOLUTION == \"480p\":\n",
+ " w, h = \"1024\" , \"576\"\n",
+ " elif RESOLUTION == \"720p\":\n",
+ " w, h = \"1280\" , \"720\"\n",
+ " elif RESOLUTION==\"1080p\":\n",
+ " w, h = \"1920\" , \"1080\"\n",
+ "\n",
+ "def addFlags():\n",
+ " global flags\n",
+ " flags = f\" --encoder {Encoder} --all-audio -s '0,1,2,3' --cfr --optimize --quality={CQ} --width={w} --height={h} --format={FORMAT} --encoder-preset={Encoder_Preset} \"\n",
+ " if Additional_Flags != \"\":\n",
+ " flags += str(Additional_Flags)\n",
+ "\n",
+ "set_resolution()\n",
+ "addFlags()\n",
+ "\n",
+ "##### HandBrake and Rclone #####\n",
+ "def runner(path):\n",
+ " f_name = fileName(path)\n",
+ " hTemp=f\"/content/temp/HandbrakeTemp/{f_name}\"\n",
+ " !HandBrakeCLI -i \"$path\" -o \"$hTemp\" $flags\n",
+ "\n",
+ "\n",
+ " if os.path.isfile(hTemp):\n",
+ " print(f\"\\n\\n********** Successfully converted {f_name}\\n Now saving to Destination.....\")\n",
+ " if os.path.exists('/usr/bin/rclone'):\n",
+ " !rclone move \"$hTemp\" --user-agent \"Mozilla\" \"$DESTINATION\" --transfers 20 --checkers 20 --stats-one-line --stats=5s -v --tpslimit 95 --tpslimit-burst 40\n",
+ " else:\n",
+ " dest = DESTINATION+'/'+f_name\n",
+ " !mv \"$hTemp\" \"$dest\"\n",
+ " if os.path.isfile(DESTINATION+ '/' +f_name): \n",
+ " print(f\"\\n\\n********** Successfully saved {f_name} to Destination\")\n",
+ "\n",
+ "########## Check Mode ########\n",
+ "if MODE==\"BATCH\":\n",
+ " os.makedirs(DESTINATION, exist_ok=True)\n",
+ " if SOURCE.endswith('/'):\n",
+ " pass\n",
+ " else: SOURCE +='/'\n",
+ " filesList = os.listdir(SOURCE+'.')\n",
+ " if os.path.isfile(SOURCE+'processed_db.txt'):\n",
+ " pass\n",
+ " else:\n",
+ " with open((SOURCE+'processed_db.txt'), 'w') as fb:\n",
+ " fb.write(\"Do not delete this file until all files have been processed!\\n\")\n",
+ " fb.close()\n",
+ " with open((SOURCE+'processed_db.txt'), \"r+\") as filehandle:\n",
+ " processedList = [x.rstrip() for x in filehandle.readlines()]\n",
+ "\n",
+ " print('<<<<<<<<<<<<<<<<<< Starting Conversion in Batch mode. >>>>>>>>>>>>>>>>>>')\n",
+ "\n",
+ " for currentFile in filesList:\n",
+ " if currentFile.endswith(formats):\n",
+ " if currentFile not in processedList:\n",
+ " currentPath = SOURCE + currentFile \n",
+ " print(f'\\n\\n**************** Current File to process: {currentFile}')\n",
+ " runner(currentPath)\n",
+ " filehandle.write(currentFile+'\\n')\n",
+ " filehandle.close()\n",
+ " \n",
+ "\n",
+ "else:\n",
+ " if SOURCE.endswith(formats): \n",
+ " runner(SOURCE)\n",
+ " else: print(\"Are you sure you have selected the correct file?\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Rd6Br05y7_Ya"
+ },
+ "source": [
+ "## MEGA Downloader "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "LeGWoVGW8Eem"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Install the Module and Dependencies \n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "!pip install git+https://github.com/jeroenmeulenaar/python3-mega.git\n",
+ "\n",
+ "path1 = '/content/downloads'\n",
+ "path2 = '/content/downloads/MEGA'\n",
+ "\n",
+ "if os.path.exists(path1) == False:\n",
+ " os.makedirs(path1)\n",
+ " if os.path.exists(path2) == False:\n",
+ " os.makedirs(path2)\n",
+ " elif os.path.exists(path2) == True:\n",
+ " None\n",
+ "elif os.path.exists(path1) == True:\n",
+ " if os.path.exists(path2) == False:\n",
+ " os.makedirs(path2)\n",
+ " elif os.path.exists(path2) == True:\n",
+ " None\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "JiZ0tJd78LNQ"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] MEGA Downloader \n",
+ "url = \"\" # @param {type:\"string\"}\n",
+ "output = \"\" # @param {type:\"string\"}\n",
+ "# @markdown > If the \"output\" field is empty, the default download path will be used (/content/downloads/MEGA).
\n",
+ "# @markdown > Currently not working due to the module haven't been updated to work with the new MEGA link structure. \n",
+ "# ================================================================ #\n",
+ "\n",
+ "from IPython.display import clear_output\n",
+ "from mega import Mega\n",
+ "\n",
+ "if url == '':\n",
+ " print(\"The url field is empty!\")\n",
+ "else:\n",
+ " if output == '':\n",
+ " output = '/content/downloads/MEGA'\n",
+ " %cd /content/downloads/MEGA\n",
+ " m = Mega.from_ephemeral()\n",
+ " m.download_from_url(url)\n",
+ " else:\n",
+ " %cd \"$output\"\n",
+ " m = Mega.from_ephemeral()\n",
+ " m.download_from_url(url)\n"
+ ]
+ },
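+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] MEGA Downloader (megatools) \n",
+ "url = \"\" # @param {type:\"string\"}\n",
+ "output = \"\" # @param {type:\"string\"}\n",
+ "# @markdown > A minimal workaround sketch, assuming the megatools package from the Ubuntu repositories; its megadl command understands the current MEGA link format. If the \"output\" field is empty, the default download path will be used (/content/downloads/MEGA).\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "\n",
+ "if not os.path.exists(\"/usr/bin/megadl\"):\n",
+ "    !apt-get -qq install -y megatools\n",
+ "\n",
+ "if url == '':\n",
+ "    print(\"The url field is empty!\")\n",
+ "else:\n",
+ "    if output == '':\n",
+ "        output = '/content/downloads/MEGA'\n",
+ "    !megadl --path \"$output\" \"$url\""
+ ]
+ },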
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "7bNutSOeJ1kM"
+ },
+ "source": [
+ "## zippyshare Downloader "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "if-ge8tzJ305"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Install the Module and Dependencies \n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import sys\n",
+ "import requests\n",
+ "from bs4 import BeautifulSoup\n",
+ "\n",
+ "path1 = '/content/downloads'\n",
+ "path2 = '/content/downloads/zippyshare'\n",
+ "\n",
+ "if os.path.exists(path1) == False:\n",
+ " os.makedirs(path1)\n",
+ " if os.path.exists(path2) == False:\n",
+ " os.makedirs(path2)\n",
+ " elif os.path.exists(path2) == True:\n",
+ " None\n",
+ "elif os.path.exists(path1) == True:\n",
+ " if os.path.exists(path2) == False:\n",
+ " os.makedirs(path2)\n",
+ " elif os.path.exists(path2) == True:\n",
+ " None\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "tO22WPSLKdbH"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] zippyshare Downloader \n",
+ "mode = 'single' #@param [\"single\", \"batch\"]\n",
+ "# @markdown ---\n",
+ "direct_url = \"\" #@param {type:\"string\"}\n",
+ "store_path = \"\" #@param {type:\"string\"}\n",
+ "# @markdown > This downloader isn't working as it can't read from zippyshare's weird url (www(random_number).zippyshare)\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import sys\n",
+ "import requests\n",
+ "from bs4 import BeautifulSoup\n",
+ "\n",
+ "if mode == 'single':\n",
+ " if direct_url == '':\n",
+ " print(\"The URL field is empty!\")\n",
+ " else:\n",
+ " url = direct_url\n",
+ " url_content = requests.get(url).content\n",
+ " soup = BeautifulSoup(url_content,'html.parser')\n",
+ "\n",
+ " x = list(soup.find_all('script',type='text/javascript'))\n",
+ " xx = []\n",
+ "\n",
+ " for i in x:\n",
+ " xx.append(str(i))\n",
+ "\n",
+ " for j in xx:\n",
+ " if '51245' in j:\n",
+ " thing = j\n",
+ " break\n",
+ " else:\n",
+ " pass\n",
+ "\n",
+ " thing_stripped = thing.strip()\n",
+ "\n",
+ " ylist = thing_stripped.split('/')\n",
+ "\n",
+ " url_initial = url.split('/')[2]\n",
+ "\n",
+ " file_id = ylist[3]\n",
+ "\n",
+ " unique_code = ylist[4].strip(\" '\\\" ()+\")\n",
+ " unique_code0 = eval(unique_code)\n",
+ "\n",
+ " game_name = ylist[-2].strip('\";\\n}< ')\n",
+ " parsed_link = f'{url_initial}/d/{file_id}/{unique_code0}/{game_name}'\n",
+ " direct_url = parsed_link\n",
+ " \n",
+ " if store_path == '':\n",
+ " store_path = '/content/downloads/zippyshare'\n",
+ " !wget -P {store_path} {direct_url}\n",
+ " else:\n",
+ " !wget -P {store_path} {direct_url}\n",
+ "elif mode == 'batch':\n",
+ " print(\"Upload a download.txt file that contains a list of zippyshare links.\\n\")\n",
+ " files.upload()\n",
+ " clear_output()\n",
+ " if store_path == '':\n",
+ " store_path = '/content/downloads/zippyshare'\n",
+ " !plowdown {direct_url} -o {store_path}\n",
+ " else:\n",
+ " !plowdown {direct_url} -o {store_path}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "pUODCRACrvGC"
+ },
+ "source": [
+ "## Penetration Testing "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "5Lo-h1Cnrxou"
+ },
+ "source": [
+ "### hashcat \n",
+ "GPU runtime needed! "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "dWFBQvMVOJv0"
+ },
+ "source": [
+ "This block is unlikely going to make any progress as the learning curve of hashcat is quite steep..."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "LPxKv5DAr3KV"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Install hashcat \n",
+ "# ================================================================ #\n",
+ "\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "!apt install cmake build-essential -y && apt install checkinstall git -y && git clone https://github.com/hashcat/hashcat.git && cd hashcat && git submodule update --init && make && make install \n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "SeubAcoyxCsw"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] hashcat Bechmark \n",
+ "# ================================================================ #\n",
+ "\n",
+ "!hashcat -b"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "HwRqNJoYR4Us"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] hashcat \n",
+ "hash = \"\" # @param {type:\"string\"}\n",
+ "output = \"\" # @param {type:\"string\"}\n",
+ "# @markdown > The output field is currently there just as a placeholder.
\n",
+ "# @markdown ---\n",
+ "hash_type = 'WPA-EAPOL-PBKDF2' #@param [\"MD5\", \"SHA1\", \"WPA-EAPOL-PBKDF2\"]\n",
+ "attack_mode = 'dictionary' #@param [\"dictionary\", \"combination\", \"mask\", \"hybrid_wordlist_+_mask\", \"hybrid_mask_+_wordlist\"]\n",
+ "wordlist = \"\" # @param {type:\"string\"}\n",
+ "# @markdown > Enter the path to your wordlist (only used when the dictionary attack is chosen).
\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import IPython\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "if hash == '':\n",
+ " print(\"The hash field is empty!\")\n",
+ "\n",
+ "if output == '':\n",
+ " output = '/content/hashcat_output.txt'\n",
+ "\n",
+ "placeholder = 'This cell is not complete yet and could be dropped/abandoned at any time.'\n",
+ "\n",
+ "if hash_type == 'MD5' or hash_type == 'SHA1':\n",
+ " print(placeholder)\n",
+ "elif hash_type == 'WPA-EAPOL-PBKDF2':\n",
+ " hash_type = 2500\n",
+ " if attack_mode == 'dictionary':\n",
+ " attack_mode = 0\n",
+ " if wordlist == '':\n",
+ " print(\"The wordlist field is empty!\")\n",
+ " else:\n",
+ " !hashcat -m {hash_type} -a {attack_mode} {hash} {wordlist} -o {output} --force\n",
+ " elif attack_mode == 'combination' or attack_mode == 'mask' or attack_mode == 'hybrid_wordlist_+_mask' or attack_mode == 'hybrid_mask_+_wordlist':\n",
+ " print(placeholder)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "2EIvy2zb8re1"
+ },
+ "outputs": [],
+ "source": [
+ "!hashcat -m 2500 -a 0 /content/test.hccapx /content/downloads/rockyou.txt -d 1 -o /content/test.txt "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "gdgYuWnst4ed"
+ },
+ "source": [
+ "## ProxyBroker "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "SuLleS03tzjn"
+ },
+ "outputs": [],
+ "source": [
+ "!pip install proxybroker"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "1czv6VpwuJs8"
+ },
+ "outputs": [],
+ "source": [
+ "\"\"\"Find 10 working HTTP(S) proxies and save them to a file.\"\"\"\n",
+ "\n",
+ "import asyncio\n",
+ "from proxybroker import Broker\n",
+ "\n",
+ "\n",
+ "async def save(proxies, filename):\n",
+ " \"\"\"Save proxies to a file.\"\"\"\n",
+ " with open(filename, 'w') as f:\n",
+ " while True:\n",
+ " proxy = await proxies.get()\n",
+ " if proxy is None:\n",
+ " break\n",
+ " proto = 'https' if 'HTTPS' in proxy.types else 'http'\n",
+ " row = '%s://%s:%d\\n' % (proto, proxy.host, proxy.port)\n",
+ " f.write(row)\n",
+ "\n",
+ "\n",
+ "def main():\n",
+ " proxies = asyncio.Queue()\n",
+ " broker = Broker(proxies)\n",
+ " tasks = asyncio.gather(broker.find(types=['HTTP', 'HTTPS'], limit=10),\n",
+ " save(proxies, filename='proxies.txt'))\n",
+ " loop = asyncio.get_event_loop()\n",
+ "# loop.run_until_complete(tasks)\n",
+ "\n",
+ "\n",
+ "if __name__ == '__main__':\n",
+ " main()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "TxQiE-LXjnAb"
+ },
+ "source": [
+ "## Prawler "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "rq0caJ3njq08"
+ },
+ "outputs": [],
+ "source": [
+ "!pip install Prawler"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "Gl8Efvbzjs3T"
+ },
+ "outputs": [],
+ "source": [
+ "import Prawler\n",
+ "\n",
+ "proxy_list = Prawler.get_proxy_list(5, \"http\", \"elite\", \"US\")\n",
+ "\n",
+ "print(proxy_list)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "JyUn6Yn8lM_c"
+ },
+ "source": [
+ "## Free-Proxy "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "K8qprse5lLcb"
+ },
+ "outputs": [],
+ "source": [
+ "!pip install free-proxy"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "jR8t0j98lWG1"
+ },
+ "outputs": [],
+ "source": [
+ "from fp.fp import FreeProxy\n",
+ "\n",
+ "proxy = FreeProxy(country_id=['US', 'AU', 'CA', 'SG', 'JP', 'KR'], timeout=1, rand=False).get()\n",
+ "\n",
+ "print(proxy)"
+ ]
+ },
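+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Minimal sketch of actually using the proxy returned above with the requests library.\n",
+ "# Assumes the previous cell ran and `proxy` holds a string like \"http://1.2.3.4:8080\";\n",
+ "# httpbin.org/ip is only used here as a convenient echo service to confirm the exit IP.\n",
+ "\n",
+ "import requests\n",
+ "\n",
+ "r = requests.get(\"https://httpbin.org/ip\", proxies={\"http\": proxy, \"https\": proxy}, timeout=15)\n",
+ "print(r.text)"
+ ]
+ },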
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "jmlQ0JeXyH9j"
+ },
+ "source": [
+ "## madodl "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "PZrpgJGe59yp"
+ },
+ "source": [
+ "## code-server "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "yzLxqKex6BQ6"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Install code-server \n",
+ "# ================================================================ #\n",
+ "\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "!pip install colabcode\n",
+ "\n",
+ "# clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "-LB2nKez6XOz"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] code-server \n",
+ "# @markdown > Please note that while running this cell, you cannot run other cell until you stop this one first.\n",
+ "# ================================================================ #\n",
+ "\n",
+ "from colabcode import ColabCode\n",
+ "\n",
+ "# Run VSCode with password\n",
+ "# ColabCode(port=10000, password=\"12345\")\n",
+ "\n",
+ "# Run VSCode without password\n",
+ "ColabCode()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "eWs-zl2gNvwW"
+ },
+ "source": [
+ "## Create/Extract Archive "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "XO8dzdyyH5pT"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Create Archive \n",
+ "MODE = \"ZIP\" #@param [\"ZIP\", \"TAR\", \"7Z\"]\n",
+ "FILENAME = \"\" # @param {type:\"string\"}\n",
+ "PATH_TO_FILE = \"\" # @param {type:\"string\"}\n",
+ "ARCHIVE_PASSWORD = \"\" #@param {type:\"string\"}\n",
+ "\n",
+ "# option supports b k m g (bytes, kilobytes, megabytes, gigabytes)\n",
+ "SPLIT = \"no\" #@param [\"1g\", \"2g\", \"3g\", \"4g\", \"5g\", \"no\"]\n",
+ "\n",
+ "compress = 4#@param {type:\"slider\", min:0, max:9, step:0}\n",
+ "#@markdown > Use the character `|` to separate paths. (Example `path/to /1 | path/to/2`)\n",
+ "# ================================================================ #\n",
+ "\n",
+ "from pathlib import PurePosixPath\n",
+ "\n",
+ "pathList = PATH_TO_FILE.split('|')\n",
+ "if MODE == \"ZIP\":\n",
+ " if not FILENAME:\n",
+ " FILENAME = \"/content/NEW_FILE.ZIP\"\n",
+ " if ARCHIVE_PASSWORD:\n",
+ " passADD = f'--password \"{ARCHIVE_PASSWORD}\"'\n",
+ " else:\n",
+ " passADD = ''\n",
+ " splitC = f\"-s {SPLIT}\" if not 'no' in SPLIT else \"\" \n",
+ " for part in pathList:\n",
+ " pathdic = PurePosixPath(part.strip())\n",
+ " parent = pathdic.parent\n",
+ " partName = pathdic.parts[-1]\n",
+ " cmd = f'cd \"{parent}\" && zip {passADD} -{compress} {splitC} -v -r -u \"{FILENAME}\" \"{partName}\"'\n",
+ " !$cmd\n",
+ "elif MODE == \"TAR\":\n",
+ " if not FILENAME:\n",
+ " FILENAME = \"/content/NEW_FILE\"\n",
+ " cmd = f'GZIP=-{compress} tar -zcvf \"{FILENAME}.tar.gz\" {PATH_TO_FILE}'\n",
+ " !$cmd\n",
+ "else:\n",
+ " if not FILENAME:\n",
+ " FILENAME = \"/content/NEW_FILE\"\n",
+ " for part in pathList:\n",
+ " pathdic = PurePosixPath(part.strip())\n",
+ " parent = pathdic.parent\n",
+ " partName = pathdic.parts[-1]\n",
+ " cmd = f'cd \"{parent}\" && 7z a -mx={compress} \"{FILENAME}.7z\" \"{partName}\"'\n",
+ " !$cmd\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "k98WImeXH5pK"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Extract Archive \n",
+ "MODE = \"7Z\" # @param [\"UNZIP\", \"UNTAR\", \"UNRAR\", \"7Z\"]\n",
+ "PATH_TO_FILE = \"\" # @param {type:\"string\"}\n",
+ "extractPath = \"\" # @param {type:\"string\"}\n",
+ "ARCHIVE_PASSWORD = \"\" #@param {type:\"string\"}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, urllib.request\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " checkAvailable,\n",
+ ")\n",
+ "\n",
+ "def extractFiles():\n",
+ " global extractPath\n",
+ " if ARCHIVE_PASSWORD:\n",
+ " passADD = f'-P {ARCHIVE_PASSWORD}'\n",
+ " else:\n",
+ " passADD = ''\n",
+ " if not extractPath:\n",
+ " extractPath = \"/content/extract\"\n",
+ " os.makedirs(extractPath, exist_ok=True)\n",
+ " if MODE == \"UNZIP\":\n",
+ " runSh('unzip '+passADD+f' \"{PATH_TO_FILE}\" -d \"{extractPath}\"', output=True)\n",
+ " elif MODE == \"UNRAR\":\n",
+ " runSh(f'unrar x \"{PATH_TO_FILE}\" \"{extractPath}\" '+passADD+' -o+', output=True)\n",
+ " elif MODE == \"UNTAR\":\n",
+ " runSh(f'tar -C \"{extractPath}\" -xvf \"{PATH_TO_FILE}\"', output=True)\n",
+ " else:\n",
+ " runSh(f'7z x \"{PATH_TO_FILE}\" -o{extractPath} '+passADD, output=True)\n",
+ "\n",
+ "extractFiles()"
+ ]
+ },
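+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Optional sanity check: list what ended up in the extraction directory.\n",
+ "# Assumes the cell above has already run, so `extractPath` is defined in this session.\n",
+ "\n",
+ "!ls -lah \"$extractPath\""
+ ]
+ },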
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "wBCtu4fMAwRn"
+ },
+ "source": [
+ "## 4chan-downloader "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "0w-c_xBUBCXN"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Clone 4chan-downloader \n",
+ "# ================================================================ #\n",
+ "\n",
+ "import IPython\n",
+ "import os.path\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "if os.path.exists(\"/content/tools/4chan-downloader/inb4404.py\"):\n",
+ " print(\"Hey, Anon-kun/chan!\\n\\nDid you know that you already have cloned the 4chan-downloader?\\nNo need to do that again, you know...\\n\\n(How do I know that? Well, I can os.path.exists the file inb4404.py, so... yeah)\")\n",
+ "else:\n",
+ " !git clone https://github.com/Exceen/4chan-downloader.git /content/tools/4chan-downloader\n",
+ " clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "VkBNduaUBg6S"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] 4chan-downloader \n",
+ "automatically_clear_output = False #@param {type:\"boolean\"}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import IPython\n",
+ "import os.path\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "if os.path.exists(\"/content/tools/4chan-downloader/inb4404.py\"):\n",
+ " !python /content/tools/4chan-downloader/inb4404.py -h\n",
+ " if automatically_clear_output == True:\n",
+ " clear_output()\n",
+ "else:\n",
+ " print(\"Hey, Anon-kun/chan... I can't find the inb4404.py.\\n\\nHave you run the cell above this one?\\nIf you haven't already, run the cell above first.\")"
+ ]
+ },
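+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Hedged example of an actual download run; the [Start] cell above only prints the help text.\n",
+ "# inb4404.py is normally pointed at a thread URL (check the -h output above for the exact options).\n",
+ "# The URL below is a placeholder; replace it with the thread you want to archive.\n",
+ "\n",
+ "#!python /content/tools/4chan-downloader/inb4404.py \"https://boards.4chan.org/board/thread/123456789\""
+ ]
+ },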
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "um3eitPj0QWG"
+ },
+ "source": [
+ "## Instagram Scraper "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "dqFUrm7M3B4j"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Install Instagram Scraper \n",
+ "# ================================================================ #\n",
+ "\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "%pip install instagram-scraper\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "wk2bY_l00Sq3"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] Instagram Scraper \n",
+ "target_username = \"\" #@param {type:\"string\"}\n",
+ "# @markdown ---\n",
+ "# @markdown
In case if the account is private, you will need to authenticate using your account. \n",
+ "your_username = \"\" #@param {type:\"string\"}\n",
+ "your_password = \"\" #@param {type:\"string\"}\n",
+ "use_login = False #@param {type:\"boolean\"}\n",
+ "# @markdown ---\n",
+ "# @markdown
Options: \n",
+ "download_path = \"\" #@param {type:\"string\"}\n",
+ "download_mode = 'default' #@param [\"default\", \"image_only\", \"video_only\", \"story_only\", \"broadcast_only\"]\n",
+ "silent_mode = False #@param {type:\"boolean\"}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import sys\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "path1 = \"/content/downloads/\"\n",
+ "path2 = \"/content/downloads/instagram-scraper/\"\n",
+ "silent = \"\"\n",
+ "\n",
+ "if download_path != \"\":\n",
+ " pass\n",
+ "elif download_path == \"\":\n",
+ " if os.path.exists(path1) == False:\n",
+ " if os.path.exists(path2) == False:\n",
+ " os.makedirs(path1)\n",
+ " os.makedirs(path2)\n",
+ " elif os.path.exists(path2) == True:\n",
+ " os.makedirs(path1)\n",
+ " elif os.path.exists(path1) == True:\n",
+ " if os.path.exists(path2) == False:\n",
+ " os.makedirs(path2)\n",
+ " elif os.path.exists(path2) == True:\n",
+ " download_path = path2\n",
+ "\n",
+ "if download_mode == \"default\":\n",
+ "\tdownload_mode = \"\"\n",
+ "elif download_mode == \"image_only\":\n",
+ "\tdownload_mode = \"image\"\n",
+ "elif download_mode == \"video_only\":\n",
+ "\tdownload_mode = \"video\"\n",
+ "elif download_mode == \"story_only\":\n",
+ "\tdownload_mode = \"story\"\n",
+ "elif download_mode == \"broadcast_only\":\n",
+ "\tdownload_mode = \"broadcast\"\n",
+ "\n",
+ "if silent_mode == True:\n",
+ "\tsilent = \"-q\"\n",
+ "else:\n",
+ "\tsilent = \"\"\n",
+ "\n",
+ "if target_username == \"\":\n",
+ " sys.exit(\"No target username to download is given.\")\n",
+ "else:\n",
+ " if use_login == True:\n",
+ " if your_username == \"\" and your_password == \"\":\n",
+ " sys.exit(\"The username and password fields are empty!\")\n",
+ " elif your_username == \"\" and your_password != \"\":\n",
+ " sys.exit(\"The username field is empty!\")\n",
+ " elif your_username != \"\" and your_password == \"\":\n",
+ " sys.exit(\"The password field is empty!\")\n",
+ " else:\n",
+ " !instagram-scraper \"$target_username\" -u \"$your_username\" -p \"$your_password\" -d \"$download_path\" -n -t \"$download_mode\" \"$silent\"\n",
+ " else:\n",
+ " !instagram-scraper \"$target_username\" -d \"$download_path\" -n -t \"$download_mode\" \"$silent_mode\"\n",
+ "\n",
+ "print(\"\")\n",
+ "print(\"==================================================\")\n",
+ "print(\"Downloaded files are stored in\", download_path + target_username)\n",
+ "print(\"==================================================\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "OgTz1FAtcHJH"
+ },
+ "outputs": [],
+ "source": [
+ "!instagram-scraper \"\" -u \"\" -p \"\" -d \"\" -n -t image"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "3PmqhlfgKj85"
+ },
+ "source": [
+ "## instaloader "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "XH3kLNW9KoRf"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Install instaloader \n",
+ "# ================================================================ #\n",
+ "\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "!pip3 install instaloader\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "avemAgewKydt"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] instaloader \n",
+ "target_username = \"\" #@param {type:\"string\"}\n",
+ "# @markdown ---\n",
+ "# @markdown
Options: \n",
+ "use_login = False #@param {type:\"boolean\"}\n",
+ "download_path = \"\" #@param {type:\"string\"}\n",
+ "# @markdown > If the download path is not specified, the default one will be used.\"/content/downloads/instaloader/username\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import sys\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "if download_path != \"\":\n",
+ " pass\n",
+ "elif download_path == \"\":\n",
+ " path1 = \"/content/downloads/\"\n",
+ " path2 = \"/content/downloads/instaloader/\"\n",
+ " if os.path.exists(path1) == False:\n",
+ " if os.path.exists(path2) == False:\n",
+ " os.makedirs(path1)\n",
+ " os.makedirs(path2)\n",
+ " elif os.path.exists(path2) == True:\n",
+ " os.makedirs(path1)\n",
+ " elif os.path.exists(path1) == True:\n",
+ " if os.path.exists(path2) == False:\n",
+ " os.makedirs(path2)\n",
+ " elif os.path.exists(path2) == True:\n",
+ " download_path = path2\n",
+ "\n",
+ "if target_username == \"\":\n",
+ " sys.exit(\"No target username to download is given.\")\n",
+ "else:\n",
+ " if use_login == True:\n",
+ " username = input(\"Enter your username: \")\n",
+ " username = \"--login=\" + username\n",
+ " %cd \"$download_path\"\n",
+ " clear_output()\n",
+ " !instaloader --fast-update \"$target_username\" \"$username\"\n",
+ " else:\n",
+ " %cd \"$download_path\"\n",
+ " clear_output()\n",
+ " !instaloader \"$target_username\"\n",
+ "\n",
+ "print(\"\")\n",
+ "print(\"==================================================\")\n",
+ "print(\"Downloaded files are stored in\", download_path + target_username)\n",
+ "print(\"==================================================\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "bpnK5DH0VBs6"
+ },
+ "outputs": [],
+ "source": [
+ "# Copy session from local to google drive\n",
+ "!cp -a /root/.config/instaloader/ /content/drive/MyDrive/instaloader-session"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "0Jej2XFqWI4O"
+ },
+ "outputs": [],
+ "source": [
+ "# Copy session from google drive to local\n",
+ "!cp -a /content/drive/MyDrive/instaloader-session /root/.config/instaloader"
+ ]
+ },
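+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Hedged sketch: the two cells above assume a session file already exists under\n",
+ "# /root/.config/instaloader/. One way to create it is an interactive login;\n",
+ "# instaloader will prompt for the password. YOUR_USERNAME is a placeholder.\n",
+ "\n",
+ "#!instaloader --login YOUR_USERNAME"
+ ]
+ },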
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "2b4Igr8g0duu"
+ },
+ "source": [
+ "## ecchi.iwara-dl "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "__PBrzCP0fPf"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Clone] ecchi.iwara-dl \n",
+ "# ================================================================ #\n",
+ "\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "!apt-get install -y jq\n",
+ "!apt-get install python3-bs4\n",
+ "!git clone https://github.com/hare1039/iwara-dl /content/tools/iwara-dl\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "lLU9f0EH0mZN"
+ },
+ "outputs": [],
+ "source": [
+ "!bash /content/tools/iwara-dl/iwara-dl.sh [-u [U]] [-p [P]] [-i [n]] [-rhftcsdn] [url [url ...]]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "iHyp2Bgkx1B2"
+ },
+ "source": [
+ "## UUP Dump "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "uoSecUJvx4za"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Install the Requirements \n",
+ "# ================================================================ #\n",
+ "\n",
+ "import IPython\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "!sudo apt-get install aria2 cabextract wimtools chntpw genisoimage\n",
+ "!git clone https://github.com/uup-dump/converter \"/content/tools/uup-dump/converter\"\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "6NqUFFnBx9C5"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] UUP Dump \n",
+ "script_location = \"\" #@param {type:\"string\"}\n",
+ "# @markdown > Only type in the script's path and exclude the script's name. Type in: /content/path/to/script Exclude: uup_download_linux.sh\n",
+ "# ================================================================ #\n",
+ "\n",
+ "if not script_location == \"\":\n",
+ " pass\n",
+ "else:\n",
+ " script_location = \"/content\"\n",
+ "\n",
+ "%cd \"$script_location\"\n",
+ "\n",
+ "!bash \"uup_download_linux.sh\"\n",
+ "\n",
+ "%cd \"/content\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "KDFWgCYE0ULQ"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# Custom commands goes here\n",
+ "# ================================================================ #\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "6uDnLXX40fv6"
+ },
+ "source": [
+ "TO DO:\n",
+ "\n",
+ "- Add files and paths checker ot make sure they are exist"
+ ]
+ }
+ ],
+ "metadata": {
+ "colab": {
+ "collapsed_sections": [
+ "ygDyFQvR5Gci",
+ "XU_IUOV6owRg",
+ "uzuVvbfSo16m",
+ "e-0yDs4C0HkB",
+ "21-Wb8ywqQeJ",
+ "GaegjvHPPW9q",
+ "4sXeh7Tdx1v-",
+ "477G4hACPgqM",
+ "KTXERiVMIKgw",
+ "wkc0wCvPIUFh",
+ "YSEyWbWfY9qx",
+ "UFQoYxRKAclf",
+ "c1N141ZcEdwd",
+ "O0NHwsI_-d3W",
+ "4DdRcv08fzTG",
+ "o_uCXhC1S0GZ",
+ "nGKbLp4P8MXi",
+ "l8uIsoVrC6to",
+ "YMSrqjUm_bDN",
+ "N09EnjlB6wuV",
+ "bZ-Z0cUdz7IL",
+ "ERBVA5aIERou",
+ "bEYznPNQ61sm",
+ "1mctlRk1TTrc",
+ "dEq11jIB5oee",
+ "Ci0HTN9Xyxze",
+ "tL-ilxH0N_B9",
+ "QOyo5zf4suod",
+ "FejGUkxPhDmE",
+ "_GVSJ9jdn6lW",
+ "OJBVlUw-kKyt",
+ "yqY0BtjuGS78",
+ "nFrxKe_52fSj",
+ "Ssn-ZMNcv5UQ",
+ "iLcAVtWT4NTC",
+ "bQ73mxqlpNjb",
+ "UU-y9pOU4sRB",
+ "EpwNYbcfRvcl",
+ "5CWw65NugcjI",
+ "3AbFcLJr5PHk",
+ "pIk3H6xUic8a",
+ "LOmbPf7Tihne",
+ "paeY4yX7jNd1",
+ "j-PgCLYrZFbm",
+ "TgwoGxAitg0y",
+ "xmq_9AJCtvlV",
+ "nUI7G8OSSXbM",
+ "aStiEPlnDoeY",
+ "d7hdxEjc-ynr",
+ "Jbw2QIUB6JKR",
+ "e-OWHJwruE6V",
+ "AMu9crpy-7yb",
+ "uQT6GEq9Na_E",
+ "FdDNhzc0NdeS",
+ "_wlFbVS6JcSL",
+ "WaSgbPEch7KH",
+ "Th3Qyn2uttiW",
+ "CKxGMNKUJloT",
+ "COqwo7iH6_vu",
+ "JM1Do14AKIdF",
+ "0vHRnizI9BXA",
+ "9JBIZh3OZBaL",
+ "2zGMePbPQJWI",
+ "eaUJNGmju5G6",
+ "xzeZBOnhyKPy",
+ "NgCsGSiDu1bY",
+ "OOpAjMjxsNd6",
+ "UdiQLlm5zX3_",
+ "EFOqhHG6hOVH",
+ "ey6-UveDalxR",
+ "GahMjYf8miNs",
+ "NQ0TxfKeghR8",
+ "Ja95mvvq8oei",
+ "CD36vcpf2FSb",
+ "RDHuIkoi6l9a",
+ "66I2t2sQ2SMq",
+ "NObEcBWAJoaz",
+ "FpJXJiRl6-gK",
+ "SNDGdMRn3PA-",
+ "KFcIThDuBii_",
+ "2f-THZmDoOaY",
+ "MSUasbRUDP3B",
+ "9UagRtLPyKoQ",
+ "9ZcgdPBT2SQK",
+ "7-3O4en4C4IL",
+ "VRk2Ye1exWVA",
+ "tozwpAhhnm69",
+ "Ts6zYXUdEfrz",
+ "FWdEg4H9JlSp",
+ "Rd6Br05y7_Ya",
+ "7bNutSOeJ1kM",
+ "pUODCRACrvGC",
+ "5Lo-h1Cnrxou",
+ "gdgYuWnst4ed",
+ "TxQiE-LXjnAb",
+ "JyUn6Yn8lM_c",
+ "jmlQ0JeXyH9j",
+ "PZrpgJGe59yp",
+ "eWs-zl2gNvwW",
+ "wBCtu4fMAwRn",
+ "um3eitPj0QWG",
+ "3PmqhlfgKj85",
+ "2b4Igr8g0duu",
+ "iHyp2Bgkx1B2"
+ ],
+ "include_colab_link": true,
+ "name": "MiXLab",
+ "provenance": [],
+ "toc_visible": true
+ },
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.1"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}
diff --git a/MiXLab.ipynb b/MiXLab.ipynb
index 26a819e..dd14ab9 100644
--- a/MiXLab.ipynb
+++ b/MiXLab.ipynb
@@ -1,12692 +1,12730 @@
{
- "nbformat": 4,
- "nbformat_minor": 0,
- "metadata": {
- "colab": {
- "name": "MiXLab",
- "provenance": [],
- "collapsed_sections": [
- "ygDyFQvR5Gci",
- "XU_IUOV6owRg",
- "uzuVvbfSo16m",
- "e-0yDs4C0HkB",
- "21-Wb8ywqQeJ",
- "GaegjvHPPW9q",
- "4sXeh7Tdx1v-",
- "477G4hACPgqM",
- "KTXERiVMIKgw",
- "wkc0wCvPIUFh",
- "YSEyWbWfY9qx",
- "UFQoYxRKAclf",
- "c1N141ZcEdwd",
- "O0NHwsI_-d3W",
- "4DdRcv08fzTG",
- "o_uCXhC1S0GZ",
- "nGKbLp4P8MXi",
- "l8uIsoVrC6to",
- "YMSrqjUm_bDN",
- "N09EnjlB6wuV",
- "bZ-Z0cUdz7IL",
- "ERBVA5aIERou",
- "bEYznPNQ61sm",
- "1mctlRk1TTrc",
- "dEq11jIB5oee",
- "Ci0HTN9Xyxze",
- "tL-ilxH0N_B9",
- "QOyo5zf4suod",
- "FejGUkxPhDmE",
- "_GVSJ9jdn6lW",
- "OJBVlUw-kKyt",
- "yqY0BtjuGS78",
- "nFrxKe_52fSj",
- "Ssn-ZMNcv5UQ",
- "iLcAVtWT4NTC",
- "bQ73mxqlpNjb",
- "UU-y9pOU4sRB",
- "EpwNYbcfRvcl",
- "5CWw65NugcjI",
- "3AbFcLJr5PHk",
- "pIk3H6xUic8a",
- "LOmbPf7Tihne",
- "paeY4yX7jNd1",
- "j-PgCLYrZFbm",
- "TgwoGxAitg0y",
- "xmq_9AJCtvlV",
- "nUI7G8OSSXbM",
- "aStiEPlnDoeY",
- "d7hdxEjc-ynr",
- "Jbw2QIUB6JKR",
- "e-OWHJwruE6V",
- "AMu9crpy-7yb",
- "uQT6GEq9Na_E",
- "FdDNhzc0NdeS",
- "_wlFbVS6JcSL",
- "WaSgbPEch7KH",
- "Th3Qyn2uttiW",
- "CKxGMNKUJloT",
- "COqwo7iH6_vu",
- "JM1Do14AKIdF",
- "0vHRnizI9BXA",
- "9JBIZh3OZBaL",
- "2zGMePbPQJWI",
- "eaUJNGmju5G6",
- "xzeZBOnhyKPy",
- "NgCsGSiDu1bY",
- "OOpAjMjxsNd6",
- "UdiQLlm5zX3_",
- "EFOqhHG6hOVH",
- "ey6-UveDalxR",
- "GahMjYf8miNs",
- "NQ0TxfKeghR8",
- "Ja95mvvq8oei",
- "CD36vcpf2FSb",
- "RDHuIkoi6l9a",
- "66I2t2sQ2SMq",
- "NObEcBWAJoaz",
- "FpJXJiRl6-gK",
- "SNDGdMRn3PA-",
- "KFcIThDuBii_",
- "2f-THZmDoOaY",
- "MSUasbRUDP3B",
- "9UagRtLPyKoQ",
- "9ZcgdPBT2SQK",
- "7-3O4en4C4IL",
- "VRk2Ye1exWVA",
- "tozwpAhhnm69",
- "Ts6zYXUdEfrz",
- "FWdEg4H9JlSp",
- "Rd6Br05y7_Ya",
- "7bNutSOeJ1kM",
- "pUODCRACrvGC",
- "5Lo-h1Cnrxou",
- "gdgYuWnst4ed",
- "TxQiE-LXjnAb",
- "JyUn6Yn8lM_c",
- "jmlQ0JeXyH9j",
- "PZrpgJGe59yp",
- "eWs-zl2gNvwW",
- "wBCtu4fMAwRn",
- "um3eitPj0QWG",
- "3PmqhlfgKj85",
- "2b4Igr8g0duu",
- "iHyp2Bgkx1B2"
- ],
- "toc_visible": true,
- "include_colab_link": true
- },
- "kernelspec": {
- "name": "python3",
- "display_name": "Python 3"
- }
- },
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "view-in-github",
- "colab_type": "text"
- },
- "source": [
- " "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "ygDyFQvR5Gci"
- },
- "source": [
- "\n",
- " \n",
- "\n",
- "# **Welcome to Mi XL ab ** "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "XU_IUOV6owRg"
- },
- "source": [
- "## About MiXLab "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "lkyLut0_ntrJ"
- },
- "source": [
- "MiXLab is a mix of multiple amazing colab notebooks found on the internet (mostly from github).\n",
- "\n",
- "The name MiXLab is inspired from this awesome 3rd party Android file manager app called MiXplorer and combined with (Google) Colab at the end, resulting in MiXLab.\n",
- "\n",
- "What is the aim of MiXLab, you might ask?\n",
- "Well... educational purpose, I guess..."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "uzuVvbfSo16m"
- },
- "source": [
- "## Features "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "N1eYiTlGoqaA"
- },
- "source": [
- "Here's what you can do with MiXLab\n",
- "* Mount/unmount remote storage (Google Drive / rclone).\n",
- "* Hosted/P2P downloader.\n",
- "* Some other useful tools such as File Manager, Remote Connection and System Monitor to monitor the VM's state."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "e-0yDs4C0HkB"
- },
- "source": [
- "# ✦ *Change Log* ✦ \n",
- "\n",
- "Last modified: 2021-09-29 "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "3_30gF8Am-PQ"
- },
- "source": [
- "2021-09-29 \n",
- " \n",
- "Added cell on Real-ESRGAN to download the results. \n",
- "Changed back the default runtime type CPU only (no hardware accelerator). \n",
- "Added a lot more options to Real-ESRGAN . \n",
- "Removed \"custom_command\" field from Real-ESRGAN . \n",
- "Added a temporary field \"custom_command\" to Real-ESRGAN ."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "eA2hvW0ZYn2u"
- },
- "source": [
- "2021-09-28 \n",
- " \n",
- "Added a simple implementation of Real-ESRGAN ."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "Be03jPf-0L0F"
- },
- "source": [
- "2021-09-28 \n",
- " \n",
- "MiXLab is now using VueTorrent for the qBittorrent alternate web interface.\n",
- "\n",
- ">Note: there seem to be something wrong with VueTorrent not automatically redirecting user to the main page, serving the login page instead, while there is no need to login. You simply have to click on the login button and then it should take you to the main page."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "21-Wb8ywqQeJ"
- },
- "source": [
- "# ✦ *Colab Stay Alive* ✦ "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "nYEj5CeCqbTY",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] Colab Stay Alive \n",
- "# @markdown This cell runs a JS code that will automatically press the reconnect button when you got disconnected due to idle.\n",
- "# ================================================================ #\n",
- "\n",
- "import IPython\n",
- "from IPython.display import clear_output\n",
- "from google.colab import output\n",
- "\n",
- "display(IPython.display.Javascript('''\n",
- " function ClickConnect(){\n",
- " btn = document.querySelector(\"colab-connect-button\")\n",
- " if (btn != null){\n",
- " console.log(\"Clicked on the connect button\"); \n",
- " btn.click() \n",
- " }\n",
- " \n",
- " btn = document.getElementById('connect')\n",
- " if (btn != null){\n",
- " console.log(\"Clicked on the reconnect button\"); \n",
- " btn.click() \n",
- " }\n",
- " }\n",
- " \n",
- "setInterval(ClickConnect,60000)\n",
- "'''))\n",
- "\n",
- "clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "cRwNEZJmUFMg"
- },
- "source": [
- "If the cell above doesn't work, try to run one of these codes below on your browser's developer tool/console.\n",
- "\n",
- "\n",
- "\n",
- ">Code 1(credit to rockyourcode)\n",
- "function ClickConnect() {\n",
- " console.log('Working')\n",
- " document\n",
- " .querySelector('#top-toolbar > colab-connect-button')\n",
- " .shadowRoot.querySelector('#connect')\n",
- " .click()\n",
- "}\n",
- "\n",
- "setInterval(ClickConnect, 60000)
\n",
- "\n",
- "\n",
- "\n",
- "> Code 2(credit to Kavyajeet Bora on stack overflow)\n",
- "function ClickConnect(){\n",
- " console.log(\"Working\"); \n",
- " document.querySelector(\"colab-toolbar-button#connect\").click() \n",
- "}\n",
- "setInterval(ClickConnect,60000)
\n",
- "\n",
- "\n",
- "\n",
- "> Code 3\n",
- "function ClickConnect(){\n",
- " console.log(\"Connnect Clicked - Start\"); \n",
- " document.querySelector(\"#top-toolbar > colab-connect-button\").shadowRoot.querySelector(\"#connect\").click();\n",
- " console.log(\"Connnect Clicked - End\"); \n",
- "};\n",
- "setInterval(ClickConnect, 60000)
\n",
- "\n",
- "\n",
- "\n",
- "> Code 4(credit to Stephane Belemkoabga on stack overflow)\n",
- "function ClickConnect(){\n",
- " console.log(\"Working\"); \n",
- " document.querySelector(\"colab-connect-button\").click() \n",
- "}\n",
- "setInterval(ClickConnect,60000)
"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "GaegjvHPPW9q"
- },
- "source": [
- "# ✦ *Mount/Unmount Storage* ✦ \n",
- "\n",
- "\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "4sXeh7Tdx1v-"
- },
- "source": [
- "## Google Drive "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "LkGoo1n9PNgj",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Mount/Unmount Google Drive \n",
- "# @markdown This cell will mount/unmount Google Drive to /content/drive/
\n",
- "MODE = \"MOUNT\" #@param [\"MOUNT\", \"UNMOUNT\"]\n",
- "# ================================================================ #\n",
- "\n",
- "from IPython.display import clear_output\n",
- "from google.colab import drive\n",
- "drive.mount._DEBUG = False\n",
- "if MODE == \"MOUNT\":\n",
- " drive.mount('/content/drive', force_remount=True)\n",
- "elif MODE == \"UNMOUNT\":\n",
- " try:\n",
- " drive.flush_and_unmount()\n",
- " except ValueError:\n",
- " pass\n",
- " get_ipython().system_raw(\"rm -rf /root/.config/Google/DriveFS\")\n",
- "\n",
- "clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "EgMPgxmrTCvF",
- "cellView": "form"
- },
- "source": [
- "# @markdown ← Force re-mount Google Drive \n",
- "\n",
- "drive.mount(\"/content/drive\", force_remount=True)"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "nPuXzMyawnzo",
- "cellView": "form"
- },
- "source": [
- "# @markdown This cell is not needed (won't do anything if you run it and here just for reference).\n",
- "\n",
- "## ============================= FORM ============================= #\n",
- "## @markdown ← Mount Google Drive (Cloud SDK) \n",
- "## @markdown This cell will mount Google Drive to /content/downloads/
\n",
- "## @markdown > currently there is no way to unmount the drive.\n",
- "## ================================================================ #\n",
- "\n",
- "#!apt-get install -y -qq software-properties-common python-software-properties module-init-tools\n",
- "#!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null\n",
- "#!apt-get update -qq 2>&1 > /dev/null\n",
- "#!apt-get -y install -qq google-drive-ocamlfuse fuse\n",
- "#from google.colab import auth\n",
- "#auth.authenticate_user()\n",
- "#from oauth2client.client import GoogleCredentials\n",
- "#creds = GoogleCredentials.get_application_default()\n",
- "#import getpass\n",
- "#!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL\n",
- "#vcode = getpass.getpass()\n",
- "#!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}\n",
- "\n",
- "#!mkdir -p downloads\n",
- "#!google-drive-ocamlfuse drive downloads\n",
- "\n",
- "#from IPython.display import HTML, clear_output\n",
- "\n",
- "#clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "477G4hACPgqM"
- },
- "source": [
- "## rclone "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "0VJ4VO1X8YE6",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "#@markdown ← Install rclone \n",
- "build_version = \"stable\" #@param [\"stable\", \"beta\"]\n",
- "\n",
- "#@markdown ---\n",
- "automatically_clear_cell_output = True # @param{type: \"boolean\"}\n",
- "# ================================================================ #\n",
- "\n",
- "import os\n",
- "import IPython\n",
- "from IPython.display import clear_output\n",
- "\n",
- "\n",
- "if build_version == \"stable\":\n",
- "\t!curl https://rclone.org/install.sh | sudo bash\n",
- "else:\n",
- "\t!curl https://rclone.org/install.sh | sudo bash -s beta\n",
- "\n",
- "\n",
- "try:\n",
- "\tos.makedirs(\"/root/.config/rclone\", exist_ok=True)\n",
- "except OSError as error:\n",
- "\tpass\n",
- "\n",
- "\n",
- "if automatically_clear_cell_output is True:\n",
- "\tclear_output()\n",
- "else:\n",
- "\tpass"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "KTXERiVMIKgw"
- },
- "source": [
- "### rclone 1 "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "2db3MpgeQdT9",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] rclone \n",
- "Mode = \"Copy\" # @param [\"Move\", \"Copy\", \"Sync\", \"Verify\", \"Dedupe\", \"Clean Empty Dirs\", \"Empty Trash\"]\n",
- "Source = \"\" # @param {type:\"string\"}\n",
- "Destination = \"\" # @param {type:\"string\"}\n",
- "\n",
- "#@markdown ---\n",
- "Extra_Arguments = \"--local-no-check-updated\" # @param {type:\"string\"}\n",
- "COPY_SHARED_FILES = False # @param{type: \"boolean\"}\n",
- "Compare = \"Size & Checksum\"\n",
- "TRANSFERS, CHECKERS = 20, 20\n",
- "THROTTLE_TPS = True\n",
- "BRIDGE_TRANSFER = False # @param{type: \"boolean\"}\n",
- "FAST_LIST = False # @param{type: \"boolean\"}\n",
- "OPTIMIZE_GDRIVE = True\n",
- "SIMPLE_LOG = True\n",
- "RECORD_LOGFILE = False # @param{type: \"boolean\"}\n",
- "SKIP_NEWER_FILE = False\n",
- "SKIP_EXISTED = False\n",
- "SKIP_UPDATE_MODTIME = False\n",
- "ONE_FILE_SYSTEM = False\n",
- "LOG_LEVEL = \"DEBUG\"\n",
- "SYNC_MODE = \"Delete after transfering\"\n",
- "SYNC_TRACK_RENAME = True\n",
- "DEDUPE_MODE = \"Largest\"\n",
- "USE_TRASH = True\n",
- "DRY_RUN = False # @param{type: \"boolean\"}\n",
- "\n",
- "#@markdown ---\n",
- "automatically_clear_cell_output = False # @param{type: \"boolean\"}\n",
- "# ================================================================ #\n",
- "\n",
- "\n",
- "import IPython\n",
- "from IPython.display import clear_output\n",
- "from os import path as _p\n",
- "\n",
- "\n",
- "if not _p.exists(\"/root/.ipython/mixlab.py\"):\n",
- " from shlex import split as _spl\n",
- " from subprocess import run\n",
- " \n",
- " shellCmd = \"wget -qq https://shirooo39.github.io/MiXLab/resources/mixlab.py \\\n",
- " -O /root/.ipython/mixlab.py\"\n",
- " run(_spl(shellCmd))\n",
- "\n",
- "\n",
- "from datetime import datetime as _dt\n",
- "from mixlab import (\n",
- " displayOutput,\n",
- " checkAvailable,\n",
- " runSh,\n",
- " prepareSession,\n",
- " rcloneConfigurationPath,\n",
- " accessSettingFile,\n",
- " memGiB,\n",
- ")\n",
- "\n",
- "\n",
- "def populateActionArg():\n",
- " if Mode == \"Copy\":\n",
- " actionArg = \"copy\"\n",
- " elif Mode == \"Sync\":\n",
- " actionArg = \"sync\"\n",
- " elif Mode == \"Verify\":\n",
- " actionArg = \"check\"\n",
- " elif Mode == \"Dedupe\":\n",
- " actionArg = \"dedupe largest\"\n",
- " elif Mode == \"Clean Empty Dirs\":\n",
- " actionArg = \"rmdirs\"\n",
- " elif Mode == \"Empty Trash\":\n",
- " actionArg = \"delete\"\n",
- " else:\n",
- " actionArg = \"move\"\n",
- "\n",
- " return actionArg\n",
- "\n",
- "\n",
- "def populateCompareArg():\n",
- " if Compare == \"Mod-Time\":\n",
- " compareArg = \"--ignore-size\"\n",
- " elif Compare == \"Size\":\n",
- " compareArg = \"--size-only\"\n",
- " elif Compare == \"Checksum\":\n",
- " compareArg = \"-c --ignore-size\"\n",
- " else:\n",
- " compareArg = \"-c\"\n",
- "\n",
- " return compareArg\n",
- "\n",
- "\n",
- "def populateOptimizeGDriveArg():\n",
- " return (\n",
- " \"--buffer-size 256M \\\n",
- " --drive-chunk-size 256M \\\n",
- " --drive-upload-cutoff 256M \\\n",
- " --drive-acknowledge-abuse \\\n",
- " --drive-keep-revision-forever\"\n",
- "\n",
- " if OPTIMIZE_GDRIVE\n",
- " else \"--buffer-size 128M\"\n",
- " )\n",
- "\n",
- "\n",
- "def populateGDriveCopyArg():\n",
- " if BRIDGE_TRANSFER and memGiB() < 13:\n",
- " global TRANSFERS, CHECKERS\n",
- " TRANSFERS, CHECKERS = 10, 80\n",
- " else:\n",
- " pass\n",
- " return \"--disable copy\" if BRIDGE_TRANSFER else \"--drive-server-side-across-configs\"\n",
- "\n",
- "\n",
- "def populateStatsArg():\n",
- " statsArg = \"--stats-one-line --stats=5s\" if SIMPLE_LOG else \"--stats=5s -P\"\n",
- " if LOG_LEVEL != \"OFF\":\n",
- " statsArg += \" -v\" if SIMPLE_LOG else \"-vv\"\n",
- " elif LOG_LEVEL == \"INFO\":\n",
- " statsArg += \" --log-level INFO\"\n",
- " elif LOG_LEVEL == \"ERROR\":\n",
- " statsArg += \" --log-level ERROR\"\n",
- " else:\n",
- " statsArg += \" --log-level DEBUG\"\n",
- " return statsArg\n",
- "\n",
- "\n",
- "def populateSyncModeArg():\n",
- " if Mode != \"Sync\":\n",
- " return \"\"\n",
- " elif SYNC_MODE == \"Delete before transfering\":\n",
- " syncModeArg = \"--delete-before\"\n",
- " elif SYNC_MODE == \"Delete after transfering\":\n",
- " syncModeArg = \"--delete-after\"\n",
- " else:\n",
- " syncModeArg = \"--delete-during\"\n",
- " if SYNC_TRACK_RENAME:\n",
- " syncModeArg += \" --track-renames\"\n",
- " return syncModeArg\n",
- "\n",
- "\n",
- "def populateDedupeModeArg():\n",
- " if DEDUPE_MODE == \"Interactive\":\n",
- " dedupeModeArg = \"--dedupe-mode interactive\"\n",
- " elif DEDUPE_MODE == \"Skip\":\n",
- " dedupeModeArg = \"--dedupe-mode skip\"\n",
- " elif DEDUPE_MODE == \"First\":\n",
- " dedupeModeArg = \"--dedupe-mode first\"\n",
- " elif DEDUPE_MODE == \"Newest\":\n",
- " dedupeModeArg = \"--dedupe-mode newest\"\n",
- " elif DEDUPE_MODE == \"Oldest\":\n",
- " dedupeModeArg = \"--dedupe-mode oldest\"\n",
- " elif DEDUPE_MODE == \"Rename\":\n",
- " dedupeModeArg = \"--dedupe-mode rename\"\n",
- " else:\n",
- " dedupeModeArg = \"--dedupe-mode largest\"\n",
- "\n",
- " return dedupeModeArg\n",
- "\n",
- "\n",
- "def generateCmd():\n",
- " sharedFilesArgs = (\n",
- " \"--drive-shared-with-me --files-from /content/upload.txt --no-traverse\"\n",
- " if COPY_SHARED_FILES\n",
- " else \"\"\n",
- " )\n",
- "\n",
- " logFileArg = f\"--log-file /content/rclone_log.txt -vv -P\"\n",
- "\n",
- " args = [\n",
- " \"rclone\",\n",
- " f\"--config {rcloneConfigurationPath}/rclone.conf\",\n",
- " '--user-agent \"Mozilla\"',\n",
- " populateActionArg(),\n",
- " f'\"{Source}\"',\n",
- " f'\"{Destination}\"' if Mode in (\"Move\", \"Copy\", \"Sync\") else \"\",\n",
- " f\"--transfers {str(TRANSFERS)}\",\n",
- " f\"--checkers {str(CHECKERS)}\",\n",
- " ]\n",
- "\n",
- " if Mode == \"Verify\":\n",
- " args.append(\"--one-way\")\n",
- " elif Mode == \"Empty Trash\":\n",
- " args.append(\"--drive-trashed-only --drive-use-trash=false\")\n",
- " else:\n",
- " args.extend(\n",
- " [\n",
- " populateGDriveCopyArg(),\n",
- " populateSyncModeArg(),\n",
- " populateCompareArg(),\n",
- " populateOptimizeGDriveArg(),\n",
- " \"-u\" if SKIP_NEWER_FILE else \"\",\n",
- " \"--ignore-existing\" if SKIP_EXISTED else \"\",\n",
- " \"--no-update-modtime\" if SKIP_UPDATE_MODTIME else \"\",\n",
- " \"--one-file-system\" if ONE_FILE_SYSTEM else \"\",\n",
- " \"--tpslimit 95 --tpslimit-burst 40\" if THROTTLE_TPS else \"\",\n",
- " \"--fast-list\" if FAST_LIST else \"\",\n",
- " \"--delete-empty-src-dirs\" if Mode == \"Move\" else \"\",\n",
- " ]\n",
- " )\n",
- " args.extend(\n",
- " [\n",
- " \"-n\" if DRY_RUN else \"\",\n",
- " populateStatsArg() if not RECORD_LOGFILE else logFileArg,\n",
- " sharedFilesArgs,\n",
- " Extra_Arguments,\n",
- " ]\n",
- " )\n",
- "\n",
- " return args\n",
- "\n",
- "\n",
- "def executeRclone():\n",
- " prepareSession()\n",
- " if Source.strip() == \"\":\n",
- " displayOutput(\"❌ The source field is empty!\")\n",
- " return\n",
- " if checkAvailable(\"/content/rclone_log.txt\"):\n",
- " if not checkAvailable(\"/content/logfiles\"):\n",
- " runSh(\"mkdir -p -m 666 /content/logfiles\")\n",
- " job = accessSettingFile(\"job.txt\")\n",
- " runSh(\n",
- " f'mv /content/rclone_log.txt /content/logfiles/{job[\"title\"]}_{job[\"status\"]}_logfile.txt'\n",
- " )\n",
- "\n",
- " onGoingJob = {\n",
- " \"title\": f'{Mode}_{Source}_{Destination}_{_dt.now().strftime(\"%a-%H-%M-%S\")}',\n",
- " \"status\": \"ongoing\",\n",
- " }\n",
- " accessSettingFile(\"job.txt\", onGoingJob)\n",
- "\n",
- " cmd = \" \".join(generateCmd())\n",
- " runSh(cmd, output=True)\n",
- " displayOutput(Mode, \"success\")\n",
- "\n",
- " onGoingJob[\"status\"] = \"finished\"\n",
- " accessSettingFile(\"job.txt\", onGoingJob)\n",
- "\n",
- "executeRclone()\n",
- "\n",
- "\n",
- "if automatically_clear_cell_output is True:\n",
- "\tclear_output()\n",
- "else:\n",
- "\tpass"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "wkc0wCvPIUFh"
- },
- "source": [
- "### rclone 2 "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "t03ZdwQ-IvPv",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] rclone \n",
- "Mode = \"Copy\" #@param [\"Copy\", \"Move\", \"Sync\", \"Checker\", \"Deduplicate\", \"Remove Empty Directories\", \"Empty Trash\"]\n",
- "Source = \"\" #@param {type:\"string\"}\n",
- "Destination = \"\" #@param {type:\"string\"}\n",
- "\n",
- "#@markdown ---\n",
- "#@markdown ⚙️ Global Configuration ⚙️ \n",
- "Extra_Arguments = \"--local-no-check-updated\" #@param {type:\"string\"}\n",
- "Compare = \"Size & Mod-Time\" #@param [\"Size & Mod-Time\", \"Size & Checksum\", \"Only Mod-Time\", \"Only Size\", \"Only Checksum\"]\n",
- "Checkers = 10 #@param {type:\"slider\", min:1, max:40, step:1}\n",
- "Transfers = 10 #@param {type:\"slider\", min:1, max:20, step:1}\n",
- "Dry_Run = False #@param {type:\"boolean\"}\n",
- "Do_not_cross_filesystem_boundaries = False\n",
- "Do_not_update_modtime_if_files_are_identical = False #@param {type:\"boolean\"}\n",
- "Google_Drive_optimization = False #@param {type:\"boolean\"}\n",
- "Large_amount_of_files_optimization = False #@param {type:\"boolean\"}\n",
- "Simple_Ouput = True #@param {type:\"boolean\"}\n",
- "Skip_all_files_that_exist = False #@param {type:\"boolean\"}\n",
- "Skip_files_that_are_newer_on_the_destination = False #@param {type:\"boolean\"}\n",
- "Output_Log_File = \"OFF\" #@param [\"OFF\", \"NOTICE\", \"INFO\", \"ERROR\", \"DEBUG\"]\n",
- "\n",
- "#@markdown ↪️ Sync Configuration ↩️ \n",
- "Sync_Mode = \"Delete during transfer\" #@param [\"Delete during transfer\", \"Delete before transfering\", \"Delete after transfering\"]\n",
- "Track_Renames = False #@param {type:\"boolean\"}\n",
- "\n",
- "#@markdown 💞 Deduplicate Configuration 💞 \n",
- "Deduplicate_Mode = \"Interactive\" #@param [\"Interactive\", \"Skip\", \"First\", \"Newest\", \"Oldest\", \"Largest\", \"Rename\"]\n",
- "Deduplicate_Use_Trash = True #@param {type:\"boolean\"}\n",
- "\n",
- "#@markdown ---\n",
- "automatically_clear_cell_output = False # @param{type: \"boolean\"}\n",
- "# ================================================================ #\n",
- "\n",
- "##### Importing the needed modules\n",
- "import os\n",
- "import IPython\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "\n",
- "##### Variable Declaration\n",
- "# Optimized for Google Colaboratory\n",
- "os.environ[\"bufferC\"] = \"--buffer-size 96M\"\n",
- "\n",
- "if Compare == \"Size & Checksum\":\n",
- " os.environ[\"compareC\"] = \"-c\"\n",
- "elif Compare == \"Only Mod-Time\":\n",
- " os.environ[\"compareC\"] = \"--ignore-size\"\n",
- "elif Compare == \"Only Size\":\n",
- " os.environ[\"compareC\"] = \"--size-only\"\n",
- "elif Compare == \"Only Checksum\":\n",
- " os.environ[\"compareC\"] = \"-c --ignore-size\"\n",
- "else:\n",
- " os.environ[\"compareC\"] = \"\"\n",
- "\n",
- "os.environ[\"sourceC\"] = Source\n",
- "os.environ[\"destinationC\"] = Destination\n",
- "os.environ[\"transfersC\"] = \"--transfers \"+str(Transfers)\n",
- "os.environ[\"checkersC\"] = \"--checkers \"+str(Checkers)\n",
- "\n",
- "if Skip_files_that_are_newer_on_the_destination == True:\n",
- " os.environ[\"skipnewC\"] = \"-u\"\n",
- "else:\n",
- " os.environ[\"skipnewC\"] = \"\"\n",
- " \n",
- "if Skip_all_files_that_exist == True:\n",
- " os.environ[\"skipexistC\"] = \"--ignore-existing\"\n",
- "else:\n",
- " os.environ[\"skipexistC\"] = \"\"\n",
- " \n",
- "if Do_not_cross_filesystem_boundaries == True:\n",
- " os.environ[\"nocrossfilesystemC\"] = \"--one-file-system\"\n",
- "else:\n",
- " os.environ[\"nocrossfilesystemC\"] = \"\"\n",
- " \n",
- "if Do_not_update_modtime_if_files_are_identical == True:\n",
- " os.environ[\"noupdatemodtimeC\"] = \"--no-update-modtime\"\n",
- "else:\n",
- " os.environ[\"noupdatemodtimeC\"] = \"\"\n",
- "\n",
- "if Large_amount_of_files_optimization == True:\n",
- " os.environ[\"filesoptimizeC\"] = \"--fast-list\"\n",
- "else:\n",
- " os.environ[\"filesoptimizeC\"] = \"\"\n",
- " \n",
- "if Google_Drive_optimization == True:\n",
- " os.environ[\"driveoptimizeC\"] = \"--drive-chunk-size 32M --drive-acknowledge-abuse --drive-keep-revision-forever\"\n",
- "else:\n",
- " os.environ[\"driveoptimizeC\"] = \"\"\n",
- " \n",
- "if Dry_Run == True:\n",
- " os.environ[\"dryrunC\"] = \"-n\"\n",
- "else:\n",
- " os.environ[\"dryrunC\"] = \"\"\n",
- " \n",
- "if Output_Log_File != \"OFF\":\n",
- " os.environ[\"statsC\"] = \"--log-file=/root/.rclone_log/rclone_log.txt\"\n",
- "else:\n",
- " if Simple_Ouput == True:\n",
- " os.environ[\"statsC\"] = \"-v --stats-one-line --stats=5s\"\n",
- " else:\n",
- " os.environ[\"statsC\"] = \"-v --stats=5s\"\n",
- " \n",
- "if Output_Log_File == \"INFO\":\n",
- " os.environ[\"loglevelC\"] = \"--log-level INFO\"\n",
- "elif Output_Log_File == \"ERROR\":\n",
- " os.environ[\"loglevelC\"] = \"--log-level ERROR\"\n",
- "elif Output_Log_File == \"DEBUG\":\n",
- " os.environ[\"loglevelC\"] = \"--log-level DEBUG\"\n",
- "else:\n",
- " os.environ[\"loglevelC\"] = \"\"\n",
- "\n",
- "os.environ[\"extraC\"] = Extra_Arguments\n",
- "\n",
- "if Sync_Mode == \"Delete during transfer\":\n",
- " os.environ[\"syncmodeC\"] = \"--delete-during\"\n",
- "elif Sync_Mode == \"Delete before transfering\":\n",
- " os.environ[\"syncmodeC\"] = \"--delete-before\"\n",
- "elif Sync_Mode == \"Delete after transfering\":\n",
- " os.environ[\"syncmodeC\"] = \"--delete-after\"\n",
- " \n",
- "if Track_Renames == True:\n",
- " os.environ[\"trackrenamesC\"] = \"--track-renames\"\n",
- "else:\n",
- " os.environ[\"trackrenamesC\"] = \"\"\n",
- " \n",
- "if Deduplicate_Mode == \"Interactive\":\n",
- " os.environ[\"deduplicateC\"] = \"interactive\"\n",
- "elif Deduplicate_Mode == \"Skip\":\n",
- " os.environ[\"deduplicateC\"] = \"skip\"\n",
- "elif Deduplicate_Mode == \"First\":\n",
- " os.environ[\"deduplicateC\"] = \"first\"\n",
- "elif Deduplicate_Mode == \"Newest\":\n",
- " os.environ[\"deduplicateC\"] = \"newest\"\n",
- "elif Deduplicate_Mode == \"Oldest\":\n",
- " os.environ[\"deduplicateC\"] = \"oldest\"\n",
- "elif Deduplicate_Mode == \"Largest\":\n",
- " os.environ[\"deduplicateC\"] = \"largest\"\n",
- "elif Deduplicate_Mode == \"Rename\":\n",
- " os.environ[\"deduplicateC\"] = \"rename\"\n",
- " \n",
- "if Deduplicate_Use_Trash == True:\n",
- " os.environ[\"deduplicatetrashC\"] = \"\"\n",
- "else:\n",
- " os.environ[\"deduplicatetrashC\"] = \"--drive-use-trash=false\"\n",
- "\n",
- "\n",
- "##### rclone Execution\n",
- "if Output_Log_File != \"OFF\" and Mode != \"Config\":\n",
- " !mkdir -p -m 666 /root/.rclone_log/\n",
- " display(HTML(\"Logging enabled, rclone will no longer display any output on the terminal. Please wait until the cell stop by itself. \"))\n",
- "\n",
- "if Mode == \"Copy\":\n",
- " !rclone --config=/root/.config/rclone/rclone.conf copy \"$sourceC\" \"$destinationC\" $transfersC $checkersC $statsC $loglevelC $compareC $skipnewC $skipexistC $nocrossfilesystemC $noupdatemodtimeC $bufferC $filesoptimizeC $driveoptimizeC $dryrunC $extraC\n",
- "elif Mode == \"Move\":\n",
- " !rclone --config=/root/.config/rclone/rclone.conf move \"$sourceC\" \"$destinationC\" $transfersC $checkersC $statsC $loglevelC --delete-empty-src-dirs $compareC $skipnewC $skipexistC $nocrossfilesystemC $noupdatemodtimeC $bufferC $filesoptimizeC $driveoptimizeC $dryrunC $extraC\n",
- "elif Mode == \"Sync\":\n",
- " !rclone --config=/root/.config/rclone/rclone.conf sync \"$sourceC\" \"$destinationC\" $transfersC $checkersC $statsC $loglevelC $syncmodeC $trackrenamesC $compareC $skipnewC $skipexistC $nocrossfilesystemC $noupdatemodtimeC $bufferC $filesoptimizeC $driveoptimizeC $dryrunC $extraC\n",
- "elif Mode == \"Checker\":\n",
- " !rclone --config=/root/.config/rclone/rclone.conf check \"$sourceC\" \"$destinationC\" $checkersC $statsC $loglevelC $compareC $skipnewC $skipexistC $nocrossfilesystemC $noupdatemodtimeC $bufferC $filesoptimizeC $driveoptimizeC $dryrunC $extraC\n",
- "elif Mode == \"Deduplicate\":\n",
- " !rclone --config=/root/.config/rclone/rclone.conf dedupe \"$sourceC\" $checkersC $statsC $loglevelC --dedupe-mode $deduplicateC $deduplicatetrashC $compareC $skipnewC $skipexistC $nocrossfilesystemC $noupdatemodtimeC $bufferC $filesoptimizeC $driveoptimizeC $dryrunC $extraC\n",
- "elif Mode == \"Remove Empty Directories\":\n",
- " !rclone --config=/root/.config/rclone/rclone.conf rmdirs \"$sourceC\" $statsC $loglevelC $dryrunC $extraC\n",
- "elif Mode == \"Empty Trash\":\n",
- " !rclone --config=/root/.config/rclone/rclone.conf cleanup \"$sourceC\" $statsC $loglevelC $dryrunC $extraC\n",
- "\n",
- "\n",
- "##### Log Output\n",
- "if Output_Log_File != \"OFF\" and Mode != \"Config\":\n",
- "\n",
- " ##### Rename log file and output settings.\n",
- " !mv /root/.rclone_log/rclone_log.txt /root/.rclone_log/rclone_log_$(date +%Y-%m-%d_%H.%M.%S).txt\n",
- " with open(\"/root/.rclone_log/\" + Mode + \"_settings.txt\", \"w\") as f:\n",
- " f.write(\"Mode: \" + Mode + \\\n",
- " \"\\nCompare: \" + Compare + \\\n",
- " \"\\nSource: \\\"\" + Source + \\\n",
- " \"\\\"\\nDestination: \\\"\" + Destination + \\\n",
- " \"\\\"\\nTransfers: \" + str(Transfers) + \\\n",
- " \"\\nCheckers: \" + str(Checkers) + \\\n",
- " \"\\nSkip files that are newer on the destination: \" + str(Skip_files_that_are_newer_on_the_destination) + \\\n",
- " \"\\nSkip all files that exist: \" + str(Skip_all_files_that_exist) + \\\n",
- " \"\\nDo not cross filesystem boundaries: \" + str(Do_not_cross_filesystem_boundaries) + \\\n",
- " \"\\nDo not update modtime if files are identical: \" + str(Do_not_update_modtime_if_files_are_identical) + \\\n",
- " \"\\nDry-Run: \" + str(Dry_Run) + \\\n",
- " \"\\nOutput Log Level: \" + Output_Log_File + \\\n",
- " \"\\nExtra Arguments: \\\"\" + Extra_Arguments + \\\n",
- " \"\\\"\\nSync Moden: \" + Sync_Mode + \\\n",
- " \"\\nTrack Renames: \" + str(Track_Renames) + \\\n",
- " \"\\nDeduplicate Mode: \" + Deduplicate_Mode + \\\n",
- " \"\\nDeduplicate Use Trash: \" + str(Deduplicate_Use_Trash))\n",
- "\n",
- " ##### Compressing log file.\n",
- " !rm -f /root/rclone_log.zip\n",
- " !zip -r -q -j -9 /root/rclone_log.zip /root/.rclone_log/\n",
- " !rm -rf /root/.rclone_log/\n",
- " !mkdir -p -m 666 /root/.rclone_log/\n",
- "\n",
- " ##### Send Log\n",
- " if os.path.isfile(\"/root/rclone_log.zip\") == True:\n",
- " try:\n",
- " files.download(\"/root/rclone_log.zip\")\n",
- " !rm -f /root/rclone_log.zip\n",
- " display(HTML(\"Sending log to your browser... \"))\n",
- " except:\n",
- " !mv /root/rclone_log.zip /content/rclone_log_$(date +%Y-%m-%d_%H.%M.%S).zip\n",
- " display(HTML(\"You can use file explorer to download the log file. \"))\n",
- " else:\n",
- " clear_output()\n",
- " display(HTML(\"There is no log file. \"))\n",
- " \n",
- "\n",
- "### Operation has been successfully completed.\n",
- "if Mode != \"Config\":\n",
- " display(HTML(\"✅ Operation has been successfully completed. \"))\n",
- "\n",
- "\n",
- "##### Automatically clear terminal output if the checkbox's value on the top is set to True.\n",
- "if automatically_clear_cell_output is True:\n",
- " clear_output()\n",
- "else:\n",
- "\tpass##### Automatically clear terminal output if the checkbox's value on the top is set to True.\n",
- "if automatically_clear_cell_output is True:\n",
- " clear_output()\n",
- "else:\n",
- "\tpass"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "YSEyWbWfY9qx"
- },
- "source": [
- "### Google Drive 750GB Upload Bandwidth Bypass "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "Qvwz8vtgjSLM"
- },
- "source": [
- "\n",
- "Still work in progress! Use at your own risk! \n",
- "Be sure to read everything in this block carefully. No, seriously. Read carefully. \n",
- " "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "HBy4qgMQNm7Q"
- },
- "source": [
- "**Always remember to install rclone first!** "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "UI9NTz-typuf",
- "cellView": "form"
- },
- "source": [
- "#============================= FORM ============================= #\n",
- "#@markdown ← [Clone] AutorRclone \n",
- "#================================================================ #\n",
- "\n",
- "import IPython\n",
- "from IPython.display import clear_output\n",
- "\n",
- "!git clone https://github.com/xyou365/AutoRclone /content/tools/AutoRclone\n",
- "!sudo pip3 install -r /content/tools/AutoRclone/requirements.txt\n",
- "\n",
- "clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "Y28rhXs2a7QV"
- },
- "source": [
- "\n",
- "Since Google has removed the ability to automatically enable the GDrive API from the good old \"Quickstart\" (as of 2021-04-15), you will have to manually create a project by yourself, to get the credentials.json.\n",
- " \n",
- "(This means that you have to do the initial job all by yourself. This includes creating a project on the Google Cloud Platform, enabling the GDrive API, setting up the OAuth 2.0, setting up the OAuth Screen, all that stuff.)\n",
- " \n",
- "Click here (opens in new tab) and follow along the tutorial there.\n",
- " "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "0AR8nQi2w9_K",
- "cellView": "form"
- },
- "source": [
- "#============================= FORM ============================= #\n",
- "#@markdown ← Upload the \"credentials.json\" File \n",
- "#================================================================ #\n",
- "\n",
- "import os\n",
- "import IPython\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "\n",
- "AutoRclone_path = \"/content/tools/AutoRclone\"\n",
- "\n",
- "\n",
- "if not os.path.exists(AutoRclone_path):\n",
- " display(HTML(\"❌ Make sure you have already run the first cell first! \"))\n",
- "else:\n",
- " %cd \"$AutoRclone_path\"\n",
- "\n",
- " from google.colab import files\n",
- " uploaded = files.upload()\n",
- "\n",
- " %cd /content\n",
- "\n",
- " clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "pqe_u-WjESWe"
- },
- "source": [
- "TO DO: Add \"remove token\" to be able to re-authorize with different account."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "UFQoYxRKAclf"
- },
- "source": [
- "#### Generate Project(s) and Service Account(s) "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "dQsFZnNa8qN4",
- "cellView": "form"
- },
- "source": [
- "#============================= FORM ============================= #\n",
- "#@markdown ← Generate Service Account(s) on Existing Project(s) \n",
- "#@markdown > This cell will generate the Service Accounts on ALL existing project(s)! Let's say you currenly have 2 projects, then the number of service accounts will be created is 200 (100 per project). To avoid any unwanted things like messing up your current project, it is highly recommended to run the cell below instead.\n",
- "#================================================================ #\n",
- "\n",
- "import os\n",
- "import IPython\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "\n",
- "AutoRclone_path = \"/content/tools/AutoRclone\"\n",
- "json_path = \"/content/tools/AutoRclone/credentials.json\"\n",
- "\n",
- "\n",
- "if not os.path.exists(AutoRclone_path):\n",
- "  display(HTML(\"❌ Make sure you have run the [Clone] AutoRclone cell first! \"))\n",
- "elif os.path.exists(AutoRclone_path) and not os.path.exists(json_path):\n",
- " display(HTML(\"❌ Unable to locate the credentials.json file! Please upload it first! \"))\n",
- "else:\n",
- " %cd /content/tools/AutoRclone\n",
- " !python3 gen_sa_accounts.py --quick-setup -1\n",
- " %cd /content\n",
- "\n",
- " clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "n1OhWkE8Flds",
- "cellView": "form"
- },
- "source": [
- "#============================= FORM ============================= #\n",
- "#@markdown ← Generate New Project(s) and Service Account(s) \n",
- "\n",
- "the_amount_of_project_to_generate = 1 #@param {type:\"slider\", min:1, max:10, step:1}\n",
- "#@markdown > To avoid any unwanted things like messing up your current project, this cell will instead generate NEW project(s) on the Google Cloud Platform, based on the number set by the slider. It will also (try to) enable the needed API(s) and create the Service Accounts. 100 Service Accounts are created per project. That is a lot: 100 x 750GB = 75,000GB, or 75TB worth of upload bandwidth per project. There is a chance that Google will notice what you are doing, and you obviously don't want that, so don't be a glutton and slide the slider all the way to the right; keep it low and you should be safe to go. (Realistically speaking, even 750GB x 5 = 3.75TB is plenty of upload bandwidth... not to mention the limit is per day and resets after 24 hours.)\n",
- "#================================================================ #\n",
- "\n",
- "import os\n",
- "import IPython\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "\n",
- "AutoRclone_path = \"/content/tools/AutoRclone\"\n",
- "json_path = \"/content/tools/AutoRclone/credentials.json\"\n",
- "\n",
- "\n",
- "if not os.path.exists(AutoRclone_path):\n",
- "  display(HTML(\"❌ Make sure you have run the [Clone] AutoRclone cell first! \"))\n",
- "elif os.path.exists(AutoRclone_path) and not os.path.exists(json_path):\n",
- " display(HTML(\"❌ Unable to locate the credentials.json file! Please upload it first! \"))\n",
- "else:\n",
- " %cd /content/tools/AutoRclone\n",
- " !python3 gen_sa_accounts.py --quick-setup \"$the_amount_of_project_to_generate\" --new-only\n",
- " %cd /content\n",
- "\n",
- " clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "k8UlN_AeTZqs",
- "cellView": "form"
- },
- "source": [
- "#============================= FORM ============================= #\n",
- "#@markdown ← Download the Service Account Keys (Optional) \n",
- "Project_ID = \"\" #@param {type:\"string\"}\n",
- "#@markdown > After you have generated the project(s) and the service account(s) using one of the cells above, the service account keys should be downloaded automatically. You can still run this cell to do it manually yourself, or if you want to download the keys from a specific project.\n",
- "#================================================================ #\n",
- "\n",
- "import os\n",
- "import IPython\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "\n",
- "AutoRclone_path = \"/content/tools/AutoRclone\"\n",
- "json_path = \"/content/tools/AutoRclone/credentials.json\"\n",
- "\n",
- "\n",
- "if not os.path.exists(AutoRclone_path):\n",
- "  display(HTML(\"❌ Make sure you have run the [Clone] AutoRclone cell first! \"))\n",
- "elif os.path.exists(AutoRclone_path) and not os.path.exists(json_path):\n",
- " display(HTML(\"❌ Unable to locate the credentials.json file! Please upload it first! \"))\n",
- "else:\n",
- " %cd /content/tools/AutoRclone\n",
- " !python3 gen_sa_accounts.py --download-keys \"$Project_ID\"\n",
- " %cd /content\n",
- "\n",
- " clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "8TsnaCxSV-9G",
- "cellView": "form"
- },
- "source": [
- "#============================= FORM ============================= #\n",
- "#@markdown ← Clear the \"accounts\" Folder (Optional) \n",
- "#@markdown > If you think the \"accounts\" folder is cluttered, feel free to run this cell and then run the cell above this to re-download the service account keys.\n",
- "#================================================================ #\n",
- "\n",
- "import os\n",
- "import shutil\n",
- "\n",
- "accounts_path = \"/content/tools/AutoRclone/accounts\"\n",
- "\n",
- "if os.path.exists(accounts_path) and os.path.isdir(accounts_path):\n",
- " shutil.rmtree(accounts_path)\n",
- " os.makedirs(accounts_path)\n",
- "else:\n",
- " os.makedirs(accounts_path)"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "mOjIsl60XBvw",
- "cellView": "form"
- },
- "source": [
- "#============================= FORM ============================= #\n",
- "#@markdown ← Export the Email Addresses from the JSON Files to a Text File \n",
- "input_path = \"/content/tools/AutoRclone/accounts\" #@param {type:\"string\"}\n",
- "#@markdown > Path to the folder which contain the Service Account JSON files.\n",
- "#output_name = \"\" #@param {type:\"string\"}\n",
- "#output_path = \"\" #@param {type:\"string\"}\n",
- "##@markdown > If both fields are empty, the default name and path for the output file will be used. Name = service-account-emails.txt Path = /content\n",
- "#================================================================ #\n",
- "\n",
- "import os\n",
- "import IPython\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "\n",
- "#if output_name and output_path == \"\":\n",
- "# output_name = \"service-account-emails\"\n",
- "# output_path = \"/content\"\n",
- "#elif output_name == \"\" and not output_path == \"\":\n",
- "# output_name = \"service-account-emails\"\n",
- "#elif not output_name == \"\" and output_path == \"\":\n",
- "# output_path = \"/content\"\n",
- "\n",
- "\n",
- "if input_path == \"\":\n",
- " display(HTML(\"❌ The input_path field is empty! \"))\n",
- "else:\n",
- " if not os.path.exists(input_path):\n",
- " display(HTML(\"❌ The path you have entered does not exist! \"))\n",
- " elif os.path.exists(input_path) and os.path.isfile(input_path):\n",
- " display(HTML(\"❌ The input_path is not a folder! \"))\n",
- " elif os.path.exists(input_path) and os.path.isdir(input_path):\n",
- " %cd \"$input_path\"\n",
- " !grep -oPh '\"client_email\": \"\\K[^\"]+' *.json > /content/service_account_emails.txt\n",
- " #!grep -oPh '\"client_email\": \"\\K[^\"]+' *.json > \"$output_path\"/\"$output_name\".txt\n",
- " %cd /content\n",
- "\n",
- " clear_output()\n",
- "\n",
- " display(HTML(\"✅ The output is saved in /content/service_account_emails.txt \"))"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "l-Sbt9djBtpe",
- "cellView": "form"
- },
- "source": [
- "#============================= FORM ============================= #\n",
- "#@markdown ← Bulk Rename the Service Account Keys (Optional) \n",
- "service_account_keys_path = \"/content/tools/AutoRclone/accounts\" #@param {type:\"string\"}\n",
- "rename_prefix = \"\" #@param {type:\"string\"}\n",
- "#@markdown > If the rename_prefix field is empty, the default prefix \"service_account_\" will be used (service_account_0.json, service_account_1.json, and so on).\n",
- "#================================================================ #\n",
- "\n",
- "import os\n",
- "\n",
- "if rename_prefix == \"\":\n",
- "  rename_prefix = \"service_account_\"\n",
- "\n",
- "def main():\n",
- " for count, filename in enumerate(os.listdir(service_account_keys_path)):\n",
- " destination = rename_prefix + str(count) + \".json\"\n",
- " source = service_account_keys_path + \"/\" + filename\n",
- " destination = service_account_keys_path + \"/\" + destination\n",
- " \n",
- " # rename() function will\n",
- " # rename all the files\n",
- " os.rename(source, destination)\n",
- " \n",
- "if __name__ == '__main__':\n",
- " main()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "g5HCVqRNaj4Q",
- "cellView": "form"
- },
- "source": [
- "#============================= FORM ============================= #\n",
- "#@markdown ← Bulk Add the Service Accounts into a Team Drive (Optional) \n",
- "Team_Drive_ID = \"\" #@param {type:\"string\"}\n",
- "#@markdown > If this cell does not work, or does not seem to do anything, simply create a Google Group (click here (opens in new tab)), add all (or at least a number) of the service accounts into that group, and then just invite the group's email address into the Team Drive. The group's email should look something like this: group-name@googlegroups.com\n",
- "#================================================================ #\n",
- "\n",
- "import os\n",
- "import IPython\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "if not os.path.exists(\"/content/tools/AutoRclone/add_to_team_drive.py\"):\n",
- " display(HTML(\"❌ Unable to locate the required script! Make sure you have already run the cell [Clone] AutoRclone first! \"))\n",
- "else:\n",
- "  if Team_Drive_ID == \"\":\n",
- "    display(HTML(\"❌ The Team_Drive_ID field is empty! \"))\n",
- "  else:\n",
- "    %cd /content/tools/AutoRclone\n",
- "    !python3 add_to_team_drive.py -d \"$Team_Drive_ID\"\n",
- "    %cd /content\n",
- "\n",
- "    clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "c1N141ZcEdwd"
- },
- "source": [
- "#### Perform the Task "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "eF7Wmr7unSD5",
- "cellView": "form"
- },
- "source": [
- "#============================= FORM ============================= #\n",
- "#@markdown ← [Start] Method 1 \n",
- "Source = \"\" #@param {type:\"string\"}\n",
- "Destination = \"\" #@param {type:\"string\"}\n",
- "#@markdown > I'm pretty sure this only works from one Team Drive to another Team Drive, but your mileage may vary.\n",
- "#================================================================ #\n",
- "\n",
- "import os\n",
- "import IPython\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "if not os.path.exists(\"/content/tools/AutoRclone/rclone_sa_magic.py\"):\n",
- " display(HTML(\"❌ Unable to locate the required script! Make sure you have already run the cell [Clone] AutoRclone first! \"))\n",
- "else:\n",
- "  if Source == \"\" and Destination != \"\":\n",
- "    display(HTML(\"❌ The Source field is empty! \"))\n",
- "  elif Source != \"\" and Destination == \"\":\n",
- "    display(HTML(\"❌ The Destination field is empty! \"))\n",
- "  elif Source == \"\" and Destination == \"\":\n",
- "    display(HTML(\"❌ Both of the fields above are empty! \"))\n",
- "  else:\n",
- "    %cd /content/tools/AutoRclone\n",
- "    !python3 rclone_sa_magic.py -s \"$Source\" -d \"$Destination\" -b 1 -e 600\n",
- "    %cd /content\n",
- "\n",
- "    clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "O0NHwsI_-d3W"
- },
- "source": [
- "### rclone Configuration "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "PDc8KdYNQ2s-",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] rclone WebUI Configuration \n",
- "# @markdown > rclone WebUI default credentials: Username: user | Password: pass\n",
- "USE_FREE_TOKEN = True\n",
- "TOKEN = \"\"\n",
- "REGION = \"US\"\n",
- "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
- "\n",
- "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
- " PORT_FORWARD = \"argotunnel\"\n",
- "elif Tunneling == \"localhost.run\":\n",
- " PORT_FORWARD = \"localhost\"\n",
- "elif Tunneling == \"ngrok\":\n",
- " PORT_FORWARD = \"ngrok\"\n",
- "# ================================================================ #\n",
- "\n",
- "import os, signal, random, string, urllib.request, time\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "HOME = os.path.expanduser(\"~\")\n",
- "runW = get_ipython()\n",
- "\n",
- "if not os.path.exists(\"/root/.ipython/mixlab.py\"):\n",
- " from shlex import split as _spl\n",
- " from subprocess import run\n",
- "\n",
- " shellCmd = \"wget -qq https://shirooo39.github.io/MiXLab/resources/mixlab.py \\\n",
- " -O /root/.ipython/mixlab.py\"\n",
- " run(_spl(shellCmd))\n",
- "\n",
- "from mixlab import (\n",
- " runSh,\n",
- " loadingAn,\n",
- " PortForward_wrapper,\n",
- " displayUrl,\n",
- " findProcess,\n",
- " CWD,\n",
- " textAn,\n",
- " checkAvailable,\n",
- " displayOutput,\n",
- " prepareSession,\n",
- " rcloneConfigurationPath,\n",
- " accessSettingFile,\n",
- " memGiB,\n",
- ")\n",
- "\n",
- "loadingAn()\n",
- "prepareSession()\n",
- "\n",
- "pid = findProcess(\"rclone\", \"rcd\", isPid=True)\n",
- "\n",
- "try:\n",
- " os.kill(int(pid), signal.SIGTERM)\n",
- "except TypeError:\n",
- " pass\n",
- " \n",
- "cmd = \"rclone rcd --rc-web-gui --rc-addr :5572\" \\\n",
- " \" --rc-serve\" \\\n",
- " \" --rc-user=user --rc-pass=pass\" \\\n",
- " \" --rc-no-auth\" \\\n",
- " rf\" --config {rcloneConfigurationPath}/rclone.conf\" \\\n",
- " ' --user-agent \"Mozilla\"' \\\n",
- " ' --transfers 16' \\\n",
- " \" &\"\n",
- "\n",
- "runSh(cmd, shell=True)\n",
- "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['rcloneWebUI', 5572, 'http']], REGION.lower(), [f\"{HOME}/.ngrok2/rcloneWebUI.yml\", 4099]).start('rcloneWebUI', displayB=False)\n",
- "clear_output()\n",
- "displayUrl(Server, pNamU='rclone WebUI : ')"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "5HURZQEZQ6pT",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] rclone CLI Configuration \n",
- "# @markdown Run this cell to create and/or edit an rclone configuration.\n",
- "# @markdown > After you have created a configuration, download the configuration file. The next time you want to mount an rclone drive, simply import that configuration file.\n",
- "USE_FREE_TOKEN = True\n",
- "TOKEN = \"\" #\n",
- "REGION = \"US\"\n",
- "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
- "\n",
- "# @markdown ---\n",
- "automatically_clear_cell_output = True # @param{type: \"boolean\"}\n",
- "\n",
- "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
- " PORT_FORWARD = \"argotunnel\"\n",
- "elif Tunneling == \"localhost.run\":\n",
- " PORT_FORWARD = \"localhost\"\n",
- "elif Tunneling == \"ngrok\":\n",
- " PORT_FORWARD = \"ngrok\"\n",
- "# ================================================================ #\n",
- "\n",
- "import os, urllib.request, IPython\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "HOME = os.path.expanduser(\"~\")\n",
- "runW = get_ipython()\n",
- "\n",
- "if not os.path.exists(\"/root/.ipython/mixlab.py\"):\n",
- " from shlex import split as _spl\n",
- " from subprocess import run\n",
- "\n",
- " shellCmd = \"wget -qq https://shirooo39.github.io/MiXLab/resources/mixlab.py \\\n",
- " -O /root/.ipython/mixlab.py\"\n",
- " run(_spl(shellCmd))\n",
- "\n",
- "from mixlab import (\n",
- " prepareSession,\n",
- " rcloneConfigurationPath,\n",
- " runSh,\n",
- " PortForward_wrapper\n",
- ")\n",
- "\n",
- "import codecs, contextlib, locale, os, pty, select, signal, subprocess, sys, termios, time\n",
- "from IPython.utils import text\n",
- "import six\n",
- "from google.colab import _ipython\n",
- "from google.colab import _message\n",
- "from google.colab.output import _tags\n",
- "\n",
- "# Linux read(2) limits to 0x7ffff000 so stay under that for clarity.\n",
- "_PTY_READ_MAX_BYTES_FOR_TEST = 2**20 # 1MB\n",
- "\n",
- "_ENCODING = 'UTF-8'\n",
- "\n",
- "class ShellResult(object):\n",
- " \"\"\"Result of an invocation of the shell magic.\n",
- "\n",
- " Note: This is intended to mimic subprocess.CompletedProcess, but has slightly\n",
- " different characteristics, including:\n",
- " * CompletedProcess has separate stdout/stderr properties. A ShellResult\n",
- " has a single property containing the merged stdout/stderr stream,\n",
- " providing compatibility with the existing \"!\" shell magic (which this is\n",
- " intended to provide an alternative to).\n",
- " * A custom __repr__ method that returns output. When the magic is invoked as\n",
- " the only statement in the cell, Python prints the string representation by\n",
- " default. The existing \"!\" shell magic also returns output.\n",
- " \"\"\"\n",
- "\n",
- " def __init__(self, args, returncode, command_output):\n",
- " self.args = args\n",
- " self.returncode = returncode\n",
- " self.output = command_output\n",
- "\n",
- " def check_returncode(self):\n",
- " if self.returncode:\n",
- " raise subprocess.CalledProcessError(\n",
- " returncode=self.returncode, cmd=self.args, output=self.output)\n",
- "\n",
- " def _repr_pretty_(self, p, cycle): # pylint:disable=unused-argument\n",
- " # Note: When invoking the magic and not assigning the result\n",
- " # (e.g. %shell echo \"foo\"), Python's default semantics will be used and\n",
- " # print the string representation of the object. By default, this will\n",
- " # display the __repr__ of ShellResult. Suppress this representation since\n",
- " # the output of the command has already been displayed to the output window.\n",
- " if cycle:\n",
- " raise NotImplementedError\n",
- "\n",
- "\n",
- "def _configure_term_settings(pty_fd):\n",
- " term_settings = termios.tcgetattr(pty_fd)\n",
- " # ONLCR transforms NL to CR-NL, which is undesirable. Ensure this is disabled.\n",
- " # http://man7.org/linux/man-pages/man3/termios.3.html\n",
- " term_settings[1] &= ~termios.ONLCR\n",
- "\n",
- " # ECHOCTL echoes control characters, which is undesirable.\n",
- " term_settings[3] &= ~termios.ECHOCTL\n",
- "\n",
- " termios.tcsetattr(pty_fd, termios.TCSANOW, term_settings)\n",
- "\n",
- "\n",
- "def _run_command(cmd, clear_streamed_output):\n",
- " \"\"\"Calls the shell command, forwarding input received on the stdin_socket.\"\"\"\n",
- " locale_encoding = locale.getpreferredencoding()\n",
- " if locale_encoding != _ENCODING:\n",
- " raise NotImplementedError(\n",
- " 'A UTF-8 locale is required. Got {}'.format(locale_encoding))\n",
- "\n",
- " parent_pty, child_pty = pty.openpty()\n",
- " _configure_term_settings(child_pty)\n",
- "\n",
- " epoll = select.epoll()\n",
- " epoll.register(\n",
- " parent_pty,\n",
- " (select.EPOLLIN | select.EPOLLOUT | select.EPOLLHUP | select.EPOLLERR))\n",
- "\n",
- " try:\n",
- " temporary_clearer = _tags.temporary if clear_streamed_output else _no_op\n",
- "\n",
- " with temporary_clearer(), _display_stdin_widget(\n",
- " delay_millis=500) as update_stdin_widget:\n",
- " # TODO(b/115531839): Ensure that subprocesses are terminated upon\n",
- " # interrupt.\n",
- " p = subprocess.Popen(\n",
- " cmd,\n",
- " shell=True,\n",
- " executable='/bin/bash',\n",
- " stdout=child_pty,\n",
- " stdin=child_pty,\n",
- " stderr=child_pty,\n",
- " close_fds=True)\n",
- " # The child PTY is only needed by the spawned process.\n",
- " os.close(child_pty)\n",
- "\n",
- " return _monitor_process(parent_pty, epoll, p, cmd, update_stdin_widget)\n",
- " finally:\n",
- " epoll.close()\n",
- " os.close(parent_pty)\n",
- "\n",
- "\n",
- "class _MonitorProcessState(object):\n",
- "\n",
- " def __init__(self):\n",
- " self.process_output = six.StringIO()\n",
- " self.is_pty_still_connected = True\n",
- "\n",
- "\n",
- "def _monitor_process(parent_pty, epoll, p, cmd, update_stdin_widget):\n",
- " \"\"\"Monitors the given subprocess until it terminates.\"\"\"\n",
- " state = _MonitorProcessState()\n",
- "\n",
- " # A single UTF-8 character can span multiple bytes. os.read returns bytes and\n",
- " # could return a partial byte sequence for a UTF-8 character. Using an\n",
- " # incremental decoder is incrementally fed input bytes and emits UTF-8\n",
- " # characters.\n",
- " decoder = codecs.getincrementaldecoder(_ENCODING)()\n",
- "\n",
- " num_interrupts = 0\n",
- " echo_status = None\n",
- " while True:\n",
- " try:\n",
- " result = _poll_process(parent_pty, epoll, p, cmd, decoder, state)\n",
- " if result is not None:\n",
- " return result\n",
- " term_settings = termios.tcgetattr(parent_pty)\n",
- " new_echo_status = bool(term_settings[3] & termios.ECHO)\n",
- " if echo_status != new_echo_status:\n",
- " update_stdin_widget(new_echo_status)\n",
- " echo_status = new_echo_status\n",
- " except KeyboardInterrupt:\n",
- " try:\n",
- " num_interrupts += 1\n",
- " if num_interrupts == 1:\n",
- " p.send_signal(signal.SIGINT)\n",
- " elif num_interrupts == 2:\n",
- " # Process isn't responding to SIGINT and user requested another\n",
- " # interrupt. Attempt to send SIGTERM followed by a SIGKILL if the\n",
- " # process doesn't respond.\n",
- " p.send_signal(signal.SIGTERM)\n",
- " time.sleep(0.5)\n",
- " if p.poll() is None:\n",
- " p.send_signal(signal.SIGKILL)\n",
- " except KeyboardInterrupt:\n",
- " # Any interrupts that occur during shutdown should not propagate.\n",
- " pass\n",
- "\n",
- " if num_interrupts > 2:\n",
- " # In practice, this shouldn't be possible since\n",
- " # SIGKILL is quite effective.\n",
- " raise\n",
- "\n",
- "\n",
- "def _poll_process(parent_pty, epoll, p, cmd, decoder, state):\n",
- " \"\"\"Polls the process and captures / forwards input and output.\"\"\"\n",
- "\n",
- " terminated = p.poll() is not None\n",
- " if terminated:\n",
- " termios.tcdrain(parent_pty)\n",
- " # We're no longer interested in write events and only want to consume any\n",
- " # remaining output from the terminated process. Continuing to watch write\n",
- " # events may cause early termination of the loop if no output was\n",
- " # available but the pty was ready for writing.\n",
- " epoll.modify(parent_pty,\n",
- " (select.EPOLLIN | select.EPOLLHUP | select.EPOLLERR))\n",
- "\n",
- " output_available = False\n",
- "\n",
- " events = epoll.poll()\n",
- " input_events = []\n",
- " for _, event in events:\n",
- " if event & select.EPOLLIN:\n",
- " output_available = True\n",
- " raw_contents = os.read(parent_pty, _PTY_READ_MAX_BYTES_FOR_TEST)\n",
- " import re\n",
- " decoded_contents = re.sub(r\"http:\\/\\/127.0.0.1:53682\", Server[\"url\"], \n",
- " decoder.decode(raw_contents))\n",
- " sys.stdout.write(decoded_contents)\n",
- " state.process_output.write(decoded_contents)\n",
- "\n",
- " if event & select.EPOLLOUT:\n",
- " # Queue polling for inputs behind processing output events.\n",
- " input_events.append(event)\n",
- "\n",
- " # PTY was disconnected or encountered a connection error. In either case,\n",
- " # no new output should be made available.\n",
- " if (event & select.EPOLLHUP) or (event & select.EPOLLERR):\n",
- " state.is_pty_still_connected = False\n",
- "\n",
- " for event in input_events:\n",
- " # Check to see if there is any input on the stdin socket.\n",
- " # pylint: disable=protected-access\n",
- " input_line = _message._read_stdin_message()\n",
- " # pylint: enable=protected-access\n",
- " if input_line is not None:\n",
- " # If a very large input or sequence of inputs is available, it's\n",
- " # possible that the PTY buffer could be filled and this write call\n",
- " # would block. To work around this, non-blocking writes and keeping\n",
- " # a list of to-be-written inputs could be used. Empirically, the\n",
- " # buffer limit is ~12K, which shouldn't be a problem in most\n",
- " # scenarios. As such, optimizing for simplicity.\n",
- " input_bytes = bytes(input_line.encode(_ENCODING))\n",
- " os.write(parent_pty, input_bytes)\n",
- "\n",
- " # Once the process is terminated, there still may be output to be read from\n",
- " # the PTY. Wait until the PTY has been disconnected and no more data is\n",
- " # available for read. Simply waiting for disconnect may be insufficient if\n",
- " # there is more data made available on the PTY than we consume in a single\n",
- " # read call.\n",
- " if terminated and not state.is_pty_still_connected and not output_available:\n",
- " sys.stdout.flush()\n",
- " command_output = state.process_output.getvalue()\n",
- " return ShellResult(cmd, p.returncode, command_output)\n",
- "\n",
- " if not output_available:\n",
- " # The PTY is almost continuously available for reading input to provide\n",
- " # to the underlying subprocess. This means that the polling loop could\n",
- " # effectively become a tight loop and use a large amount of CPU. Add a\n",
- " # slight delay to give resources back to the system while monitoring the\n",
- " # process.\n",
- " # Skip this delay if we read output in the previous loop so that a partial\n",
- " # read doesn't unnecessarily sleep before reading more output.\n",
- " # TODO(b/115527726): Rather than sleep, poll for incoming messages from\n",
- " # the frontend in the same poll as for the output.\n",
- " time.sleep(0.1)\n",
- "\n",
- "\n",
- "@contextlib.contextmanager\n",
- "def _display_stdin_widget(delay_millis=0):\n",
- " \"\"\"Context manager that displays a stdin UI widget and hides it upon exit.\n",
- "\n",
- " Args:\n",
- " delay_millis: Duration (in milliseconds) to delay showing the widget within\n",
- " the UI.\n",
- "\n",
- " Yields:\n",
- " A callback that can be invoked with a single argument indicating whether\n",
- " echo is enabled.\n",
- " \"\"\"\n",
- " shell = _ipython.get_ipython()\n",
- " display_args = ['cell_display_stdin', {'delayMillis': delay_millis}]\n",
- " _message.blocking_request(*display_args, parent=shell.parent_header)\n",
- "\n",
- " def echo_updater(new_echo_status):\n",
- " # Note: Updating the echo status uses colab_request / colab_reply on the\n",
- " # stdin socket. Input provided by the user also sends messages on this\n",
- " # socket. If user input is provided while the blocking_request call is still\n",
- " # waiting for a colab_reply, the input will be dropped per\n",
- " # https://github.com/googlecolab/colabtools/blob/56e4dbec7c4fa09fad51b60feb5c786c69d688c6/google/colab/_message.py#L100.\n",
- " update_args = ['cell_update_stdin', {'echo': new_echo_status}]\n",
- " _message.blocking_request(*update_args, parent=shell.parent_header)\n",
- "\n",
- " yield echo_updater\n",
- "\n",
- " hide_args = ['cell_remove_stdin', {}]\n",
- " _message.blocking_request(*hide_args, parent=shell.parent_header)\n",
- "\n",
- "\n",
- "@contextlib.contextmanager\n",
- "def _no_op():\n",
- " yield\n",
- "\n",
- "prepareSession()\n",
- "\n",
- "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['rcloneConfiguration', 53682, 'http']], REGION.lower(), [f\"{HOME}/.ngrok2/rcloneConfiguration.yml\", 4074]).start('rcloneConfiguration', displayB=False, v=False)\n",
- "\n",
- "printData = \"\"\"\n",
- "Before finishing the configuration, you will be redirected to an address.\n",
- "Replace the address http://127.0.0.1:53682 with {}\"\"\".format(Server['url'])\n",
- "print(printData)\n",
- "display(HTML('(Click here to see how to do it)'))\n",
- "print(f\"{Server['url']}\", end=\"\\n\\n\")\n",
- "_run_command(f\"rclone config --config {rcloneConfigurationPath}/rclone.conf\", False)\n",
- "\n",
- "\n",
- "if automatically_clear_cell_output is True:\n",
- "\tclear_output()\n",
- "else:\n",
- "\tpass"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "qakuMVVjQlGU",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Mount/Unmount rclone Drive (Optional) \n",
- "# @markdown Mount a remote drive as a local drive on a mountpoint.\n",
- "# @markdown ---\n",
- "Cache_Directory = \"DISK\" #@param [\"RAM\", \"DISK\"]\n",
- "# ================================================================ #\n",
- "\n",
- "import os\n",
- "from IPython.display import HTML, clear_output\n",
- "import uuid\n",
- "import ipywidgets as widgets\n",
- "from google.colab import output\n",
- "import re\n",
- "\n",
- "if not os.path.exists(\"/root/.ipython/mixlab.py\"):\n",
- " from shlex import split as _spl\n",
- " from subprocess import run\n",
- "\n",
- " shellCmd = \"wget -qq https://shirooo39.github.io/MiXLab/resources/mixlab.py \\\n",
- " -O /root/.ipython/mixlab.py\"\n",
- " run(_spl(shellCmd))\n",
- "\n",
- "from mixlab import (\n",
- " runSh,\n",
- " prepareSession,\n",
- " rcloneConfigurationPath,\n",
- ")\n",
- "\n",
- "class MakeButton(object):\n",
- " def __init__(self, title, callback, style):\n",
- " self._title = title\n",
- " self._callback = callback\n",
- " self._style = style\n",
- " def _repr_html_(self):\n",
- " callback_id = 'button-' + str(uuid.uuid4())\n",
- " output.register_callback(callback_id, self._callback)\n",
- " if self._style != \"\":\n",
- " style_html = \"p-Widget jupyter-widgets jupyter-button widget-button mod-\" + self._style\n",
- " else:\n",
- " style_html = \"p-Widget jupyter-widgets jupyter-button widget-button\"\n",
- " template = \"\"\"{title} \n",
- " \"\"\"\n",
- " html = template.format(title=self._title, callback_id=callback_id, style_html=style_html)\n",
- " return html\n",
- " \n",
- "def ShowAC():\n",
- " clear_output(wait=True)\n",
- " display(\n",
- " widgets.HBox(\n",
- " [widgets.VBox(\n",
- " [widgets.HTML(\n",
- " '''\n",
- " Available drive to mount/unmount: \n",
- " '''\n",
- " ),\n",
- " mountNam]\n",
- " )\n",
- " ]\n",
- " )\n",
- " )\n",
- " \n",
- " display(HTML(\" \"), MakeButton(\"Mount\", MountCMD, \"primary\"),\n",
- " MakeButton(\"Unmount\", unmountCMD, \"danger\"))\n",
- "\n",
- "prepareSession()\n",
- "content = open(f\"{rcloneConfigurationPath}/rclone.conf\").read()\n",
- "avCon = re.findall(r\"^\\[(.+)\\]$\", content, re.M)\n",
- "mountNam = widgets.Dropdown(options=avCon)\n",
- "\n",
- "if Cache_Directory == 'RAM':\n",
- " cache_path = '/dev/shm'\n",
- "elif Cache_Directory == 'DISK':\n",
- " os.makedirs('/tmp', exist_ok=True)\n",
- " cache_path = '/tmp'\n",
- "\n",
- "def MountCMD():\n",
- " mPoint = f\"/content/drives/{mountNam.value}\"\n",
- " os.makedirs(mPoint, exist_ok=True)\n",
- " cmd = rf\"rclone mount {mountNam.value}: {mPoint}\" \\\n",
- " rf\" --config {rcloneConfigurationPath}/rclone.conf\" \\\n",
- " ' --user-agent \"Mozilla\"' \\\n",
- " ' --buffer-size 256M' \\\n",
- " ' --transfers 10' \\\n",
- " ' --vfs-cache-mode full' \\\n",
- " ' --vfs-cache-max-age 0h0m1s' \\\n",
- " ' --vfs-cache-poll-interval 0m1s' \\\n",
- " f' --cache-dir {cache_path}' \\\n",
- " ' --allow-other' \\\n",
- " ' --daemon'\n",
- "\n",
- " if runSh(cmd, shell=True) == 0:\n",
- "    print(f\"The drive has been successfully mounted! - \\t{mPoint}\")\n",
- " else:\n",
- " print(f\"Failed to mount the drive! - \\t{mPoint}\")\n",
- "\n",
- "def unmountCMD():\n",
- " mPoint = f\"/content/drives/{mountNam.value}\"\n",
- " if os.system(f\"fusermount -uz {mPoint}\") == 0:\n",
- " runSh(f\"rm -r {mPoint}\")\n",
- "    print(f\"The drive has been successfully unmounted! - \\t{mPoint}\")\n",
- " else:\n",
- " runSh(f\"fusermount -uz {mPoint}\", output=True)\n",
- "\n",
- "ShowAC()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "G3rr1OuFRApD",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Upload Configuration File \n",
- "# @markdown If you already have an rclone configuration file, you can upload it by running this cell.\n",
- "\n",
- "# @markdown ---\n",
- "MODE = \"RCONFIG\" # @param ['UTILS', 'RCONFIG', 'RCONFIG_append', \"GENERATELIST\"]\n",
- "REMOTE = \"mnc\" # @param {type:\"string\"}\n",
- "QUERY_PATTERN = \"\" # @param {type:\"string\"}\n",
- "# @markdown > For those who are unable to upload a local file: StackOverflow\n",
- "# ================================================================ #\n",
- "\n",
- "from os import path as _p\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "if not _p.exists(\"/root/.ipython/mixlab.py\"):\n",
- " from shlex import split as _spl\n",
- " from subprocess import run # nosec\n",
- "\n",
- " shellCmd = \"wget -qq https://shirooo39.github.io/MiXLab/resources/mixlab.py \\\n",
- " -O /root/.ipython/mixlab.py\"\n",
- " run(_spl(shellCmd)) # nosec\n",
- "\n",
- "import importlib, mixlab\n",
- "from google.colab import files # pylint: disable=import-error #nosec\n",
- "from mixlab import checkAvailable, runSh, rcloneConfigurationPath, prepareSession\n",
- "\n",
- "\n",
- "def generateUploadList():\n",
- " prepareSession()\n",
- " if checkAvailable(\"/content/upload.txt\"):\n",
- " runSh(\"rm -f upload.txt\")\n",
- " runSh(\n",
- " f\"rclone --config {rcloneConfigurationPath}/rclone.conf lsf {REMOTE}: --include '{QUERY_PATTERN}' --drive-shared-with-me --files-only --max-depth 1 > /content/upload.txt\",\n",
- " shell=True, # nosec\n",
- " )\n",
- "\n",
- "\n",
- "def uploadLocalFiles():\n",
- " prepareSession()\n",
- " if MODE == \"UTILS\":\n",
- " filePath = \"/root/.ipython/mixlab.py\"\n",
- " elif MODE in (\"RCONFIG\", \"RCONFIG_append\"):\n",
- " filePath = f\"{rcloneConfigurationPath}/rclone.conf\"\n",
- " else:\n",
- " pass\n",
- "\n",
- " try:\n",
- " if checkAvailable(filePath):\n",
- " runSh(f\"rm -f {filePath}\")\n",
- " display(HTML(\"Upload rclone.conf from your local machine. \"))\n",
- " uploadedFile = files.upload()\n",
- " fileNameDictKeys = uploadedFile.keys()\n",
- " fileNo = len(fileNameDictKeys)\n",
- " if fileNo > 1:\n",
- " for fn in fileNameDictKeys:\n",
- " runSh(f'rm -f \"/content/{fn}\"')\n",
- " return print(\"\\nOnly upload one configuration file!\")\n",
- " elif fileNo == 0:\n",
- " return print(\"\\nFile upload cancelled.\")\n",
- " elif fileNo == 1:\n",
- " for fn in fileNameDictKeys:\n",
- " if checkAvailable(f\"/content/{fn}\"):\n",
- " if MODE == \"RCONFIG_append\":\n",
- " import urllib\n",
- " urllib.request.urlretrieve(\"https://shirooo39.github.io/MiXLab/resources/configurations/rclone/rclone.conf\",\n",
- " \"/usr/local/sessionSettings/rclone.conf\")\n",
- " with open(f\"/content/{fn}\", 'r+') as r:\n",
- " new_data = r.read()\n",
- " runSh(f'rm -f \"/content/{fn}\"')\n",
- " with open(filePath, 'r+') as f:\n",
- " old_data = f.read()\n",
- " f.seek(0)\n",
- " f.truncate(0)\n",
- " f.write(old_data + new_data)\n",
- " print(\"\\nUpdate completed.\")\n",
- " else:\n",
- " runSh(f'mv -f \"/content/{fn}\" {filePath}')\n",
- " runSh(f\"chmod 666 {filePath}\")\n",
- " runSh(f'rm -f \"/content/{fn}\"')\n",
- " importlib.reload(mixlab)\n",
- " !rm /content/upload.txt\n",
- " clear_output()\n",
- " print(\"rclone.conf has been uploaded to Colab!\")\n",
- " return\n",
- " else:\n",
- " print(\"\\nNo file is chosen!\")\n",
- " return\n",
- " except:\n",
- " return print(\"\\nFailed to upload!\")\n",
- "\n",
- "\n",
- "if MODE == \"GENERATELIST\":\n",
- " generateUploadList()\n",
- "else:\n",
- " uploadLocalFiles()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "BucL21B4RIGJ",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Download Configuration File \n",
- "# @markdown Download the configuration file from the VM to your local machine.\n",
- "\n",
- "# @markdown ---\n",
- "MODE = \"RCONFIG\" # @param ['UTILS', 'RCONFIG']\n",
- "# ================================================================ #\n",
- "\n",
- "from IPython.display import clear_output\n",
- "from google.colab import files\n",
- "# rcloneConfigurationPath comes from mixlab.py, which the rclone cells above download into /root/.ipython\n",
- "from mixlab import rcloneConfigurationPath\n",
- "\n",
- "def downloadFile():\n",
- " if MODE == \"UTILS\":\n",
- " filePath = \"/root/.ipython/mixlab.py\"\n",
- " elif MODE == \"RCONFIG\":\n",
- " filePath = f\"{rcloneConfigurationPath}/rclone.conf\"\n",
- " else:\n",
- " pass\n",
- " try:\n",
- " files.download(filePath)\n",
- " except FileNotFoundError:\n",
- " print(\"File not found!\")\n",
- "\n",
- "if __name__ == \"__main__\":\n",
- " downloadFile()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "_NGsTyR3Ra5N",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "## @markdown ← Sync Backup \n",
- "# @markdown \n",
- "#FAST_LIST = True\n",
- "# ================================================================ #\n",
- "\n",
- "#from os import path as _p\n",
- "\n",
- "#if not _p.exists(\"/root/.ipython/rlab_utils.py\"):\n",
- "# from shlex import split as _spl\n",
- "# from subprocess import run # nosec\n",
- "\n",
- "# shellCmd = \"wget -qq https://biplobsd.github.io/RLabClone/res/rlab_utils.py \\\n",
- "# -O /root/.ipython/rlab_utils.py\"\n",
- "# run(_spl(shellCmd)) # nosec\n",
- "\n",
- "#from rlab_utils import (\n",
- "# runSh,\n",
- "# prepareSession,\n",
- "# PATH_RClone_Config,\n",
- "#)\n",
- "\n",
- "\n",
- "#def generateCmd(src, dst):\n",
- "# block=f\"{'':=<117}\"\n",
- "# title=f\"\"\"+{f'Now Synchronizing... \"{src}\" > \"{dst}\" Fast List : {\"ON\" if FAST_LIST else \"OFF\"}':^{len(block)-2}}+\"\"\"\n",
- "# print(f\"{block}\\n{title}\\n{block}\")\n",
- "# cmd = f'rclone sync \"{src}\" \"{dst}\" --config {PATH_RClone_Config}/rclone.conf {\"--fast-list\" if FAST_LIST else \"\"} --user-agent \"Mozilla\" --transfers 20 --checkers 20 --drive-server-side-across-configs -c --buffer-size 256M --drive-chunk-size 256M --drive-upload-cutoff 256M --drive-acknowledge-abuse --drive-keep-revision-forever --tpslimit 95 --tpslimit-burst 40 --stats-one-line --stats=5s -v'\n",
- "# return cmd\n",
- "\n",
- "\n",
- "#def executeSync():\n",
- "# prepareSession()\n",
- "# runSh(generateCmd(\"tdTdnMov:Movies\",\"tdMovRa4:\"), output=True)\n",
- "# runSh(generateCmd(\"tdTdnTvs:TV Shows\",\"tdTvsRa5:\"), output=True)\n",
- "# runSh(generateCmd(\"tdTdnRa6:Games\",\"tdGamRa7:\"), output=True)\n",
- "# runSh(generateCmd(\"tdTdnRa8:Software\",\"tdSofRa9:\"), output=True)\n",
- "# runSh(generateCmd(\"tdTdnR11:Tutorials\",\"tdTutR12:\"), output=True)\n",
- "# runSh(generateCmd(\"tdTdnR13:Anime\",\"tdAniR14:\"), output=True)\n",
- "# runSh(generateCmd(\"tdTdn14:Music\",\"tdMusR15:\"), output=True)\n",
- "\n",
- "\n",
- "#executeSync()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "4DdRcv08fzTG"
- },
- "source": [
- "# ✦ *Download Manager* ✦ "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "Sjvzf5WLsJya"
- },
- "source": [
- "> It is recommended to download the file(s) into the VM's local disk first and then use rclone to upload (move/copy) them to the remote drive, to avoid possible file corruption. A rough example of that rclone step is sketched in the cell below."
- ]
- },
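- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "> A minimal sketch of the \"download locally, then move with rclone\" step: \"mydrive\" and the destination folder are placeholders for a remote defined in your own rclone.conf, and the cell assumes mixlab.py and your rclone configuration have already been set up by the rclone cells above."
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form"
- },
- "source": [
- "#============================= FORM ============================= #\n",
- "#@markdown ← [Sketch] Move finished downloads to an rclone remote \n",
- "local_path = \"/content/downloads\" #@param {type:\"string\"}\n",
- "remote_path = \"mydrive:Colab Downloads\" #@param {type:\"string\"}\n",
- "#@markdown > remote_path is only an example; use a remote name from your own rclone.conf.\n",
- "#================================================================ #\n",
- "\n",
- "# rcloneConfigurationPath comes from mixlab.py, which the rclone cells above set up.\n",
- "from mixlab import rcloneConfigurationPath\n",
- "\n",
- "# \"rclone move\" deletes the local copies after a successful transfer;\n",
- "# change it to \"rclone copy\" if you want to keep them on the VM as well.\n",
- "!rclone move \"$local_path\" \"$remote_path\" --config \"$rcloneConfigurationPath/rclone.conf\" -P"
- ],
- "execution_count": null,
- "outputs": []
- },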
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "o_uCXhC1S0GZ"
- },
- "source": [
- "## ✧ *Hosted-File Downloader* ✧ "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "nGKbLp4P8MXi"
- },
- "source": [
- "### aria2 "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "l8uIsoVrC6to"
- },
- "source": [
- "#### aria2 "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "Z3fpZQeJ8N80",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] aria2 \n",
- "Aria2_rpc = True\n",
- "Ariang_WEBUI = True\n",
- "USE_FREE_TOKEN = True\n",
- "TOKEN = \"\"\n",
- "REGION = \"US\"\n",
- "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
- "# OUTPUT_DIR = \"\" # @param {type:\"string\"}\n",
- "\n",
- "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
- " PORT_FORWARD = \"argotunnel\"\n",
- "elif Tunneling == \"localhost.run\":\n",
- " PORT_FORWARD = \"localhost\"\n",
- "elif Tunneling == \"ngrok\":\n",
- " PORT_FORWARD = \"ngrok\"\n",
- "# ================================================================ #\n",
- "\n",
- "import os, pathlib, zipfile, re\n",
- "import urllib.request, requests\n",
- "from IPython.display import HTML, clear_output\n",
- "from urllib.parse import urlparse\n",
- "\n",
- "PORT = 8221\n",
- "HOME = os.path.expanduser(\"~\")\n",
- "\n",
- "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
- " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
- " \"MiXLab/master/resources/mixlab.py\"\n",
- " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
- "\n",
- "from mixlab import (\n",
- " runSh,\n",
- " loadingAn,\n",
- " PortForward_wrapper,\n",
- " CWD,\n",
- " displayUrl,\n",
- " findProcess,\n",
- " findPackageR\n",
- ")\n",
- "\n",
- "loadingAn()\n",
- "\n",
- "# Setting up aria2\n",
- "runSh('apt install -y aria2')\n",
- "pathlib.Path('ariang').mkdir(mode=0o777, exist_ok=True)\n",
- "pathlib.Path('downloads').mkdir(mode=0o777, exist_ok=True)\n",
- "\n",
- "# Defining Github latest release tag\n",
- "def latestTag(link):\n",
- " import re\n",
- " from urllib.request import urlopen\n",
- " htmlF = urlopen(link+\"/releases/latest\").read().decode('UTF-8')\n",
- " return re.findall(r'.+\\/tag\\/([.0-9A-Za-z]+)\".+/', htmlF)[0]\n",
- "\n",
- "# Downloading the latest version of ariaNg\n",
- "if not os.path.exists(\"ariang/index.html\"):\n",
- " # BASE_URL = r\"https://github.com/mayswind/AriaNg\"\n",
- " # LATEST_TAG = latestTag(BASE_URL)\n",
- " # urlF = f'{BASE_URL}/releases/download/{LATEST_TAG}/' \\\n",
- " # f'AriaNg-{LATEST_TAG}-AllInOne.zip'\n",
- " urllib.request.urlretrieve(findPackageR('mayswind/AriaNg', 'AllInOne.zip'), 'ariang/new.zip')\n",
- " with zipfile.ZipFile('ariang/new.zip', 'r') as zip_ref: zip_ref.extractall('ariang')\n",
- " try:\n",
- " pathlib.Path('ariang/new.zip').unlink()\n",
- " except FileNotFoundError:\n",
- " pass\n",
- "\n",
- "# Starting up aria2 RPC and the WebUI (ariaNg)\n",
- "try:\n",
- "    # The OUTPUT_DIR form field is commented out above, so this normally raises a\n",
- "    # NameError and falls back to the default download folder below.\n",
- "    if not OUTPUT_DIR:\n",
- "        OUTPUT_DIR = \"downloads/\"\n",
- "    elif not os.path.exists(OUTPUT_DIR):\n",
- "        clear_output()\n",
- "        print(\"Unable to find the defined path! Falling back to the default download folder.\")\n",
- "        OUTPUT_DIR = f\"{CWD}/downloads/\"\n",
- "except NameError:\n",
- "    OUTPUT_DIR = f\"{CWD}/downloads/\"\n",
- "\n",
- "if Aria2_rpc:\n",
- " if not findProcess(\"aria2c\", \"--enable-rpc\"):\n",
- " try:\n",
- " trackers = requests.get(\"https://trackerslist.com/best_aria2.txt\").text\n",
- " cmdC = r\"aria2c --enable-rpc --rpc-listen-port=6800 -D \" \\\n",
- " fr\"-d {OUTPUT_DIR} \" \\\n",
- " r\"-j 20 \" \\\n",
- " r\"-c \" \\\n",
- " fr\"--bt-tracker={trackers} \" \\\n",
- " r\"--bt-request-peer-speed-limit=0 \" \\\n",
- " r\"--bt-max-peers=0 \" \\\n",
- " r\"--seed-ratio=0.0 \" \\\n",
- " r\"--max-connection-per-server=10 \" \\\n",
- " r\"--min-split-size=10M \" \\\n",
- " r\"--follow-torrent=mem \" \\\n",
- " r\"--disable-ipv6=true \" \\\n",
- " r\" &\"\n",
- " runSh(cmdC, shell=True)\n",
- " except:\n",
- "      print(\"Failed to start the aria2 RPC server!\")\n",
- "\n",
- "# Configuring port forwarding\n",
- "clear_output()\n",
- "\n",
- "if Aria2_rpc:\n",
- " Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['Aria2_rpc', 6800, 'http']], REGION.lower(), [f\"{HOME}/.ngrok2/aria2.yml\", 5042])\n",
- " data = Server.start('Aria2_rpc', displayB=False)\n",
- " Host = urlparse(data['url']).hostname\n",
- " port = \"80\"\n",
- "\n",
- "clear_output()\n",
- "\n",
- "if Ariang_WEBUI:\n",
- " if Aria2_rpc:\n",
- " filePath = 'ariang/index.html'\n",
- " with open(filePath, 'r+') as f:\n",
- " read_data = f.read()\n",
- " f.seek(0)\n",
- " f.truncate(0)\n",
- " read_data = re.sub(r'(rpcHost:\"\\w+.\")|rpcHost:\"\"', f'rpcHost:\"{Host}\"', read_data)\n",
- " read_data = re.sub(r'protocol:\"\\w+.\"', r'protocol:\"ws\"', read_data)\n",
- " read_data = re.sub(r'rpcPort:\"\\d+.\"', f'rpcPort:\"{port}\"', read_data)\n",
- " f.write(read_data)\n",
- " try:\n",
- " urllib.request.urlopen(f\"http://localhost:{PORT}\")\n",
- " except:\n",
- " runSh(f\"python3 -m http.server {PORT} &\", shell=True, cd=\"ariang/\")\n",
- " \n",
- "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['Ariang', PORT, 'http']], REGION.lower(), [f\"{HOME}/.ngrok2/ariang.yml\", 5043])\n",
- "data2 = Server.start('Ariang', displayB=False)\n",
- "data2['url'] = urlparse(data2['url'])._replace(scheme='http').geturl()\n",
- "displayUrl(data2, pNamU='AriaNg : ')\n",
- "\n",
- "if Aria2_rpc:\n",
- "  display(HTML(\"\"\"aria2 RPC Configuration | Protocol: WebSocket | Host: \"\"\"+Host+\"\"\" | Port: \"\"\"+port+\"\"\" | Click HERE to see how to use the configuration.\"\"\"))"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "YMSrqjUm_bDN"
- },
- "source": [
- "#### aria2 > "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "xa483vhL_d0X",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] aria2 > \n",
- "URL = \"\" #@param {type:\"string\"}\n",
- "OUTPUT_PATH = \"\" #@param {type:\"string\"}\n",
- "# @markdown > If OUTPUT_PATH is blank, the file will be downloaded into the default location. The default download location is /content/downloads\n",
- "# ================================================================ #\n",
- "\n",
- "import pathlib\n",
- "import shutil\n",
- "import hashlib\n",
- "import requests\n",
- "from urllib.parse import urlparse\n",
- "from os import path, mkdir\n",
- "if not path.exists(\"/root/.ipython/mixlab.py\"): \n",
- " from subprocess import run\n",
- " from shlex import split\n",
- "\n",
- " shellCmd = \"wget -qq https://raw.githubusercontent.com/shirooo39/\" \\\n",
- " \"MiXLab/master/resources/mixlab.py \\\n",
- " -O /root/.ipython/mixlab.py\"\n",
- " run(split(shellCmd))\n",
- "\n",
- "from mixlab import runSh\n",
- "\n",
- "def youtubedlInstall():\n",
- " if not path.isfile(\"/usr/local/bin/youtube-dl\"):\n",
- " cmdC = \"rm -rf /content/sample_data/ && \" \\\n",
- " \" mkdir -p -m 666 /root/.YouTube-DL/ &&\" \\\n",
- " \" apt-get install atomicparsley &&\" \\\n",
- " \" curl -L https://yt-dl.org/downloads/latest/youtube-dl \" \\\n",
- " \"-o /usr/local/bin/youtube-dl &&\" \\\n",
- " \" chmod a+rx /usr/local/bin/youtube-dl\"\n",
- " get_ipython().system_raw(cmdC)\n",
- "\n",
- "def aria2Install():\n",
- " runSh('apt install -y aria2')\n",
- "\n",
- "def istmd(URL): \n",
- " link = urlparse(URL)\n",
- " \n",
- " #YandexDisk\n",
- " if link.netloc == \"yadi.sk\":\n",
- " API_ENDPOINT = 'https://cloud-api.yandex.net/v1/disk/public/resources/' \\\n",
- " '?public_key={}&path=/{}&offset={}'\n",
- " dry = False\n",
- " def md5sum(file_path):\n",
- " md5 = hashlib.md5()\n",
- " with open(file_path, 'rb') as f:\n",
- " for chunk in iter(lambda: f.read(128 * md5.block_size), b''):\n",
- " md5.update(chunk)\n",
- " return md5.hexdigest()\n",
- "\n",
- "\n",
- " def check_and_download_file(target_path, url, size, checksum):\n",
- " if path.isfile(target_path):\n",
- " if size == path.getsize(target_path):\n",
- " if checksum == md5sum(target_path):\n",
- " print('URL {}'.format(url))\n",
- " print('skipping correct {}'.format(target_path))\n",
- " return\n",
- " if not dry:\n",
- " print('URL {}'.format(url))\n",
- " print('downloading {}'.format(target_path))\n",
- " runSh(f'aria2c -x 16 -s 16 -k 1M -d {OUTPUT_PATH} {url}', output=True)\n",
- " # r = requests.get(url, stream=True)\n",
- " # with open(target_path, 'wb') as f:\n",
- " # shutil.copyfileobj(r.raw, f)\n",
- "\n",
- " def download_path(target_path, public_key, source_path, offset=0):\n",
- " print('getting \"{}\" at offset {}'.format(source_path, offset))\n",
- " current_path = path.join(target_path, source_path)\n",
- " pathlib.Path(current_path).mkdir(parents=True, exist_ok=True)\n",
- " jsn = requests.get(API_ENDPOINT.format(public_key, source_path, offset)).json()\n",
- " def try_as_file(j):\n",
- " if 'file' in j:\n",
- " file_save_path = path.join(current_path, j['name'])\n",
- " check_and_download_file(file_save_path, j['file'], j['size'], j['md5'])\n",
- " return True\n",
- " return False\n",
- "\n",
- " # first try to treat the actual json as a single file description\n",
- " if try_as_file(jsn):\n",
- " return\n",
- "\n",
- " # otherwise treat it as a directory\n",
- " emb = jsn['_embedded']\n",
- " items = emb['items']\n",
- " for i in items:\n",
- " # each item can be a file...\n",
- " if try_as_file(i):\n",
- " continue\n",
- " # ... or a directory\n",
- " else:\n",
- " subdir_path = path.join(source_path, i['name'])\n",
- " download_path(target_path, public_key, subdir_path)\n",
- "\n",
- " # check if current directory has more items\n",
- " last = offset + emb['limit']\n",
- " if last < emb['total']:\n",
- " download_path(target_path, public_key, source_path, last)\n",
- " download_path(OUTPUT_PATH, URL, '')\n",
- " return False \n",
- " return URL\n",
- "\n",
- "if not OUTPUT_PATH:\n",
- " OUTPUT_PATH = \"/content/downloads/\"\n",
- " \n",
- "if not URL == \"\":\n",
- " aria2Install()\n",
- " youtubedlInstall()\n",
- " try:\n",
- " mkdir(\"downloads\")\n",
- " except FileExistsError:\n",
- " pass\n",
- " url = istmd(URL)\n",
- " if url != False:\n",
- " print('URL {}'.format(URL))\n",
- " cmdC = f'youtube-dl -o \"{OUTPUT_PATH}/%(title)s\" {URL} ' \\\n",
- " '--external-downloader aria2c ' \\\n",
- " '--external-downloader-args \"-x 16 -s 16 -k 1M\"'\n",
- " runSh(cmdC, output=True)\n",
- "else:\n",
- "  print(\"The URL field is empty!\")"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "N09EnjlB6wuV"
- },
- "source": [
- "### bandcamp-dl "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "0jLuWp0C604l",
- "cellView": "form"
- },
- "source": [
- "#============================= FORM =============================#\n",
- "#@markdown ← [Install] bandcamp-dl \n",
- "#@markdown Make sure to run this cell first! \n",
- "#================================================================#\n",
- "\n",
- "import IPython\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "!pip3 install bandcamp-downloader\n",
- "\n",
- "clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "LxU70FqH62an",
- "cellView": "form"
- },
- "source": [
- "#============================= FORM =============================#\n",
- "#@markdown ← [Run] bandcamp-dl \n",
- "URL = \"\" #@param {type:\"string\"}\n",
- "Download_location = \"\" #@param {type:\"string\"}\n",
- "#@markdown > If the \"Download_location\" field is left empty, downloads will be stored in: /content/downloads/bandcamp\n",
- "\n",
- "#@markdown ---\n",
- "#@markdown ⚙️ Download Options ⚙️ \n",
- "Download_only_if_all_tracks_are_available = False #@param {type:\"boolean\"}\n",
- "Overwrite_tracks_that_already_exist = False #@param {type:\"boolean\"}\n",
- "Skip_grabbing_album_art = False #@param {type:\"boolean\"}\n",
- "Embed_track_lyrics_If_available = False #@param {type:\"boolean\"}\n",
- "Use_album_or_track_Label_as_iTunes_grouping = False #@param {type:\"boolean\"}\n",
- "Embed_album_art_If_available = False #@param {type:\"boolean\"}\n",
- "\n",
- "#@markdown ---\n",
- "#@markdown ⚙️ Advanced Options ⚙️ \n",
- "Enable_verbose_logging = False #@param {type:\"boolean\"}\n",
- "Disable_slugification_of_track_album_and_artist_names = False #@param {type:\"boolean\"}\n",
- "Only_allow_ASCII_characters = False #@param {type:\"boolean\"}\n",
- "Retain_whitespace_in_filenames = False #@param {type:\"boolean\"}\n",
- "Retain_uppercase_letters_in_filenames = False #@param {type:\"boolean\"}\n",
- "Specify_allowed_characters_in_slugify = \"-_~\" #@param {type:\"string\"}\n",
- "Specify_the_character_to_use_in_place_of_spaces = \"-\" #@param {type:\"string\"}\n",
- "#================================================================#\n",
- "\n",
- "\n",
- "import os\n",
- "import IPython\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "\n",
- "default_download_location = \"/content/downloads/bandcamp\"\n",
- "custom_download_location = Download_location\n",
- "\n",
- "if Download_location == \"\":\n",
- " Download_location = \"--base-dir=\" + default_download_location\n",
- " \n",
- " if os.path.exists(default_download_location):\n",
- " pass\n",
- " else:\n",
- " os.makedirs(default_download_location)\n",
- "else:\n",
- " Download_location = \"--base-dir=\" + Download_location\n",
- " \n",
- " if os.path.exists(custom_download_location):\n",
- " pass\n",
- " else:\n",
- " os.makedirs(custom_download_location)\n",
- "\n",
- "if Download_only_if_all_tracks_are_available is True:\n",
- " full_album = \"-f\"\n",
- "else:\n",
- " full_album = \"\"\n",
- "\n",
- "if Overwrite_tracks_that_already_exist is True:\n",
- " overwrite = \"-o\"\n",
- "else:\n",
- " overwrite = \"\"\n",
- "\n",
- "if Skip_grabbing_album_art is True:\n",
- " no_art = \"-n\"\n",
- "else:\n",
- " no_art = \"\"\n",
- "\n",
- "if Embed_track_lyrics_If_available is True:\n",
- " embed_lyrics = \"-e\"\n",
- "else:\n",
- " embed_lyrics = \"\"\n",
- "\n",
- "if Use_album_or_track_Label_as_iTunes_grouping is True:\n",
- " group = \"-g\"\n",
- "else:\n",
- " group = \"\"\n",
- "\n",
- "if Embed_album_art_If_available is True:\n",
- " embed_art = \"-r\"\n",
- "else:\n",
- " embed_art = \"\"\n",
- "\n",
- "if Enable_verbose_logging is True:\n",
- " verbose_logging = \"-d\"\n",
- "else:\n",
- " verbose_logging = \"\"\n",
- "\n",
- "if Disable_slugification_of_track_album_and_artist_names is True:\n",
- " no_slugify = \"-y\"\n",
- "else:\n",
- " no_slugify = \"\"\n",
- "\n",
- "if Only_allow_ASCII_characters is True:\n",
- " ascii_only = \"-a\"\n",
- "else:\n",
- " ascii_only = \"\"\n",
- "\n",
- "if Retain_whitespace_in_filenames is True:\n",
- " keep_spaces = \"-k\"\n",
- "else:\n",
- " keep_spaces = \"\"\n",
- "\n",
- "if Retain_uppercase_letters_in_filenames is True:\n",
- " keep_upper = \"-u\"\n",
- "else:\n",
- " keep_upper = \"\"\n",
- "\n",
- "\n",
- "if URL != \"\":\n",
- " !bandcamp-dl $full_album $overwrite $no_art $embed_lyrics $group $embed_art $verbose_logging $no_slugify $ascii_only $keep_spaces $keep_upper \"$Download_location\" \"$URL\"\n",
- " \n",
- " display(HTML(\"✅ bandcamp-dl has finished performing its task! \"))\n",
- "else:\n",
- " display(HTML(\"❌ The URL field is empty! \"))"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "bZ-Z0cUdz7IL"
- },
- "source": [
- "### FunKiiU "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "yRmvnl090JmZ",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "#@markdown ← [Start] FunKiiU \n",
- "#@markdown FunKiiU is a Python tool for downloading Nintendo Wii U content from Nintendo's CDN. (Click here to check out the GitHub repository)\n",
- "\n",
- "#@markdown ---\n",
- "title_id = \"\" #@param {type:\"string\"}\n",
- "title_key = \"\" #@param {type:\"string\"}\n",
- "#download_path = \"\" #@param {type:\"string\"}\n",
- "run_in_simulated_mode = False #@param{type: \"boolean\"}\n",
- "#@markdown > Download(s) are stored in (/content/install).\n",
- "\n",
- "# @markdown ---\n",
- "automatically_clear_cell_output = False #@param{type: \"boolean\"}\n",
- "# ================================================================ #\n",
- "\n",
- "import os\n",
- "#import subprocess\n",
- "import IPython\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "\n",
- "FunKiiU_clone_path = \"/content/tools/FunKiiU\"\n",
- "FunKiiU_path = \"/content/tools/FunKiiU/FunKiiU.py\"\n",
- "FunKiiU_download_path = \"/content/install\"\n",
- "\n",
- "\n",
- "# Checks whether FunKiiU exists or not.\n",
- "# If FunKiiU does not exist, it will be downloaded/pulled from its github repository.\n",
- "if os.path.exists(FunKiiU_path):\n",
- "\tpass\n",
- "else:\n",
- " os.system(\"git clone https://github.com/llakssz/FunKiiU \" + FunKiiU_clone_path)\n",
- " \n",
- "    # This block here is not actually necessary, as FunKiiU is able to automatically create the \"install\" folder but, well...\n",
- " try:\n",
- " os.makedirs(FunKiiU_download_path, exist_ok=True)\n",
- " except OSError as error:\n",
- " pass\n",
- "\n",
- "\n",
- "clear_output()\n",
- "\n",
- "\n",
- "# Fields checking.\n",
- "# If both fields or one of them are empty, a message will be shown.\n",
- "if title_id == \"\" and title_key == \"\":\n",
- " display(HTML(\"❌ Both fields are empty! \"))\n",
- "elif title_id == \"\" and not title_key == \"\":\n",
- " display(HTML(\"❌ The title_id field is empty! \"))\n",
- "elif not title_id == \"\" and title_key == \"\":\n",
- " display(HTML(\"❌ The title_key field is empty! \"))\n",
- "else:\n",
- " # Passing the -simulate argument to run in simulated mode, if the above checkbox's value is True\n",
- " if run_in_simulated_mode is True:\n",
- " simulate = \" -simulate\"\n",
- " else:\n",
- " simulate = \"\"\n",
- " \n",
- " # The actual piece of command that runs FunKiiU\n",
- " # ----- Downloading by running the command directly as the OS,\n",
- " !python \"/content/tools/FunKiiU/FunKiiU.py\" -title \"$title_id\" -key \"$title_key\" $simulate\n",
- " \n",
- " # ----- Downloading the python way but still as the OS (does not show any output),\n",
- " #os.system(\"python \" + FunKiiU_path + \" -title \" + title_id + \" -key \" + title_key + simulate)\n",
- " \n",
- " # ----- Downloading as subprocess and capture the output.\n",
- " #FunKiiU_process = subprocess.Popen(\"python \" + FunKiiU_path + \" -title \" + title_id + \" -key \" + title_key + simulate, shell = True, stdout = subprocess.PIPE).stdout\n",
- " #FunKiiU = FunKiiU_process.read()\n",
- " #\n",
- " #print(FunKiiU.decode())\n",
- "\n",
- " # Printing different message for regular download mode or simulated mode.\n",
- " if run_in_simulated_mode is True:\n",
- " display(HTML(\"✅ FunKiiU has finished doing the simulation. \"))\n",
- " else:\n",
- " display(HTML(\"✅ Download(s) are stored in: /content/install \"))\n",
- " \n",
- " # Will automatically clear console output if the above checkbox's value is True\n",
- " # With this enabled, user won't be able to see anything, though.\n",
- " if automatically_clear_cell_output is True:\n",
- " clear_output()\n",
- " else:\n",
- " pass"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "0XaXh7Ix0VFu",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Clear \"install\" Folder \n",
- "# ================================================================ #\n",
- "\n",
- "import os\n",
- "import IPython\n",
- "from IPython.display import clear_output\n",
- "\n",
- "FunKiiU_download_path = \"/content/install\"\n",
- "\n",
- "if os.path.exists(FunKiiU_download_path):\n",
- " os.system(\"rm -rf \" + FunKiiU_download_path)\n",
- " os.makedirs(FunKiiU_download_path)\n",
- "elif not os.path.exists(FunKiiU_download_path):\n",
- " os.makedirs(FunKiiU_download_path)\n",
- "\n",
- "clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "s7IbnEdkYBkY",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Remove FunKiiU \n",
- "# ================================================================ #\n",
- "\n",
- "import os\n",
- "import IPython\n",
- "from IPython.display import clear_output\n",
- "\n",
- "FunKiiU_path = \"/content/tools/FunKiiU\"\n",
- "\n",
- "if os.path.exists(FunKiiU_download_path):\n",
- " os.system(\"rm -rf \" + FunKiiU_path)\n",
- "elif not os.path.exists(FunKiiU_path):\n",
- " pass\n",
- "\n",
- "clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "ERBVA5aIERou"
- },
- "source": [
- "### Google Drive CLI "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "Qs0bcnzAFDZq",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Clone] Google Drive CLI \n",
- "# ================================================================ #\n",
- "\n",
- "import os\n",
- "import IPython\n",
- "from IPython.display import clear_output\n",
- "\n",
- "\n",
- "GoogleDriveCLI_path1 = \"/content/tools/GoogleDriveCLI\"\n",
- "GoogleDriveCLI_path2 = GoogleDriveCLI_path1 + \"/gdrive\"\n",
- "\n",
- "\n",
- "def cloneGoogleDriveCLI():\n",
- " if os.path.exists(GoogleDriveCLI_path1 + \"/gdrive\"):\n",
- " pass\n",
- " else:\n",
- " # Big thanks to github user GrowtopiaJaw for providing a pre-compiled binary of Google Drive CLI.\n",
- " # https://github.com/GrowtopiaJaw/gdrive\n",
- " os.system(\"wget https://github.com/GrowtopiaJaw/gdrive/releases/download/v2.1.1/gdrive-linux-amd64\")\n",
- " \n",
- " if not os.path.exists(GoogleDriveCLI_path1):\n",
- " # Big thanks to github user prasmussen for creating such an awesome tool.\n",
- " # https://github.com/prasmussen/gdrive\n",
- " os.makedirs(\"/content/tools/GoogleDriveCLI\")\n",
- "\n",
- " os.system(\"mv /content/gdrive-linux-amd64 \" + GoogleDriveCLI_path1 + \"/gdrive\")\n",
- " os.system(\"chmod +x \" + GoogleDriveCLI_path1 + \"/gdrive\")\n",
- "\n",
- "\n",
- "def initializeGoogleDriveCLI():\n",
- " if not os.path.exists(GoogleDriveCLI_path2):\n",
- " cloneGoogleDriveCLI()\n",
- " initializeGoogleDriveCLI()\n",
- " else:\n",
- " !\"$GoogleDriveCLI_path2\" \"about\"\n",
- " #clear_output(wait = True)\n",
- "\n",
- "\n",
- "initializeGoogleDriveCLI()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "V6fwq8QcF77j",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "#@markdown ← [Start] Google Drive CLI \n",
- "download_id = \"\" #@param{type:\"string\"}\n",
- "#@markdown > Currently only support downloading a publicly shared file (a file, NOT a folder).\n",
- "download_path = \"\" #@param{type:\"string\"}\n",
- "#@markdown > If left empty, the default download path will be used (/content/downloads).\n",
- "# ================================================================ #\n",
- "\n",
- "import os\n",
- "import IPython\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "\n",
- "download_path_default = \"/content/downloads\"\n",
- "GoogleDriveCLI_path1 = \"/content/tools/GoogleDriveCLI\"\n",
- "GoogleDriveCLI_path2 = GoogleDriveCLI_path1 + \"/gdrive\"\n",
- "\n",
- "\n",
- "if not os.path.exists(GoogleDriveCLI_path2):\n",
- " display(HTML(\"❌ Unable to locate the required binary! Make sure you have already run the cell above first! \"))\n",
- "else:\n",
- " if download_id == \"\":\n",
- " display(HTML(\"❌ The download_id field is empty! \"))\n",
- " else:\n",
- " if download_path == \"\":\n",
- " download_path = download_path_default\n",
- " if not os.path.exists(download_path):\n",
- " os.makedirs(download_path)\n",
- " else:\n",
- " pass\n",
- " elif not os.path.exists(download_path):\n",
- " os.makedirs(download_path)\n",
- " else:\n",
- " pass\n",
- " \n",
- " !\"/content/tools/GoogleDriveCLI/gdrive\" download --path \"$download_path\" \"$download_id\"\n",
- " \n",
- " if download_path is download_path_default:\n",
- " display(HTML(\"The download_path field is empty. Download(s) are stored into the default download path (/content/downloads). \"))\n",
- " else:\n",
- " display(HTML(\"Download(s) are stored into (\" + download_path + \"). \"))"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "bEYznPNQ61sm"
- },
- "source": [
- "### JDownloader "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "id": "LP35vcdpw2Vd"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] JDownloader \n",
- "# ================================================================ #\n",
- "\n",
- "import os\n",
- "from os import path as _p\n",
- "\n",
- "NEW_Account = True\n",
- "\n",
- "if not _p.exists(\"/root/.ipython/mixlab.py\"):\n",
- " from shlex import split as _spl\n",
- " from subprocess import run # nosec\n",
- "\n",
- " shellCmd = \"wget -qq https://shirooo39.github.io/MiXLab/resources/mixlab.py \\\n",
- " -O /root/.ipython/mixlab.py\"\n",
- " run(_spl(shellCmd)) # nosec\n",
- "\n",
- "from mixlab import handleJDLogin\n",
- "\n",
- "handleJDLogin(NEW_Account)"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "1mctlRk1TTrc"
- },
- "source": [
- "### MEGA "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "AelSL7BeTcJA",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← MEGA Login \n",
- "# @markdown Please log in to MEGA first (only needed to use the Uploader).
\n",
- "# ================================================================ #\n",
- "\n",
- "from functools import wraps\n",
- "import errno\n",
- "import os\n",
- "import signal\n",
- "import subprocess\n",
- "import shlex\n",
- "\n",
- "class TimeoutError(Exception):\n",
- " pass\n",
- "\n",
- "\n",
- "def timeout(seconds=10, error_message=os.strerror(errno.ETIME)):\n",
- " def decorator(func):\n",
- " def _handle_timeout(signum, frame):\n",
- " raise TimeoutError(error_message)\n",
- "\n",
- " def wrapper(*args, **kwargs):\n",
- " signal.signal(signal.SIGALRM, _handle_timeout)\n",
- " signal.alarm(seconds)\n",
- " try:\n",
- " result = func(*args, **kwargs)\n",
- " finally:\n",
- " signal.alarm(0)\n",
- " return result\n",
- "\n",
- " return wraps(func)(wrapper)\n",
- "\n",
- " return decorator\n",
- "\n",
- "if not os.path.exists(\"/root/.ipython/mixlab.py\"):\n",
- " from subprocess import run\n",
- " from shlex import split\n",
- "\n",
- " shellCmd = \"wget -qq https://raw.githubusercontent.com/shirooo39/MiXLab/master/resources/mixlab.py \\\n",
- " -O /root/.ipython/mixlab.py\"\n",
- " run(split(shellCmd))\n",
- "from mixlab import runSh\n",
- "\n",
- "@timeout(10)\n",
- "def runShT(args):\n",
- " return runSh(args, output=True)\n",
- "\n",
- "# Installing MEGAcmd\n",
- "if not os.path.exists(\"/usr/bin/mega-cmd\"):\n",
- " print(\"Installing MEGA ...\")\n",
- " runSh('sudo apt-get -y update')\n",
- " runSh('sudo apt-get -y install libmms0 libc-ares2 libc6 libcrypto++6 libgcc1 libmediainfo0v5 libpcre3 libpcrecpp0v5 libssl1.1 libstdc++6 libzen0v5 zlib1g apt-transport-https')\n",
- " runSh('sudo curl -sL -o /var/cache/apt/archives/MEGAcmd.deb https://mega.nz/linux/MEGAsync/Debian_9.0/amd64/megacmd-Debian_9.0_amd64.deb', output=True)\n",
- " runSh('sudo dpkg -i /var/cache/apt/archives/MEGAcmd.deb', output=True)\n",
- " print(\"MEGA is installed.\")\n",
- "else:\n",
- " !pkill mega-cmd\n",
- "\n",
- "# Enter MEGA credential\n",
- "USERNAME = \"\" # @param {type:\"string\"}\n",
- "PASSWORD = \"\" # @param {type:\"string\"}\n",
- "if not (USERNAME == \"\" or PASSWORD == \"\"):\n",
- " try:\n",
- " runShT(f\"mega-login {USERNAME} {PASSWORD}\")\n",
- " except TimeoutError:\n",
- " runSh('mega-whoami', output=True)\n",
- "else:\n",
- " print(\"Please enter your MEGA credential.\")"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "p0Wg4seDVseV",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] MEGA Downloader \n",
- "URL = \"\" #@param {type:\"string\"}\n",
- "OUTPUT_PATH = \"\" #@param {type:\"string\"}\n",
- "# @markdown > URL: is the MEGA link you want to download (ex: mega.nz/file/file_link#decryption_key)OUTPUT_PATH: is where to store the downloaded file(s) (ex: /content/downloads/)\n",
- "# ================================================================ #\n",
- "\n",
- "import sys, os, urllib.request\n",
- "import time\n",
- "import subprocess\n",
- "import contextlib\n",
- "from IPython.display import clear_output\n",
- "\n",
- "HOME = os.path.expanduser(\"~\")\n",
- "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
- " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
- " \"MiXLab/master/resources/mixlab.py\"\n",
- " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
- "\n",
- "from mixlab import (\n",
- " runSh,\n",
- " loadingAn,\n",
- ")\n",
- "\n",
- "if not OUTPUT_PATH:\n",
- " os.makedirs(\"downloads\", exist_ok=True)\n",
- " OUTPUT_PATH = \"downloads\"\n",
- "# Installing MEGAcmd\n",
- "if not os.path.exists(\"/usr/bin/mega-cmd\"):\n",
- " loadingAn()\n",
- " print(\"Installing MEGA ...\")\n",
- " runSh('sudo apt-get -y update')\n",
- " runSh('sudo apt-get -y install libmms0 libc-ares2 libc6 libcrypto++6 libgcc1 libmediainfo0v5 libpcre3 libpcrecpp0v5 libssl1.1 libstdc++6 libzen0v5 zlib1g apt-transport-https')\n",
- " runSh('sudo curl -sL -o /var/cache/apt/archives/MEGAcmd.deb https://mega.nz/linux/MEGAsync/Debian_9.0/amd64/megacmd-Debian_9.0_amd64.deb', output=True)\n",
- " runSh('sudo dpkg -i /var/cache/apt/archives/MEGAcmd.deb', output=True)\n",
- " print(\"MEGA is installed.\")\n",
- " clear_output()\n",
- "\n",
- "# Unix, Windows and old Macintosh end-of-line\n",
- "newlines = ['\\n', '\\r\\n', '\\r']\n",
- "\n",
- "def unbuffered(proc, stream='stdout'):\n",
- " stream = getattr(proc, stream)\n",
- " with contextlib.closing(stream):\n",
- " while True:\n",
- " out = []\n",
- " last = stream.read(1)\n",
- " # Don't loop forever\n",
- " if last == '' and proc.poll() is not None:\n",
- " break\n",
- " while last not in newlines:\n",
- " # Don't loop forever\n",
- " if last == '' and proc.poll() is not None:\n",
- " break\n",
- " out.append(last)\n",
- " last = stream.read(1)\n",
- " out = ''.join(out)\n",
- " yield out\n",
- "\n",
- "def transfare():\n",
- " import codecs\n",
- " decoder = codecs.getincrementaldecoder(\"UTF-8\")()\n",
- " cmd = [\"mega-get\", URL, OUTPUT_PATH]\n",
- " proc = subprocess.Popen(\n",
- " cmd,\n",
- " stdout=subprocess.PIPE,\n",
- " stderr=subprocess.STDOUT,\n",
- " # Make all end-of-lines '\\n'\n",
- " universal_newlines=True,\n",
- " )\n",
- " for line in unbuffered(proc):\n",
- " print(line)\n",
- " \n",
- "transfare()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "3GKtYuBbUP-c",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] MEGA Uploader \n",
- "# Simple_torrent = False # @param{type: \"boolean\"}\n",
- "# Peerflix = False # @param{type: \"boolean\"}\n",
- "PATH_TO_FILE = \"\" # @param {type:\"string\"}\n",
- "# @markdown > PATH_TO_FILE is the location of the file you want to upload located at. (ex: /content/downloads/file-to-upload.zip)\n",
- "# ================================================================ #\n",
- "\n",
- "import time\n",
- "import subprocess\n",
- "import contextlib\n",
- "from IPython.display import clear_output\n",
- "\n",
- "# Unix, Windows and old Macintosh end-of-line\n",
- "newlines = ['\\n', '\\r\\n', '\\r']\n",
- "\n",
- "def unbuffered(proc, stream='stdout'):\n",
- " stream = getattr(proc, stream)\n",
- " with contextlib.closing(stream):\n",
- " while True:\n",
- " out = []\n",
- " last = stream.read(1)\n",
- " # Don't loop forever\n",
- " if last == '' and proc.poll() is not None:\n",
- " break\n",
- " while last not in newlines:\n",
- " # Don't loop forever\n",
- " if last == '' and proc.poll() is not None:\n",
- " break\n",
- " out.append(last)\n",
- " last = stream.read(1)\n",
- " out = ''.join(out)\n",
- " yield out\n",
- "\n",
- "def transfare():\n",
- " cmd = \"\"\n",
- " if Simple_torrent:\n",
- " cmd = ['mega-put', 'downloads', '/colab']\n",
- " elif Peerflix:\n",
- " cmd = ['mega-put', 'peerflix', '/colab']\n",
- " else:\n",
- " cmd = ['mega-put', PATH_TO_FILE, '/colab']\n",
- " proc = subprocess.Popen(\n",
- " cmd,\n",
- " stdout=subprocess.PIPE,\n",
- " stderr=subprocess.STDOUT,\n",
- " # Make all end-of-lines '\\n'\n",
- " universal_newlines=True,\n",
- " )\n",
- " for line in unbuffered(proc):\n",
- " clear_output(wait=True)\n",
- " print(line)\n",
- "\n",
- "try:\n",
- " transfare()\n",
- "except FileNotFoundError:\n",
- " print(\"Please log into your MEGA account first!\")"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "dEq11jIB5oee"
- },
- "source": [
- "### pyLoad "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "a08IDWFG5rm1",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] pyLoad \n",
- "# @markdown pyLoad is a free and open-source download manager written in pure python.\n",
- "# @markdown > pyLoad Default CredentialUsername: adminPassword: admin\n",
- "USE_FREE_TOKEN = True\n",
- "TOKEN = \"\"\n",
- "REGION = \"US\"\n",
- "PORT_FORWARD = \"argo_tunnel\"\n",
- "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
- "\n",
- "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
- " PORT_FORWARD = \"argotunnel\"\n",
- "elif Tunneling == \"localhost.run\":\n",
- " PORT_FORWARD = \"localhost\"\n",
- "elif Tunneling == \"ngrok\":\n",
- " PORT_FORWARD = \"ngrok\"\n",
- "# ================================================================ #\n",
- "\n",
- "import os, pathlib, zipfile, re\n",
- "import urllib.request\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "HOME = os.path.expanduser(\"~\")\n",
- "\n",
- "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
- " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
- " \"MiXLab/master/resources/mixlab.py\"\n",
- " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
- "\n",
- "from mixlab import (\n",
- " runSh,\n",
- " loadingAn,\n",
- " PortForward_wrapper,\n",
- " displayUrl,\n",
- " findProcess\n",
- ")\n",
- "\n",
- "\n",
- "clear_output()\n",
- "loadingAn()\n",
- "\n",
- "os.makedirs('downloads', exist_ok=True)\n",
- "os.makedirs('tools/pyload', exist_ok=True)\n",
- "\n",
- "# Downloading latest version of pyload\n",
- "if not os.path.exists(\"tools/pyload/pyload-stable\"):\n",
- " urlF = 'https://github.com/pyload/pyload/archive/stable.zip'\n",
- " conf = 'https://raw.githubusercontent.com/shirooo39/' \\\n",
- " 'MiXLab/master/resources/configurations/pyload/pyload.conf'\n",
- " db = 'https://github.com/shirooo39/MiXLab/raw/master/' \\\n",
- " 'resources/configurations/pyload/files.db'\n",
- " urllib.request.urlretrieve(urlF, 'tools/pyload.zip')\n",
- " urllib.request.urlretrieve(conf, 'tools/pyload/pyload.conf')\n",
- " urllib.request.urlretrieve(db, 'tools/pyload/files.db')\n",
- " with zipfile.ZipFile('tools/pyload.zip', 'r') as zip_ref: zip_ref.extractall('tools/pyload')\n",
- " try:\n",
- " pathlib.Path('tools/pyload.zip').unlink()\n",
- " except FileNotFoundError:\n",
- " pass\n",
- "\n",
- " runSh(\"apt install python-pycurl python-qt4 tesseract-ocr libtesseract-dev\")\n",
- " runSh(\"pip2 install pycrypto pyOpenSSL Jinja2 tesseract tesseract-ocr\")\n",
- "\n",
- "if not findProcess(\"python2.7\", \"pyLoadCore.py\"):\n",
- " runCmd = \"python2.7 /content/tools/pyload/pyload-stable/pyLoadCore.py\" \\\n",
- " \" --configdir=/content/tools/pyload\" \\\n",
- " \" --no-remote\" \\\n",
- " \" --daemon\"\n",
- " runSh(runCmd, shell=True)\n",
- "\n",
- "clear_output()\n",
- "\n",
- "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['pyload', 8000, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/pyLoad.yml\", 4074]).start('pyload')\n",
- "displayUrl(Server, pNamU='pyLoad : ')"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "Ci0HTN9Xyxze"
- },
- "source": [
- "### Pornhub Downloader "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "cD9BrjIoAbF7"
- },
- "source": [
- "> Recommended to use YouTube-DL instead."
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "jRrvPBr5y19U",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Install the Required Module(s) \n",
- "# ================================================================ #\n",
- "\n",
- "#@title ← ឵឵Upgrade FFmpeg to v4.2.2 { vertical-output: true }\n",
- "from IPython.display import clear_output\n",
- "import os, urllib.request\n",
- "HOME = os.path.expanduser(\"~\")\n",
- "pathDoneCMD = f'{HOME}/doneCMD.sh'\n",
- "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
- " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
- " \"MiXLab/master/resources/mixlab.py\"\n",
- " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
- "\n",
- "from mixlab import (\n",
- " loadingAn,\n",
- " textAn,\n",
- ")\n",
- "\n",
- "loadingAn(name=\"lds\")\n",
- "textAn(\"Installing dependencies...\", ty='twg')\n",
- "os.system('pip3 install youtube-dl')\n",
- "os.system('pip3 install prettytable')\n",
- "os.system('pip3 install bs4')\n",
- "os.system('pip3 install requests')\n",
- "%cd /content\n",
- "os.system('git clone https://github.com/mariosemes/PornHub-downloader-python.git')\n",
- "\n",
- "clear_output()\n",
- "print(\"The module(s) has been successfully installed.\")"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "OLj2mj4lzcOp",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] PornHub Downloader \n",
- "pornhub_url = '' #@param {type: \"string\"}\n",
- "option = \"single_download\" #@param [\"single_download\", \"batch_download\",\"add\",\"delete\"]\n",
- "# @markdown > - Single Download link Eg: https://www.pornhub.com/view_video.php?viewkey=ph5d69a2093729e\n",
- "#@markdown > - The batch option will ask you for the full path of your .txt file where you can import multiple URLs at once.Take care that every single URL in the .txt file is in his own row.\n",
- "# ================================================================ #\n",
- "\n",
- "%cd PornHub-downloader-python\n",
- "\n",
- "if option == 'single_download':\n",
- " !python3 phdler.py custom \"$pornhub_url\"\n",
- "\n",
- "elif option == 'add':\n",
- " !python3 phdler.py add \"$pornhub_url\"\n",
- "\n",
- "elif option == 'delete':\n",
- " !python3 phdler.py delete \"$pornhub_url\"\n",
- "\n",
- "else:\n",
- " !python3 phdler.py custom batch "
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "tL-ilxH0N_B9"
- },
- "source": [
- "### Spotify Downloader "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "JTAKDpp9OCEs",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] Spotify Downloader \n",
- "# @markdown Download Spotify playlists from YouTube with album-art and meta-tags
\n",
- "# ================================================================ #\n",
- "\n",
- "import os, uuid, urllib.parse, re\n",
- "import ipywidgets as widgets\n",
- "\n",
- "from glob import glob\n",
- "from urllib.parse import urlparse, parse_qs\n",
- "from IPython.display import HTML, clear_output, YouTubeVideo\n",
- "from IPython.utils.io import ask_yes_no\n",
- "from google.colab import output, files\n",
- "\n",
- "\n",
- "os.makedirs('tools/spotify-downloader/', exist_ok=True)\n",
- "os.makedirs('downloads', exist_ok=True)\n",
- "\n",
- "# # Config files\n",
- "# data = \"\"\"spotify-downloader:\n",
- "# avconv: false\n",
- "# download-only-metadata: false\n",
- "# dry-run: false\n",
- "# file-format: '{artist} - {track_name}'\n",
- "# folder: /home/user/Music\n",
- "# input-ext: .m4a\n",
- "# log-level: INFO\n",
- "# manual: false\n",
- "# music-videos-only: false\n",
- "# no-fallback-metadata: false\n",
- "# no-metadata: false\n",
- "# no-spaces: false\n",
- "# output-ext: .mp3\n",
- "# overwrite: prompt\n",
- "# search-format: '{artist} - {track_name} lyrics'\n",
- "# skip: null\n",
- "# spotify_client_id: 4fe3fecfe5334023a1472516cc99d805\n",
- "# spotify_client_secret: 0f02b7c483c04257984695007a4a8d5c\n",
- "# trim-silence: false\n",
- "# write-successful: null\n",
- "# write-to: null\n",
- "# youtube-api-key: null\n",
- "# \"\"\"\n",
- "# with open('tools/spotify-downloader/config.yml', 'w') as wnow:\n",
- "# wnow.write(data)\n",
- "\n",
- "Links = widgets.Textarea(placeholder='''Link list\n",
- "(one link per line)''')\n",
- "\n",
- "fileFormat = widgets.Text(\n",
- " value='{artist} - {track_name}',\n",
- " placeholder='File name format',\n",
- " description=\"\"\"File Name : file format to save the downloaded track with, each\n",
- " tag is surrounded by curly braces. Possible formats:\n",
- " ['track_name', 'artist', 'album', 'album_artist',\n",
- " 'genre', 'disc_number', 'duration', 'year',\n",
- " 'original_date', 'track_number', 'total_tracks',\n",
- " 'isrc']\"\"\",\n",
- " disabled=False\n",
- ")\n",
- "\n",
- "searchFormat = widgets.Text(\n",
- " value='{artist} - {track_name} lyrics',\n",
- " placeholder='Search format',\n",
- " description=\"\"\"Search Format : search format to search for on YouTube, each tag is\n",
- " surrounded by curly braces. Possible formats:\n",
- " ['track_name', 'artist', 'album', 'album_artist',\n",
- " 'genre', 'disc_number', 'duration', 'year',\n",
- " 'original_date', 'track_number', 'total_tracks',\n",
- " 'isrc']\"\"\",\n",
- " disabled=False\n",
- ")\n",
- "\n",
- "tab = widgets.Tab()\n",
- "\n",
- "LinksType = widgets.RadioButtons(\n",
- " options=['Songs', 'Playlist', 'Album', 'Username', 'Artist'],\n",
- " value='Songs',\n",
- " layout={'width': 'max-content'},\n",
- " description='Links type:',\n",
- " disabled=False,\n",
- ")\n",
- "\n",
- "SavePathYT = widgets.Dropdown(options=[\"/content/downloads\", \"/content\"])\n",
- "\n",
- "Extension = widgets.Select(options=[\"aac\", \"flac\", \"mp3\", \"m4a\", \"opus\", \"vorbis\", \"wav\"], value=\"mp3\")\n",
- "\n",
- "TrimSilence = widgets.ToggleButton(\n",
- " value=False,\n",
- " description='Trim silence',\n",
- " disabled=False,\n",
- " button_style='',\n",
- " tooltip='remove silence from the start of the audio',\n",
- " icon='check'\n",
- ")\n",
- "\n",
- "writeM3u = widgets.ToggleButton(\n",
- " value=False,\n",
- " description='Write .m3u playlist',\n",
- " disabled=False,\n",
- " button_style='',\n",
- " tooltip='''generate an .m3u playlist file with youtube links\n",
- " given a text file containing tracks''',\n",
- " icon='check'\n",
- ")\n",
- "\n",
- "noMeta = widgets.ToggleButton(\n",
- " value=False,\n",
- " description='No metadata',\n",
- " disabled=False,\n",
- " button_style='',\n",
- " tooltip='do not embed metadata in tracks',\n",
- " icon='check'\n",
- ")\n",
- "\n",
- "nf = widgets.ToggleButton(\n",
- " value=False,\n",
- " description='No fallback metadata',\n",
- " disabled=False,\n",
- " button_style='',\n",
- " tooltip='''do not use YouTube as fallback for metadata if track\n",
- " not found on Spotify''',\n",
- " icon='check'\n",
- ")\n",
- "\n",
- "dryRun = widgets.ToggleButton(\n",
- " value=False,\n",
- " description='Dry run',\n",
- " disabled=False,\n",
- " button_style='',\n",
- " tooltip=''' show only track title and YouTube URL, and then skip\n",
- " to the next track (if any)''',\n",
- " icon='check'\n",
- ")\n",
- "\n",
- "MusicVidOnly = widgets.ToggleButton(\n",
- " value=False,\n",
- " description='Music Videos Only',\n",
- " disabled=False,\n",
- " button_style='',\n",
- " tooltip='''search only for music videos on Youtube (works only\n",
- " when YouTube API key is set''',\n",
- " icon='check'\n",
- ")\n",
- "\n",
- "NoSpaces = widgets.ToggleButton(\n",
- " value=False,\n",
- " description='No Spaces',\n",
- " disabled=False,\n",
- " button_style='',\n",
- " tooltip='''replace spaces with underscores in file names''',\n",
- " icon='check'\n",
- ")\n",
- "\n",
- "manual = widgets.ToggleButton(\n",
- " value=False,\n",
- " description='manually',\n",
- " disabled=False,\n",
- " button_style='',\n",
- " tooltip='''choose the track to download manually from a list of\n",
- " matching tracks''',\n",
- " icon='check'\n",
- ")\n",
- "\n",
- "nr = widgets.ToggleButton(\n",
- " value=False,\n",
- " description='Keep original',\n",
- " disabled=False,\n",
- " button_style='',\n",
- " tooltip='''do not remove the original file after conversion''',\n",
- " icon='check'\n",
- ")\n",
- "\n",
- "ExtraArg = widgets.Text(placeholder=\"Extra Arguments\")\n",
- "\n",
- "class MakeButton(object):\n",
- " def __init__(self, title, callback, style):\n",
- " self._title = title\n",
- " self._callback = callback\n",
- " self._style = style\n",
- " def _repr_html_(self):\n",
- " callback_id = 'button-' + str(uuid.uuid4())\n",
- " output.register_callback(callback_id, self._callback)\n",
- " if self._style != \"\":\n",
- " style_html = \"p-Widget jupyter-widgets jupyter-button widget-button mod-\" + self._style\n",
- " else:\n",
- " style_html = \"p-Widget jupyter-widgets jupyter-button widget-button\"\n",
- " template = \"\"\"{title} \n",
- " \"\"\"\n",
- " html = template.format(title=self._title, callback_id=callback_id, style_html=style_html)\n",
- " return html\n",
- " \n",
- "def MakeLabel(description, button_style):\n",
- " return widgets.Button(description=description, disabled=True, button_style=button_style)\n",
- "\n",
- "def RefreshPathYT():\n",
- " if os.path.exists(\"/content/drive/\"):\n",
- " if os.path.exists(\"/content/drive/Shared drives/\"):\n",
- " SavePathYT.options = [\"/content\", \"/content/downloads\", \"/content/drive/My Drive\"] + glob(\"/content/drive/My Drive/*/\") + glob(\"/content/drive/Shared drives/*/\")\n",
- " else:\n",
- " SavePathYT.options = [\"/content\", \"/content/downloads\", \"/content/drive/My Drive\"] + glob(\"/content/drive/My Drive/*/\")\n",
- " else:\n",
- " SavePathYT.options = [\"/content/downloads\", \"/content\"]\n",
- "\n",
- "\n",
- "def ShowYT():\n",
- " clear_output(wait=True)\n",
- " RefreshPathYT()\n",
- " mainTab = widgets.Box([widgets.HBox([widgets.VBox([widgets.HTML(\"Link: \"), Links,\n",
- " LinksType, searchFormat, fileFormat, widgets.HBox([TrimSilence, writeM3u, noMeta]), widgets.HBox([nf, dryRun, MusicVidOnly]),widgets.HBox([NoSpaces, manual, nr])]),\n",
- " widgets.VBox([widgets.HTML(\"Extension: \"), Extension,\n",
- " widgets.HTML(\"Extra Arguments: \"), ExtraArg])])])\n",
- " tab.children = [mainTab]\n",
- " tab.set_title(0, 'spotify-downloader')\n",
- " display(tab)\n",
- " display(HTML(\"Save Location: \"), SavePathYT, MakeButton(\"Refresh\", RefreshPathYT, \"\"))\n",
- " if not os.path.exists(\"/content/drive/\"):\n",
- " display(HTML(\"*If you want to save in Google Drive please run the cell below.\"))\n",
- " display(HTML(\" \"), MakeButton(\"Download\", DownloadYT, \"info\"))\n",
- "\n",
- "def DownloadYT():\n",
- " if Links.value.strip():\n",
- " Count = 0\n",
- " Total = str(len(Links.value.splitlines()))\n",
- " if writeM3u.value:\n",
- " M3u = '--write-m3u'\n",
- " else:\n",
- " M3u = ''\n",
- " if TrimSilence.value:\n",
- " trmS = '--trim-silence'\n",
- " else:\n",
- " trmS = ''\n",
- " if noMeta.value:\n",
- " noM = '--no-metadata'\n",
- " else:\n",
- " noM = ''\n",
- " if nf.value:\n",
- " nfv = '--no-fallback-metadata'\n",
- " else:\n",
- " nfv = ''\n",
- " if dryRun.value:\n",
- " drR = '--dry-run'\n",
- " else:\n",
- " drR = ''\n",
- " if MusicVidOnly.value:\n",
- " MsV = '--music-videos-only'\n",
- " else:\n",
- " MsV = ''\n",
- " if NoSpaces.value:\n",
- " NoS = '--no-spaces'\n",
- " else:\n",
- " NoS = ''\n",
- " if manual.value:\n",
- " mal = '--manual'\n",
- " else:\n",
- " mal = ''\n",
- " if nr.value:\n",
- " nro = '--no-remove-original' \n",
- " else:\n",
- " nro = ''\n",
- " if not searchFormat.value == '{artist} - {track_name} lyrics':\n",
- " seFor = f'--search-format \"{searchFormat.value}\"'\n",
- " else:\n",
- " seFor = ''\n",
- " if not fileFormat.value == '{artist} - {track_name}':\n",
- " fiFor = f'--file-format \"{fileFormat.value}\"'\n",
- " else:\n",
- " fiFor = ''\n",
- " \n",
- " if not LinksType.value == 'Songs':\n",
- " with open('tools/spotify-downloader/finish.txt', 'a+') as master:\n",
- " for Link in Links.value.splitlines():\n",
- " if LinksType.value == 'Playlist':\n",
- " outFileName = !spotdl --playlist $Link\n",
- " elif LinksType.value == 'Album':\n",
- " outFileName = !spotdl --album $Link\n",
- " elif LinksType.value == 'Username':\n",
- " outFileName = !spotdl -u $Link\n",
- " elif LinksType.value == 'Artist':\n",
- " outFileName = !spotdl --all-albums $Link\n",
- " filename = re.search(r\"to\\s(.+\\.txt)\", outFileName[-1]).group(1)\n",
- " with open(filename, 'r') as r:\n",
- " master.write(r.read())\n",
- " else:\n",
- " for Link in Links.value.splitlines():\n",
- " with open('tools/spotify-downloader/finish.txt', 'w') as master:\n",
- " master.write(Link)\n",
- " # Extra Arguments\n",
- " \n",
- " extraargC = ExtraArg.value\n",
- " cmd = r\"spotdl -l 'tools/spotify-downloader/finish.txt' \" \\\n",
- " fr\"-f {SavePathYT.value} \" \\\n",
- " fr\"-o .{Extension.value} \" \\\n",
- " f\"--overwrite skip \" \\\n",
- " f\"{seFor} {fiFor} \" \\\n",
- " f\"{M3u} {trmS} {noM} {nfv} {drR} {MsV} {NoS} {mal} {nro}\" \n",
- " !$cmd\n",
- " ShowYT()\n",
- "\n",
- "if not os.path.isfile(\"/usr/local/bin/spotdl\"):\n",
- " get_ipython().system_raw(\"pip3 install spotdl && apt-get install ffmpeg\")\n",
- "\n",
- "ShowYT()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "QOyo5zf4suod"
- },
- "source": [
- "### YouTube-DL "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "mYCRR-yWSuyi",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] YouTube-DL \n",
- "Archive = False\n",
- "# ================================================================ #\n",
- "\n",
- "import os, uuid, urllib.parse\n",
- "import ipywidgets as widgets\n",
- "\n",
- "from glob import glob\n",
- "from urllib.parse import urlparse, parse_qs\n",
- "from IPython.display import HTML, clear_output, YouTubeVideo\n",
- "from IPython.utils.io import ask_yes_no\n",
- "from google.colab import output, files\n",
- "\n",
- "Links = widgets.Textarea(placeholder='''Video/Playlist Link\n",
- "(one link per line)''')\n",
- "\n",
- "VideoQ = widgets.Dropdown(options=[\"Best Quality (VP9 upto 4K)\", \"Best Compatibility (H.264 upto 1080p)\"])\n",
- "\n",
- "AudioQ = widgets.Dropdown(options=[\"Best Quality (Opus)\", \"Best Compatibility (M4A)\"])\n",
- "\n",
- "Subtitle = widgets.ToggleButton(value=True, description=\"Subtitle\", button_style=\"info\", tooltip=\"Subtitle\")\n",
- "\n",
- "SavePathYT = widgets.Dropdown(options=[\"/content\", \"/content/downloads\"])\n",
- "\n",
- "AudioOnly = widgets.ToggleButton(value=False, description=\"Audio Only\", button_style=\"\", tooltip=\"Audio Only\")\n",
- "\n",
- "Resolution = widgets.Select(options=[\"Highest\", \"4K\", \"1440p\", \"1080p\", \"720p\", \"480p\", \"360p\", \"240p\", \"144p\"], value=\"Highest\")\n",
- "\n",
- "Extension = widgets.Select(options=[\"mkv\", \"webm\"], value=\"mkv\")\n",
- "\n",
- "UsernameYT = widgets.Text(placeholder=\"Username\")\n",
- "\n",
- "PasswordYT = widgets.Text(placeholder=\"Password\")\n",
- "\n",
- "SecAuth = widgets.Text(placeholder=\"2nd Factor Authentication\")\n",
- "\n",
- "VideoPW = widgets.Text(placeholder=\"Video Password\")\n",
- "\n",
- "GEOBypass = widgets.Dropdown(options=[\"Disable\", \"Hide\", \"AD\", \"AE\", \"AF\", \"AG\", \"AI\", \"AL\", \"AM\", \"AO\", \"AQ\", \"AR\", \"AS\", \"AT\", \"AU\", \"AW\", \"AX\", \"AZ\", \"BA\", \"BB\", \"BD\", \"BE\", \"BF\", \"BG\", \"BH\", \"BI\", \"BJ\", \"BL\", \"BM\", \"BN\", \"BO\", \"BQ\", \"BR\", \"BS\", \"BT\", \"BV\", \"BW\", \"BY\", \"BZ\", \"CA\", \"CC\", \"CD\", \"CF\", \"CG\", \"CH\", \"CI\", \"CK\", \"CL\", \"CM\", \"CN\", \"CO\", \"CR\", \"CU\", \"CV\", \"CW\", \"CX\", \"CY\", \"CZ\", \"DE\", \"DJ\", \"DK\", \"DM\", \"DO\", \"DZ\", \"EC\", \"EE\", \"EG\", \"EH\", \"ER\", \"ES\", \"ET\", \"FI\", \"FJ\", \"FK\", \"FM\", \"FO\", \"FR\", \"GA\", \"GB\", \"GD\", \"GE\", \"GF\", \"GG\", \"GH\", \"GI\", \"GL\", \"GM\", \"GN\", \"GP\", \"GQ\", \"GR\", \"GS\", \"GT\", \"GU\", \"GW\", \"GY\", \"HK\", \"HM\", \"HN\", \"HR\", \"HT\", \"HU\", \"ID\", \"IE\", \"IL\", \"IM\", \"IN\", \"IO\", \"IQ\", \"IR\", \"IS\", \"IT\", \"JE\", \"JM\", \"JO\", \"JP\", \"KE\", \"KG\", \"KH\", \"KI\", \"KM\", \"KN\", \"KP\", \"KR\", \"KW\", \"KY\", \"KZ\", \"LA\", \"LB\", \"LC\", \"LI\", \"LK\", \"LR\", \"LS\", \"LT\", \"LU\", \"LV\", \"LY\", \"MA\", \"MC\", \"MD\", \"ME\", \"MF\", \"MG\", \"MH\", \"MK\", \"ML\", \"MM\", \"MN\", \"MO\", \"MP\", \"MQ\", \"MR\", \"MS\", \"MT\", \"MU\", \"MV\", \"MW\", \"MX\", \"MY\", \"MZ\", \"NA\", \"NC\", \"NE\", \"NF\", \"NG\", \"NI\", \"NL\", \"NO\", \"NP\", \"NR\", \"NU\", \"NZ\", \"OM\", \"PA\", \"PE\", \"PF\", \"PG\", \"PH\", \"PK\", \"PL\", \"PM\", \"PN\", \"PR\", \"PS\", \"PT\", \"PW\", \"PY\", \"QA\", \"RE\", \"RO\", \"RS\", \"RU\", \"RW\", \"SA\", \"SB\", \"SC\", \"SD\", \"SE\", \"SG\", \"SH\", \"SI\", \"SJ\", \"SK\", \"SL\", \"SM\", \"SN\", \"SO\", \"SR\", \"SS\", \"ST\", \"SV\", \"SX\", \"SY\", \"SZ\", \"TC\", \"TD\", \"TF\", \"TG\", \"TH\", \"TJ\", \"TK\", \"TL\", \"TM\", \"TN\", \"TO\", \"TR\", \"TT\", \"TV\", \"TW\", \"TZ\", \"UA\", \"UG\", \"UM\", \"US\", \"UY\", \"UZ\", \"VA\", \"VC\", \"VE\", \"VG\", \"VI\", \"VN\", \"VU\", \"WF\", \"WS\", \"YE\", \"YT\", \"ZA\", \"ZM\", \"ZW\"])\n",
- "\n",
- "ProxyYT = widgets.Text(placeholder=\"Proxy URL\")\n",
- "\n",
- "MinSleep = widgets.BoundedIntText(value=0, min=0, max=300, step=1, description=\"Min:\")\n",
- "\n",
- "MaxSleep = widgets.BoundedIntText(value=0, min=0, max=300, step=1, description=\"Max:\")\n",
- "\n",
- "ExtraArg = widgets.Text(placeholder=\"Extra Arguments\")\n",
- "\n",
- "class MakeButton(object):\n",
- " def __init__(self, title, callback, style):\n",
- " self._title = title\n",
- " self._callback = callback\n",
- " self._style = style\n",
- " def _repr_html_(self):\n",
- " callback_id = 'button-' + str(uuid.uuid4())\n",
- " output.register_callback(callback_id, self._callback)\n",
- " if self._style != \"\":\n",
- " style_html = \"p-Widget jupyter-widgets jupyter-button widget-button mod-\" + self._style\n",
- " else:\n",
- " style_html = \"p-Widget jupyter-widgets jupyter-button widget-button\"\n",
- " template = \"\"\"{title} \n",
- " \"\"\"\n",
- " html = template.format(title=self._title, callback_id=callback_id, style_html=style_html)\n",
- " return html\n",
- " \n",
- "def MakeLabel(description, button_style):\n",
- " return widgets.Button(description=description, disabled=True, button_style=button_style)\n",
- "\n",
- "def upload_archive():\n",
- " if ask_yes_no(\"Do you already have an archive file? (y/n)\", default=\"\", interrupt=\"\"):\n",
- " try:\n",
- " display(HTML(\"Please upload an archive from your computer. \"))\n",
- " UploadConfig = files.upload().keys()\n",
- " clear_output(wait=True)\n",
- " if len(UploadConfig) == 0:\n",
- " return display(HTML(\"File upload has been cancelled during upload file. \"))\n",
- " elif len(UploadConfig) == 1:\n",
- " for fn in UploadConfig:\n",
- " if os.path.isfile(\"/content/\" + fn):\n",
- " get_ipython().system_raw(\"mv -f \" + \"\\\"\" + fn + \"\\\" /root/.youtube-dl.txt && chmod 666 /root/.youtube-dl.txt\")\n",
- " AudioOnly.observe(AudioOnlyChange)\n",
- " Subtitle.observe(SubtitleChange)\n",
- " AudioQ.observe(AudioQChange)\n",
- " ShowYT()\n",
- " else:\n",
- " return display(HTML(\"File upload has been failed during upload file. \"))\n",
- " else:\n",
- " for fn in UploadConfig:\n",
- " get_ipython().system_raw(\"rm -f \" + \"\\\"\" + fn + \"\\\"\")\n",
- " return display(HTML(\"Please uploading only one file at a time. \"))\n",
- " except:\n",
- " clear_output(wait=True)\n",
- " return display(HTML(\"Error occurred during upload file. \"))\n",
- " else:\n",
- " get_ipython().system_raw(\"touch '/root/.youtube-dl.txt'\")\n",
- " AudioOnly.observe(AudioOnlyChange)\n",
- " Subtitle.observe(SubtitleChange)\n",
- " AudioQ.observe(AudioQChange)\n",
- " ShowYT()\n",
- "\n",
- "def RefreshPathYT():\n",
- " if os.path.exists(\"/content/drive/\"):\n",
- " if os.path.exists(\"/content/drive/Shared drives/\"):\n",
- " SavePathYT.options = [\"/content\", \"/content/downloads\", \"/content/drive/My Drive\"] + glob(\"/content/drive/My Drive/*/\") + glob(\"/content/drive/Shared drives/*/\")\n",
- " else:\n",
- " SavePathYT.options = [\"/content\", \"/content/downloads\", \"/content/drive/My Drive\"] + glob(\"/content/drive/My Drive/*/\")\n",
- " else:\n",
- " SavePathYT.options = [\"/content\", \"/content/downloads\"]\n",
- "\n",
- "def AudioOnlyChange(change):\n",
- " if change[\"type\"] == \"change\" and change[\"new\"]:\n",
- " VideoQ.disabled = True\n",
- " Subtitle.disabled = True\n",
- " if Subtitle.value:\n",
- " Subtitle.button_style = \"info\"\n",
- " else:\n",
- " Subtitle.button_style = \"\"\n",
- " Resolution.disabled = True\n",
- " Extension.options = [\"best\", \"aac\", \"flac\", \"mp3\", \"m4a\", \"opus\", \"vorbis\", \"wav\"]\n",
- " Extension.value = \"best\"\n",
- " AudioOnly.button_style = \"info\"\n",
- " elif change[\"type\"] == \"change\" and change[\"new\"] == False:\n",
- " VideoQ.disabled = False\n",
- " Subtitle.disabled = False\n",
- " if Subtitle.value:\n",
- " Subtitle.button_style = \"info\"\n",
- " else:\n",
- " Subtitle.button_style = \"\"\n",
- " Resolution.disabled = False\n",
- " if AudioQ.value == \"Best Quality (Opus)\":\n",
- " Extension.options = [\"mkv\", \"webm\"]\n",
- " else:\n",
- " Extension.options = [\"mkv\", \"mp4\", \"webm\"]\n",
- " Extension.value = \"mkv\"\n",
- " AudioOnly.button_style = \"\"\n",
- "\n",
- "def SubtitleChange(change):\n",
- " if change[\"type\"] == \"change\" and change[\"new\"]:\n",
- " Subtitle.button_style = \"info\"\n",
- " elif change[\"type\"] == \"change\" and change[\"new\"] == False:\n",
- " Subtitle.button_style = \"\"\n",
- "\n",
- "def AudioQChange(change):\n",
- " if change[\"type\"] == \"change\" and change[\"new\"] == \"Best Quality (Opus)\":\n",
- " Extension.options = [\"mkv\", \"webm\"]\n",
- " Extension.value = \"mkv\"\n",
- " elif change[\"type\"] == \"change\" and change[\"new\"] == \"Best Compatibility (M4A)\":\n",
- " Extension.options = [\"mkv\", \"mp4\", \"webm\"]\n",
- " Extension.value = \"mkv\"\n",
- "\n",
- "def ShowYT():\n",
- " clear_output(wait=True)\n",
- " RefreshPathYT()\n",
- " display(widgets.HBox([widgets.VBox([widgets.HTML(\"Link: \"), Links,\n",
- " widgets.HTML(\"For website that require an account: \"), UsernameYT, PasswordYT, SecAuth, VideoPW,\n",
- " widgets.HTML(\"GEO Bypass Country: \"), GEOBypass,\n",
- " widgets.HTML(\"Proxy: \"), ProxyYT,\n",
- " widgets.HTML(\"Sleep Interval (second): \"), MinSleep, MaxSleep]),\n",
- " widgets.VBox([widgets.HTML(\"Video Quality: \"), VideoQ, widgets.HTML(\"Resolution: \"), Resolution,\n",
- " widgets.HTML(\"Audio Quality: \"), AudioQ, widgets.HTML(\"Extension: \"), Extension,\n",
- " widgets.HTML(\"Extra Options: \"), widgets.HBox([Subtitle, AudioOnly]),\n",
- " widgets.HTML(\"Extra Arguments: \"), ExtraArg])]), HTML(\"Save Location: \"),\n",
- " SavePathYT, MakeButton(\"Refresh\", RefreshPathYT, \"\"))\n",
- " if not os.path.exists(\"/content/drive/\"):\n",
- "# display(HTML(\"*If you want to save in Google Drive please run the cell below.\"))\n",
- " display(HTML(\" \"), MakeButton(\"Download\", DownloadYT, \"info\"))\n",
- "\n",
- "def DownloadYT():\n",
- " if Links.value.strip():\n",
- " Count = 0\n",
- " Total = str(len(Links.value.splitlines()))\n",
- " # Account Check\n",
- " if UsernameYT.value.strip() and PasswordYT.value.strip():\n",
- " accountC = \"--username \\\"\" + UsernameYT.value + \"\\\" --password \\\"\" + PasswordYT.value + \"\\\"\"\n",
- " else:\n",
- " accountC = \"\"\n",
- " if SecAuth.value.strip():\n",
- " secauthC = \"-2 \" + SecAuth.value\n",
- " else:\n",
- " secauthC = \"\"\n",
- " if VideoPW.value.strip():\n",
- " videopwC = \"--video-password \" + VideoPW.value\n",
- " else:\n",
- " videopwC = \"\"\n",
- " # Proxy\n",
- " if ProxyYT.value.strip():\n",
- " proxyytC = \"--proxy \" + ProxyYT.value\n",
- " else:\n",
- " proxyytC = \"\"\n",
- " # GEO Bypass\n",
- " if GEOBypass.value == \"Disable\":\n",
- " geobypass = \"\"\n",
- " elif GEOBypass.value == \"Hide\":\n",
- " geobypass = \"--geo-bypass\"\n",
- " else:\n",
- " geobypass = \"--geo-bypass-country \" + GEOBypass.value\n",
- " # Video Quality\n",
- " if VideoQ.value == \"Best Quality (VP9 upto 4K)\":\n",
- " videoqC = \"webm\"\n",
- " else:\n",
- " videoqC = \"mp4\"\n",
- " # Audio Quality\n",
- " if AudioQ.value == \"Best Quality (Opus)\":\n",
- " audioqC = \"webm\"\n",
- " else:\n",
- " audioqC = \"m4a\"\n",
- " # Audio Only Check\n",
- " if AudioOnly.value:\n",
- " subtitleC = \"\"\n",
- " thumbnailC = \"\"\n",
- " extC = \"-x --audio-quality 0 --audio-format \" + Extension.value\n",
- " codecC = \"bestaudio[ext=\" + audioqC + \"]/bestaudio/best\"\n",
- " else:\n",
- " if Subtitle.value:\n",
- " subtitleC = \"--all-subs --convert-subs srt --embed-subs\"\n",
- " else:\n",
- " subtitleC = \"\"\n",
- " if Extension.value == \"mp4\":\n",
- " thumbnailC = \"--embed-thumbnail\"\n",
- " else:\n",
- " thumbnailC = \"\"\n",
- " extC = \"--merge-output-format \" + Extension.value\n",
- " if Resolution.value == \"Highest\":\n",
- " codecC = \"bestvideo[ext=\" + videoqC + \"]+bestaudio[ext=\" + audioqC + \"]/bestvideo+bestaudio/best\"\n",
- " else:\n",
- " codecC = \"bestvideo[ext=\" + videoqC + \",height<=\" + Resolution.value.replace(\"4K\", \"2160\").replace(\"p\", \"\") + \"]+bestaudio[ext=\" + audioqC + \"]/bestvideo[height<=\" + Resolution.value.replace(\"4K\", \"2160\").replace(\"p\", \"\") + \"]+bestaudio/bestvideo+bestaudio/best\"\n",
- " # Archive\n",
- " if os.path.isfile(\"/root/.youtube-dl.txt\"):\n",
- " archiveC = \"--download-archive \\\"/root/.youtube-dl.txt\\\"\"\n",
- " else:\n",
- " archiveC = \"\"\n",
- " # Sleep Interval\n",
- " if MinSleep.value > 0 and MaxSleep.value > 0:\n",
- " minsleepC = \"--min-sleep-interval \" + MinSleep.value\n",
- " maxsleepC = \"--max-sleep-interval \" + MaxSleep.value\n",
- " else:\n",
- " minsleepC = \"\"\n",
- " maxsleepC = \"\"\n",
- " # Extra Arguments\n",
- " extraargC = ExtraArg.value\n",
- " for Link in Links.value.splitlines():\n",
- " clear_output(wait=True)\n",
- " Count += 1\n",
- " display(HTML(\"Processing link \" + str(Count) + \" out of \" + Total + \" \"))\n",
- " if \"youtube.com\" in Link or \"youtu.be\" in Link:\n",
- " display(HTML(\"Currently downloading... \"), YouTubeVideo(Link, width=640, height=360), HTML(\" \"))\n",
- " else:\n",
- " display(HTML(\" \"))\n",
- " if (\"youtube.com\" in Link or \"youtu.be\" in Link) and \"list=\" in Link:\n",
- " !youtube-dl -i --no-warnings --yes-playlist --add-metadata $accountC $secauthC $videopwC $minsleepC $maxsleepC $geobypass $proxyytC $extC $thumbnailC $subtitleC $archiveC $extraargC -f \"$codecC\" -o \"/root/.YouTube-DL/%(playlist)s/%(playlist_index)s - %(title)s.%(ext)s\" \"$Link\"\n",
- " else:\n",
- " !youtube-dl -i --no-warnings --yes-playlist --add-metadata $accountC $secauthC $videopwC $minsleepC $maxsleepC $geobypass $proxyytC $extC $thumbnailC $subtitleC $archiveC $extraargC -f \"$codecC\" -o \"/root/.YouTube-DL/%(title)s.%(ext)s\" \"$Link\"\n",
- " if not os.path.exists(SavePathYT.value):\n",
- " get_ipython().system_raw(\"mkdir -p -m 666 \" + SavePathYT.value)\n",
- " get_ipython().system_raw(\"mv /root/.YouTube-DL/* '\" + SavePathYT.value + \"/'\")\n",
- " # Archive Download\n",
- " if os.path.isfile(\"/root/.youtube-dl.txt\"):\n",
- " files.download(\"/root/.youtube-dl.txt\")\n",
- " ShowYT()\n",
- "\n",
- "if not os.path.isfile(\"/usr/local/bin/youtube-dl\"):\n",
- " get_ipython().system_raw(\"rm -rf /content/sample_data/ && mkdir -p -m 666 /root/.YouTube-DL/ && apt-get install atomicparsley && curl -L https://yt-dl.org/downloads/latest/youtube-dl -o /usr/local/bin/youtube-dl && chmod a+rx /usr/local/bin/youtube-dl\")\n",
- "if Archive:\n",
- " upload_archive()\n",
- "else:\n",
- " AudioOnly.observe(AudioOnlyChange)\n",
- " Subtitle.observe(SubtitleChange)\n",
- " AudioQ.observe(AudioQChange)\n",
- " ShowYT()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "FejGUkxPhDmE"
- },
- "source": [
- "## ✧ *P2P-File Downloader* ✧ "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "_GVSJ9jdn6lW"
- },
- "source": [
- "### Deluge "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "z1IqkfEXn-eu",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] Deluge \n",
- "USE_FREE_TOKEN = True\n",
- "TOKEN = \"\"\n",
- "REGION = \"US\"\n",
- "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
- "\n",
- "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
- " PORT_FORWARD = \"argotunnel\"\n",
- "elif Tunneling == \"localhost.run\":\n",
- " PORT_FORWARD = \"localhost\"\n",
- "elif Tunneling == \"ngrok\":\n",
- " PORT_FORWARD = \"ngrok\"\n",
- "# ================================================================ #\n",
- "\n",
- "import os, urllib.request, pathlib\n",
- "from IPython.display import clear_output\n",
- "\n",
- "HOME = os.path.expanduser(\"~\")\n",
- "\n",
- "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
- " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
- " \"MiXLab/master/resources/mixlab.py\"\n",
- " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
- "\n",
- "from mixlab import (\n",
- " runSh,\n",
- " findProcess,\n",
- " loadingAn,\n",
- " displayUrl,\n",
- " PortForward_wrapper\n",
- ")\n",
- "\n",
- "clear_output()\n",
- "loadingAn()\n",
- "\n",
- "pathlib.Path('downloads').mkdir(exist_ok=True)\n",
- "pathlib.Path(f\"{HOME}/.config/deluge/\").mkdir(parents=True, exist_ok=True)\n",
- "\n",
- "if not (findProcess(\"/usr/bin/python\", \"deluged\") or findProcess(\"/usr/bin/python\", \"deluge-web\")):\n",
- " runSh('sudo apt install -y deluged deluge-console deluge-webui')\n",
- " runSh(\n",
- " f\"wget -qq https://raw.githubusercontent.com/shirooo39/MiXLab/master/resources/configurations/deluge/core.conf \\\n",
- " -O {HOME}/.config/deluge/core.conf\"\n",
- " )\n",
- " runSh(\n",
- " f\"wget -qq https://raw.githubusercontent.com/shirooo39/MiXLab/master/resources/configurations/deluge/web.conf \\\n",
- " -O {HOME}/.config/deluge/web.conf\"\n",
- " )\n",
- " runSh('deluged &> /dev/null &', shell=True)\n",
- " runSh('deluge-web --fork', shell=True)\n",
- " runSh(\"\"\"sed -i 's/if s.hexdigest() == config\\[\"pwd_sha1\"\\]:/if True:/' /usr/lib/python2.7/dist-packages/deluge/ui/web/auth.py\"\"\")\n",
- " runSh(\"sed -i 's/onShow:function(){this.passwordField.focus(.*)}/onShow:function(){this.onLogin();}/' /usr/lib/python2.7/dist-packages/deluge/ui/web/js/deluge-all.js\")\n",
- "\n",
- "clear_output()\n",
- "\n",
- "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['deluge', 8112, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/deluge.yml\", 4042]).start('deluge')\n",
- "displayUrl(Server, pNamU='Deluge : ')"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "OJBVlUw-kKyt"
- },
- "source": [
- "### libtorrent "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "NZgOIKJ3kOL9",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Install libtorrent \n",
- "# ================================================================ #\n",
- "\n",
- "from IPython.display import clear_output\n",
- "\n",
- "!apt install python3-libtorrent\n",
- "\n",
- "import libtorrent as lt\n",
- "\n",
- "ses = lt.session()\n",
- "ses.listen_on(6881, 6891)\n",
- "downloads = []\n",
- "\n",
- "clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "wOroL1PJns93",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Add Torrent from File \n",
- "# @markdown
How to change the download location: 1. Double click the cell to show its code2. Find this line: \"save_path\": \"/content/downloads\",3. Change /content/downloads to your path \n",
- "# @markdown > You can run this cell as many time as you want.\n",
- "# ================================================================ #\n",
- "\n",
- "import os\n",
- "from IPython.display import clear_output\n",
- "from google.colab import files\n",
- "\n",
- "if os.path.exists(\"/content/downloads\"):\n",
- " pass\n",
- "else:\n",
- " os.mkdir(\"/content/downloads\")\n",
- "\n",
- "source = files.upload()\n",
- "params = {\n",
- " \"save_path\": \"/content/downloads\",\n",
- " \"ti\": lt.torrent_info(list(source.keys())[0]),\n",
- "}\n",
- "downloads.append(ses.add_torrent(params))\n",
- "\n",
- "clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "nOQBAsoenwLb",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Add Torrent from Magnet Link \n",
- "# @markdown
How to change the download location: 1. Double click the cell to show its code2. Find this line: params = {\"save_path\": \"/content/downloads\"}3. Change /content/downloads to your path \n",
- "# @markdown > You can run this cell as many time as you want.\n",
- "# ================================================================ #\n",
- "\n",
- "import os\n",
- "from IPython.display import clear_output\n",
- "\n",
- "if os.path.exists(\"/content/downloads\"):\n",
- " pass\n",
- "else:\n",
- " os.mkdir(\"/content/downloads\")\n",
- "\n",
- "params = {\"save_path\": \"/content/downloads\"}\n",
- "\n",
- "while True:\n",
- " magnet_link = input(\"Paste the magnet link here or type exit to stop:\\n\")\n",
- " if magnet_link.lower() == \"exit\":\n",
- " break\n",
- " downloads.append(\n",
- " lt.add_magnet_uri(ses, magnet_link, params)\n",
- " )\n",
- "\n",
- "clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "vY4-WX3FmMBB",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] libtorrent \n",
- "# ================================================================ #\n",
- "\n",
- "import time\n",
- "from IPython.display import display\n",
- "import ipywidgets as widgets\n",
- "\n",
- "state_str = [\n",
- " \"queued\",\n",
- " \"checking\",\n",
- " \"downloading metadata\",\n",
- " \"downloading\",\n",
- " \"finished\",\n",
- " \"seeding\",\n",
- " \"allocating\",\n",
- " \"checking fastresume\",\n",
- "]\n",
- "\n",
- "layout = widgets.Layout(width=\"auto\")\n",
- "style = {\"description_width\": \"initial\"}\n",
- "download_bars = [\n",
- " widgets.FloatSlider(\n",
- " step=0.01, disabled=True, layout=layout, style=style\n",
- " )\n",
- " for _ in downloads\n",
- "]\n",
- "display(*download_bars)\n",
- "\n",
- "while downloads:\n",
- " next_shift = 0\n",
- " for index, download in enumerate(downloads[:]):\n",
- " bar = download_bars[index + next_shift]\n",
- " if not download.is_seed():\n",
- " s = download.status()\n",
- "\n",
- " bar.description = \" \".join(\n",
- " [\n",
- " download.name(),\n",
- " str(s.download_rate / 1000),\n",
- " \"kB/s\",\n",
- " state_str[s.state],\n",
- " ]\n",
- " )\n",
- " bar.value = s.progress * 100\n",
- " else:\n",
- " next_shift -= 1\n",
- " ses.remove_torrent(download)\n",
- " downloads.remove(download)\n",
- " bar.close() # Seems to be not working in Colab (see https://github.com/googlecolab/colabtools/issues/726#issue-486731758)\n",
- " download_bars.remove(bar)\n",
- " print(download.name(), \"complete\")\n",
- " time.sleep(1)\n"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "yqY0BtjuGS78"
- },
- "source": [
- "### qBittorrent "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "Yk8cbx3EdKaK",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] qBittorrent \n",
- "# @markdown MiXLab is now using VueTorrent as the default qBittorrent WebUI.
\n",
- "#QBITTORRENT_VARIANT = \"official\" #@param [\"official\", \"unofficial\"]\n",
- "## @markdown ---\n",
- "## @markdown qBittorrent Default Credential
\n",
- "## @markdown > Username: adminPassword: adminadmin\n",
- "## @markdown ---\n",
- "USE_FREE_TOKEN = True\n",
- "TOKEN = \"\"\n",
- "REGION = \"US\"\n",
- "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
- "\n",
- "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
- " PORT_FORWARD = \"argotunnel\"\n",
- "elif Tunneling == \"localhost.run\":\n",
- " PORT_FORWARD = \"localhost\"\n",
- "elif Tunneling == \"ngrok\":\n",
- " PORT_FORWARD = \"ngrok\"\n",
- "# ================================================================ #\n",
- "\n",
- "import os, psutil, time, urllib.request\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "\n",
- "!wget -P /content/qBittorrent/tmp https://raw.githubusercontent.com/shirooo39/MiXLab/master/resources/packages/qbittorrent/vuetorrent.zip\n",
- "!unzip /content/qBittorrent/tmp/vuetorrent.zip -d /content/qBittorrent/tmp\n",
- "!mv /content/qBittorrent/tmp/vuetorrent/ /content/qBittorrent/WebUI\n",
- "clear_output()\n",
- "\n",
- "\n",
- "HOME = os.path.expanduser(\"~\")\n",
- "\n",
- "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
- " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
- " \"MiXLab/master/resources/mixlab.py\"\n",
- " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
- "\n",
- "from mixlab import (\n",
- " checkAvailable,\n",
- " loadingAn,\n",
- " PortForward_wrapper,\n",
- " runSh,\n",
- " displayUrl,\n",
- " findProcess\n",
- ")\n",
- "\n",
- "#Note: need to locate where the WebUI is extracted into and then remove it\n",
- "# in order to use the proper WebUI for the Official or Unofficial version of qBittorrent\n",
- "#runSh(\"rm -f /etc/apt/sources.list.d/qbittorrent-team-ubuntu-qbittorrent-stable-bionic.list\")\n",
- "#runSh(\"rm -f /etc/apt/sources.list.d/qbittorrent-team-ubuntu-qbittorrent-stable-bionic.list.save\")\n",
- "#runSh(\"rm -f /etc/apt/sources.list.d/poplite-ubuntu-qbittorrent-enhanced-bionic.list\")\n",
- "#runSh(\"rm -f /etc/apt/sources.list.d/poplite-ubuntu-qbittorrent-enhanced-bionic.list.save\")\n",
- "#runSh(\"rm -f /usr/bin/qbittorrent\")\n",
- "#runSh(\"rm -f /usr/bin/qbittorrent-nox\")\n",
- "#runSh(\"sudo apt-get purge --auto-remove qbittorrent-nox \")\n",
- "#clear_output()\n",
- "\n",
- "def addUtils():\n",
- " if not checkAvailable(\"/usr/local/sessionSettings\"):\n",
- " runSh(\"mkdir -p -m 777 /usr/local/sessionSettings\")\n",
- " if not checkAvailable(\"/content/upload.txt\"):\n",
- " runSh(\"touch /content/upload.txt\")\n",
- " if not checkAvailable(\"checkAptUpdate.txt\", userPath=True):\n",
- " runSh(\"apt update -qq -y\")\n",
- " runSh(\"apt-get install -y iputils-ping\")\n",
- "\n",
- "def configTimezone(auto=True):\n",
- " if checkAvailable(\"timezone.txt\", userPath=True):\n",
- " return\n",
- " if not auto:\n",
- " runSh(\"sudo dpkg-reconfigure tzdata\")\n",
- " else:\n",
- " runSh(\"sudo ln -fs /usr/share/zoneinfo/Asia/Ho_Chi_Minh /etc/localtime\")\n",
- " runSh(\"sudo dpkg-reconfigure -f noninteractive tzdata\")\n",
- "\n",
- "def uploadQBittorrentConfig():\n",
- " if checkAvailable(\"updatedQBSettings.txt\", userPath=True):\n",
- " return\n",
- " runSh(\n",
- " \"mkdir -p -m 666 /content/qBittorrent /root/.qBittorrent_temp /root/.config/qBittorrent\"\n",
- " )\n",
- " runSh(\n",
- " \"wget -qq https://raw.githubusercontent.com/shirooo39/MiXLab/master/resources/configurations/qbittorrent/qBittorrent.conf \\\n",
- " -O /root/.config/qBittorrent/qBittorrent.conf\"\n",
- " )\n",
- "\n",
- "def prepareSession():\n",
- " if checkAvailable(\"ready.txt\", userPath=True):\n",
- " return\n",
- " else:\n",
- " addUtils()\n",
- " configTimezone()\n",
- " uploadQBittorrentConfig()\n",
- "\n",
- "def installQBittorrent():\n",
- " if checkAvailable(\"/usr/bin/qbittorrent-nox\"):\n",
- " return\n",
- " else:\n",
- "# if QBITTORRENT_VARIANT == \"official\":\n",
- " try:\n",
- "# if checkAvailable(\"/etc/apt/sources.list.d/poplite-ubuntu-qbittorrent-enhanced-bionic.list\"):\n",
- "# runSh(\"rm /etc/apt/sources.list.d/poplite-ubuntu-qbittorrent-enhanced-bionic.list\")\n",
- "# elif checkAvailable(\"/etc/apt/sources.list.d/poplite-ubuntu-qbittorrent-enhanced-bionic.list.save\"):\n",
- "# runSh(\"rm /etc/apt/sources.list.d/poplite-ubuntu-qbittorrent-enhanced-bionic.list.save\")\n",
- "# else:\n",
- " runSh(\"sudo add-apt-repository ppa:qbittorrent-team/qbittorrent-stable\")\n",
- " runSh(\"sudo apt-get update\")\n",
- " runSh(\"sudo apt install qbittorrent-nox\")\n",
- " except:\n",
- " raise Exception('Failed to install qBittorrent!')\n",
- "# elif QBITTORRENT_VARIANT == \"unofficial\":\n",
- "# try:\n",
- "# if checkAvailable(\"/etc/apt/sources.list.d/qbittorrent-team-ubuntu-qbittorrent-stable-bionic.list\"):\n",
- "# runSh(\"rm /etc/apt/sources.list.d/qbittorrent-team-ubuntu-qbittorrent-stable-bionic.list\")\n",
- "# elif checkAvailable(\"/etc/apt/sources.list.d/qbittorrent-team-ubuntu-qbittorrent-stable-bionic.list.save\"):\n",
- "# runSh(\"rm /etc/apt/sources.list.d/qbittorrent-team-ubuntu-qbittorrent-stable-bionic.list.save\")\n",
- "# else:\n",
- "# runSh(\"sudo add-apt-repository ppa:poplite/qbittorrent-enhanced\")\n",
- "# runSh(\"sudo apt-get update\")\n",
- "# runSh(\"sudo apt-get install qbittorrent-enhanced qbittorrent-enhanced-nox\")\n",
- "# except:\n",
- "# raise Exception('Failed to install qBittorrent!')\n",
- "\n",
- "def startQBService():\n",
- " prepareSession()\n",
- " installQBittorrent()\n",
- " if not findProcess(\"qbittorrent-nox\", \"-d --webui-port\"):\n",
- " runSh(f\"qbittorrent-nox -d --webui-port={QB_Port}\")\n",
- " time.sleep(1)\n",
- "\n",
- "QB_Port = 10001\n",
- "loadingAn()\n",
- "startQBService()\n",
- "\n",
- "clear_output()\n",
- "\n",
- "server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['qbittorrent', QB_Port, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/qbittorrent.yml\", 4088]).start('qbittorrent', displayB=False)\n",
- "displayUrl(server, pNamU='qBittorrent : ')"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "nFrxKe_52fSj"
- },
- "source": [
- "### rTorrent "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "cN8mVNe52cYu",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] rTorrent \n",
- "# @markdown > rTorrent Default CredentialUsername: adminPassword: admin\n",
- "USE_FREE_TOKEN = True\n",
- "TOKEN = \"\"\n",
- "REGION = \"US\"\n",
- "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
- "\n",
- "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
- " PORT_FORWARD = \"argotunnel\"\n",
- "elif Tunneling == \"localhost.run\":\n",
- " PORT_FORWARD = \"localhost\"\n",
- "elif Tunneling == \"ngrok\":\n",
- " PORT_FORWARD = \"ngrok\"\n",
- "# ================================================================ #\n",
- "\n",
- "import os, pathlib, zipfile, re, urllib.request\n",
- "from shutil import copyfile\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "HOME = os.path.expanduser(\"~\")\n",
- "\n",
- "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
- " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
- " \"MiXLab/master/resources/mixlab.py\"\n",
- " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
- "\n",
- "from mixlab import (\n",
- " runSh,\n",
- " loadingAn,\n",
- " PortForward_wrapper,\n",
- " findProcess,\n",
- " displayUrl,\n",
- " textAn\n",
- ")\n",
- "\n",
- "clear_output()\n",
- "loadingAn()\n",
- "\n",
- "os.makedirs('tools/', exist_ok=True)\n",
- "os.makedirs(\"/content/downloads\", mode=0o775, exist_ok=True)\n",
- "os.makedirs(\"/content/tools/rtorrent/session\", mode=0o775, exist_ok=True)\n",
- "\n",
- "configData = \"\"\"\n",
- "# Where rTorrent saves the downloaded files\n",
- "directory = /content/downloads\n",
- "\n",
- "# Where rTorrent saves the session\n",
- "session = /content/tools/rtorrent/session\n",
- "\n",
- "# Which ports rTorrent can use (Make sure to open them in your router)\n",
- "port_range = 50000-50000\n",
- "port_random = no\n",
- "\n",
- "# Check the hash after the end of the download\n",
- "check_hash = yes\n",
- "\n",
- "# Enable DHT (for torrents without trackers)\n",
- "dht = auto\n",
- "dht_port = 6881\n",
- "peer_exchange = yes\n",
- "\n",
- "# Authorize UDP trackers\n",
- "use_udp_trackers = yes\n",
- "\n",
- "# Enable encryption when possible\n",
- "encryption = allow_incoming,try_outgoing,enable_retry\n",
- "\n",
- "# SCGI port, used to communicate with Flood\n",
- "scgi_port = 127.0.0.1:5000\n",
- "\"\"\"\n",
- "with open(\"/root/.rtorrent.rc\", 'w') as rC:\n",
- " rC.write(configData)\n",
- "\n",
- "if not os.path.exists(\"/content/tools/flood/config.js\"):\n",
- " runSh(\"apt install rtorrent screen mediainfo -y\")\n",
- " runSh(\"git clone --depth 1 https://github.com/jfurrow/flood.git tools/flood\", shell=True)\n",
- " copyfile(\"tools/flood/config.template.js\", \"tools/flood/config.js\")\n",
- " runSh(\"npm install\", shell=True, cd=\"tools/flood/\")\n",
- " runSh(\"npm install pm2 -g\", shell=True, cd=\"tools/flood/\")\n",
- " runSh(\"npm run build\", shell=True, cd=\"tools/flood/\")\n",
- "\n",
- " userDB = r\"\"\"{\"username\":\"admin\",\"password\":\"$argon2i$v=19$m=4096,t=3,p=1$3hJdjMSgwdUnJ86uYBhOnA$dud5j5/IokJ3hyb+v5aqmDK0jwP9X5W2pz6Qqek++Tk\",\"host\":\"127.0.0.1\",\"port\":\"5000\",\"isAdmin\":true,\"_id\":\"jLJcPySMAEgp35uB\"}\n",
- "{\"$$indexCreated\":{\"fieldName\":\"username\",\"unique\":true,\"sparse\":false}}\n",
- "\"\"\"\n",
- " userSettingsDB = r\"\"\"{\"id\":\"startTorrentsOnLoad\",\"data\":true,\"_id\":\"5leeeHwIN9rKLgG9\"}\n",
- "{\"id\":\"torrentListColumnWidths\",\"data\":{\"sizeBytes\":61,\"ratio\":56,\"peers\":62},\"_id\":\"PnB52rZSPg5fLEN9\"}\n",
- "{\"id\":\"torrentDestination\",\"data\":\"/content/downloads\",\"_id\":\"YcGroeyigKYWM8Ol\"}\n",
- "{\"id\":\"mountPoints\",\"data\":[\"/\"],\"_id\":\"gJlGwWqOsyPfkLyJ\"}\n",
- "{\"id\":\"torrentListViewSize\",\"data\":\"expanded\",\"_id\":\"q0CmirE9c0KnDGV3\"}\n",
- "\"\"\"\n",
- "\n",
- " os.makedirs(\"tools/flood/server/db/jLJcPySMAEgp35uB/settings\", exist_ok=True)\n",
- " with open(\"tools/flood/server/db/users.db\", 'w') as wDB:\n",
- " wDB.write(userDB)\n",
- " with open(\"tools/flood/server/db/jLJcPySMAEgp35uB/settings/settings.db\", 'w') as wDB:\n",
- " wDB.write(userSettingsDB)\n",
- "\n",
- "if not findProcess(\"rtorrent\", \"\"):\n",
- " runSh(\"screen -d -m -fa -S rtorrent rtorrent\", shell=True)\n",
- "if not findProcess(\"node\", \"start.js\"): \n",
- " runSh(\"pm2 start server/bin/start.js\", shell=True, cd=\"tools/flood/\")\n",
- "\n",
- "clear_output()\n",
- "\n",
- "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['rTorrent', 3000, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/rTorrent.yml\", 1463]).start('rTorrent', btc='b', displayB=True)\n",
- "displayUrl(Server, pNamU='rTorrent : ')"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "Ssn-ZMNcv5UQ"
- },
- "source": [
- "### SimpleTorrent "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "zb3hWwWE1Us8"
- },
- "source": [
- "NOT WORKING! USE OTHER TORRENT DOWNLOADER! \n",
- "(I'm... probably not going to fix this...) "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "lrCc585SD2f7",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] SimpleTorrent \n",
- "Install_old_version = False\n",
- "Auto_UP_Gdrive = False\n",
- "AUTO_MOVE_PATH = \"/content/drive/MyDrive\"\n",
- "force_change_version = \"\"\n",
- "rclone_DestinationPath = \"\"\n",
- "USE_FREE_TOKEN = True\n",
- "TOKEN = \"\"\n",
- "REGION = \"US\"\n",
- "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
- "\n",
- "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
- " PORT_FORWARD = \"argotunnel\"\n",
- "elif Tunneling == \"localhost.run\":\n",
- " PORT_FORWARD = \"localhost\"\n",
- "elif Tunneling == \"ngrok\":\n",
- " PORT_FORWARD = \"ngrok\"\n",
- "# ================================================================ #\n",
- "\n",
- "import os, time, pathlib, urllib.request, requests, tarfile\n",
- "from subprocess import Popen\n",
- "from IPython.display import clear_output\n",
- " \n",
- "HOME = os.path.expanduser(\"~\")\n",
- "pathDoneCMD = f'{HOME}/doneCMD.sh'\n",
- " \n",
- "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
- " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
- " \"MiXLab/master/resources/mixlab.py\"\n",
- " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
- "\n",
- "from mixlab import (\n",
- " runSh,\n",
- " findProcess,\n",
- " loadingAn,\n",
- " PortForward_wrapper,\n",
- " displayUrl\n",
- ")\n",
- "\n",
- "# Defining environments for SImpleTorrent\n",
- "os.makedirs('downloads', exist_ok=True)\n",
- "os.makedirs('torrents', exist_ok=True)\n",
- "os.makedirs('tools/simple-torrent', exist_ok=True)\n",
- " \n",
- "def generateCmd(src, dst):\n",
- " FAST_LIST = True\n",
- " PATH_RClone_Config = \"/usr/local/sessionSettings\"\n",
- " cmd = f'rclone move \"{src}\" \"{dst}\" ' \\\n",
- " f'--config {PATH_RClone_Config}/rclone.conf ' \\\n",
- " f'{\"--fast-list\" if FAST_LIST else \"\"} --user-agent \"Mozilla\" ' \\\n",
- " '--transfers 20 --checkers 20 --drive-server-side-across-configs ' \\\n",
- " '-c --buffer-size 256M --drive-chunk-size 256M ' \\\n",
- " '--drive-upload-cutoff 256M --drive-acknowledge-abuse ' \\\n",
- " '--drive-keep-revision-forever --tpslimit 95 --tpslimit-burst 40 ' \\\n",
- " '--stats-one-line --stats=5s -v'\n",
- " return cmd\n",
- "\n",
- "\n",
- "if Auto_UP_Gdrive:\n",
- " data = \"\"\"#!/bin/bash\n",
- " dir=${CLD_DIR}\n",
- " path=${CLD_PATH}\n",
- " abp=\"${dir}/${path}\"\n",
- " type=${CLD_TYPE}\n",
- " if [[ ${type} == \"torrent\" ]]; then\n",
- " \"\"\"\n",
- "\n",
- " nUpload = \"\"\" \n",
- " #Upload to Gdrive\n",
- " #mkdir -p \"%s/$(dirname \"${path}\")\"\n",
- " mv \"${abp}\" \"%s/${path}\"\n",
- " \"\"\" % (AUTO_MOVE_PATH, AUTO_MOVE_PATH)\n",
- "\n",
- " rcloneUpload = \"\"\"\n",
- " #You can also use rcone move file to remote\n",
- " %s\n",
- " \"\"\" % generateCmd(r\"${abp}\", rclone_DestinationPath)\n",
- "\n",
- " end = \"\"\"\n",
- " fi\n",
- " \"\"\"\n",
- " \n",
- " data = data + (rcloneUpload if rclone_DestinationPath else nUpload) + end\n",
- " with open(pathDoneCMD, 'w') as w:\n",
- " w.write(data)\n",
- " os.chmod(pathDoneCMD, 0o755)\n",
- "else:\n",
- " try:\n",
- " os.unlink(pathDoneCMD)\n",
- " except FileNotFoundError:\n",
- " pass\n",
- " \n",
- "configPath = pathlib.Path('tools/simple-torrent/cloud-torrent.json')\n",
- "configsdata = r\"\"\"\n",
- "{{\n",
- " \"AutoStart\": true,\n",
- " \"EngineDebug\": false,\n",
- " \"MuteEngineLog\": true,\n",
- " \"ObfsPreferred\": true,\n",
- " \"ObfsRequirePreferred\": false,\n",
- " \"DisableTrackers\": false,\n",
- " \"DisableIPv6\": false,\n",
- " \"DownloadDirectory\": \"/content/downloads/\",\n",
- " \"WatchDirectory\": \"torrents/\",\n",
- " \"EnableUpload\": true,\n",
- " \"EnableSeeding\": false,\n",
- " \"IncomingPort\": 50007,\n",
- " \"DoneCmd\": \"{}/doneCMD.sh\",\n",
- " \"SeedRatio\": 1.5,\n",
- " \"UploadRate\": \"High\",\n",
- " \"DownloadRate\": \"Unlimited\",\n",
- " \"TrackerListURL\": \"https://trackerslist.com/best.txt\",\n",
- " \"AlwaysAddTrackers\": true,\n",
- " \"ProxyURL\": \"\"\n",
- "}}\n",
- "\"\"\".format(HOME)\n",
- "with open(configPath, \"w+\") as configFile:\n",
- " configFile.write(configsdata)\n",
- " \n",
- "loadingAn()\n",
- "\n",
- "if not os.path.isfile(\"tools/simple-torrent/cloud-torrent\"):\n",
- " filename = 'tools/simple-torrent/cloud-torrent_linux_amd64.gz'\n",
- " if Install_old_version:\n",
- " latestTag = '1.2.3'\n",
- " else:\n",
- " latestTag = requests.get(\"https://api.github.com/repos/boypt/simple-torrent/releases/latest\").json()['tag_name']\n",
- " url = \"https://github.com/boypt/simple-torrent/releases/download/\" \\\n",
- " f\"{latestTag}/{filename[21:]}\"\n",
- " \n",
- " urllib.request.urlretrieve(url, filename)\n",
- " import gzip, shutil\n",
- " with gzip.open(filename, 'rb') as f_in:\n",
- " with open('tools/simple-torrent/cloud-torrent', 'wb') as f_out: shutil.copyfileobj(f_in, f_out)\n",
- " os.chmod('tools/simple-torrent/cloud-torrent', 0o775)\n",
- " os.remove(filename)\n",
- " \n",
- "# Launching SimpleTorrent in background\n",
- "if not findProcess(\"cloud-torrent\", \"SimpleTorrent\"):\n",
- " PORT = 4444\n",
- " try:\n",
- " urllib.request.urlopen(f\"http://localhost:{PORT}\")\n",
- " except:\n",
- " cmdC = f'./cloud-torrent --port {PORT} ' \\\n",
- " '-t Simple-Torrent ' \\\n",
- " '-c cloud-torrent.json ' \\\n",
- " '--host 0.0.0.0'\n",
- " for run in range(10): \n",
- " Popen(cmdC.split(), cwd='tools/simple-torrent')\n",
- " time.sleep(3)\n",
- " try:\n",
- " urllib.request.urlopen(f\"http://localhost:{PORT}\")\n",
- " break\n",
- " except:\n",
- " print(\"Unable to start SimpleTorrent! Retrying...\")\n",
- " \n",
- "clear_output()\n",
- "\n",
- "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['SimpleTorrent', 4444, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/SimpleTorrent.yml\", 4040]).start('SimpleTorrent')\n",
- "displayUrl(Server, pNamU='SimpleTorrent : ')"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "iLcAVtWT4NTC"
- },
- "source": [
- "### Transmission "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "CePVeFVG4QFz",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] Transmission \n",
- "# @markdown > Transmission Default CredentialUsername: adminPassword: admin\n",
- "USE_FREE_TOKEN = True\n",
- "TOKEN = \"\"\n",
- "REGION = \"US\"\n",
- "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
- "\n",
- "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
- " PORT_FORWARD = \"argotunnel\"\n",
- "elif Tunneling == \"localhost.run\":\n",
- " PORT_FORWARD = \"localhost\"\n",
- "elif Tunneling == \"ngrok\":\n",
- " PORT_FORWARD = \"ngrok\"\n",
- "# ================================================================ #\n",
- "\n",
- "import os, psutil, time, urllib.request, pathlib\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "HOME = os.path.expanduser(\"~\")\n",
- "\n",
- "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
- " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
- " \"MiXLab/master/resources/mixlab.py\"\n",
- " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
- "\n",
- "from mixlab import (\n",
- " checkAvailable,\n",
- " loadingAn,\n",
- " PortForward_wrapper,\n",
- " runSh,\n",
- " displayUrl,\n",
- " findProcess\n",
- ")\n",
- "\n",
- "loadingAn()\n",
- "\n",
- "if not os.path.exists('/usr/bin/transmission-daemon'):\n",
- " os.makedirs('downloads', exist_ok=True)\n",
- " os.makedirs('tools/transmission/', exist_ok=True)\n",
- " runSh('apt install transmission-daemon')\n",
- " nTWC = \"https://raw.githubusercontent.com/ronggang/\" \\\n",
- " \"transmission-web-control/master/release/install-tr-control.sh\"\n",
- " urllib.request.urlretrieve(nTWC, 'tools/transmission/trInstall.sh')\n",
- " runSh('bash tools/transmission/trInstall.sh auto')\n",
- " \n",
- " try:\n",
- " pathlib.Path('tools/transmission/trInstall.sh').unlink()\n",
- " except FileNotFoundError:\n",
- " pass\n",
- "\n",
- "if not findProcess('transmission-daemon', '--no-watch-dir'):\n",
- " !transmission-daemon --no-watch-dir --config-dir tools/transmission \\\n",
- " --port 9091 --download-dir /content/downloads/ --dht --utp --no-portmap \\\n",
- " --peerlimit-global 9999 --peerlimit-torrent 9999 --no-global-seedratio \\\n",
- " -u admin -v admin --auth\n",
- "\n",
- "clear_output()\n",
- "\n",
- "server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['vuze', 9595, 'http'], ['transmission', 9091, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/transmission.yml\", 4058]).start('transmission', displayB=False)\n",
- "displayUrl(server, pNamU='Transmission : ', btc='r')"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "bQ73mxqlpNjb"
- },
- "source": [
- "### µTorrent "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "unIq2GEJpLzG",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] µTorrent \n",
- "# @markdown > µTorrent Default CredentialUsername: adminPassword: admin\n",
- "USE_FREE_TOKEN = True\n",
- "TOKEN = \"\"\n",
- "REGION = \"US\"\n",
- "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
- "\n",
- "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
- " PORT_FORWARD = \"argotunnel\"\n",
- "elif Tunneling == \"localhost.run\":\n",
- " PORT_FORWARD = \"localhost\"\n",
- "elif Tunneling == \"ngrok\":\n",
- " PORT_FORWARD = \"ngrok\"\n",
- "\n",
- "# OUTPUT_DIR = \"\" # @param {type:\"string\"}\n",
- "# ================================================================ #\n",
- "\n",
- "import os, pathlib, zipfile, re\n",
- "import urllib.request\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "HOME = os.path.expanduser(\"~\")\n",
- "r = get_ipython()\n",
- "\n",
- "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
- " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
- " \"MiXLab/master/resources/mixlab.py\"\n",
- " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
- "\n",
- "from mixlab import (\n",
- " runSh,\n",
- " loadingAn,\n",
- " PortForward_wrapper,\n",
- " findProcess,\n",
- " displayUrl\n",
- ")\n",
- "\n",
- "clear_output()\n",
- "loadingAn()\n",
- "\n",
- "# Installing µTorrent\n",
- "if not os.path.exists(\"/usr/bin/utserver\"):\n",
- " os.makedirs(\"downloads\", exist_ok=True)\n",
- " r.system_raw(\"apt install libssl1.0.0 libssl-dev\")\n",
- " r.system_raw(r\"wget http://download-new.utorrent.com/endpoint/utserver/os/linux-x64-ubuntu-13-04/track/beta/ -O utserver.tar.gz\")\n",
- " r.system_raw(r\"tar -zxvf utserver.tar.gz -C /opt/\")\n",
- " r.system_raw(\"rm -f utserver.tar.gz\")\n",
- " r.system_raw(\"mv /opt/utorrent-server-* /opt/utorrent\")\n",
- " os.chmod(\"/opt/utorrent\", 0o777)\n",
- " r.system_raw(\"ln -s /opt/utorrent/utserver /usr/bin/utserver\")\n",
- " urllib.request.urlretrieve(\"https://raw.githubusercontent.com/shirooo39/MiXLab/master/resources/configurations/utorrent/utserver.conf\", \"/opt/utorrent/utserver.conf\")\n",
- "\n",
- "if not findProcess(\"utserver\", \"-settingspath\"):\n",
- " cmd = \"utserver -settingspath /opt/utorrent/\" \\\n",
- " \" -configfile /opt/utorrent/utserver.conf\" \\\n",
- " \" -daemon\"\n",
- " runSh(cmd, shell=True)\n",
- "\n",
- "clear_output()\n",
- "\n",
- "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['utorrent', 5454, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/uTorrent.yml\", 4042]).start('utorrent', displayB=False)\n",
- "displayUrl(Server, pNamU='µTorrent : ', ExUrl=fr\"http://admin:admin@{Server['url'][7:]}/gui\", btc=\"g\")"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "UU-y9pOU4sRB"
- },
- "source": [
- "### vuze "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "Uxp5DDkJ4ue1",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] vuze \n",
- "# @markdown > viuze Default CredentialUsername: rootPassword: yesme\n",
- "USE_FREE_TOKEN = True\n",
- "TOKEN = \"\"\n",
- "REGION = \"US\"\n",
- "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
- "\n",
- "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
- " PORT_FORWARD = \"argotunnel\"\n",
- "elif Tunneling == \"localhost.run\":\n",
- " PORT_FORWARD = \"localhost\"\n",
- "elif Tunneling == \"ngrok\":\n",
- " PORT_FORWARD = \"ngrok\"\n",
- "# ================================================================ #\n",
- "\n",
- "import os, psutil, time, urllib.request, pathlib\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "HOME = os.path.expanduser(\"~\")\n",
- "\n",
- "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
- " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
- " \"MiXLab/master/resources/mixlab.py\"\n",
- " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
- "\n",
- "from mixlab import (\n",
- " checkAvailable,\n",
- " loadingAn,\n",
- " PortForward_wrapper,\n",
- " runSh,\n",
- " displayUrl,\n",
- " findProcess\n",
- ")\n",
- "\n",
- "def latestTag():\n",
- " import re\n",
- " from urllib.request import urlopen\n",
- " htmlF = urlopen(\"http://dev.vuze.com/\").read().decode('UTF-8')\n",
- " return re.findall(r'\\sVuze_(\\d{4})\\sRelease\\s', htmlF)[0]\n",
- "\n",
- "\n",
- "loadingAn()\n",
- "if not os.path.exists('tools/vuze/Vuze.jar'):\n",
- " os.makedirs('downloads', exist_ok=True)\n",
- " os.makedirs('tools/vuze/', exist_ok=True)\n",
- " runSh('wget -r --level=1 -np -nH -R index.html -nd -k http://svn.vuze.com/public/client/trunk/uis/lib/', cd='tools/vuze/')\n",
- " rv = latestTag()\n",
- " dlink = f\"https://netcologne.dl.sourceforge.net/project/azureus/vuze/Vuze_{rv}/Vuze_{rv}.jar\"\n",
- " urllib.request.urlretrieve(dlink, 'tools/vuze/Vuze.jar') \n",
- "\n",
- " # All command found in set command ex: java -jar Vuze.jar --ui=console -c set\n",
- " runScript = \"\"\"plugin install xmwebui\n",
- "pair enable\n",
- "set \"Plugin.xmwebui.Port\" 9595 int\n",
- "set \"Plugin.xmwebui.Password Enable\" true boolean\n",
- "set \"Plugin.xmwebui.Pairing Enable\" false boolean\n",
- "set \"Plugin.xmwebui.User\" \"root\" string\n",
- "set \"Plugin.xmwebui.Password\" \"yesme\" password\n",
- "set \"Completed Files Directory\" \"/content/downloads/\" string\n",
- "set \"General_sDefaultSave_Directory\" \"/content/downloads/\" string\n",
- "set \"General_sDefaultTorrent_Directory\" \"/content/downloads/\" string\n",
- "\"\"\"\n",
- " with open('tools/vuze/Rscript.sh', 'w') as w: w.write(runScript)\n",
- "\n",
- "if not findProcess('java', '-jar Vuze.jar'):\n",
- " runSh('java -jar Vuze.jar --ui=console -e Rscript.sh &', cd='tools/vuze/', shell=True)\n",
- " time.sleep(7)\n",
- "\n",
- "clear_output()\n",
- "\n",
- "server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['vuze', 9595, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/vuze.yml\", 4058]).start('vuze', displayB=False)\n",
- "displayUrl(server, pNamU='vuze : ', ExUrl=fr\"http://root:yesme@{server['url'][7:]}\", btc='b')"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "EpwNYbcfRvcl"
- },
- "source": [
- "# ✦ *Utility* ✦ "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "5CWw65NugcjI"
- },
- "source": [
- "## ✧ Checksum Tool ✧ "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "3AbFcLJr5PHk"
- },
- "source": [
- "### MD5 + SHA-1 + SHA-256 "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "OQTQwwFm5PH1"
- },
- "source": [
- "TO DO (later...):\n",
- "\n",
- "1. Add some kind of checking to make sure file_name does exist.\n",
- "2. Add some kind of checking to make sure file_name is not a directory.\n",
- "3. Add some kind of checking to make sure file_path does exist.\n",
- "4. Add some kind of checking to make sure file_path is not a file.\n",
- "5. Add whether the hash file does exist or not. If not, skip."
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "id": "ovjsyICM5PH5"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] Generate
\n",
- "file_path = \"/content/\" #@param {type:\"string\"}\n",
- "file_name = \"loremipsum.txt\" #@param {type:\"string\"}\n",
- "\n",
- "generate_md5 = True #@param {type:\"boolean\"}\n",
- "generate_sha1 = True #@param {type:\"boolean\"}\n",
- "generate_sha256 = True #@param {type:\"boolean\"}\n",
- "\n",
- "# @markdown > Do NOT forget to add the end slash on the file_path field or it would not \"cd\" properly.\n",
- "# ================================================================ #\n",
- "\n",
- "import IPython\n",
- "from IPython.display import clear_output\n",
- "\n",
- "%cd \"$file_path\"\n",
- "clear_output()\n",
- "\n",
- "if generate_md5 is True:\n",
- " print(\"Generating MD5 hash...\")\n",
- " !md5sum \"$file_name\" > \"$file_name\".md5\n",
- "else:\n",
- " pass\n",
- "\n",
- "if generate_sha1 is True:\n",
- " print(\"Generating SHA-1 hash...\")\n",
- " !sha1sum \"$file_name\" > \"$file_name\".sha1\n",
- "else:\n",
- " pass\n",
- "\n",
- "if generate_sha256 is True:\n",
- " print(\"Generating SHA-256 hash...\")\n",
- " !sha256sum \"$file_name\" > \"$file_name\".sha256\n",
- "else:\n",
- " pass\n",
- "\n",
- "print(\"\\nAll hashes has been generated.\\n\\n\")\n",
- "\n",
- "%cd \"/content\""
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "id": "O8m9DgFb5PH8"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] Compare
\n",
- "file_path = \"/content/\" #@param {type:\"string\"}\n",
- "file_name = \"loremipsum.txt\" #@param {type:\"string\"}\n",
- "\n",
- "compare_md5 = True #@param {type:\"boolean\"}\n",
- "compare_sha1 = True #@param {type:\"boolean\"}\n",
- "compare_sha256 = True #@param {type:\"boolean\"}\n",
- "\n",
- "# @markdown > Do NOT forget to add the end slash on the file_path field or it would not \"cd\" properly.
\n",
- "# @markdown > If the result shows \"OK\", that means the file matches 100%.
\n",
- "# ================================================================ #\n",
- "\n",
- "import IPython\n",
- "from IPython.display import clear_output\n",
- "\n",
- "%cd \"$file_path\"\n",
- "clear_output()\n",
- "\n",
- "if compare_md5 is True:\n",
- " print(\"Comparing MD5 hash...\")\n",
- " !md5sum -c \"$file_name\".md5\n",
- "else:\n",
- " pass\n",
- "\n",
- "if compare_sha1 is True:\n",
- " print(\"\\nComparing SHA-1 hash...\")\n",
- " !sha1sum -c \"$file_name\".sha1\n",
- "else:\n",
- " pass\n",
- "\n",
- "if compare_sha256 is True:\n",
- " print(\"\\nComparing SHA-256 hash...\")\n",
- " !sha256sum -c \"$file_name\".sha256\n",
- "else:\n",
- " pass\n",
- "\n",
- "print(\"\\n\")\n",
- "%cd \"/content\""
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "pIk3H6xUic8a"
- },
- "source": [
- "## ✧ Files Uploader ✧ "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "LOmbPf7Tihne"
- },
- "source": [
- "### anonfiles "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "id": "BIMRKjTrinOM"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] Upload to anonfiles
\n",
- "file_path = \"\" # @param {type: \"string\"}\n",
- "\n",
- "url = \"https://api.anonfiles.com/upload\"\n",
- "# ================================================================ #\n",
- "\n",
- "import requests\n",
- "\n",
- "x = requests.post(url, files = {'file': open(file_path,'rb')},)\n",
- "\n",
- "print(\"Download link: \" + x.json()[\"data\"][\"file\"][\"url\"][\"full\"])"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "paeY4yX7jNd1"
- },
- "source": [
- "### BayFiles "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "id": "b5hRr0CmjSI2"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] Upload to BayFiles
\n",
- "file_path = \"\" # @param {type: \"string\"}\n",
- "\n",
- "url = \"https://api.bayfiles.com/upload\"\n",
- "# ================================================================ #\n",
- "\n",
- "import requests\n",
- "\n",
- "x = requests.post(url, files = {'file': open(file_path,'rb')},)\n",
- "\n",
- "print(\"Download link: \" + x.json()[\"data\"][\"file\"][\"url\"][\"full\"])"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "j-PgCLYrZFbm"
- },
- "source": [
- "## ✧ File Manager ✧ "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "TgwoGxAitg0y"
- },
- "source": [
- "### Cloud Commander "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "sWTkCBV0ZHtJ",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] Cloud Commander \n",
- "USE_FREE_TOKEN = True\n",
- "TOKEN = \"\"\n",
- "REGION = \"US\"\n",
- "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
- "\n",
- "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
- " PORT_FORWARD = \"argotunnel\"\n",
- "elif Tunneling == \"localhost.run\":\n",
- " PORT_FORWARD = \"localhost\"\n",
- "elif Tunneling == \"ngrok\":\n",
- " PORT_FORWARD = \"ngrok\"\n",
- "# ================================================================ #\n",
- "\n",
- "import os, psutil, time, urllib.request\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "HOME = os.path.expanduser(\"~\")\n",
- "\n",
- "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
- " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
- " \"MiXLab/master/resources/mixlab.py\"\n",
- " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
- "\n",
- "from mixlab import (\n",
- " loadingAn,\n",
- " displayUrl,\n",
- " PortForward_wrapper,\n",
- ")\n",
- "\n",
- "loadingAn()\n",
- "\n",
- "if os.path.isfile(\"/tools/node/bin/cloudcmd\") == False:\n",
- " get_ipython().system_raw(\"npm cache clean -f && npm install -g n && n stable && npm i cloudcmd -g --force\")\n",
- "\n",
- "try:\n",
- " urllib.request.urlopen('http://localhost:7007')\n",
- "except urllib.error.URLError:\n",
- " !nohup cloudcmd --online --no-auth --show-config --show-file-name \\\n",
- " --editor 'deepword' --packer 'tar' --port 7007 \\\n",
- " --no-confirm-copy --confirm-move --name 'File Manager' \\\n",
- " --keys-panel --no-contact --console --sync-console-path \\\n",
- " --no-terminal --no-vim --columns 'name-size-date' --no-log &\n",
- "\n",
- "clear_output()\n",
- "\n",
- "server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['cloudcmd', 7007, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/CloudCommander.yml\", 7044]).start('cloudcmd')\n",
- "displayUrl(server, pNamU='Cloud Commander : ')"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "xmq_9AJCtvlV"
- },
- "source": [
- "### File Browser "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "Cs_DPqJaabw3",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] File Browser \n",
- "USE_FREE_TOKEN = True\n",
- "TOKEN = \"\"\n",
- "REGION = \"US\"\n",
- "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
- "\n",
- "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
- " PORT_FORWARD = \"argotunnel\"\n",
- "elif Tunneling == \"localhost.run\":\n",
- " PORT_FORWARD = \"localhost\"\n",
- "elif Tunneling == \"ngrok\":\n",
- " PORT_FORWARD = \"ngrok\"\n",
- "\n",
- "# OUTPUT_DIR = \"\" # @param {type:\"string\"}\n",
- "# ================================================================ #\n",
- "\n",
- "import os, pathlib, zipfile, re, urllib.request\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "HOME = os.path.expanduser(\"~\")\n",
- "\n",
- "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
- " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
- " \"MiXLab/master/resources/mixlab.py\"\n",
- " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
- "\n",
- "from mixlab import (\n",
- " runSh,\n",
- " loadingAn,\n",
- " PortForward_wrapper,\n",
- " displayUrl,\n",
- " findProcess\n",
- ")\n",
- "\n",
- "clear_output()\n",
- "loadingAn()\n",
- "\n",
- "os.makedirs('tools/filebrowser/', exist_ok=True)\n",
- "\n",
- "get_ipython().system_raw(r\"curl -fsSL https://filebrowser.xyz/get.sh | bash\")\n",
- "if not findProcess(\"filebrowser\", \"--noauth\"):\n",
- " runSh(\"filebrowser --noauth -r /content/ -p 4000 -d tools/filebrowser/filebrowser.db &\", shell=True)\n",
- "\n",
- "clear_output()\n",
- "\n",
- "server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['filebrowser', 4000, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/FileBrowser.yml\", 4099]).start('filebrowser')\n",
- "displayUrl(server, pNamU='File Browser : ')"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "nUI7G8OSSXbM"
- },
- "source": [
- "### Go HTTP File Server "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "evBFe60vSfxW",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] Go HTTP File Server \n",
- "HOME_DIRECTORY = \"/content\" #@param {type:\"string\"}\n",
- "USE_FREE_TOKEN = True\n",
- "TOKEN = \"\"\n",
- "REGION = \"US\"\n",
- "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
- "\n",
- "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
- " PORT_FORWARD = \"argotunnel\"\n",
- "elif Tunneling == \"localhost.run\":\n",
- " PORT_FORWARD = \"localhost\"\n",
- "elif Tunneling == \"ngrok\":\n",
- " PORT_FORWARD = \"ngrok\"\n",
- "# ================================================================ #\n",
- "\n",
- "import os, urllib.request, requests\n",
- "from zipfile import ZipFile as ZZ\n",
- "from IPython.display import clear_output\n",
- "\n",
- "HOME = os.path.expanduser(\"~\")\n",
- "\n",
- "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
- " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
- " \"MiXLab/master/resources/mixlab.py\"\n",
- " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
- "\n",
- "from mixlab import (\n",
- " runSh,\n",
- " loadingAn,\n",
- " PortForward_wrapper,\n",
- " findProcess,\n",
- " CWD,\n",
- "\tdisplayUrl\n",
- ")\n",
- "\n",
- "loadingAn()\n",
- "\n",
- "toolLocation = 'tools/ghfs'\n",
- "binaryF = f\"{toolLocation}/ghfs\"\n",
- "if not HOME_DIRECTORY:\n",
- " HOME_DIRECTORY = CWD\n",
- "\n",
- "try:\n",
- " if HOME_DIRECTORY != OldP:\n",
- " os.system(\"pkill ghfs\")\n",
- "except NameError:\n",
- " pass\n",
- " \n",
- "OldP = HOME_DIRECTORY\n",
- "os.makedirs(toolLocation, exist_ok=True)\n",
- "\n",
- "if not os.path.exists(binaryF):\n",
- " ownerProjet = \"mjpclab/go-http-file-server\"\n",
- " DZipBL = f\"{toolLocation}/Zipghfs.zip\"\n",
- " latest_tag = requests.get(f\"https://api.github.com/repos/{ownerProjet}/releases/latest\").json()['tag_name']\n",
- " dBinaryL = f\"https://github.com/{ownerProjet}/releases/download/{latest_tag}/ghfs-{latest_tag}-linux-amd64.zip\"\n",
- " urllib.request.urlretrieve(dBinaryL, DZipBL)\n",
- " with ZZ(DZipBL, 'r') as zip_ref:zip_ref.extractall(toolLocation)\n",
- " os.remove(DZipBL)\n",
- " os.chmod(binaryF, 0o777)\n",
- "\n",
- "if not findProcess(\"ghfs\", \"--listen-plain\"):\n",
- " runSh(f'./ghfs --listen-plain 1717 -R \\\n",
- " -a \":/:{HOME_DIRECTORY}\" \\\n",
- " --global-upload \\\n",
- " --global-mkdir \\\n",
- " --global-delete \\\n",
- " --global-archive \\\n",
- " --global-archive \\\n",
- " &', \n",
- " shell=True,\n",
- " cd=\"tools/ghfs\")\n",
- "\n",
- "clear_output()\n",
- "\n",
- "server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['ghfs', 1717, 'http']], REGION.lower(), [f\"{HOME}/.ngrok2/GoHTTPFileServer.yml\", 41717]).start('ghfs')\n",
- "displayUrl(server, pNamU='Go HTTP File Server : ')"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "aStiEPlnDoeY"
- },
- "source": [
- "### Create / Extract Archive "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "88JkX_J3EXWC",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Install the Tools
\n",
- "# ================================================================ #\n",
- "\n",
- "import os\n",
- "import IPython\n",
- "from IPython.display import clear_output\n",
- "\n",
- "os.system(\"sudo apt update\")\n",
- "os.system(\"apt install p7zip-full p7zip-rar unrar rar\")\n",
- "\n",
- "clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "RMy0TxzHzCR9",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "#@markdown ← Create archive \n",
- "source_path = \"\" #@param {type:\"string\"}\n",
- "archive_type = \"zip\" #@param [\"zip\", \"7z\", \"rar\", \"tar\", \"tar.gz\"]\n",
- "archive_name = \"\" #@param {type:\"string\"}\n",
- "#@markdown > If left empty, the default name will be used (archive)\n",
- "archive_password = \"\" #@param {type:\"string\"}\n",
- "#@markdown > Leave this field empty if you do not want to protect the archive with password.\n",
- "compression_level = \"no_compression\" #@param [\"no_compression\", \"fastest\", \"fast\", \"normal\", \"maximum\", \"ultra\"]\n",
- "output_path = \"\" #@param {type:\"string\"}\n",
- "#@markdown > If left empty, the default path will be used (/content)\n",
- "\n",
- "#@markdown ---\n",
- "automatically_clear_cell_output = False # @param{type: \"boolean\"}\n",
- "# ================================================================ #\n",
- "\n",
- "from IPython.display import HTML, clear_output\n",
- "import os, sys, re\n",
- "\n",
- "\n",
- "if archive_name == \"\":\n",
- " archive_name = \"archive\"\n",
- "else:\n",
- " pass\n",
- "\n",
- "if archive_password == \"\":\n",
- " pass\n",
- "else:\n",
- " archive_password = \"-p\" + archive_password\n",
- "\n",
- "if compression_level == \"no_compression\":\n",
- " compression_level = \"-mx=0\"\n",
- "elif compression_level == \"fastest\":\n",
- " compression_level = \"-mx=1\"\n",
- "elif compression_level == \"fast\":\n",
- " compression_level = \"-mx=3\"\n",
- "elif compression_level == \"normal\":\n",
- " compression_level = \"-mx=5\"\n",
- "elif compression_level == \"maximum\":\n",
- " compression_level = \"-mx=7\"\n",
- "elif compression_level == \"ultra\":\n",
- " compression_level = \"-mx=9\"\n",
- "\n",
- "if output_path == \"\":\n",
- " output_path = \"/content\"\n",
- "else:\n",
- " pass\n",
- "\n",
- "\n",
- "if archive_type == \"zip\":\n",
- " if source_path == \"\":\n",
- " display(HTML(\"❌ The source_path field is empty! \"))\n",
- " else:\n",
- " #output_file_path = re.search(\"^[\\/].+\\/\", source_path)\n",
- " #output_file_path_raw = output_file_path.group(0)\n",
- " #delsplit = re.search(\"\\/(?:.(?!\\/))+$\", source_path)\n",
- " #folder_name = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
- "\n",
- " #os.environ['inputDir'] = source_path\n",
- " #os.environ['outputPath'] = output_file_path_raw\n",
- " #os.environ['folderName'] = folder_name\n",
- " #os.environ['archiveLevel'] = compression_level\n",
- " #os.environ['archivePassword'] = archive_password\n",
- "\n",
- " #!7z a -tzip \"$archiveLevel\" \"$archivePassword\" \"$outputPath\"/\"$folderName\".zip \"$inputDirectory\"\n",
- " !7z a -tzip \"$compression_level\" \"$archive_password\" \"$output_path\"/\"$archive_name\".zip \"$source_path\"\n",
- "else:\n",
- " display(HTML(\"❌ More archive format will be added in the future. \"))\n",
- "\n",
- "\n",
- "if automatically_clear_cell_output is True:\n",
- "\tclear_output()\n",
- "else:\n",
- "\tpass"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "Mbmf5lk0zF1q",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "#@markdown ← Extract archive \n",
- "archive_path = \"\" #@param {type:\"string\"}\n",
- "archive_type = \"zip\" #@param [\"zip\", \"7z\", \"rar\", \"tar\", \"gzip\", \"iso\"]\n",
- "archive_password = \"\" #@param {type:\"string\"}\n",
- "#@markdown > Leave the archive_password field empty if archive is not password protected.\n",
- "output_path = \"\" #@param {type:\"string\"}\n",
- "#@markdown > Leave the output_path field empty to use default extraction path (/content).\n",
- "\n",
- "#@markdown ---\n",
- "automatically_clear_cell_output = False # @param{type: \"boolean\"}\n",
- "# ================================================================ #\n",
- "\n",
- "import os, sys, re\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "\n",
- "if archive_password == \"\":\n",
- " pass\n",
- "elif not archive_password == \"\":\n",
- " archive_password = \"-p\" + archive_password\n",
- "\n",
- "if output_path == \"\":\n",
- " output_path = \"-o/content\"\n",
- "elif output_path == \"/content\":\n",
- " output_path = \"-o/content\"\n",
- "else:\n",
- " output_path = \"-o\" + output_path\n",
- "\n",
- "\n",
- "os.environ['inputFile'] = archive_path\n",
- "os.environ['inputPassword'] = archive_password\n",
- "os.environ['outputFile'] = output_path\n",
- "\n",
- "\n",
- "if archive_path == \"\":\n",
- " display(HTML(\"❌ The archive_path field is empty! \"))\n",
- "else:\n",
- " if archive_type == \"zip\":\n",
- " !7z x \"$inputFile\" \"$inputPassword\" \"$outputFile\"\n",
- " elif archive_type == \"iso\":\n",
- " !7z x \"$inputFile\" \"$outputFile\"\n",
- " else:\n",
- " display(HTML(\"❌ More archive format will be added in the future. \"))\n",
- "\n",
- "\n",
- "if automatically_clear_cell_output is True:\n",
- "\tclear_output()\n",
- "else:\n",
- "\tpass"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "d7hdxEjc-ynr"
- },
- "source": [
- "## ✧ Image Manipulation ✧ "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "qs8R07vxhuo2"
- },
- "source": [
- "Some of these cells might require GPU runtime. "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "Jbw2QIUB6JKR"
- },
- "source": [
- "### Real-ESRGAN "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "JdnKplLq61kb"
- },
- "source": [
- "GPU runtime is required! "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "5kUMYIALO6yI"
- },
- "source": [
- "This is my own simple Google Colab implementation of xinntao 's amazing Real-ESRGAN project.\n",
- "\n",
- " \n",
- "\n",
- "Image credit: Real-ESRGAN "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "OW-WSLlS6S3m",
- "cellView": "form"
- },
- "source": [
- "#============================== [FORM] ==============================#\n",
- "#@markdown ##← [Install] Real-ESRGAN\n",
- "#@markdown You MUST run this cell first! \n",
- "#====================================================================#\n",
- "\n",
- "import subprocess, pathlib, shutil\n",
- "\n",
- "\n",
- "main_path = '/content/Real-ESRGAN'\n",
- "input_path = main_path + '/inputs'\n",
- "cmd = [\n",
- " 'apt get update',\n",
- " 'git clone https://github.com/xinntao/Real-ESRGAN.git',\n",
- " 'pip install basicsr',\n",
- " 'pip install facexlib',\n",
- " 'pip install gfpgan',\n",
- " 'pip install -r requirements.txt',\n",
- " 'python setup.py develop'\n",
- " ]\n",
- "mdl = [\n",
- " 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth',\n",
- " 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.3/RealESRGAN_x2plus_netD.pth',\n",
- " 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/RealESRNet_x4plus.pth',\n",
- " 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.3/RealESRGAN_x4plus_netD.pth',\n",
- " 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth',\n",
- " 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B_netD.pth'\n",
- " ]\n",
- "\n",
- "\n",
- "for x in cmd[0:2:]:\n",
- " subprocess.run(x, shell=True)\n",
- "for y in cmd[2:]:\n",
- " subprocess.run(y, shell=True, cwd=main_path)\n",
- "for z in mdl:\n",
- " subprocess.run(['wget ' + z + ' -P experiments/pretrained_models'], shell=True, cwd=main_path)\n",
- "\n",
- "\n",
- "remove_path = pathlib.Path(input_path)\n",
- "shutil.rmtree(remove_path)\n",
- "remove_path.mkdir()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "eFcZE1D374Gr",
- "cellView": "form"
- },
- "source": [
- "#============================== [FORM] ==============================#\n",
- "#@markdown ##← Get image\n",
- "image_source = \"upload\" #@param [\"upload\", \"url\"]\n",
- "#====================================================================#\n",
- "\n",
- "import os, sys, shutil\n",
- "from IPython.display import clear_output\n",
- "from google.colab import files\n",
- "\n",
- "\n",
- "main_path = '/content/Real-ESRGAN'\n",
- "input_path = main_path + '/inputs'\n",
- "\n",
- "\n",
- "if image_source == 'upload':\n",
- " uploaded = files.upload()\n",
- "\n",
- " for filename in uploaded.keys():\n",
- " dst_path = os.path.join(input_path, filename)\n",
- " shutil.move(filename, dst_path)\n",
- "\n",
- " print(f'Moved file \"{filename}\" to \"{dst_path}\"') \n",
- "elif image_source == 'url':\n",
- " print('Enter ONLY direct url! For example: https://internet.com/image.jpg')\n",
- " print('Leave the field below blank to cancel.\\n')\n",
- "\n",
- " image_url = input('URL: ')\n",
- "\n",
- " if image_url == '':\n",
- " clear_output()\n",
- " sys.exit('String image_url is empty!')\n",
- " else:\n",
- " os.system('wget -q ' + image_url + ' -N -P ' + input_path)\n",
- "\n",
- " print(f'\\nImage saved to: \"{input_path}\"')"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "OCxq4YzeQ2It",
- "cellView": "form"
- },
- "source": [
- "#============================== [FORM] ==============================#\n",
- "#@markdown ##← [Start] Real-ESRGAN\n",
- "style = \"real_life\" #@param [\"real_life\", \"anime\"]\n",
- "upscale_ratio = 2 #@param {type:\"slider\", min: 1, max:10, step:1}\n",
- "#@markdown ---\n",
- "#@markdown ⚙️ Advanced Options ⚙️ \n",
- "output_format = \"auto\" #@param [\"auto\", \"jpg\", \"png\"]\n",
- "alpha_upsampler = \"realesrgan\" #@param [\"realesrgan\", \"bicubic\"]\n",
- "split_chunk = 256 #@param {type:\"slider\", min:0, max:1024, step:128}\n",
- "custom_upscale_ratio = \"1.5\" #@param {type:\"string\"}\n",
- "enable_custom_upscale_ratio = False #@param {type:\"boolean\"}\n",
- "optimize_face = False #@param {type:\"boolean\"}\n",
- "half_precision_mode = False #@param {type:\"boolean\"}\n",
- "#@markdown >This cell is not finished yet!\n",
- "#====================================================================#\n",
- "\n",
- "#\n",
- "# TO DO: if \"inputs\" is empty, upload some image first\n",
- "# optimize_face is not for anime model.\n",
- "# add \"performance mode\" by using the X2 model? since it's faster...\n",
- "# us os.system or subprocess.run\n",
- "#\n",
- "\n",
- "work_path = '/content/Real-ESRGAN'\n",
- "input_path = work_path + '/inputs'\n",
- "output_path = work_path + '/results'\n",
- "model = [\n",
- " 'RealESRGAN_x2plus.pth',\n",
- " 'RealESRGAN_x2plus_netD.pth',\n",
- " 'RealESRNet_x4plus.pth',\n",
- " 'RealESRGAN_x4plus_netD.pth',\n",
- " 'RealESRGAN_x4plus_anime_6B.pth',\n",
- " 'RealESRGAN_x4plus_anime_6B_netD.pth'\n",
- " ]\n",
- "output_format = '--ext ' + output_format\n",
- "alpha_upsampler = '--alpha_upsampler ' + alpha_upsampler\n",
- "split_chunk = '--tile ' + str(split_chunk)\n",
- "\n",
- "if style == 'anime':\n",
- " use_model = model[4]\n",
- "else:\n",
- " use_model = model[2]\n",
- "\n",
- "if enable_custom_upscale_ratio is True:\n",
- " if custom_upscale_ratio == '':\n",
- " sys.exit('The custom_upscale_ratio field cannot be empty!')\n",
- " else:\n",
- " upscale_ratio = '--outscale ' + custom_upscale_ratio\n",
- "else:\n",
- " upscale_ratio = '--outscale ' + str(upscale_ratio)\n",
- "\n",
- "if optimize_face is True:\n",
- " optimize_face = '--face_enhance'\n",
- "else:\n",
- " optimize_face = ''\n",
- "\n",
- "if half_precision_mode is True:\n",
- " half_precision_mode = '--half'\n",
- "else:\n",
- " half_precision_mode = ''\n",
- "\n",
- "\n",
- "!python \"{work_path}/inference_realesrgan.py\" --model_path \"{work_path}/experiments/pretrained_models/{use_model}\" --input \"{input_path}\" --output \"{output_path}\" {upscale_ratio} {split_chunk} {alpha_upsampler} {split_chunk} {optimize_face} {half_precision_mode} {output_format} --suffix 'realesrgan'\n",
- "\n",
- "print('\\nResults are saved in:', output_path)\n",
- "\n",
- "\n",
- "#====================================================================================================\n",
- "#\n",
- "# import subprocess\n",
- "# from subprocess import PIPE\n",
- "\n",
- "# work_path = '/content/Real-ESRGAN'\n",
- "# input_path = work_path + '/inputs'\n",
- "# output_path = work_path + '/results'\n",
- "# use_model = 'RealESRGAN_x2plus.pth'\n",
- "\n",
- "# cmd = 'python inference_realesrgan.py --model_path experiments/pretrained_models/' + use_model + ' --input inputs'\n",
- "# process_run = subprocess.run(cmd, shell=True, stdout=PIPE, stderr=PIPE, universal_newlines=True, cwd=work_path)\n",
- "# print(process_run.stdout, process_run.stderr)\n",
- "\n",
- "# print('\\nOutputs are saved in:', output_path)\n",
- "#\n",
- "#===================================================================================================="
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "3cfIjdfASyNI",
- "cellView": "form"
- },
- "source": [
- "#============================== [FORM] ==============================#\n",
- "#@markdown ##← [Start] Visualize comparison (optional)\n",
- "#====================================================================#\n",
- "#\n",
- "# Codes below are from Real-ESRGAN.\n",
- "# Path variables are of course has been changed.\n",
- "#\n",
- "#====================================================================#\n",
- "\n",
- "working_directory = '/content/Real-ESRGAN'\n",
- "input_folder = working_directory + '/inputs'\n",
- "result_folder = working_directory + '/results'\n",
- "\n",
- "# utils for visualization\n",
- "import cv2\n",
- "import matplotlib.pyplot as plt\n",
- "def display(img1, img2):\n",
- " fig = plt.figure(figsize=(25, 10))\n",
- " ax1 = fig.add_subplot(1, 2, 1) \n",
- " plt.title('Input image', fontsize=16)\n",
- " ax1.axis('off')\n",
- " ax2 = fig.add_subplot(1, 2, 2)\n",
- " plt.title('Real-ESRGAN output', fontsize=16)\n",
- " ax2.axis('off')\n",
- " ax1.imshow(img1)\n",
- " ax2.imshow(img2)\n",
- "def imread(img_path):\n",
- " img = cv2.imread(img_path)\n",
- " img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n",
- " return img\n",
- "\n",
- "# display each image in the upload folder\n",
- "import os\n",
- "import glob\n",
- "\n",
- "input_list = sorted(glob.glob(os.path.join(input_folder, '*')))\n",
- "output_list = sorted(glob.glob(os.path.join(result_folder, '*')))\n",
- "for input_path, output_path in zip(input_list, output_list):\n",
- " img_input = imread(input_path)\n",
- " img_output = imread(output_path)\n",
- " display(img_input, img_output)"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "2QB4GeGX0Wbr",
- "cellView": "form"
- },
- "source": [
- "#============================== [FORM] ==============================#\n",
- "#@markdown ##← Download results (archived)\n",
- "#====================================================================#\n",
- "\n",
- "zip_filename = 'Real-ESRGAN_result.zip'\n",
- "\n",
- "if os.path.exists(zip_filename):\n",
- " os.remove(zip_filename)\n",
- "\n",
- "os.system(f\"zip -r -j {zip_filename} /content/Real-ESRGAN/results/*\")\n",
- "\n",
- "files.download(zip_filename)"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "e-OWHJwruE6V"
- },
- "source": [
- "### StyleGAN2 "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "OvwxWoyaUsIL"
- },
- "source": [
- "GPU runtime is required! "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "jqz-1eEnuIer",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Install] StyleGAN2 \n",
- "# ================================================================ #\n",
- "\n",
- "%cd /content\n",
- "!pip install typeguard;\n",
- "!pip install psutil\n",
- "!pip install humanize\n",
- "!pip install tqdm\n",
- "!rm -rf stylegan2 && git clone https://github.com/NVlabs/stylegan2.git;\n",
- "%cd /content/stylegan2\n",
- "\n",
- "print(\"Installing\")\n",
- "\n",
- "from IPython.display import Image, clear_output\n",
- "from google.colab import files\n",
- "import sys\n",
- "import pickle\n",
- "import numpy as np\n",
- "import PIL\n",
- "import psutil\n",
- "import humanize\n",
- "import os\n",
- "import time\n",
- "from tqdm import tqdm\n",
- "\n",
- "from scipy import ndimage\n",
- "\n",
- "%tensorflow_version 1.x\n",
- "sys.path.append('/content/stylegan2/dnnlib')\n",
- "import dnnlib\n",
- "import dnnlib.tflib as tflib\n",
- "dnnlib.tflib.init_tf()\n",
- "\n",
- "entity_to_url = {\n",
- " 'faces': 'https://drive.google.com/uc?id=1erg93hWnekh57m3cwsAnqJYfYVceVVSe',\n",
- " 'celebs': 'https://drive.google.com/uc?id=1q8VldTeTbruoh34ih6GftOcybGNA0dcZ',\n",
- " 'bedrooms': 'https://drive.google.com/uc?id=15EV9JBiQ7ifoi-B-DQAZF4sYPdCAsiCY',\n",
- " 'cars': 'https://drive.google.com/uc?id=1QzWwIqJITrg5NWG7QyqrArhb_4UhStDy',\n",
- " 'cats': 'https://drive.google.com/uc?id=1Fz12B8TSPiRtzCqjhFxTH_W-rIZ5rSGr',\n",
- " 'anime': 'https://drive.google.com/uc?id=1z8N_-xZW9AU45rHYGj1_tDHkIkbnMW-R',\n",
- " 'chruch': 'https://drive.google.com/uc?id=1-0JMXPdCQLIVxkDE_S9pO8t8mWoEvhHl',\n",
- " 'horse': 'https://drive.google.com/uc?id=1-1oc3016pUDi2er1zEvjGcFy8FC-QAh3',\n",
- " 'anime': 'https://drive.google.com/uc?id=1-91fGZSsZJPNlFytg5iHvVLqxKWDLFt8',\n",
- " 'anime_portrait': 'https://drive.google.com/uc?id=1-Bw24cv9o7qjLtd8yq8bzzz9AjR9QAkL',\n",
- " 'faces2': 'https://drive.google.com/uc?id=18rJYK9oF6D7C607Be1B_Fu53rjjHUAT1',\n",
- " 'GOT': 'https://drive.google.com/uc?id=1-0LCuuUxUA0R6gdSd9prn5sP7T01iF0e',\n",
- "}\n",
- "\n",
- "model_cache = {}\n",
- "synthesis_kwargs = dict(output_transform=dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True), minibatch_size=20)\n",
- "\n",
- "def gen_pil_image(latents, zoom=1, psi=0.7):\n",
- " fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True)\n",
- " image = Gs.run(latents, None, randomize_noise=True, output_transform=fmt, truncation_psi=psi)\n",
- " if zoom == 1:\n",
- " return PIL.Image.fromarray(image[0])\n",
- " else:\n",
- " print(image[0].shape)\n",
- " return PIL.Image.fromarray(ndimage.zoom(image[0],(zoom,zoom,1)))\n",
- "\n",
- "import google.colab.output\n",
- "import random\n",
- "import io\n",
- "import base64\n",
- "\n",
- "def gen(l=None, psi=1):\n",
- " if l is None:\n",
- " l = [random.random()*2-1 for x in range(512)]\n",
- " pimg = gen_pil_image(np.array(l).reshape(1,512), psi=psi)\n",
- " bio = io.BytesIO()\n",
- " pimg.save(bio, \"PNG\")\n",
- " b = bio.getvalue()\n",
- " return 'data:image/png;base64,'+str(base64.b64encode(b),encoding='utf-8')\n",
- "\n",
- "google.colab.output.register_callback('gen', gen)\n",
- "\n",
- "##\n",
- "def fetch_model(name):\n",
- " if model_cache.get(name):\n",
- " return model_cache[name]\n",
- " url = entity_to_url[name]\n",
- " with dnnlib.util.open_url(url, cache_dir='cache') as f:\n",
- " _G, _D, Gs = pickle.load(f)\n",
- " model_cache[name] = Gs\n",
- " return model_cache[name]\n",
- "\n",
- "def fetch_file(filename):\n",
- " with open(filename,'rb') as f:\n",
- " return pickle.load(f)\n",
- "\n",
- "clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "BPdx4NeDu1SX",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Render Model \n",
- "# ================================================================ #\n",
- "\n",
- "#choose model here. default is ffhq\n",
- "import os\n",
- "Render_Model = \"anime\" #@param [\"faces\",\"faces2\",\"GOT\",\"celebs\",\"bedrooms\",\"cars\",\"cats\",\"chruch\",\"horse\",\"anime\"]\n",
- "\n",
- "\n",
- "if Render_Model == \"faces\":\n",
- " curr_model = \"faces\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
- " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')\n",
- "\n",
- "if Render_Model == \"faces2\":\n",
- " curr_model = \"faces2\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
- " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')\n",
- "\n",
- "if Render_Model == \"celebs\":\n",
- " curr_model = \"celebs\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
- " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')\n",
- "\n",
- "if Render_Model == \"bedrooms\":\n",
- " curr_model = \"bedrooms\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
- " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')\n",
- "\n",
- "if Render_Model == \"cars\":\n",
- " curr_model = \"cars\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
- " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')\n",
- "\n",
- "if Render_Model == \"cats\":\n",
- " curr_model = \"cats\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
- " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')\n",
- "\n",
- "if Render_Model == \"chruch\":\n",
- " curr_model = \"chruch\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
- " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')\n",
- "\n",
- "if Render_Model == \"horse\":\n",
- " curr_model = \"horse\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
- " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')\n",
- "\n",
- "if Render_Model == \"anime\":\n",
- " curr_model = \"anime\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
- " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')\n",
- "\n",
- "if Render_Model == \"anime_portrait\":\n",
- " curr_model = \"anime_portrait\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
- " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')\n",
- "\n",
- "if Render_Model == \"GOT\":\n",
- " curr_model = \"GOT\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
- " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "xYUOT5SAu_wz",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] StyleGAN2 \n",
- "# ================================================================ #\n",
- "\n",
- "from IPython.display import HTML\n",
- "\n",
- "def get_latent_html(i):\n",
- " return \"\"\"\n",
- " L%03i: \n",
- " \n",
- "
\"\"\" % (i, i, i, (random.random()*2-1))\n",
- "\n",
- "def get_latents_html():\n",
- " return '\\n'.join([get_latent_html(i) for i in range(512)])\n",
- "\n",
- "input_form = \"\"\"\n",
- " \n",
- " \n",
- "\n",
- "\n",
- "
You have currently loaded %s model
\n",
- "
\n",
- "
\n",
- "
\n",
- "
\n",
- "
\n",
- "
\n",
- " %s\n",
- "
\n",
- "
\n",
- "
\n",
- "\n",
- "
\n",
- "
\n",
- " Generate from latents \n",
- "
\n",
- "
\n",
- "
\n",
- " psi: \n",
- " \n",
- "
\n",
- "
\n",
- "
\n",
- " Mutate randomly \n",
- "
\n",
- "
\n",
- "
\n",
- " Mutation strength: \n",
- " \n",
- "
\n",
- "
\n",
- "
\n",
- " Random image \n",
- "
\n",
- "
\n",
- " Normalize latents \n",
- "
\n",
- "
\n",
- "\n",
- "
\n",
- "
\n",
- " Save latents \n",
- " Load latents \n",
- "
\n",
- "
\n",
- "
\n",
- " \n",
- "
\n",
- "
\n",
- "
\n",
- "\n",
- "
\n",
- "\"\"\" % (curr_model, get_latents_html())\n",
- "\n",
- "javascript = \"\"\"\n",
- " \n",
- "\n",
- "\"\"\"\n",
- "\n",
- "HTML(input_form + javascript)"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "AMu9crpy-7yb"
- },
- "source": [
- "### waifu2xLab "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "Q1H1wCcM-1Vd"
- },
- "source": [
- "GPU runtime is optional, but waifu2x could perform better on GPU. "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "o-L3111Z-2a3"
- },
- "source": [
- "waifu2xLab is a Google Colab implementation of tsurumeso 's waifu2x-chainer \n",
- "\n",
- " \n",
- "\n",
- "2D character picture (Kagamine Rin) is licensed under CC BY-NC by piapro [2]. "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "0IOySews_Ine",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Clone waifu2x-chainer and Install Dependencies \n",
- "# ================================================================ #\n",
- "\n",
- "import os\n",
- "from IPython.display import clear_output\n",
- "\n",
- "\n",
- "waifu2x_path_1 = \"/content/tools/waifu2x\"\n",
- "waifu2x_path_2 = waifu2x_path_1 + \"/waifu2x.py\"\n",
- "input_path = \"/content/waifu2x/input\"\n",
- "output_path = \"/content/waifu2x/output\"\n",
- "\n",
- "\n",
- "if os.path.exists(waifu2x_path_1) and os.path.isdir(waifu2x_path_1) and os.path.exists(waifu2x_path_2) and os.path.isfile(waifu2x_path_2):\n",
- " pass\n",
- "else:\n",
- " # Installing the required dependencies\n",
- " # !pip install -q cupy-cuda100\n",
- " !pip install -q futures\n",
- " !pip install -q chainer\n",
- "\n",
- " # Cloning waifu2x-chainer from github\n",
- " !git clone -l -s https://github.com/tsurumeso/waifu2x-chainer.git /content/tools/waifu2x\n",
- "\n",
- " # Creating input and output directory for waifu2x-chainer to work with\n",
- " if os.path.exists(input_path) and os.path.exists(output_path):\n",
- " pass\n",
- " elif not os.path.exists(input_path) and os.path.exists(output_path):\n",
- " os.makedirs(input_path)\n",
- " elif os.path.exists(input_path) and not os.path.exists(output_path):\n",
- " os.makedirs(output_path)\n",
- " else:\n",
- " os.makedirs(input_path)\n",
- " os.makedirs(output_path)\n",
- "\n",
- " clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "d_OGARyM_L8P",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Upload Image or Get from URL \n",
- "image_source = \"file_upload\" #@param [\"file_upload\", \"url\"]\n",
- "url = \"\" # @param {type:\"string\"}\n",
- "# @markdown > For the url, input a direct link to the file. (e.g: https://domain.moe/saber_waifu.jpg )\n",
- "# ================================================================ #\n",
- "\n",
- "import os\n",
- "import IPython\n",
- "from IPython.display import HTML, clear_output\n",
- "import google.colab.files\n",
- "\n",
- "\n",
- "waifu2x_path_1 = \"/content/tools/waifu2x\"\n",
- "waifu2x_path_2 = waifu2x_path_1 + \"/waifu2x.py\"\n",
- "input_path = \"/content/waifu2x/input\"\n",
- "output_path = \"/content/waifu2x/output\"\n",
- "\n",
- "\n",
- "def IOFolderCheck():\n",
- " if os.path.exists(input_path) and os.path.exists(output_path):\n",
- " pass\n",
- " elif not os.path.exists(input_path) and os.path.exists(output_path):\n",
- " os.makedirs(input_path)\n",
- " elif os.path.exists(input_path) and not os.path.exists(output_path):\n",
- " os.makedirs(output_path)\n",
- " elif not os.path.exists(input_path) and not os.path.exists(output_path):\n",
- " os.makedirs(input_path)\n",
- " os.makedirs(output_path)\n",
- "\n",
- "\n",
- "if os.path.exists(waifu2x_path_1) and os.path.isdir(waifu2x_path_1) and os.path.exists(waifu2x_path_2) and os.path.isfile(waifu2x_path_2):\n",
- " IOFolderCheck()\n",
- "\n",
- " %cd /content/waifu2x/input\n",
- " clear_output()\n",
- "\n",
- "\n",
- " if image_source == \"file_upload\":\n",
- " uploaded = google.colab.files.upload()\n",
- " else:\n",
- " if url == \"\":\n",
- " display(HTML(\"❌ The url field is empty! \"))\n",
- " else:\n",
- " !wget -q {url}\n",
- " \n",
- "\n",
- " %cd /content\n",
- " clear_output()\n",
- "else:\n",
- " display(HTML(\"❌ Unable to locate waifu2x! Make sure you have already run the first cell first! \"))"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "pZJnNTad_W0I",
- "cellView": "form"
- },
- "source": [
- "#============================= FORM ============================= #\n",
- "#@markdown ← [Start] waifu2xLab \n",
- "input = \"\" #@param {type:\"string\"}\n",
- "#@markdown > If the \"input\" and \"output\" fields are empty, waifu2xLab will look for image(s) in \"/content/waifu2x/input\" and store the processed image(s) into \"/content/waifu2x/output\". By default, waifu2xLab will process anything inside the \"input\" folder.To process a single image, type in the absolute path of the file (e.g: /content/downloads/image.jpg).\n",
- "output = \"\" #@param {type:\"string\"}\n",
- "#@markdown > If left empty, the default output path will be used: /content/waifu2x/output\n",
- "\n",
- "#@markdown ---\n",
- "processor = \"CPU\" #@param [\"CPU\", \"GPU\"]\n",
- "mode = \"De-noise\" #@param [\"De-noise\", \"Upscale\", \"De-noise & Upscale\"]\n",
- "tta = \"Disabled\" #@param [\"Enabled\", \"Disabled\"]\n",
- "tta_level = \"8\" #@param [\"2\", \"4\", \"8\"]\n",
- "# tta_level = 2 #@param {type:\"slider\", min:2, max:8, step:2}\n",
- "denoise_level = 0 #@param {type:\"slider\", min:0, max:3, step:1}\n",
- "upscale_ratio = 1 #@param {type:\"slider\", min:1, max:10, step:1}\n",
- "output_quality = 100 #@param {type:\"slider\", min:1, max:100, step:1}\n",
- "color_profile = \"RGB\" #@param [\"RGB\", \"YUV\"]\n",
- "model = \"VGG7\" #@param [\"VGG7\", \"UpConv7\", \"ResNet10\", \"UpResNet10\"]\n",
- "output_format = \"PNG\" #@param [\"PNG\", \"WEBP\"]\n",
- "# ================================================================ #\n",
- "\n",
- "import os\n",
- "import IPython\n",
- "from IPython.display import HTML, clear_output\n",
- "import google.colab.files\n",
- "\n",
- "\n",
- "waifu2x_path_1 = \"/content/tools/waifu2x\"\n",
- "waifu2x_path_2 = waifu2x_path_1 + \"/waifu2x.py\"\n",
- "input_path = \"/content/waifu2x/input\"\n",
- "output_path = \"/content/waifu2x/output\"\n",
- "\n",
- "\n",
- "def IOFolderCheck():\n",
- " if os.path.exists(input_path) and os.path.exists(output_path):\n",
- " pass\n",
- " elif not os.path.exists(input_path) and os.path.exists(output_path):\n",
- " os.makedirs(input_path)\n",
- " elif os.path.exists(input_path) and not os.path.exists(output_path):\n",
- " os.makedirs(output_path)\n",
- " elif not os.path.exists(input_path) and not os.path.exists(output_path):\n",
- " os.makedirs(input_path)\n",
- " os.makedirs(output_path)\n",
- "\n",
- "\n",
- "# For now, the CPU core is hardcoded to use 4 cores.\n",
- "# The same goes for GPU, only GPU = 0 will be used.\n",
- "if processor == \"CPU\":\n",
- " processor = \"\"\n",
- "elif processor == \"GPU\":\n",
- " processor = \"-g 0\"\n",
- "\n",
- "# Checking for which mode is chosen.\n",
- "if mode == \"De-noise\":\n",
- " mode = \"noise\"\n",
- "\n",
- " upscale_ratio = 1\n",
- "elif mode == \"Upscale\":\n",
- " mode = \"scale\"\n",
- "\n",
- " denoise_level = 0\n",
- "elif mode == \"De-noise & Upscale\":\n",
- " mode = \"noise_scale\"\n",
- "\n",
- "# Checking whether TTA is enabled or not.\n",
- "if tta == \"Enabled\":\n",
- " tta1 = \"-t\"\n",
- " tta2 = \"-T\"\n",
- "elif tta == \"Disabled\":\n",
- " tta1 = \"\"\n",
- " tta2 = \"\"\n",
- " tta_level = \"\"\n",
- "\n",
- "# Checking for which arch/model is used and convert it into parameter number.\n",
- "if model == \"VGG7\":\n",
- " model = 0\n",
- "elif model == \"UpConv7\":\n",
- " model = 1\n",
- "elif model == \"ResNet10\":\n",
- " model = 2\n",
- "elif model == \"UpResNet10\":\n",
- " model = 3\n",
- "\n",
- "# Checking for the chosen color profile and convert it into parameter.\n",
- "if color_profile == \"YUV\":\n",
- " color_profile = \"y\"\n",
- "elif color_profile == \"RGB\":\n",
- " color_profile = \"rgb\"\n",
- "\n",
- "# Checking for which output format is chosen and convert it into parameter.\n",
- "if output_format == \"PNG\":\n",
- " output_format = \"png\"\n",
- "elif output_format == \"WEBP\":\n",
- " output_format = \"webp\"\n",
- "\n",
- "# Checking whether input and output fields are empty or not\n",
- "# If they are empty, the default storing path will be used (/content/waifu2x/output/)\n",
- "if input == \"\" and output == \"\":\n",
- " input = input_path\n",
- " output = output_path\n",
- "elif input == \"\" and not output == \"\":\n",
- " input = inpput_path\n",
- "elif not input == \"\" and output == \"\":\n",
- " output = output_path\n",
- "\n",
- "\n",
- "if os.path.exists(waifu2x_path_1) and os.path.isdir(waifu2x_path_1) and os.path.exists(waifu2x_path_2) and os.path.isfile(waifu2x_path_2):\n",
- " IOFolderCheck()\n",
- "\n",
- " %cd \"$waifu2x_path_1\"\n",
- " clear_output()\n",
- "\n",
- " !python waifu2x.py {processor} -m {mode} {tta1} {tta2} {tta_level} -n {denoise_level} -s {upscale_ratio} -c {color_profile} -a {model} -e {output_format} -q {output_quality} -i \"{input}\" -o \"{output}\"\n",
- "\n",
- " %cd \"/content\"\n",
- " clear_output()\n",
- "else:\n",
- " display(HTML(\"❌ Unable to locate waifu2x! Make sure you have already run the first cell first! \"))"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "uQT6GEq9Na_E"
- },
- "source": [
- "## ✧ Programming ✧ "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "FdDNhzc0NdeS"
- },
- "source": [
- "### Visual Studio Code "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "id": "QaKEKUrRNfHI"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] code-server
\n",
- "# @markdown VS Code in the browser. Run VS Code on any machine anywhere and access it in the browser.
\n",
- "# @markdown \n",
- "# @markdown ⚙️ Install Configuration ⚙️ \n",
- "TOKEN = \"\" \n",
- "REGION = \"AP\"\n",
- "USE_FREE_TOKEN = True #{type:\"boolean\"}\n",
- "INSTALL_EXTENSION = \"ms-python.python ms-vscode.cpptools ritwickdey.LiveServer sidthesloth.html5-boilerplate tht13.python\" #@param {type:\"string\"}\n",
- "USER_DATA_DIR = \"/content/tools/code-server/userdata\" #@param {type:\"string\"}\n",
- "OPEN_FOLDER = \"/content/\" #@param {type: \"string\"} \n",
- "TAG_NAME = \"3.11.1\" #@param {type: \"string\"}\n",
- "#@markdown > See HERE to get the tag name.\n",
- "PACKAGES = \"amd64\" #@param [\"x86_64\", \"amd64\"]\n",
- "RUN_LATEST = True\n",
- "PORT_FORWARD = \"argotunnel\" #[\"ngrok\", \"localhost\", \"argotunnel\"]\n",
- "# ================================================================ #\n",
- "\n",
- "import os,sys, pathlib, zipfile, re, tarfile, shutil\n",
- "import urllib.request\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "HOME = os.path.expanduser(\"~\")\n",
- "\n",
- "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
- " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
- " \"MiXLab/master/resources/mixlab.py\"\n",
- " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
- "\n",
- "from mixlab import (\n",
- " runSh,\n",
- " loadingAn,\n",
- " PortForward_wrapper,\n",
- " findProcess,\n",
- " displayUrl,\n",
- " findPackageR,\n",
- " textAn\n",
- ")\n",
- "\n",
- "loadingAn()\n",
- "\n",
- "os.makedirs('tools/code-server/', exist_ok=True)\n",
- "os.makedirs('tools/temp', exist_ok=True)\n",
- "binFile = ''\n",
- "\n",
- "# Downloading code-server\n",
- "if not os.path.exists(\"tools/code-server/README.md\"):\n",
- " os.system(\"apt install net-tools -y\")\n",
- "\n",
- " BASE_URL = r\"https://github.com/cdr/code-server/\"\n",
- " rawRdata = findPackageR(\"cdr/code-server\",\n",
- " f\"linux-{PACKAGES}.tar.gz\",\n",
- " False if RUN_LATEST else TAG_NAME,\n",
- " all_=True)\n",
- " file_name = rawRdata['assets']['name']\n",
- " urlF = rawRdata['assets']['browser_download_url']\n",
- " output_file = \"tools/temp/code-server.tar.gz\"\n",
- "\n",
- " textAn(f\"Installing code-server {rawRdata['tag_name']} ...\", ty=\"twg\")\n",
- " \n",
- " urllib.request.urlretrieve(urlF, output_file)\n",
- " with tarfile.open(output_file, 'r:gz') as tar_ref:\n",
- " tar_ref.extractall('tools/temp/')\n",
- " os.renames(\"tools/temp/\"+file_name[:-7], 'tools/code-server/')\n",
- " try:\n",
- " pathlib.Path(output_file).unlink()\n",
- " except FileNotFoundError:\n",
- " pass\n",
- " try:\n",
- " os.remove('tools/code-server/lib/libstdc++.so.6')\n",
- " except FileNotFoundError:\n",
- " pass\n",
- " \n",
- " binList = ['bin/code-server',\n",
- " 'code-server']\n",
- " for b in binList:\n",
- " if os.path.exists('tools/code-server/'+b):\n",
- " binFile = b\n",
- " break\n",
- " \n",
- " # workspace settings\n",
- " configScript = \"\"\"{\n",
- " \"workbench.colorTheme\": \"Default Dark+\",\n",
- " \"editor.minimap.enabled\": false\n",
- "}\n",
- "\"\"\"\n",
- " os.makedirs(f'{OPEN_FOLDER}/.vscode', exist_ok=True)\n",
- " with open(f'{OPEN_FOLDER}/.vscode/settings.json', 'w') as w:w.write(configScript)\n",
- "\n",
- " if INSTALL_EXTENSION:\n",
- " perExtension = INSTALL_EXTENSION.split(' ')\n",
- " for l in perExtension:\n",
- " cmdE = f\"./{binFile} \" \\\n",
- " f\"--user-data-dir {USER_DATA_DIR}\" \\\n",
- " f\" --install-extension {l}\"\n",
- " runSh(cmdE, cd=\"tools/code-server\", shell=True)\n",
- "\n",
- "\n",
- "if not findProcess(\"node\", \"--extensions-dir\"):\n",
- " cmdDo = f\"./{binFile} --auth none \" \\\n",
- " f\" --port 5050 --user-data-dir {USER_DATA_DIR}\" \\\n",
- " \" &\"\n",
- " runSh(cmdDo, \n",
- " cd=\"tools/code-server\",\n",
- " shell=True)\n",
- "\n",
- "clear_output()\n",
- "\n",
- "server = PortForward_wrapper(\n",
- " PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['code-server', 5050, 'http']], REGION.lower(), \n",
- " [f\"{HOME}/.ngrok2/code-server.yml\", 30499]\n",
- ").start('code-server', displayB=False)\n",
- "displayUrl(server, EcUrl=f\"/?folder={OPEN_FOLDER}\")"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "id": "-HjoEvVINmgx"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Port Forwarding
\n",
- "# @markdown Type in whatever PORT you want and separate them with comma and space. `80, 8080, 4040`
\n",
- "USE_FREE_TOKEN = True \n",
- "TOKEN = \"\" \n",
- "REGION = \"US\" #[\"US\", \"EU\", \"AP\", \"AU\", \"SA\", \"JP\", \"IN\"]\n",
- "PORT_LIST = \"\" #@param {type:\"string\"}\n",
- "PORT_FORWARD = \"argotunnel\" #[\"ngrok\", \"localhost\", \"argotunnel\"]\n",
- "# ================================================================ #\n",
- "\n",
- "import os, pathlib, zipfile, re\n",
- "import urllib.request\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "HOME = os.path.expanduser(\"~\")\n",
- "\n",
- "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
- " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
- " \"MiXLab/master/resources/mixlab.py\"\n",
- " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
- "\n",
- "from mixlab import (\n",
- " runSh,\n",
- " loadingAn,\n",
- " PortForward_wrapper,\n",
- " findProcess,\n",
- " displayUrl,\n",
- " textAn\n",
- ")\n",
- "\n",
- "loadingAn()\n",
- "\n",
- "os.makedirs('tools/', exist_ok=True)\n",
- "\n",
- "clear_output()\n",
- "\n",
- "m = []\n",
- "splitPortList = PORT_LIST.split(',')\n",
- "for p in splitPortList:\n",
- " p = int(p)\n",
- " m.append([f\"s{p}\", p, 'http'])\n",
- "\n",
- "Server = PortForward_wrapper(\n",
- " PORT_FORWARD, TOKEN, USE_FREE_TOKEN, m, REGION.lower(), \n",
- " [f\"{HOME}/.ngrok2/randomPortOpen.yml\", 45535]\n",
- ")\n",
- "\n",
- "for l in m:\n",
- " displayUrl(Server.start(l[0], displayB=False, v=False), \n",
- " pNamU=f\"{l[0][1:]} -> \", cls=False)"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "_wlFbVS6JcSL"
- },
- "source": [
- "## ✧ Remote Connection ✧ "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "KFpBZnkQhQz2"
- },
- "source": [
- "**!! NOT FOR CRYPTOCURRENCY MINING !!** "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "WaSgbPEch7KH"
- },
- "source": [
- "### Chrome Remote Desktop "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "1-hL0LM7vRH8"
- },
- "source": [
- "Original code written by PradyumnaKrishna (modified for MiXLab use)"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "t4yNp3KmLtZ6",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Create user \n",
- "username = \"MiXLab\" #@param {type:\"string\"}\n",
- "password = \"123456qwerty\" #@param {type:\"string\"}\n",
- "# ================================================================ #\n",
- "\n",
- "import os\n",
- "\n",
- "print(\"Creating user and setting it up...\")\n",
- "\n",
- "# Creation of user\n",
- "os.system(f\"useradd -m {username}\")\n",
- "\n",
- "# Add user to sudo group\n",
- "os.system(f\"adduser {username} sudo\")\n",
- " \n",
- "# Set password of user to 'root'\n",
- "os.system(f\"echo '{username}:{password}' | sudo chpasswd\")\n",
- "\n",
- "# Change default shell from sh to bash\n",
- "os.system(\"sed -i 's/\\/bin\\/sh/\\/bin\\/bash/g' /etc/passwd\")\n",
- "\n",
- "print(\"User created and configured.\")"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "Q6bl1b0EifVG",
- "cellView": "form"
- },
- "source": [
- "#============================= FORM ============================= #\n",
- "#@markdown ← [Start] Remote Desktop \n",
- "#@markdown \n",
- "#@markdown \tClick HERE (opens in new tab) and set up a computer first. \n",
- "#@markdown \tAfter you have done setting up a computer, get the Debian Linux command / authcode and paste it into the field below. \n",
- "#@markdown \tRun the cell and wait for it to finish. \n",
- "#@markdown \tNow, go to HERE (opens in new tab) and you should see a machine pops up in there. \n",
- "#@markdown \tClick on that machine to remote it and enter the pin. \n",
- "#@markdown \n",
- "CRP = \"\" #@param {type:\"string\"}\n",
- "\n",
- "#@markdown Enter a PIN that is equal to or more than 6 digits\n",
- "Pin = 123456 #@param {type: \"integer\"}\n",
- "\n",
- "#@markdown > It takes about 4 to 5 minutes for the installation process.\n",
- "#================================================================ #\n",
- "\n",
- "import os\n",
- "import subprocess\n",
- "\n",
- "\n",
- "class CRD:\n",
- " def __init__(self):\n",
- " os.system(\"apt update\")\n",
- " self.installCRD()\n",
- " self.installDesktopEnvironment()\n",
- " self.installGoogleChorme()\n",
- " self.finish()\n",
- "\n",
- " @staticmethod\n",
- " def installCRD():\n",
- " print(\"Installing Chrome Remote Desktop...\")\n",
- " subprocess.run(['wget', 'https://dl.google.com/linux/direct/chrome-remote-desktop_current_amd64.deb'], stdout=subprocess.PIPE)\n",
- " subprocess.run(['dpkg', '--install', 'chrome-remote-desktop_current_amd64.deb'], stdout=subprocess.PIPE)\n",
- " subprocess.run(['apt', 'install', '--assume-yes', '--fix-broken'], stdout=subprocess.PIPE)\n",
- "\n",
- " @staticmethod\n",
- " def installDesktopEnvironment():\n",
- " print(\"Installing Desktop Environment...\")\n",
- " os.system(\"export DEBIAN_FRONTEND=noninteractive\")\n",
- " os.system(\"apt install --assume-yes xfce4 desktop-base xfce4-terminal\")\n",
- " os.system(\"bash -c 'echo \\\"exec /etc/X11/Xsession /usr/bin/xfce4-session\\\" > /etc/chrome-remote-desktop-session'\")\n",
- " os.system(\"apt remove --assume-yes gnome-terminal\")\n",
- " os.system(\"apt install --assume-yes xscreensaver\")\n",
- " os.system(\"systemctl disable lightdm.service\")\n",
- "\n",
- " @staticmethod\n",
- " def installGoogleChorme():\n",
- " print(\"Installing Google Chrome...\")\n",
- " subprocess.run([\"wget\", \"https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb\"], stdout=subprocess.PIPE)\n",
- " subprocess.run([\"dpkg\", \"--install\", \"google-chrome-stable_current_amd64.deb\"], stdout=subprocess.PIPE)\n",
- " subprocess.run(['apt', 'install', '--assume-yes', '--fix-broken'], stdout=subprocess.PIPE)\n",
- "\n",
- " @staticmethod\n",
- " def finish():\n",
- " print(\"Finalizing...\")\n",
- " os.system(f\"adduser {username} chrome-remote-desktop\")\n",
- " command = f\"{CRP} --pin={Pin}\"\n",
- " os.system(f\"su - {username} -c '{command}'\")\n",
- " os.system(\"service chrome-remote-desktop start\")\n",
- " print(\"Finished Succesfully!\")\n",
- "\n",
- "\n",
- "try:\n",
- " if username:\n",
- " if CRP == \"\":\n",
- " print(\"Please enter the authcode from the Chrome Remote Desktop site!\")\n",
- " elif len(str(Pin)) < 6:\n",
- " print(\"Enter a PIN that is equal to or more than 6 digits!\")\n",
- " else:\n",
- " CRD()\n",
- "except NameError as e:\n",
- " print(\"Username variable not found! Create a user first!\")"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "Th3Qyn2uttiW"
- },
- "source": [
- "#### Optionals "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "vk2qtOTGIFsQ",
- "cellView": "form"
- },
- "source": [
- "#@title **Google Drive Mount**\n",
- "#@markdown Google Drive used as Persistance HDD for files. \n",
- "#@markdown Mounted at `user` Home directory inside drive folder\n",
- "#@markdown (If `username` variable not defined then use root as default).\n",
- "\n",
- "def MountGDrive():\n",
- " from google.colab import drive\n",
- "\n",
- " ! runuser -l $user -c \"yes | python3 -m pip install --user google-colab\" > /dev/null 2>&1\n",
- "\n",
- " mount = \"\"\"from os import environ as env\n",
- "from google.colab import drive\n",
- "\n",
- "env['CLOUDSDK_CONFIG'] = '/content/.config'\n",
- "drive.mount('{}')\"\"\".format(mountpoint)\n",
- "\n",
- " with open('/content/mount.py', 'w') as script:\n",
- " script.write(mount)\n",
- "\n",
- " ! runuser -l $user -c \"python3 /content/mount.py\"\n",
- "\n",
- "try:\n",
- " if username:\n",
- " mountpoint = \"/home/\"+username+\"/drive\"\n",
- " user = username\n",
- "except NameError:\n",
- " print(\"username variable not found, mounting at `/content/drive' using `root'\")\n",
- " mountpoint = '/content/drive'\n",
- " user = 'root'\n",
- "\n",
- "MountGDrive()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "8icuQYnyKDLk",
- "cellView": "form"
- },
- "source": [
- "#@title **SSH**\n",
- "\n",
- "! pip install colab_ssh --upgrade &> /dev/null\n",
- "\n",
- "Ngrok = False #@param {type:'boolean'}\n",
- "Agro = False #@param {type:'boolean'}\n",
- "\n",
- "\n",
- "#@markdown Copy authtoken from https://dashboard.ngrok.com/auth (only for ngrok)\n",
- "ngrokToken = \"\" #@param {type:'string'}\n",
- "\n",
- "\n",
- "def runNGROK():\n",
- " from colab_ssh import launch_ssh\n",
- " from IPython.display import clear_output\n",
- " launch_ssh(ngrokToken, password)\n",
- " clear_output()\n",
- "\n",
- " print(\"ssh\", username, end='@')\n",
- " ! curl -s http://localhost:4040/api/tunnels | python3 -c \\\n",
- " \"import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'][6:].replace(':', ' -p '))\"\n",
- "\n",
- "\n",
- "def runAgro():\n",
- " from colab_ssh import launch_ssh_cloudflared\n",
- " launch_ssh_cloudflared(password=password)\n",
- "\n",
- "\n",
- "try:\n",
- " if username:\n",
- " pass\n",
- " elif password:\n",
- " pass\n",
- "except NameError:\n",
- " print(\"No user found using username and password as 'root'\")\n",
- " username='root'\n",
- " password='root'\n",
- "\n",
- "\n",
- "if Agro and Ngrok:\n",
- " print(\"You can't do that\")\n",
- " print(\"Select only one of them\")\n",
- "elif Agro:\n",
- " runAgro()\n",
- "elif Ngrok:\n",
- " if ngrokToken == \"\":\n",
- " print(\"No ngrokToken Found, Please enter it\")\n",
- " else:\n",
- " runNGROK()\n",
- "else:\n",
- " print(\"Select one of them\")"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "id": "OXsG6_pxeEFu"
- },
- "source": [
- "#@title Package Installer { vertical-output: true }\n",
- "run = False #@param {type:\"boolean\"}\n",
- "#@markdown *Package management actions (gasp)*\n",
- "action = \"Install\" #@param [\"Install\", \"Check Installed\", \"Remove\"] {allow-input: true}\n",
- "\n",
- "package = \"wget\" #@param {type:\"string\"}\n",
- "system = \"apt\" #@param [\"apt\", \"\"]\n",
- "\n",
- "def install(package=package, system=system):\n",
- " if system == \"apt\":\n",
- " !apt --fix-broken install > /dev/null 2>&1\n",
- " !killall apt > /dev/null 2>&1\n",
- " !rm /var/lib/dpkg/lock-frontend\n",
- " !dpkg --configure -a > /dev/null 2>&1\n",
- "\n",
- " !apt-get install -o Dpkg::Options::=\"--force-confold\" --no-install-recommends -y $package\n",
- " \n",
- " !dpkg --configure -a > /dev/null 2>&1 \n",
- " !apt update > /dev/null 2>&1\n",
- "\n",
- " !apt install $package > /dev/null 2>&1\n",
- "\n",
- "def check_installed(package=package, system=system):\n",
- " if system == \"apt\":\n",
- " !apt list --installed | grep $package\n",
- "\n",
- "def remove(package=package, system=system):\n",
- " if system == \"apt\":\n",
- " !apt remove $package\n",
- "\n",
- "if run:\n",
- " if action == \"Install\":\n",
- " install()\n",
- " if action == \"Check Installed\":\n",
- " check_installed()\n",
- " if action == \"Remove\":\n",
- " remove()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "UoeBdz6_KE6a",
- "cellView": "form"
- },
- "source": [
- "#@title **Colab Shutdown**\n",
- "\n",
- "#@markdown To Kill NGROK Tunnel\n",
- "NGROK = False #@param {type:'boolean'}\n",
- "\n",
- "#@markdown To Unmount GDrive\n",
- "GDrive = False #@param {type:'boolean'}\n",
- "\n",
- "#@markdown To Sleep Colab\n",
- "Sleep = True #@param {type:'boolean'}\n",
- "\n",
- "if NGROK:\n",
- " ! killall ngrok\n",
- "\n",
- "if GDrive:\n",
- " with open('/content/unmount.py', 'w') as unmount:\n",
- " unmount.write(\"\"\"from google.colab import drive\n",
- "drive.flush_and_unmount()\"\"\")\n",
- " \n",
- " try:\n",
- " if user:\n",
- " ! runuser $user -c 'python3 /content/unmount.py'\n",
- " except NameError:\n",
- " print(\"Google Drive not Mounted\")\n",
- "\n",
- "if Sleep:\n",
- " from time import sleep\n",
- " sleep(43200)"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "CKxGMNKUJloT"
- },
- "source": [
- "### IceMW + noVNC "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "NXhG3KGGJqtf",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] IceWM \n",
- "USE_FREE_TOKEN = True\n",
- "TOKEN = \"\"\n",
- "REGION = \"US\"\n",
- "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
- "\n",
- "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
- " PORT_FORWARD = \"argotunnel\"\n",
- "elif Tunneling == \"localhost.run\":\n",
- " PORT_FORWARD = \"localhost\"\n",
- "elif Tunneling == \"ngrok\":\n",
- " PORT_FORWARD = \"ngrok\"\n",
- "# ================================================================ #\n",
- "\n",
- "import os, random, string, urllib.request, time\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "HOME = os.path.expanduser(\"~\")\n",
- "runW = get_ipython()\n",
- "\n",
- "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
- " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
- " \"MiXLab/master/resources/mixlab.py\"\n",
- " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
- "\n",
- "from mixlab import (\n",
- " runSh,\n",
- " loadingAn,\n",
- " PortForward_wrapper,\n",
- " displayUrl,\n",
- " findProcess,\n",
- " CWD,\n",
- " textAn,\n",
- ")\n",
- "\n",
- "# Defining Github latest tag so the code can fetch the latest release, if there is any\n",
- "def latestTag(link):\n",
- " import re\n",
- " from urllib.request import urlopen\n",
- " htmlF = urlopen(link+\"/releases/latest\").read().decode('UTF-8')\n",
- " return re.findall(r'.+\\/tag\\/([.0-9A-Za-z]+)\".+/', htmlF)[0]\n",
- "\n",
- "\n",
- "loadingAn()\n",
- "\n",
- "os.makedirs(\"tools/noVnc\", exist_ok=True)\n",
- "\n",
- "# Generating the password\n",
- "try:\n",
- " print(f\"Found old password! : {password}\")\n",
- "except:\n",
- " password = ''.join(random.choice(string.ascii_letters + string.digits) for i in range(20))\n",
- "\n",
- "clear_output()\n",
- "\n",
- "if not findProcess(\"Xtightvnc\", \":1\"):\n",
- " textAn(\"Please wait while noVNC is being prepared...\")\n",
- " os.makedirs(f'{HOME}/.vnc', exist_ok=True)\n",
- " runW.system_raw('apt update -y')\n",
- " runW.system_raw('apt install -y icewm firefox tightvncserver autocutsel xterm')\n",
- " runW.system_raw(rf'echo \"{password}\" | vncpasswd -f > ~/.vnc/passwd')\n",
- " data = \"\"\"\n",
- "#!/bin/bash\n",
- "xrdb $HOME/.Xresources\n",
- "xsetroot -solid black -cursor_name left_ptr\n",
- "autocutsel -fork\n",
- "icewm-session &\n",
- "\"\"\"\n",
- " with open(f'{HOME}/.vnc/xstartup', 'w+') as wNow: wNow.write(data)\n",
- " os.chmod(f'{HOME}/.vnc/xstartup', 0o755)\n",
- " os.chmod(f'{HOME}/.vnc/passwd', 0o400)\n",
- " \n",
- " runSh('sudo vncserver :1 -geometry 1440x870 -economictranslate -dontdisconnect &', shell=True)\n",
- "\n",
- " BASE_URL = \"https://github.com/geek1011/easy-novnc\"\n",
- " LATEST_TAG = latestTag(BASE_URL)\n",
- " output_file = \"tools/noVnc/easy-noVnc_linux-64bit\"\n",
- " file_name = f\"easy-novnc_linux-64bit\"\n",
- " urlF = f\"{BASE_URL}/releases/download/{LATEST_TAG}/{file_name}\"\n",
- "\n",
- " try:\n",
- " urllib.request.urlretrieve(urlF, output_file)\n",
- " except OSError:\n",
- " pass\n",
- "\n",
- " os.chmod(output_file, 0o755)\n",
- "\n",
- "if not findProcess(\"easy-noVnc_linux-64bit\", '--addr \"0.0.0.0:6080\"'):\n",
- " cmdDo = \"./easy-noVnc_linux-64bit --addr 0.0.0.0:6080 --port 5901\" \\\n",
- " \" &\"\n",
- " runSh(cmdDo, cd=\"tools/noVnc/\", shell=True)\n",
- "\n",
- "clear_output()\n",
- "\n",
- "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['vnc', 6080, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/noVNC.yml\", 4455])\n",
- "data = Server.start('vnc', displayB=False)\n",
- "displayUrl(data, pNamU='noVnc : ', EcUrl=f'/vnc.html?autoconnect=true&password={password}&path=vnc&resize=scale&reconnect=true&show_dot=true')"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "COqwo7iH6_vu"
- },
- "source": [
- "### NoMachine "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "id": "eypiLPD8UtD2"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] NoMachine \n",
- "USE_FREE_TOKEN = False\n",
- "TOKEN = \"\" # @param {type:\"string\"}\n",
- "REGION = \"US\"\n",
- "PORT_FORWARD = \"ngrok\"\n",
- "# @markdown > You would need to provide your own ngrok Authtoken.Click here to register for a free ngrok account.Click here to copy your ngrok Authtoken.Click here to download NoMachine.\n",
- "# ================================================================ #\n",
- "\n",
- "import os\n",
- "import tarfile\n",
- "import urllib.request\n",
- "import shutil\n",
- "import time\n",
- "from IPython.display import HTML, clear_output\n",
- "from subprocess import Popen\n",
- "\n",
- "APT_INSTALL = \"apt install -y \"\n",
- "HOME = os.path.expanduser(\"~\")\n",
- "runW = get_ipython()\n",
- "\n",
- "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
- " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
- " \"MiXLab/master/resources/mixlab.py\"\n",
- " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
- "\n",
- "from mixlab import (\n",
- " runSh,\n",
- " loadingAn,\n",
- " PortForward_wrapper,\n",
- " findProcess,\n",
- " textAn,\n",
- " displayUrl\n",
- ")\n",
- "\n",
- "loadingAn()\n",
- "\n",
- "os.makedirs(\"tools/nomachine\", exist_ok=True)\n",
- "os.makedirs(\"/root/.icewm\", exist_ok=True)\n",
- "\n",
- "# password ganarate\n",
- "try:\n",
- " print(f\"Found the old password! : {password}\")\n",
- "except:\n",
- " password = 'nomachine'\n",
- "\n",
- "clear_output()\n",
- "\n",
- "start = time.time()\n",
- "if not os.path.exists(\"tools/nomachine/NX/bin/nxserver\"):\n",
- " textAn(\"Please wait while noMachine is being prepared...\")\n",
- "\n",
- " runW.system_raw('apt update --quiet --force-yes')\n",
- "\n",
- " # Minimal install \n",
- " runW.system_raw(\n",
- " 'apt install --quiet --force-yes --no-install-recommends \\\n",
- " icewm x11-xserver-utils firefox xterm pcmanfm')\n",
- "\n",
- " # icewm theme\n",
- " with open('/root/.icewm/theme', 'w') as w:\n",
- " w.write('Theme=\"NanoBlue/default.theme\"')\n",
- " \n",
- " # with open('/root/.icewm/toolbar', 'w') as w:\n",
- " # w.write('prog \"chromium\" ! chromium-browser --no-sandbox')\n",
- "\n",
- " # nomachine\n",
- " \n",
- " staticUrl = \"https://raw.githubusercontent.com/shirooo39/MiXLab/master/resources/packages/nomachine/nomachine_6.9.2_1_x86_64.tar.gz\"\n",
- " configUrl = \"https://raw.githubusercontent.com/shirooo39/MiXLab/master/resources/packages/nomachine/NXetc.tar.gz\"\n",
- " \n",
- " output_file = 'tools/nomachine/nm.tar.gz'\n",
- " config_file = 'tools/nomachine/etc.tar.gz'\n",
- " urllib.request.urlretrieve(staticUrl, output_file)\n",
- " urllib.request.urlretrieve(configUrl, config_file)\n",
- " \n",
- " with tarfile.open(output_file, 'r:gz') as t:t.extractall('tools/nomachine')\n",
- " runSh('./nxserver --install', cd='tools/nomachine/NX', shell=True)\n",
- " runSh('./nxserver --stop', cd='tools/nomachine/NX/bin', shell=True)\n",
- " \n",
- " shutil.rmtree('tools/nomachine/NX/etc')\n",
- " with tarfile.open(config_file, 'r:gz') as t:t.extractall('tools/nomachine/NX')\n",
- " os.remove(config_file)\n",
- " \n",
- " os.remove(output_file)\n",
- " runSh('./nxserver --startup', cd='tools/nomachine/NX/bin', shell=True)\n",
- " runW.system_raw(\"echo root:$password | chpasswd\")\n",
- "\n",
- "end = time.time()\n",
- "\n",
- "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['nomachine', 4000, 'tcp']], REGION.lower(), [f\"{HOME}/.ngrok2/nomachine.yml\", 8459])\n",
- "\n",
- "data = Server.start('nomachine', displayB=False)\n",
- "host, port = data['url'][7:].split(':')\n",
- "user = os.popen('whoami').read()\n",
- "\n",
- "# Colors\n",
- "bttxt = 'hsla(10, 50%, 85%, 1)'\n",
- "btcolor = 'hsla(10, 86%, 56%, 1)'\n",
- "btshado = 'hsla(10, 40%, 52%, .4)'\n",
- "\n",
- "clear_output()\n",
- "\n",
- "display(HTML(\"\"\"NoMachine Configuration
Username Password Protocol Host Port
\"\"\"+user+\"\"\" \"\"\"+password+\"\"\" NX \"\"\"+host+\"\"\" \"\"\"+port+\"\"\"
Click HERE to see how to use the configuration.\"\"\"))"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "JM1Do14AKIdF"
- },
- "source": [
- "### SSH + noVNC "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "5-jp3jmlKKk5",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] SSH \n",
- "CREATE_VNC = True #@param {type:\"boolean\"}\n",
- "CREATE_SSH = True #@param {type:\"boolean\"}\n",
- "USE_FREE_TOKEN = True\n",
- "TOKEN = \"\"\n",
- "#TOKEN = \"\" #@param {type:\"string\"}\n",
- "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
- "\n",
- "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
- " PORT_FORWARD = \"argotunnel\"\n",
- "elif Tunneling == \"localhost.run\":\n",
- " PORT_FORWARD = \"localhost\"\n",
- "elif Tunneling == \"ngrok\":\n",
- " PORT_FORWARD = \"ngrok\"\n",
- "# ================================================================ #\n",
- "\n",
- "import os, random, string, urllib.request, time\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "HOME = os.path.expanduser(\"~\")\n",
- "runW = get_ipython()\n",
- "\n",
- "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
- " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
- " \"MiXLab/master/resources/mixlab.py\"\n",
- " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
- "\n",
- "from mixlab import (\n",
- " runSh,\n",
- " loadingAn,\n",
- " PortForward_wrapper,\n",
- " displayUrl,\n",
- " findProcess,\n",
- " CWD,\n",
- " textAn,\n",
- ")\n",
- "\n",
- "loadingAn()\n",
- "\n",
- "# Generating password\n",
- "try:\n",
- " print(f\"Found the old password! : {password}\")\n",
- "except:\n",
- " password = ''.join(random.choice(string.ascii_letters + string.digits) for i in range(20))\n",
- "\n",
- "if CREATE_SSH:\n",
- " USE_FREE_TOKEN = False\n",
- "\n",
- "# Setting up the root password\n",
- "if CREATE_SSH and os.path.exists('/var/run/sshd') == False:\n",
- " # Setting up the SSH Daemon\n",
- " runSh('apt install -qq -o=Dpkg::Use-Pty=0 openssh-server pwgen')\n",
- " runW.system_raw(\"echo root:$password | chpasswd\")\n",
- " os.makedirs(\"/var/run/sshd\", exist_ok=True)\n",
- " runW.system_raw('echo \"PermitRootLogin yes\" >> /etc/ssh/sshd_config')\n",
- " runW.system_raw('echo \"PasswordAuthentication yes\" >> /etc/ssh/sshd_config')\n",
- " runW.system_raw('echo \"LD_LIBRARY_PATH=/usr/lib64-nvidia\" >> /root/.bashrc')\n",
- " runW.system_raw('echo \"export LD_LIBRARY_PATH\" >> /root/.bashrc')\n",
- "\n",
- " # Running the SSH Daemon\n",
- " if not findProcess(\"/usr/sbin/sshd\", command=\"-D\"):\n",
- " runSh('/usr/sbin/sshd -D &', shell=True)\n",
- "\n",
- "clear_output()\n",
- "\n",
- "if CREATE_VNC:\n",
- " # Start = time.time()\n",
- " textAn(\"Please wait while noVNC is being prepared...\")\n",
- " os.makedirs(f'{HOME}/.vnc', exist_ok=True)\n",
- " runW.system_raw('add-apt-repository -y ppa:apt-fast/stable < /dev/null')\n",
- " runW.system_raw('echo debconf apt-fast/maxdownloads string 16 | debconf-set-selections')\n",
- " runW.system_raw('echo debconf apt-fast/dlflag boolean true | debconf-set-selections')\n",
- " runW.system_raw('echo debconf apt-fast/aptmanager string apt-get | debconf-set-selections')\n",
- " runW.system_raw('apt install -y apt-fast')\n",
- " runW.system_raw('apt-fast install -y xfce4 xfce4-goodies firefox tightvncserver autocutsel')\n",
- " runW.system_raw(rf'echo \"{password}\" | vncpasswd -f > ~/.vnc/passwd')\n",
- " data = \"\"\"\n",
- "#!/bin/bash\n",
- "xrdb $HOME/.Xresources\n",
- "autocutsel -fork\n",
- "startxfce4 &\n",
- "\"\"\"\n",
- " with open(f'{HOME}/.vnc/xstartup', 'w+') as wNow: wNow.write(data)\n",
- " os.chmod(f'{HOME}/.vnc/xstartup', 0o755)\n",
- " os.chmod(f'{HOME}/.vnc/passwd', 0o400)\n",
- " runSh('sudo vncserver &', shell=True)\n",
- " runSh(f'git clone https://github.com/novnc/noVNC.git {CWD}/noVNC')\n",
- " runSh(\"bash noVNC/utils/launch.sh --listen 6080 --vnc localhost:5901 &\", shell=True)\n",
- " # End = time.time()\n",
- "\n",
- "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['ssh', 22, 'tcp'], ['vnc', 6080, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/noVNC_SSH.yml\", 4455])\n",
- "data = Server.start('ssh', displayB=False)\n",
- "\n",
- "clear_output()\n",
- "\n",
- "Host,port = data['url'][7:].split(':')\n",
- "data2 = Server.start('vnc', displayB=False)\n",
- "\n",
- "if CREATE_VNC:\n",
- " displayUrl(data2, pNamU='noVnc : ', EcUrl=f'/vnc.html?autoconnect=true&password={password}')\n",
- "if CREATE_SSH:\n",
- " display(HTML(\"\"\"SSH Configuration
Host Port Password
\"\"\"+Host+\"\"\" \"\"\"+port+\"\"\" \"\"\"+password+\"\"\"
Simple SSH Commands Terminal connect ssh root@\"\"\"+Host+\"\"\" -p \"\"\"+port+\"\"\" SOCKS5 proxy ssh -D 8282 -q -C -N root@\"\"\"+Host+\"\"\" -p \"\"\"+port+\"\"\"
Click HERE to see how to use the configuration.
\"\"\"))"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "0vHRnizI9BXA"
- },
- "source": [
- "### WeTTY "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "FMd-AFnVYZid",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] WeTTY \n",
- "# @markdown Terminal access in browser over HTTP / HTTPS.\n",
- "USE_FREE_TOKEN = True\n",
- "TOKEN = \"\"\n",
- "REGION = \"US\"\n",
- "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
- "\n",
- "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
- " PORT_FORWARD = \"argotunnel\"\n",
- "elif Tunneling == \"localhost.run\":\n",
- " PORT_FORWARD = \"localhost\"\n",
- "elif Tunneling == \"ngrok\":\n",
- " PORT_FORWARD = \"ngrok\"\n",
- "# ================================================================ #\n",
- "\n",
- "import os, tarfile, urllib.request\n",
- "from IPython.display import clear_output\n",
- "from subprocess import Popen\n",
- "\n",
- "HOME = os.path.expanduser(\"~\")\n",
- "\n",
- "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
- " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
- " \"MiXLab/master/resources/mixlab.py\"\n",
- " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
- "\n",
- "from mixlab import (\n",
- " loadingAn,\n",
- " PortForward_wrapper,\n",
- " findProcess,\n",
- " displayUrl\n",
- ")\n",
- "\n",
- "loadingAn()\n",
- "os.makedirs('tools/temp', exist_ok=True)\n",
- "\n",
- "if not os.path.exists(\"tools/wetty/wetty\"):\n",
- " # Build WeTTy from source\n",
- " # os.system(\"git clone https://github.com/butlerx/wetty.git tools/wetty\")\n",
- " # Popen('npm install'.split(), cwd='tools/wetty').wait()\n",
- " # Popen('npm run-script build'.split(), cwd='tools/wetty').wait()\n",
- " # Popen('npm i -g'.split(), cwd='tools/wetty').wait()\n",
- " # --------------------------------------------------\n",
- " # Download a pre-built WeTTy package from github\n",
- " wettyBF = 'https://raw.githubusercontent.com/shirooo39/MiXLab/master/resources/packages/wetty/wetty.tar.gz'\n",
- " fileSN = 'tools/temp/wetty.tar.gz'\n",
- " urllib.request.urlretrieve(wettyBF, fileSN)\n",
- " with tarfile.open(fileSN, 'r:gz') as t:t.extractall('tools/')\n",
- " os.remove(fileSN)\n",
- "\n",
- "if not findProcess(\"wetty\", \"--port\"):\n",
- "# Popen(\n",
- "# r'wetty --port 4343 --bypasshelmet \\\n",
- "# -b \"/\" -c \"/bin/bash\"'.split(), \n",
- "# cwd='/content')\n",
- " Popen(\n",
- " r'tools/wetty/wetty --port 4343 --bypasshelmet \\\n",
- " -b \"/\" -c \"/bin/bash\"'.split(), \n",
- " cwd='/content')\n",
- "\n",
- "clear_output()\n",
- "\n",
- "server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['wetty', '4343', 'http']], REGION.lower, [f\"{HOME}/.ngrok2/wetty.yml\", 31199]).start('wetty', displayB=True)\n",
- "displayUrl(server, pNamU='WeTTy : ')"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "9JBIZh3OZBaL"
- },
- "source": [
- "## ✧ System Tools ✧ "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "2zGMePbPQJWI"
- },
- "source": [
- "### Glances "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "vLhOue7XQJWa",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] Glances \n",
- "# @markdown Glances is a cross-platform system monitoring tool written in Python.
\n",
- "USE_FREE_TOKEN = True\n",
- "TOKEN = \"\"\n",
- "REGION = \"US\"\n",
- "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
- "\n",
- "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
- " PORT_FORWARD = \"argotunnel\"\n",
- "elif Tunneling == \"localhost.run\":\n",
- " PORT_FORWARD = \"localhost\"\n",
- "elif Tunneling == \"ngrok\":\n",
- " PORT_FORWARD = \"ngrok\"\n",
- "# ================================================================ #\n",
- "\n",
- "import os, urllib.request\n",
- "from IPython.display import clear_output\n",
- "from subprocess import Popen\n",
- "\n",
- "HOME = os.path.expanduser(\"~\")\n",
- "\n",
- "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
- " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
- " \"MiXLab/master/resources/mixlab.py\"\n",
- " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
- "\n",
- "from mixlab import (\n",
- " loadingAn,\n",
- " PortForward_wrapper,\n",
- " findProcess,\n",
- " displayUrl\n",
- ")\n",
- "\n",
- "loadingAn()\n",
- "\n",
- "if not os.path.exists(\"/usr/local/bin/glances\"):\n",
- " os.system(\"pip3 install https://github.com/nicolargo/glances/archive/master.zip\")\n",
- " os.system('pip3 install Bottle')\n",
- " os.system(\"pip3 install 'glances[gpu,ip]'\")\n",
- "\n",
- "if not findProcess(\"glances\", \"--webserver\"):\n",
- " Popen(\n",
- " 'glances --webserver --port 61208 --time 0 --enable-process-extended \\\n",
- " --byte --diskio-show-ramfs --fs-free-space \\\n",
- " --disable-check-update'.split()\n",
- " )\n",
- "\n",
- "clear_output()\n",
- "\n",
- "server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['glances', '61208', 'http']], REGION.lower(), [f\"{HOME}/.ngrok2/Glances.yml\", 31499]).start('glances', displayB=True)\n",
- "displayUrl(server, pNamU='Glances : ')"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "eaUJNGmju5G6"
- },
- "source": [
- "### netdata "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "WSUUUDXsUOkl",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] netdata \n",
- "# @markdown netdata is a real-time system performance monitoring utility.
\n",
- "USE_FREE_TOKEN = True\n",
- "TOKEN = \"\"\n",
- "REGION = \"US\"\n",
- "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
- "\n",
- "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
- " PORT_FORWARD = \"argotunnel\"\n",
- "elif Tunneling == \"localhost.run\":\n",
- " PORT_FORWARD = \"localhost\"\n",
- "elif Tunneling == \"ngrok\":\n",
- " PORT_FORWARD = \"ngrok\"\n",
- "# ================================================================ #\n",
- "\n",
- "import os, psutil, subprocess, shlex\n",
- "from IPython.display import HTML, clear_output\n",
- "import time\n",
- "\n",
- "HOME = os.path.expanduser(\"~\")\n",
- "\n",
- "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
- " shellCmd = \"wget -qq https://raw.githubusercontent.com/shirooo39/\" \\\n",
- " \"MiXLab/master/resources/mixlab.py\" \\\n",
- " f\" -O {HOME}/.ipython/mixlab.py\"\n",
- " subprocess.run(shlex.split(shellCmd))\n",
- "\n",
- "from mixlab import (\n",
- " loadingAn,\n",
- " PortForward_wrapper,\n",
- " runSh,\n",
- " displayUrl,\n",
- " textAn\n",
- ")\n",
- "\n",
- "def CheckProcess(process, command):\n",
- " for pid in psutil.pids():\n",
- " try:\n",
- " p = psutil.Process(pid)\n",
- " if process in p.name():\n",
- " for arg in p.cmdline():\n",
- " if command in str(arg): \n",
- " return True\n",
- " else:\n",
- " pass\n",
- " else:\n",
- " pass\n",
- " except:\n",
- " continue\n",
- "\n",
- "def Start_ServerMT():\n",
- " if CheckProcess(\"netdata\", \"\") != True:\n",
- " runSh('/usr/sbin/netdata', shell=True)\n",
- "\n",
- "loadingAn() \n",
- "\n",
- "if not os.path.isfile(\"/usr/sbin/netdata\"):\n",
- " clear_output(wait=True)\n",
- " textAn(\"Installing netdata...\")\n",
- " # Start = time.time()\n",
- " get_ipython().system_raw(\"bash <(curl -Ss https://my-netdata.io/kickstart.sh) --dont-wait --dont-start-it\")\n",
- " # End = time.time()\n",
- " Start_ServerMT()\n",
- "\n",
- "clear_output()\n",
- "\n",
- "server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['netdata', 19999, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/netdata.yml\", 7044]).start('netdata', 'g')\n",
- "displayUrl(server, pNamU='netdata : ')"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "xzeZBOnhyKPy"
- },
- "source": [
- "### speedtest "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "Az1Yh9WMyQwB",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] speedtest \n",
- "# ================================================================ #\n",
- "\n",
- "import os\n",
- "import re\n",
- "import csv\n",
- "import sys\n",
- "import math\n",
- "import errno\n",
- "import signal\n",
- "import socket\n",
- "import timeit\n",
- "import datetime\n",
- "import platform\n",
- "import threading\n",
- "import xml.parsers.expat\n",
- "\n",
- "try:\n",
- " import gzip\n",
- " GZIP_BASE = gzip.GzipFile\n",
- "except ImportError:\n",
- " gzip = None\n",
- " GZIP_BASE = object\n",
- "\n",
- "__version__ = '2.1.1'\n",
- "\n",
- "class FakeShutdownEvent(object):\n",
- " \"\"\"Class to fake a threading.Event.isSet so that users of this module\n",
- " are not required to register their own threading.Event()\n",
- " \"\"\"\n",
- "\n",
- " @staticmethod\n",
- " def isSet():\n",
- " \"Dummy method to always return false\"\"\"\n",
- " return False\n",
- "\n",
- "# Some global variables we use\n",
- "DEBUG = False\n",
- "_GLOBAL_DEFAULT_TIMEOUT = object()\n",
- "\n",
- "# Begin import game to handle Python 2 and Python 3\n",
- "try:\n",
- " import json\n",
- "except ImportError:\n",
- " try:\n",
- " import simplejson as json\n",
- " except ImportError:\n",
- " json = None\n",
- "\n",
- "try:\n",
- " import xml.etree.cElementTree as ET\n",
- "except ImportError:\n",
- " try:\n",
- " import xml.etree.ElementTree as ET\n",
- " except ImportError:\n",
- " from xml.dom import minidom as DOM\n",
- " from xml.parsers.expat import ExpatError\n",
- " ET = None\n",
- "\n",
- "try:\n",
- " from urllib2 import (urlopen, Request, HTTPError, URLError,\n",
- " AbstractHTTPHandler, ProxyHandler,\n",
- " HTTPDefaultErrorHandler, HTTPRedirectHandler,\n",
- " HTTPErrorProcessor, OpenerDirector)\n",
- "except ImportError:\n",
- " from urllib.request import (urlopen, Request, HTTPError, URLError,\n",
- " AbstractHTTPHandler, ProxyHandler,\n",
- " HTTPDefaultErrorHandler, HTTPRedirectHandler,\n",
- " HTTPErrorProcessor, OpenerDirector)\n",
- "\n",
- "try:\n",
- " from httplib import HTTPConnection, BadStatusLine\n",
- "except ImportError:\n",
- " from http.client import HTTPConnection, BadStatusLine\n",
- "\n",
- "try:\n",
- " from httplib import HTTPSConnection\n",
- "except ImportError:\n",
- " try:\n",
- " from http.client import HTTPSConnection\n",
- " except ImportError:\n",
- " HTTPSConnection = None\n",
- "\n",
- "try:\n",
- " from httplib import FakeSocket\n",
- "except ImportError:\n",
- " FakeSocket = None\n",
- "\n",
- "try:\n",
- " from Queue import Queue\n",
- "except ImportError:\n",
- " from queue import Queue\n",
- "\n",
- "try:\n",
- " from urlparse import urlparse\n",
- "except ImportError:\n",
- " from urllib.parse import urlparse\n",
- "\n",
- "try:\n",
- " from urlparse import parse_qs\n",
- "except ImportError:\n",
- " try:\n",
- " from urllib.parse import parse_qs\n",
- " except ImportError:\n",
- " from cgi import parse_qs\n",
- "\n",
- "try:\n",
- " from hashlib import md5\n",
- "except ImportError:\n",
- " from md5 import md5\n",
- "\n",
- "try:\n",
- " from argparse import ArgumentParser as ArgParser\n",
- " from argparse import SUPPRESS as ARG_SUPPRESS\n",
- " PARSER_TYPE_INT = int\n",
- " PARSER_TYPE_STR = str\n",
- " PARSER_TYPE_FLOAT = float\n",
- "except ImportError:\n",
- " from optparse import OptionParser as ArgParser\n",
- " from optparse import SUPPRESS_HELP as ARG_SUPPRESS\n",
- " PARSER_TYPE_INT = 'int'\n",
- " PARSER_TYPE_STR = 'string'\n",
- " PARSER_TYPE_FLOAT = 'float'\n",
- "\n",
- "try:\n",
- " from cStringIO import StringIO\n",
- " BytesIO = None\n",
- "except ImportError:\n",
- " try:\n",
- " from StringIO import StringIO\n",
- " BytesIO = None\n",
- " except ImportError:\n",
- " from io import StringIO, BytesIO\n",
- "\n",
- "try:\n",
- " import __builtin__\n",
- "except ImportError:\n",
- " import builtins\n",
- " from io import TextIOWrapper, FileIO\n",
- "\n",
- " class _Py3Utf8Output(TextIOWrapper):\n",
- " \"\"\"UTF-8 encoded wrapper around stdout for py3, to override\n",
- " ASCII stdout\n",
- " \"\"\"\n",
- " def __init__(self, f, **kwargs):\n",
- " buf = FileIO(f.fileno(), 'w')\n",
- " super(_Py3Utf8Output, self).__init__(\n",
- " buf,\n",
- " encoding='utf8',\n",
- " errors='strict'\n",
- " )\n",
- "\n",
- " def write(self, s):\n",
- " super(_Py3Utf8Output, self).write(s)\n",
- " self.flush()\n",
- "\n",
- " _py3_print = getattr(builtins, 'print')\n",
- " try:\n",
- " _py3_utf8_stdout = _Py3Utf8Output(sys.stdout)\n",
- " _py3_utf8_stderr = _Py3Utf8Output(sys.stderr)\n",
- " except OSError:\n",
- " # sys.stdout/sys.stderr is not a compatible stdout/stderr object\n",
- " # just use it and hope things go ok\n",
- " _py3_utf8_stdout = sys.stdout\n",
- " _py3_utf8_stderr = sys.stderr\n",
- "\n",
- " def to_utf8(v):\n",
- " \"\"\"No-op encode to utf-8 for py3\"\"\"\n",
- " return v\n",
- "\n",
- " def print_(*args, **kwargs):\n",
- " \"\"\"Wrapper function for py3 to print, with a utf-8 encoded stdout\"\"\"\n",
- " if kwargs.get('file') == sys.stderr:\n",
- " kwargs['file'] = _py3_utf8_stderr\n",
- " else:\n",
- " kwargs['file'] = kwargs.get('file', _py3_utf8_stdout)\n",
- " _py3_print(*args, **kwargs)\n",
- "else:\n",
- " del __builtin__\n",
- "\n",
- " def to_utf8(v):\n",
- " \"\"\"Encode value to utf-8 if possible for py2\"\"\"\n",
- " try:\n",
- " return v.encode('utf8', 'strict')\n",
- " except AttributeError:\n",
- " return v\n",
- "\n",
- " def print_(*args, **kwargs):\n",
- " \"\"\"The new-style print function for Python 2.4 and 2.5.\n",
- " Taken from https://pypi.python.org/pypi/six/\n",
- " Modified to set encoding to UTF-8 always, and to flush after write\n",
- " \"\"\"\n",
- " fp = kwargs.pop(\"file\", sys.stdout)\n",
- " if fp is None:\n",
- " return\n",
- "\n",
- " def write(data):\n",
- " if not isinstance(data, basestring):\n",
- " data = str(data)\n",
- " # If the file has an encoding, encode unicode with it.\n",
- " encoding = 'utf8' # Always trust UTF-8 for output\n",
- " if (isinstance(fp, file) and\n",
- " isinstance(data, unicode) and\n",
- " encoding is not None):\n",
- " errors = getattr(fp, \"errors\", None)\n",
- " if errors is None:\n",
- " errors = \"strict\"\n",
- " data = data.encode(encoding, errors)\n",
- " fp.write(data)\n",
- " fp.flush()\n",
- " want_unicode = False\n",
- " sep = kwargs.pop(\"sep\", None)\n",
- " if sep is not None:\n",
- " if isinstance(sep, unicode):\n",
- " want_unicode = True\n",
- " elif not isinstance(sep, str):\n",
- " raise TypeError(\"sep must be None or a string\")\n",
- " end = kwargs.pop(\"end\", None)\n",
- " if end is not None:\n",
- " if isinstance(end, unicode):\n",
- " want_unicode = True\n",
- " elif not isinstance(end, str):\n",
- " raise TypeError(\"end must be None or a string\")\n",
- " if kwargs:\n",
- " raise TypeError(\"invalid keyword arguments to print()\")\n",
- " if not want_unicode:\n",
- " for arg in args:\n",
- " if isinstance(arg, unicode):\n",
- " want_unicode = True\n",
- " break\n",
- " if want_unicode:\n",
- " newline = unicode(\"\\n\")\n",
- " space = unicode(\" \")\n",
- " else:\n",
- " newline = \"\\n\"\n",
- " space = \" \"\n",
- " if sep is None:\n",
- " sep = space\n",
- " if end is None:\n",
- " end = newline\n",
- " for i, arg in enumerate(args):\n",
- " if i:\n",
- " write(sep)\n",
- " write(arg)\n",
- " write(end)\n",
- "\n",
- "\n",
- "# Exception \"constants\" to support Python 2 through Python 3\n",
- "try:\n",
- " import ssl\n",
- " try:\n",
- " CERT_ERROR = (ssl.CertificateError,)\n",
- " except AttributeError:\n",
- " CERT_ERROR = tuple()\n",
- "\n",
- " HTTP_ERRORS = (\n",
- " (HTTPError, URLError, socket.error, ssl.SSLError, BadStatusLine) +\n",
- " CERT_ERROR\n",
- " )\n",
- "except ImportError:\n",
- " ssl = None\n",
- " HTTP_ERRORS = (HTTPError, URLError, socket.error, BadStatusLine)\n",
- "\n",
- "\n",
- "class SpeedtestException(Exception):\n",
- " \"\"\"Base exception for this module\"\"\"\n",
- "\n",
- "\n",
- "class SpeedtestCLIError(SpeedtestException):\n",
- " \"\"\"Generic exception for raising errors during CLI operation\"\"\"\n",
- "\n",
- "\n",
- "class SpeedtestHTTPError(SpeedtestException):\n",
- " \"\"\"Base HTTP exception for this module\"\"\"\n",
- "\n",
- "\n",
- "class SpeedtestConfigError(SpeedtestException):\n",
- " \"\"\"Configuration XML is invalid\"\"\"\n",
- "\n",
- "\n",
- "class SpeedtestServersError(SpeedtestException):\n",
- " \"\"\"Servers XML is invalid\"\"\"\n",
- "\n",
- "\n",
- "class ConfigRetrievalError(SpeedtestHTTPError):\n",
- " \"\"\"Could not retrieve config.php\"\"\"\n",
- "\n",
- "\n",
- "class ServersRetrievalError(SpeedtestHTTPError):\n",
- " \"\"\"Could not retrieve speedtest-servers.php\"\"\"\n",
- "\n",
- "\n",
- "class InvalidServerIDType(SpeedtestException):\n",
- " \"\"\"Server ID used for filtering was not an integer\"\"\"\n",
- "\n",
- "\n",
- "class NoMatchedServers(SpeedtestException):\n",
- " \"\"\"No servers matched when filtering\"\"\"\n",
- "\n",
- "\n",
- "class SpeedtestMiniConnectFailure(SpeedtestException):\n",
- " \"\"\"Could not connect to the provided speedtest mini server\"\"\"\n",
- "\n",
- "\n",
- "class InvalidSpeedtestMiniServer(SpeedtestException):\n",
- " \"\"\"Server provided as a speedtest mini server does not actually appear\n",
- " to be a speedtest mini server\n",
- " \"\"\"\n",
- "\n",
- "\n",
- "class ShareResultsConnectFailure(SpeedtestException):\n",
- " \"\"\"Could not connect to speedtest.net API to POST results\"\"\"\n",
- "\n",
- "\n",
- "class ShareResultsSubmitFailure(SpeedtestException):\n",
- " \"\"\"Unable to successfully POST results to speedtest.net API after\n",
- " connection\n",
- " \"\"\"\n",
- "\n",
- "\n",
- "class SpeedtestUploadTimeout(SpeedtestException):\n",
- " \"\"\"testlength configuration reached during upload\n",
- " Used to ensure the upload halts when no additional data should be sent\n",
- " \"\"\"\n",
- "\n",
- "\n",
- "class SpeedtestBestServerFailure(SpeedtestException):\n",
- " \"\"\"Unable to determine best server\"\"\"\n",
- "\n",
- "\n",
- "class SpeedtestMissingBestServer(SpeedtestException):\n",
- " \"\"\"get_best_server not called or not able to determine best server\"\"\"\n",
- "\n",
- "\n",
- "def create_connection(address, timeout=_GLOBAL_DEFAULT_TIMEOUT,\n",
- " source_address=None):\n",
- " \"\"\"Connect to *address* and return the socket object.\n",
- " Convenience function. Connect to *address* (a 2-tuple ``(host,\n",
- " port)``) and return the socket object. Passing the optional\n",
- " *timeout* parameter will set the timeout on the socket instance\n",
- " before attempting to connect. If no *timeout* is supplied, the\n",
- " global default timeout setting returned by :func:`getdefaulttimeout`\n",
- " is used. If *source_address* is set it must be a tuple of (host, port)\n",
- " for the socket to bind as a source address before making the connection.\n",
- " An host of '' or port 0 tells the OS to use the default.\n",
- " Largely vendored from Python 2.7, modified to work with Python 2.4\n",
- " \"\"\"\n",
- "\n",
- " host, port = address\n",
- " err = None\n",
- " for res in socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM):\n",
- " af, socktype, proto, canonname, sa = res\n",
- " sock = None\n",
- " try:\n",
- " sock = socket.socket(af, socktype, proto)\n",
- " if timeout is not _GLOBAL_DEFAULT_TIMEOUT:\n",
- " sock.settimeout(float(timeout))\n",
- " if source_address:\n",
- " sock.bind(source_address)\n",
- " sock.connect(sa)\n",
- " return sock\n",
- "\n",
- " except socket.error:\n",
- " err = get_exception()\n",
- " if sock is not None:\n",
- " sock.close()\n",
- "\n",
- " if err is not None:\n",
- " raise err\n",
- " else:\n",
- " raise socket.error(\"getaddrinfo returns an empty list\")\n",
- "\n",
- "\n",
- "class SpeedtestHTTPConnection(HTTPConnection):\n",
- " \"\"\"Custom HTTPConnection to support source_address across\n",
- " Python 2.4 - Python 3\n",
- " \"\"\"\n",
- " def __init__(self, *args, **kwargs):\n",
- " source_address = kwargs.pop('source_address', None)\n",
- " timeout = kwargs.pop('timeout', 10)\n",
- "\n",
- " HTTPConnection.__init__(self, *args, **kwargs)\n",
- "\n",
- " self.source_address = source_address\n",
- " self.timeout = timeout\n",
- "\n",
- " def connect(self):\n",
- " \"\"\"Connect to the host and port specified in __init__.\"\"\"\n",
- " try:\n",
- " self.sock = socket.create_connection(\n",
- " (self.host, self.port),\n",
- " self.timeout,\n",
- " self.source_address\n",
- " )\n",
- " except (AttributeError, TypeError):\n",
- " self.sock = create_connection(\n",
- " (self.host, self.port),\n",
- " self.timeout,\n",
- " self.source_address\n",
- " )\n",
- "\n",
- "\n",
- "if HTTPSConnection:\n",
- " class SpeedtestHTTPSConnection(HTTPSConnection,\n",
- " SpeedtestHTTPConnection):\n",
- " \"\"\"Custom HTTPSConnection to support source_address across\n",
- " Python 2.4 - Python 3\n",
- " \"\"\"\n",
- " def __init__(self, *args, **kwargs):\n",
- " source_address = kwargs.pop('source_address', None)\n",
- " timeout = kwargs.pop('timeout', 10)\n",
- "\n",
- " HTTPSConnection.__init__(self, *args, **kwargs)\n",
- "\n",
- " self.timeout = timeout\n",
- " self.source_address = source_address\n",
- "\n",
- " def connect(self):\n",
- " \"Connect to a host on a given (SSL) port.\"\n",
- "\n",
- " SpeedtestHTTPConnection.connect(self)\n",
- "\n",
- " if ssl:\n",
- " try:\n",
- " kwargs = {}\n",
- " if hasattr(ssl, 'SSLContext'):\n",
- " kwargs['server_hostname'] = self.host\n",
- " self.sock = self._context.wrap_socket(self.sock, **kwargs)\n",
- " except AttributeError:\n",
- " self.sock = ssl.wrap_socket(self.sock)\n",
- " try:\n",
- " self.sock.server_hostname = self.host\n",
- " except AttributeError:\n",
- " pass\n",
- " elif FakeSocket:\n",
- " # Python 2.4/2.5 support\n",
- " try:\n",
- " self.sock = FakeSocket(self.sock, socket.ssl(self.sock))\n",
- " except AttributeError:\n",
- " raise SpeedtestException(\n",
- " 'This version of Python does not support HTTPS/SSL '\n",
- " 'functionality'\n",
- " )\n",
- " else:\n",
- " raise SpeedtestException(\n",
- " 'This version of Python does not support HTTPS/SSL '\n",
- " 'functionality'\n",
- " )\n",
- "\n",
- "\n",
- "def _build_connection(connection, source_address, timeout, context=None):\n",
- " \"\"\"Cross Python 2.4 - Python 3 callable to build an ``HTTPConnection`` or\n",
- " ``HTTPSConnection`` with the args we need\n",
- " Called from ``http(s)_open`` methods of ``SpeedtestHTTPHandler`` or\n",
- " ``SpeedtestHTTPSHandler``\n",
- " \"\"\"\n",
- " def inner(host, **kwargs):\n",
- " kwargs.update({\n",
- " 'source_address': source_address,\n",
- " 'timeout': timeout\n",
- " })\n",
- " if context:\n",
- " kwargs['context'] = context\n",
- " return connection(host, **kwargs)\n",
- " return inner\n",
- "\n",
- "\n",
- "class SpeedtestHTTPHandler(AbstractHTTPHandler):\n",
- " \"\"\"Custom ``HTTPHandler`` that can build a ``HTTPConnection`` with the\n",
- " args we need for ``source_address`` and ``timeout``\n",
- " \"\"\"\n",
- " def __init__(self, debuglevel=0, source_address=None, timeout=10):\n",
- " AbstractHTTPHandler.__init__(self, debuglevel)\n",
- " self.source_address = source_address\n",
- " self.timeout = timeout\n",
- "\n",
- " def http_open(self, req):\n",
- " return self.do_open(\n",
- " _build_connection(\n",
- " SpeedtestHTTPConnection,\n",
- " self.source_address,\n",
- " self.timeout\n",
- " ),\n",
- " req\n",
- " )\n",
- "\n",
- " http_request = AbstractHTTPHandler.do_request_\n",
- "\n",
- "\n",
- "class SpeedtestHTTPSHandler(AbstractHTTPHandler):\n",
- " \"\"\"Custom ``HTTPSHandler`` that can build a ``HTTPSConnection`` with the\n",
- " args we need for ``source_address`` and ``timeout``\n",
- " \"\"\"\n",
- " def __init__(self, debuglevel=0, context=None, source_address=None,\n",
- " timeout=10):\n",
- " AbstractHTTPHandler.__init__(self, debuglevel)\n",
- " self._context = context\n",
- " self.source_address = source_address\n",
- " self.timeout = timeout\n",
- "\n",
- " def https_open(self, req):\n",
- " return self.do_open(\n",
- " _build_connection(\n",
- " SpeedtestHTTPSConnection,\n",
- " self.source_address,\n",
- " self.timeout,\n",
- " context=self._context,\n",
- " ),\n",
- " req\n",
- " )\n",
- "\n",
- " https_request = AbstractHTTPHandler.do_request_\n",
- "\n",
- "\n",
- "def build_opener(source_address=None, timeout=10):\n",
- " \"\"\"Function similar to ``urllib2.build_opener`` that will build\n",
- " an ``OpenerDirector`` with the explicit handlers we want,\n",
- " ``source_address`` for binding, ``timeout`` and our custom\n",
- " `User-Agent`\n",
- " \"\"\"\n",
- "\n",
- " printer('Timeout set to %d' % timeout, debug=True)\n",
- "\n",
- " if source_address:\n",
- " source_address_tuple = (source_address, 0)\n",
- " printer('Binding to source address: %r' % (source_address_tuple,),\n",
- " debug=True)\n",
- " else:\n",
- " source_address_tuple = None\n",
- "\n",
- " handlers = [\n",
- " ProxyHandler(),\n",
- " SpeedtestHTTPHandler(source_address=source_address_tuple,\n",
- " timeout=timeout),\n",
- " SpeedtestHTTPSHandler(source_address=source_address_tuple,\n",
- " timeout=timeout),\n",
- " HTTPDefaultErrorHandler(),\n",
- " HTTPRedirectHandler(),\n",
- " HTTPErrorProcessor()\n",
- " ]\n",
- "\n",
- " opener = OpenerDirector()\n",
- " opener.addheaders = [('User-agent', build_user_agent())]\n",
- "\n",
- " for handler in handlers:\n",
- " opener.add_handler(handler)\n",
- "\n",
- " return opener\n",
- "\n",
- "\n",
- "class GzipDecodedResponse(GZIP_BASE):\n",
- " \"\"\"A file-like object to decode a response encoded with the gzip\n",
- " method, as described in RFC 1952.\n",
- " Largely copied from ``xmlrpclib``/``xmlrpc.client`` and modified\n",
- " to work for py2.4-py3\n",
- " \"\"\"\n",
- " def __init__(self, response):\n",
- " # response doesn't support tell() and read(), required by\n",
- " # GzipFile\n",
- " if not gzip:\n",
- " raise SpeedtestHTTPError('HTTP response body is gzip encoded, '\n",
- " 'but gzip support is not available')\n",
- " IO = BytesIO or StringIO\n",
- " self.io = IO()\n",
- " while 1:\n",
- " chunk = response.read(1024)\n",
- " if len(chunk) == 0:\n",
- " break\n",
- " self.io.write(chunk)\n",
- " self.io.seek(0)\n",
- " gzip.GzipFile.__init__(self, mode='rb', fileobj=self.io)\n",
- "\n",
- " def close(self):\n",
- " try:\n",
- " gzip.GzipFile.close(self)\n",
- " finally:\n",
- " self.io.close()\n",
- "\n",
- "\n",
- "def get_exception():\n",
- " \"\"\"Helper function to work with py2.4-py3 for getting the current\n",
- " exception in a try/except block\n",
- " \"\"\"\n",
- " return sys.exc_info()[1]\n",
- "\n",
- "\n",
- "def distance(origin, destination):\n",
- " \"\"\"Determine distance between 2 sets of [lat,lon] in km\"\"\"\n",
- "\n",
- " lat1, lon1 = origin\n",
- " lat2, lon2 = destination\n",
- " radius = 6371 # km\n",
- "\n",
- " dlat = math.radians(lat2 - lat1)\n",
- " dlon = math.radians(lon2 - lon1)\n",
- " a = (math.sin(dlat / 2) * math.sin(dlat / 2) +\n",
- " math.cos(math.radians(lat1)) *\n",
- " math.cos(math.radians(lat2)) * math.sin(dlon / 2) *\n",
- " math.sin(dlon / 2))\n",
- " c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))\n",
- " d = radius * c\n",
- "\n",
- " return d\n",
- "\n",
- "\n",
- "def build_user_agent():\n",
- " \"\"\"Build a Mozilla/5.0 compatible User-Agent string\"\"\"\n",
- "\n",
- " ua_tuple = (\n",
- " 'Mozilla/5.0',\n",
- " '(%s; U; %s; en-us)' % (platform.platform(),\n",
- " platform.architecture()[0]),\n",
- " 'Python/%s' % platform.python_version(),\n",
- " '(KHTML, like Gecko)',\n",
- " 'speedtest-cli/%s' % __version__\n",
- " )\n",
- " user_agent = ' '.join(ua_tuple)\n",
- " printer('User-Agent: %s' % user_agent, debug=True)\n",
- " return user_agent\n",
- "\n",
- "\n",
- "def build_request(url, data=None, headers=None, bump='0', secure=False):\n",
- " \"\"\"Build a urllib2 request object\n",
- " This function automatically adds a User-Agent header to all requests\n",
- " \"\"\"\n",
- "\n",
- " if not headers:\n",
- " headers = {}\n",
- "\n",
- " if url[0] == ':':\n",
- " scheme = ('http', 'https')[bool(secure)]\n",
- " schemed_url = '%s%s' % (scheme, url)\n",
- " else:\n",
- " schemed_url = url\n",
- "\n",
- " if '?' in url:\n",
- " delim = '&'\n",
- " else:\n",
- " delim = '?'\n",
- "\n",
- " # WHO YOU GONNA CALL? CACHE BUSTERS!\n",
- " final_url = '%s%sx=%s.%s' % (schemed_url, delim,\n",
- " int(timeit.time.time() * 1000),\n",
- " bump)\n",
- "\n",
- " headers.update({\n",
- " 'Cache-Control': 'no-cache',\n",
- " })\n",
- "\n",
- " printer('%s %s' % (('GET', 'POST')[bool(data)], final_url),\n",
- " debug=True)\n",
- "\n",
- " return Request(final_url, data=data, headers=headers)\n",
- "\n",
- "\n",
- "def catch_request(request, opener=None):\n",
- " \"\"\"Helper function to catch common exceptions encountered when\n",
- " establishing a connection with a HTTP/HTTPS request\n",
- " \"\"\"\n",
- "\n",
- " if opener:\n",
- " _open = opener.open\n",
- " else:\n",
- " _open = urlopen\n",
- "\n",
- " try:\n",
- " uh = _open(request)\n",
- " if request.get_full_url() != uh.geturl():\n",
- " printer('Redirected to %s' % uh.geturl(), debug=True)\n",
- " return uh, False\n",
- " except HTTP_ERRORS:\n",
- " e = get_exception()\n",
- " return None, e\n",
- "\n",
- "\n",
- "def get_response_stream(response):\n",
- " \"\"\"Helper function to return either a Gzip reader if\n",
- " ``Content-Encoding`` is ``gzip`` otherwise the response itself\n",
- " \"\"\"\n",
- "\n",
- " try:\n",
- " getheader = response.headers.getheader\n",
- " except AttributeError:\n",
- " getheader = response.getheader\n",
- "\n",
- " if getheader('content-encoding') == 'gzip':\n",
- " return GzipDecodedResponse(response)\n",
- "\n",
- " return response\n",
- "\n",
- "\n",
- "def get_attributes_by_tag_name(dom, tag_name):\n",
- " \"\"\"Retrieve an attribute from an XML document and return it in a\n",
- " consistent format\n",
- " Only used with xml.dom.minidom, which is likely only to be used\n",
- " with python versions older than 2.5\n",
- " \"\"\"\n",
- " elem = dom.getElementsByTagName(tag_name)[0]\n",
- " return dict(list(elem.attributes.items()))\n",
- "\n",
- "\n",
- "def print_dots(shutdown_event):\n",
- " \"\"\"Built in callback function used by Thread classes for printing\n",
- " status\n",
- " \"\"\"\n",
- " def inner(current, total, start=False, end=False):\n",
- " if shutdown_event.isSet():\n",
- " return\n",
- "\n",
- " sys.stdout.write('.')\n",
- " if current + 1 == total and end is True:\n",
- " sys.stdout.write('\\n')\n",
- " sys.stdout.flush()\n",
- " return inner\n",
- "\n",
- "\n",
- "def do_nothing(*args, **kwargs):\n",
- " pass\n",
- "\n",
- "\n",
- "class HTTPDownloader(threading.Thread):\n",
- " \"\"\"Thread class for retrieving a URL\"\"\"\n",
- "\n",
- " def __init__(self, i, request, start, timeout, opener=None,\n",
- " shutdown_event=None):\n",
- " threading.Thread.__init__(self)\n",
- " self.request = request\n",
- " self.result = [0]\n",
- " self.starttime = start\n",
- " self.timeout = timeout\n",
- " self.i = i\n",
- " if opener:\n",
- " self._opener = opener.open\n",
- " else:\n",
- " self._opener = urlopen\n",
- "\n",
- " if shutdown_event:\n",
- " self._shutdown_event = shutdown_event\n",
- " else:\n",
- " self._shutdown_event = FakeShutdownEvent()\n",
- "\n",
- " def run(self):\n",
- " try:\n",
- " if (timeit.default_timer() - self.starttime) <= self.timeout:\n",
- " f = self._opener(self.request)\n",
- " while (not self._shutdown_event.isSet() and\n",
- " (timeit.default_timer() - self.starttime) <=\n",
- " self.timeout):\n",
- " self.result.append(len(f.read(10240)))\n",
- " if self.result[-1] == 0:\n",
- " break\n",
- " f.close()\n",
- " except IOError:\n",
- " pass\n",
- "\n",
- "\n",
- "class HTTPUploaderData(object):\n",
- " \"\"\"File like object to improve cutting off the upload once the timeout\n",
- " has been reached\n",
- " \"\"\"\n",
- "\n",
- " def __init__(self, length, start, timeout, shutdown_event=None):\n",
- " self.length = length\n",
- " self.start = start\n",
- " self.timeout = timeout\n",
- "\n",
- " if shutdown_event:\n",
- " self._shutdown_event = shutdown_event\n",
- " else:\n",
- " self._shutdown_event = FakeShutdownEvent()\n",
- "\n",
- " self._data = None\n",
- "\n",
- " self.total = [0]\n",
- "\n",
- " def pre_allocate(self):\n",
- " chars = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'\n",
- " multiplier = int(round(int(self.length) / 36.0))\n",
- " IO = BytesIO or StringIO\n",
- " try:\n",
- " self._data = IO(\n",
- " ('content1=%s' %\n",
- " (chars * multiplier)[0:int(self.length) - 9]\n",
- " ).encode()\n",
- " )\n",
- " except MemoryError:\n",
- " raise SpeedtestCLIError(\n",
- " 'Insufficient memory to pre-allocate upload data. Please '\n",
- " 'use --no-pre-allocate'\n",
- " )\n",
- "\n",
- " @property\n",
- " def data(self):\n",
- " if not self._data:\n",
- " self.pre_allocate()\n",
- " return self._data\n",
- "\n",
- " def read(self, n=10240):\n",
- " if ((timeit.default_timer() - self.start) <= self.timeout and\n",
- " not self._shutdown_event.isSet()):\n",
- " chunk = self.data.read(n)\n",
- " self.total.append(len(chunk))\n",
- " return chunk\n",
- " else:\n",
- " raise SpeedtestUploadTimeout()\n",
- "\n",
- " def __len__(self):\n",
- " return self.length\n",
- "\n",
- "\n",
- "class HTTPUploader(threading.Thread):\n",
- " \"\"\"Thread class for putting a URL\"\"\"\n",
- "\n",
- " def __init__(self, i, request, start, size, timeout, opener=None,\n",
- " shutdown_event=None):\n",
- " threading.Thread.__init__(self)\n",
- " self.request = request\n",
- " self.request.data.start = self.starttime = start\n",
- " self.size = size\n",
- " self.result = None\n",
- " self.timeout = timeout\n",
- " self.i = i\n",
- "\n",
- " if opener:\n",
- " self._opener = opener.open\n",
- " else:\n",
- " self._opener = urlopen\n",
- "\n",
- " if shutdown_event:\n",
- " self._shutdown_event = shutdown_event\n",
- " else:\n",
- " self._shutdown_event = FakeShutdownEvent()\n",
- "\n",
- " def run(self):\n",
- " request = self.request\n",
- " try:\n",
- " if ((timeit.default_timer() - self.starttime) <= self.timeout and\n",
- " not self._shutdown_event.isSet()):\n",
- " try:\n",
- " f = self._opener(request)\n",
- " except TypeError:\n",
- " # PY24 expects a string or buffer\n",
- " # This also causes issues with Ctrl-C, but we will concede\n",
- " # for the moment that Ctrl-C on PY24 isn't immediate\n",
- " request = build_request(self.request.get_full_url(),\n",
- " data=request.data.read(self.size))\n",
- " f = self._opener(request)\n",
- " f.read(11)\n",
- " f.close()\n",
- " self.result = sum(self.request.data.total)\n",
- " else:\n",
- " self.result = 0\n",
- " except (IOError, SpeedtestUploadTimeout):\n",
- " self.result = sum(self.request.data.total)\n",
- "\n",
- "\n",
- "class SpeedtestResults(object):\n",
- " \"\"\"Class for holding the results of a speedtest, including:\n",
- " Download speed\n",
- " Upload speed\n",
- " Ping/Latency to test server\n",
- " Data about server that the test was run against\n",
- " Additionally this class can return a result data as a dictionary or CSV,\n",
- " as well as submit a POST of the result data to the speedtest.net API\n",
- " to get a share results image link.\n",
- " \"\"\"\n",
- "\n",
- " def __init__(self, download=0, upload=0, ping=0, server=None, client=None,\n",
- " opener=None, secure=False):\n",
- " self.download = download\n",
- " self.upload = upload\n",
- " self.ping = ping\n",
- " if server is None:\n",
- " self.server = {}\n",
- " else:\n",
- " self.server = server\n",
- " self.client = client or {}\n",
- "\n",
- " self._share = None\n",
- " self.timestamp = '%sZ' % datetime.datetime.utcnow().isoformat()\n",
- " self.bytes_received = 0\n",
- " self.bytes_sent = 0\n",
- "\n",
- " if opener:\n",
- " self._opener = opener\n",
- " else:\n",
- " self._opener = build_opener()\n",
- "\n",
- " self._secure = secure\n",
- "\n",
- " def __repr__(self):\n",
- " return repr(self.dict())\n",
- "\n",
- " def share(self):\n",
- " \"\"\"POST data to the speedtest.net API to obtain a share results\n",
- " link\n",
- " \"\"\"\n",
- "\n",
- " if self._share:\n",
- " return self._share\n",
- "\n",
- " download = int(round(self.download / 1000.0, 0))\n",
- " ping = int(round(self.ping, 0))\n",
- " upload = int(round(self.upload / 1000.0, 0))\n",
- "\n",
- " # Build the request to send results back to speedtest.net\n",
- " # We use a list instead of a dict because the API expects parameters\n",
- " # in a certain order\n",
- " api_data = [\n",
- " 'recommendedserverid=%s' % self.server['id'],\n",
- " 'ping=%s' % ping,\n",
- " 'screenresolution=',\n",
- " 'promo=',\n",
- " 'download=%s' % download,\n",
- " 'screendpi=',\n",
- " 'upload=%s' % upload,\n",
- " 'testmethod=http',\n",
- " 'hash=%s' % md5(('%s-%s-%s-%s' %\n",
- " (ping, upload, download, '297aae72'))\n",
- " .encode()).hexdigest(),\n",
- " 'touchscreen=none',\n",
- " 'startmode=pingselect',\n",
- " 'accuracy=1',\n",
- " 'bytesreceived=%s' % self.bytes_received,\n",
- " 'bytessent=%s' % self.bytes_sent,\n",
- " 'serverid=%s' % self.server['id'],\n",
- " ]\n",
- "\n",
- " headers = {'Referer': 'http://c.speedtest.net/flash/speedtest.swf'}\n",
- " request = build_request('://www.speedtest.net/api/api.php',\n",
- " data='&'.join(api_data).encode(),\n",
- " headers=headers, secure=self._secure)\n",
- " f, e = catch_request(request, opener=self._opener)\n",
- " if e:\n",
- " raise ShareResultsConnectFailure(e)\n",
- "\n",
- " response = f.read()\n",
- " code = f.code\n",
- " f.close()\n",
- "\n",
- " if int(code) != 200:\n",
- " raise ShareResultsSubmitFailure('Could not submit results to '\n",
- " 'speedtest.net')\n",
- "\n",
- " qsargs = parse_qs(response.decode())\n",
- " resultid = qsargs.get('resultid')\n",
- " if not resultid or len(resultid) != 1:\n",
- " raise ShareResultsSubmitFailure('Could not submit results to '\n",
- " 'speedtest.net')\n",
- "\n",
- " self._share = 'http://www.speedtest.net/result/%s.png' % resultid[0]\n",
- "\n",
- " return self._share\n",
- "\n",
- " def dict(self):\n",
- " \"\"\"Return dictionary of result data\"\"\"\n",
- "\n",
- " return {\n",
- " 'download': self.download,\n",
- " 'upload': self.upload,\n",
- " 'ping': self.ping,\n",
- " 'server': self.server,\n",
- " 'timestamp': self.timestamp,\n",
- " 'bytes_sent': self.bytes_sent,\n",
- " 'bytes_received': self.bytes_received,\n",
- " 'share': self._share,\n",
- " 'client': self.client,\n",
- " }\n",
- "\n",
- " @staticmethod\n",
- " def csv_header(delimiter=','):\n",
- " \"\"\"Return CSV Headers\"\"\"\n",
- "\n",
- " row = ['Server ID', 'Sponsor', 'Server Name', 'Timestamp', 'Distance',\n",
- " 'Ping', 'Download', 'Upload', 'Share', 'IP Address']\n",
- " out = StringIO()\n",
- " writer = csv.writer(out, delimiter=delimiter, lineterminator='')\n",
- " writer.writerow([to_utf8(v) for v in row])\n",
- " return out.getvalue()\n",
- "\n",
- " def csv(self, delimiter=','):\n",
- " \"\"\"Return data in CSV format\"\"\"\n",
- "\n",
- " data = self.dict()\n",
- " out = StringIO()\n",
- " writer = csv.writer(out, delimiter=delimiter, lineterminator='')\n",
- " row = [data['server']['id'], data['server']['sponsor'],\n",
- " data['server']['name'], data['timestamp'],\n",
- " data['server']['d'], data['ping'], data['download'],\n",
- " data['upload'], self._share or '', self.client['ip']]\n",
- " writer.writerow([to_utf8(v) for v in row])\n",
- " return out.getvalue()\n",
- "\n",
- " def json(self, pretty=False):\n",
- " \"\"\"Return data in JSON format\"\"\"\n",
- "\n",
- " kwargs = {}\n",
- " if pretty:\n",
- " kwargs.update({\n",
- " 'indent': 4,\n",
- " 'sort_keys': True\n",
- " })\n",
- " return json.dumps(self.dict(), **kwargs)\n",
- "\n",
- "\n",
- "class Speedtest(object):\n",
- " \"\"\"Class for performing standard speedtest.net testing operations\"\"\"\n",
- "\n",
- " def __init__(self, config=None, source_address=None, timeout=10,\n",
- " secure=False, shutdown_event=None):\n",
- " self.config = {}\n",
- "\n",
- " self._source_address = source_address\n",
- " self._timeout = timeout\n",
- " self._opener = build_opener(source_address, timeout)\n",
- "\n",
- " self._secure = secure\n",
- "\n",
- " if shutdown_event:\n",
- " self._shutdown_event = shutdown_event\n",
- " else:\n",
- " self._shutdown_event = FakeShutdownEvent()\n",
- "\n",
- " self.get_config()\n",
- " if config is not None:\n",
- " self.config.update(config)\n",
- "\n",
- " self.servers = {}\n",
- " self.closest = []\n",
- " self._best = {}\n",
- "\n",
- " self.results = SpeedtestResults(\n",
- " client=self.config['client'],\n",
- " opener=self._opener,\n",
- " secure=secure,\n",
- " )\n",
- "\n",
- " @property\n",
- " def best(self):\n",
- " if not self._best:\n",
- " self.get_best_server()\n",
- " return self._best\n",
- "\n",
- " def get_config(self):\n",
- " \"\"\"Download the speedtest.net configuration and return only the data\n",
- " we are interested in\n",
- " \"\"\"\n",
- "\n",
- " headers = {}\n",
- " if gzip:\n",
- " headers['Accept-Encoding'] = 'gzip'\n",
- " request = build_request('://www.speedtest.net/speedtest-config.php',\n",
- " headers=headers, secure=self._secure)\n",
- " uh, e = catch_request(request, opener=self._opener)\n",
- " if e:\n",
- " raise ConfigRetrievalError(e)\n",
- " configxml_list = []\n",
- "\n",
- " stream = get_response_stream(uh)\n",
- "\n",
- " while 1:\n",
- " try:\n",
- " configxml_list.append(stream.read(1024))\n",
- " except (OSError, EOFError):\n",
- " raise ConfigRetrievalError(get_exception())\n",
- " if len(configxml_list[-1]) == 0:\n",
- " break\n",
- " stream.close()\n",
- " uh.close()\n",
- "\n",
- " if int(uh.code) != 200:\n",
- " return None\n",
- "\n",
- " configxml = ''.encode().join(configxml_list)\n",
- "\n",
- " printer('Config XML:\\n%s' % configxml, debug=True)\n",
- "\n",
- " try:\n",
- " try:\n",
- " root = ET.fromstring(configxml)\n",
- " except ET.ParseError:\n",
- " e = get_exception()\n",
- " raise SpeedtestConfigError(\n",
- " 'Malformed speedtest.net configuration: %s' % e\n",
- " )\n",
- " server_config = root.find('server-config').attrib\n",
- " download = root.find('download').attrib\n",
- " upload = root.find('upload').attrib\n",
- " # times = root.find('times').attrib\n",
- " client = root.find('client').attrib\n",
- "\n",
- " except AttributeError:\n",
- " try:\n",
- " root = DOM.parseString(configxml)\n",
- " except ExpatError:\n",
- " e = get_exception()\n",
- " raise SpeedtestConfigError(\n",
- " 'Malformed speedtest.net configuration: %s' % e\n",
- " )\n",
- " server_config = get_attributes_by_tag_name(root, 'server-config')\n",
- " download = get_attributes_by_tag_name(root, 'download')\n",
- " upload = get_attributes_by_tag_name(root, 'upload')\n",
- " # times = get_attributes_by_tag_name(root, 'times')\n",
- " client = get_attributes_by_tag_name(root, 'client')\n",
- "\n",
- " ignore_servers = list(\n",
- " map(int, server_config['ignoreids'].split(','))\n",
- " )\n",
- "\n",
- " ratio = int(upload['ratio'])\n",
- " upload_max = int(upload['maxchunkcount'])\n",
- " up_sizes = [32768, 65536, 131072, 262144, 524288, 1048576, 7340032]\n",
- " sizes = {\n",
- " 'upload': up_sizes[ratio - 1:],\n",
- " 'download': [350, 500, 750, 1000, 1500, 2000, 2500,\n",
- " 3000, 3500, 4000]\n",
- " }\n",
- "\n",
- " size_count = len(sizes['upload'])\n",
- "\n",
- " upload_count = int(math.ceil(upload_max / size_count))\n",
- "\n",
- " counts = {\n",
- " 'upload': upload_count,\n",
- " 'download': int(download['threadsperurl'])\n",
- " }\n",
- "\n",
- " threads = {\n",
- " 'upload': int(upload['threads']),\n",
- " 'download': int(server_config['threadcount']) * 2\n",
- " }\n",
- "\n",
- " length = {\n",
- " 'upload': int(upload['testlength']),\n",
- " 'download': int(download['testlength'])\n",
- " }\n",
- "\n",
- " self.config.update({\n",
- " 'client': client,\n",
- " 'ignore_servers': ignore_servers,\n",
- " 'sizes': sizes,\n",
- " 'counts': counts,\n",
- " 'threads': threads,\n",
- " 'length': length,\n",
- " 'upload_max': upload_count * size_count\n",
- " })\n",
- "\n",
- " try:\n",
- " self.lat_lon = (float(client['lat']), float(client['lon']))\n",
- " except ValueError:\n",
- " raise SpeedtestConfigError(\n",
- " 'Unknown location: lat=%r lon=%r' %\n",
- " (client.get('lat'), client.get('lon'))\n",
- " )\n",
- "\n",
- " printer('Config:\\n%r' % self.config, debug=True)\n",
- "\n",
- " return self.config\n",
- "\n",
- " def get_servers(self, servers=None, exclude=None):\n",
- " \"\"\"Retrieve a the list of speedtest.net servers, optionally filtered\n",
- " to servers matching those specified in the ``servers`` argument\n",
- " \"\"\"\n",
- " if servers is None:\n",
- " servers = []\n",
- "\n",
- " if exclude is None:\n",
- " exclude = []\n",
- "\n",
- " self.servers.clear()\n",
- "\n",
- " for server_list in (servers, exclude):\n",
- " for i, s in enumerate(server_list):\n",
- " try:\n",
- " server_list[i] = int(s)\n",
- " except ValueError:\n",
- " raise InvalidServerIDType(\n",
- " '%s is an invalid server type, must be int' % s\n",
- " )\n",
- "\n",
- " urls = [\n",
- " '://www.speedtest.net/speedtest-servers-static.php',\n",
- " 'http://c.speedtest.net/speedtest-servers-static.php',\n",
- " '://www.speedtest.net/speedtest-servers.php',\n",
- " 'http://c.speedtest.net/speedtest-servers.php',\n",
- " ]\n",
- "\n",
- " headers = {}\n",
- " if gzip:\n",
- " headers['Accept-Encoding'] = 'gzip'\n",
- "\n",
- " errors = []\n",
- " for url in urls:\n",
- " try:\n",
- " request = build_request(\n",
- " '%s?threads=%s' % (url,\n",
- " self.config['threads']['download']),\n",
- " headers=headers,\n",
- " secure=self._secure\n",
- " )\n",
- " uh, e = catch_request(request, opener=self._opener)\n",
- " if e:\n",
- " errors.append('%s' % e)\n",
- " raise ServersRetrievalError()\n",
- "\n",
- " stream = get_response_stream(uh)\n",
- "\n",
- " serversxml_list = []\n",
- " while 1:\n",
- " try:\n",
- " serversxml_list.append(stream.read(1024))\n",
- " except (OSError, EOFError):\n",
- " raise ServersRetrievalError(get_exception())\n",
- " if len(serversxml_list[-1]) == 0:\n",
- " break\n",
- "\n",
- " stream.close()\n",
- " uh.close()\n",
- "\n",
- " if int(uh.code) != 200:\n",
- " raise ServersRetrievalError()\n",
- "\n",
- " serversxml = ''.encode().join(serversxml_list)\n",
- "\n",
- " printer('Servers XML:\\n%s' % serversxml, debug=True)\n",
- "\n",
- " try:\n",
- " try:\n",
- " try:\n",
- " root = ET.fromstring(serversxml)\n",
- " except ET.ParseError:\n",
- " e = get_exception()\n",
- " raise SpeedtestServersError(\n",
- " 'Malformed speedtest.net server list: %s' % e\n",
- " )\n",
- " elements = root.getiterator('server')\n",
- " except AttributeError:\n",
- " try:\n",
- " root = DOM.parseString(serversxml)\n",
- " except ExpatError:\n",
- " e = get_exception()\n",
- " raise SpeedtestServersError(\n",
- " 'Malformed speedtest.net server list: %s' % e\n",
- " )\n",
- " elements = root.getElementsByTagName('server')\n",
- " except (SyntaxError, xml.parsers.expat.ExpatError):\n",
- " raise ServersRetrievalError()\n",
- "\n",
- " for server in elements:\n",
- " try:\n",
- " attrib = server.attrib\n",
- " except AttributeError:\n",
- " attrib = dict(list(server.attributes.items()))\n",
- "\n",
- " if servers and int(attrib.get('id')) not in servers:\n",
- " continue\n",
- "\n",
- " if (int(attrib.get('id')) in self.config['ignore_servers']\n",
- " or int(attrib.get('id')) in exclude):\n",
- " continue\n",
- "\n",
- " try:\n",
- " d = distance(self.lat_lon,\n",
- " (float(attrib.get('lat')),\n",
- " float(attrib.get('lon'))))\n",
- " except Exception:\n",
- " continue\n",
- "\n",
- " attrib['d'] = d\n",
- "\n",
- " try:\n",
- " self.servers[d].append(attrib)\n",
- " except KeyError:\n",
- " self.servers[d] = [attrib]\n",
- "\n",
- " break\n",
- "\n",
- " except ServersRetrievalError:\n",
- " continue\n",
- "\n",
- " if (servers or exclude) and not self.servers:\n",
- " raise NoMatchedServers()\n",
- "\n",
- " return self.servers\n",
- "\n",
- " def set_mini_server(self, server):\n",
- " \"\"\"Instead of querying for a list of servers, set a link to a\n",
- " speedtest mini server\n",
- " \"\"\"\n",
- "\n",
- " urlparts = urlparse(server)\n",
- "\n",
- " name, ext = os.path.splitext(urlparts[2])\n",
- " if ext:\n",
- " url = os.path.dirname(server)\n",
- " else:\n",
- " url = server\n",
- "\n",
- " request = build_request(url)\n",
- " uh, e = catch_request(request, opener=self._opener)\n",
- " if e:\n",
- " raise SpeedtestMiniConnectFailure('Failed to connect to %s' %\n",
- " server)\n",
- " else:\n",
- " text = uh.read()\n",
- " uh.close()\n",
- "\n",
- " extension = re.findall('upload_?[Ee]xtension: \"([^\"]+)\"',\n",
- " text.decode())\n",
- " if not extension:\n",
- " for ext in ['php', 'asp', 'aspx', 'jsp']:\n",
- " try:\n",
- " f = self._opener.open(\n",
- " '%s/speedtest/upload.%s' % (url, ext)\n",
- " )\n",
- " except Exception:\n",
- " pass\n",
- " else:\n",
- " data = f.read().strip().decode()\n",
- " if (f.code == 200 and\n",
- " len(data.splitlines()) == 1 and\n",
- " re.match('size=[0-9]', data)):\n",
- " extension = [ext]\n",
- " break\n",
- " if not urlparts or not extension:\n",
- " raise InvalidSpeedtestMiniServer('Invalid Speedtest Mini Server: '\n",
- " '%s' % server)\n",
- "\n",
- " self.servers = [{\n",
- " 'sponsor': 'Speedtest Mini',\n",
- " 'name': urlparts[1],\n",
- " 'd': 0,\n",
- " 'url': '%s/speedtest/upload.%s' % (url.rstrip('/'), extension[0]),\n",
- " 'latency': 0,\n",
- " 'id': 0\n",
- " }]\n",
- "\n",
- " return self.servers\n",
- "\n",
- " def get_closest_servers(self, limit=5):\n",
- " \"\"\"Limit servers to the closest speedtest.net servers based on\n",
- " geographic distance\n",
- " \"\"\"\n",
- "\n",
- " if not self.servers:\n",
- " self.get_servers()\n",
- "\n",
- " for d in sorted(self.servers.keys()):\n",
- " for s in self.servers[d]:\n",
- " self.closest.append(s)\n",
- " if len(self.closest) == limit:\n",
- " break\n",
- " else:\n",
- " continue\n",
- " break\n",
- "\n",
- " printer('Closest Servers:\\n%r' % self.closest, debug=True)\n",
- " return self.closest\n",
- "\n",
- " def get_best_server(self, servers=None):\n",
- " \"\"\"Perform a speedtest.net \"ping\" to determine which speedtest.net\n",
- " server has the lowest latency\n",
- " \"\"\"\n",
- "\n",
- " if not servers:\n",
- " if not self.closest:\n",
- " servers = self.get_closest_servers()\n",
- " servers = self.closest\n",
- "\n",
- " if self._source_address:\n",
- " source_address_tuple = (self._source_address, 0)\n",
- " else:\n",
- " source_address_tuple = None\n",
- "\n",
- " user_agent = build_user_agent()\n",
- "\n",
- " results = {}\n",
- " for server in servers:\n",
- " cum = []\n",
- " url = os.path.dirname(server['url'])\n",
- " stamp = int(timeit.time.time() * 1000)\n",
- " latency_url = '%s/latency.txt?x=%s' % (url, stamp)\n",
- " for i in range(0, 3):\n",
- " this_latency_url = '%s.%s' % (latency_url, i)\n",
- " printer('%s %s' % ('GET', this_latency_url),\n",
- " debug=True)\n",
- " urlparts = urlparse(latency_url)\n",
- " try:\n",
- " if urlparts[0] == 'https':\n",
- " h = SpeedtestHTTPSConnection(\n",
- " urlparts[1],\n",
- " source_address=source_address_tuple\n",
- " )\n",
- " else:\n",
- " h = SpeedtestHTTPConnection(\n",
- " urlparts[1],\n",
- " source_address=source_address_tuple\n",
- " )\n",
- " headers = {'User-Agent': user_agent}\n",
- " path = '%s?%s' % (urlparts[2], urlparts[4])\n",
- " start = timeit.default_timer()\n",
- " h.request(\"GET\", path, headers=headers)\n",
- " r = h.getresponse()\n",
- " total = (timeit.default_timer() - start)\n",
- " except HTTP_ERRORS:\n",
- " e = get_exception()\n",
- " printer('ERROR: %r' % e, debug=True)\n",
- " cum.append(3600)\n",
- " continue\n",
- "\n",
- " text = r.read(9)\n",
- " if int(r.status) == 200 and text == 'test=test'.encode():\n",
- " cum.append(total)\n",
- " else:\n",
- " cum.append(3600)\n",
- " h.close()\n",
- "\n",
- " avg = round((sum(cum) / 6) * 1000.0, 3)\n",
- " results[avg] = server\n",
- "\n",
- " try:\n",
- " fastest = sorted(results.keys())[0]\n",
- " except IndexError:\n",
- " raise SpeedtestBestServerFailure('Unable to connect to servers to '\n",
- " 'test latency.')\n",
- " best = results[fastest]\n",
- " best['latency'] = fastest\n",
- "\n",
- " self.results.ping = fastest\n",
- " self.results.server = best\n",
- "\n",
- " self._best.update(best)\n",
- " printer('Best Server:\\n%r' % best, debug=True)\n",
- " return best\n",
- "\n",
- " def download(self, callback=do_nothing, threads=None):\n",
- " \"\"\"Test download speed against speedtest.net\n",
- " A ``threads`` value of ``None`` will fall back to those dictated\n",
- " by the speedtest.net configuration\n",
- " \"\"\"\n",
- "\n",
- " urls = []\n",
- " for size in self.config['sizes']['download']:\n",
- " for _ in range(0, self.config['counts']['download']):\n",
- " urls.append('%s/random%sx%s.jpg' %\n",
- " (os.path.dirname(self.best['url']), size, size))\n",
- "\n",
- " request_count = len(urls)\n",
- " requests = []\n",
- " for i, url in enumerate(urls):\n",
- " requests.append(\n",
- " build_request(url, bump=i, secure=self._secure)\n",
- " )\n",
- "\n",
- " def producer(q, requests, request_count):\n",
- " for i, request in enumerate(requests):\n",
- " thread = HTTPDownloader(\n",
- " i,\n",
- " request,\n",
- " start,\n",
- " self.config['length']['download'],\n",
- " opener=self._opener,\n",
- " shutdown_event=self._shutdown_event\n",
- " )\n",
- " thread.start()\n",
- " q.put(thread, True)\n",
- " callback(i, request_count, start=True)\n",
- "\n",
- " finished = []\n",
- "\n",
- " def consumer(q, request_count):\n",
- " while len(finished) < request_count:\n",
- " thread = q.get(True)\n",
- " while thread.isAlive():\n",
- " thread.join(timeout=0.1)\n",
- " finished.append(sum(thread.result))\n",
- " callback(thread.i, request_count, end=True)\n",
- "\n",
- " q = Queue(threads or self.config['threads']['download'])\n",
- " prod_thread = threading.Thread(target=producer,\n",
- " args=(q, requests, request_count))\n",
- " cons_thread = threading.Thread(target=consumer,\n",
- " args=(q, request_count))\n",
- " start = timeit.default_timer()\n",
- " prod_thread.start()\n",
- " cons_thread.start()\n",
- " while prod_thread.isAlive():\n",
- " prod_thread.join(timeout=0.1)\n",
- " while cons_thread.isAlive():\n",
- " cons_thread.join(timeout=0.1)\n",
- "\n",
- " stop = timeit.default_timer()\n",
- " self.results.bytes_received = sum(finished)\n",
- " self.results.download = (\n",
- " (self.results.bytes_received / (stop - start)) * 8.0\n",
- " )\n",
- " if self.results.download > 100000:\n",
- " self.config['threads']['upload'] = 8\n",
- " return self.results.download\n",
- "\n",
- " def upload(self, callback=do_nothing, pre_allocate=True, threads=None):\n",
- " \"\"\"Test upload speed against speedtest.net\n",
- " A ``threads`` value of ``None`` will fall back to those dictated\n",
- " by the speedtest.net configuration\n",
- " \"\"\"\n",
- "\n",
- " sizes = []\n",
- "\n",
- " for size in self.config['sizes']['upload']:\n",
- " for _ in range(0, self.config['counts']['upload']):\n",
- " sizes.append(size)\n",
- "\n",
- " # request_count = len(sizes)\n",
- " request_count = self.config['upload_max']\n",
- "\n",
- " requests = []\n",
- " for i, size in enumerate(sizes):\n",
- " # We set ``0`` for ``start`` and handle setting the actual\n",
- " # ``start`` in ``HTTPUploader`` to get better measurements\n",
- " data = HTTPUploaderData(\n",
- " size,\n",
- " 0,\n",
- " self.config['length']['upload'],\n",
- " shutdown_event=self._shutdown_event\n",
- " )\n",
- " if pre_allocate:\n",
- " data.pre_allocate()\n",
- "\n",
- " headers = {'Content-length': size}\n",
- " requests.append(\n",
- " (\n",
- " build_request(self.best['url'], data, secure=self._secure,\n",
- " headers=headers),\n",
- " size\n",
- " )\n",
- " )\n",
- "\n",
- " def producer(q, requests, request_count):\n",
- " for i, request in enumerate(requests[:request_count]):\n",
- " thread = HTTPUploader(\n",
- " i,\n",
- " request[0],\n",
- " start,\n",
- " request[1],\n",
- " self.config['length']['upload'],\n",
- " opener=self._opener,\n",
- " shutdown_event=self._shutdown_event\n",
- " )\n",
- " thread.start()\n",
- " q.put(thread, True)\n",
- " callback(i, request_count, start=True)\n",
- "\n",
- " finished = []\n",
- "\n",
- " def consumer(q, request_count):\n",
- " while len(finished) < request_count:\n",
- " thread = q.get(True)\n",
- " while thread.isAlive():\n",
- " thread.join(timeout=0.1)\n",
- " finished.append(thread.result)\n",
- " callback(thread.i, request_count, end=True)\n",
- "\n",
- " q = Queue(threads or self.config['threads']['upload'])\n",
- " prod_thread = threading.Thread(target=producer,\n",
- " args=(q, requests, request_count))\n",
- " cons_thread = threading.Thread(target=consumer,\n",
- " args=(q, request_count))\n",
- " start = timeit.default_timer()\n",
- " prod_thread.start()\n",
- " cons_thread.start()\n",
- " while prod_thread.isAlive():\n",
- " prod_thread.join(timeout=0.1)\n",
- " while cons_thread.isAlive():\n",
- " cons_thread.join(timeout=0.1)\n",
- "\n",
- " stop = timeit.default_timer()\n",
- " self.results.bytes_sent = sum(finished)\n",
- " self.results.upload = (\n",
- " (self.results.bytes_sent / (stop - start)) * 8.0\n",
- " )\n",
- " return self.results.upload\n",
- "\n",
- "\n",
- "def ctrl_c(shutdown_event):\n",
- " \"\"\"Catch Ctrl-C key sequence and set a SHUTDOWN_EVENT for our threaded\n",
- " operations\n",
- " \"\"\"\n",
- " def inner(signum, frame):\n",
- " shutdown_event.set()\n",
- " printer('\\nCancelling...', error=True)\n",
- " sys.exit(0)\n",
- " return inner\n",
- "\n",
- "\n",
- "def version():\n",
- " \"\"\"Print the version\"\"\"\n",
- "\n",
- " printer('speedtest-cli %s' % __version__)\n",
- " printer('Python %s' % sys.version.replace('\\n', ''))\n",
- " sys.exit(0)\n",
- "\n",
- "\n",
- "def csv_header(delimiter=','):\n",
- " \"\"\"Print the CSV Headers\"\"\"\n",
- "\n",
- " printer(SpeedtestResults.csv_header(delimiter=delimiter))\n",
- " sys.exit(0)\n",
- "\n",
- "\n",
- "def parse_args():\n",
- " \"\"\"Function to handle building and parsing of command line arguments\"\"\"\n",
- " description = (\n",
- " 'Command line interface for testing internet bandwidth using '\n",
- " 'speedtest.net.\\n'\n",
- " '------------------------------------------------------------'\n",
- " '--------------\\n'\n",
- " 'https://github.com/sivel/speedtest-cli')\n",
- "\n",
- " parser = ArgParser(description=description)\n",
- " # Give optparse.OptionParser an `add_argument` method for\n",
- " # compatibility with argparse.ArgumentParser\n",
- " try:\n",
- " parser.add_argument = parser.add_option\n",
- " except AttributeError:\n",
- " pass\n",
- " parser.add_argument('--no-download', dest='download', default=True,\n",
- " action='store_const', const=False,\n",
- " help='Do not perform download test')\n",
- " parser.add_argument('--no-upload', dest='upload', default=True,\n",
- " action='store_const', const=False,\n",
- " help='Do not perform upload test')\n",
- " parser.add_argument('--single', default=False, action='store_true',\n",
- " help='Only use a single connection instead of '\n",
- " 'multiple. This simulates a typical file '\n",
- " 'transfer.')\n",
- " parser.add_argument('--bytes', dest='units', action='store_const',\n",
- " const=('byte', 8), default=('bit', 1),\n",
- " help='Display values in bytes instead of bits. Does '\n",
- " 'not affect the image generated by --share, nor '\n",
- " 'output from --json or --csv')\n",
- " parser.add_argument('--share', action='store_true',\n",
- " help='Generate and provide a URL to the speedtest.net '\n",
- " 'share results image, not displayed with --csv')\n",
- " parser.add_argument('--simple', action='store_true', default=False,\n",
- " help='Suppress verbose output, only show basic '\n",
- " 'information')\n",
- " parser.add_argument('--csv', action='store_true', default=False,\n",
- " help='Suppress verbose output, only show basic '\n",
- " 'information in CSV format. Speeds listed in '\n",
- " 'bit/s and not affected by --bytes')\n",
- " parser.add_argument('--csv-delimiter', default=',', type=PARSER_TYPE_STR,\n",
- " help='Single character delimiter to use in CSV '\n",
- " 'output. Default \",\"')\n",
- " parser.add_argument('--csv-header', action='store_true', default=False,\n",
- " help='Print CSV headers')\n",
- " parser.add_argument('--json', action='store_true', default=False,\n",
- " help='Suppress verbose output, only show basic '\n",
- " 'information in JSON format. Speeds listed in '\n",
- " 'bit/s and not affected by --bytes')\n",
- " parser.add_argument('--list', action='store_true',\n",
- " help='Display a list of speedtest.net servers '\n",
- " 'sorted by distance')\n",
- " parser.add_argument('--server', type=PARSER_TYPE_INT, action='append',\n",
- " help='Specify a server ID to test against. Can be '\n",
- " 'supplied multiple times')\n",
- " parser.add_argument('--exclude', type=PARSER_TYPE_INT, action='append',\n",
- " help='Exclude a server from selection. Can be '\n",
- " 'supplied multiple times')\n",
- " parser.add_argument('--mini', help='URL of the Speedtest Mini server')\n",
- " parser.add_argument('--source', help='Source IP address to bind to')\n",
- " parser.add_argument('--timeout', default=10, type=PARSER_TYPE_FLOAT,\n",
- " help='HTTP timeout in seconds. Default 10')\n",
- " parser.add_argument('--secure', action='store_true',\n",
- " help='Use HTTPS instead of HTTP when communicating '\n",
- " 'with speedtest.net operated servers')\n",
- " parser.add_argument('--no-pre-allocate', dest='pre_allocate',\n",
- " action='store_const', default=True, const=False,\n",
- " help='Do not pre allocate upload data. Pre allocation '\n",
- " 'is enabled by default to improve upload '\n",
- " 'performance. To support systems with '\n",
- " 'insufficient memory, use this option to avoid a '\n",
- " 'MemoryError')\n",
- " parser.add_argument('--version', action='store_true',\n",
- " help='Show the version number and exit')\n",
- " parser.add_argument('--debug', action='store_true',\n",
- " help=ARG_SUPPRESS, default=ARG_SUPPRESS)\n",
- "\n",
- " options = parser.parse_args(args=[])\n",
- " if isinstance(options, tuple):\n",
- " args = options[0]\n",
- " else:\n",
- " args = options\n",
- " return args\n",
- "\n",
- "\n",
- "def validate_optional_args(args):\n",
- " \"\"\"Check if an argument was provided that depends on a module that may\n",
- " not be part of the Python standard library.\n",
- " If such an argument is supplied, and the module does not exist, exit\n",
- " with an error stating which module is missing.\n",
- " \"\"\"\n",
- " optional_args = {\n",
- " 'json': ('json/simplejson python module', json),\n",
- " 'secure': ('SSL support', HTTPSConnection),\n",
- " }\n",
- "\n",
- " for arg, info in optional_args.items():\n",
- " if getattr(args, arg, False) and info[1] is None:\n",
- " raise SystemExit('%s is not installed. --%s is '\n",
- " 'unavailable' % (info[0], arg))\n",
- "\n",
- "\n",
- "def printer(string, quiet=False, debug=False, error=False, **kwargs):\n",
- " \"\"\"Helper function print a string with various features\"\"\"\n",
- "\n",
- " if debug and not DEBUG:\n",
- " return\n",
- "\n",
- " if debug:\n",
- " if sys.stdout.isatty():\n",
- " out = '\\033[1;30mDEBUG: %s\\033[0m' % string\n",
- " else:\n",
- " out = 'DEBUG: %s' % string\n",
- " else:\n",
- " out = string\n",
- "\n",
- " if error:\n",
- " kwargs['file'] = sys.stderr\n",
- "\n",
- " if not quiet:\n",
- " print_(out, **kwargs)\n",
- "\n",
- "\n",
- "def shell():\n",
- " \"\"\"Run the full speedtest.net test\"\"\"\n",
- "\n",
- " global DEBUG\n",
- " shutdown_event = threading.Event()\n",
- "\n",
- " signal.signal(signal.SIGINT, ctrl_c(shutdown_event))\n",
- "\n",
- " args = parse_args()\n",
- "\n",
- " # Print the version and exit\n",
- " if args.version:\n",
- " version()\n",
- "\n",
- " if not args.download and not args.upload:\n",
- " raise SpeedtestCLIError('Cannot supply both --no-download and '\n",
- " '--no-upload')\n",
- "\n",
- " if len(args.csv_delimiter) != 1:\n",
- " raise SpeedtestCLIError('--csv-delimiter must be a single character')\n",
- "\n",
- " if args.csv_header:\n",
- " csv_header(args.csv_delimiter)\n",
- "\n",
- " validate_optional_args(args)\n",
- "\n",
- " debug = getattr(args, 'debug', False)\n",
- " if debug == 'SUPPRESSHELP':\n",
- " debug = False\n",
- " if debug:\n",
- " DEBUG = True\n",
- "\n",
- " if args.simple or args.csv or args.json:\n",
- " quiet = True\n",
- " else:\n",
- " quiet = False\n",
- "\n",
- " if args.csv or args.json:\n",
- " machine_format = True\n",
- " else:\n",
- " machine_format = False\n",
- "\n",
- " # Don't set a callback if we are running quietly\n",
- " if quiet or debug:\n",
- " callback = do_nothing\n",
- " else:\n",
- " callback = print_dots(shutdown_event)\n",
- "\n",
- " printer('Retrieving speedtest.net configuration...', quiet)\n",
- " try:\n",
- " speedtest = Speedtest(\n",
- " source_address=args.source,\n",
- " timeout=args.timeout,\n",
- " secure=args.secure\n",
- " )\n",
- " except (ConfigRetrievalError,) + HTTP_ERRORS:\n",
- " printer('Cannot retrieve speedtest configuration', error=True)\n",
- " raise SpeedtestCLIError(get_exception())\n",
- "\n",
- " if args.list:\n",
- " try:\n",
- " speedtest.get_servers()\n",
- " except (ServersRetrievalError,) + HTTP_ERRORS:\n",
- " printer('Cannot retrieve speedtest server list', error=True)\n",
- " raise SpeedtestCLIError(get_exception())\n",
- "\n",
- " for _, servers in sorted(speedtest.servers.items()):\n",
- " for server in servers:\n",
- " line = ('%(id)5s) %(sponsor)s (%(name)s, %(country)s) '\n",
- " '[%(d)0.2f km]' % server)\n",
- " try:\n",
- " printer(line)\n",
- " except IOError:\n",
- " e = get_exception()\n",
- " if e.errno != errno.EPIPE:\n",
- " raise\n",
- " sys.exit(0)\n",
- "\n",
- " printer('Testing from %(isp)s (%(ip)s)...' % speedtest.config['client'],\n",
- " quiet)\n",
- "\n",
- " if not args.mini:\n",
- " printer('Retrieving speedtest.net server list...', quiet)\n",
- " try:\n",
- " speedtest.get_servers(servers=args.server, exclude=args.exclude)\n",
- " except NoMatchedServers:\n",
- " raise SpeedtestCLIError(\n",
- " 'No matched servers: %s' %\n",
- " ', '.join('%s' % s for s in args.server)\n",
- " )\n",
- " except (ServersRetrievalError,) + HTTP_ERRORS:\n",
- " printer('Cannot retrieve speedtest server list', error=True)\n",
- " raise SpeedtestCLIError(get_exception())\n",
- " except InvalidServerIDType:\n",
- " raise SpeedtestCLIError(\n",
- " '%s is an invalid server type, must '\n",
- " 'be an int' % ', '.join('%s' % s for s in args.server)\n",
- " )\n",
- "\n",
- " if args.server and len(args.server) == 1:\n",
- " printer('Retrieving information for the selected server...', quiet)\n",
- " else:\n",
- " printer('Selecting best server based on ping...', quiet)\n",
- " speedtest.get_best_server()\n",
- " elif args.mini:\n",
- " speedtest.get_best_server(speedtest.set_mini_server(args.mini))\n",
- "\n",
- " results = speedtest.results\n",
- "\n",
- " printer('Hosted by %(sponsor)s (%(name)s) [%(d)0.2f km]: '\n",
- " '%(latency)s ms' % results.server, quiet)\n",
- "\n",
- " if args.download:\n",
- " printer('Testing download speed', quiet,\n",
- " end=('', '\\n')[bool(debug)])\n",
- " speedtest.download(\n",
- " callback=callback,\n",
- " threads=(None, 1)[args.single]\n",
- " )\n",
- " printer('Download: %0.2f M%s/s' %\n",
- " ((results.download / 1000.0 / 1000.0) / args.units[1],\n",
- " args.units[0]),\n",
- " quiet)\n",
- " else:\n",
- " printer('Skipping download test', quiet)\n",
- "\n",
- " if args.upload:\n",
- " printer('Testing upload speed', quiet,\n",
- " end=('', '\\n')[bool(debug)])\n",
- " speedtest.upload(\n",
- " callback=callback,\n",
- " pre_allocate=args.pre_allocate,\n",
- " threads=(None, 1)[args.single]\n",
- " )\n",
- " printer('Upload: %0.2f M%s/s' %\n",
- " ((results.upload / 1000.0 / 1000.0) / args.units[1],\n",
- " args.units[0]),\n",
- " quiet)\n",
- " else:\n",
- " printer('Skipping upload test', quiet)\n",
- "\n",
- " printer('Results:\\n%r' % results.dict(), debug=True)\n",
- "\n",
- " if not args.simple and args.share:\n",
- " results.share()\n",
- "\n",
- " if args.simple:\n",
- " printer('Ping: %s ms\\nDownload: %0.2f M%s/s\\nUpload: %0.2f M%s/s' %\n",
- " (results.ping,\n",
- " (results.download / 1000.0 / 1000.0) / args.units[1],\n",
- " args.units[0],\n",
- " (results.upload / 1000.0 / 1000.0) / args.units[1],\n",
- " args.units[0]))\n",
- " elif args.csv:\n",
- " printer(results.csv(delimiter=args.csv_delimiter))\n",
- " elif args.json:\n",
- " printer(results.json())\n",
- "\n",
- " if args.share and not machine_format:\n",
- " printer('Share results: %s' % results.share())\n",
- "\n",
- "\n",
- "def main():\n",
- " try:\n",
- " shell()\n",
- " except KeyboardInterrupt:\n",
- " printer('\\nCancelling...', error=True)\n",
- " except (SpeedtestException, SystemExit):\n",
- " e = get_exception()\n",
- " # Ignore a successful exit, or argparse exit\n",
- " if getattr(e, 'code', 1) not in (0, 2):\n",
- " msg = '%s' % e\n",
- " if not msg:\n",
- " msg = '%r' % e\n",
- " raise SystemExit('ERROR: %s' % msg)\n",
- "\n",
- "\n",
- "if __name__ == '__main__':\n",
- " main()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "NgCsGSiDu1bY"
- },
- "source": [
- "### Virtual Machine "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "qUU2tyDpSAB2",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Ubuntu VM updater \n",
- "# ================================================================ #\n",
- "\n",
- "from IPython.display import HTML\n",
- "\n",
- "!apt update -qq -y &> /dev/null\n",
- "!apt upgrade -qq -y &> /dev/null\n",
- "!npm i -g npm &> /dev/null\n",
- "\n",
- "display(HTML(\"The system has been updated! \"))"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "arzz5dBiSEDd",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Check VM status \n",
- "Check_IP = True #@param {type:\"boolean\"}\n",
- "Loop_Check = False #@param {type:\"boolean\"}\n",
- "Loop_Interval = 4 #@param {type:\"slider\", min:1, max:15, step:1}\n",
- "# ================================================================ #\n",
- "\n",
- "import time, requests\n",
- "from IPython.display import clear_output\n",
- "Loop = True\n",
- "\n",
- "try:\n",
- " while Loop == True:\n",
- " clear_output(wait=True)\n",
- " !top -bcn1 -w512\n",
- " if Check_IP: print(\"\\nYour Public IP: \" + requests.get('http://ip.42.pl/raw').text)\n",
- " if Loop_Check == False:\n",
- " Loop = False\n",
- " else:\n",
- " time.sleep(Loop_Interval)\n",
- "except:\n",
- " clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "YBpux5mNSHhG",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Get VM specification \n",
- "Output_Format = \"TEXT\" #@param [\"TEXT\", \"HTML\", \"XML\", \"JSON\"]\n",
- "Short_Output = True #@param {type:\"boolean\"}\n",
- "# ================================================================ #\n",
- "\n",
- "import os\n",
- "from google.colab import files\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "try:\n",
- " Output_Format_Ext\n",
- "except NameError:\n",
- " get_ipython().system_raw(\"apt install lshw -qq -y\")\n",
- "\n",
- "if Short_Output:\n",
- " Output_Format = \"txt\"\n",
- " Output_Format2 = \"-short\"\n",
- " Output_Format_Ext = \"txt\"\n",
- "elif Output_Format == \"TEXT\":\n",
- " Output_Format = \"txt\"\n",
- " Output_Format2 = \"\"\n",
- " Output_Format_Ext = \"txt\"\n",
- "else:\n",
- " Output_Format = Output_Format.lower()\n",
- " Output_Format2 = \"-\"+Output_Format.lower()\n",
- " Output_Format_Ext = Output_Format.lower()\n",
- "\n",
- "get_ipython().system_raw(\"lshw \" + Output_Format2 + \" > Specification.\" + Output_Format)\n",
- "files.download(\"/content/Specification.\" + Output_Format_Ext)\n",
- "get_ipython().system_raw(\"rm -f /content/Specification.$outputformatC\")\n",
- "display(HTML(\"Sending log to your browser... \"))"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "nJlifxF8_yv1",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Check for GPU (GPU runtime is needed) \n",
- "# @markdown You should never ever connect to GPU runtime if you do not have any use for GPU at all! \n",
- "# ================================================================ #\n",
- "\n",
- "gpu = !nvidia-smi --query-gpu=gpu_name,driver_version,memory.total --format=csv\n",
- "\n",
- "print(\"\")\n",
- "print(gpu[1])\n",
- "print(\"\")\n",
- "print(\"(If the output shows nothing, that means you are not connected to GPU runtime)\")\n",
- "print(\"----------------------------------------------------------------------------------------------------\")\n",
- "print(\"The Tesla T4 and P100 are fast and support hardware encoding. The K80 and P4 are slower.\")\n",
- "print(\"Sometimes resetting the instance in the 'runtime' tab will give you a different GPU.\")"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "6sxlwKm9SLBa",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Crash the VM \n",
- "# @markdown Run this cell to crash the VM. ONLY when needed!
\n",
- "# @markdown > You might need to run this cell when the VM is out of disk due to rclone caching.\n",
- "# ================================================================ #\n",
- "\n",
- "some_str = ' ' * 5120000000000"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "OOpAjMjxsNd6"
- },
- "source": [
- "# ✦ *EXPERIMENTAL* ✦ \n",
- "\n",
- "**Everything in this section is in EXPERIMENTAL state and/or UNFINISHED and/or LEFT AS IS!\n",
- "\n",
- "Any issue regarding this section will be IGNORED!** "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "UdiQLlm5zX3_"
- },
- "source": [
- "## FFMPEG 1 \n",
- "GPU runtime needed! "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "EFOqhHG6hOVH"
- },
- "source": [
- "### ***Required to use Scripts:*** Install FFmpeg, VCSI & Mkvtoolnix"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "G3JHGE0Jtzme",
- "cellView": "form"
- },
- "source": [
- "#@markdown ← Click Here to Install FFmpeg, VCSI, Mkvtoolnix, Firefox, Furiousmount & Handbrake \n",
- "\n",
- "#@title ← ឵឵Upgrade FFmpeg to v4.2.2 { vertical-output: true }\n",
- "from IPython.display import clear_output\n",
- "import os, urllib.request\n",
- "HOME = os.path.expanduser(\"~\")\n",
- "pathDoneCMD = f'{HOME}/doneCMD.sh'\n",
- "if not os.path.exists(f\"{HOME}/.ipython/ttmg.py\"):\n",
- " hCode = \"https://raw.githubusercontent.com/totalleecher/\" \\\n",
- " \"Google-Colab-CloudTorrent/master/res/ttmg.py\"\n",
- " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/ttmg.py\")\n",
- "\n",
- "from ttmg import (\n",
- " loadingAn,\n",
- " textAn,\n",
- ")\n",
- "\n",
- "loadingAn(name=\"lds\")\n",
- "textAn(\"Installing Dependencies...\", ty='twg')\n",
- "#os.system('pip install git+git://github.com/AWConant/jikanpy.git') //GPU Not supported\n",
- "#os.system('add-apt-repository -y ppa:jonathonf/ffmpeg-4') //GPU Not supported\n",
- "os.system('apt-get update')\n",
- "os.system('apt-get install ffmpeg')\n",
- "os.system('apt-get install mkvtoolnix')\n",
- "os.system('pip install vcsi')\n",
- "#os.system('sudo apt-get install synaptic')\n",
- "#os.system('sudo apt install firefox')\n",
- "os.system('sudo add-apt-repository ppa:stebbins/handbrake-releases -y')\n",
- "os.system('sudo apt update -y')\n",
- "os.system('sudo apt install --install-recommends handbrake-gtk handbrake-cli')\n",
- "#os.system('sudo apt-get install furiusisomount')\n",
- "\n",
- "clear_output()\n",
- "print(\"Install Finished\")"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "ey6-UveDalxR"
- },
- "source": [
- "### » Re-encode a Video to a Different Resolution (*H265*) - Need GPU - Nvidia Telsa P100 or T4 (Support Both Single & Batch Processing)"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "tsY6jhC9SXvF",
- "cellView": "form"
- },
- "source": [
- "#@title Check GPU\n",
- "#@markdown Run this to connect to a Colab Instance, and see what GPU Google gave you.\n",
- "\n",
- "gpu = !nvidia-smi --query-gpu=gpu_name --format=csv\n",
- "print(gpu[1])\n",
- "print(\"The Tesla T4 and P100 are fast and support hardware encoding. The K80 and P4 are slower.\")\n",
- "print(\"Sometimes resetting the instance in the 'runtime' tab will give you a different GPU.\")"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "id": "Zam_JHPDalxc"
- },
- "source": [
- "path = \"\" #@param {type:\"string\"}\n",
- "save_txt = False #@param {type:\"boolean\"}\n",
- "import os, uuid, re, IPython\n",
- "import ipywidgets as widgets\n",
- "import time\n",
- "\n",
- "from glob import glob\n",
- "from IPython.display import HTML, clear_output\n",
- "from google.colab import output, drive\n",
- "\n",
- "def mediainfo():\n",
- " display(HTML(\" \"))\n",
- "# print(path.split(\"/\")[::-1][0])\n",
- " display(HTML(\" \"))\n",
- "# media = !mediainfo \"$path\"\n",
- "# media = \"\\n\".join(media).replace(os.path.dirname(path)+\"/\", \"\")\n",
- " get_ipython().system_raw(\"\"\"mediainfo --LogFile=\"/root/.nfo\" \"$path\" \"\"\")\n",
- " with open('/root/.nfo', 'r') as file:\n",
- " media = file.read()\n",
- " media = media.replace(os.path.dirname(path)+\"/\", \"\")\n",
- " print(media)\n",
- " get_ipython().system_raw(\"rm -f '/root/.nfo'\")\n",
- " \n",
- " if save_txt:\n",
- " txt = path.rpartition('.')[0] + \".txt\"\n",
- " if os.path.exists(txt):\n",
- " get_ipython().system_raw(\"rm -f '$txt'\")\n",
- " !curl -s https://pastebin.com/raw/TApKLQfM -o \"$txt\"\n",
- " with open(txt, 'a+') as file:\n",
- " file.write(\"\\n\\n\")\n",
- " file.write(media)\n",
- "\n",
- "while not os.path.exists(\"/content/drive\"):\n",
- " try:\n",
- " drive.mount(\"/content/drive\")\n",
- " clear_output(wait=True)\n",
- " except:\n",
- " clear_output()\n",
- " \n",
- "if not os.path.exists(\"/usr/bin/mediainfo\"):\n",
- " get_ipython().system_raw(\"apt-get install mediainfo\")\n",
- " \n",
- "mediainfo()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "id": "SHBPElqualx6"
- },
- "source": [
- "import os, sys, re\n",
- "#@markdown Encoder \n",
- "Encoder = \"CPU\" #@param [\"GPU\", \"CPU\"]\n",
- "codec = \"x264\" #@param [\"x264\", \"x265\"]\n",
- "#@markdown Encoding all videos in folder \n",
- "video_folder_path = '' #@param {type:\"string\"}\n",
- "#@markdown ---\n",
- "#@markdown Encoding selected videos \n",
- "video_file_path1 = '' #@param {type:\"string\"}\n",
- "video_file_path2 = '' #@param {type:\"string\"}\n",
- "video_file_path3 = '' #@param {type:\"string\"}\n",
- "video_file_path4 = '' #@param {type:\"string\"}\n",
- "video_file_path5 = '' #@param {type:\"string\"}\n",
- "\n",
- "#counting\n",
- "if video_file_path1 != \"\":\n",
- " coa = 1\n",
- "else:\n",
- " coa = 0\n",
- "\n",
- "if video_file_path2 != \"\":\n",
- " cob = 1\n",
- "else:\n",
- " cob = 0\n",
- "\n",
- "if video_file_path3 != \"\":\n",
- " coc = 1\n",
- "else:\n",
- " coc = 0\n",
- "\n",
- "if video_file_path4 != \"\":\n",
- " cod = 1\n",
- "else:\n",
- " cod = 0\n",
- "\n",
- "if video_file_path5 != \"\":\n",
- " coe = 1\n",
- "else:\n",
- " coe = 0\n",
- "\n",
- "#@markdown ---\n",
- "resolution = '360p' #@param [\"2160p\",\"1440p\",\"1080p\", \"720p\", \"480p\", \"360p\", \"240p\", \"same as input\"]\n",
- "encode_setting = 'Advance' #@param [\"Advance\", \"HEVC\", \"HEVC 10 Bit\"]\n",
- "file_type = 'mkv' #@param [\"mkv\", \"mp4\"]\n",
- "rip_audio = False #@param {type:\"boolean\"}\n",
- "rip_subtitle = False #@param {type:\"boolean\"}\n",
- "\n",
- "if rip_audio == False:\n",
- " rip_audio_string = \"-acodec copy\"\n",
- "else:\n",
- " rip_audio_string = \"-an\"\n",
- "\n",
- "if rip_subtitle == False:\n",
- " rip_subtitle_string = \"-scodec copy\"\n",
- "else:\n",
- " rip_subtitle_string = \"-sn\"\n",
- "\n",
- "\n",
- "if resolution == '2160p':\n",
- " w = '3840'\n",
- "elif resolution == '1440p':\n",
- " w = '2560'\n",
- "elif resolution == '1080p':\n",
- " w = '1980'\n",
- "elif resolution == '720p':\n",
- " w = '1280'\n",
- "elif resolution == '480p':\n",
- " w = '854'\n",
- "elif resolution == '360p':\n",
- " w = '640'\n",
- "elif resolution == '240p':\n",
- " w = '426'\n",
- "else:\n",
- " w = ''\n",
- "\n",
- "if (w == '3840' or w == '2560' or w == '1980' or w == '1280' or w == '854' or w == '640' or w == '426'):\n",
- " scale_string = \"-vf scale=\"+(w)+\":-1:flags=lanczos\" \n",
- "else:\n",
- " scale_string = \"\"\n",
- "\n",
- "ext = \".mp4\",\".MP4\",\".MTS\",\".mts\",\".m2ts\",\".mkv\",\".avi\",\".MOV\",\".mov\",\".wmv\",\".WMV\",\".flv\",\".mpg\",\".webm\",\".WEBM\"\n",
- "# As file at filePath is deleted now, so we should check if file exists or not not before deleting them\n",
- "filePath = \"ffmpeg.txt\"\n",
- "if os.path.exists(filePath):\n",
- " os.remove(filePath)\n",
- "\n",
- "if video_folder_path == \"\":\n",
- " #try:\n",
- " f = open(\"ffmpeg.txt\", \"+w\")\n",
- " x = (video_file_path1) + \"\\n\" + (video_file_path2) + \"\\n\" +(video_file_path3) + \"\\n\" +(video_file_path4) +\"\\n\" + (video_file_path5)\n",
- " f.write(x)\n",
- " f.close()\n",
- " count = coa+cob+coc+cod+coe\n",
- " #except:\n",
- " #err = 1\n",
- "\n",
- "else:\n",
- "#writing temp file\n",
- " for file in os.listdir(video_folder_path):\n",
- " if file.endswith(tuple(ext)):\n",
- " \n",
- " x = os.path.join(video_folder_path, file) \n",
- " #print(x)\n",
- " print(x, file=open(\"ffmpeg.txt\", \"+a\")) \n",
- "\n",
- "#counting line\n",
- " thefilepath = \"ffmpeg.txt\"\n",
- " count = len(open(thefilepath).readlines( ))\n",
- "\n",
- "#@markdown ---\n",
- "#@markdown Advance Settings \n",
- "#@markdown Video Setting \n",
- "preset = 'slow' #@param [\"slow\", \"medium\", \"fast\", \"hq\", \"hp\", \"bd\", \"ll\", \"llhq\", \"llhp\", \"lossless\", \"losslesshp\"]\n",
- "level = '5.2' #@param [\"default\",\"4.1\", \"5.1\", \"5.2\", \"6.2\"]\n",
- "tier = 'main' #@param [\"default\",\"main\", \"high\"]\n",
- "#@markdown Setting only for GPU Encoding
\n",
- "profile = 'main' #@param [\"main\", \"main10\", \"rext\"]\n",
- "pixfmt = 'p010le' #@param [\"nv12\", \"yuv420p\", \"p010le\", \"yuv444p\", \"p016le\", \"yuv444p16le\"]\n",
- "rc = 'vbr_hq' #@param [\"vbr\", \"cbr\", \"vbr_2pass\", \"ll_2pass_size\", \"vbr_hq\", \"cbr_hq\"]\n",
- "rcla = '32' #@param [\"8\", \"16\", \"32\", \"64\"]\n",
- "overall_bitrate = 2500 #@param {type:\"slider\", min:500, max:10000, step:100}\n",
- "max_bitrate = 20000 #@param {type:\"slider\", min:500, max:50000, step:100}\n",
- "buffer_size = 60000 #@param {type:\"slider\", min:500, max:90000, step:100}\n",
- "deblock = -3 #@param {type:\"slider\", min:-6, max:6, step:1}\n",
- "reframe = 5 #@param {type:\"slider\", min:1, max:6, step:1}\n",
- "surfaces = 64 #@param {type:\"slider\", min:0, max:64, step:1}\n",
- "#@markdown Setting only for CPU Encoding
\n",
- "profile_cpu = 'main10' #@param [\"main10\"]\n",
- "pixfmt_cpu = 'yuv420p10le' #@param [\"yuv420p\",\"yuv420p10le\",\"yuv444p\",\"yuv444p16le\"]\n",
- "threads = 16 #@param {type:\"slider\", min:0, max:16, step:1}\n",
- "crf = 28 #@param {type:\"slider\", min:0, max:30, step:1}\n",
- "\n",
- "\n",
- "if level != \"default\":\n",
- " l_string = \"-level \"+str(level)\n",
- "else:\n",
- " l_string =\"\"\n",
- "\n",
- "if tier != \"default\":\n",
- " t_string = \"-tier \"+str(tier)\n",
- "else:\n",
- " t_string = \"\"\n",
- "\n",
- "#tp = '1' #@param [\"0\", \"1\"]\n",
- "#cq = '21' #@param {type:\"string\"}\n",
- "#qm ='21' #@param {type:\"string\"}\n",
- "#qmx = '27' #@param {type:\"string\"}\n",
- "#qp = '23' #@param {type:\"string\"}\n",
- "#qb = '25' #@param {type:\"string\"}\n",
- "#qi = '21' #@param {type:\"string\"}\n",
- "\n",
- "#@markdown Audio Setting \n",
- "\n",
- "audio_output = 'No audio' #@param [\"None\", \"copy\", \"flac\", \"aac\", \"libopus\", \"eac3\", \"No audio\", \"same as input\"]\n",
- "channel = 'same as input' #@param [\"DownMix 2CH\", \"same as input\"]\n",
- "\n",
- "if audio_output == \"same as input\":\n",
- " audio_string = \"-acodec copy\"\n",
- "elif audio_output == \"No audio\":\n",
- " audio_string = \"-an\"\n",
- "elif audio_output == \"None\":\n",
- " audio_string = \"\"\n",
- "else:\n",
- " audio_string = \"-c:a \"+(audio_output)\n",
- "\n",
- "if channel == \"DownMix 2CH\":\n",
- " channel_string =\"-ac 2\"\n",
- "else:\n",
- " channel_string =\"\"\n",
- "\n",
- "#@markdown Subtitle Setting \n",
- "#@markdown Please use ass
file for hardsub \n",
- "hardsub = False #@param {type:\"boolean\"}\n",
- "subtitle_option = 'same as input' #@param [\"None\",\"No sub\", \"Add custom sub\",\"same as input\"]\n",
- "custom_subtitle_path = \"\" #@param {type:\"string\"}\n",
- "\n",
- "#@markdown Custom Added Setting \n",
- "custom_command = \"\" #@param {type:\"string\"}\n",
- "\n",
- "\n",
- "if hardsub == False:\n",
- "\n",
- " if subtitle_option == \"No sub\":\n",
- " subtitle_string = \"-sn\"\n",
- " elif subtitle_option == \"same as input\":\n",
- " subtitle_string = \"-scodec copy\"\n",
- " elif subtitle_option == \"None\":\n",
- " subtitle_string = \"\"\n",
- " else:\n",
- " subtitle_string = \"-i \"+(custom_subtitle_path)\n",
- "\n",
- "else:\n",
- " subtitle_string = \"ass=\"+(custom_subtitle_path)\n",
- "#=================\n",
- "if custom_command != \"\":\n",
- " c_string = custom_command\n",
- "else:\n",
- " c_string = \"\"\n",
- "#=================\n",
- "\n",
- "os.environ['ps'] = preset\n",
- "os.environ['pf'] = profile\n",
- "os.environ['pf_cpu'] = profile_cpu\n",
- "os.environ['pfm'] = pixfmt\n",
- "os.environ['pfmcpu'] = pixfmt_cpu\n",
- "os.environ['br'] = str(overall_bitrate)\n",
- "os.environ['max'] = str(max_bitrate)\n",
- "os.environ['buff'] = str(buffer_size)\n",
- "os.environ['de'] = str(deblock)\n",
- "os.environ['ref'] = str(reframe)\n",
- "os.environ['sur'] = str(surfaces)\n",
- "os.environ['lv'] = l_string\n",
- "os.environ['ti'] = t_string\n",
- "os.environ['rc'] = rc\n",
- "os.environ['rl'] = rcla\n",
- "os.environ['thr'] = str(threads)\n",
- "os.environ['crf'] = str(crf)\n",
- "os.environ['res'] = resolution\n",
- "#os.environ['tp'] = tp\n",
- "#os.environ['cq'] = cq\n",
- "#os.environ['qP'] = qp\n",
- "#os.environ['qB'] = qb\n",
- "#os.environ['qI'] = qi\n",
- "#os.environ['qm'] = qm\n",
- "#os.environ['qmx'] = qmx\n",
- "os.environ['scs'] = str(scale_string)\n",
- "os.environ['aus'] = audio_string\n",
- "os.environ['chc'] = channel_string\n",
- "os.environ['sus'] = subtitle_string\n",
- "os.environ['cus'] = str(c_string)\n",
- "#=================\n",
- "#Batch Encoding\n",
- "if count != 0:\n",
- " f=open('ffmpeg.txt')\n",
- " lines=f.readlines()\n",
- "\n",
- " i = 0\n",
- " while i < count:\n",
- " video_file_path = lines[i]\n",
- " video_file_path = video_file_path.rstrip(\"\\n\")\n",
- " #print(video_file_path)\n",
- "\n",
- " delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
- " testsplit = video_file_path.split(\"/\")\n",
- " filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
- " filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
- " resolution_raw = re.search(\"[^p]{3,4}\", resolution)\n",
- " output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
- "\n",
- " os.environ['inputFile'] = video_file_path\n",
- " os.environ['outputPath'] = output_file_path.group(0)\n",
- " os.environ['fileName'] = filename_raw\n",
- " os.environ['fileType'] = file_type\n",
- " os.environ['resolutionWidth'] = resolution_raw.group(0)\n",
- "\n",
- " if Encoder == \"GPU\":\n",
- " if codec == \"x265\":\n",
- " if encode_setting == \"Advance\":\n",
- "\n",
- " !ffmpeg -hwaccel cuvid -i \"$inputFile\" -stats -flags +loop -c:v hevc_nvenc -preset:v \"$ps\" -rc \"$rc\" -2pass 1 -b:v \"$br\"k -maxrate \"$max\"k -bufsize \"$buff\"k -profile:v \"$pf\" $lv $ti -pix_fmt \"$pfm\" -rc-lookahead \"$rl\" -no-scenecut 1 -weighted_pred 1 -deblock:v \"$de\":\"$de\" -refs:v \"$ref\" -surfaces \"$sur\" $scs $aus $chs $sus $cus \"$outputPath\"/\"$fileName\"-\"$res\".\"$fileType\" \n",
- " \n",
- " elif encode_setting == \"HEVC\":\n",
- " !ffmpeg -hwaccel cuvid -i \"$inputFile\" -stats -flags +loop -c:v hevc_nvenc -preset:v slow -rc vbr_hq -2pass 1 -b:v 2500k -maxrate 20M -bufsize 60M -cq 1 -forced-idr 1 -nonref_p 1 -pix_fmt p010le -rc-lookahead 32 -no-scenecut 1 -weighted_pred 1 -deblock:v -3:-3 -refs:v 5 -surfaces 64 $scs $aus $sus \"$outputPath\"/\"$fileName\"-\"$res\".\"$fileType\" \n",
- "\n",
- " else:\n",
- " !ffmpeg -hwaccel cuvid -stats -flags +loop -c:v hevc_nvenc -preset:v slow -rc vbr_hq -2pass 1 -b:v 2500k -maxrate 20M -bufsize 60M -cq 1 -forced-idr 1 -nonref_p 1 -profile:v main10 -pix_fmt p010le -rc-lookahead 32 -no-scenecut 1 -weighted_pred 1 -deblock:v -3:-3 -refs:v 5 -surfaces 64 $scs $aus $sus \"$outputPath\"/\"$fileName\"-\"$res\".\"$fileType\" \n",
- " else:\n",
- " !ffmpeg -hwaccel cuvid -i \"$inputFile\" -c:v h264_cuvid $scs $aus $sus \"$outputPath\"/\"$fileName\"-\"$res\".\"$fileType\" \n",
- " \n",
- " else:\n",
- " if codec == \"x265\":\n",
- " if encode_setting == \"Advance\":\n",
- " !ffmpeg -i \"$inputFile\" -flags +loop -c:v libx265 -profile:v \"$pf_cpu\" $lv $ti -pix_fmt \"$pfmcpu\" -threads \"$thr\" -thread_type frame -preset:v \"$ps\" -crf \"$crf\" -x265-params \"rc-lookahead=40:bframes=4:b-adapt=2:ref=6:aq-mode=0:aq-strength=0:aq-motion=0:me=hex:subme=3:max-merge=3:weightb=1:no-fast-intra=1:tskip-fast=0:rskip=0:strong-intra-smoothing=0:b-intra=1:early-skip=0:sao=0:rd=1:psy-rd=0:deblock=-5,-5\" $scs $aus $chs $sus $cus \"$outputPath\"/\"$fileName\"-\"$res\".\"$fileType\" \n",
- " \n",
- " elif encode_setting == \"HEVC\":\n",
- " !ffmpeg -i \"$inputFile\" -c:v libx265 -crf 28 -threads 6 -thread_type frame $scs $aus $sus \"$outputPath\"/\"$fileName\"-\"$res\".\"$fileType\" \n",
- "\n",
- " else:\n",
- " !ffmpeg -i \"$inputFile\" -c:v libx265 -profile:v main10 -crf 28 -threads 6 -thread_type frame $scs $aus $sus \"$outputPath\"/\"$fileName\"-\"$res\".\"$fileType\" \n",
- " else:\n",
- " !ffmpeg -hide_banner -i \"$inputFile\" -c:v libx264 -preset \"$ps\" -crf \"$crf\" -threads \"$thr\" -strict experimental $scs $aus $sus \"$outputPath\"/\"$fileName\"-\"$res\".\"$fileType\" \n",
- "\n",
- " i += 1\n",
- "\n",
- " else:\n",
- " print(\"All Finished\")\n",
- " os.remove(filePath)\n",
- "else:\n",
- " print(\"Please input file or folder path\")\n",
- "#End of Code V1.5 - Codemater - "
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "GahMjYf8miNs"
- },
- "source": [
- "### » Generate Thumbnails - Preview from Video "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "0nY7QbDIrnGl",
- "cellView": "form"
- },
- "source": [
- "#@markdown ← Click Here to generate thumbnail for all video in input folder path \n",
- "\n",
- "import os\n",
- "folder_path = \"\" #@param {type:\"string\"}\n",
- "ext = \".mp4\",\".MP4\",\".MTS\",\".mts\",\".m2ts\",\".mkv\",\".avi\",\".MOV\",\".mov\",\".wmv\",\".WMV\",\".flv\",\".mpg\",\".webm\",\".WEBM\"\n",
- "video_path = '' #@param {type:\"string\"}\n",
- "\n",
- "\n",
- "#counting\n",
- "if video_path != \"\":\n",
- " count = 1\n",
- "else:\n",
- " count = 0\n",
- "\n",
- "# As file at filePath is deleted now, so we should check if file exists or not not before deleting them\n",
- "filePath = \"vcsi.txt\"\n",
- "if os.path.exists(filePath):\n",
- " os.remove(filePath)\n",
- "\n",
- "\n",
- "\n",
- "if (folder_path == \"\") and (video_path != \"\"):\n",
- " #try:\n",
- " f = open(\"vcsi.txt\", \"+w\")\n",
- " f.write(video_path)\n",
- " f.close()\n",
- " count = 1\n",
- "\n",
- "elif (folder_path == \"\") and (video_path == \"\"):\n",
- " count = 0\n",
- "\n",
- "else:\n",
- "#writing temp file\n",
- " for file in os.listdir(folder_path):\n",
- " if file.endswith(tuple(ext)):\n",
- " \n",
- " x = os.path.join(folder_path, file) \n",
- " #print(x)\n",
- " print(x, file=open(\"vcsi.txt\", \"+a\")) \n",
- "\n",
- "#counting line\n",
- " thefilepath = \"vcsi.txt\"\n",
- " count = len(open(thefilepath).readlines( ))\n",
- "\n",
- "\n",
- "import os, sys, re\n",
- "from IPython.display import Image, display\n",
- "os.makedirs(\"/content/drive/My Drive/Thumbnail\", exist_ok=True)\n",
- "\n",
- "output_file_type = 'png' #@param [\"png\", \"jpg\"]\n",
- "creation_engine = 'vcsi' #@param [\"ffmpeg\", \"vcsi\"]\n",
- "output_path = 'same folder' #@param [\"same folder\", \"My Drive/Thumbnail\"]\n",
- "#@markdown Eg : gird 3 = 3x3
\n",
- "grid = 4 #@param {type:\"slider\", min:1, max:20, step:1}\n",
- "default_grid = True #@param {type:\"boolean\"}\n",
- "time_stamp = False #@param {type:\"boolean\"}\n",
- "\n",
- "\n",
- "if time_stamp == True:\n",
- " t_string = \"-t\"\n",
- "else:\n",
- " t_string = \"\"\n",
- "\n",
- "if default_grid == False:\n",
- " g_string = \"-g \" + str(grid) + \"x\" + str(grid) \n",
- "else:\n",
- " g_string = \"\"\n",
- "\n",
- "os.environ['ts'] = t_string\n",
- "os.environ['gs'] = g_string\n",
- "#Batch Encoding\n",
- "if count != 0:\n",
- " f=open('vcsi.txt')\n",
- " lines=f.readlines()\n",
- "\n",
- " i = 0\n",
- " while i < count:\n",
- " video_file_path = lines[i]\n",
- " video_file_path = video_file_path.rstrip(\"\\n\")\n",
- " print(video_file_path)\n",
- " \n",
- " output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
- " output_file_path_raw = output_file_path.group(0)\n",
- " delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
- " filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
- " filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
- " file_extension = re.search(\".{3}$\", filename)\n",
- " file_extension_raw = file_extension.group(0)\n",
- "\n",
- " os.environ['inputFile'] = video_file_path\n",
- " os.environ['outputPath'] = output_file_path_raw\n",
- " os.environ['outputExtension'] = output_file_type\n",
- " os.environ['fileName'] = filename_raw\n",
- " os.environ['fileExtension'] = file_extension_raw\n",
- "\n",
- " if output_path == \"same folder\":\n",
- " if creation_engine == 'ffmpeg':\n",
- " !ffmpeg -hide_banner -i \"$inputFile\" -vframes 1 -q:v 0 -vf \"select=not(mod(n\\,200)),scale=-1:480,tile=3x2\" -an \"$outputPath\"/\"$fileName\"_thumbnails.\"$outputExtension\"\n",
- "\n",
- " if output_path == \"same folder\":\n",
- " if creation_engine == 'vcsi':\n",
- " !vcsi $ts $gs \"$inputFile\" -o \"$outputPath\"/\"$fileName\"_thumbnails.\"$outputExtension\"\n",
- "\n",
- " if not output_path == \"same folder\":\n",
- " !vcsi $ts $gs \"$inputFile\" -o \"/content/drive/My Drive/Thumbnail\"/\"$fileName\"_thumbnails.\"$outputExtension\"\n",
- "\n",
- " i += 1\n",
- "\n",
- " else:\n",
- " print(\"All Finished\")\n",
- " os.remove(filePath)\n",
- "else:\n",
- " print(\"Please video file or folder path\")\n",
- "#End of Code V1.2 - Codemater - "
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "NQ0TxfKeghR8"
- },
- "source": [
- "### » Misc."
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "Ls4O5VLwief-",
- "cellView": "form"
- },
- "source": [
- "#@title Convert *.mkv* ➔ *.mp4* (Lossless)\n",
- "import os, sys, re\n",
- "\n",
- "video_file_path = \"\" #@param {type:\"string\"}\n",
- "\n",
- "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
- "output_file_path_raw = output_file_path.group(0)\n",
- "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
- "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
- "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
- "\n",
- "os.environ['inputFile'] = video_file_path\n",
- "os.environ['outputFile'] = filename_raw\n",
- "os.environ['outputPath'] = output_file_path_raw\n",
- "\n",
- "!ffmpeg -hide_banner -i \"$inputFile\" -c copy -strict experimental \"$outputPath\"\"$outputFile\".mp4"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "iFBUeQhn7QTc",
- "cellView": "form"
- },
- "source": [
- "#@title Convert Trim Video File (Lossless)\n",
- "\n",
- "import os, sys, re\n",
- "\n",
- "video_file_path = \"\" #@param {type:\"string\"}\n",
- "start_time = \"00:00:00.000\" #@param {type:\"string\"}\n",
- "end_time = \"00:01:00.000\" #@param {type:\"string\"}\n",
- "\n",
- "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
- "output_file_path_raw = output_file_path.group(0)\n",
- "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
- "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
- "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
- "file_extension = re.search(\".{3}$\", filename)\n",
- "file_extension_raw = file_extension.group(0)\n",
- "\n",
- "os.environ['inputFile'] = video_file_path\n",
- "os.environ['outputPath'] = output_file_path_raw\n",
- "os.environ['startTime'] = start_time\n",
- "os.environ['endTime'] = end_time\n",
- "os.environ['fileName'] = filename_raw\n",
- "os.environ['fileExtension'] = file_extension_raw\n",
- "\n",
- "!ffmpeg -hide_banner -i \"$inputFile\" -ss \"$startTime\" -to \"$endTime\" -c copy \"$outputPath\"/\"$fileName\"-TRIM.\"$fileExtension\""
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "nSeO98YQoTJe",
- "cellView": "form"
- },
- "source": [
- "#@title Extract Audio from Video File (Lossless)\n",
- "\n",
- "import os, sys, re\n",
- "\n",
- "video_file_path = \"\" #@param {type:\"string\"}\n",
- "output_file_extension = 'm4a' #@param [\"m4a\", \"mp3\", \"opus\", \"flac\", \"wav\"]\n",
- "\n",
- "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
- "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
- "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
- "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
- "\n",
- "os.environ['inputFile'] = video_file_path\n",
- "os.environ['outputPath'] = output_file_path.group(0)\n",
- "os.environ['fileName'] = filename_raw\n",
- "os.environ['fileType'] = output_file_extension\n",
- "\n",
- "!ffmpeg -hide_banner -i \"$inputFile\" -vn -c:a copy \"$outputPath\"/\"$fileName\"-audio.\"$fileType\""
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "id": "CEHi5EMm9lXG"
- },
- "source": [
- "#@title Crop Video\n",
- "import os, sys, re\n",
- "\n",
- "video_file_path = \"\" #@param {type:\"string\"}\n",
- "out_width = \"1280\" #@param {type:\"string\"}\n",
- "out_height = \"200\" #@param {type:\"string\"}\n",
- "starting_position_x = \"0\" #@param {type:\"string\"}\n",
- "starting_position_y = \"300\" #@param {type:\"string\"}\n",
- "\n",
- "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
- "output_file_path_raw = output_file_path.group(0)\n",
- "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
- "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
- "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
- "file_extension = re.search(\".{3}$\", filename)\n",
- "file_extension_raw = file_extension.group(0)\n",
- "\n",
- "os.environ['inputFile'] = video_file_path\n",
- "os.environ['outputPath'] = output_file_path_raw\n",
- "os.environ['outWidth'] = out_width\n",
- "os.environ['outHeight'] = out_height\n",
- "os.environ['positionX'] = starting_position_x\n",
- "os.environ['positionY'] = starting_position_y\n",
- "os.environ['fileName'] = filename_raw\n",
- "os.environ['fileExtension'] = file_extension_raw\n",
- "\n",
- "!ffmpeg -hide_banner -i \"$inputFile\" -filter:v \"crop=$outWidth:$outHeight:$positionX:$positionY\" \"$outputPath\"/\"$fileName\"-CROP.\"$fileExtension\""
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "id": "ee5omyu53kv0"
- },
- "source": [
- "#@title Extract Individual Frames from Video (*Lossless*)\n",
- "import os, sys, re\n",
- "\n",
- "video_file_path = \"\" #@param {type:\"string\"}\n",
- "start_time = \"00:00:00.000\" #@param {type:\"string\"}\n",
- "end_time = \"00:01:00.000\" #@param {type:\"string\"}\n",
- "frame_rate = \"23.976\" #@param {type:\"string\"}\n",
- "\n",
- "#@markdown This will create a folder in the same directory titled \"`Extracted Frames`\"\n",
- "#@markdown * [*Example*](https://yuju.pw/y/36pP.png) *of output folder*\n",
- "\n",
- "\n",
- "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
- "output_file_path_raw = output_file_path.group(0)\n",
- "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
- "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
- "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
- "file_extension = re.search(\".{3}$\", filename)\n",
- "file_extension_raw = file_extension.group(0)\n",
- "\n",
- "os.environ['inputFile'] = video_file_path\n",
- "os.environ['outputPath'] = output_file_path_raw\n",
- "os.environ['startTime'] = start_time\n",
- "os.environ['endTime'] = end_time\n",
- "os.environ['frameRate'] = frame_rate\n",
- "os.environ['fileName'] = filename_raw\n",
- "os.environ['fileExtension'] = file_extension_raw\n",
- "\n",
- "!mkdir \"$outputPath\"/\"Extracted Frames\"\n",
- "!ffmpeg -hide_banner -i \"$inputFile\" -ss \"$startTime\" -to \"$endTime\" -r \"$frameRate\"/1 \"$outputPath\"/\"Extracted Frames\"/frame%04d.png\n"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "qRVrWJDPFvYY",
- "cellView": "form"
- },
- "source": [
- "#@markdown ← Verify Tracks for Video \n",
- "import os, sys, re\n",
- "video_file_path = \"\" #@param {type:\"string\"}\n",
- "\n",
- "!mkvmerge -i \"$video_file_path\""
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "id": "IVoQDyfT06bN"
- },
- "source": [
- "#@title Extract Subtitle from Video \n",
- "import os, sys, re\n",
- "\n",
- "video_file_path = \"\" #@param {type:\"string\"}\n",
- "output_file_type = 'idx/sub' #@param [\"srt\", \"ass\", \"idx/sub\"]\n",
- "\n",
- "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
- "output_file_path_raw = output_file_path.group(0)\n",
- "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
- "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
- "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
- "file_extension = re.search(\".{3}$\", filename)\n",
- "file_extension_raw = file_extension.group(0)\n",
- "\n",
- "os.environ['inputFile'] = video_file_path\n",
- "os.environ['outputPath'] = output_file_path_raw\n",
- "os.environ['outputExtension'] = output_file_type\n",
- "os.environ['fileName'] = filename_raw\n",
- "os.environ['fileExtension'] = file_extension_raw\n",
- "\n",
- "if output_file_type == 'srt':\n",
- " !ffmpeg -hide_banner -i \"$inputFile\" \"$outputPath\"/\"$fileName\".\"$outputExtension\"\n",
- "\n",
- "if output_file_type == 'ass':\n",
- " !ffmpeg -hide_banner -i \"$inputFile\" \"$outputPath\"/\"$fileName\".\"$outputExtension\"\n",
- "\n",
- "if output_file_type == 'idx/sub':\n",
- " !mkvextract \"$inputFile\" tracks 2:\"$outputPath\"/\"$fileName\".idx"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "aURlOf9BC1P3",
- "cellView": "form"
- },
- "source": [
- "#@title Convert Audio Filetype (*mp3, m4a, ogg, flac, etc.*)\n",
- "import os, sys, re\n",
- "\n",
- "audio_file_path = \"\" #@param {type:\"string\"}\n",
- "output_file_type = \"mp3\" #@param [\"mp3\", \"ogg\", \"m4a\", \"opus\", \"flac\", \"alac\", \"wav\"]\n",
- "\n",
- "output_file_path = re.search(\"^[\\/].+\\/\", audio_file_path)\n",
- "output_file_path_raw = output_file_path.group(0)\n",
- "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", audio_file_path)\n",
- "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
- "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
- "file_extension = re.search(\".{3}$\", filename)\n",
- "file_extension_raw = file_extension.group(0)\n",
- "\n",
- "os.environ['inputFile'] = audio_file_path\n",
- "os.environ['outputPath'] = output_file_path_raw\n",
- "os.environ['fileExtension'] = output_file_type\n",
- "os.environ['fileName'] = filename_raw\n",
- "\n",
- "!ffmpeg -hide_banner -i \"$inputFile\" \"$outputPath\"/\"$fileName\"converted.\"$fileExtension\""
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "Ja95mvvq8oei"
- },
- "source": [
- "### Extract HardSub (*Code still pending - Require python 3.7*)"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "nsT83IDywPFe",
- "cellView": "form"
- },
- "source": [
- "#@title\n",
- "#@markdown ⬅️ Click Here to START server \n",
- "\n",
- "!sudo apt-get update \n",
- "!sudo apt install tesseract-ocr\n",
- "!sudo apt install libtesseract-dev\n",
- "!sudo apt-get install tesseract-ocr-eng-mya\n",
- "!sudo pip install pytesseract\n",
- "!pip3 install opencv-python\n",
- "!sudo apt-get install libopencv-dev\n",
- "!pip install videocr\n",
- "\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "clear_output()\n",
- "\n",
- "print(\"Server Started Successfully\")"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "EzF2X0m7FIku"
- },
- "source": [
- "!pip install progressbar2 baidu-aip opencv-python-headless numpy"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "3kabgg9wFmjv"
- },
- "source": [
- "!git clone https://github.com/fanyange/ocr_video_hardcoded_subtitles.git"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "D313rmPQFrQ3"
- },
- "source": [
- "%cd /content/ocr_video_hardcoded_subtitles"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "40JsjJCBxWcn"
- },
- "source": [
- "from videocr import get_subtitles\n",
- "\n",
- "if __name__ == '__main__': # This check is mandatory for Windows.\n",
- " print(get_subtitles('video.mp4', lang='chi_sim+eng', sim_threshold=70, conf_threshold=65))"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "KXWYsnPOxJVd"
- },
- "source": [
- "get_subtitles(\n",
- " video_path: str, lang='eng', time_start='0:00', time_end='',\n",
- " conf_threshold=65, sim_threshold=90, use_fullframe=False)"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "Dnkbv5UyGzMJ"
- },
- "source": [
- "%cd /content"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "knnSIyZzG2gs"
- },
- "source": [
- "!git clone https://github.com/aritra1999/Video-OCR"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "dJYInqIZHAPJ"
- },
- "source": [
- "%cd /content/Video-OCR"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "anaBbX-VHEwk"
- },
- "source": [
- "!pip install -r reuirements.txt\n",
- "!python final.py"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "BKvHp7QUKMGL"
- },
- "source": [
- "!git clone https://github.com/rflynn/mangold.git"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "Sc5dglFDKU80"
- },
- "source": [
- "%cd /content/mangold"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "BUm3Yn42KbHD"
- },
- "source": [
- "!python ocr1.py pitrain.png"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "CD36vcpf2FSb"
- },
- "source": [
- "## FFMPEG 2 \n",
- "GPU runtime needed! "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "RDHuIkoi6l9a"
- },
- "source": [
- "### » Display Media File Metadata"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "Sv8au_RO6WUs",
- "cellView": "form"
- },
- "source": [
- "import os, sys, re\n",
- "\n",
- "media_file_path = \"\" #@param {type:\"string\"}\n",
- "\n",
- "os.environ['inputFile'] = media_file_path\n",
- "\n",
- "!ffmpeg -i \"$inputFile\" -hide_banner"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "X4yIG_nqYAoH"
- },
- "source": [
- "> *You can ignore the* \"`At least one output file must be specified`\" *error after running this.*\n",
- "\n",
- "\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "66I2t2sQ2SMq"
- },
- "source": [
- "### » Convert *Video File* ➔ *.mp4* (*Lossless*)"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "id": "o6fcC2wN2SM8"
- },
- "source": [
- "import os, sys, re\n",
- "\n",
- "video_file_path = \"\" #@param {type:\"string\"}\n",
- "\n",
- "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
- "output_file_path_raw = output_file_path.group(0)\n",
- "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
- "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
- "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
- "file_extension = re.search(\".{3}$\", filename)\n",
- "file_extension_raw = file_extension.group(0)\n",
- "\n",
- "os.environ['inputFile'] = video_file_path\n",
- "os.environ['outputPath'] = output_file_path_raw\n",
- "os.environ['startTime'] = start_time\n",
- "os.environ['endTime'] = end_time\n",
- "os.environ['fileName'] = filename_raw\n",
- "os.environ['fileExtension'] = file_extension_raw\n",
- "\n",
- "!ffmpeg -hide_banner -i \"$inputFile\" -c copy -strict -2 \"$outputPath\"/\"$fileName\".mp4"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "NObEcBWAJoaz"
- },
- "source": [
- "### » Convert *Video File* ➔ *.mkv* (*Lossless*)"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "id": "zsx4JFLRJoa0"
- },
- "source": [
- "import os, sys, re\n",
- "\n",
- "video_file_path = \"\" #@param {type:\"string\"}\n",
- "\n",
- "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
- "output_file_path_raw = output_file_path.group(0)\n",
- "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
- "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
- "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
- "file_extension = re.search(\".{3}$\", filename)\n",
- "file_extension_raw = file_extension.group(0)\n",
- "\n",
- "os.environ['inputFile'] = video_file_path\n",
- "os.environ['outputPath'] = output_file_path_raw\n",
- "os.environ['startTime'] = start_time\n",
- "os.environ['endTime'] = end_time\n",
- "os.environ['fileName'] = filename_raw\n",
- "os.environ['fileExtension'] = file_extension_raw\n",
- "\n",
- "!ffmpeg -hide_banner -i \"$inputFile\" -c copy -strict -2 \"$outputPath\"/\"$fileName\".mkv"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "FpJXJiRl6-gK"
- },
- "source": [
- "### » Trim Video File (*Lossless*)"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "id": "8rjW6Fcb2SN0"
- },
- "source": [
- "import os, sys, re\n",
- "\n",
- "video_file_path = \"\" #@param {type:\"string\"}\n",
- "start_time = \"00:00:00.000\" #@param {type:\"string\"}\n",
- "end_time = \"00:01:00.000\" #@param {type:\"string\"}\n",
- "\n",
- "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
- "output_file_path_raw = output_file_path.group(0)\n",
- "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
- "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
- "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
- "file_extension = re.search(\".{3}$\", filename)\n",
- "file_extension_raw = file_extension.group(0)\n",
- "\n",
- "os.environ['inputFile'] = video_file_path\n",
- "os.environ['outputPath'] = output_file_path_raw\n",
- "os.environ['startTime'] = start_time\n",
- "os.environ['endTime'] = end_time\n",
- "os.environ['fileName'] = filename_raw\n",
- "os.environ['fileExtension'] = file_extension_raw\n",
- "\n",
- "!ffmpeg -hide_banner -i \"$inputFile\" -ss \"$startTime\" -to \"$endTime\" -c copy \"$outputPath\"/\"$fileName\"-TRIM.\"$fileExtension\""
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "SNDGdMRn3PA-"
- },
- "source": [
- "### » Crop Video"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "KFcIThDuBii_"
- },
- "source": [
- " Crop Variables Explanation:\n",
- "\n",
- "* `out_width` = The width of your cropped video file.\n",
- "* `out_height` = The height of your cropped video file.\n",
- "* `starting_position_x` & `starting_position_y` = These values define the x & y coordinates of the top left corner of your original video to start cropping from.\n",
- "\n",
- "###### *Example: For cropping the black bars from a video that looked like* [this](https://yuju.pw/y/312r.png):\n",
- "* *For your starting coordinates* (`x` , `y`) *you would use* (`0` , `138`).\n",
- "* *For* `out_width` *you would use* `1920`. *And for* `out_height` *you would use `804`.*\n",
- "\n",
- "\n",
- "\n"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "id": "wuMEJdjV2SOT"
- },
- "source": [
- "import os, sys, re\n",
- "\n",
- "video_file_path = \"\" #@param {type:\"string\"}\n",
- "out_width = \"1920\" #@param {type:\"string\"}\n",
- "out_height = \"804\" #@param {type:\"string\"}\n",
- "starting_position_x = \"0\" #@param {type:\"string\"}\n",
- "starting_position_y = \"138\" #@param {type:\"string\"}\n",
- "\n",
- "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
- "output_file_path_raw = output_file_path.group(0)\n",
- "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
- "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
- "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
- "file_extension = re.search(\".{3}$\", filename)\n",
- "file_extension_raw = file_extension.group(0)\n",
- "\n",
- "os.environ['inputFile'] = video_file_path\n",
- "os.environ['outputPath'] = output_file_path_raw\n",
- "os.environ['outWidth'] = out_width\n",
- "os.environ['outHeight'] = out_height\n",
- "os.environ['positionX'] = starting_position_x\n",
- "os.environ['positionY'] = starting_position_y\n",
- "os.environ['fileName'] = filename_raw\n",
- "os.environ['fileExtension'] = file_extension_raw\n",
- "\n",
- "!ffmpeg -hide_banner -i \"$inputFile\" -filter:v \"crop=$outWidth:$outHeight:$positionX:$positionY\" \"$outputPath\"/\"$fileName\"-CROP.\"$fileExtension\""
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "2f-THZmDoOaY"
- },
- "source": [
- "### » Extract Audio from Video File (*Lossless*)"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "id": "JNckCucf2SOs"
- },
- "source": [
- "import os, sys, re\n",
- "\n",
- "video_file_path = \"\" #@param {type:\"string\"}\n",
- "output_file_extension = 'm4a' #@param [\"m4a\", \"mp3\", \"opus\", \"flac\", \"wav\"]\n",
- "\n",
- "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
- "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
- "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
- "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
- "\n",
- "os.environ['inputFile'] = video_file_path\n",
- "os.environ['outputPath'] = output_file_path.group(0)\n",
- "os.environ['fileName'] = filename_raw\n",
- "os.environ['fileType'] = output_file_extension\n",
- "\n",
- "!ffmpeg -hide_banner -i \"$inputFile\" -vn -c:a copy \"$outputPath\"/\"$fileName\"-audio.\"$fileType\""
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "MSUasbRUDP3B"
- },
- "source": [
- "### » Re-encode a Video to a Different Resolution"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "nd2LvSRZCxRe",
- "cellView": "form"
- },
- "source": [
- "import os, sys, re\n",
- "\n",
- "video_file_path = '' #@param {type:\"string\"}\n",
- "resolution = '1080p' #@param [\"2160p\", \"1440p\", \"1080p\", \"720p\", \"480p\", \"360p\", \"240p\"]\n",
- "file_type = 'mp4' #@param [\"mkv\", \"mp4\"]\n",
- "\n",
- "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
- "testsplit = video_file_path.split(\"/\")\n",
- "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
- "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
- "resolution_raw = re.search(\"[^p]{3,4}\", resolution)\n",
- "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
- "\n",
- "os.environ['inputFile'] = video_file_path\n",
- "os.environ['outputPath'] = output_file_path.group(0)\n",
- "os.environ['fileName'] = filename_raw\n",
- "os.environ['fileType'] = file_type\n",
- "os.environ['resolutionHeight'] = resolution_raw.group(0)\n",
- "\n",
- "!ffmpeg -hide_banner -i \"$inputFile\" -vf \"scale=-1:\"$resolutionHeight\"\" -c:a copy -strict experimental \"$outputPath\"/\"$fileName\"-\"$resolutionHeight\"p.\"$fileType\""
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "9UagRtLPyKoQ"
- },
- "source": [
- "### » Extract Individual Frames from Video"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "id": "jTnByMhAyKoF"
- },
- "source": [
- "#@markdown This will create a folder in the same directory titled \"`Extracted Frames`\"\n",
- "* [*Example*](https://yuju.pw/y/36pP.png) *of output folder*\n",
- "import os, sys, re\n",
- "\n",
- "video_file_path = \"\" #@param {type:\"string\"}\n",
- "start_time = \"00:00:00.000\" #@param {type:\"string\"}\n",
- "end_time = \"00:01:00.000\" #@param {type:\"string\"}\n",
- "frame_rate = \"23.976\" #@param {type:\"string\"}\n",
- "\n",
- "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
- "output_file_path_raw = output_file_path.group(0)\n",
- "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
- "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
- "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
- "file_extension = re.search(\".{3}$\", filename)\n",
- "file_extension_raw = file_extension.group(0)\n",
- "\n",
- "os.environ['inputFile'] = video_file_path\n",
- "os.environ['outputPath'] = output_file_path_raw\n",
- "os.environ['startTime'] = start_time\n",
- "os.environ['endTime'] = end_time\n",
- "os.environ['frameRate'] = frame_rate\n",
- "os.environ['fileName'] = filename_raw\n",
- "os.environ['fileExtension'] = file_extension_raw\n",
- "\n",
- "!mkdir \"$outputPath\"/\"Extracted Frames\"\n",
- "!ffmpeg -hide_banner -i \"$inputFile\" -ss \"$startTime\" -to \"$endTime\" -r \"$frameRate\"/1 \"$outputPath\"/\"Extracted Frames\"/frame%04d.png"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "9ZcgdPBT2SQK"
- },
- "source": [
- "### » Generate Thumbnails - Preview from Video (3x2)"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "id": "J2u-Rha8miNy"
- },
- "source": [
- "#@markdown Example of output image: https://yuju.pw/y/39i2.png \n",
- "import os, sys, re\n",
- "\n",
- "video_file_path = \"\" #@param {type:\"string\"}\n",
- "output_file_type = 'png' #@param [\"png\", \"jpg\"]\n",
- "\n",
- "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
- "output_file_path_raw = output_file_path.group(0)\n",
- "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
- "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
- "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
- "file_extension = re.search(\".{3}$\", filename)\n",
- "file_extension_raw = file_extension.group(0)\n",
- "\n",
- "os.environ['inputFile'] = video_file_path\n",
- "os.environ['outputPath'] = output_file_path_raw\n",
- "os.environ['outputExtension'] = output_file_type\n",
- "os.environ['fileName'] = filename_raw\n",
- "os.environ['fileExtension'] = file_extension_raw\n",
- "\n",
- "!ffmpeg -hide_banner -i \"$inputFile\" -vframes 1 -q:v 2 -vf \"select=not(mod(n\\,200)),scale=-1:480,tile=3x2\" -an \"$outputPath\"/\"$fileName\"_thumbnails.\"$outputExtension\""
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "7-3O4en4C4IL"
- },
- "source": [
- "### » Convert Audio Filetype (*mp3, m4a, ogg, flac, etc.*)"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "id": "2sKzNHSG2SQq"
- },
- "source": [
- "import os, sys, re\n",
- "\n",
- "audio_file_path = \"\" #@param {type:\"string\"}\n",
- "output_file_type = \"mp3\" #@param [\"mp3\", \"ogg\", \"m4a\", \"opus\", \"flac\", \"alac\", \"wav\"]\n",
- "\n",
- "output_file_path = re.search(\"^[\\/].+\\/\", audio_file_path)\n",
- "output_file_path_raw = output_file_path.group(0)\n",
- "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", audio_file_path)\n",
- "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
- "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
- "file_extension = re.search(\".{3}$\", filename)\n",
- "file_extension_raw = file_extension.group(0)\n",
- "\n",
- "os.environ['inputFile'] = audio_file_path\n",
- "os.environ['outputPath'] = output_file_path_raw\n",
- "os.environ['fileExtension'] = output_file_type\n",
- "os.environ['fileName'] = filename_raw\n",
- "\n",
- "!ffmpeg -hide_banner -i \"$inputFile\" \"$outputPath\"/\"$fileName\"converted.\"$fileExtension\""
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "VRk2Ye1exWVA"
- },
- "source": [
- "### » Extract + Upload Frames from Video "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "id": "BIGsgarfxWVI"
- },
- "source": [
- "import os, re, time, pathlib\n",
- "import urllib.request\n",
- "from IPython.display import clear_output\n",
- "\n",
- "Auto_UP_Gdrive = False \n",
- "AUTO_MOVE_PATH = \"/content\" \n",
- "HOME = os.path.expanduser(\"~\")\n",
- "pathDoneCMD = f'{HOME}/doneCMD.sh'\n",
- "\n",
- "if not os.path.exists(f\"{HOME}/.ipython/ttmg.py\"):\n",
- " hCode = \"https://raw.githubusercontent.com/biplobsd/\" \\\n",
- " \"Google-Colab-CloudTorrent/master/res/ttmg.py\"\n",
- " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/ttmg.py\")\n",
- "\n",
- "from ttmg import (\n",
- " runSh,\n",
- " findProcess,\n",
- " loadingAn,\n",
- " updateCheck,\n",
- " ngrok\n",
- ")\n",
- "\n",
- "video_file_path = \"\" #@param {type:\"string\"}\n",
- "\n",
- "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
- "output_file_path_raw = output_file_path.group(0)\n",
- "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
- "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
- "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
- "file_extension = re.search(\".{3}$\", filename)\n",
- "file_extension_raw = file_extension.group(0)\n",
- "\n",
- "os.environ['inputFile'] = video_file_path\n",
- "os.environ['outputPath'] = output_file_path_raw\n",
- "os.environ['fileName'] = filename_raw\n",
- "os.environ['fileExtension'] = file_extension_raw\n",
- "\n",
- "!mkdir -p \"/content/frames\"\n",
- "\n",
- "for i in range(10):\n",
- " clear_output()\n",
- " loadingAn()\n",
- " print(\"Uploading Frames...\")\n",
- "\n",
- "%cd \"/content/frames\"\n",
- "!ffmpeg -hide_banner -ss 00:56.0 -i \"$inputFile\" -vframes 1 -q:v 1 -y \"/content/frames/frame1.png\"\n",
- "!curl --silent -F \"reqtype=fileupload\" -F \"fileToUpload=@frame1.png\" https://catbox.moe/user/api.php -o frame1.txt\n",
- "f1 = open('frame1.txt', 'r')\n",
- "%cd \"/content\"\n",
- "file_content1 = f1.read()\n",
- "\n",
- "%cd \"/content/frames\"\n",
- "!ffmpeg -hide_banner -ss 02:20.0 -i \"$inputFile\" -vframes 1 -q:v 1 -y \"/content/frames/frame2.png\"\n",
- "!curl --silent -F \"reqtype=fileupload\" -F \"fileToUpload=@frame2.png\" https://catbox.moe/user/api.php -o frame2.txt\n",
- "%cd \"/content/frames\"\n",
- "f2 = open('frame2.txt', 'r')\n",
- "%cd \"/content\"\n",
- "file_content2 = f2.read()\n",
- "\n",
- "clear_output()\n",
- "print (\"Screenshot URLs:\")\n",
- "print (\"1. \" + file_content1)\n",
- "print (\"2. \" + file_content2)"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "tozwpAhhnm69"
- },
- "source": [
- "### MediaInfo "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "NTULRguzu0b0",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] MediaInfo \n",
- "path_to_file = \"\" # @param {type:\"string\"}\n",
- "save_output_to_file = False # @param {type:\"boolean\"}\n",
- "# ================================================================ #\n",
- "\n",
- "import os, uuid, re, IPython\n",
- "import ipywidgets as widgets\n",
- "import time\n",
- "from glob import glob\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "def mediainfo():\n",
- " get_ipython().system_raw(\"\"\"mediainfo --LogFile=\"/root/.nfo\" \"$path_to_file\" \"\"\")\n",
- " with open('/root/.nfo', 'r') as file:\n",
- " media = file.read()\n",
- " media = media.replace(os.path.dirname(path_to_file)+\"/\", \"\")\n",
- " print(media)\n",
- " get_ipython().system_raw(\"rm -f '/root/.nfo'\")\n",
- " \n",
- " if save_output_to_file:\n",
- " txt = path.rpartition('.')[0] + \".txt\"\n",
- " if os.path.exists(txt):\n",
- " get_ipython().system_raw(\"rm -f '$txt'\")\n",
- " with open(txt, 'a+') as file:\n",
- " file.write(media)\n",
- " \n",
- "if not os.path.exists(\"/usr/bin/mediainfo\"):\n",
- " get_ipython().system_raw(\"apt-get install mediainfo\")\n",
- " \n",
- "mediainfo()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "Ts6zYXUdEfrz"
- },
- "source": [
- "## Google Drive Downloader "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "id": "B10h_KlyE_S5"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Install the Module \n",
- "# ================================================================ #\n",
- "\n",
- "import os\n",
- "from IPython.display import clear_output\n",
- "\n",
- "!pip install googleDriveFileDownloader\n",
- "\n",
- "path1 = '/content/downloads'\n",
- "path2 = '/content/downloads/Google Drive'\n",
- "\n",
- "if os.path.exists(path1) == False:\n",
- " os.makedirs(path1)\n",
- " if os.path.exists(path2) == False:\n",
- " os.makedirs(path2)\n",
- " elif os.path.exists(path2) == True:\n",
- " None\n",
- "elif os.path.exists(path1) == True:\n",
- " if os.path.exists(path2) == False:\n",
- " os.makedirs(path2)\n",
- " elif os.path.exists(path2) == True:\n",
- " None\n",
- "\n",
- "clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "id": "mHTDvjRKEs9n"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] Google Drive Downloader \n",
- "url = \"\" # @param {type:\"string\"}\n",
- "output = \"\" # @param {type:\"string\"}\n",
- "# @markdown > If the \"output\" field is empty, the default download path will be used (/content/downloads/Google Drive).
\n",
- "# @markdown > This downloader is somewhat working.The only problem (for now) is that the downloaded file is not stored with the same name and appears to not have extension as well.\n",
- "# ================================================================ #\n",
- "\n",
- "from IPython.display import clear_output\n",
- "from googleDriveFileDownloader import googleDriveFileDownloader\n",
- "\n",
- "if url == '':\n",
- " print(\"The url field is empty!\")\n",
- "else:\n",
- " if output == '':\n",
- " output = '/content/downloads/Google Drive'\n",
- " %cd \"$output\"\n",
- " a = googleDriveFileDownloader()\n",
- " a.downloadFile(url)\n",
- " else:\n",
- " %cd \"$output\"\n",
- " a = googleDriveFileDownloader()\n",
- " a.downloadFile(url)\n"
- ],
- "execution_count": null,
- "outputs": []
- },
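- {
- "cell_type": "code",
- "metadata": {},
- "source": [
- "# A minimal alternative sketch (not part of the original module above), assuming the third-party gdown package:\n",
- "# gdown usually keeps the file's original name, which works around the renaming issue described in the cell above.\n",
- "# Replace FILE_ID with a real Google Drive file id, or paste a full share link (fuzzy=True accepts both forms).\n",
- "!pip install -q gdown\n",
- "\n",
- "import gdown\n",
- "\n",
- "%cd \"/content/downloads/Google Drive\"\n",
- "gdown.download(url=\"https://drive.google.com/uc?id=FILE_ID\", quiet=False, fuzzy=True)\n",
- "%cd \"/content\""
- ],
- "execution_count": null,
- "outputs": []
- },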
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "FWdEg4H9JlSp"
- },
- "source": [
- "## HandBrake "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "E2seNDqYO8wg",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Install HandBrake \n",
- "# ================================================================ #\n",
- "\n",
- "from IPython.display import clear_output\n",
- "from os import makedirs\n",
- "\n",
- "makedirs(\"/content/temp/HandbrakeTemp\", exist_ok = True)\n",
- "\n",
- "!wget -qq https://github.com/vot/ffbinaries-prebuilt/releases/download/v4.2.1/ffmpeg-4.2.1-linux-64.zip \n",
- "!rm -f ffmpeg-4.2.1-linux-64.zip\n",
- "!add-apt-repository ppa:stebbins/handbrake-releases -y \n",
- "!apt-get install -y handbrake-cli\n",
- "\n",
- "clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "CQdjykVdJw0a",
- "cellView": "form"
- },
- "source": [
- "##################################################\n",
- "#\n",
- "# Code author: SKGHD\n",
- "# https://github.com/SKGHD/Handy\n",
- "#\n",
- "##################################################\n",
- "\n",
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] HandBrake \n",
- "MODE = \"SINGLE\" #@param [\"SINGLE\", \"BATCH\"]\n",
- "# @markdown > Select mode (batch conversion / single file)\n",
- "# @markdown ---\n",
- "SOURCE = \"\" # @param {type:\"string\"}\n",
- "DESTINATION = \"\" # @param {type:\"string\"}\n",
- "FORMAT = \"mkv\" # @param [\"mp4\", \"mkv\"]\n",
- "RESOLUTION = \"480p\" # @param [\"480p\", \"576p\", \"720p\", \"1080p\"]\n",
- "Encoder = \"x264\" # @param [\"x264\", \"x265\"]\n",
- "Encoder_Preset = \"ultrafast\" # @param [\"ultrafast\", \"faster\", \"fast\", \"medium\", \"slow\", \"slower\"]\n",
- "CQ = 30 #@param {type:\"slider\", min:10, max:30, step:1}\n",
- "# @markdown > Choose Constant Quality Rate (higher quality / smaller file size)\n",
- "Additional_Flags = \"\" # @param {type:\"string\"}\n",
- "# ================================================================ #\n",
- "\n",
- "import smtplib\n",
- "import os\n",
- "\n",
- "formats = ('.mkv','.mp4','.ts','.avi','.mov','.wmv')\n",
- "\n",
- "######## Renames the file ########\n",
- "def fileName(fPath):\n",
- " tName = fPath.split('/')[-1] \n",
- " if tName.endswith('ts'):\n",
- " tName = '[HandBrake] ' + tName[:-3] + f' [{RESOLUTION}] [{Encoder}].{FORMAT}' \n",
- " else:\n",
- " tName = '[HandBrake] ' + tName[:-4] + f' [{RESOLUTION}] [{Encoder}].{FORMAT}' \n",
- " return tName\n",
- "\n",
- "def set_resolution():\n",
- " global w,h,flags\n",
- " if RESOLUTION == \"480p\":\n",
- " w, h = \"854\" , \"480\"\n",
- " if RESOLUTION == \"480p\":\n",
- " w, h = \"1024\" , \"576\"\n",
- " elif RESOLUTION == \"720p\":\n",
- " w, h = \"1280\" , \"720\"\n",
- " elif RESOLUTION==\"1080p\":\n",
- " w, h = \"1920\" , \"1080\"\n",
- "\n",
- "def addFlags():\n",
- " global flags\n",
- " flags = f\" --encoder {Encoder} --all-audio -s '0,1,2,3' --cfr --optimize --quality={CQ} --width={w} --height={h} --format={FORMAT} --encoder-preset={Encoder_Preset} \"\n",
- " if Additional_Flags != \"\":\n",
- " flags += str(Additional_Flags)\n",
- "\n",
- "set_resolution()\n",
- "addFlags()\n",
- "\n",
- "##### HandBrake and Rclone #####\n",
- "def runner(path):\n",
- " f_name = fileName(path)\n",
- " hTemp=f\"/content/temp/HandbrakeTemp/{f_name}\"\n",
- " !HandBrakeCLI -i \"$path\" -o \"$hTemp\" $flags\n",
- "\n",
- "\n",
- " if os.path.isfile(hTemp):\n",
- " print(f\"\\n\\n********** Successfully converted {f_name}\\n Now saving to Destination.....\")\n",
- " if os.path.exists('/usr/bin/rclone'):\n",
- " !rclone move \"$hTemp\" --user-agent \"Mozilla\" \"$DESTINATION\" --transfers 20 --checkers 20 --stats-one-line --stats=5s -v --tpslimit 95 --tpslimit-burst 40\n",
- " else:\n",
- " dest = DESTINATION+'/'+f_name\n",
- " !mv \"$hTemp\" \"$dest\"\n",
- " if os.path.isfile(DESTINATION+ '/' +f_name): \n",
- " print(f\"\\n\\n********** Successfully saved {f_name} to Destination\")\n",
- "\n",
- "########## Check Mode ########\n",
- "if MODE==\"BATCH\":\n",
- " os.makedirs(DESTINATION, exist_ok=True)\n",
- " if SOURCE.endswith('/'):\n",
- " pass\n",
- " else: SOURCE +='/'\n",
- " filesList = os.listdir(SOURCE+'.')\n",
- " if os.path.isfile(SOURCE+'processed_db.txt'):\n",
- " pass\n",
- " else:\n",
- " with open((SOURCE+'processed_db.txt'), 'w') as fb:\n",
- " fb.write(\"Do not delete this file until all files have been processed!\\n\")\n",
- " fb.close()\n",
- " with open((SOURCE+'processed_db.txt'), \"r+\") as filehandle:\n",
- " processedList = [x.rstrip() for x in filehandle.readlines()]\n",
- "\n",
- " print('<<<<<<<<<<<<<<<<<< Starting Conversion in Batch mode. >>>>>>>>>>>>>>>>>>')\n",
- "\n",
- " for currentFile in filesList:\n",
- " if currentFile.endswith(formats):\n",
- " if currentFile not in processedList:\n",
- " currentPath = SOURCE + currentFile \n",
- " print(f'\\n\\n**************** Current File to process: {currentFile}')\n",
- " runner(currentPath)\n",
- " filehandle.write(currentFile+'\\n')\n",
- " filehandle.close()\n",
- " \n",
- "\n",
- "else:\n",
- " if SOURCE.endswith(formats): \n",
- " runner(SOURCE)\n",
- " else: print(\"Are you sure you have selected the correct file?\")"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "Rd6Br05y7_Ya"
- },
- "source": [
- "## MEGA Downloader "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "LeGWoVGW8Eem",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Install the Module and Dependencies \n",
- "# ================================================================ #\n",
- "\n",
- "import os\n",
- "from IPython.display import clear_output\n",
- "\n",
- "!pip install git+https://github.com/jeroenmeulenaar/python3-mega.git\n",
- "\n",
- "path1 = '/content/downloads'\n",
- "path2 = '/content/downloads/MEGA'\n",
- "\n",
- "if os.path.exists(path1) == False:\n",
- " os.makedirs(path1)\n",
- " if os.path.exists(path2) == False:\n",
- " os.makedirs(path2)\n",
- " elif os.path.exists(path2) == True:\n",
- " None\n",
- "elif os.path.exists(path1) == True:\n",
- " if os.path.exists(path2) == False:\n",
- " os.makedirs(path2)\n",
- " elif os.path.exists(path2) == True:\n",
- " None\n",
- "\n",
- "clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "JiZ0tJd78LNQ",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] MEGA Downloader \n",
- "url = \"\" # @param {type:\"string\"}\n",
- "output = \"\" # @param {type:\"string\"}\n",
- "# @markdown > If the \"output\" field is empty, the default download path will be used (/content/downloads/MEGA).
\n",
- "# @markdown > Currently not working due to the module haven't been updated to work with the new MEGA link structure. \n",
- "# ================================================================ #\n",
- "\n",
- "from IPython.display import clear_output\n",
- "from mega import Mega\n",
- "\n",
- "if url == '':\n",
- " print(\"The url field is empty!\")\n",
- "else:\n",
- " if output == '':\n",
- " output = '/content/downloads/MEGA'\n",
- " %cd /content/downloads/MEGA\n",
- " m = Mega.from_ephemeral()\n",
- " m.download_from_url(url)\n",
- " else:\n",
- " %cd \"$output\"\n",
- " m = Mega.from_ephemeral()\n",
- " m.download_from_url(url)\n"
- ],
- "execution_count": null,
- "outputs": []
- },
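- {
- "cell_type": "code",
- "metadata": {},
- "source": [
- "# A minimal alternative sketch (not part of the original module above), assuming the third-party mega.py package,\n",
- "# which understands the current mega.nz link format. It also installs a module named \"mega\", so run it in a fresh\n",
- "# runtime to avoid clashing with the python3-mega import used above. The URL below is only a placeholder.\n",
- "!pip install -q mega.py\n",
- "\n",
- "from mega import Mega\n",
- "\n",
- "m = Mega().login()  # anonymous session\n",
- "m.download_url(\"https://mega.nz/file/FILE_ID#DECRYPTION_KEY\", dest_path=\"/content/downloads/MEGA\")"
- ],
- "execution_count": null,
- "outputs": []
- },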
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "7bNutSOeJ1kM"
- },
- "source": [
- "## zippyshare Downloader "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "if-ge8tzJ305",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Install the Module and Dependencies \n",
- "# ================================================================ #\n",
- "\n",
- "import os\n",
- "from IPython.display import clear_output\n",
- "\n",
- "!git clone https://github.com/mcrapet/plowshare.git /content/plowshare\n",
- "clear_output()\n",
- "%cd plowshare\n",
- "clear_output()\n",
- "!make install\n",
- "clear_output()\n",
- "!plowmod --install\n",
- "clear_output()\n",
- "! apt-get install nodejs\n",
- "\n",
- "path1 = '/content/downloads'\n",
- "path2 = '/content/downloads/zippyshare'\n",
- "\n",
- "if os.path.exists(path1) == False:\n",
- " os.makedirs(path1)\n",
- " if os.path.exists(path2) == False:\n",
- " os.makedirs(path2)\n",
- " elif os.path.exists(path2) == True:\n",
- " None\n",
- "elif os.path.exists(path1) == True:\n",
- " if os.path.exists(path2) == False:\n",
- " os.makedirs(path2)\n",
- " elif os.path.exists(path2) == True:\n",
- " None\n",
- "\n",
- "clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "tO22WPSLKdbH",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] zippyshare Downloader \n",
- "mode = 'single' #@param [\"single\", \"batch\"]\n",
- "# @markdown ---\n",
- "direct_url = \"\" #@param {type:\"string\"}\n",
- "store_path = \"\" #@param {type:\"string\"}\n",
- "# @markdown > This downloader isn't working as it can't read from zippyshare's weird url (www(random_number).zippyshare)\n",
- "# ================================================================ #\n",
- "\n",
- "import os\n",
- "from IPython.display import clear_output\n",
- "from google.colab import files\n",
- "\n",
- "if mode == 'single':\n",
- " if direct_url == '':\n",
- " print(\"The URL field is empty!\")\n",
- " else:\n",
- " if store_path == '':\n",
- " store_path = '/content/downloads/zippyshare'\n",
- " !plowdown {direct_url} -o {store_path}\n",
- " else:\n",
- " !plowdown {direct_url} -o {store_path}\n",
- "elif mode == 'batch':\n",
- " print(\"Upload a download.txt file that contains a list of zippyshare links.\\n\")\n",
- " files.upload()\n",
- " clear_output()\n",
- " if store_path == '':\n",
- " store_path = '/content/downloads/zippyshare'\n",
- " !plowdown {direct_url} -o {store_path}\n",
- " else:\n",
- " !plowdown {direct_url} -o {store_path}"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "pUODCRACrvGC"
- },
- "source": [
- "## Penetration Testing "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "5Lo-h1Cnrxou"
- },
- "source": [
- "### hashcat \n",
- "GPU runtime needed! "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "dWFBQvMVOJv0"
- },
- "source": [
- "This block is unlikely going to make any progress as the learning curve of hashcat is quite steep..."
- ]
- },
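- {
- "cell_type": "code",
- "metadata": {},
- "source": [
- "# Illustrative reference only (not part of the original notebook): two common hashcat invocations, assuming the\n",
- "# standard hash-mode numbers. -m selects the hash type (0 = MD5, 100 = SHA1, 2500 = WPA-EAPOL-PBKDF2) and -a the\n",
- "# attack mode (0 = dictionary, 3 = mask). The hash files and wordlist paths below are placeholders.\n",
- "\n",
- "#!hashcat -m 0 -a 0 /content/md5_hashes.txt /content/downloads/rockyou.txt -o /content/cracked.txt\n",
- "#!hashcat -m 100 -a 3 /content/sha1_hashes.txt ?a?a?a?a?a?a -o /content/cracked.txt"
- ],
- "execution_count": null,
- "outputs": []
- },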
- {
- "cell_type": "code",
- "metadata": {
- "id": "LPxKv5DAr3KV",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Install hashcat \n",
- "# ================================================================ #\n",
- "\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "!apt install cmake build-essential -y && apt install checkinstall git -y && git clone https://github.com/hashcat/hashcat.git && cd hashcat && git submodule update --init && make && make install \n",
- "\n",
- "clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "SeubAcoyxCsw",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] hashcat Bechmark \n",
- "# ================================================================ #\n",
- "\n",
- "!hashcat -b"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "HwRqNJoYR4Us",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] hashcat \n",
- "hash = \"\" # @param {type:\"string\"}\n",
- "output = \"\" # @param {type:\"string\"}\n",
- "# @markdown > The output field is currently there just as a placeholder.
\n",
- "# @markdown ---\n",
- "hash_type = 'WPA-EAPOL-PBKDF2' #@param [\"MD5\", \"SHA1\", \"WPA-EAPOL-PBKDF2\"]\n",
- "attack_mode = 'dictionary' #@param [\"dictionary\", \"combination\", \"mask\", \"hybrid_wordlist_+_mask\", \"hybrid_mask_+_wordlist\"]\n",
- "wordlist = \"\" # @param {type:\"string\"}\n",
- "# @markdown > Enter the path to your wordlist (only used when the dictionary attack is chosen).
\n",
- "# ================================================================ #\n",
- "\n",
- "import IPython\n",
- "from IPython.display import clear_output\n",
- "\n",
- "if hash == '':\n",
- " print(\"The hash field is empty!\")\n",
- "\n",
- "if output == '':\n",
- " output = '/content/hashcat_output.txt'\n",
- "\n",
- "placeholder = 'This cell is not complete yet and could be dropped/abandoned at any time.'\n",
- "\n",
- "if hash_type == 'MD5' or hash_type == 'SHA1':\n",
- " print(placeholder)\n",
- "elif hash_type == 'WPA-EAPOL-PBKDF2':\n",
- " hash_type = 2500\n",
- " if attack_mode == 'dictionary':\n",
- " attack_mode = 0\n",
- " if wordlist == '':\n",
- " print(\"The wordlist field is empty!\")\n",
- " else:\n",
- " !hashcat -m {hash_type} -a {attack_mode} {hash} {wordlist} -o {output} --force\n",
- " elif attack_mode == 'combination' or attack_mode == 'mask' or attack_mode == 'hybrid_wordlist_+_mask' or attack_mode == 'hybrid_mask_+_wordlist':\n",
- " print(placeholder)"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "2EIvy2zb8re1"
- },
- "source": [
- "!hashcat -m 2500 -a 0 /content/test.hccapx /content/downloads/rockyou.txt -d 1 -o /content/test.txt "
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "gdgYuWnst4ed"
- },
- "source": [
- "## ProxyBroker "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "SuLleS03tzjn"
- },
- "source": [
- "!pip install proxybroker"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "1czv6VpwuJs8"
- },
- "source": [
- "\"\"\"Find 10 working HTTP(S) proxies and save them to a file.\"\"\"\n",
- "\n",
- "import asyncio\n",
- "from proxybroker import Broker\n",
- "\n",
- "\n",
- "async def save(proxies, filename):\n",
- " \"\"\"Save proxies to a file.\"\"\"\n",
- " with open(filename, 'w') as f:\n",
- " while True:\n",
- " proxy = await proxies.get()\n",
- " if proxy is None:\n",
- " break\n",
- " proto = 'https' if 'HTTPS' in proxy.types else 'http'\n",
- " row = '%s://%s:%d\\n' % (proto, proxy.host, proxy.port)\n",
- " f.write(row)\n",
- "\n",
- "\n",
- "def main():\n",
- " proxies = asyncio.Queue()\n",
- " broker = Broker(proxies)\n",
- " tasks = asyncio.gather(broker.find(types=['HTTP', 'HTTPS'], limit=10),\n",
- " save(proxies, filename='proxies.txt'))\n",
- " loop = asyncio.get_event_loop()\n",
- "# loop.run_until_complete(tasks)\n",
- "\n",
- "\n",
- "if __name__ == '__main__':\n",
- " main()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "TxQiE-LXjnAb"
- },
- "source": [
- "## Prawler "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "rq0caJ3njq08"
- },
- "source": [
- "!pip install Prawler"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "Gl8Efvbzjs3T"
- },
- "source": [
- "import Prawler\n",
- "\n",
- "proxy_list = Prawler.get_proxy_list(5, \"http\", \"elite\", \"US\")\n",
- "\n",
- "print(proxy_list)"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "JyUn6Yn8lM_c"
- },
- "source": [
- "## Free-Proxy "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "K8qprse5lLcb"
- },
- "source": [
- "!pip install free-proxy"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "jR8t0j98lWG1"
- },
- "source": [
- "from fp.fp import FreeProxy\n",
- "\n",
- "proxy = FreeProxy(country_id=['US', 'AU', 'CA', 'SG', 'JP', 'KR'], timeout=1, rand=False).get()\n",
- "\n",
- "print(proxy)"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "jmlQ0JeXyH9j"
- },
- "source": [
- "## madodl "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "PZrpgJGe59yp"
- },
- "source": [
- "## code-server "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "yzLxqKex6BQ6",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Install code-server \n",
- "# ================================================================ #\n",
- "\n",
- "from IPython.display import clear_output\n",
- "\n",
- "!pip install colabcode\n",
- "\n",
- "# clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "-LB2nKez6XOz",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] code-server \n",
- "# @markdown > Please note that while running this cell, you cannot run other cell until you stop this one first.\n",
- "# ================================================================ #\n",
- "\n",
- "from colabcode import ColabCode\n",
- "\n",
- "# Run VSCode with password\n",
- "# ColabCode(port=10000, password=\"12345\")\n",
- "\n",
- "# Run VSCode without password\n",
- "ColabCode()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "eWs-zl2gNvwW"
- },
- "source": [
- "## Create/Extract Archive "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "id": "XO8dzdyyH5pT"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Create Archive \n",
- "MODE = \"ZIP\" #@param [\"ZIP\", \"TAR\", \"7Z\"]\n",
- "FILENAME = \"\" # @param {type:\"string\"}\n",
- "PATH_TO_FILE = \"\" # @param {type:\"string\"}\n",
- "ARCHIVE_PASSWORD = \"\" #@param {type:\"string\"}\n",
- "\n",
- "# option supports b k m g (bytes, kilobytes, megabytes, gigabytes)\n",
- "SPLIT = \"no\" #@param [\"1g\", \"2g\", \"3g\", \"4g\", \"5g\", \"no\"]\n",
- "\n",
- "compress = 4#@param {type:\"slider\", min:0, max:9, step:0}\n",
- "#@markdown > Use the character `|` to separate paths. (Example `path/to /1 | path/to/2`)\n",
- "# ================================================================ #\n",
- "\n",
- "from pathlib import PurePosixPath\n",
- "\n",
- "pathList = PATH_TO_FILE.split('|')\n",
- "if MODE == \"ZIP\":\n",
- " if not FILENAME:\n",
- " FILENAME = \"/content/NEW_FILE.ZIP\"\n",
- " if ARCHIVE_PASSWORD:\n",
- " passADD = f'--password \"{ARCHIVE_PASSWORD}\"'\n",
- " else:\n",
- " passADD = ''\n",
- " splitC = f\"-s {SPLIT}\" if not 'no' in SPLIT else \"\" \n",
- " for part in pathList:\n",
- " pathdic = PurePosixPath(part.strip())\n",
- " parent = pathdic.parent\n",
- " partName = pathdic.parts[-1]\n",
- " cmd = f'cd \"{parent}\" && zip {passADD} -{compress} {splitC} -v -r -u \"{FILENAME}\" \"{partName}\"'\n",
- " !$cmd\n",
- "elif MODE == \"TAR\":\n",
- " if not FILENAME:\n",
- " FILENAME = \"/content/NEW_FILE\"\n",
- " cmd = f'GZIP=-{compress} tar -zcvf \"{FILENAME}.tar.gz\" {PATH_TO_FILE}'\n",
- " !$cmd\n",
- "else:\n",
- " if not FILENAME:\n",
- " FILENAME = \"/content/NEW_FILE\"\n",
- " for part in pathList:\n",
- " pathdic = PurePosixPath(part.strip())\n",
- " parent = pathdic.parent\n",
- " partName = pathdic.parts[-1]\n",
- " cmd = f'cd \"{parent}\" && 7z a -mx={compress} \"{FILENAME}.7z\" \"{partName}\"'\n",
- " !$cmd\n"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "k98WImeXH5pK",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Extract Archive \n",
- "MODE = \"7Z\" # @param [\"UNZIP\", \"UNTAR\", \"UNRAR\", \"7Z\"]\n",
- "PATH_TO_FILE = \"\" # @param {type:\"string\"}\n",
- "extractPath = \"\" # @param {type:\"string\"}\n",
- "ARCHIVE_PASSWORD = \"\" #@param {type:\"string\"}\n",
- "# ================================================================ #\n",
- "\n",
- "import os, urllib.request\n",
- "HOME = os.path.expanduser(\"~\")\n",
- "\n",
- "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
- " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
- " \"MiXLab/master/resources/mixlab.py\"\n",
- " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
- "\n",
- "from mixlab import (\n",
- " runSh,\n",
- " checkAvailable,\n",
- ")\n",
- "\n",
- "def extractFiles():\n",
- " global extractPath\n",
- " if ARCHIVE_PASSWORD:\n",
- " passADD = f'-P {ARCHIVE_PASSWORD}'\n",
- " else:\n",
- " passADD = ''\n",
- " if not extractPath:\n",
- " extractPath = \"/content/extract\"\n",
- " os.makedirs(extractPath, exist_ok=True)\n",
- " if MODE == \"UNZIP\":\n",
- " runSh('unzip '+passADD+f' \"{PATH_TO_FILE}\" -d \"{extractPath}\"', output=True)\n",
- " elif MODE == \"UNRAR\":\n",
- " runSh(f'unrar x \"{PATH_TO_FILE}\" \"{extractPath}\" '+passADD+' -o+', output=True)\n",
- " elif MODE == \"UNTAR\":\n",
- " runSh(f'tar -C \"{extractPath}\" -xvf \"{PATH_TO_FILE}\"', output=True)\n",
- " else:\n",
- " runSh(f'7z x \"{PATH_TO_FILE}\" -o{extractPath} '+passADD, output=True)\n",
- "\n",
- "extractFiles()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "wBCtu4fMAwRn"
- },
- "source": [
- "## 4chan-downloader "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "0w-c_xBUBCXN",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Clone 4chan-downloader \n",
- "# ================================================================ #\n",
- "\n",
- "import IPython\n",
- "import os.path\n",
- "from IPython.display import clear_output\n",
- "\n",
- "if os.path.exists(\"/content/tools/4chan-downloader/inb4404.py\"):\n",
- " print(\"Hey, Anon-kun/chan!\\n\\nDid you know that you already have cloned the 4chan-downloader?\\nNo need to do that again, you know...\\n\\n(How do I know that? Well, I can os.path.exists the file inb4404.py, so... yeah)\")\n",
- "else:\n",
- " !git clone https://github.com/Exceen/4chan-downloader.git /content/tools/4chan-downloader\n",
- " clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "VkBNduaUBg6S",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] 4chan-downloader \n",
- "automatically_clear_output = False #@param {type:\"boolean\"}\n",
- "# ================================================================ #\n",
- "\n",
- "import IPython\n",
- "import os.path\n",
- "from IPython.display import clear_output\n",
- "\n",
- "if os.path.exists(\"/content/tools/4chan-downloader/inb4404.py\"):\n",
- " !python /content/tools/4chan-downloader/inb4404.py -h\n",
- " if automatically_clear_output == True:\n",
- " clear_output()\n",
- "else:\n",
- " print(\"Hey, Anon-kun/chan... I can't find the inb4404.py.\\n\\nHave you run the cell above this one?\\nIf you haven't already, run the cell above first.\")"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "um3eitPj0QWG"
- },
- "source": [
- "## Instagram Scraper "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "dqFUrm7M3B4j",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Install Instagram Scraper \n",
- "# ================================================================ #\n",
- "\n",
- "import IPython\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "%pip install instagram-scraper\n",
- "\n",
- "clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "wk2bY_l00Sq3",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] Instagram Scraper \n",
- "target_username = \"\" #@param {type:\"string\"}\n",
- "# @markdown ---\n",
- "# @markdown
In case if the account is private, you will need to authenticate using your account. \n",
- "your_username = \"\" #@param {type:\"string\"}\n",
- "your_password = \"\" #@param {type:\"string\"}\n",
- "use_login = False #@param {type:\"boolean\"}\n",
- "# @markdown ---\n",
- "# @markdown
Options: \n",
- "download_path = \"\" #@param {type:\"string\"}\n",
- "download_mode = 'default' #@param [\"default\", \"image_only\", \"video_only\", \"story_only\", \"broadcast_only\"]\n",
- "silent_mode = False #@param {type:\"boolean\"}\n",
- "# ================================================================ #\n",
- "\n",
- "import os\n",
- "import sys\n",
- "import IPython\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "path1 = \"/content/downloads/\"\n",
- "path2 = \"/content/downloads/instagram-scraper/\"\n",
- "silent = \"\"\n",
- "\n",
- "if download_path != \"\":\n",
- " pass\n",
- "elif download_path == \"\":\n",
- " if os.path.exists(path1) == False:\n",
- " if os.path.exists(path2) == False:\n",
- " os.makedirs(path1)\n",
- " os.makedirs(path2)\n",
- " elif os.path.exists(path2) == True:\n",
- " os.makedirs(path1)\n",
- " elif os.path.exists(path1) == True:\n",
- " if os.path.exists(path2) == False:\n",
- " os.makedirs(path2)\n",
- " elif os.path.exists(path2) == True:\n",
- " download_path = path2\n",
- "\n",
- "if download_mode == \"default\":\n",
- "\tdownload_mode = \"\"\n",
- "elif download_mode == \"image_only\":\n",
- "\tdownload_mode = \"image\"\n",
- "elif download_mode == \"video_only\":\n",
- "\tdownload_mode = \"video\"\n",
- "elif download_mode == \"story_only\":\n",
- "\tdownload_mode = \"story\"\n",
- "elif download_mode == \"broadcast_only\":\n",
- "\tdownload_mode = \"broadcast\"\n",
- "\n",
- "if silent_mode == True:\n",
- "\tsilent = \"-q\"\n",
- "else:\n",
- "\tsilent = \"\"\n",
- "\n",
- "if target_username == \"\":\n",
- " sys.exit(\"No target username to download is given.\")\n",
- "else:\n",
- " if use_login == True:\n",
- " if your_username == \"\" and your_password == \"\":\n",
- " sys.exit(\"The username and password fields are empty!\")\n",
- " elif your_username == \"\" and your_password != \"\":\n",
- " sys.exit(\"The username field is empty!\")\n",
- " elif your_username != \"\" and your_password == \"\":\n",
- " sys.exit(\"The password field is empty!\")\n",
- " else:\n",
- " !instagram-scraper \"$target_username\" -u \"$your_username\" -p \"$your_password\" -d \"$download_path\" -n -t \"$download_mode\" \"$silent\"\n",
- " else:\n",
- " !instagram-scraper \"$target_username\" -d \"$download_path\" -n -t \"$download_mode\" \"$silent_mode\"\n",
- "\n",
- "print(\"\")\n",
- "print(\"==================================================\")\n",
- "print(\"Downloaded files are stored in\", download_path + target_username)\n",
- "print(\"==================================================\")"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "OgTz1FAtcHJH"
- },
- "source": [
- "!instagram-scraper \"\" -u \"\" -p \"\" -d \"\" -n -t image"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "3PmqhlfgKj85"
- },
- "source": [
- "## instaloader "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "id": "XH3kLNW9KoRf"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Install instaloader \n",
- "# ================================================================ #\n",
- "\n",
- "import IPython\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "!pip3 install instaloader\n",
- "\n",
- "clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "avemAgewKydt",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] instaloader \n",
- "target_username = \"\" #@param {type:\"string\"}\n",
- "# @markdown ---\n",
- "# @markdown
Options: \n",
- "use_login = False #@param {type:\"boolean\"}\n",
- "download_path = \"\" #@param {type:\"string\"}\n",
- "# @markdown > If the download path is not specified, the default one will be used.\"/content/downloads/instaloader/username\"\n",
- "# ================================================================ #\n",
- "\n",
- "import os\n",
- "import sys\n",
- "import IPython\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "if download_path != \"\":\n",
- " pass\n",
- "elif download_path == \"\":\n",
- " path1 = \"/content/downloads/\"\n",
- " path2 = \"/content/downloads/instaloader/\"\n",
- " if os.path.exists(path1) == False:\n",
- " if os.path.exists(path2) == False:\n",
- " os.makedirs(path1)\n",
- " os.makedirs(path2)\n",
- " elif os.path.exists(path2) == True:\n",
- " os.makedirs(path1)\n",
- " elif os.path.exists(path1) == True:\n",
- " if os.path.exists(path2) == False:\n",
- " os.makedirs(path2)\n",
- " elif os.path.exists(path2) == True:\n",
- " download_path = path2\n",
- "\n",
- "if target_username == \"\":\n",
- " sys.exit(\"No target username to download is given.\")\n",
- "else:\n",
- " if use_login == True:\n",
- " username = input(\"Enter your username: \")\n",
- " username = \"--login=\" + username\n",
- " %cd \"$download_path\"\n",
- " clear_output()\n",
- " !instaloader --fast-update \"$target_username\" \"$username\"\n",
- " else:\n",
- " %cd \"$download_path\"\n",
- " clear_output()\n",
- " !instaloader \"$target_username\"\n",
- "\n",
- "print(\"\")\n",
- "print(\"==================================================\")\n",
- "print(\"Downloaded files are stored in\", download_path + target_username)\n",
- "print(\"==================================================\")"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "bpnK5DH0VBs6"
- },
- "source": [
- "# Copy session from local to google drive\n",
- "!cp -a /root/.config/instaloader/ /content/drive/MyDrive/instaloader-session"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "0Jej2XFqWI4O"
- },
- "source": [
- "# Copy session from google drive to local\n",
- "!cp -a /content/drive/MyDrive/instaloader-session /root/.config/instaloader"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "2b4Igr8g0duu"
- },
- "source": [
- "## ecchi.iwara-dl "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "id": "__PBrzCP0fPf"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Clone] ecchi.iwara-dl \n",
- "# ================================================================ #\n",
- "\n",
- "import IPython\n",
- "from IPython.display import HTML, clear_output\n",
- "\n",
- "!apt-get install -y jq\n",
- "!apt-get install python3-bs4\n",
- "!git clone https://github.com/hare1039/iwara-dl /content/tools/iwara-dl\n",
- "clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "lLU9f0EH0mZN"
- },
- "source": [
- "!bash /content/tools/iwara-dl/iwara-dl.sh [-u [U]] [-p [P]] [-i [n]] [-rhftcsdn] [url [url ...]]"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "iHyp2Bgkx1B2"
- },
- "source": [
- "## UUP Dump "
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "cellView": "form",
- "id": "uoSecUJvx4za"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← Install the Requirements \n",
- "# ================================================================ #\n",
- "\n",
- "import IPython\n",
- "from IPython.display import clear_output\n",
- "\n",
- "!sudo apt-get install aria2 cabextract wimtools chntpw genisoimage\n",
- "!git clone https://github.com/uup-dump/converter \"/content/tools/uup-dump/converter\"\n",
- "\n",
- "clear_output()"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "6NqUFFnBx9C5",
- "cellView": "form"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# @markdown ← [Start] UUP Dump \n",
- "script_location = \"\" #@param {type:\"string\"}\n",
- "# @markdown > Only type in the script's path and exclude the script's name. Type in: /content/path/to/script Exclude: uup_download_linux.sh\n",
- "# ================================================================ #\n",
- "\n",
- "if not script_location == \"\":\n",
- " pass\n",
- "else:\n",
- " script_location = \"/content\"\n",
- "\n",
- "%cd \"$script_location\"\n",
- "\n",
- "!bash \"uup_download_linux.sh\"\n",
- "\n",
- "%cd \"/content\""
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "KDFWgCYE0ULQ"
- },
- "source": [
- "# ============================= FORM ============================= #\n",
- "# Custom commands goes here\n",
- "# ================================================================ #\n",
- "\n"
- ],
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "6uDnLXX40fv6"
- },
- "source": [
- "TO DO:\n",
- "\n",
- "- Add files and paths checker ot make sure they are exist"
- ]
- }
- ]
-}
\ No newline at end of file
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "colab_type": "text",
+ "id": "view-in-github"
+ },
+ "source": [
+ " "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ygDyFQvR5Gci"
+ },
+ "source": [
+ "\n",
+ " \n",
+ "\n",
+ "# **Welcome to Mi XL ab ** "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "XU_IUOV6owRg"
+ },
+ "source": [
+ "## About MiXLab "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "lkyLut0_ntrJ"
+ },
+ "source": [
+ "MiXLab is a mix of multiple amazing colab notebooks found on the internet (mostly from github).\n",
+ "\n",
+ "The name MiXLab is inspired from this awesome 3rd party Android file manager app called MiXplorer and combined with (Google) Colab at the end, resulting in MiXLab.\n",
+ "\n",
+ "What is the aim of MiXLab, you might ask?\n",
+ "Well... educational purpose, I guess..."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "uzuVvbfSo16m"
+ },
+ "source": [
+ "## Features "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "N1eYiTlGoqaA"
+ },
+ "source": [
+ "Here's what you can do with MiXLab\n",
+ "* Mount/unmount remote storage (Google Drive / rclone).\n",
+ "* Hosted/P2P downloader.\n",
+ "* Some other useful tools such as File Manager, Remote Connection and System Monitor to monitor the VM's state."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "e-0yDs4C0HkB"
+ },
+ "source": [
+ "# ✦ *Change Log* ✦ \n",
+ "\n",
+ "Last modified: 2021-09-29 "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "3_30gF8Am-PQ"
+ },
+ "source": [
+ "2021-09-29 \n",
+ " \n",
+ "Added cell on Real-ESRGAN to download the results. \n",
+ "Changed back the default runtime type CPU only (no hardware accelerator). \n",
+ "Added a lot more options to Real-ESRGAN . \n",
+ "Removed \"custom_command\" field from Real-ESRGAN . \n",
+ "Added a temporary field \"custom_command\" to Real-ESRGAN ."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "eA2hvW0ZYn2u"
+ },
+ "source": [
+ "2021-09-28 \n",
+ " \n",
+ "Added a simple implementation of Real-ESRGAN ."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Be03jPf-0L0F"
+ },
+ "source": [
+ "2021-09-28 \n",
+ " \n",
+ "MiXLab is now using VueTorrent for the qBittorrent alternate web interface.\n",
+ "\n",
+ ">Note: there seem to be something wrong with VueTorrent not automatically redirecting user to the main page, serving the login page instead, while there is no need to login. You simply have to click on the login button and then it should take you to the main page."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "21-Wb8ywqQeJ"
+ },
+ "source": [
+ "# ✦ *Colab Stay Alive* ✦ "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "nYEj5CeCqbTY"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] Colab Stay Alive \n",
+ "# @markdown This cell runs a JS code that will automatically press the reconnect button when you got disconnected due to idle.\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import IPython\n",
+ "from IPython.display import clear_output\n",
+ "from google.colab import output\n",
+ "\n",
+ "display(IPython.display.Javascript('''\n",
+ " function ClickConnect(){\n",
+ " btn = document.querySelector(\"colab-connect-button\")\n",
+ " if (btn != null){\n",
+ " console.log(\"Clicked on the connect button\"); \n",
+ " btn.click() \n",
+ " }\n",
+ " \n",
+ " btn = document.getElementById('connect')\n",
+ " if (btn != null){\n",
+ " console.log(\"Clicked on the reconnect button\"); \n",
+ " btn.click() \n",
+ " }\n",
+ " }\n",
+ " \n",
+ "setInterval(ClickConnect,60000)\n",
+ "'''))\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "cRwNEZJmUFMg"
+ },
+ "source": [
+ "If the cell above doesn't work, try to run one of these codes below on your browser's developer tool/console.\n",
+ "\n",
+ "\n",
+ "\n",
+ ">Code 1(credit to rockyourcode)\n",
+ "function ClickConnect() {\n",
+ " console.log('Working')\n",
+ " document\n",
+ " .querySelector('#top-toolbar > colab-connect-button')\n",
+ " .shadowRoot.querySelector('#connect')\n",
+ " .click()\n",
+ "}\n",
+ "\n",
+ "setInterval(ClickConnect, 60000)
\n",
+ "\n",
+ "\n",
+ "\n",
+ "> Code 2(credit to Kavyajeet Bora on stack overflow)\n",
+ "function ClickConnect(){\n",
+ " console.log(\"Working\"); \n",
+ " document.querySelector(\"colab-toolbar-button#connect\").click() \n",
+ "}\n",
+ "setInterval(ClickConnect,60000)
\n",
+ "\n",
+ "\n",
+ "\n",
+ "> Code 3\n",
+ "function ClickConnect(){\n",
+ " console.log(\"Connnect Clicked - Start\"); \n",
+ " document.querySelector(\"#top-toolbar > colab-connect-button\").shadowRoot.querySelector(\"#connect\").click();\n",
+ " console.log(\"Connnect Clicked - End\"); \n",
+ "};\n",
+ "setInterval(ClickConnect, 60000)
\n",
+ "\n",
+ "\n",
+ "\n",
+ "> Code 4(credit to Stephane Belemkoabga on stack overflow)\n",
+ "function ClickConnect(){\n",
+ " console.log(\"Working\"); \n",
+ " document.querySelector(\"colab-connect-button\").click() \n",
+ "}\n",
+ "setInterval(ClickConnect,60000)
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "GaegjvHPPW9q"
+ },
+ "source": [
+ "# ✦ *Mount/Unmount Storage* ✦ \n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "4sXeh7Tdx1v-"
+ },
+ "source": [
+ "## Google Drive "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "LkGoo1n9PNgj"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Mount/Unmount Google Drive \n",
+ "# @markdown This cell will mount/unmount Google Drive to /content/drive/
\n",
+ "MODE = \"MOUNT\" #@param [\"MOUNT\", \"UNMOUNT\"]\n",
+ "# ================================================================ #\n",
+ "\n",
+ "from IPython.display import clear_output\n",
+ "from google.colab import drive\n",
+ "drive.mount._DEBUG = False\n",
+ "if MODE == \"MOUNT\":\n",
+ " drive.mount('/content/drive', force_remount=True)\n",
+ "elif MODE == \"UNMOUNT\":\n",
+ " try:\n",
+ " drive.flush_and_unmount()\n",
+ " except ValueError:\n",
+ " pass\n",
+ " get_ipython().system_raw(\"rm -rf /root/.config/Google/DriveFS\")\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "EgMPgxmrTCvF"
+ },
+ "outputs": [],
+ "source": [
+ "# @markdown ← Force re-mount Google Drive \n",
+ "\n",
+ "drive.mount(\"/content/drive\", force_remount=True)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "nPuXzMyawnzo"
+ },
+ "outputs": [],
+ "source": [
+ "# @markdown This cell is not needed (won't do anything if you run it and here just for reference).\n",
+ "\n",
+ "## ============================= FORM ============================= #\n",
+ "## @markdown ← Mount Google Drive (Cloud SDK) \n",
+ "## @markdown This cell will mount Google Drive to /content/downloads/
\n",
+ "## @markdown > currently there is no way to unmount the drive.\n",
+ "## ================================================================ #\n",
+ "\n",
+ "#!apt-get install -y -qq software-properties-common python-software-properties module-init-tools\n",
+ "#!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null\n",
+ "#!apt-get update -qq 2>&1 > /dev/null\n",
+ "#!apt-get -y install -qq google-drive-ocamlfuse fuse\n",
+ "#from google.colab import auth\n",
+ "#auth.authenticate_user()\n",
+ "#from oauth2client.client import GoogleCredentials\n",
+ "#creds = GoogleCredentials.get_application_default()\n",
+ "#import getpass\n",
+ "#!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL\n",
+ "#vcode = getpass.getpass()\n",
+ "#!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}\n",
+ "\n",
+ "#!mkdir -p downloads\n",
+ "#!google-drive-ocamlfuse drive downloads\n",
+ "\n",
+ "#from IPython.display import HTML, clear_output\n",
+ "\n",
+ "#clear_output()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "477G4hACPgqM"
+ },
+ "source": [
+ "## rclone "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "0VJ4VO1X8YE6"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "#@markdown ← Install rclone \n",
+ "build_version = \"stable\" #@param [\"stable\", \"beta\"]\n",
+ "\n",
+ "#@markdown ---\n",
+ "automatically_clear_cell_output = True # @param{type: \"boolean\"}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import IPython\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "\n",
+ "if build_version == \"stable\":\n",
+ "\t!curl https://rclone.org/install.sh | sudo bash\n",
+ "else:\n",
+ "\t!curl https://rclone.org/install.sh | sudo bash -s beta\n",
+ "\n",
+ "\n",
+ "try:\n",
+ "\tos.makedirs(\"/root/.config/rclone\", exist_ok=True)\n",
+ "except OSError as error:\n",
+ "\tpass\n",
+ "\n",
+ "\n",
+ "if automatically_clear_cell_output is True:\n",
+ "\tclear_output()\n",
+ "else:\n",
+ "\tpass"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "KTXERiVMIKgw"
+ },
+ "source": [
+ "### rclone 1 "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "2db3MpgeQdT9"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] rclone \n",
+ "Mode = \"Copy\" # @param [\"Move\", \"Copy\", \"Sync\", \"Verify\", \"Dedupe\", \"Clean Empty Dirs\", \"Empty Trash\"]\n",
+ "Source = \"\" # @param {type:\"string\"}\n",
+ "Destination = \"\" # @param {type:\"string\"}\n",
+ "\n",
+ "#@markdown ---\n",
+ "Extra_Arguments = \"--local-no-check-updated\" # @param {type:\"string\"}\n",
+ "COPY_SHARED_FILES = False # @param{type: \"boolean\"}\n",
+ "Compare = \"Size & Checksum\"\n",
+ "TRANSFERS, CHECKERS = 20, 20\n",
+ "THROTTLE_TPS = True\n",
+ "BRIDGE_TRANSFER = False # @param{type: \"boolean\"}\n",
+ "FAST_LIST = False # @param{type: \"boolean\"}\n",
+ "OPTIMIZE_GDRIVE = True\n",
+ "SIMPLE_LOG = True\n",
+ "RECORD_LOGFILE = False # @param{type: \"boolean\"}\n",
+ "SKIP_NEWER_FILE = False\n",
+ "SKIP_EXISTED = False\n",
+ "SKIP_UPDATE_MODTIME = False\n",
+ "ONE_FILE_SYSTEM = False\n",
+ "LOG_LEVEL = \"DEBUG\"\n",
+ "SYNC_MODE = \"Delete after transfering\"\n",
+ "SYNC_TRACK_RENAME = True\n",
+ "DEDUPE_MODE = \"Largest\"\n",
+ "USE_TRASH = True\n",
+ "DRY_RUN = False # @param{type: \"boolean\"}\n",
+ "\n",
+ "#@markdown ---\n",
+ "automatically_clear_cell_output = False # @param{type: \"boolean\"}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "\n",
+ "import IPython\n",
+ "from IPython.display import clear_output\n",
+ "from os import path as _p\n",
+ "\n",
+ "\n",
+ "if not _p.exists(\"/root/.ipython/mixlab.py\"):\n",
+ " from shlex import split as _spl\n",
+ " from subprocess import run\n",
+ " \n",
+ " shellCmd = \"wget -qq https://shirooo39.github.io/MiXLab/resources/mixlab.py \\\n",
+ " -O /root/.ipython/mixlab.py\"\n",
+ " run(_spl(shellCmd))\n",
+ "\n",
+ "\n",
+ "from datetime import datetime as _dt\n",
+ "from mixlab import (\n",
+ " displayOutput,\n",
+ " checkAvailable,\n",
+ " runSh,\n",
+ " prepareSession,\n",
+ " rcloneConfigurationPath,\n",
+ " accessSettingFile,\n",
+ " memGiB,\n",
+ ")\n",
+ "\n",
+ "\n",
+ "def populateActionArg():\n",
+ " if Mode == \"Copy\":\n",
+ " actionArg = \"copy\"\n",
+ " elif Mode == \"Sync\":\n",
+ " actionArg = \"sync\"\n",
+ " elif Mode == \"Verify\":\n",
+ " actionArg = \"check\"\n",
+ " elif Mode == \"Dedupe\":\n",
+ " actionArg = \"dedupe largest\"\n",
+ " elif Mode == \"Clean Empty Dirs\":\n",
+ " actionArg = \"rmdirs\"\n",
+ " elif Mode == \"Empty Trash\":\n",
+ " actionArg = \"delete\"\n",
+ " else:\n",
+ " actionArg = \"move\"\n",
+ "\n",
+ " return actionArg\n",
+ "\n",
+ "\n",
+ "def populateCompareArg():\n",
+ " if Compare == \"Mod-Time\":\n",
+ " compareArg = \"--ignore-size\"\n",
+ " elif Compare == \"Size\":\n",
+ " compareArg = \"--size-only\"\n",
+ " elif Compare == \"Checksum\":\n",
+ " compareArg = \"-c --ignore-size\"\n",
+ " else:\n",
+ " compareArg = \"-c\"\n",
+ "\n",
+ " return compareArg\n",
+ "\n",
+ "\n",
+ "def populateOptimizeGDriveArg():\n",
+ " return (\n",
+ " \"--buffer-size 256M \\\n",
+ " --drive-chunk-size 256M \\\n",
+ " --drive-upload-cutoff 256M \\\n",
+ " --drive-acknowledge-abuse \\\n",
+ " --drive-keep-revision-forever\"\n",
+ "\n",
+ " if OPTIMIZE_GDRIVE\n",
+ " else \"--buffer-size 128M\"\n",
+ " )\n",
+ "\n",
+ "\n",
+ "def populateGDriveCopyArg():\n",
+ " if BRIDGE_TRANSFER and memGiB() < 13:\n",
+ " global TRANSFERS, CHECKERS\n",
+ " TRANSFERS, CHECKERS = 10, 80\n",
+ " else:\n",
+ " pass\n",
+ " return \"--disable copy\" if BRIDGE_TRANSFER else \"--drive-server-side-across-configs\"\n",
+ "\n",
+ "\n",
+ "def populateStatsArg():\n",
+ " statsArg = \"--stats-one-line --stats=5s\" if SIMPLE_LOG else \"--stats=5s -P\"\n",
+ " if LOG_LEVEL != \"OFF\":\n",
+ " statsArg += \" -v\" if SIMPLE_LOG else \"-vv\"\n",
+ " elif LOG_LEVEL == \"INFO\":\n",
+ " statsArg += \" --log-level INFO\"\n",
+ " elif LOG_LEVEL == \"ERROR\":\n",
+ " statsArg += \" --log-level ERROR\"\n",
+ " else:\n",
+ " statsArg += \" --log-level DEBUG\"\n",
+ " return statsArg\n",
+ "\n",
+ "\n",
+ "def populateSyncModeArg():\n",
+ " if Mode != \"Sync\":\n",
+ " return \"\"\n",
+ " elif SYNC_MODE == \"Delete before transfering\":\n",
+ " syncModeArg = \"--delete-before\"\n",
+ " elif SYNC_MODE == \"Delete after transfering\":\n",
+ " syncModeArg = \"--delete-after\"\n",
+ " else:\n",
+ " syncModeArg = \"--delete-during\"\n",
+ " if SYNC_TRACK_RENAME:\n",
+ " syncModeArg += \" --track-renames\"\n",
+ " return syncModeArg\n",
+ "\n",
+ "\n",
+ "def populateDedupeModeArg():\n",
+ " if DEDUPE_MODE == \"Interactive\":\n",
+ " dedupeModeArg = \"--dedupe-mode interactive\"\n",
+ " elif DEDUPE_MODE == \"Skip\":\n",
+ " dedupeModeArg = \"--dedupe-mode skip\"\n",
+ " elif DEDUPE_MODE == \"First\":\n",
+ " dedupeModeArg = \"--dedupe-mode first\"\n",
+ " elif DEDUPE_MODE == \"Newest\":\n",
+ " dedupeModeArg = \"--dedupe-mode newest\"\n",
+ " elif DEDUPE_MODE == \"Oldest\":\n",
+ " dedupeModeArg = \"--dedupe-mode oldest\"\n",
+ " elif DEDUPE_MODE == \"Rename\":\n",
+ " dedupeModeArg = \"--dedupe-mode rename\"\n",
+ " else:\n",
+ " dedupeModeArg = \"--dedupe-mode largest\"\n",
+ "\n",
+ " return dedupeModeArg\n",
+ "\n",
+ "\n",
+ "def generateCmd():\n",
+ " sharedFilesArgs = (\n",
+ " \"--drive-shared-with-me --files-from /content/upload.txt --no-traverse\"\n",
+ " if COPY_SHARED_FILES\n",
+ " else \"\"\n",
+ " )\n",
+ "\n",
+ " logFileArg = f\"--log-file /content/rclone_log.txt -vv -P\"\n",
+ "\n",
+ " args = [\n",
+ " \"rclone\",\n",
+ " f\"--config {rcloneConfigurationPath}/rclone.conf\",\n",
+ " '--user-agent \"Mozilla\"',\n",
+ " populateActionArg(),\n",
+ " f'\"{Source}\"',\n",
+ " f'\"{Destination}\"' if Mode in (\"Move\", \"Copy\", \"Sync\") else \"\",\n",
+ " f\"--transfers {str(TRANSFERS)}\",\n",
+ " f\"--checkers {str(CHECKERS)}\",\n",
+ " ]\n",
+ "\n",
+ " if Mode == \"Verify\":\n",
+ " args.append(\"--one-way\")\n",
+ " elif Mode == \"Empty Trash\":\n",
+ " args.append(\"--drive-trashed-only --drive-use-trash=false\")\n",
+ " else:\n",
+ " args.extend(\n",
+ " [\n",
+ " populateGDriveCopyArg(),\n",
+ " populateSyncModeArg(),\n",
+ " populateCompareArg(),\n",
+ " populateOptimizeGDriveArg(),\n",
+ " \"-u\" if SKIP_NEWER_FILE else \"\",\n",
+ " \"--ignore-existing\" if SKIP_EXISTED else \"\",\n",
+ " \"--no-update-modtime\" if SKIP_UPDATE_MODTIME else \"\",\n",
+ " \"--one-file-system\" if ONE_FILE_SYSTEM else \"\",\n",
+ " \"--tpslimit 95 --tpslimit-burst 40\" if THROTTLE_TPS else \"\",\n",
+ " \"--fast-list\" if FAST_LIST else \"\",\n",
+ " \"--delete-empty-src-dirs\" if Mode == \"Move\" else \"\",\n",
+ " ]\n",
+ " )\n",
+ " args.extend(\n",
+ " [\n",
+ " \"-n\" if DRY_RUN else \"\",\n",
+ " populateStatsArg() if not RECORD_LOGFILE else logFileArg,\n",
+ " sharedFilesArgs,\n",
+ " Extra_Arguments,\n",
+ " ]\n",
+ " )\n",
+ "\n",
+ " return args\n",
+ "\n",
+ "\n",
+ "def executeRclone():\n",
+ " prepareSession()\n",
+ " if Source.strip() == \"\":\n",
+ " displayOutput(\"❌ The source field is empty!\")\n",
+ " return\n",
+ " if checkAvailable(\"/content/rclone_log.txt\"):\n",
+ " if not checkAvailable(\"/content/logfiles\"):\n",
+ " runSh(\"mkdir -p -m 666 /content/logfiles\")\n",
+ " job = accessSettingFile(\"job.txt\")\n",
+ " runSh(\n",
+ " f'mv /content/rclone_log.txt /content/logfiles/{job[\"title\"]}_{job[\"status\"]}_logfile.txt'\n",
+ " )\n",
+ "\n",
+ " onGoingJob = {\n",
+ " \"title\": f'{Mode}_{Source}_{Destination}_{_dt.now().strftime(\"%a-%H-%M-%S\")}',\n",
+ " \"status\": \"ongoing\",\n",
+ " }\n",
+ " accessSettingFile(\"job.txt\", onGoingJob)\n",
+ "\n",
+ " cmd = \" \".join(generateCmd())\n",
+ " runSh(cmd, output=True)\n",
+ " displayOutput(Mode, \"success\")\n",
+ "\n",
+ " onGoingJob[\"status\"] = \"finished\"\n",
+ " accessSettingFile(\"job.txt\", onGoingJob)\n",
+ "\n",
+ "executeRclone()\n",
+ "\n",
+ "\n",
+ "if automatically_clear_cell_output is True:\n",
+ "\tclear_output()\n",
+ "else:\n",
+ "\tpass"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "wkc0wCvPIUFh"
+ },
+ "source": [
+ "### rclone 2 "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "t03ZdwQ-IvPv"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] rclone \n",
+ "Mode = \"Copy\" #@param [\"Copy\", \"Move\", \"Sync\", \"Checker\", \"Deduplicate\", \"Remove Empty Directories\", \"Empty Trash\"]\n",
+ "Source = \"\" #@param {type:\"string\"}\n",
+ "Destination = \"\" #@param {type:\"string\"}\n",
+ "\n",
+ "#@markdown ---\n",
+ "#@markdown ⚙️ Global Configuration ⚙️ \n",
+ "Extra_Arguments = \"--local-no-check-updated\" #@param {type:\"string\"}\n",
+ "Compare = \"Size & Mod-Time\" #@param [\"Size & Mod-Time\", \"Size & Checksum\", \"Only Mod-Time\", \"Only Size\", \"Only Checksum\"]\n",
+ "Checkers = 10 #@param {type:\"slider\", min:1, max:40, step:1}\n",
+ "Transfers = 10 #@param {type:\"slider\", min:1, max:20, step:1}\n",
+ "Dry_Run = False #@param {type:\"boolean\"}\n",
+ "Do_not_cross_filesystem_boundaries = False\n",
+ "Do_not_update_modtime_if_files_are_identical = False #@param {type:\"boolean\"}\n",
+ "Google_Drive_optimization = False #@param {type:\"boolean\"}\n",
+ "Large_amount_of_files_optimization = False #@param {type:\"boolean\"}\n",
+ "Simple_Ouput = True #@param {type:\"boolean\"}\n",
+ "Skip_all_files_that_exist = False #@param {type:\"boolean\"}\n",
+ "Skip_files_that_are_newer_on_the_destination = False #@param {type:\"boolean\"}\n",
+ "Output_Log_File = \"OFF\" #@param [\"OFF\", \"NOTICE\", \"INFO\", \"ERROR\", \"DEBUG\"]\n",
+ "\n",
+ "#@markdown ↪️ Sync Configuration ↩️ \n",
+ "Sync_Mode = \"Delete during transfer\" #@param [\"Delete during transfer\", \"Delete before transfering\", \"Delete after transfering\"]\n",
+ "Track_Renames = False #@param {type:\"boolean\"}\n",
+ "\n",
+ "#@markdown 💞 Deduplicate Configuration 💞 \n",
+ "Deduplicate_Mode = \"Interactive\" #@param [\"Interactive\", \"Skip\", \"First\", \"Newest\", \"Oldest\", \"Largest\", \"Rename\"]\n",
+ "Deduplicate_Use_Trash = True #@param {type:\"boolean\"}\n",
+ "\n",
+ "#@markdown ---\n",
+ "automatically_clear_cell_output = False # @param{type: \"boolean\"}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "##### Importing the needed modules\n",
+ "import os\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "\n",
+ "##### Variable Declaration\n",
+ "# Optimized for Google Colaboratory\n",
+ "os.environ[\"bufferC\"] = \"--buffer-size 96M\"\n",
+ "\n",
+ "if Compare == \"Size & Checksum\":\n",
+ " os.environ[\"compareC\"] = \"-c\"\n",
+ "elif Compare == \"Only Mod-Time\":\n",
+ " os.environ[\"compareC\"] = \"--ignore-size\"\n",
+ "elif Compare == \"Only Size\":\n",
+ " os.environ[\"compareC\"] = \"--size-only\"\n",
+ "elif Compare == \"Only Checksum\":\n",
+ " os.environ[\"compareC\"] = \"-c --ignore-size\"\n",
+ "else:\n",
+ " os.environ[\"compareC\"] = \"\"\n",
+ "\n",
+ "os.environ[\"sourceC\"] = Source\n",
+ "os.environ[\"destinationC\"] = Destination\n",
+ "os.environ[\"transfersC\"] = \"--transfers \"+str(Transfers)\n",
+ "os.environ[\"checkersC\"] = \"--checkers \"+str(Checkers)\n",
+ "\n",
+ "if Skip_files_that_are_newer_on_the_destination == True:\n",
+ " os.environ[\"skipnewC\"] = \"-u\"\n",
+ "else:\n",
+ " os.environ[\"skipnewC\"] = \"\"\n",
+ " \n",
+ "if Skip_all_files_that_exist == True:\n",
+ " os.environ[\"skipexistC\"] = \"--ignore-existing\"\n",
+ "else:\n",
+ " os.environ[\"skipexistC\"] = \"\"\n",
+ " \n",
+ "if Do_not_cross_filesystem_boundaries == True:\n",
+ " os.environ[\"nocrossfilesystemC\"] = \"--one-file-system\"\n",
+ "else:\n",
+ " os.environ[\"nocrossfilesystemC\"] = \"\"\n",
+ " \n",
+ "if Do_not_update_modtime_if_files_are_identical == True:\n",
+ " os.environ[\"noupdatemodtimeC\"] = \"--no-update-modtime\"\n",
+ "else:\n",
+ " os.environ[\"noupdatemodtimeC\"] = \"\"\n",
+ "\n",
+ "if Large_amount_of_files_optimization == True:\n",
+ " os.environ[\"filesoptimizeC\"] = \"--fast-list\"\n",
+ "else:\n",
+ " os.environ[\"filesoptimizeC\"] = \"\"\n",
+ " \n",
+ "if Google_Drive_optimization == True:\n",
+ " os.environ[\"driveoptimizeC\"] = \"--drive-chunk-size 32M --drive-acknowledge-abuse --drive-keep-revision-forever\"\n",
+ "else:\n",
+ " os.environ[\"driveoptimizeC\"] = \"\"\n",
+ " \n",
+ "if Dry_Run == True:\n",
+ " os.environ[\"dryrunC\"] = \"-n\"\n",
+ "else:\n",
+ " os.environ[\"dryrunC\"] = \"\"\n",
+ " \n",
+ "if Output_Log_File != \"OFF\":\n",
+ " os.environ[\"statsC\"] = \"--log-file=/root/.rclone_log/rclone_log.txt\"\n",
+ "else:\n",
+ " if Simple_Ouput == True:\n",
+ " os.environ[\"statsC\"] = \"-v --stats-one-line --stats=5s\"\n",
+ " else:\n",
+ " os.environ[\"statsC\"] = \"-v --stats=5s\"\n",
+ " \n",
+ "if Output_Log_File == \"INFO\":\n",
+ " os.environ[\"loglevelC\"] = \"--log-level INFO\"\n",
+ "elif Output_Log_File == \"ERROR\":\n",
+ " os.environ[\"loglevelC\"] = \"--log-level ERROR\"\n",
+ "elif Output_Log_File == \"DEBUG\":\n",
+ " os.environ[\"loglevelC\"] = \"--log-level DEBUG\"\n",
+ "else:\n",
+ " os.environ[\"loglevelC\"] = \"\"\n",
+ "\n",
+ "os.environ[\"extraC\"] = Extra_Arguments\n",
+ "\n",
+ "if Sync_Mode == \"Delete during transfer\":\n",
+ " os.environ[\"syncmodeC\"] = \"--delete-during\"\n",
+ "elif Sync_Mode == \"Delete before transfering\":\n",
+ " os.environ[\"syncmodeC\"] = \"--delete-before\"\n",
+ "elif Sync_Mode == \"Delete after transfering\":\n",
+ " os.environ[\"syncmodeC\"] = \"--delete-after\"\n",
+ " \n",
+ "if Track_Renames == True:\n",
+ " os.environ[\"trackrenamesC\"] = \"--track-renames\"\n",
+ "else:\n",
+ " os.environ[\"trackrenamesC\"] = \"\"\n",
+ " \n",
+ "if Deduplicate_Mode == \"Interactive\":\n",
+ " os.environ[\"deduplicateC\"] = \"interactive\"\n",
+ "elif Deduplicate_Mode == \"Skip\":\n",
+ " os.environ[\"deduplicateC\"] = \"skip\"\n",
+ "elif Deduplicate_Mode == \"First\":\n",
+ " os.environ[\"deduplicateC\"] = \"first\"\n",
+ "elif Deduplicate_Mode == \"Newest\":\n",
+ " os.environ[\"deduplicateC\"] = \"newest\"\n",
+ "elif Deduplicate_Mode == \"Oldest\":\n",
+ " os.environ[\"deduplicateC\"] = \"oldest\"\n",
+ "elif Deduplicate_Mode == \"Largest\":\n",
+ " os.environ[\"deduplicateC\"] = \"largest\"\n",
+ "elif Deduplicate_Mode == \"Rename\":\n",
+ " os.environ[\"deduplicateC\"] = \"rename\"\n",
+ " \n",
+ "if Deduplicate_Use_Trash == True:\n",
+ " os.environ[\"deduplicatetrashC\"] = \"\"\n",
+ "else:\n",
+ " os.environ[\"deduplicatetrashC\"] = \"--drive-use-trash=false\"\n",
+ "\n",
+ "\n",
+ "##### rclone Execution\n",
+ "if Output_Log_File != \"OFF\" and Mode != \"Config\":\n",
+ " !mkdir -p -m 666 /root/.rclone_log/\n",
+ " display(HTML(\"Logging enabled, rclone will no longer display any output on the terminal. Please wait until the cell stop by itself. \"))\n",
+ "\n",
+ "if Mode == \"Copy\":\n",
+ " !rclone --config=/root/.config/rclone/rclone.conf copy \"$sourceC\" \"$destinationC\" $transfersC $checkersC $statsC $loglevelC $compareC $skipnewC $skipexistC $nocrossfilesystemC $noupdatemodtimeC $bufferC $filesoptimizeC $driveoptimizeC $dryrunC $extraC\n",
+ "elif Mode == \"Move\":\n",
+ " !rclone --config=/root/.config/rclone/rclone.conf move \"$sourceC\" \"$destinationC\" $transfersC $checkersC $statsC $loglevelC --delete-empty-src-dirs $compareC $skipnewC $skipexistC $nocrossfilesystemC $noupdatemodtimeC $bufferC $filesoptimizeC $driveoptimizeC $dryrunC $extraC\n",
+ "elif Mode == \"Sync\":\n",
+ " !rclone --config=/root/.config/rclone/rclone.conf sync \"$sourceC\" \"$destinationC\" $transfersC $checkersC $statsC $loglevelC $syncmodeC $trackrenamesC $compareC $skipnewC $skipexistC $nocrossfilesystemC $noupdatemodtimeC $bufferC $filesoptimizeC $driveoptimizeC $dryrunC $extraC\n",
+ "elif Mode == \"Checker\":\n",
+ " !rclone --config=/root/.config/rclone/rclone.conf check \"$sourceC\" \"$destinationC\" $checkersC $statsC $loglevelC $compareC $skipnewC $skipexistC $nocrossfilesystemC $noupdatemodtimeC $bufferC $filesoptimizeC $driveoptimizeC $dryrunC $extraC\n",
+ "elif Mode == \"Deduplicate\":\n",
+ " !rclone --config=/root/.config/rclone/rclone.conf dedupe \"$sourceC\" $checkersC $statsC $loglevelC --dedupe-mode $deduplicateC $deduplicatetrashC $compareC $skipnewC $skipexistC $nocrossfilesystemC $noupdatemodtimeC $bufferC $filesoptimizeC $driveoptimizeC $dryrunC $extraC\n",
+ "elif Mode == \"Remove Empty Directories\":\n",
+ " !rclone --config=/root/.config/rclone/rclone.conf rmdirs \"$sourceC\" $statsC $loglevelC $dryrunC $extraC\n",
+ "elif Mode == \"Empty Trash\":\n",
+ " !rclone --config=/root/.config/rclone/rclone.conf cleanup \"$sourceC\" $statsC $loglevelC $dryrunC $extraC\n",
+ "\n",
+ "\n",
+ "##### Log Output\n",
+ "if Output_Log_File != \"OFF\" and Mode != \"Config\":\n",
+ "\n",
+ " ##### Rename log file and output settings.\n",
+ " !mv /root/.rclone_log/rclone_log.txt /root/.rclone_log/rclone_log_$(date +%Y-%m-%d_%H.%M.%S).txt\n",
+ " with open(\"/root/.rclone_log/\" + Mode + \"_settings.txt\", \"w\") as f:\n",
+ " f.write(\"Mode: \" + Mode + \\\n",
+ " \"\\nCompare: \" + Compare + \\\n",
+ " \"\\nSource: \\\"\" + Source + \\\n",
+ " \"\\\"\\nDestination: \\\"\" + Destination + \\\n",
+ " \"\\\"\\nTransfers: \" + str(Transfers) + \\\n",
+ " \"\\nCheckers: \" + str(Checkers) + \\\n",
+ " \"\\nSkip files that are newer on the destination: \" + str(Skip_files_that_are_newer_on_the_destination) + \\\n",
+ " \"\\nSkip all files that exist: \" + str(Skip_all_files_that_exist) + \\\n",
+ " \"\\nDo not cross filesystem boundaries: \" + str(Do_not_cross_filesystem_boundaries) + \\\n",
+ " \"\\nDo not update modtime if files are identical: \" + str(Do_not_update_modtime_if_files_are_identical) + \\\n",
+ " \"\\nDry-Run: \" + str(Dry_Run) + \\\n",
+ " \"\\nOutput Log Level: \" + Output_Log_File + \\\n",
+ " \"\\nExtra Arguments: \\\"\" + Extra_Arguments + \\\n",
+ " \"\\\"\\nSync Moden: \" + Sync_Mode + \\\n",
+ " \"\\nTrack Renames: \" + str(Track_Renames) + \\\n",
+ " \"\\nDeduplicate Mode: \" + Deduplicate_Mode + \\\n",
+ " \"\\nDeduplicate Use Trash: \" + str(Deduplicate_Use_Trash))\n",
+ "\n",
+ " ##### Compressing log file.\n",
+ " !rm -f /root/rclone_log.zip\n",
+ " !zip -r -q -j -9 /root/rclone_log.zip /root/.rclone_log/\n",
+ " !rm -rf /root/.rclone_log/\n",
+ " !mkdir -p -m 666 /root/.rclone_log/\n",
+ "\n",
+ " ##### Send Log\n",
+ " if os.path.isfile(\"/root/rclone_log.zip\") == True:\n",
+ " try:\n",
+ " files.download(\"/root/rclone_log.zip\")\n",
+ " !rm -f /root/rclone_log.zip\n",
+ " display(HTML(\"Sending log to your browser... \"))\n",
+ " except:\n",
+ " !mv /root/rclone_log.zip /content/rclone_log_$(date +%Y-%m-%d_%H.%M.%S).zip\n",
+ " display(HTML(\"You can use file explorer to download the log file. \"))\n",
+ " else:\n",
+ " clear_output()\n",
+ " display(HTML(\"There is no log file. \"))\n",
+ " \n",
+ "\n",
+ "### Operation has been successfully completed.\n",
+ "if Mode != \"Config\":\n",
+ " display(HTML(\"✅ Operation has been successfully completed. \"))\n",
+ "\n",
+ "\n",
+ "##### Automatically clear terminal output if the checkbox's value on the top is set to True.\n",
+ "if automatically_clear_cell_output is True:\n",
+ " clear_output()\n",
+ "else:\n",
+ "\tpass##### Automatically clear terminal output if the checkbox's value on the top is set to True.\n",
+ "if automatically_clear_cell_output is True:\n",
+ " clear_output()\n",
+ "else:\n",
+ "\tpass"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "YSEyWbWfY9qx"
+ },
+ "source": [
+ "### Google Drive 750GB Upload Bandwidth Bypass "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Qvwz8vtgjSLM"
+ },
+ "source": [
+ "\n",
+ "Still work in progress! Use at your own risk! \n",
+ "Be sure to read everything in this block carefully. No, seriously. Read carefully. \n",
+ " "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "HBy4qgMQNm7Q"
+ },
+ "source": [
+ "**Always remember to install rclone first!** "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "UI9NTz-typuf"
+ },
+ "outputs": [],
+ "source": [
+ "#============================= FORM ============================= #\n",
+ "#@markdown ← [Clone] AutorRclone \n",
+ "#================================================================ #\n",
+ "\n",
+ "import IPython\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "!git clone https://github.com/xyou365/AutoRclone /content/tools/AutoRclone\n",
+ "!sudo pip3 install -r /content/tools/AutoRclone/requirements.txt\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Y28rhXs2a7QV"
+ },
+ "source": [
+ "\n",
+ "Since Google has removed the ability to automatically enable the GDrive API from the good old \"Quickstart\" (as of 2021-04-15), you will have to manually create a project by yourself, to get the credentials.json.\n",
+ " \n",
+ "(This means that you have to do the initial job all by yourself. This includes creating a project on the Google Cloud Platform, enabling the GDrive API, setting up the OAuth 2.0, setting up the OAuth Screen, all that stuff.)\n",
+ " \n",
+ "Click here (opens in new tab) and follow along the tutorial there.\n",
+ " "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "0AR8nQi2w9_K"
+ },
+ "outputs": [],
+ "source": [
+ "#============================= FORM ============================= #\n",
+ "#@markdown ← Upload the \"credentials.json\" File \n",
+ "#================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "\n",
+ "AutoRclone_path = \"/content/tools/AutoRclone\"\n",
+ "\n",
+ "\n",
+ "if not os.path.exists(AutoRclone_path):\n",
+ " display(HTML(\"❌ Make sure you have already run the first cell first! \"))\n",
+ "else:\n",
+ " %cd \"$AutoRclone_path\"\n",
+ "\n",
+ " from google.colab import files\n",
+ " uploaded = files.upload()\n",
+ "\n",
+ " %cd /content\n",
+ "\n",
+ " clear_output()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "pqe_u-WjESWe"
+ },
+ "source": [
+ "TO DO: Add \"remove token\" to be able to re-authorize with different account."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "UFQoYxRKAclf"
+ },
+ "source": [
+ "#### Generate Project(s) and Service Account(s) "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "dQsFZnNa8qN4"
+ },
+ "outputs": [],
+ "source": [
+ "#============================= FORM ============================= #\n",
+ "#@markdown ← Generate Service Account(s) on Existing Project(s) \n",
+ "#@markdown > This cell will generate the Service Accounts on ALL existing project(s)! Let's say you currenly have 2 projects, then the number of service accounts will be created is 200 (100 per project). To avoid any unwanted things like messing up your current project, it is highly recommended to run the cell below instead.\n",
+ "#================================================================ #\n",
+ "\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "\n",
+ "AutoRclone_path = \"/content/tools/AutoRclone\"\n",
+ "json_path = \"/content/tools/AutoRclone/credentials.json\"\n",
+ "\n",
+ "\n",
+ "if not os.path.exists(AutoRclone_path):\n",
+ " display(HTML(\"❌ Make sure you have already run the first cell first! \"))\n",
+ "elif os.path.exists(AutoRclone_path) and not os.path.exists(json_path):\n",
+ " display(HTML(\"❌ Unable to locate the credentials.json file! Please upload it first! \"))\n",
+ "else:\n",
+ " %cd /content/tools/AutoRclone\n",
+ " !python3 gen_sa_accounts.py --quick-setup -1\n",
+ " %cd /content\n",
+ "\n",
+ " clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "n1OhWkE8Flds"
+ },
+ "outputs": [],
+ "source": [
+ "#============================= FORM ============================= #\n",
+ "#@markdown ← Generate New Project(s) and Service Account(s) \n",
+ "\n",
+ "the_amount_of_project_to_generate = 1 #@param {type:\"slider\", min:1, max:10, step:1}\n",
+ "#@markdown > To avoid any unwanted things like messing up your current project, this cell will generate a NEW project instead, on the Google Cloud Platform, based on the number specified by the slider. It will also (trying to) enable the needed API(s) and create the Service Accounts. The number of Service Account created per project is 100. That is a lot. So the calculation here is 100 x 750GB = 7500GB or 7.5TB worth of upload bandwidth. There could be a chance that Google will notice your action. You obviously don't want that, right? Well... just don't be a glutton and slide the slider all the way to the right and you should be safe to go. (Realistically speaking though, 7.5TB is a lot of upload bandwidth. Even 750 x 5 should be sufficient enough... not to mention the limitation is just a day and will recharge after 24 hours).\n",
+ "#================================================================ #\n",
+ "\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "\n",
+ "AutoRclone_path = \"/content/tools/AutoRclone\"\n",
+ "json_path = \"/content/tools/AutoRclone/credentials.json\"\n",
+ "\n",
+ "\n",
+ "if not os.path.exists(AutoRclone_path):\n",
+ " display(HTML(\"❌ Make sure you have already run the first cell first! \"))\n",
+ "elif os.path.exists(AutoRclone_path) and not os.path.exists(json_path):\n",
+ " display(HTML(\"❌ Unable to locate the credentials.json file! Please upload it first! \"))\n",
+ "else:\n",
+ " %cd /content/tools/AutoRclone\n",
+ " !python3 gen_sa_accounts.py --quick-setup \"$the_amount_of_project_to_generate\" --new-only\n",
+ " %cd /content\n",
+ "\n",
+ " clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "k8UlN_AeTZqs"
+ },
+ "outputs": [],
+ "source": [
+ "#============================= FORM ============================= #\n",
+ "#@markdown ← Download the Service Account Keys (Optional) \n",
+ "Project_ID = \"\" #@param {type:\"string\"}\n",
+ "#@markdown > After you have generated the project(s) and the service account(s) using one one the cell above, the service account keys should be automatically downloaded. You can still run this cell to manually do it yourself, or if you want to download keys from a specific project.\n",
+ "#================================================================ #\n",
+ "\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "\n",
+ "AutoRclone_path = \"/content/tools/AutoRclone\"\n",
+ "json_path = \"/content/tools/AutoRclone/credentials.json\"\n",
+ "\n",
+ "\n",
+ "if not os.path.exists(AutoRclone_path):\n",
+ " display(HTML(\"❌ Make sure you have already run the first cell first! \"))\n",
+ "elif os.path.exists(AutoRclone_path) and not os.path.exists(json_path):\n",
+ " display(HTML(\"❌ Unable to locate the credentials.json file! Please upload it first! \"))\n",
+ "else:\n",
+ " %cd /content/tools/AutoRclone\n",
+ " !python3 gen_sa_accounts.py --download-keys \"$Project_ID\"\n",
+ " %cd /content\n",
+ "\n",
+ " clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "8TsnaCxSV-9G"
+ },
+ "outputs": [],
+ "source": [
+ "#============================= FORM ============================= #\n",
+ "#@markdown ← Clear the \"accounts\" Folder (Optional) \n",
+ "#@markdown > If you think the \"accounts\" folder is cluttered, feel free to run this cell and then run the cell above this to re-download the service account keys.\n",
+ "#================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import shutil\n",
+ "\n",
+ "accounts_path = \"/content/tools/AutoRclone/accounts\"\n",
+ "\n",
+ "if os.path.exists(accounts_path) and os.path.isdir(accounts_path):\n",
+ " shutil.rmtree(accounts_path)\n",
+ " os.makedirs(accounts_path)\n",
+ "else:\n",
+ " os.makedirs(accounts_path)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "mOjIsl60XBvw"
+ },
+ "outputs": [],
+ "source": [
+ "#============================= FORM ============================= #\n",
+ "#@markdown ← Export the Email Addresses from the JSON Files to a Text File \n",
+ "input_path = \"/content/tools/AutoRclone/accounts\" #@param {type:\"string\"}\n",
+ "#@markdown > Path to the folder which contain the Service Account JSON files.\n",
+ "#output_name = \"\" #@param {type:\"string\"}\n",
+ "#output_path = \"\" #@param {type:\"string\"}\n",
+ "##@markdown > If both fields are empty, the default name and path for the output file will be used. Name = service-account-emails.txt Path = /content\n",
+ "#================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "\n",
+ "#if output_name and output_path == \"\":\n",
+ "# output_name = \"service-account-emails\"\n",
+ "# output_path = \"/content\"\n",
+ "#elif output_name == \"\" and not output_path == \"\":\n",
+ "# output_name = \"service-account-emails\"\n",
+ "#elif not output_name == \"\" and output_path == \"\":\n",
+ "# output_path = \"/content\"\n",
+ "\n",
+ "\n",
+ "if input_path == \"\":\n",
+ " display(HTML(\"❌ The input_path field is empty! \"))\n",
+ "else:\n",
+ " if not os.path.exists(input_path):\n",
+ " display(HTML(\"❌ The path you have entered does not exist! \"))\n",
+ " elif os.path.exists(input_path) and os.path.isfile(input_path):\n",
+ " display(HTML(\"❌ The input_path is not a folder! \"))\n",
+ " elif os.path.exists(input_path) and os.path.isdir(input_path):\n",
+ " %cd \"$input_path\"\n",
+ " !grep -oPh '\"client_email\": \"\\K[^\"]+' *.json > /content/service_account_emails.txt\n",
+ " #!grep -oPh '\"client_email\": \"\\K[^\"]+' *.json > \"$output_path\"/\"$output_name\".txt\n",
+ " %cd /content\n",
+ "\n",
+ " clear_output()\n",
+ "\n",
+ " display(HTML(\"✅ The output is saved in /content/service_account_emails.txt \"))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "l-Sbt9djBtpe"
+ },
+ "outputs": [],
+ "source": [
+ "#============================= FORM ============================= #\n",
+ "#@markdown ← Bulk Rename the Service Account Keys (Optional) \n",
+ "service_account_keys_path = \"/content/tools/AutoRclone/accounts\" #@param {type:\"string\"}\n",
+ "rename_prefix = \"\" #@param {type:\"string\"}\n",
+ "#@markdown > If the rename_prefix field is empty, the default prefix will be given: service_account_0 to 100.\n",
+ "#================================================================ #\n",
+ "\n",
+ "import os\n",
+ "\n",
+ "if rename_prefix == \"\":\n",
+ " rename_prefix = \"service_account_\"\n",
+ "else:\n",
+ " rename_prefix = rename_prefix\n",
+ "\n",
+ "def main():\n",
+ " for count, filename in enumerate(os.listdir(service_account_keys_path)):\n",
+ " destination = rename_prefix + str(count) + \".json\"\n",
+ " source = service_account_keys_path + \"/\" + filename\n",
+ " destination = service_account_keys_path + \"/\" + destination\n",
+ " \n",
+ " # rename() function will\n",
+ " # rename all the files\n",
+ " os.rename(source, destination)\n",
+ " \n",
+ "if __name__ == '__main__':\n",
+ " main()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "g5HCVqRNaj4Q"
+ },
+ "outputs": [],
+ "source": [
+ "#============================= FORM ============================= #\n",
+ "#@markdown ← Bulk Add the Service Accounts into a Team Drive (Optional) \n",
+ "Team_Drive_ID = \"\" #@param {type:\"string\"}\n",
+ "#@markdown > If this cell does not work or maybe not doing anything, simply create a Google Group (click here (opens in new tab)) and add all, if not, a number of the service accounts into that group and then on the Team Drive, just invite over the group's email into the Team Drive. The group's email should look something like this: group-name@googlegroups.com\n",
+ "#================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "if not os.path.exists(\"/content/tools/AutoRclone/add_to_team_drive.py\"):\n",
+ " display(HTML(\"❌ Unable to locate the required script! Make sure you have already run the cell [Clone] AutoRclone first! \"))\n",
+ "else:\n",
+ " if Team_Drive_ID == \"\":\n",
+ " display(HTML(\"❌ The Team_Drive_ID field is empty! \"))\n",
+ " elif not Team_Drive_ID == \"\":\n",
+ " %cd /content/tools/AutoRclone\n",
+ " !python3 add_to_team_drive.py -d \"Team_Drive_ID\"\n",
+ " %cd /content\n",
+ "\n",
+ " clear_output()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "c1N141ZcEdwd"
+ },
+ "source": [
+ "#### Perform the Task "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "eF7Wmr7unSD5"
+ },
+ "outputs": [],
+ "source": [
+ "#============================= FORM ============================= #\n",
+ "#@markdown ← [Start] Method 1 \n",
+ "Source = \"\" #@param {type:\"string\"}\n",
+ "Destination = \"\" #@param {type:\"string\"}\n",
+ "#@markdown > I'm pretty sure this only works between Team Drive to Team Drive, but your mileage may vary.\n",
+ "#================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "if not os.path.exists(\"/content/tools/AutoRclone/rclone_sa_magic.py\"):\n",
+ " display(HTML(\"❌ Unable to locate the required script! Make sure you have already run the cell [Clone] AutoRclone first! \"))\n",
+ "else:\n",
+ " if Source is \"\" and not Destination is \"\":\n",
+ " display(HTML(\"❌ The Source field is empty! \"))\n",
+ " elif not Source is \"\" and Destination is \"\":\n",
+ " display(HTML(\"❌ The Destination field is empty! \"))\n",
+ " elif Source is \"\" and Destination is \"\":\n",
+ " display(HTML(\"❌ Both of the fields above are empty! \"))\n",
+ " else:\n",
+ " %cd /content/tools/AutoRclone\n",
+ " !python3 rclone_sa_magic.py -s \"$Source\" -d \"$Destination\" -b 1 -e 600\n",
+ " %cd /content\n",
+ "\n",
+ " clear_output()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "O0NHwsI_-d3W"
+ },
+ "source": [
+ "### rclone Configuration "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "PDc8KdYNQ2s-"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] rclone WebUI Configuration \n",
+ "# @markdown >rclone WebUI Default CredentialUsername: userPassword: pass\n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, signal, random, string, urllib.request, time\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "runW = get_ipython()\n",
+ "\n",
+ "if not os.path.exists(\"/root/.ipython/mixlab.py\"):\n",
+ " from shlex import split as _spl\n",
+ " from subprocess import run\n",
+ "\n",
+ " shellCmd = \"wget -qq https://shirooo39.github.io/MiXLab/resources/mixlab.py \\\n",
+ " -O /root/.ipython/mixlab.py\"\n",
+ " run(_spl(shellCmd))\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " displayUrl,\n",
+ " findProcess,\n",
+ " CWD,\n",
+ " textAn,\n",
+ " checkAvailable,\n",
+ " displayOutput,\n",
+ " prepareSession,\n",
+ " rcloneConfigurationPath,\n",
+ " accessSettingFile,\n",
+ " memGiB,\n",
+ ")\n",
+ "\n",
+ "loadingAn()\n",
+ "prepareSession()\n",
+ "\n",
+ "pid = findProcess(\"rclone\", \"rcd\", isPid=True)\n",
+ "\n",
+ "try:\n",
+ " os.kill(int(pid), signal.SIGTERM)\n",
+ "except TypeError:\n",
+ " pass\n",
+ " \n",
+ "cmd = \"rclone rcd --rc-web-gui --rc-addr :5572\" \\\n",
+ " \" --rc-serve\" \\\n",
+ " \" --rc-user=user --rc-pass=pass\" \\\n",
+ " \" --rc-no-auth\" \\\n",
+ " rf\" --config {rcloneConfigurationPath}/rclone.conf\" \\\n",
+ " ' --user-agent \"Mozilla\"' \\\n",
+ " ' --transfers 16' \\\n",
+ " \" &\"\n",
+ "\n",
+ "runSh(cmd, shell=True)\n",
+ "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['rcloneWebUI', 5572, 'http']], 'REGION.lower', [f\"{HOME}/.ngrok2/rcloneWebUI.yml\", 4099]).start('rcloneWebUI', displayB=False)\n",
+ "clear_output()\n",
+ "displayUrl(Server, pNamU='rclone WebUI : ')"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "5HURZQEZQ6pT"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] rclone CLI Configuration \n",
+ "# @markdown Run this cell to create and/or edit an rclone configuration.
\n",
+ "# @markdown > After you have created a configuration, download the configuration file.In the next time you want to mount an rclone drive, simply import the configuration file.\n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\" #\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "# @markdown ---\n",
+ "automatically_clear_cell_output = True # @param{type: \"boolean\"}\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, urllib.request, IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "runW = get_ipython()\n",
+ "\n",
+ "if not os.path.exists(\"/root/.ipython/mixlab.py\"):\n",
+ " from shlex import split as _spl\n",
+ " from subprocess import run\n",
+ "\n",
+ " shellCmd = \"wget -qq https://shirooo39.github.io/MiXLab/resources/mixlab.py \\\n",
+ " -O /root/.ipython/mixlab.py\"\n",
+ " run(_spl(shellCmd))\n",
+ "\n",
+ "from mixlab import (\n",
+ " prepareSession,\n",
+ " rcloneConfigurationPath,\n",
+ " runSh,\n",
+ " PortForward_wrapper\n",
+ ")\n",
+ "\n",
+ "import codecs, contextlib, locale, os, pty, select, signal, subprocess, sys, termios, time\n",
+ "from IPython.utils import text\n",
+ "import six\n",
+ "from google.colab import _ipython\n",
+ "from google.colab import _message\n",
+ "from google.colab.output import _tags\n",
+ "\n",
+ "# Linux read(2) limits to 0x7ffff000 so stay under that for clarity.\n",
+ "_PTY_READ_MAX_BYTES_FOR_TEST = 2**20 # 1MB\n",
+ "\n",
+ "_ENCODING = 'UTF-8'\n",
+ "\n",
+ "class ShellResult(object):\n",
+ " \"\"\"Result of an invocation of the shell magic.\n",
+ "\n",
+ " Note: This is intended to mimic subprocess.CompletedProcess, but has slightly\n",
+ " different characteristics, including:\n",
+ " * CompletedProcess has separate stdout/stderr properties. A ShellResult\n",
+ " has a single property containing the merged stdout/stderr stream,\n",
+ " providing compatibility with the existing \"!\" shell magic (which this is\n",
+ " intended to provide an alternative to).\n",
+ " * A custom __repr__ method that returns output. When the magic is invoked as\n",
+ " the only statement in the cell, Python prints the string representation by\n",
+ " default. The existing \"!\" shell magic also returns output.\n",
+ " \"\"\"\n",
+ "\n",
+ " def __init__(self, args, returncode, command_output):\n",
+ " self.args = args\n",
+ " self.returncode = returncode\n",
+ " self.output = command_output\n",
+ "\n",
+ " def check_returncode(self):\n",
+ " if self.returncode:\n",
+ " raise subprocess.CalledProcessError(\n",
+ " returncode=self.returncode, cmd=self.args, output=self.output)\n",
+ "\n",
+ " def _repr_pretty_(self, p, cycle): # pylint:disable=unused-argument\n",
+ " # Note: When invoking the magic and not assigning the result\n",
+ " # (e.g. %shell echo \"foo\"), Python's default semantics will be used and\n",
+ " # print the string representation of the object. By default, this will\n",
+ " # display the __repr__ of ShellResult. Suppress this representation since\n",
+ " # the output of the command has already been displayed to the output window.\n",
+ " if cycle:\n",
+ " raise NotImplementedError\n",
+ "\n",
+ "\n",
+ "def _configure_term_settings(pty_fd):\n",
+ " term_settings = termios.tcgetattr(pty_fd)\n",
+ " # ONLCR transforms NL to CR-NL, which is undesirable. Ensure this is disabled.\n",
+ " # http://man7.org/linux/man-pages/man3/termios.3.html\n",
+ " term_settings[1] &= ~termios.ONLCR\n",
+ "\n",
+ " # ECHOCTL echoes control characters, which is undesirable.\n",
+ " term_settings[3] &= ~termios.ECHOCTL\n",
+ "\n",
+ " termios.tcsetattr(pty_fd, termios.TCSANOW, term_settings)\n",
+ "\n",
+ "\n",
+ "def _run_command(cmd, clear_streamed_output):\n",
+ " \"\"\"Calls the shell command, forwarding input received on the stdin_socket.\"\"\"\n",
+ " locale_encoding = locale.getpreferredencoding()\n",
+ " if locale_encoding != _ENCODING:\n",
+ " raise NotImplementedError(\n",
+ " 'A UTF-8 locale is required. Got {}'.format(locale_encoding))\n",
+ "\n",
+ " parent_pty, child_pty = pty.openpty()\n",
+ " _configure_term_settings(child_pty)\n",
+ "\n",
+ " epoll = select.epoll()\n",
+ " epoll.register(\n",
+ " parent_pty,\n",
+ " (select.EPOLLIN | select.EPOLLOUT | select.EPOLLHUP | select.EPOLLERR))\n",
+ "\n",
+ " try:\n",
+ " temporary_clearer = _tags.temporary if clear_streamed_output else _no_op\n",
+ "\n",
+ " with temporary_clearer(), _display_stdin_widget(\n",
+ " delay_millis=500) as update_stdin_widget:\n",
+ " # TODO(b/115531839): Ensure that subprocesses are terminated upon\n",
+ " # interrupt.\n",
+ " p = subprocess.Popen(\n",
+ " cmd,\n",
+ " shell=True,\n",
+ " executable='/bin/bash',\n",
+ " stdout=child_pty,\n",
+ " stdin=child_pty,\n",
+ " stderr=child_pty,\n",
+ " close_fds=True)\n",
+ " # The child PTY is only needed by the spawned process.\n",
+ " os.close(child_pty)\n",
+ "\n",
+ " return _monitor_process(parent_pty, epoll, p, cmd, update_stdin_widget)\n",
+ " finally:\n",
+ " epoll.close()\n",
+ " os.close(parent_pty)\n",
+ "\n",
+ "\n",
+ "class _MonitorProcessState(object):\n",
+ "\n",
+ " def __init__(self):\n",
+ " self.process_output = six.StringIO()\n",
+ " self.is_pty_still_connected = True\n",
+ "\n",
+ "\n",
+ "def _monitor_process(parent_pty, epoll, p, cmd, update_stdin_widget):\n",
+ " \"\"\"Monitors the given subprocess until it terminates.\"\"\"\n",
+ " state = _MonitorProcessState()\n",
+ "\n",
+ " # A single UTF-8 character can span multiple bytes. os.read returns bytes and\n",
+ " # could return a partial byte sequence for a UTF-8 character. Using an\n",
+ " # incremental decoder is incrementally fed input bytes and emits UTF-8\n",
+ " # characters.\n",
+ " decoder = codecs.getincrementaldecoder(_ENCODING)()\n",
+ "\n",
+ " num_interrupts = 0\n",
+ " echo_status = None\n",
+ " while True:\n",
+ " try:\n",
+ " result = _poll_process(parent_pty, epoll, p, cmd, decoder, state)\n",
+ " if result is not None:\n",
+ " return result\n",
+ " term_settings = termios.tcgetattr(parent_pty)\n",
+ " new_echo_status = bool(term_settings[3] & termios.ECHO)\n",
+ " if echo_status != new_echo_status:\n",
+ " update_stdin_widget(new_echo_status)\n",
+ " echo_status = new_echo_status\n",
+ " except KeyboardInterrupt:\n",
+ " try:\n",
+ " num_interrupts += 1\n",
+ " if num_interrupts == 1:\n",
+ " p.send_signal(signal.SIGINT)\n",
+ " elif num_interrupts == 2:\n",
+ " # Process isn't responding to SIGINT and user requested another\n",
+ " # interrupt. Attempt to send SIGTERM followed by a SIGKILL if the\n",
+ " # process doesn't respond.\n",
+ " p.send_signal(signal.SIGTERM)\n",
+ " time.sleep(0.5)\n",
+ " if p.poll() is None:\n",
+ " p.send_signal(signal.SIGKILL)\n",
+ " except KeyboardInterrupt:\n",
+ " # Any interrupts that occur during shutdown should not propagate.\n",
+ " pass\n",
+ "\n",
+ " if num_interrupts > 2:\n",
+ " # In practice, this shouldn't be possible since\n",
+ " # SIGKILL is quite effective.\n",
+ " raise\n",
+ "\n",
+ "\n",
+ "def _poll_process(parent_pty, epoll, p, cmd, decoder, state):\n",
+ " \"\"\"Polls the process and captures / forwards input and output.\"\"\"\n",
+ "\n",
+ " terminated = p.poll() is not None\n",
+ " if terminated:\n",
+ " termios.tcdrain(parent_pty)\n",
+ " # We're no longer interested in write events and only want to consume any\n",
+ " # remaining output from the terminated process. Continuing to watch write\n",
+ " # events may cause early termination of the loop if no output was\n",
+ " # available but the pty was ready for writing.\n",
+ " epoll.modify(parent_pty,\n",
+ " (select.EPOLLIN | select.EPOLLHUP | select.EPOLLERR))\n",
+ "\n",
+ " output_available = False\n",
+ "\n",
+ " events = epoll.poll()\n",
+ " input_events = []\n",
+ " for _, event in events:\n",
+ " if event & select.EPOLLIN:\n",
+ " output_available = True\n",
+ " raw_contents = os.read(parent_pty, _PTY_READ_MAX_BYTES_FOR_TEST)\n",
+ " import re\n",
+ " decoded_contents = re.sub(r\"http:\\/\\/127.0.0.1:53682\", Server[\"url\"], \n",
+ " decoder.decode(raw_contents))\n",
+ " sys.stdout.write(decoded_contents)\n",
+ " state.process_output.write(decoded_contents)\n",
+ "\n",
+ " if event & select.EPOLLOUT:\n",
+ " # Queue polling for inputs behind processing output events.\n",
+ " input_events.append(event)\n",
+ "\n",
+ " # PTY was disconnected or encountered a connection error. In either case,\n",
+ " # no new output should be made available.\n",
+ " if (event & select.EPOLLHUP) or (event & select.EPOLLERR):\n",
+ " state.is_pty_still_connected = False\n",
+ "\n",
+ " for event in input_events:\n",
+ " # Check to see if there is any input on the stdin socket.\n",
+ " # pylint: disable=protected-access\n",
+ " input_line = _message._read_stdin_message()\n",
+ " # pylint: enable=protected-access\n",
+ " if input_line is not None:\n",
+ " # If a very large input or sequence of inputs is available, it's\n",
+ " # possible that the PTY buffer could be filled and this write call\n",
+ " # would block. To work around this, non-blocking writes and keeping\n",
+ " # a list of to-be-written inputs could be used. Empirically, the\n",
+ " # buffer limit is ~12K, which shouldn't be a problem in most\n",
+ " # scenarios. As such, optimizing for simplicity.\n",
+ " input_bytes = bytes(input_line.encode(_ENCODING))\n",
+ " os.write(parent_pty, input_bytes)\n",
+ "\n",
+ " # Once the process is terminated, there still may be output to be read from\n",
+ " # the PTY. Wait until the PTY has been disconnected and no more data is\n",
+ " # available for read. Simply waiting for disconnect may be insufficient if\n",
+ " # there is more data made available on the PTY than we consume in a single\n",
+ " # read call.\n",
+ " if terminated and not state.is_pty_still_connected and not output_available:\n",
+ " sys.stdout.flush()\n",
+ " command_output = state.process_output.getvalue()\n",
+ " return ShellResult(cmd, p.returncode, command_output)\n",
+ "\n",
+ " if not output_available:\n",
+ " # The PTY is almost continuously available for reading input to provide\n",
+ " # to the underlying subprocess. This means that the polling loop could\n",
+ " # effectively become a tight loop and use a large amount of CPU. Add a\n",
+ " # slight delay to give resources back to the system while monitoring the\n",
+ " # process.\n",
+ " # Skip this delay if we read output in the previous loop so that a partial\n",
+ " # read doesn't unnecessarily sleep before reading more output.\n",
+ " # TODO(b/115527726): Rather than sleep, poll for incoming messages from\n",
+ " # the frontend in the same poll as for the output.\n",
+ " time.sleep(0.1)\n",
+ "\n",
+ "\n",
+ "@contextlib.contextmanager\n",
+ "def _display_stdin_widget(delay_millis=0):\n",
+ " \"\"\"Context manager that displays a stdin UI widget and hides it upon exit.\n",
+ "\n",
+ " Args:\n",
+ " delay_millis: Duration (in milliseconds) to delay showing the widget within\n",
+ " the UI.\n",
+ "\n",
+ " Yields:\n",
+ " A callback that can be invoked with a single argument indicating whether\n",
+ " echo is enabled.\n",
+ " \"\"\"\n",
+ " shell = _ipython.get_ipython()\n",
+ " display_args = ['cell_display_stdin', {'delayMillis': delay_millis}]\n",
+ " _message.blocking_request(*display_args, parent=shell.parent_header)\n",
+ "\n",
+ " def echo_updater(new_echo_status):\n",
+ " # Note: Updating the echo status uses colab_request / colab_reply on the\n",
+ " # stdin socket. Input provided by the user also sends messages on this\n",
+ " # socket. If user input is provided while the blocking_request call is still\n",
+ " # waiting for a colab_reply, the input will be dropped per\n",
+ " # https://github.com/googlecolab/colabtools/blob/56e4dbec7c4fa09fad51b60feb5c786c69d688c6/google/colab/_message.py#L100.\n",
+ " update_args = ['cell_update_stdin', {'echo': new_echo_status}]\n",
+ " _message.blocking_request(*update_args, parent=shell.parent_header)\n",
+ "\n",
+ " yield echo_updater\n",
+ "\n",
+ " hide_args = ['cell_remove_stdin', {}]\n",
+ " _message.blocking_request(*hide_args, parent=shell.parent_header)\n",
+ "\n",
+ "\n",
+ "@contextlib.contextmanager\n",
+ "def _no_op():\n",
+ " yield\n",
+ "\n",
+ "prepareSession()\n",
+ "\n",
+ "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['rcloneConfiguration', 53682, 'http']], 'REGION.lower', [f\"{HOME}/.ngrok2/rcloneConfiguration.yml\", 4074]).start('rcloneConfiguration', displayB=False, v=False)\n",
+ "\n",
+ "printData = \"\"\"\n",
+ "Before finishing the configuration, you will be redirected to an address.\n",
+ "Replace the address http://127.0.0.0:53682 with {}\"\"\".format(Server['url'])\n",
+ "print(printData)\n",
+ "display(HTML('(Click here to see how to do it)'))\n",
+ "print(f\"{Server['url']}\", end=\"\\n\\n\")\n",
+ "_run_command(f\"rclone config --config {rcloneConfigurationPath}/rclone.conf\", False)\n",
+ "\n",
+ "\n",
+ "if automatically_clear_cell_output is True:\n",
+ "\tclear_output()\n",
+ "else:\n",
+ "\tpass"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "qakuMVVjQlGU"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Mount/Unmount rclone Drive (Optional) \n",
+ "# @markdown Mount a remote drive as a local drive on a mountpoint.\n",
+ "# @markdown ---\n",
+ "Cache_Directory = \"DISK\" #@param [\"RAM\", \"DISK\"]\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "from IPython.display import HTML, clear_output\n",
+ "import uuid\n",
+ "import ipywidgets as widgets\n",
+ "from google.colab import output\n",
+ "import re\n",
+ "\n",
+ "if not os.path.exists(\"/root/.ipython/mixlab.py\"):\n",
+ " from shlex import split as _spl\n",
+ " from subprocess import run\n",
+ "\n",
+ " shellCmd = \"wget -qq https://shirooo39.github.io/MiXLab/resources/mixlab.py \\\n",
+ " -O /root/.ipython/mixlab.py\"\n",
+ " run(_spl(shellCmd))\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " prepareSession,\n",
+ " rcloneConfigurationPath,\n",
+ ")\n",
+ "\n",
+ "class MakeButton(object):\n",
+ " def __init__(self, title, callback, style):\n",
+ " self._title = title\n",
+ " self._callback = callback\n",
+ " self._style = style\n",
+ " def _repr_html_(self):\n",
+ " callback_id = 'button-' + str(uuid.uuid4())\n",
+ " output.register_callback(callback_id, self._callback)\n",
+ " if self._style != \"\":\n",
+ " style_html = \"p-Widget jupyter-widgets jupyter-button widget-button mod-\" + self._style\n",
+ " else:\n",
+ " style_html = \"p-Widget jupyter-widgets jupyter-button widget-button\"\n",
+ " template = \"\"\"{title} \n",
+ " \"\"\"\n",
+ " html = template.format(title=self._title, callback_id=callback_id, style_html=style_html)\n",
+ " return html\n",
+ " \n",
+ "def ShowAC():\n",
+ " clear_output(wait=True)\n",
+ " display(\n",
+ " widgets.HBox(\n",
+ " [widgets.VBox(\n",
+ " [widgets.HTML(\n",
+ " '''\n",
+ " Available drive to mount/unmount: \n",
+ " '''\n",
+ " ),\n",
+ " mountNam]\n",
+ " )\n",
+ " ]\n",
+ " )\n",
+ " )\n",
+ " \n",
+ " display(HTML(\" \"), MakeButton(\"Mount\", MountCMD, \"primary\"),\n",
+ " MakeButton(\"Unmount\", unmountCMD, \"danger\"))\n",
+ "\n",
+ "prepareSession()\n",
+ "content = open(f\"{rcloneConfigurationPath}/rclone.conf\").read()\n",
+ "avCon = re.findall(r\"^\\[(.+)\\]$\", content, re.M)\n",
+ "mountNam = widgets.Dropdown(options=avCon)\n",
+ "\n",
+ "if Cache_Directory == 'RAM':\n",
+ " cache_path = '/dev/shm'\n",
+ "elif Cache_Directory == 'DISK':\n",
+ " os.makedirs('/tmp', exist_ok=True)\n",
+ " cache_path = '/tmp'\n",
+ "\n",
+ "def MountCMD():\n",
+ " mPoint = f\"/content/drives/{mountNam.value}\"\n",
+ " os.makedirs(mPoint, exist_ok=True)\n",
+ " cmd = rf\"rclone mount {mountNam.value}: {mPoint}\" \\\n",
+ " rf\" --config {rcloneConfigurationPath}/rclone.conf\" \\\n",
+ " ' --user-agent \"Mozilla\"' \\\n",
+ " ' --buffer-size 256M' \\\n",
+ " ' --transfers 10' \\\n",
+ " ' --vfs-cache-mode full' \\\n",
+ " ' --vfs-cache-max-age 0h0m1s' \\\n",
+ " ' --vfs-cache-poll-interval 0m1s' \\\n",
+ " f' --cache-dir {cache_path}' \\\n",
+ " ' --allow-other' \\\n",
+ " ' --daemon'\n",
+ "\n",
+ " if runSh(cmd, shell=True) == 0:\n",
+ " print(f\"The drive have been successfully mounted! - \\t{mPoint}\")\n",
+ " else:\n",
+ " print(f\"Failed to mount the drive! - \\t{mPoint}\")\n",
+ "\n",
+ "def unmountCMD():\n",
+ " mPoint = f\"/content/drives/{mountNam.value}\"\n",
+ " if os.system(f\"fusermount -uz {mPoint}\") == 0:\n",
+ " runSh(f\"rm -r {mPoint}\")\n",
+ " print(f\"The drive have been successfully unmounted! - \\t{mPoint}\")\n",
+ " else:\n",
+ " runSh(f\"fusermount -uz {mPoint}\", output=True)\n",
+ "\n",
+ "ShowAC()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "G3rr1OuFRApD"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Upload Configuration File \n",
+ "# @markdown If you already have an rclone configuration file, you can upload it by running this cell.
\n",
+ "\n",
+ "# @markdown ---\n",
+ "MODE = \"RCONFIG\" # @param ['UTILS', 'RCONFIG', 'RCONFIG_append', \"GENERATELIST\"]\n",
+ "REMOTE = \"mnc\" # @param {type:\"string\"}\n",
+ "QUERY_PATTERN = \"\" # @param {type:\"string\"}\n",
+ "# @markdown > For those who are unable to upload local file: StackOverflow
\n",
+ "# ================================================================ #\n",
+ "\n",
+ "from os import path as _p\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "if not _p.exists(\"/root/.ipython/mixlab.py\"):\n",
+ " from shlex import split as _spl\n",
+ " from subprocess import run # nosec\n",
+ "\n",
+ " shellCmd = \"wget -qq https://shirooo39.github.io/MiXLab/resources/mixlab.py \\\n",
+ " -O /root/.ipython/mixlab.py\"\n",
+ " run(_spl(shellCmd)) # nosec\n",
+ "\n",
+ "import importlib, mixlab\n",
+ "from google.colab import files # pylint: disable=import-error #nosec\n",
+ "from mixlab import checkAvailable, runSh, rcloneConfigurationPath, prepareSession\n",
+ "\n",
+ "\n",
+ "def generateUploadList():\n",
+ " prepareSession()\n",
+ " if checkAvailable(\"/content/upload.txt\"):\n",
+ " runSh(\"rm -f upload.txt\")\n",
+ " runSh(\n",
+ " f\"rclone --config {rcloneConfigurationPath}/rclone.conf lsf {REMOTE}: --include '{QUERY_PATTERN}' --drive-shared-with-me --files-only --max-depth 1 > /content/upload.txt\",\n",
+ " shell=True, # nosec\n",
+ " )\n",
+ "\n",
+ "\n",
+ "def uploadLocalFiles():\n",
+ " prepareSession()\n",
+ " if MODE == \"UTILS\":\n",
+ " filePath = \"/root/.ipython/mixlab.py\"\n",
+ " elif MODE in (\"RCONFIG\", \"RCONFIG_append\"):\n",
+ " filePath = f\"{rcloneConfigurationPath}/rclone.conf\"\n",
+ " else:\n",
+ " pass\n",
+ "\n",
+ " try:\n",
+ " if checkAvailable(filePath):\n",
+ " runSh(f\"rm -f {filePath}\")\n",
+ " display(HTML(\"Upload rclone.conf from your local machine. \"))\n",
+ " uploadedFile = files.upload()\n",
+ " fileNameDictKeys = uploadedFile.keys()\n",
+ " fileNo = len(fileNameDictKeys)\n",
+ " if fileNo > 1:\n",
+ " for fn in fileNameDictKeys:\n",
+ " runSh(f'rm -f \"/content/{fn}\"')\n",
+ " return print(\"\\nOnly upload one configuration file!\")\n",
+ " elif fileNo == 0:\n",
+ " return print(\"\\nFile upload cancelled.\")\n",
+ " elif fileNo == 1:\n",
+ " for fn in fileNameDictKeys:\n",
+ " if checkAvailable(f\"/content/{fn}\"):\n",
+ " if MODE == \"RCONFIG_append\":\n",
+ " import urllib\n",
+ " urllib.request.urlretrieve(\"https://shirooo39.github.io/MiXLab/resources/configurations/rclone/rclone.conf\",\n",
+ " \"/usr/local/sessionSettings/rclone.conf\")\n",
+ " with open(f\"/content/{fn}\", 'r+') as r:\n",
+ " new_data = r.read()\n",
+ " runSh(f'rm -f \"/content/{fn}\"')\n",
+ " with open(filePath, 'r+') as f:\n",
+ " old_data = f.read()\n",
+ " f.seek(0)\n",
+ " f.truncate(0)\n",
+ " f.write(old_data + new_data)\n",
+ " print(\"\\nUpdate completed.\")\n",
+ " else:\n",
+ " runSh(f'mv -f \"/content/{fn}\" {filePath}')\n",
+ " runSh(f\"chmod 666 {filePath}\")\n",
+ " runSh(f'rm -f \"/content/{fn}\"')\n",
+ " importlib.reload(mixlab)\n",
+ " !rm /content/upload.txt\n",
+ " clear_output()\n",
+ " print(\"rclone.conf has been uploaded to Colab!\")\n",
+ " return\n",
+ " else:\n",
+ " print(\"\\nNo file is chosen!\")\n",
+ " return\n",
+ " except:\n",
+ " return print(\"\\nFailed to upload!\")\n",
+ "\n",
+ "\n",
+ "if MODE == \"GENERATELIST\":\n",
+ " generateUploadList()\n",
+ "else:\n",
+ " uploadLocalFiles()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "BucL21B4RIGJ"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Download Configuration File \n",
+ "# @markdown Download configuration file from the VM into your local machine.
\n",
+ "\n",
+ "# @markdown ---\n",
+ "MODE = \"RCONFIG\" # @param ['UTILS', 'RCONFIG']\n",
+ "# ================================================================ #\n",
+ "\n",
+ "from IPython.display import clear_output\n",
+ "from google.colab import files\n",
+ "\n",
+ "def downloadFile():\n",
+ " if MODE == \"UTILS\":\n",
+ " filePath = \"/root/.ipython/mixlab.py\"\n",
+ " elif MODE == \"RCONFIG\":\n",
+ " filePath = f\"{rcloneConfigurationPath}/rclone.conf\"\n",
+ " else:\n",
+ " pass\n",
+ " try:\n",
+ " files.download(filePath)\n",
+ " except FileNotFoundError:\n",
+ " print(\"File not found!\")\n",
+ "\n",
+ "if __name__ == \"__main__\":\n",
+ " downloadFile()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "_NGsTyR3Ra5N"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "## @markdown ← Sync Backup \n",
+ "# @markdown \n",
+ "#FAST_LIST = True\n",
+ "# ================================================================ #\n",
+ "\n",
+ "#from os import path as _p\n",
+ "\n",
+ "#if not _p.exists(\"/root/.ipython/rlab_utils.py\"):\n",
+ "# from shlex import split as _spl\n",
+ "# from subprocess import run # nosec\n",
+ "\n",
+ "# shellCmd = \"wget -qq https://biplobsd.github.io/RLabClone/res/rlab_utils.py \\\n",
+ "# -O /root/.ipython/rlab_utils.py\"\n",
+ "# run(_spl(shellCmd)) # nosec\n",
+ "\n",
+ "#from rlab_utils import (\n",
+ "# runSh,\n",
+ "# prepareSession,\n",
+ "# PATH_RClone_Config,\n",
+ "#)\n",
+ "\n",
+ "\n",
+ "#def generateCmd(src, dst):\n",
+ "# block=f\"{'':=<117}\"\n",
+ "# title=f\"\"\"+{f'Now Synchronizing... \"{src}\" > \"{dst}\" Fast List : {\"ON\" if FAST_LIST else \"OFF\"}':^{len(block)-2}}+\"\"\"\n",
+ "# print(f\"{block}\\n{title}\\n{block}\")\n",
+ "# cmd = f'rclone sync \"{src}\" \"{dst}\" --config {PATH_RClone_Config}/rclone.conf {\"--fast-list\" if FAST_LIST else \"\"} --user-agent \"Mozilla\" --transfers 20 --checkers 20 --drive-server-side-across-configs -c --buffer-size 256M --drive-chunk-size 256M --drive-upload-cutoff 256M --drive-acknowledge-abuse --drive-keep-revision-forever --tpslimit 95 --tpslimit-burst 40 --stats-one-line --stats=5s -v'\n",
+ "# return cmd\n",
+ "\n",
+ "\n",
+ "#def executeSync():\n",
+ "# prepareSession()\n",
+ "# runSh(generateCmd(\"tdTdnMov:Movies\",\"tdMovRa4:\"), output=True)\n",
+ "# runSh(generateCmd(\"tdTdnTvs:TV Shows\",\"tdTvsRa5:\"), output=True)\n",
+ "# runSh(generateCmd(\"tdTdnRa6:Games\",\"tdGamRa7:\"), output=True)\n",
+ "# runSh(generateCmd(\"tdTdnRa8:Software\",\"tdSofRa9:\"), output=True)\n",
+ "# runSh(generateCmd(\"tdTdnR11:Tutorials\",\"tdTutR12:\"), output=True)\n",
+ "# runSh(generateCmd(\"tdTdnR13:Anime\",\"tdAniR14:\"), output=True)\n",
+ "# runSh(generateCmd(\"tdTdn14:Music\",\"tdMusR15:\"), output=True)\n",
+ "\n",
+ "\n",
+ "#executeSync()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "4DdRcv08fzTG"
+ },
+ "source": [
+ "# ✦ *Download Manager* ✦ "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Sjvzf5WLsJya"
+ },
+ "source": [
+ "> It is recommended to download the file(s) into the VM's local disk first and then use rclone to upload (move/copy)to remote Drive, to avoid possible file corruption."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "o_uCXhC1S0GZ"
+ },
+ "source": [
+ "## ✧ *Hosted-File Downloader* ✧ "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "nGKbLp4P8MXi"
+ },
+ "source": [
+ "### aria2 "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "l8uIsoVrC6to"
+ },
+ "source": [
+ "#### aria2 "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "Z3fpZQeJ8N80"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] aria2 \n",
+ "Aria2_rpc = True\n",
+ "Ariang_WEBUI = True\n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "# OUTPUT_DIR = \"\" # @param {type:\"string\"}\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, pathlib, zipfile, re\n",
+ "import urllib.request, requests\n",
+ "from IPython.display import HTML, clear_output\n",
+ "from urllib.parse import urlparse\n",
+ "\n",
+ "PORT = 8221\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " CWD,\n",
+ " displayUrl,\n",
+ " findProcess,\n",
+ " findPackageR\n",
+ ")\n",
+ "\n",
+ "loadingAn()\n",
+ "\n",
+ "# Setting up aria2\n",
+ "runSh('apt install -y aria2')\n",
+ "pathlib.Path('ariang').mkdir(mode=0o777, exist_ok=True)\n",
+ "pathlib.Path('downloads').mkdir(mode=0o777, exist_ok=True)\n",
+ "\n",
+ "# Defining Github latest release tag\n",
+ "def latestTag(link):\n",
+ " import re\n",
+ " from urllib.request import urlopen\n",
+ " htmlF = urlopen(link+\"/releases/latest\").read().decode('UTF-8')\n",
+ " return re.findall(r'.+\\/tag\\/([.0-9A-Za-z]+)\".+/', htmlF)[0]\n",
+ "\n",
+ "# Downloading the latest version of ariaNg\n",
+ "if not os.path.exists(\"ariang/index.html\"):\n",
+ " # BASE_URL = r\"https://github.com/mayswind/AriaNg\"\n",
+ " # LATEST_TAG = latestTag(BASE_URL)\n",
+ " # urlF = f'{BASE_URL}/releases/download/{LATEST_TAG}/' \\\n",
+ " # f'AriaNg-{LATEST_TAG}-AllInOne.zip'\n",
+ " urllib.request.urlretrieve(findPackageR('mayswind/AriaNg', 'AllInOne.zip'), 'ariang/new.zip')\n",
+ " with zipfile.ZipFile('ariang/new.zip', 'r') as zip_ref: zip_ref.extractall('ariang')\n",
+ " try:\n",
+ " pathlib.Path('ariang/new.zip').unlink()\n",
+ " except FileNotFoundError:\n",
+ " pass\n",
+ "\n",
+ "# Starting up aria2 RPC and the WebUI (ariaNg)\n",
+ "try:\n",
+ " if not OUTPUT_DIR:\n",
+ " OUTPUT_DIR = f\"downloads/\"\n",
+ " elif not os.path.exists(OUTPUT_DIR):\n",
+ " \n",
+ " clear_output()\n",
+ " \n",
+ " print(\"Unable to find the defined path!\")\n",
+ " exx()\n",
+ "except:\n",
+ " OUTPUT_DIR = f\"{CWD}/downloads/\"\n",
+ "\n",
+ "if Aria2_rpc:\n",
+ " if not findProcess(\"aria2c\", \"--enable-rpc\"):\n",
+ " try:\n",
+ " trackers = requests.get(\"https://trackerslist.com/best_aria2.txt\").text\n",
+ " cmdC = r\"aria2c --enable-rpc --rpc-listen-port=6800 -D \" \\\n",
+ " fr\"-d {OUTPUT_DIR} \" \\\n",
+ " r\"-j 20 \" \\\n",
+ " r\"-c \" \\\n",
+ " fr\"--bt-tracker={trackers} \" \\\n",
+ " r\"--bt-request-peer-speed-limit=0 \" \\\n",
+ " r\"--bt-max-peers=0 \" \\\n",
+ " r\"--seed-ratio=0.0 \" \\\n",
+ " r\"--max-connection-per-server=10 \" \\\n",
+ " r\"--min-split-size=10M \" \\\n",
+ " r\"--follow-torrent=mem \" \\\n",
+ " r\"--disable-ipv6=true \" \\\n",
+ " r\" &\"\n",
+ " runSh(cmdC, shell=True)\n",
+ " except:\n",
+ " print(\"aria2 RPC is not enabled! Please enable the RPC first!\")\n",
+ "\n",
+ "# Configuring port forwarding\n",
+ "clear_output()\n",
+ "\n",
+ "if Aria2_rpc:\n",
+ " Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['Aria2_rpc', 6800, 'http']], REGION.lower(), [f\"{HOME}/.ngrok2/aria2.yml\", 5042])\n",
+ " data = Server.start('Aria2_rpc', displayB=False)\n",
+ " Host = urlparse(data['url']).hostname\n",
+ " port = \"80\"\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "if Ariang_WEBUI:\n",
+ " if Aria2_rpc:\n",
+ " filePath = 'ariang/index.html'\n",
+ " with open(filePath, 'r+') as f:\n",
+ " read_data = f.read()\n",
+ " f.seek(0)\n",
+ " f.truncate(0)\n",
+ " read_data = re.sub(r'(rpcHost:\"\\w+.\")|rpcHost:\"\"', f'rpcHost:\"{Host}\"', read_data)\n",
+ " read_data = re.sub(r'protocol:\"\\w+.\"', r'protocol:\"ws\"', read_data)\n",
+ " read_data = re.sub(r'rpcPort:\"\\d+.\"', f'rpcPort:\"{port}\"', read_data)\n",
+ " f.write(read_data)\n",
+ " try:\n",
+ " urllib.request.urlopen(f\"http://localhost:{PORT}\")\n",
+ " except:\n",
+ " runSh(f\"python3 -m http.server {PORT} &\", shell=True, cd=\"ariang/\")\n",
+ " \n",
+ "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['Ariang', PORT, 'http']], REGION.lower(), [f\"{HOME}/.ngrok2/ariang.yml\", 5043])\n",
+ "data2 = Server.start('Ariang', displayB=False)\n",
+ "data2['url'] = urlparse(data2['url'])._replace(scheme='http').geturl()\n",
+ "displayUrl(data2, pNamU='AriaNg : ')\n",
+ "\n",
+ "if Aria2_rpc:\n",
+ " display(HTML(\"\"\"aria2 RPC Configuration
Protocol Host Port
WebSocket \"\"\"+Host+\"\"\" \"\"\"+port+\"\"\"
Click HERE to see how to use the configuration.
\"\"\"))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "YMSrqjUm_bDN"
+ },
+ "source": [
+ "#### aria2 > "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "xa483vhL_d0X"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] aria2 > \n",
+ "URL = \"\" #@param {type:\"string\"}\n",
+ "OUTPUT_PATH = \"\" #@param {type:\"string\"}\n",
+ "# @markdown > If OUTPUT_PATH is blank, the file will be downloaded into the default location.Default download location is /content/downloads\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import pathlib\n",
+ "import shutil\n",
+ "import hashlib\n",
+ "import requests\n",
+ "from urllib.parse import urlparse\n",
+ "from os import path, mkdir\n",
+ "if not path.exists(\"/root/.ipython/mixlab.py\"): \n",
+ " from subprocess import run\n",
+ " from shlex import split\n",
+ "\n",
+ " shellCmd = \"wget -qq https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py \\\n",
+ " -O /root/.ipython/mixlab.py\"\n",
+ " run(split(shellCmd))\n",
+ "\n",
+ "from mixlab import runSh\n",
+ "\n",
+ "def youtubedlInstall():\n",
+ " if not path.isfile(\"/usr/local/bin/youtube-dl\"):\n",
+ " cmdC = \"rm -rf /content/sample_data/ && \" \\\n",
+ " \" mkdir -p -m 666 /root/.YouTube-DL/ &&\" \\\n",
+ " \" apt-get install atomicparsley &&\" \\\n",
+ " \" curl -L https://yt-dl.org/downloads/latest/youtube-dl \" \\\n",
+ " \"-o /usr/local/bin/youtube-dl &&\" \\\n",
+ " \" chmod a+rx /usr/local/bin/youtube-dl\"\n",
+ " get_ipython().system_raw(cmdC)\n",
+ "\n",
+ "def aria2Install():\n",
+ " runSh('apt install -y aria2')\n",
+ "\n",
+ "def istmd(URL): \n",
+ " link = urlparse(URL)\n",
+ " \n",
+ " #YandexDisk\n",
+ " if link.netloc == \"yadi.sk\":\n",
+ " API_ENDPOINT = 'https://cloud-api.yandex.net/v1/disk/public/resources/' \\\n",
+ " '?public_key={}&path=/{}&offset={}'\n",
+ " dry = False\n",
+ " def md5sum(file_path):\n",
+ " md5 = hashlib.md5()\n",
+ " with open(file_path, 'rb') as f:\n",
+ " for chunk in iter(lambda: f.read(128 * md5.block_size), b''):\n",
+ " md5.update(chunk)\n",
+ " return md5.hexdigest()\n",
+ "\n",
+ "\n",
+ " def check_and_download_file(target_path, url, size, checksum):\n",
+ " if path.isfile(target_path):\n",
+ " if size == path.getsize(target_path):\n",
+ " if checksum == md5sum(target_path):\n",
+ " print('URL {}'.format(url))\n",
+ " print('skipping correct {}'.format(target_path))\n",
+ " return\n",
+ " if not dry:\n",
+ " print('URL {}'.format(url))\n",
+ " print('downloading {}'.format(target_path))\n",
+ " runSh(f'aria2c -x 16 -s 16 -k 1M -d {OUTPUT_PATH} {url}', output=True)\n",
+ " # r = requests.get(url, stream=True)\n",
+ " # with open(target_path, 'wb') as f:\n",
+ " # shutil.copyfileobj(r.raw, f)\n",
+ "\n",
+ " def download_path(target_path, public_key, source_path, offset=0):\n",
+ " print('getting \"{}\" at offset {}'.format(source_path, offset))\n",
+ " current_path = path.join(target_path, source_path)\n",
+ " pathlib.Path(current_path).mkdir(parents=True, exist_ok=True)\n",
+ " jsn = requests.get(API_ENDPOINT.format(public_key, source_path, offset)).json()\n",
+ " def try_as_file(j):\n",
+ " if 'file' in j:\n",
+ " file_save_path = path.join(current_path, j['name'])\n",
+ " check_and_download_file(file_save_path, j['file'], j['size'], j['md5'])\n",
+ " return True\n",
+ " return False\n",
+ "\n",
+ " # first try to treat the actual json as a single file description\n",
+ " if try_as_file(jsn):\n",
+ " return\n",
+ "\n",
+ " # otherwise treat it as a directory\n",
+ " emb = jsn['_embedded']\n",
+ " items = emb['items']\n",
+ " for i in items:\n",
+ " # each item can be a file...\n",
+ " if try_as_file(i):\n",
+ " continue\n",
+ " # ... or a directory\n",
+ " else:\n",
+ " subdir_path = path.join(source_path, i['name'])\n",
+ " download_path(target_path, public_key, subdir_path)\n",
+ "\n",
+ " # check if current directory has more items\n",
+ " last = offset + emb['limit']\n",
+ " if last < emb['total']:\n",
+ " download_path(target_path, public_key, source_path, last)\n",
+ " download_path(OUTPUT_PATH, URL, '')\n",
+ " return False \n",
+ " return URL\n",
+ "\n",
+ "if not OUTPUT_PATH:\n",
+ " OUTPUT_PATH = \"/content/downloads/\"\n",
+ " \n",
+ "if not URL == \"\":\n",
+ " aria2Install()\n",
+ " youtubedlInstall()\n",
+ " try:\n",
+ " mkdir(\"downloads\")\n",
+ " except FileExistsError:\n",
+ " pass\n",
+ " url = istmd(URL)\n",
+ " if url != False:\n",
+ " print('URL {}'.format(URL))\n",
+ " cmdC = f'youtube-dl -o \"{OUTPUT_PATH}/%(title)s\" {URL} ' \\\n",
+ " '--external-downloader aria2c ' \\\n",
+ " '--external-downloader-args \"-x 16 -s 16 -k 1M\"'\n",
+ " runSh(cmdC, output=True)\n",
+ "else:\n",
+ " print(\"The URL field is emtpy!\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "N09EnjlB6wuV"
+ },
+ "source": [
+ "### bandcamp-dl "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "0jLuWp0C604l"
+ },
+ "outputs": [],
+ "source": [
+ "#============================= FORM =============================#\n",
+ "#@markdown ← [Install] bandcamp-dl \n",
+ "#@markdown Make sure to run this cell first! \n",
+ "#================================================================#\n",
+ "\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "!pip3 install bandcamp-downloader\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "LxU70FqH62an"
+ },
+ "outputs": [],
+ "source": [
+ "#============================= FORM =============================#\n",
+ "#@markdown ← [Run] bandcamp-dl \n",
+ "URL = \"\" #@param {type:\"string\"}\n",
+ "Download_location = \"\" #@param {type:\"string\"}\n",
+ "#@markdown > If the \"Download_location\" field is left empty, downloads will be stored in: /content/downloads/bandcamp\n",
+ "\n",
+ "#@markdown ---\n",
+ "#@markdown ⚙️ Download Options ⚙️ \n",
+ "Download_only_if_all_tracks_are_available = False #@param {type:\"boolean\"}\n",
+ "Overwrite_tracks_that_already_exist = False #@param {type:\"boolean\"}\n",
+ "Skip_grabbing_album_art = False #@param {type:\"boolean\"}\n",
+ "Embed_track_lyrics_If_available = False #@param {type:\"boolean\"}\n",
+ "Use_album_or_track_Label_as_iTunes_grouping = False #@param {type:\"boolean\"}\n",
+ "Embed_album_art_If_available = False #@param {type:\"boolean\"}\n",
+ "\n",
+ "#@markdown ---\n",
+ "#@markdown ⚙️ Advanced Options ⚙️ \n",
+ "Enable_verbose_logging = False #@param {type:\"boolean\"}\n",
+ "Disable_slugification_of_track_album_and_artist_names = False #@param {type:\"boolean\"}\n",
+ "Only_allow_ASCII_characters = False #@param {type:\"boolean\"}\n",
+ "Retain_whitespace_in_filenames = False #@param {type:\"boolean\"}\n",
+ "Retain_uppercase_letters_in_filenames = False #@param {type:\"boolean\"}\n",
+ "Specify_allowed_characters_in_slugify = \"-_~\" #@param {type:\"string\"}\n",
+ "Specify_the_character_to_use_in_place_of_spaces = \"-\" #@param {type:\"string\"}\n",
+ "#================================================================#\n",
+ "\n",
+ "\n",
+ "import os\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "\n",
+ "default_download_location = \"/content/downloads/bandcamp\"\n",
+ "custom_download_location = Download_location\n",
+ "\n",
+ "if Download_location is \"\":\n",
+ " Download_location = \"--base-dir=\" + default_download_location\n",
+ " \n",
+ " if os.path.exists(default_download_location):\n",
+ " pass\n",
+ " else:\n",
+ " os.makedirs(default_download_location)\n",
+ "else:\n",
+ " Download_location = \"--base-dir=\" + Download_location\n",
+ " \n",
+ " if os.path.exists(custom_download_location):\n",
+ " pass\n",
+ " else:\n",
+ " os.makedirs(custom_download_location)\n",
+ "\n",
+ "if Download_only_if_all_tracks_are_available is True:\n",
+ " full_album = \"-f\"\n",
+ "else:\n",
+ " full_album = \"\"\n",
+ "\n",
+ "if Overwrite_tracks_that_already_exist is True:\n",
+ " overwrite = \"-o\"\n",
+ "else:\n",
+ " overwrite = \"\"\n",
+ "\n",
+ "if Skip_grabbing_album_art is True:\n",
+ " no_art = \"-n\"\n",
+ "else:\n",
+ " no_art = \"\"\n",
+ "\n",
+ "if Embed_track_lyrics_If_available is True:\n",
+ " embed_lyrics = \"-e\"\n",
+ "else:\n",
+ " embed_lyrics = \"\"\n",
+ "\n",
+ "if Use_album_or_track_Label_as_iTunes_grouping is True:\n",
+ " group = \"-g\"\n",
+ "else:\n",
+ " group = \"\"\n",
+ "\n",
+ "if Embed_album_art_If_available is True:\n",
+ " embed_art = \"-r\"\n",
+ "else:\n",
+ " embed_art = \"\"\n",
+ "\n",
+ "if Enable_verbose_logging is True:\n",
+ " verbose_logging = \"-d\"\n",
+ "else:\n",
+ " verbose_logging = \"\"\n",
+ "\n",
+ "if Disable_slugification_of_track_album_and_artist_names is True:\n",
+ " no_slugify = \"-y\"\n",
+ "else:\n",
+ " no_slugify = \"\"\n",
+ "\n",
+ "if Only_allow_ASCII_characters is True:\n",
+ " ascii_only = \"-a\"\n",
+ "else:\n",
+ " ascii_only = \"\"\n",
+ "\n",
+ "if Retain_whitespace_in_filenames is True:\n",
+ " keep_spaces = \"-k\"\n",
+ "else:\n",
+ " keep_spaces = \"\"\n",
+ "\n",
+ "if Retain_uppercase_letters_in_filenames is True:\n",
+ " keep_upper = \"-u\"\n",
+ "else:\n",
+ " keep_upper = \"\"\n",
+ "\n",
+ "\n",
+ "if not URL is \"\":\n",
+ " !bandcamp-dl $full_album $overwrite $no_art $embed_lyrics $group $embed_art $verbose_logging $no_slugify $ascii_only $keep_spaces $keep_upper \"$Download_location\" \"$URL\"\n",
+ " \n",
+ " display(HTML(\"✅ bandcamp-dl has finished performing its task! \"))\n",
+ "else:\n",
+ " display(HTML(\"❌ The URL field is empty! \"))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "bZ-Z0cUdz7IL"
+ },
+ "source": [
+ "### FunKiiU "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "yRmvnl090JmZ"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "#@markdown ← [Start] FunKiiU \n",
+ "#@markdown FunKiiU is a python tool for downloading Nintendo Wii U content from Nintendo's CDN. (Click here to check out the github repository)
\n",
+ "\n",
+ "#@markdown ---\n",
+ "title_id = \"\" #@param {type:\"string\"}\n",
+ "title_key = \"\" #@param {type:\"string\"}\n",
+ "#download_path = \"\" #@param {type:\"string\"}\n",
+ "run_in_simulated_mode = False #@param{type: \"boolean\"}\n",
+ "#@markdown > Download(s) are stored in (/content/install).\n",
+ "\n",
+ "# @markdown ---\n",
+ "automatically_clear_cell_output = False #@param{type: \"boolean\"}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "#import subprocess\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "\n",
+ "FunKiiU_clone_path = \"/content/tools/FunKiiU\"\n",
+ "FunKiiU_path = \"/content/tools/FunKiiU/FunKiiU.py\"\n",
+ "FunKiiU_download_path = \"/content/install\"\n",
+ "\n",
+ "\n",
+ "# Checks whether FunKiiU exists or not.\n",
+ "# If FunKiiU does not exist, it will be downloaded/pulled from its github repository.\n",
+ "if os.path.exists(FunKiiU_path):\n",
+ "\tpass\n",
+ "else:\n",
+ " os.system(\"git clone https://github.com/llakssz/FunKiiU \" + FunKiiU_clone_path)\n",
+ " \n",
+ " # This block here is not actually necessery as FunKiiU is able automatically create the \"install\" folder but, well...\n",
+ " try:\n",
+ " os.makedirs(FunKiiU_download_path, exist_ok=True)\n",
+ " except OSError as error:\n",
+ " pass\n",
+ "\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "\n",
+ "# Fields checking.\n",
+ "# If both fields or one of them are empty, a message will be shown.\n",
+ "if title_id == \"\" and title_key == \"\":\n",
+ " display(HTML(\"❌ Both fields are empty! \"))\n",
+ "elif title_id == \"\" and not title_key == \"\":\n",
+ " display(HTML(\"❌ The title_id field is empty! \"))\n",
+ "elif not title_id == \"\" and title_key == \"\":\n",
+ " display(HTML(\"❌ The title_key field is empty! \"))\n",
+ "else:\n",
+ " # Passing the -simulate argument to run in simulated mode, if the above checkbox's value is True\n",
+ " if run_in_simulated_mode is True:\n",
+ " simulate = \" -simulate\"\n",
+ " else:\n",
+ " simulate = \"\"\n",
+ " \n",
+ " # The actual piece of command that runs FunKiiU\n",
+ " # ----- Downloading by running the command directly as the OS,\n",
+ " !python \"/content/tools/FunKiiU/FunKiiU.py\" -title \"$title_id\" -key \"$title_key\" $simulate\n",
+ " \n",
+ " # ----- Downloading the python way but still as the OS (does not show any output),\n",
+ " #os.system(\"python \" + FunKiiU_path + \" -title \" + title_id + \" -key \" + title_key + simulate)\n",
+ " \n",
+ " # ----- Downloading as subprocess and capture the output.\n",
+ " #FunKiiU_process = subprocess.Popen(\"python \" + FunKiiU_path + \" -title \" + title_id + \" -key \" + title_key + simulate, shell = True, stdout = subprocess.PIPE).stdout\n",
+ " #FunKiiU = FunKiiU_process.read()\n",
+ " #\n",
+ " #print(FunKiiU.decode())\n",
+ "\n",
+ " # Printing different message for regular download mode or simulated mode.\n",
+ " if run_in_simulated_mode is True:\n",
+ " display(HTML(\"✅ FunKiiU has finished doing the simulation. \"))\n",
+ " else:\n",
+ " display(HTML(\"✅ Download(s) are stored in: /content/install \"))\n",
+ " \n",
+ " # Will automatically clear console output if the above checkbox's value is True\n",
+ " # With this enabled, user won't be able to see anything, though.\n",
+ " if automatically_clear_cell_output is True:\n",
+ " clear_output()\n",
+ " else:\n",
+ " pass"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "0XaXh7Ix0VFu"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Clear \"install\" Folder \n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import IPython\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "FunKiiU_download_path = \"/content/install\"\n",
+ "\n",
+ "if os.path.exists(FunKiiU_download_path):\n",
+ " os.system(\"rm -rf \" + FunKiiU_download_path)\n",
+ " os.makedirs(FunKiiU_download_path)\n",
+ "elif not os.path.exists(FunKiiU_download_path):\n",
+ " os.makedirs(FunKiiU_download_path)\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "s7IbnEdkYBkY"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Remove FunKiiU \n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import IPython\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "FunKiiU_path = \"/content/tools/FunKiiU\"\n",
+ "\n",
+ "if os.path.exists(FunKiiU_download_path):\n",
+ " os.system(\"rm -rf \" + FunKiiU_path)\n",
+ "elif not os.path.exists(FunKiiU_path):\n",
+ " pass\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ERBVA5aIERou"
+ },
+ "source": [
+ "### Google Drive CLI "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "Qs0bcnzAFDZq"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Clone] Google Drive CLI \n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import IPython\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "\n",
+ "GoogleDriveCLI_path1 = \"/content/tools/GoogleDriveCLI\"\n",
+ "GoogleDriveCLI_path2 = GoogleDriveCLI_path1 + \"/gdrive\"\n",
+ "\n",
+ "\n",
+ "def cloneGoogleDriveCLI():\n",
+ " if os.path.exists(GoogleDriveCLI_path1 + \"/gdrive\"):\n",
+ " pass\n",
+ " else:\n",
+ " # Big thanks to github user GrowtopiaJaw for providing a pre-compiled binary of Google Drive CLI.\n",
+ " # https://github.com/GrowtopiaJaw/gdrive\n",
+ " os.system(\"wget https://github.com/GrowtopiaJaw/gdrive/releases/download/v2.1.1/gdrive-linux-amd64\")\n",
+ " \n",
+ " if not os.path.exists(GoogleDriveCLI_path1):\n",
+ " # Big thanks to github user prasmussen for creating such an awesome tool.\n",
+ " # https://github.com/prasmussen/gdrive\n",
+ " os.makedirs(\"/content/tools/GoogleDriveCLI\")\n",
+ "\n",
+ " os.system(\"mv /content/gdrive-linux-amd64 \" + GoogleDriveCLI_path1 + \"/gdrive\")\n",
+ " os.system(\"chmod +x \" + GoogleDriveCLI_path1 + \"/gdrive\")\n",
+ "\n",
+ "\n",
+ "def initializeGoogleDriveCLI():\n",
+ " if not os.path.exists(GoogleDriveCLI_path2):\n",
+ " cloneGoogleDriveCLI()\n",
+ " initializeGoogleDriveCLI()\n",
+ " else:\n",
+ " !\"$GoogleDriveCLI_path2\" \"about\"\n",
+ " #clear_output(wait = True)\n",
+ "\n",
+ "\n",
+ "initializeGoogleDriveCLI()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "V6fwq8QcF77j"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "#@markdown ← [Start] Google Drive CLI \n",
+ "download_id = \"\" #@param{type:\"string\"}\n",
+ "#@markdown > Currently only support downloading a publicly shared file (a file, NOT a folder).\n",
+ "download_path = \"\" #@param{type:\"string\"}\n",
+ "#@markdown > If left empty, the default download path will be used (/content/downloads).\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "\n",
+ "download_path_default = \"/content/downloads\"\n",
+ "GoogleDriveCLI_path1 = \"/content/tools/GoogleDriveCLI\"\n",
+ "GoogleDriveCLI_path2 = GoogleDriveCLI_path1 + \"/gdrive\"\n",
+ "\n",
+ "\n",
+ "if not os.path.exists(GoogleDriveCLI_path2):\n",
+ " display(HTML(\"❌ Unable to locate the required binary! Make sure you have already run the cell above first! \"))\n",
+ "else:\n",
+ " if download_id == \"\":\n",
+ " display(HTML(\"❌ The download_id field is empty! \"))\n",
+ " else:\n",
+ " if download_path == \"\":\n",
+ " download_path = download_path_default\n",
+ " if not os.path.exists(download_path):\n",
+ " os.makedirs(download_path)\n",
+ " else:\n",
+ " pass\n",
+ " elif not os.path.exists(download_path):\n",
+ " os.makedirs(download_path)\n",
+ " else:\n",
+ " pass\n",
+ " \n",
+ " !\"/content/tools/GoogleDriveCLI/gdrive\" download --path \"$download_path\" \"$download_id\"\n",
+ " \n",
+ " if download_path is download_path_default:\n",
+ " display(HTML(\"The download_path field is empty. Download(s) are stored into the default download path (/content/downloads). \"))\n",
+ " else:\n",
+ " display(HTML(\"Download(s) are stored into (\" + download_path + \"). \"))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "bEYznPNQ61sm"
+ },
+ "source": [
+ "### JDownloader "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "LP35vcdpw2Vd"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] JDownloader \n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "from os import path as _p\n",
+ "\n",
+ "NEW_Account = True\n",
+ "\n",
+ "if not _p.exists(\"/root/.ipython/mixlab.py\"):\n",
+ " from shlex import split as _spl\n",
+ " from subprocess import run # nosec\n",
+ "\n",
+ " shellCmd = \"wget -qq https://shirooo39.github.io/MiXLab/resources/mixlab.py \\\n",
+ " -O /root/.ipython/mixlab.py\"\n",
+ " run(_spl(shellCmd)) # nosec\n",
+ "\n",
+ "from mixlab import handleJDLogin\n",
+ "\n",
+ "handleJDLogin(NEW_Account)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "1mctlRk1TTrc"
+ },
+ "source": [
+ "### MEGA "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "AelSL7BeTcJA"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← MEGA Login \n",
+ "# @markdown Please log in to MEGA first (only needed to use the Uploader).
\n",
+ "# ================================================================ #\n",
+ "\n",
+ "from functools import wraps\n",
+ "import errno\n",
+ "import os\n",
+ "import signal\n",
+ "import subprocess\n",
+ "import shlex\n",
+ "\n",
+ "class TimeoutError(Exception):\n",
+ " pass\n",
+ "\n",
+ "\n",
+ "def timeout(seconds=10, error_message=os.strerror(errno.ETIME)):\n",
+ " def decorator(func):\n",
+ " def _handle_timeout(signum, frame):\n",
+ " raise TimeoutError(error_message)\n",
+ "\n",
+ " def wrapper(*args, **kwargs):\n",
+ " signal.signal(signal.SIGALRM, _handle_timeout)\n",
+ " signal.alarm(seconds)\n",
+ " try:\n",
+ " result = func(*args, **kwargs)\n",
+ " finally:\n",
+ " signal.alarm(0)\n",
+ " return result\n",
+ "\n",
+ " return wraps(func)(wrapper)\n",
+ "\n",
+ " return decorator\n",
+ "\n",
+ "if not os.path.exists(\"/root/.ipython/mixlab.py\"):\n",
+ " from subprocess import run\n",
+ " from shlex import split\n",
+ "\n",
+ " shellCmd = \"wget -qq https://raw.githubusercontent.com/shirooo39/MiXLab/master/resources/mixlab.py \\\n",
+ " -O /root/.ipython/mixlab.py\"\n",
+ " run(split(shellCmd))\n",
+ "from mixlab import runSh\n",
+ "\n",
+ "@timeout(10)\n",
+ "def runShT(args):\n",
+ " return runSh(args, output=True)\n",
+ "\n",
+ "# Installing MEGAcmd\n",
+ "if not os.path.exists(\"/usr/bin/mega-cmd\"):\n",
+ " print(\"Installing MEGA ...\")\n",
+ " runSh('sudo apt-get -y update')\n",
+ " runSh('sudo apt-get -y install libmms0 libc-ares2 libc6 libcrypto++6 libgcc1 libmediainfo0v5 libpcre3 libpcrecpp0v5 libssl1.1 libstdc++6 libzen0v5 zlib1g apt-transport-https')\n",
+ " runSh('sudo curl -sL -o /var/cache/apt/archives/MEGAcmd.deb https://mega.nz/linux/MEGAsync/Debian_9.0/amd64/megacmd-Debian_9.0_amd64.deb', output=True)\n",
+ " runSh('sudo dpkg -i /var/cache/apt/archives/MEGAcmd.deb', output=True)\n",
+ " print(\"MEGA is installed.\")\n",
+ "else:\n",
+ " !pkill mega-cmd\n",
+ "\n",
+ "# Enter MEGA credential\n",
+ "USERNAME = \"\" # @param {type:\"string\"}\n",
+ "PASSWORD = \"\" # @param {type:\"string\"}\n",
+ "if not (USERNAME == \"\" or PASSWORD == \"\"):\n",
+ " try:\n",
+ " runShT(f\"mega-login {USERNAME} {PASSWORD}\")\n",
+ " except TimeoutError:\n",
+ " runSh('mega-whoami', output=True)\n",
+ "else:\n",
+ " print(\"Please enter your MEGA credential.\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "p0Wg4seDVseV"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] MEGA Downloader \n",
+ "URL = \"\" #@param {type:\"string\"}\n",
+ "OUTPUT_PATH = \"\" #@param {type:\"string\"}\n",
+ "# @markdown > URL: is the MEGA link you want to download (ex: mega.nz/file/file_link#decryption_key)OUTPUT_PATH: is where to store the downloaded file(s) (ex: /content/downloads/)\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import sys, os, urllib.request\n",
+ "import time\n",
+ "import subprocess\n",
+ "import contextlib\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " loadingAn,\n",
+ ")\n",
+ "\n",
+ "if not OUTPUT_PATH:\n",
+ " os.makedirs(\"downloads\", exist_ok=True)\n",
+ " OUTPUT_PATH = \"downloads\"\n",
+ "# Installing MEGAcmd\n",
+ "if not os.path.exists(\"/usr/bin/mega-cmd\"):\n",
+ " loadingAn()\n",
+ " print(\"Installing MEGA ...\")\n",
+ " runSh('sudo apt-get -y update')\n",
+ " runSh('sudo apt-get -y install libmms0 libc-ares2 libc6 libcrypto++6 libgcc1 libmediainfo0v5 libpcre3 libpcrecpp0v5 libssl1.1 libstdc++6 libzen0v5 zlib1g apt-transport-https')\n",
+ " runSh('sudo curl -sL -o /var/cache/apt/archives/MEGAcmd.deb https://mega.nz/linux/MEGAsync/Debian_9.0/amd64/megacmd-Debian_9.0_amd64.deb', output=True)\n",
+ " runSh('sudo dpkg -i /var/cache/apt/archives/MEGAcmd.deb', output=True)\n",
+ " print(\"MEGA is installed.\")\n",
+ " clear_output()\n",
+ "\n",
+ "# Unix, Windows and old Macintosh end-of-line\n",
+ "newlines = ['\\n', '\\r\\n', '\\r']\n",
+ "\n",
+ "def unbuffered(proc, stream='stdout'):\n",
+ " stream = getattr(proc, stream)\n",
+ " with contextlib.closing(stream):\n",
+ " while True:\n",
+ " out = []\n",
+ " last = stream.read(1)\n",
+ " # Don't loop forever\n",
+ " if last == '' and proc.poll() is not None:\n",
+ " break\n",
+ " while last not in newlines:\n",
+ " # Don't loop forever\n",
+ " if last == '' and proc.poll() is not None:\n",
+ " break\n",
+ " out.append(last)\n",
+ " last = stream.read(1)\n",
+ " out = ''.join(out)\n",
+ " yield out\n",
+ "\n",
+ "def transfare():\n",
+ " import codecs\n",
+ " decoder = codecs.getincrementaldecoder(\"UTF-8\")()\n",
+ " cmd = [\"mega-get\", URL, OUTPUT_PATH]\n",
+ " proc = subprocess.Popen(\n",
+ " cmd,\n",
+ " stdout=subprocess.PIPE,\n",
+ " stderr=subprocess.STDOUT,\n",
+ " # Make all end-of-lines '\\n'\n",
+ " universal_newlines=True,\n",
+ " )\n",
+ " for line in unbuffered(proc):\n",
+ " print(line)\n",
+ " \n",
+ "transfare()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "3GKtYuBbUP-c"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] MEGA Uploader \n",
+ "# Simple_torrent = False # @param{type: \"boolean\"}\n",
+ "# Peerflix = False # @param{type: \"boolean\"}\n",
+ "PATH_TO_FILE = \"\" # @param {type:\"string\"}\n",
+ "# @markdown > PATH_TO_FILE is the location of the file you want to upload located at. (ex: /content/downloads/file-to-upload.zip)\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import time\n",
+ "import subprocess\n",
+ "import contextlib\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "# Unix, Windows and old Macintosh end-of-line\n",
+ "newlines = ['\\n', '\\r\\n', '\\r']\n",
+ "\n",
+ "def unbuffered(proc, stream='stdout'):\n",
+ " stream = getattr(proc, stream)\n",
+ " with contextlib.closing(stream):\n",
+ " while True:\n",
+ " out = []\n",
+ " last = stream.read(1)\n",
+ " # Don't loop forever\n",
+ " if last == '' and proc.poll() is not None:\n",
+ " break\n",
+ " while last not in newlines:\n",
+ " # Don't loop forever\n",
+ " if last == '' and proc.poll() is not None:\n",
+ " break\n",
+ " out.append(last)\n",
+ " last = stream.read(1)\n",
+ " out = ''.join(out)\n",
+ " yield out\n",
+ "\n",
+ "def transfare():\n",
+ " cmd = \"\"\n",
+ " if Simple_torrent:\n",
+ " cmd = ['mega-put', 'downloads', '/colab']\n",
+ " elif Peerflix:\n",
+ " cmd = ['mega-put', 'peerflix', '/colab']\n",
+ " else:\n",
+ " cmd = ['mega-put', PATH_TO_FILE, '/colab']\n",
+ " proc = subprocess.Popen(\n",
+ " cmd,\n",
+ " stdout=subprocess.PIPE,\n",
+ " stderr=subprocess.STDOUT,\n",
+ " # Make all end-of-lines '\\n'\n",
+ " universal_newlines=True,\n",
+ " )\n",
+ " for line in unbuffered(proc):\n",
+ " clear_output(wait=True)\n",
+ " print(line)\n",
+ "\n",
+ "try:\n",
+ " transfare()\n",
+ "except FileNotFoundError:\n",
+ " print(\"Please log into your MEGA account first!\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "dEq11jIB5oee"
+ },
+ "source": [
+ "### pyLoad "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "a08IDWFG5rm1"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] pyLoad \n",
+ "# @markdown pyLoad is a free and open-source download manager written in pure python.\n",
+ "# @markdown > pyLoad Default CredentialUsername: adminPassword: admin\n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "PORT_FORWARD = \"argo_tunnel\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, pathlib, zipfile, re\n",
+ "import urllib.request\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " displayUrl,\n",
+ " findProcess\n",
+ ")\n",
+ "\n",
+ "\n",
+ "clear_output()\n",
+ "loadingAn()\n",
+ "\n",
+ "os.makedirs('downloads', exist_ok=True)\n",
+ "os.makedirs('tools/pyload', exist_ok=True)\n",
+ "\n",
+ "# Downloading latest version of pyload\n",
+ "if not os.path.exists(\"tools/pyload/pyload-stable\"):\n",
+ " urlF = 'https://github.com/pyload/pyload/archive/stable.zip'\n",
+ " conf = 'https://raw.githubusercontent.com/shirooo39/' \\\n",
+ " 'MiXLab/master/resources/configurations/pyload/pyload.conf'\n",
+ " db = 'https://github.com/shirooo39/MiXLab/raw/master/' \\\n",
+ " 'resources/configurations/pyload/files.db'\n",
+ " urllib.request.urlretrieve(urlF, 'tools/pyload.zip')\n",
+ " urllib.request.urlretrieve(conf, 'tools/pyload/pyload.conf')\n",
+ " urllib.request.urlretrieve(db, 'tools/pyload/files.db')\n",
+ " with zipfile.ZipFile('tools/pyload.zip', 'r') as zip_ref: zip_ref.extractall('tools/pyload')\n",
+ " try:\n",
+ " pathlib.Path('tools/pyload.zip').unlink()\n",
+ " except FileNotFoundError:\n",
+ " pass\n",
+ "\n",
+ " runSh(\"apt install python-pycurl python-qt4 tesseract-ocr libtesseract-dev\")\n",
+ " runSh(\"pip2 install pycrypto pyOpenSSL Jinja2 tesseract tesseract-ocr\")\n",
+ "\n",
+ "if not findProcess(\"python2.7\", \"pyLoadCore.py\"):\n",
+ " runCmd = \"python2.7 /content/tools/pyload/pyload-stable/pyLoadCore.py\" \\\n",
+ " \" --configdir=/content/tools/pyload\" \\\n",
+ " \" --no-remote\" \\\n",
+ " \" --daemon\"\n",
+ " runSh(runCmd, shell=True)\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['pyload', 8000, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/pyLoad.yml\", 4074]).start('pyload')\n",
+ "displayUrl(Server, pNamU='pyLoad : ')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Ci0HTN9Xyxze"
+ },
+ "source": [
+ "### Pornhub Downloader "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "cD9BrjIoAbF7"
+ },
+ "source": [
+ "> Recommended to use YouTube-DL instead."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "jRrvPBr5y19U"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Install the Required Module(s) \n",
+ "# ================================================================ #\n",
+ "\n",
+ "#@title ← ឵឵Upgrade FFmpeg to v4.2.2 { vertical-output: true }\n",
+ "from IPython.display import clear_output\n",
+ "import os, urllib.request\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "pathDoneCMD = f'{HOME}/doneCMD.sh'\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " loadingAn,\n",
+ " textAn,\n",
+ ")\n",
+ "\n",
+ "loadingAn(name=\"lds\")\n",
+ "textAn(\"Installing dependencies...\", ty='twg')\n",
+ "os.system('pip3 install youtube-dl')\n",
+ "os.system('pip3 install prettytable')\n",
+ "os.system('pip3 install bs4')\n",
+ "os.system('pip3 install requests')\n",
+ "%cd /content\n",
+ "os.system('git clone https://github.com/mariosemes/PornHub-downloader-python.git')\n",
+ "\n",
+ "clear_output()\n",
+ "print(\"The module(s) has been successfully installed.\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "OLj2mj4lzcOp"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] PornHub Downloader \n",
+ "pornhub_url = '' #@param {type: \"string\"}\n",
+ "option = \"single_download\" #@param [\"single_download\", \"batch_download\",\"add\",\"delete\"]\n",
+ "# @markdown > - Single Download link Eg: https://www.pornhub.com/view_video.php?viewkey=ph5d69a2093729e\n",
+ "#@markdown > - The batch option will ask you for the full path of your .txt file where you can import multiple URLs at once.Take care that every single URL in the .txt file is in his own row.\n",
+ "# ================================================================ #\n",
+ "\n",
+ "%cd PornHub-downloader-python\n",
+ "\n",
+ "if option == 'single_download':\n",
+ " !python3 phdler.py custom \"$pornhub_url\"\n",
+ "\n",
+ "elif option == 'add':\n",
+ " !python3 phdler.py add \"$pornhub_url\"\n",
+ "\n",
+ "elif option == 'delete':\n",
+ " !python3 phdler.py delete \"$pornhub_url\"\n",
+ "\n",
+ "else:\n",
+ " !python3 phdler.py custom batch "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "tL-ilxH0N_B9"
+ },
+ "source": [
+ "### Spotify Downloader "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "JTAKDpp9OCEs"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] Spotify Downloader \n",
+ "# @markdown Download Spotify playlists from YouTube with album-art and meta-tags
\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, uuid, urllib.parse, re\n",
+ "import ipywidgets as widgets\n",
+ "\n",
+ "from glob import glob\n",
+ "from urllib.parse import urlparse, parse_qs\n",
+ "from IPython.display import HTML, clear_output, YouTubeVideo\n",
+ "from IPython.utils.io import ask_yes_no\n",
+ "from google.colab import output, files\n",
+ "\n",
+ "\n",
+ "os.makedirs('tools/spotify-downloader/', exist_ok=True)\n",
+ "os.makedirs('downloads', exist_ok=True)\n",
+ "\n",
+ "# # Config files\n",
+ "# data = \"\"\"spotify-downloader:\n",
+ "# avconv: false\n",
+ "# download-only-metadata: false\n",
+ "# dry-run: false\n",
+ "# file-format: '{artist} - {track_name}'\n",
+ "# folder: /home/user/Music\n",
+ "# input-ext: .m4a\n",
+ "# log-level: INFO\n",
+ "# manual: false\n",
+ "# music-videos-only: false\n",
+ "# no-fallback-metadata: false\n",
+ "# no-metadata: false\n",
+ "# no-spaces: false\n",
+ "# output-ext: .mp3\n",
+ "# overwrite: prompt\n",
+ "# search-format: '{artist} - {track_name} lyrics'\n",
+ "# skip: null\n",
+ "# spotify_client_id: 4fe3fecfe5334023a1472516cc99d805\n",
+ "# spotify_client_secret: 0f02b7c483c04257984695007a4a8d5c\n",
+ "# trim-silence: false\n",
+ "# write-successful: null\n",
+ "# write-to: null\n",
+ "# youtube-api-key: null\n",
+ "# \"\"\"\n",
+ "# with open('tools/spotify-downloader/config.yml', 'w') as wnow:\n",
+ "# wnow.write(data)\n",
+ "\n",
+ "Links = widgets.Textarea(placeholder='''Link list\n",
+ "(one link per line)''')\n",
+ "\n",
+ "fileFormat = widgets.Text(\n",
+ " value='{artist} - {track_name}',\n",
+ " placeholder='File name format',\n",
+ " description=\"\"\"File Name : file format to save the downloaded track with, each\n",
+ " tag is surrounded by curly braces. Possible formats:\n",
+ " ['track_name', 'artist', 'album', 'album_artist',\n",
+ " 'genre', 'disc_number', 'duration', 'year',\n",
+ " 'original_date', 'track_number', 'total_tracks',\n",
+ " 'isrc']\"\"\",\n",
+ " disabled=False\n",
+ ")\n",
+ "\n",
+ "searchFormat = widgets.Text(\n",
+ " value='{artist} - {track_name} lyrics',\n",
+ " placeholder='Search format',\n",
+ " description=\"\"\"Search Format : search format to search for on YouTube, each tag is\n",
+ " surrounded by curly braces. Possible formats:\n",
+ " ['track_name', 'artist', 'album', 'album_artist',\n",
+ " 'genre', 'disc_number', 'duration', 'year',\n",
+ " 'original_date', 'track_number', 'total_tracks',\n",
+ " 'isrc']\"\"\",\n",
+ " disabled=False\n",
+ ")\n",
+ "\n",
+ "tab = widgets.Tab()\n",
+ "\n",
+ "LinksType = widgets.RadioButtons(\n",
+ " options=['Songs', 'Playlist', 'Album', 'Username', 'Artist'],\n",
+ " value='Songs',\n",
+ " layout={'width': 'max-content'},\n",
+ " description='Links type:',\n",
+ " disabled=False,\n",
+ ")\n",
+ "\n",
+ "SavePathYT = widgets.Dropdown(options=[\"/content/downloads\", \"/content\"])\n",
+ "\n",
+ "Extension = widgets.Select(options=[\"aac\", \"flac\", \"mp3\", \"m4a\", \"opus\", \"vorbis\", \"wav\"], value=\"mp3\")\n",
+ "\n",
+ "TrimSilence = widgets.ToggleButton(\n",
+ " value=False,\n",
+ " description='Trim silence',\n",
+ " disabled=False,\n",
+ " button_style='',\n",
+ " tooltip='remove silence from the start of the audio',\n",
+ " icon='check'\n",
+ ")\n",
+ "\n",
+ "writeM3u = widgets.ToggleButton(\n",
+ " value=False,\n",
+ " description='Write .m3u playlist',\n",
+ " disabled=False,\n",
+ " button_style='',\n",
+ " tooltip='''generate an .m3u playlist file with youtube links\n",
+ " given a text file containing tracks''',\n",
+ " icon='check'\n",
+ ")\n",
+ "\n",
+ "noMeta = widgets.ToggleButton(\n",
+ " value=False,\n",
+ " description='No metadata',\n",
+ " disabled=False,\n",
+ " button_style='',\n",
+ " tooltip='do not embed metadata in tracks',\n",
+ " icon='check'\n",
+ ")\n",
+ "\n",
+ "nf = widgets.ToggleButton(\n",
+ " value=False,\n",
+ " description='No fallback metadata',\n",
+ " disabled=False,\n",
+ " button_style='',\n",
+ " tooltip='''do not use YouTube as fallback for metadata if track\n",
+ " not found on Spotify''',\n",
+ " icon='check'\n",
+ ")\n",
+ "\n",
+ "dryRun = widgets.ToggleButton(\n",
+ " value=False,\n",
+ " description='Dry run',\n",
+ " disabled=False,\n",
+ " button_style='',\n",
+ " tooltip=''' show only track title and YouTube URL, and then skip\n",
+ " to the next track (if any)''',\n",
+ " icon='check'\n",
+ ")\n",
+ "\n",
+ "MusicVidOnly = widgets.ToggleButton(\n",
+ " value=False,\n",
+ " description='Music Videos Only',\n",
+ " disabled=False,\n",
+ " button_style='',\n",
+ " tooltip='''search only for music videos on Youtube (works only\n",
+ " when YouTube API key is set''',\n",
+ " icon='check'\n",
+ ")\n",
+ "\n",
+ "NoSpaces = widgets.ToggleButton(\n",
+ " value=False,\n",
+ " description='No Spaces',\n",
+ " disabled=False,\n",
+ " button_style='',\n",
+ " tooltip='''replace spaces with underscores in file names''',\n",
+ " icon='check'\n",
+ ")\n",
+ "\n",
+ "manual = widgets.ToggleButton(\n",
+ " value=False,\n",
+ " description='manually',\n",
+ " disabled=False,\n",
+ " button_style='',\n",
+ " tooltip='''choose the track to download manually from a list of\n",
+ " matching tracks''',\n",
+ " icon='check'\n",
+ ")\n",
+ "\n",
+ "nr = widgets.ToggleButton(\n",
+ " value=False,\n",
+ " description='Keep original',\n",
+ " disabled=False,\n",
+ " button_style='',\n",
+ " tooltip='''do not remove the original file after conversion''',\n",
+ " icon='check'\n",
+ ")\n",
+ "\n",
+ "ExtraArg = widgets.Text(placeholder=\"Extra Arguments\")\n",
+ "\n",
+ "class MakeButton(object):\n",
+ " def __init__(self, title, callback, style):\n",
+ " self._title = title\n",
+ " self._callback = callback\n",
+ " self._style = style\n",
+ " def _repr_html_(self):\n",
+ " callback_id = 'button-' + str(uuid.uuid4())\n",
+ " output.register_callback(callback_id, self._callback)\n",
+ " if self._style != \"\":\n",
+ " style_html = \"p-Widget jupyter-widgets jupyter-button widget-button mod-\" + self._style\n",
+ " else:\n",
+ " style_html = \"p-Widget jupyter-widgets jupyter-button widget-button\"\n",
+ " template = \"\"\"{title} \n",
+ " \"\"\"\n",
+ " html = template.format(title=self._title, callback_id=callback_id, style_html=style_html)\n",
+ " return html\n",
+ " \n",
+ "def MakeLabel(description, button_style):\n",
+ " return widgets.Button(description=description, disabled=True, button_style=button_style)\n",
+ "\n",
+ "def RefreshPathYT():\n",
+ " if os.path.exists(\"/content/drive/\"):\n",
+ " if os.path.exists(\"/content/drive/Shared drives/\"):\n",
+ " SavePathYT.options = [\"/content\", \"/content/downloads\", \"/content/drive/My Drive\"] + glob(\"/content/drive/My Drive/*/\") + glob(\"/content/drive/Shared drives/*/\")\n",
+ " else:\n",
+ " SavePathYT.options = [\"/content\", \"/content/downloads\", \"/content/drive/My Drive\"] + glob(\"/content/drive/My Drive/*/\")\n",
+ " else:\n",
+ " SavePathYT.options = [\"/content/downloads\", \"/content\"]\n",
+ "\n",
+ "\n",
+ "def ShowYT():\n",
+ " clear_output(wait=True)\n",
+ " RefreshPathYT()\n",
+ " mainTab = widgets.Box([widgets.HBox([widgets.VBox([widgets.HTML(\"Link: \"), Links,\n",
+ " LinksType, searchFormat, fileFormat, widgets.HBox([TrimSilence, writeM3u, noMeta]), widgets.HBox([nf, dryRun, MusicVidOnly]),widgets.HBox([NoSpaces, manual, nr])]),\n",
+ " widgets.VBox([widgets.HTML(\"Extension: \"), Extension,\n",
+ " widgets.HTML(\"Extra Arguments: \"), ExtraArg])])])\n",
+ " tab.children = [mainTab]\n",
+ " tab.set_title(0, 'spotify-downloader')\n",
+ " display(tab)\n",
+ " display(HTML(\"Save Location: \"), SavePathYT, MakeButton(\"Refresh\", RefreshPathYT, \"\"))\n",
+ " if not os.path.exists(\"/content/drive/\"):\n",
+ " display(HTML(\"*If you want to save in Google Drive please run the cell below.\"))\n",
+ " display(HTML(\" \"), MakeButton(\"Download\", DownloadYT, \"info\"))\n",
+ "\n",
+ "def DownloadYT():\n",
+ " if Links.value.strip():\n",
+ " Count = 0\n",
+ " Total = str(len(Links.value.splitlines()))\n",
+ " if writeM3u.value:\n",
+ " M3u = '--write-m3u'\n",
+ " else:\n",
+ " M3u = ''\n",
+ " if TrimSilence.value:\n",
+ " trmS = '--trim-silence'\n",
+ " else:\n",
+ " trmS = ''\n",
+ " if noMeta.value:\n",
+ " noM = '--no-metadata'\n",
+ " else:\n",
+ " noM = ''\n",
+ " if nf.value:\n",
+ " nfv = '--no-fallback-metadata'\n",
+ " else:\n",
+ " nfv = ''\n",
+ " if dryRun.value:\n",
+ " drR = '--dry-run'\n",
+ " else:\n",
+ " drR = ''\n",
+ " if MusicVidOnly.value:\n",
+ " MsV = '--music-videos-only'\n",
+ " else:\n",
+ " MsV = ''\n",
+ " if NoSpaces.value:\n",
+ " NoS = '--no-spaces'\n",
+ " else:\n",
+ " NoS = ''\n",
+ " if manual.value:\n",
+ " mal = '--manual'\n",
+ " else:\n",
+ " mal = ''\n",
+ " if nr.value:\n",
+ " nro = '--no-remove-original' \n",
+ " else:\n",
+ " nro = ''\n",
+ " if not searchFormat.value == '{artist} - {track_name} lyrics':\n",
+ " seFor = f'--search-format \"{searchFormat.value}\"'\n",
+ " else:\n",
+ " seFor = ''\n",
+ " if not fileFormat.value == '{artist} - {track_name}':\n",
+ " fiFor = f'--file-format \"{fileFormat.value}\"'\n",
+ " else:\n",
+ " fiFor = ''\n",
+ " \n",
+ " if not LinksType.value == 'Songs':\n",
+ " with open('tools/spotify-downloader/finish.txt', 'a+') as master:\n",
+ " for Link in Links.value.splitlines():\n",
+ " if LinksType.value == 'Playlist':\n",
+ " outFileName = !spotdl --playlist $Link\n",
+ " elif LinksType.value == 'Album':\n",
+ " outFileName = !spotdl --album $Link\n",
+ " elif LinksType.value == 'Username':\n",
+ " outFileName = !spotdl -u $Link\n",
+ " elif LinksType.value == 'Artist':\n",
+ " outFileName = !spotdl --all-albums $Link\n",
+ " filename = re.search(r\"to\\s(.+\\.txt)\", outFileName[-1]).group(1)\n",
+ " with open(filename, 'r') as r:\n",
+ " master.write(r.read())\n",
+ " else:\n",
+ " for Link in Links.value.splitlines():\n",
+ " with open('tools/spotify-downloader/finish.txt', 'w') as master:\n",
+ " master.write(Link)\n",
+ " # Extra Arguments\n",
+ " \n",
+ " extraargC = ExtraArg.value\n",
+ " cmd = r\"spotdl -l 'tools/spotify-downloader/finish.txt' \" \\\n",
+ " fr\"-f {SavePathYT.value} \" \\\n",
+ " fr\"-o .{Extension.value} \" \\\n",
+ " f\"--overwrite skip \" \\\n",
+ " f\"{seFor} {fiFor} \" \\\n",
+ " f\"{M3u} {trmS} {noM} {nfv} {drR} {MsV} {NoS} {mal} {nro}\" \n",
+ " !$cmd\n",
+ " ShowYT()\n",
+ "\n",
+ "if not os.path.isfile(\"/usr/local/bin/spotdl\"):\n",
+ " get_ipython().system_raw(\"pip3 install spotdl && apt-get install ffmpeg\")\n",
+ "\n",
+ "ShowYT()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "QOyo5zf4suod"
+ },
+ "source": [
+ "### YouTube-DL "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "mYCRR-yWSuyi"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] YouTube-DL \n",
+ "Archive = False\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, uuid, urllib.parse\n",
+ "import ipywidgets as widgets\n",
+ "\n",
+ "from glob import glob\n",
+ "from urllib.parse import urlparse, parse_qs\n",
+ "from IPython.display import HTML, clear_output, YouTubeVideo\n",
+ "from IPython.utils.io import ask_yes_no\n",
+ "from google.colab import output, files\n",
+ "\n",
+ "Links = widgets.Textarea(placeholder='''Video/Playlist Link\n",
+ "(one link per line)''')\n",
+ "\n",
+ "VideoQ = widgets.Dropdown(options=[\"Best Quality (VP9 upto 4K)\", \"Best Compatibility (H.264 upto 1080p)\"])\n",
+ "\n",
+ "AudioQ = widgets.Dropdown(options=[\"Best Quality (Opus)\", \"Best Compatibility (M4A)\"])\n",
+ "\n",
+ "Subtitle = widgets.ToggleButton(value=True, description=\"Subtitle\", button_style=\"info\", tooltip=\"Subtitle\")\n",
+ "\n",
+ "SavePathYT = widgets.Dropdown(options=[\"/content\", \"/content/downloads\"])\n",
+ "\n",
+ "AudioOnly = widgets.ToggleButton(value=False, description=\"Audio Only\", button_style=\"\", tooltip=\"Audio Only\")\n",
+ "\n",
+ "Resolution = widgets.Select(options=[\"Highest\", \"4K\", \"1440p\", \"1080p\", \"720p\", \"480p\", \"360p\", \"240p\", \"144p\"], value=\"Highest\")\n",
+ "\n",
+ "Extension = widgets.Select(options=[\"mkv\", \"webm\"], value=\"mkv\")\n",
+ "\n",
+ "UsernameYT = widgets.Text(placeholder=\"Username\")\n",
+ "\n",
+ "PasswordYT = widgets.Text(placeholder=\"Password\")\n",
+ "\n",
+ "SecAuth = widgets.Text(placeholder=\"2nd Factor Authentication\")\n",
+ "\n",
+ "VideoPW = widgets.Text(placeholder=\"Video Password\")\n",
+ "\n",
+ "GEOBypass = widgets.Dropdown(options=[\"Disable\", \"Hide\", \"AD\", \"AE\", \"AF\", \"AG\", \"AI\", \"AL\", \"AM\", \"AO\", \"AQ\", \"AR\", \"AS\", \"AT\", \"AU\", \"AW\", \"AX\", \"AZ\", \"BA\", \"BB\", \"BD\", \"BE\", \"BF\", \"BG\", \"BH\", \"BI\", \"BJ\", \"BL\", \"BM\", \"BN\", \"BO\", \"BQ\", \"BR\", \"BS\", \"BT\", \"BV\", \"BW\", \"BY\", \"BZ\", \"CA\", \"CC\", \"CD\", \"CF\", \"CG\", \"CH\", \"CI\", \"CK\", \"CL\", \"CM\", \"CN\", \"CO\", \"CR\", \"CU\", \"CV\", \"CW\", \"CX\", \"CY\", \"CZ\", \"DE\", \"DJ\", \"DK\", \"DM\", \"DO\", \"DZ\", \"EC\", \"EE\", \"EG\", \"EH\", \"ER\", \"ES\", \"ET\", \"FI\", \"FJ\", \"FK\", \"FM\", \"FO\", \"FR\", \"GA\", \"GB\", \"GD\", \"GE\", \"GF\", \"GG\", \"GH\", \"GI\", \"GL\", \"GM\", \"GN\", \"GP\", \"GQ\", \"GR\", \"GS\", \"GT\", \"GU\", \"GW\", \"GY\", \"HK\", \"HM\", \"HN\", \"HR\", \"HT\", \"HU\", \"ID\", \"IE\", \"IL\", \"IM\", \"IN\", \"IO\", \"IQ\", \"IR\", \"IS\", \"IT\", \"JE\", \"JM\", \"JO\", \"JP\", \"KE\", \"KG\", \"KH\", \"KI\", \"KM\", \"KN\", \"KP\", \"KR\", \"KW\", \"KY\", \"KZ\", \"LA\", \"LB\", \"LC\", \"LI\", \"LK\", \"LR\", \"LS\", \"LT\", \"LU\", \"LV\", \"LY\", \"MA\", \"MC\", \"MD\", \"ME\", \"MF\", \"MG\", \"MH\", \"MK\", \"ML\", \"MM\", \"MN\", \"MO\", \"MP\", \"MQ\", \"MR\", \"MS\", \"MT\", \"MU\", \"MV\", \"MW\", \"MX\", \"MY\", \"MZ\", \"NA\", \"NC\", \"NE\", \"NF\", \"NG\", \"NI\", \"NL\", \"NO\", \"NP\", \"NR\", \"NU\", \"NZ\", \"OM\", \"PA\", \"PE\", \"PF\", \"PG\", \"PH\", \"PK\", \"PL\", \"PM\", \"PN\", \"PR\", \"PS\", \"PT\", \"PW\", \"PY\", \"QA\", \"RE\", \"RO\", \"RS\", \"RU\", \"RW\", \"SA\", \"SB\", \"SC\", \"SD\", \"SE\", \"SG\", \"SH\", \"SI\", \"SJ\", \"SK\", \"SL\", \"SM\", \"SN\", \"SO\", \"SR\", \"SS\", \"ST\", \"SV\", \"SX\", \"SY\", \"SZ\", \"TC\", \"TD\", \"TF\", \"TG\", \"TH\", \"TJ\", \"TK\", \"TL\", \"TM\", \"TN\", \"TO\", \"TR\", \"TT\", \"TV\", \"TW\", \"TZ\", \"UA\", \"UG\", \"UM\", \"US\", \"UY\", \"UZ\", \"VA\", \"VC\", \"VE\", \"VG\", \"VI\", \"VN\", \"VU\", \"WF\", \"WS\", \"YE\", \"YT\", \"ZA\", \"ZM\", \"ZW\"])\n",
+ "\n",
+ "ProxyYT = widgets.Text(placeholder=\"Proxy URL\")\n",
+ "\n",
+ "MinSleep = widgets.BoundedIntText(value=0, min=0, max=300, step=1, description=\"Min:\")\n",
+ "\n",
+ "MaxSleep = widgets.BoundedIntText(value=0, min=0, max=300, step=1, description=\"Max:\")\n",
+ "\n",
+ "ExtraArg = widgets.Text(placeholder=\"Extra Arguments\")\n",
+ "\n",
+ "class MakeButton(object):\n",
+ " def __init__(self, title, callback, style):\n",
+ " self._title = title\n",
+ " self._callback = callback\n",
+ " self._style = style\n",
+ " def _repr_html_(self):\n",
+ " callback_id = 'button-' + str(uuid.uuid4())\n",
+ " output.register_callback(callback_id, self._callback)\n",
+ " if self._style != \"\":\n",
+ " style_html = \"p-Widget jupyter-widgets jupyter-button widget-button mod-\" + self._style\n",
+ " else:\n",
+ " style_html = \"p-Widget jupyter-widgets jupyter-button widget-button\"\n",
+ " template = \"\"\"{title} \n",
+ " \"\"\"\n",
+ " html = template.format(title=self._title, callback_id=callback_id, style_html=style_html)\n",
+ " return html\n",
+ " \n",
+ "def MakeLabel(description, button_style):\n",
+ " return widgets.Button(description=description, disabled=True, button_style=button_style)\n",
+ "\n",
+ "def upload_archive():\n",
+ " if ask_yes_no(\"Do you already have an archive file? (y/n)\", default=\"\", interrupt=\"\"):\n",
+ " try:\n",
+ " display(HTML(\"Please upload an archive from your computer. \"))\n",
+ " UploadConfig = files.upload().keys()\n",
+ " clear_output(wait=True)\n",
+ " if len(UploadConfig) == 0:\n",
+ " return display(HTML(\"File upload has been cancelled during upload file. \"))\n",
+ " elif len(UploadConfig) == 1:\n",
+ " for fn in UploadConfig:\n",
+ " if os.path.isfile(\"/content/\" + fn):\n",
+ " get_ipython().system_raw(\"mv -f \" + \"\\\"\" + fn + \"\\\" /root/.youtube-dl.txt && chmod 666 /root/.youtube-dl.txt\")\n",
+ " AudioOnly.observe(AudioOnlyChange)\n",
+ " Subtitle.observe(SubtitleChange)\n",
+ " AudioQ.observe(AudioQChange)\n",
+ " ShowYT()\n",
+ " else:\n",
+ " return display(HTML(\"File upload has been failed during upload file. \"))\n",
+ " else:\n",
+ " for fn in UploadConfig:\n",
+ " get_ipython().system_raw(\"rm -f \" + \"\\\"\" + fn + \"\\\"\")\n",
+ " return display(HTML(\"Please uploading only one file at a time. \"))\n",
+ " except:\n",
+ " clear_output(wait=True)\n",
+ " return display(HTML(\"Error occurred during upload file. \"))\n",
+ " else:\n",
+ " get_ipython().system_raw(\"touch '/root/.youtube-dl.txt'\")\n",
+ " AudioOnly.observe(AudioOnlyChange)\n",
+ " Subtitle.observe(SubtitleChange)\n",
+ " AudioQ.observe(AudioQChange)\n",
+ " ShowYT()\n",
+ "\n",
+ "def RefreshPathYT():\n",
+ " if os.path.exists(\"/content/drive/\"):\n",
+ " if os.path.exists(\"/content/drive/Shared drives/\"):\n",
+ " SavePathYT.options = [\"/content\", \"/content/downloads\", \"/content/drive/My Drive\"] + glob(\"/content/drive/My Drive/*/\") + glob(\"/content/drive/Shared drives/*/\")\n",
+ " else:\n",
+ " SavePathYT.options = [\"/content\", \"/content/downloads\", \"/content/drive/My Drive\"] + glob(\"/content/drive/My Drive/*/\")\n",
+ " else:\n",
+ " SavePathYT.options = [\"/content\", \"/content/downloads\"]\n",
+ "\n",
+ "def AudioOnlyChange(change):\n",
+ " if change[\"type\"] == \"change\" and change[\"new\"]:\n",
+ " VideoQ.disabled = True\n",
+ " Subtitle.disabled = True\n",
+ " if Subtitle.value:\n",
+ " Subtitle.button_style = \"info\"\n",
+ " else:\n",
+ " Subtitle.button_style = \"\"\n",
+ " Resolution.disabled = True\n",
+ " Extension.options = [\"best\", \"aac\", \"flac\", \"mp3\", \"m4a\", \"opus\", \"vorbis\", \"wav\"]\n",
+ " Extension.value = \"best\"\n",
+ " AudioOnly.button_style = \"info\"\n",
+ " elif change[\"type\"] == \"change\" and change[\"new\"] == False:\n",
+ " VideoQ.disabled = False\n",
+ " Subtitle.disabled = False\n",
+ " if Subtitle.value:\n",
+ " Subtitle.button_style = \"info\"\n",
+ " else:\n",
+ " Subtitle.button_style = \"\"\n",
+ " Resolution.disabled = False\n",
+ " if AudioQ.value == \"Best Quality (Opus)\":\n",
+ " Extension.options = [\"mkv\", \"webm\"]\n",
+ " else:\n",
+ " Extension.options = [\"mkv\", \"mp4\", \"webm\"]\n",
+ " Extension.value = \"mkv\"\n",
+ " AudioOnly.button_style = \"\"\n",
+ "\n",
+ "def SubtitleChange(change):\n",
+ " if change[\"type\"] == \"change\" and change[\"new\"]:\n",
+ " Subtitle.button_style = \"info\"\n",
+ " elif change[\"type\"] == \"change\" and change[\"new\"] == False:\n",
+ " Subtitle.button_style = \"\"\n",
+ "\n",
+ "def AudioQChange(change):\n",
+ " if change[\"type\"] == \"change\" and change[\"new\"] == \"Best Quality (Opus)\":\n",
+ " Extension.options = [\"mkv\", \"webm\"]\n",
+ " Extension.value = \"mkv\"\n",
+ " elif change[\"type\"] == \"change\" and change[\"new\"] == \"Best Compatibility (M4A)\":\n",
+ " Extension.options = [\"mkv\", \"mp4\", \"webm\"]\n",
+ " Extension.value = \"mkv\"\n",
+ "\n",
+ "def ShowYT():\n",
+ " clear_output(wait=True)\n",
+ " RefreshPathYT()\n",
+ " display(widgets.HBox([widgets.VBox([widgets.HTML(\"Link: \"), Links,\n",
+ " widgets.HTML(\"For website that require an account: \"), UsernameYT, PasswordYT, SecAuth, VideoPW,\n",
+ " widgets.HTML(\"GEO Bypass Country: \"), GEOBypass,\n",
+ " widgets.HTML(\"Proxy: \"), ProxyYT,\n",
+ " widgets.HTML(\"Sleep Interval (second): \"), MinSleep, MaxSleep]),\n",
+ " widgets.VBox([widgets.HTML(\"Video Quality: \"), VideoQ, widgets.HTML(\"Resolution: \"), Resolution,\n",
+ " widgets.HTML(\"Audio Quality: \"), AudioQ, widgets.HTML(\"Extension: \"), Extension,\n",
+ " widgets.HTML(\"Extra Options: \"), widgets.HBox([Subtitle, AudioOnly]),\n",
+ " widgets.HTML(\"Extra Arguments: \"), ExtraArg])]), HTML(\"Save Location: \"),\n",
+ " SavePathYT, MakeButton(\"Refresh\", RefreshPathYT, \"\"))\n",
+ " if not os.path.exists(\"/content/drive/\"):\n",
+ "# display(HTML(\"*If you want to save in Google Drive please run the cell below.\"))\n",
+ " display(HTML(\" \"), MakeButton(\"Download\", DownloadYT, \"info\"))\n",
+ "\n",
+ "def DownloadYT():\n",
+ " if Links.value.strip():\n",
+ " Count = 0\n",
+ " Total = str(len(Links.value.splitlines()))\n",
+ " # Account Check\n",
+ " if UsernameYT.value.strip() and PasswordYT.value.strip():\n",
+ " accountC = \"--username \\\"\" + UsernameYT.value + \"\\\" --password \\\"\" + PasswordYT.value + \"\\\"\"\n",
+ " else:\n",
+ " accountC = \"\"\n",
+ " if SecAuth.value.strip():\n",
+ " secauthC = \"-2 \" + SecAuth.value\n",
+ " else:\n",
+ " secauthC = \"\"\n",
+ " if VideoPW.value.strip():\n",
+ " videopwC = \"--video-password \" + VideoPW.value\n",
+ " else:\n",
+ " videopwC = \"\"\n",
+ " # Proxy\n",
+ " if ProxyYT.value.strip():\n",
+ " proxyytC = \"--proxy \" + ProxyYT.value\n",
+ " else:\n",
+ " proxyytC = \"\"\n",
+ " # GEO Bypass\n",
+ " if GEOBypass.value == \"Disable\":\n",
+ " geobypass = \"\"\n",
+ " elif GEOBypass.value == \"Hide\":\n",
+ " geobypass = \"--geo-bypass\"\n",
+ " else:\n",
+ " geobypass = \"--geo-bypass-country \" + GEOBypass.value\n",
+ " # Video Quality\n",
+ " if VideoQ.value == \"Best Quality (VP9 upto 4K)\":\n",
+ " videoqC = \"webm\"\n",
+ " else:\n",
+ " videoqC = \"mp4\"\n",
+ " # Audio Quality\n",
+ " if AudioQ.value == \"Best Quality (Opus)\":\n",
+ " audioqC = \"webm\"\n",
+ " else:\n",
+ " audioqC = \"m4a\"\n",
+ " # Audio Only Check\n",
+ " if AudioOnly.value:\n",
+ " subtitleC = \"\"\n",
+ " thumbnailC = \"\"\n",
+ " extC = \"-x --audio-quality 0 --audio-format \" + Extension.value\n",
+ " codecC = \"bestaudio[ext=\" + audioqC + \"]/bestaudio/best\"\n",
+ " else:\n",
+ " if Subtitle.value:\n",
+ " subtitleC = \"--all-subs --convert-subs srt --embed-subs\"\n",
+ " else:\n",
+ " subtitleC = \"\"\n",
+ " if Extension.value == \"mp4\":\n",
+ " thumbnailC = \"--embed-thumbnail\"\n",
+ " else:\n",
+ " thumbnailC = \"\"\n",
+ " extC = \"--merge-output-format \" + Extension.value\n",
+ " if Resolution.value == \"Highest\":\n",
+ " codecC = \"bestvideo[ext=\" + videoqC + \"]+bestaudio[ext=\" + audioqC + \"]/bestvideo+bestaudio/best\"\n",
+ " else:\n",
+ " codecC = \"bestvideo[ext=\" + videoqC + \",height<=\" + Resolution.value.replace(\"4K\", \"2160\").replace(\"p\", \"\") + \"]+bestaudio[ext=\" + audioqC + \"]/bestvideo[height<=\" + Resolution.value.replace(\"4K\", \"2160\").replace(\"p\", \"\") + \"]+bestaudio/bestvideo+bestaudio/best\"\n",
+ " # Archive\n",
+ " if os.path.isfile(\"/root/.youtube-dl.txt\"):\n",
+ " archiveC = \"--download-archive \\\"/root/.youtube-dl.txt\\\"\"\n",
+ " else:\n",
+ " archiveC = \"\"\n",
+ " # Sleep Interval\n",
+ " if MinSleep.value > 0 and MaxSleep.value > 0:\n",
+ " minsleepC = \"--min-sleep-interval \" + MinSleep.value\n",
+ " maxsleepC = \"--max-sleep-interval \" + MaxSleep.value\n",
+ " else:\n",
+ " minsleepC = \"\"\n",
+ " maxsleepC = \"\"\n",
+ " # Extra Arguments\n",
+ " extraargC = ExtraArg.value\n",
+ " for Link in Links.value.splitlines():\n",
+ " clear_output(wait=True)\n",
+ " Count += 1\n",
+ " display(HTML(\"Processing link \" + str(Count) + \" out of \" + Total + \" \"))\n",
+ " if \"youtube.com\" in Link or \"youtu.be\" in Link:\n",
+ " display(HTML(\"Currently downloading... \"), YouTubeVideo(Link, width=640, height=360), HTML(\" \"))\n",
+ " else:\n",
+ " display(HTML(\" \"))\n",
+ " if (\"youtube.com\" in Link or \"youtu.be\" in Link) and \"list=\" in Link:\n",
+ " !youtube-dl -i --no-warnings --yes-playlist --add-metadata $accountC $secauthC $videopwC $minsleepC $maxsleepC $geobypass $proxyytC $extC $thumbnailC $subtitleC $archiveC $extraargC -f \"$codecC\" -o \"/root/.YouTube-DL/%(playlist)s/%(playlist_index)s - %(title)s.%(ext)s\" \"$Link\"\n",
+ " else:\n",
+ " !youtube-dl -i --no-warnings --yes-playlist --add-metadata $accountC $secauthC $videopwC $minsleepC $maxsleepC $geobypass $proxyytC $extC $thumbnailC $subtitleC $archiveC $extraargC -f \"$codecC\" -o \"/root/.YouTube-DL/%(title)s.%(ext)s\" \"$Link\"\n",
+ " if not os.path.exists(SavePathYT.value):\n",
+ " get_ipython().system_raw(\"mkdir -p -m 666 \" + SavePathYT.value)\n",
+ " get_ipython().system_raw(\"mv /root/.YouTube-DL/* '\" + SavePathYT.value + \"/'\")\n",
+ " # Archive Download\n",
+ " if os.path.isfile(\"/root/.youtube-dl.txt\"):\n",
+ " files.download(\"/root/.youtube-dl.txt\")\n",
+ " ShowYT()\n",
+ "\n",
+ "if not os.path.isfile(\"/usr/local/bin/youtube-dl\"):\n",
+ " get_ipython().system_raw(\"rm -rf /content/sample_data/ && mkdir -p -m 666 /root/.YouTube-DL/ && apt-get install atomicparsley && curl -L https://yt-dl.org/downloads/latest/youtube-dl -o /usr/local/bin/youtube-dl && chmod a+rx /usr/local/bin/youtube-dl\")\n",
+ "if Archive:\n",
+ " upload_archive()\n",
+ "else:\n",
+ " AudioOnly.observe(AudioOnlyChange)\n",
+ " Subtitle.observe(SubtitleChange)\n",
+ " AudioQ.observe(AudioQChange)\n",
+ " ShowYT()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "FejGUkxPhDmE"
+ },
+ "source": [
+ "## ✧ *P2P-File Downloader* ✧ "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "_GVSJ9jdn6lW"
+ },
+ "source": [
+ "### Deluge "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "z1IqkfEXn-eu"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] Deluge \n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, urllib.request, pathlib\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " findProcess,\n",
+ " loadingAn,\n",
+ " displayUrl,\n",
+ " PortForward_wrapper\n",
+ ")\n",
+ "\n",
+ "clear_output()\n",
+ "loadingAn()\n",
+ "\n",
+ "pathlib.Path('downloads').mkdir(exist_ok=True)\n",
+ "pathlib.Path(f\"{HOME}/.config/deluge/\").mkdir(parents=True, exist_ok=True)\n",
+ "\n",
+ "if not (findProcess(\"/usr/bin/python\", \"deluged\") or findProcess(\"/usr/bin/python\", \"deluge-web\")):\n",
+ " runSh('sudo apt install -y deluged deluge-console deluge-webui')\n",
+ " runSh(\n",
+ " f\"wget -qq https://raw.githubusercontent.com/shirooo39/MiXLab/master/resources/configurations/deluge/core.conf \\\n",
+ " -O {HOME}/.config/deluge/core.conf\"\n",
+ " )\n",
+ " runSh(\n",
+ " f\"wget -qq https://raw.githubusercontent.com/shirooo39/MiXLab/master/resources/configurations/deluge/web.conf \\\n",
+ " -O {HOME}/.config/deluge/web.conf\"\n",
+ " )\n",
+ " runSh('deluged &> /dev/null &', shell=True)\n",
+ " runSh('deluge-web --fork', shell=True)\n",
+ " runSh(\"\"\"sed -i 's/if s.hexdigest() == config\\[\"pwd_sha1\"\\]:/if True:/' /usr/lib/python2.7/dist-packages/deluge/ui/web/auth.py\"\"\")\n",
+ " runSh(\"sed -i 's/onShow:function(){this.passwordField.focus(.*)}/onShow:function(){this.onLogin();}/' /usr/lib/python2.7/dist-packages/deluge/ui/web/js/deluge-all.js\")\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['deluge', 8112, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/deluge.yml\", 4042]).start('deluge')\n",
+ "displayUrl(Server, pNamU='Deluge : ')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "OJBVlUw-kKyt"
+ },
+ "source": [
+ "### libtorrent "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "NZgOIKJ3kOL9"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Install libtorrent \n",
+ "# ================================================================ #\n",
+ "\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "!apt install python3-libtorrent\n",
+ "\n",
+ "import libtorrent as lt\n",
+ "\n",
+ "ses = lt.session()\n",
+ "ses.listen_on(6881, 6891)\n",
+ "downloads = []\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "wOroL1PJns93"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Add Torrent from File \n",
+ "# @markdown
How to change the download location: 1. Double click the cell to show its code2. Find this line: \"save_path\": \"/content/downloads\",3. Change /content/downloads to your path \n",
+ "# @markdown > You can run this cell as many time as you want.\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "from IPython.display import clear_output\n",
+ "from google.colab import files\n",
+ "\n",
+ "if os.path.exists(\"/content/downloads\"):\n",
+ " pass\n",
+ "else:\n",
+ " os.mkdir(\"/content/downloads\")\n",
+ "\n",
+ "source = files.upload()\n",
+ "params = {\n",
+ " \"save_path\": \"/content/downloads\",\n",
+ " \"ti\": lt.torrent_info(list(source.keys())[0]),\n",
+ "}\n",
+ "downloads.append(ses.add_torrent(params))\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "nOQBAsoenwLb"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Add Torrent from Magnet Link \n",
+ "# @markdown
How to change the download location: 1. Double click the cell to show its code2. Find this line: params = {\"save_path\": \"/content/downloads\"}3. Change /content/downloads to your path \n",
+ "# @markdown > You can run this cell as many time as you want.\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "if os.path.exists(\"/content/downloads\"):\n",
+ " pass\n",
+ "else:\n",
+ " os.mkdir(\"/content/downloads\")\n",
+ "\n",
+ "params = {\"save_path\": \"/content/downloads\"}\n",
+ "\n",
+ "while True:\n",
+ " magnet_link = input(\"Paste the magnet link here or type exit to stop:\\n\")\n",
+ " if magnet_link.lower() == \"exit\":\n",
+ " break\n",
+ " downloads.append(\n",
+ " lt.add_magnet_uri(ses, magnet_link, params)\n",
+ " )\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "vY4-WX3FmMBB"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] libtorrent \n",
+ "# ================================================================ #\n",
+ "\n",
+ "import time\n",
+ "from IPython.display import display\n",
+ "import ipywidgets as widgets\n",
+ "\n",
+ "state_str = [\n",
+ " \"queued\",\n",
+ " \"checking\",\n",
+ " \"downloading metadata\",\n",
+ " \"downloading\",\n",
+ " \"finished\",\n",
+ " \"seeding\",\n",
+ " \"allocating\",\n",
+ " \"checking fastresume\",\n",
+ "]\n",
+ "\n",
+ "layout = widgets.Layout(width=\"auto\")\n",
+ "style = {\"description_width\": \"initial\"}\n",
+ "download_bars = [\n",
+ " widgets.FloatSlider(\n",
+ " step=0.01, disabled=True, layout=layout, style=style\n",
+ " )\n",
+ " for _ in downloads\n",
+ "]\n",
+ "display(*download_bars)\n",
+ "\n",
+ "while downloads:\n",
+ " next_shift = 0\n",
+ " for index, download in enumerate(downloads[:]):\n",
+ " bar = download_bars[index + next_shift]\n",
+ " if not download.is_seed():\n",
+ " s = download.status()\n",
+ "\n",
+ " bar.description = \" \".join(\n",
+ " [\n",
+ " download.name(),\n",
+ " str(s.download_rate / 1000),\n",
+ " \"kB/s\",\n",
+ " state_str[s.state],\n",
+ " ]\n",
+ " )\n",
+ " bar.value = s.progress * 100\n",
+ " else:\n",
+ " next_shift -= 1\n",
+ " ses.remove_torrent(download)\n",
+ " downloads.remove(download)\n",
+ " bar.close() # Seems to be not working in Colab (see https://github.com/googlecolab/colabtools/issues/726#issue-486731758)\n",
+ " download_bars.remove(bar)\n",
+ " print(download.name(), \"complete\")\n",
+ " time.sleep(1)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "yqY0BtjuGS78"
+ },
+ "source": [
+ "### qBittorrent "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "Yk8cbx3EdKaK"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] qBittorrent \n",
+ "# @markdown MiXLab is now using VueTorrent as the default qBittorrent WebUI.
\n",
+ "#QBITTORRENT_VARIANT = \"official\" #@param [\"official\", \"unofficial\"]\n",
+ "## @markdown ---\n",
+ "## @markdown qBittorrent Default Credential
\n",
+ "## @markdown > Username: adminPassword: adminadmin\n",
+ "## @markdown ---\n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, psutil, time, urllib.request\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "\n",
+ "!wget -P /content/qBittorrent/tmp https://raw.githubusercontent.com/shirooo39/MiXLab/master/resources/packages/qbittorrent/vuetorrent.zip\n",
+ "!unzip /content/qBittorrent/tmp/vuetorrent.zip -d /content/qBittorrent/tmp\n",
+ "!mv /content/qBittorrent/tmp/vuetorrent/ /content/qBittorrent/WebUI\n",
+ "clear_output()\n",
+ "\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " checkAvailable,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " runSh,\n",
+ " displayUrl,\n",
+ " findProcess\n",
+ ")\n",
+ "\n",
+ "#Note: need to locate where the WebUI is extracted into and then remove it\n",
+ "# in order to use the proper WebUI for the Official or Unofficial version of qBittorrent\n",
+ "#runSh(\"rm -f /etc/apt/sources.list.d/qbittorrent-team-ubuntu-qbittorrent-stable-bionic.list\")\n",
+ "#runSh(\"rm -f /etc/apt/sources.list.d/qbittorrent-team-ubuntu-qbittorrent-stable-bionic.list.save\")\n",
+ "#runSh(\"rm -f /etc/apt/sources.list.d/poplite-ubuntu-qbittorrent-enhanced-bionic.list\")\n",
+ "#runSh(\"rm -f /etc/apt/sources.list.d/poplite-ubuntu-qbittorrent-enhanced-bionic.list.save\")\n",
+ "#runSh(\"rm -f /usr/bin/qbittorrent\")\n",
+ "#runSh(\"rm -f /usr/bin/qbittorrent-nox\")\n",
+ "#runSh(\"sudo apt-get purge --auto-remove qbittorrent-nox \")\n",
+ "#clear_output()\n",
+ "\n",
+ "def addUtils():\n",
+ " if not checkAvailable(\"/usr/local/sessionSettings\"):\n",
+ " runSh(\"mkdir -p -m 777 /usr/local/sessionSettings\")\n",
+ " if not checkAvailable(\"/content/upload.txt\"):\n",
+ " runSh(\"touch /content/upload.txt\")\n",
+ " if not checkAvailable(\"checkAptUpdate.txt\", userPath=True):\n",
+ " runSh(\"apt update -qq -y\")\n",
+ " runSh(\"apt-get install -y iputils-ping\")\n",
+ "\n",
+ "def configTimezone(auto=True):\n",
+ " if checkAvailable(\"timezone.txt\", userPath=True):\n",
+ " return\n",
+ " if not auto:\n",
+ " runSh(\"sudo dpkg-reconfigure tzdata\")\n",
+ " else:\n",
+ " runSh(\"sudo ln -fs /usr/share/zoneinfo/Asia/Ho_Chi_Minh /etc/localtime\")\n",
+ " runSh(\"sudo dpkg-reconfigure -f noninteractive tzdata\")\n",
+ "\n",
+ "def uploadQBittorrentConfig():\n",
+ " if checkAvailable(\"updatedQBSettings.txt\", userPath=True):\n",
+ " return\n",
+ " runSh(\n",
+ " \"mkdir -p -m 666 /content/qBittorrent /root/.qBittorrent_temp /root/.config/qBittorrent\"\n",
+ " )\n",
+ " runSh(\n",
+ " \"wget -qq https://raw.githubusercontent.com/shirooo39/MiXLab/master/resources/configurations/qbittorrent/qBittorrent.conf \\\n",
+ " -O /root/.config/qBittorrent/qBittorrent.conf\"\n",
+ " )\n",
+ "\n",
+ "def prepareSession():\n",
+ " if checkAvailable(\"ready.txt\", userPath=True):\n",
+ " return\n",
+ " else:\n",
+ " addUtils()\n",
+ " configTimezone()\n",
+ " uploadQBittorrentConfig()\n",
+ "\n",
+ "def installQBittorrent():\n",
+ " if checkAvailable(\"/usr/bin/qbittorrent-nox\"):\n",
+ " return\n",
+ " else:\n",
+ "# if QBITTORRENT_VARIANT == \"official\":\n",
+ " try:\n",
+ "# if checkAvailable(\"/etc/apt/sources.list.d/poplite-ubuntu-qbittorrent-enhanced-bionic.list\"):\n",
+ "# runSh(\"rm /etc/apt/sources.list.d/poplite-ubuntu-qbittorrent-enhanced-bionic.list\")\n",
+ "# elif checkAvailable(\"/etc/apt/sources.list.d/poplite-ubuntu-qbittorrent-enhanced-bionic.list.save\"):\n",
+ "# runSh(\"rm /etc/apt/sources.list.d/poplite-ubuntu-qbittorrent-enhanced-bionic.list.save\")\n",
+ "# else:\n",
+ " runSh(\"sudo add-apt-repository ppa:qbittorrent-team/qbittorrent-stable\")\n",
+ " runSh(\"sudo apt-get update\")\n",
+ " runSh(\"sudo apt install qbittorrent-nox\")\n",
+ " except:\n",
+ " raise Exception('Failed to install qBittorrent!')\n",
+ "# elif QBITTORRENT_VARIANT == \"unofficial\":\n",
+ "# try:\n",
+ "# if checkAvailable(\"/etc/apt/sources.list.d/qbittorrent-team-ubuntu-qbittorrent-stable-bionic.list\"):\n",
+ "# runSh(\"rm /etc/apt/sources.list.d/qbittorrent-team-ubuntu-qbittorrent-stable-bionic.list\")\n",
+ "# elif checkAvailable(\"/etc/apt/sources.list.d/qbittorrent-team-ubuntu-qbittorrent-stable-bionic.list.save\"):\n",
+ "# runSh(\"rm /etc/apt/sources.list.d/qbittorrent-team-ubuntu-qbittorrent-stable-bionic.list.save\")\n",
+ "# else:\n",
+ "# runSh(\"sudo add-apt-repository ppa:poplite/qbittorrent-enhanced\")\n",
+ "# runSh(\"sudo apt-get update\")\n",
+ "# runSh(\"sudo apt-get install qbittorrent-enhanced qbittorrent-enhanced-nox\")\n",
+ "# except:\n",
+ "# raise Exception('Failed to install qBittorrent!')\n",
+ "\n",
+ "def startQBService():\n",
+ " prepareSession()\n",
+ " installQBittorrent()\n",
+ " if not findProcess(\"qbittorrent-nox\", \"-d --webui-port\"):\n",
+ " runSh(f\"qbittorrent-nox -d --webui-port={QB_Port}\")\n",
+ " time.sleep(1)\n",
+ "\n",
+ "QB_Port = 10001\n",
+ "loadingAn()\n",
+ "startQBService()\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['qbittorrent', QB_Port, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/qbittorrent.yml\", 4088]).start('qbittorrent', displayB=False)\n",
+ "displayUrl(server, pNamU='qBittorrent : ')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "nFrxKe_52fSj"
+ },
+ "source": [
+ "### rTorrent "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "cN8mVNe52cYu"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] rTorrent \n",
+ "# @markdown > rTorrent Default CredentialUsername: adminPassword: admin\n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, pathlib, zipfile, re, urllib.request\n",
+ "from shutil import copyfile\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " findProcess,\n",
+ " displayUrl,\n",
+ " textAn\n",
+ ")\n",
+ "\n",
+ "clear_output()\n",
+ "loadingAn()\n",
+ "\n",
+ "os.makedirs('tools/', exist_ok=True)\n",
+ "os.makedirs(\"/content/downloads\", mode=0o775, exist_ok=True)\n",
+ "os.makedirs(\"/content/tools/rtorrent/session\", mode=0o775, exist_ok=True)\n",
+ "\n",
+ "configData = \"\"\"\n",
+ "# Where rTorrent saves the downloaded files\n",
+ "directory = /content/downloads\n",
+ "\n",
+ "# Where rTorrent saves the session\n",
+ "session = /content/tools/rtorrent/session\n",
+ "\n",
+ "# Which ports rTorrent can use (Make sure to open them in your router)\n",
+ "port_range = 50000-50000\n",
+ "port_random = no\n",
+ "\n",
+ "# Check the hash after the end of the download\n",
+ "check_hash = yes\n",
+ "\n",
+ "# Enable DHT (for torrents without trackers)\n",
+ "dht = auto\n",
+ "dht_port = 6881\n",
+ "peer_exchange = yes\n",
+ "\n",
+ "# Authorize UDP trackers\n",
+ "use_udp_trackers = yes\n",
+ "\n",
+ "# Enable encryption when possible\n",
+ "encryption = allow_incoming,try_outgoing,enable_retry\n",
+ "\n",
+ "# SCGI port, used to communicate with Flood\n",
+ "scgi_port = 127.0.0.1:5000\n",
+ "\"\"\"\n",
+ "with open(\"/root/.rtorrent.rc\", 'w') as rC:\n",
+ " rC.write(configData)\n",
+ "\n",
+ "if not os.path.exists(\"/content/tools/flood/config.js\"):\n",
+ " runSh(\"apt install rtorrent screen mediainfo -y\")\n",
+ " runSh(\"git clone --depth 1 https://github.com/jfurrow/flood.git tools/flood\", shell=True)\n",
+ " copyfile(\"tools/flood/config.template.js\", \"tools/flood/config.js\")\n",
+ " runSh(\"npm install\", shell=True, cd=\"tools/flood/\")\n",
+ " runSh(\"npm install pm2 -g\", shell=True, cd=\"tools/flood/\")\n",
+ " runSh(\"npm run build\", shell=True, cd=\"tools/flood/\")\n",
+ "\n",
+ " userDB = r\"\"\"{\"username\":\"admin\",\"password\":\"$argon2i$v=19$m=4096,t=3,p=1$3hJdjMSgwdUnJ86uYBhOnA$dud5j5/IokJ3hyb+v5aqmDK0jwP9X5W2pz6Qqek++Tk\",\"host\":\"127.0.0.1\",\"port\":\"5000\",\"isAdmin\":true,\"_id\":\"jLJcPySMAEgp35uB\"}\n",
+ "{\"$$indexCreated\":{\"fieldName\":\"username\",\"unique\":true,\"sparse\":false}}\n",
+ "\"\"\"\n",
+ " userSettingsDB = r\"\"\"{\"id\":\"startTorrentsOnLoad\",\"data\":true,\"_id\":\"5leeeHwIN9rKLgG9\"}\n",
+ "{\"id\":\"torrentListColumnWidths\",\"data\":{\"sizeBytes\":61,\"ratio\":56,\"peers\":62},\"_id\":\"PnB52rZSPg5fLEN9\"}\n",
+ "{\"id\":\"torrentDestination\",\"data\":\"/content/downloads\",\"_id\":\"YcGroeyigKYWM8Ol\"}\n",
+ "{\"id\":\"mountPoints\",\"data\":[\"/\"],\"_id\":\"gJlGwWqOsyPfkLyJ\"}\n",
+ "{\"id\":\"torrentListViewSize\",\"data\":\"expanded\",\"_id\":\"q0CmirE9c0KnDGV3\"}\n",
+ "\"\"\"\n",
+ "\n",
+ " os.makedirs(\"tools/flood/server/db/jLJcPySMAEgp35uB/settings\", exist_ok=True)\n",
+ " with open(\"tools/flood/server/db/users.db\", 'w') as wDB:\n",
+ " wDB.write(userDB)\n",
+ " with open(\"tools/flood/server/db/jLJcPySMAEgp35uB/settings/settings.db\", 'w') as wDB:\n",
+ " wDB.write(userSettingsDB)\n",
+ "\n",
+ "if not findProcess(\"rtorrent\", \"\"):\n",
+ " runSh(\"screen -d -m -fa -S rtorrent rtorrent\", shell=True)\n",
+ "if not findProcess(\"node\", \"start.js\"): \n",
+ " runSh(\"pm2 start server/bin/start.js\", shell=True, cd=\"tools/flood/\")\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['rTorrent', 3000, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/rTorrent.yml\", 1463]).start('rTorrent', btc='b', displayB=True)\n",
+ "displayUrl(Server, pNamU='rTorrent : ')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Ssn-ZMNcv5UQ"
+ },
+ "source": [
+ "### SimpleTorrent "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "zb3hWwWE1Us8"
+ },
+ "source": [
+ "NOT WORKING! USE OTHER TORRENT DOWNLOADER! \n",
+ "(I'm... probably not going to fix this...) "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "lrCc585SD2f7"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] SimpleTorrent \n",
+ "Install_old_version = False\n",
+ "Auto_UP_Gdrive = False\n",
+ "AUTO_MOVE_PATH = \"/content/drive/MyDrive\"\n",
+ "force_change_version = \"\"\n",
+ "rclone_DestinationPath = \"\"\n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, time, pathlib, urllib.request, requests, tarfile\n",
+ "from subprocess import Popen\n",
+ "from IPython.display import clear_output\n",
+ " \n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "pathDoneCMD = f'{HOME}/doneCMD.sh'\n",
+ " \n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " findProcess,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " displayUrl\n",
+ ")\n",
+ "\n",
+ "# Defining environments for SImpleTorrent\n",
+ "os.makedirs('downloads', exist_ok=True)\n",
+ "os.makedirs('torrents', exist_ok=True)\n",
+ "os.makedirs('tools/simple-torrent', exist_ok=True)\n",
+ " \n",
+ "def generateCmd(src, dst):\n",
+ " FAST_LIST = True\n",
+ " PATH_RClone_Config = \"/usr/local/sessionSettings\"\n",
+ " cmd = f'rclone move \"{src}\" \"{dst}\" ' \\\n",
+ " f'--config {PATH_RClone_Config}/rclone.conf ' \\\n",
+ " f'{\"--fast-list\" if FAST_LIST else \"\"} --user-agent \"Mozilla\" ' \\\n",
+ " '--transfers 20 --checkers 20 --drive-server-side-across-configs ' \\\n",
+ " '-c --buffer-size 256M --drive-chunk-size 256M ' \\\n",
+ " '--drive-upload-cutoff 256M --drive-acknowledge-abuse ' \\\n",
+ " '--drive-keep-revision-forever --tpslimit 95 --tpslimit-burst 40 ' \\\n",
+ " '--stats-one-line --stats=5s -v'\n",
+ " return cmd\n",
+ "\n",
+ "\n",
+ "if Auto_UP_Gdrive:\n",
+ " data = \"\"\"#!/bin/bash\n",
+ " dir=${CLD_DIR}\n",
+ " path=${CLD_PATH}\n",
+ " abp=\"${dir}/${path}\"\n",
+ " type=${CLD_TYPE}\n",
+ " if [[ ${type} == \"torrent\" ]]; then\n",
+ " \"\"\"\n",
+ "\n",
+ " nUpload = \"\"\" \n",
+ " #Upload to Gdrive\n",
+ " #mkdir -p \"%s/$(dirname \"${path}\")\"\n",
+ " mv \"${abp}\" \"%s/${path}\"\n",
+ " \"\"\" % (AUTO_MOVE_PATH, AUTO_MOVE_PATH)\n",
+ "\n",
+ " rcloneUpload = \"\"\"\n",
+ " #You can also use rcone move file to remote\n",
+ " %s\n",
+ " \"\"\" % generateCmd(r\"${abp}\", rclone_DestinationPath)\n",
+ "\n",
+ " end = \"\"\"\n",
+ " fi\n",
+ " \"\"\"\n",
+ " \n",
+ " data = data + (rcloneUpload if rclone_DestinationPath else nUpload) + end\n",
+ " with open(pathDoneCMD, 'w') as w:\n",
+ " w.write(data)\n",
+ " os.chmod(pathDoneCMD, 0o755)\n",
+ "else:\n",
+ " try:\n",
+ " os.unlink(pathDoneCMD)\n",
+ " except FileNotFoundError:\n",
+ " pass\n",
+ " \n",
+ "configPath = pathlib.Path('tools/simple-torrent/cloud-torrent.json')\n",
+ "configsdata = r\"\"\"\n",
+ "{{\n",
+ " \"AutoStart\": true,\n",
+ " \"EngineDebug\": false,\n",
+ " \"MuteEngineLog\": true,\n",
+ " \"ObfsPreferred\": true,\n",
+ " \"ObfsRequirePreferred\": false,\n",
+ " \"DisableTrackers\": false,\n",
+ " \"DisableIPv6\": false,\n",
+ " \"DownloadDirectory\": \"/content/downloads/\",\n",
+ " \"WatchDirectory\": \"torrents/\",\n",
+ " \"EnableUpload\": true,\n",
+ " \"EnableSeeding\": false,\n",
+ " \"IncomingPort\": 50007,\n",
+ " \"DoneCmd\": \"{}/doneCMD.sh\",\n",
+ " \"SeedRatio\": 1.5,\n",
+ " \"UploadRate\": \"High\",\n",
+ " \"DownloadRate\": \"Unlimited\",\n",
+ " \"TrackerListURL\": \"https://trackerslist.com/best.txt\",\n",
+ " \"AlwaysAddTrackers\": true,\n",
+ " \"ProxyURL\": \"\"\n",
+ "}}\n",
+ "\"\"\".format(HOME)\n",
+ "with open(configPath, \"w+\") as configFile:\n",
+ " configFile.write(configsdata)\n",
+ " \n",
+ "loadingAn()\n",
+ "\n",
+ "if not os.path.isfile(\"tools/simple-torrent/cloud-torrent\"):\n",
+ " filename = 'tools/simple-torrent/cloud-torrent_linux_amd64.gz'\n",
+ " if Install_old_version:\n",
+ " latestTag = '1.2.3'\n",
+ " else:\n",
+ " latestTag = requests.get(\"https://api.github.com/repos/boypt/simple-torrent/releases/latest\").json()['tag_name']\n",
+ " url = \"https://github.com/boypt/simple-torrent/releases/download/\" \\\n",
+ " f\"{latestTag}/{filename[21:]}\"\n",
+ " \n",
+ " urllib.request.urlretrieve(url, filename)\n",
+ " import gzip, shutil\n",
+ " with gzip.open(filename, 'rb') as f_in:\n",
+ " with open('tools/simple-torrent/cloud-torrent', 'wb') as f_out: shutil.copyfileobj(f_in, f_out)\n",
+ " os.chmod('tools/simple-torrent/cloud-torrent', 0o775)\n",
+ " os.remove(filename)\n",
+ " \n",
+ "# Launching SimpleTorrent in background\n",
+ "if not findProcess(\"cloud-torrent\", \"SimpleTorrent\"):\n",
+ " PORT = 4444\n",
+ " try:\n",
+ " urllib.request.urlopen(f\"http://localhost:{PORT}\")\n",
+ " except:\n",
+ " cmdC = f'./cloud-torrent --port {PORT} ' \\\n",
+ " '-t Simple-Torrent ' \\\n",
+ " '-c cloud-torrent.json ' \\\n",
+ " '--host 0.0.0.0'\n",
+ " for run in range(10): \n",
+ " Popen(cmdC.split(), cwd='tools/simple-torrent')\n",
+ " time.sleep(3)\n",
+ " try:\n",
+ " urllib.request.urlopen(f\"http://localhost:{PORT}\")\n",
+ " break\n",
+ " except:\n",
+ " print(\"Unable to start SimpleTorrent! Retrying...\")\n",
+ " \n",
+ "clear_output()\n",
+ "\n",
+ "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['SimpleTorrent', 4444, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/SimpleTorrent.yml\", 4040]).start('SimpleTorrent')\n",
+ "displayUrl(Server, pNamU='SimpleTorrent : ')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "iLcAVtWT4NTC"
+ },
+ "source": [
+ "### Transmission "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "CePVeFVG4QFz"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] Transmission \n",
+ "# @markdown > Transmission Default CredentialUsername: adminPassword: admin\n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, psutil, time, urllib.request, pathlib\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " checkAvailable,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " runSh,\n",
+ " displayUrl,\n",
+ " findProcess\n",
+ ")\n",
+ "\n",
+ "loadingAn()\n",
+ "\n",
+ "if not os.path.exists('/usr/bin/transmission-daemon'):\n",
+ " os.makedirs('downloads', exist_ok=True)\n",
+ " os.makedirs('tools/transmission/', exist_ok=True)\n",
+ " runSh('apt install transmission-daemon')\n",
+ " nTWC = \"https://raw.githubusercontent.com/ronggang/\" \\\n",
+ " \"transmission-web-control/master/release/install-tr-control.sh\"\n",
+ " urllib.request.urlretrieve(nTWC, 'tools/transmission/trInstall.sh')\n",
+ " runSh('bash tools/transmission/trInstall.sh auto')\n",
+ " \n",
+ " try:\n",
+ " pathlib.Path('tools/transmission/trInstall.sh').unlink()\n",
+ " except FileNotFoundError:\n",
+ " pass\n",
+ "\n",
+ "if not findProcess('transmission-daemon', '--no-watch-dir'):\n",
+ " !transmission-daemon --no-watch-dir --config-dir tools/transmission \\\n",
+ " --port 9091 --download-dir /content/downloads/ --dht --utp --no-portmap \\\n",
+ " --peerlimit-global 9999 --peerlimit-torrent 9999 --no-global-seedratio \\\n",
+ " -u admin -v admin --auth\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['vuze', 9595, 'http'], ['transmission', 9091, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/transmission.yml\", 4058]).start('transmission', displayB=False)\n",
+ "displayUrl(server, pNamU='Transmission : ', btc='r')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "bQ73mxqlpNjb"
+ },
+ "source": [
+ "### µTorrent "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "unIq2GEJpLzG"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] µTorrent \n",
+ "# @markdown > µTorrent Default CredentialUsername: adminPassword: admin\n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "\n",
+ "# OUTPUT_DIR = \"\" # @param {type:\"string\"}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, pathlib, zipfile, re\n",
+ "import urllib.request\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "r = get_ipython()\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " findProcess,\n",
+ " displayUrl\n",
+ ")\n",
+ "\n",
+ "clear_output()\n",
+ "loadingAn()\n",
+ "\n",
+ "# Installing µTorrent\n",
+ "if not os.path.exists(\"/usr/bin/utserver\"):\n",
+ " os.makedirs(\"downloads\", exist_ok=True)\n",
+ " r.system_raw(\"apt install libssl1.0.0 libssl-dev\")\n",
+ " r.system_raw(r\"wget http://download-new.utorrent.com/endpoint/utserver/os/linux-x64-ubuntu-13-04/track/beta/ -O utserver.tar.gz\")\n",
+ " r.system_raw(r\"tar -zxvf utserver.tar.gz -C /opt/\")\n",
+ " r.system_raw(\"rm -f utserver.tar.gz\")\n",
+ " r.system_raw(\"mv /opt/utorrent-server-* /opt/utorrent\")\n",
+ " os.chmod(\"/opt/utorrent\", 0o777)\n",
+ " r.system_raw(\"ln -s /opt/utorrent/utserver /usr/bin/utserver\")\n",
+ " urllib.request.urlretrieve(\"https://raw.githubusercontent.com/shirooo39/MiXLab/master/resources/configurations/utorrent/utserver.conf\", \"/opt/utorrent/utserver.conf\")\n",
+ "\n",
+ "if not findProcess(\"utserver\", \"-settingspath\"):\n",
+ " cmd = \"utserver -settingspath /opt/utorrent/\" \\\n",
+ " \" -configfile /opt/utorrent/utserver.conf\" \\\n",
+ " \" -daemon\"\n",
+ " runSh(cmd, shell=True)\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['utorrent', 5454, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/uTorrent.yml\", 4042]).start('utorrent', displayB=False)\n",
+ "displayUrl(Server, pNamU='µTorrent : ', ExUrl=fr\"http://admin:admin@{Server['url'][7:]}/gui\", btc=\"g\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "UU-y9pOU4sRB"
+ },
+ "source": [
+ "### vuze "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "Uxp5DDkJ4ue1"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] vuze \n",
+ "# @markdown > viuze Default CredentialUsername: rootPassword: yesme\n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, psutil, time, urllib.request, pathlib\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " checkAvailable,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " runSh,\n",
+ " displayUrl,\n",
+ " findProcess\n",
+ ")\n",
+ "\n",
+ "def latestTag():\n",
+ " import re\n",
+ " from urllib.request import urlopen\n",
+ " htmlF = urlopen(\"http://dev.vuze.com/\").read().decode('UTF-8')\n",
+ " return re.findall(r'\\sVuze_(\\d{4})\\sRelease\\s', htmlF)[0]\n",
+ "\n",
+ "\n",
+ "loadingAn()\n",
+ "if not os.path.exists('tools/vuze/Vuze.jar'):\n",
+ " os.makedirs('downloads', exist_ok=True)\n",
+ " os.makedirs('tools/vuze/', exist_ok=True)\n",
+ " runSh('wget -r --level=1 -np -nH -R index.html -nd -k http://svn.vuze.com/public/client/trunk/uis/lib/', cd='tools/vuze/')\n",
+ " rv = latestTag()\n",
+ " dlink = f\"https://netcologne.dl.sourceforge.net/project/azureus/vuze/Vuze_{rv}/Vuze_{rv}.jar\"\n",
+ " urllib.request.urlretrieve(dlink, 'tools/vuze/Vuze.jar') \n",
+ "\n",
+ " # All command found in set command ex: java -jar Vuze.jar --ui=console -c set\n",
+ " runScript = \"\"\"plugin install xmwebui\n",
+ "pair enable\n",
+ "set \"Plugin.xmwebui.Port\" 9595 int\n",
+ "set \"Plugin.xmwebui.Password Enable\" true boolean\n",
+ "set \"Plugin.xmwebui.Pairing Enable\" false boolean\n",
+ "set \"Plugin.xmwebui.User\" \"root\" string\n",
+ "set \"Plugin.xmwebui.Password\" \"yesme\" password\n",
+ "set \"Completed Files Directory\" \"/content/downloads/\" string\n",
+ "set \"General_sDefaultSave_Directory\" \"/content/downloads/\" string\n",
+ "set \"General_sDefaultTorrent_Directory\" \"/content/downloads/\" string\n",
+ "\"\"\"\n",
+ " with open('tools/vuze/Rscript.sh', 'w') as w: w.write(runScript)\n",
+ "\n",
+ "if not findProcess('java', '-jar Vuze.jar'):\n",
+ " runSh('java -jar Vuze.jar --ui=console -e Rscript.sh &', cd='tools/vuze/', shell=True)\n",
+ " time.sleep(7)\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['vuze', 9595, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/vuze.yml\", 4058]).start('vuze', displayB=False)\n",
+ "displayUrl(server, pNamU='vuze : ', ExUrl=fr\"http://root:yesme@{server['url'][7:]}\", btc='b')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "EpwNYbcfRvcl"
+ },
+ "source": [
+ "# ✦ *Utility* ✦ "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "5CWw65NugcjI"
+ },
+ "source": [
+ "## ✧ Checksum Tool ✧ "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "3AbFcLJr5PHk"
+ },
+ "source": [
+ "### MD5 + SHA-1 + SHA-256 "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "OQTQwwFm5PH1"
+ },
+ "source": [
+ "TO DO (later...):\n",
+ "\n",
+ "1. Add some kind of checking to make sure file_name does exist.\n",
+ "2. Add some kind of checking to make sure file_name is not a directory.\n",
+ "3. Add some kind of checking to make sure file_path does exist.\n",
+ "4. Add some kind of checking to make sure file_path is not a file.\n",
+ "5. Add whether the hash file does exist or not. If not, skip."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "ovjsyICM5PH5"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] Generate
\n",
+ "file_path = \"/content/\" #@param {type:\"string\"}\n",
+ "file_name = \"loremipsum.txt\" #@param {type:\"string\"}\n",
+ "\n",
+ "generate_md5 = True #@param {type:\"boolean\"}\n",
+ "generate_sha1 = True #@param {type:\"boolean\"}\n",
+ "generate_sha256 = True #@param {type:\"boolean\"}\n",
+ "\n",
+ "# @markdown > Do NOT forget to add the end slash on the file_path field or it would not \"cd\" properly.\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import IPython\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "%cd \"$file_path\"\n",
+ "clear_output()\n",
+ "\n",
+ "if generate_md5 is True:\n",
+ " print(\"Generating MD5 hash...\")\n",
+ " !md5sum \"$file_name\" > \"$file_name\".md5\n",
+ "else:\n",
+ " pass\n",
+ "\n",
+ "if generate_sha1 is True:\n",
+ " print(\"Generating SHA-1 hash...\")\n",
+ " !sha1sum \"$file_name\" > \"$file_name\".sha1\n",
+ "else:\n",
+ " pass\n",
+ "\n",
+ "if generate_sha256 is True:\n",
+ " print(\"Generating SHA-256 hash...\")\n",
+ " !sha256sum \"$file_name\" > \"$file_name\".sha256\n",
+ "else:\n",
+ " pass\n",
+ "\n",
+ "print(\"\\nAll hashes has been generated.\\n\\n\")\n",
+ "\n",
+ "%cd \"/content\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "O8m9DgFb5PH8"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] Compare
\n",
+ "file_path = \"/content/\" #@param {type:\"string\"}\n",
+ "file_name = \"loremipsum.txt\" #@param {type:\"string\"}\n",
+ "\n",
+ "compare_md5 = True #@param {type:\"boolean\"}\n",
+ "compare_sha1 = True #@param {type:\"boolean\"}\n",
+ "compare_sha256 = True #@param {type:\"boolean\"}\n",
+ "\n",
+ "# @markdown > Do NOT forget to add the end slash on the file_path field or it would not \"cd\" properly.
\n",
+ "# @markdown > If the result shows \"OK\", that means the file matches 100%.
\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import IPython\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "%cd \"$file_path\"\n",
+ "clear_output()\n",
+ "\n",
+ "if compare_md5 is True:\n",
+ " print(\"Comparing MD5 hash...\")\n",
+ " !md5sum -c \"$file_name\".md5\n",
+ "else:\n",
+ " pass\n",
+ "\n",
+ "if compare_sha1 is True:\n",
+ " print(\"\\nComparing SHA-1 hash...\")\n",
+ " !sha1sum -c \"$file_name\".sha1\n",
+ "else:\n",
+ " pass\n",
+ "\n",
+ "if compare_sha256 is True:\n",
+ " print(\"\\nComparing SHA-256 hash...\")\n",
+ " !sha256sum -c \"$file_name\".sha256\n",
+ "else:\n",
+ " pass\n",
+ "\n",
+ "print(\"\\n\")\n",
+ "%cd \"/content\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "pIk3H6xUic8a"
+ },
+ "source": [
+ "## ✧ Files Uploader ✧ "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "LOmbPf7Tihne"
+ },
+ "source": [
+ "### anonfiles "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "BIMRKjTrinOM"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] Upload to anonfiles
\n",
+ "file_path = \"\" # @param {type: \"string\"}\n",
+ "\n",
+ "url = \"https://api.anonfiles.com/upload\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import requests\n",
+ "\n",
+ "x = requests.post(url, files = {'file': open(file_path,'rb')},)\n",
+ "\n",
+ "print(\"Download link: \" + x.json()[\"data\"][\"file\"][\"url\"][\"full\"])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "paeY4yX7jNd1"
+ },
+ "source": [
+ "### BayFiles "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "b5hRr0CmjSI2"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] Upload to BayFiles
\n",
+ "file_path = \"\" # @param {type: \"string\"}\n",
+ "\n",
+ "url = \"https://api.bayfiles.com/upload\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import requests\n",
+ "\n",
+ "x = requests.post(url, files = {'file': open(file_path,'rb')},)\n",
+ "\n",
+ "print(\"Download link: \" + x.json()[\"data\"][\"file\"][\"url\"][\"full\"])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "j-PgCLYrZFbm"
+ },
+ "source": [
+ "## ✧ File Manager ✧ "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "TgwoGxAitg0y"
+ },
+ "source": [
+ "### Cloud Commander "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "sWTkCBV0ZHtJ"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] Cloud Commander \n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, psutil, time, urllib.request\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " loadingAn,\n",
+ " displayUrl,\n",
+ " PortForward_wrapper,\n",
+ ")\n",
+ "\n",
+ "loadingAn()\n",
+ "\n",
+ "if os.path.isfile(\"/tools/node/bin/cloudcmd\") == False:\n",
+ " get_ipython().system_raw(\"npm cache clean -f && npm install -g n && n stable && npm i cloudcmd -g --force\")\n",
+ "\n",
+ "try:\n",
+ " urllib.request.urlopen('http://localhost:7007')\n",
+ "except urllib.error.URLError:\n",
+ " !nohup cloudcmd --online --no-auth --show-config --show-file-name \\\n",
+ " --editor 'deepword' --packer 'tar' --port 7007 \\\n",
+ " --no-confirm-copy --confirm-move --name 'File Manager' \\\n",
+ " --keys-panel --no-contact --console --sync-console-path \\\n",
+ " --no-terminal --no-vim --columns 'name-size-date' --no-log &\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['cloudcmd', 7007, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/CloudCommander.yml\", 7044]).start('cloudcmd')\n",
+ "displayUrl(server, pNamU='Cloud Commander : ')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "xmq_9AJCtvlV"
+ },
+ "source": [
+ "### File Browser "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "Cs_DPqJaabw3"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] File Browser \n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "\n",
+ "# OUTPUT_DIR = \"\" # @param {type:\"string\"}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, pathlib, zipfile, re, urllib.request\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " displayUrl,\n",
+ " findProcess\n",
+ ")\n",
+ "\n",
+ "clear_output()\n",
+ "loadingAn()\n",
+ "\n",
+ "os.makedirs('tools/filebrowser/', exist_ok=True)\n",
+ "\n",
+ "get_ipython().system_raw(r\"curl -fsSL https://filebrowser.xyz/get.sh | bash\")\n",
+ "if not findProcess(\"filebrowser\", \"--noauth\"):\n",
+ " runSh(\"filebrowser --noauth -r /content/ -p 4000 -d tools/filebrowser/filebrowser.db &\", shell=True)\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['filebrowser', 4000, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/FileBrowser.yml\", 4099]).start('filebrowser')\n",
+ "displayUrl(server, pNamU='File Browser : ')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "nUI7G8OSSXbM"
+ },
+ "source": [
+ "### Go HTTP File Server "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "evBFe60vSfxW"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] Go HTTP File Server \n",
+ "HOME_DIRECTORY = \"/content\" #@param {type:\"string\"}\n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, urllib.request, requests\n",
+ "from zipfile import ZipFile as ZZ\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " findProcess,\n",
+ " CWD,\n",
+ "\tdisplayUrl\n",
+ ")\n",
+ "\n",
+ "loadingAn()\n",
+ "\n",
+ "toolLocation = 'tools/ghfs'\n",
+ "binaryF = f\"{toolLocation}/ghfs\"\n",
+ "if not HOME_DIRECTORY:\n",
+ " HOME_DIRECTORY = CWD\n",
+ "\n",
+ "try:\n",
+ " if HOME_DIRECTORY != OldP:\n",
+ " os.system(\"pkill ghfs\")\n",
+ "except NameError:\n",
+ " pass\n",
+ " \n",
+ "OldP = HOME_DIRECTORY\n",
+ "os.makedirs(toolLocation, exist_ok=True)\n",
+ "\n",
+ "if not os.path.exists(binaryF):\n",
+ " ownerProjet = \"mjpclab/go-http-file-server\"\n",
+ " DZipBL = f\"{toolLocation}/Zipghfs.zip\"\n",
+ " latest_tag = requests.get(f\"https://api.github.com/repos/{ownerProjet}/releases/latest\").json()['tag_name']\n",
+ " dBinaryL = f\"https://github.com/{ownerProjet}/releases/download/{latest_tag}/ghfs-{latest_tag}-linux-amd64.zip\"\n",
+ " urllib.request.urlretrieve(dBinaryL, DZipBL)\n",
+ " with ZZ(DZipBL, 'r') as zip_ref:zip_ref.extractall(toolLocation)\n",
+ " os.remove(DZipBL)\n",
+ " os.chmod(binaryF, 0o777)\n",
+ "\n",
+ "if not findProcess(\"ghfs\", \"--listen-plain\"):\n",
+ " runSh(f'./ghfs --listen-plain 1717 -R \\\n",
+ " -a \":/:{HOME_DIRECTORY}\" \\\n",
+ " --global-upload \\\n",
+ " --global-mkdir \\\n",
+ " --global-delete \\\n",
+ " --global-archive \\\n",
+ " --global-archive \\\n",
+ " &', \n",
+ " shell=True,\n",
+ " cd=\"tools/ghfs\")\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['ghfs', 1717, 'http']], REGION.lower(), [f\"{HOME}/.ngrok2/GoHTTPFileServer.yml\", 41717]).start('ghfs')\n",
+ "displayUrl(server, pNamU='Go HTTP File Server : ')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "aStiEPlnDoeY"
+ },
+ "source": [
+ "### Create / Extract Archive "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "88JkX_J3EXWC"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Install the Tools
\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import IPython\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "os.system(\"sudo apt update\")\n",
+ "os.system(\"apt install p7zip-full p7zip-rar unrar rar\")\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "RMy0TxzHzCR9"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "#@markdown ← Create archive \n",
+ "source_path = \"\" #@param {type:\"string\"}\n",
+ "archive_type = \"zip\" #@param [\"zip\", \"7z\", \"rar\", \"tar\", \"tar.gz\"]\n",
+ "archive_name = \"\" #@param {type:\"string\"}\n",
+ "#@markdown > If left empty, the default name will be used (archive)\n",
+ "archive_password = \"\" #@param {type:\"string\"}\n",
+ "#@markdown > Leave this field empty if you do not want to protect the archive with password.\n",
+ "compression_level = \"no_compression\" #@param [\"no_compression\", \"fastest\", \"fast\", \"normal\", \"maximum\", \"ultra\"]\n",
+ "output_path = \"\" #@param {type:\"string\"}\n",
+ "#@markdown > If left empty, the default path will be used (/content)\n",
+ "\n",
+ "#@markdown ---\n",
+ "automatically_clear_cell_output = False # @param{type: \"boolean\"}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "from IPython.display import HTML, clear_output\n",
+ "import os, sys, re\n",
+ "\n",
+ "\n",
+ "if archive_name == \"\":\n",
+ " archive_name = \"archive\"\n",
+ "else:\n",
+ " pass\n",
+ "\n",
+ "if archive_password == \"\":\n",
+ " pass\n",
+ "else:\n",
+ " archive_password = \"-p\" + archive_password\n",
+ "\n",
+ "if compression_level == \"no_compression\":\n",
+ " compression_level = \"-mx=0\"\n",
+ "elif compression_level == \"fastest\":\n",
+ " compression_level = \"-mx=1\"\n",
+ "elif compression_level == \"fast\":\n",
+ " compression_level = \"-mx=3\"\n",
+ "elif compression_level == \"normal\":\n",
+ " compression_level = \"-mx=5\"\n",
+ "elif compression_level == \"maximum\":\n",
+ " compression_level = \"-mx=7\"\n",
+ "elif compression_level == \"ultra\":\n",
+ " compression_level = \"-mx=9\"\n",
+ "\n",
+ "if output_path == \"\":\n",
+ " output_path = \"/content\"\n",
+ "else:\n",
+ " pass\n",
+ "\n",
+ "\n",
+ "if archive_type == \"zip\":\n",
+ " if source_path == \"\":\n",
+ " display(HTML(\"❌ The source_path field is empty! \"))\n",
+ " else:\n",
+ " #output_file_path = re.search(\"^[\\/].+\\/\", source_path)\n",
+ " #output_file_path_raw = output_file_path.group(0)\n",
+ " #delsplit = re.search(\"\\/(?:.(?!\\/))+$\", source_path)\n",
+ " #folder_name = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "\n",
+ " #os.environ['inputDir'] = source_path\n",
+ " #os.environ['outputPath'] = output_file_path_raw\n",
+ " #os.environ['folderName'] = folder_name\n",
+ " #os.environ['archiveLevel'] = compression_level\n",
+ " #os.environ['archivePassword'] = archive_password\n",
+ "\n",
+ " #!7z a -tzip \"$archiveLevel\" \"$archivePassword\" \"$outputPath\"/\"$folderName\".zip \"$inputDirectory\"\n",
+ " !7z a -tzip \"$compression_level\" \"$archive_password\" \"$output_path\"/\"$archive_name\".zip \"$source_path\"\n",
+ "else:\n",
+ " display(HTML(\"❌ More archive format will be added in the future. \"))\n",
+ "\n",
+ "\n",
+ "if automatically_clear_cell_output is True:\n",
+ "\tclear_output()\n",
+ "else:\n",
+ "\tpass"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "Mbmf5lk0zF1q"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "#@markdown ← Extract archive \n",
+ "archive_path = \"\" #@param {type:\"string\"}\n",
+ "archive_type = \"zip\" #@param [\"zip\", \"7z\", \"rar\", \"tar\", \"gzip\", \"iso\"]\n",
+ "archive_password = \"\" #@param {type:\"string\"}\n",
+ "#@markdown > Leave the archive_password field empty if archive is not password protected.\n",
+ "output_path = \"\" #@param {type:\"string\"}\n",
+ "#@markdown > Leave the output_path field empty to use default extraction path (/content).\n",
+ "\n",
+ "#@markdown ---\n",
+ "automatically_clear_cell_output = False # @param{type: \"boolean\"}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, sys, re\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "\n",
+ "if archive_password == \"\":\n",
+ " pass\n",
+ "elif not archive_password == \"\":\n",
+ " archive_password = \"-p\" + archive_password\n",
+ "\n",
+ "if output_path == \"\":\n",
+ " output_path = \"-o/content\"\n",
+ "elif output_path == \"/content\":\n",
+ " output_path = \"-o/content\"\n",
+ "else:\n",
+ " output_path = \"-o\" + output_path\n",
+ "\n",
+ "\n",
+ "os.environ['inputFile'] = archive_path\n",
+ "os.environ['inputPassword'] = archive_password\n",
+ "os.environ['outputFile'] = output_path\n",
+ "\n",
+ "\n",
+ "if archive_path == \"\":\n",
+ " display(HTML(\"❌ The archive_path field is empty! \"))\n",
+ "else:\n",
+ " if archive_type == \"zip\":\n",
+ " !7z x \"$inputFile\" \"$inputPassword\" \"$outputFile\"\n",
+ " elif archive_type == \"iso\":\n",
+ " !7z x \"$inputFile\" \"$outputFile\"\n",
+ " else:\n",
+ " display(HTML(\"❌ More archive format will be added in the future. \"))\n",
+ "\n",
+ "\n",
+ "if automatically_clear_cell_output is True:\n",
+ "\tclear_output()\n",
+ "else:\n",
+ "\tpass"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "d7hdxEjc-ynr"
+ },
+ "source": [
+ "## ✧ Image Manipulation ✧ "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "qs8R07vxhuo2"
+ },
+ "source": [
+ "Some of these cells might require GPU runtime. "
+ ]
+ },
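+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Check] GPU availability \n",
+ "# @markdown A minimal sketch (not part of the original tools): it only reports whether this runtime has a GPU attached, so you know before running the GPU-only cells below.\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import shutil, subprocess\n",
+ "\n",
+ "# nvidia-smi is assumed to be present on Colab GPU runtimes.\n",
+ "if shutil.which(\"nvidia-smi\"):\n",
+ "    print(subprocess.run([\"nvidia-smi\"], capture_output=True, text=True).stdout)\n",
+ "else:\n",
+ "    print(\"No GPU detected. Switch the runtime type to GPU for the cells that require it.\")"
+ ]
+ },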
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Jbw2QIUB6JKR"
+ },
+ "source": [
+ "### Real-ESRGAN "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "JdnKplLq61kb"
+ },
+ "source": [
+ "GPU runtime is required! "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "5kUMYIALO6yI"
+ },
+ "source": [
+ "This is my own simple Google Colab implementation of xinntao 's amazing Real-ESRGAN project.\n",
+ "\n",
+ " \n",
+ "\n",
+ "Image credit: Real-ESRGAN "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "OW-WSLlS6S3m"
+ },
+ "outputs": [],
+ "source": [
+ "#============================== [FORM] ==============================#\n",
+ "#@markdown ##← [Install] Real-ESRGAN\n",
+ "#@markdown You MUST run this cell first! \n",
+ "#====================================================================#\n",
+ "\n",
+ "import subprocess, pathlib, shutil\n",
+ "\n",
+ "\n",
+ "main_path = '/content/Real-ESRGAN'\n",
+ "input_path = main_path + '/inputs'\n",
+ "cmd = [\n",
+ " 'apt get update',\n",
+ " 'git clone https://github.com/xinntao/Real-ESRGAN.git',\n",
+ " 'pip install basicsr',\n",
+ " 'pip install facexlib',\n",
+ " 'pip install gfpgan',\n",
+ " 'pip install -r requirements.txt',\n",
+ " 'python setup.py develop'\n",
+ " ]\n",
+ "mdl = [\n",
+ " 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth',\n",
+ " 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.3/RealESRGAN_x2plus_netD.pth',\n",
+ " 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/RealESRNet_x4plus.pth',\n",
+ " 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.3/RealESRGAN_x4plus_netD.pth',\n",
+ " 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth',\n",
+ " 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B_netD.pth'\n",
+ " ]\n",
+ "\n",
+ "\n",
+ "for x in cmd[0:2:]:\n",
+ " subprocess.run(x, shell=True)\n",
+ "for y in cmd[2:]:\n",
+ " subprocess.run(y, shell=True, cwd=main_path)\n",
+ "for z in mdl:\n",
+ " subprocess.run(['wget ' + z + ' -P experiments/pretrained_models'], shell=True, cwd=main_path)\n",
+ "\n",
+ "\n",
+ "remove_path = pathlib.Path(input_path)\n",
+ "shutil.rmtree(remove_path)\n",
+ "remove_path.mkdir()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "eFcZE1D374Gr"
+ },
+ "outputs": [],
+ "source": [
+ "#============================== [FORM] ==============================#\n",
+ "#@markdown ##← Get image\n",
+ "image_source = \"upload\" #@param [\"upload\", \"url\"]\n",
+ "#====================================================================#\n",
+ "\n",
+ "import os, sys, shutil\n",
+ "from IPython.display import clear_output\n",
+ "from google.colab import files\n",
+ "\n",
+ "\n",
+ "main_path = '/content/Real-ESRGAN'\n",
+ "input_path = main_path + '/inputs'\n",
+ "\n",
+ "\n",
+ "if image_source == 'upload':\n",
+ " uploaded = files.upload()\n",
+ "\n",
+ " for filename in uploaded.keys():\n",
+ " dst_path = os.path.join(input_path, filename)\n",
+ " shutil.move(filename, dst_path)\n",
+ "\n",
+ " print(f'Moved file \"{filename}\" to \"{dst_path}\"') \n",
+ "elif image_source == 'url':\n",
+ " print('Enter ONLY direct url! For example: https://internet.com/image.jpg')\n",
+ " print('Leave the field below blank to cancel.\\n')\n",
+ "\n",
+ " image_url = input('URL: ')\n",
+ "\n",
+ " if image_url == '':\n",
+ " clear_output()\n",
+ " sys.exit('String image_url is empty!')\n",
+ " else:\n",
+ " os.system('wget -q ' + image_url + ' -N -P ' + input_path)\n",
+ "\n",
+ " print(f'\\nImage saved to: \"{input_path}\"')"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "OCxq4YzeQ2It"
+ },
+ "outputs": [],
+ "source": [
+ "#============================== [FORM] ==============================#\n",
+ "#@markdown ##← [Start] Real-ESRGAN\n",
+ "style = \"real_life\" #@param [\"real_life\", \"anime\"]\n",
+ "upscale_ratio = 2 #@param {type:\"slider\", min: 1, max:10, step:1}\n",
+ "#@markdown ---\n",
+ "#@markdown ⚙️ Advanced Options ⚙️ \n",
+ "output_format = \"auto\" #@param [\"auto\", \"jpg\", \"png\"]\n",
+ "alpha_upsampler = \"realesrgan\" #@param [\"realesrgan\", \"bicubic\"]\n",
+ "split_chunk = 256 #@param {type:\"slider\", min:0, max:1024, step:128}\n",
+ "custom_upscale_ratio = \"1.5\" #@param {type:\"string\"}\n",
+ "enable_custom_upscale_ratio = False #@param {type:\"boolean\"}\n",
+ "optimize_face = False #@param {type:\"boolean\"}\n",
+ "half_precision_mode = False #@param {type:\"boolean\"}\n",
+ "#@markdown >This cell is not finished yet!\n",
+ "#====================================================================#\n",
+ "\n",
+ "#\n",
+ "# TO DO: if \"inputs\" is empty, upload some image first\n",
+ "# optimize_face is not for anime model.\n",
+ "# add \"performance mode\" by using the X2 model? since it's faster...\n",
+ "# us os.system or subprocess.run\n",
+ "#\n",
+ "\n",
+ "work_path = '/content/Real-ESRGAN'\n",
+ "input_path = work_path + '/inputs'\n",
+ "output_path = work_path + '/results'\n",
+ "model = [\n",
+ " 'RealESRGAN_x2plus.pth',\n",
+ " 'RealESRGAN_x2plus_netD.pth',\n",
+ " 'RealESRNet_x4plus.pth',\n",
+ " 'RealESRGAN_x4plus_netD.pth',\n",
+ " 'RealESRGAN_x4plus_anime_6B.pth',\n",
+ " 'RealESRGAN_x4plus_anime_6B_netD.pth'\n",
+ " ]\n",
+ "output_format = '--ext ' + output_format\n",
+ "alpha_upsampler = '--alpha_upsampler ' + alpha_upsampler\n",
+ "split_chunk = '--tile ' + str(split_chunk)\n",
+ "\n",
+ "if style == 'anime':\n",
+ " use_model = model[4]\n",
+ "else:\n",
+ " use_model = model[2]\n",
+ "\n",
+ "if enable_custom_upscale_ratio is True:\n",
+ " if custom_upscale_ratio == '':\n",
+ " sys.exit('The custom_upscale_ratio field cannot be empty!')\n",
+ " else:\n",
+ " upscale_ratio = '--outscale ' + custom_upscale_ratio\n",
+ "else:\n",
+ " upscale_ratio = '--outscale ' + str(upscale_ratio)\n",
+ "\n",
+ "if optimize_face is True:\n",
+ " optimize_face = '--face_enhance'\n",
+ "else:\n",
+ " optimize_face = ''\n",
+ "\n",
+ "if half_precision_mode is True:\n",
+ " half_precision_mode = '--half'\n",
+ "else:\n",
+ " half_precision_mode = ''\n",
+ "\n",
+ "\n",
+ "!python \"{work_path}/inference_realesrgan.py\" --model_path \"{work_path}/experiments/pretrained_models/{use_model}\" --input \"{input_path}\" --output \"{output_path}\" {upscale_ratio} {split_chunk} {alpha_upsampler} {split_chunk} {optimize_face} {half_precision_mode} {output_format} --suffix 'realesrgan'\n",
+ "\n",
+ "print('\\nResults are saved in:', output_path)\n",
+ "\n",
+ "\n",
+ "#====================================================================================================\n",
+ "#\n",
+ "# import subprocess\n",
+ "# from subprocess import PIPE\n",
+ "\n",
+ "# work_path = '/content/Real-ESRGAN'\n",
+ "# input_path = work_path + '/inputs'\n",
+ "# output_path = work_path + '/results'\n",
+ "# use_model = 'RealESRGAN_x2plus.pth'\n",
+ "\n",
+ "# cmd = 'python inference_realesrgan.py --model_path experiments/pretrained_models/' + use_model + ' --input inputs'\n",
+ "# process_run = subprocess.run(cmd, shell=True, stdout=PIPE, stderr=PIPE, universal_newlines=True, cwd=work_path)\n",
+ "# print(process_run.stdout, process_run.stderr)\n",
+ "\n",
+ "# print('\\nOutputs are saved in:', output_path)\n",
+ "#\n",
+ "#===================================================================================================="
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "3cfIjdfASyNI"
+ },
+ "outputs": [],
+ "source": [
+ "#============================== [FORM] ==============================#\n",
+ "#@markdown ##← [Start] Visualize comparison (optional)\n",
+ "#====================================================================#\n",
+ "#\n",
+ "# Codes below are from Real-ESRGAN.\n",
+ "# Path variables are of course has been changed.\n",
+ "#\n",
+ "#====================================================================#\n",
+ "\n",
+ "working_directory = '/content/Real-ESRGAN'\n",
+ "input_folder = working_directory + '/inputs'\n",
+ "result_folder = working_directory + '/results'\n",
+ "\n",
+ "# utils for visualization\n",
+ "import cv2\n",
+ "import matplotlib.pyplot as plt\n",
+ "def display(img1, img2):\n",
+ " fig = plt.figure(figsize=(25, 10))\n",
+ " ax1 = fig.add_subplot(1, 2, 1) \n",
+ " plt.title('Input image', fontsize=16)\n",
+ " ax1.axis('off')\n",
+ " ax2 = fig.add_subplot(1, 2, 2)\n",
+ " plt.title('Real-ESRGAN output', fontsize=16)\n",
+ " ax2.axis('off')\n",
+ " ax1.imshow(img1)\n",
+ " ax2.imshow(img2)\n",
+ "def imread(img_path):\n",
+ " img = cv2.imread(img_path)\n",
+ " img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n",
+ " return img\n",
+ "\n",
+ "# display each image in the upload folder\n",
+ "import os\n",
+ "import glob\n",
+ "\n",
+ "input_list = sorted(glob.glob(os.path.join(input_folder, '*')))\n",
+ "output_list = sorted(glob.glob(os.path.join(result_folder, '*')))\n",
+ "for input_path, output_path in zip(input_list, output_list):\n",
+ " img_input = imread(input_path)\n",
+ " img_output = imread(output_path)\n",
+ " display(img_input, img_output)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "2QB4GeGX0Wbr"
+ },
+ "outputs": [],
+ "source": [
+ "#============================== [FORM] ==============================#\n",
+ "#@markdown ##← Download results (archived)\n",
+ "#====================================================================#\n",
+ "\n",
+ "zip_filename = 'Real-ESRGAN_result.zip'\n",
+ "\n",
+ "if os.path.exists(zip_filename):\n",
+ " os.remove(zip_filename)\n",
+ "\n",
+ "os.system(f\"zip -r -j {zip_filename} /content/Real-ESRGAN/results/*\")\n",
+ "\n",
+ "files.download(zip_filename)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "e-OWHJwruE6V"
+ },
+ "source": [
+ "### StyleGAN2 "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "OvwxWoyaUsIL"
+ },
+ "source": [
+ "GPU runtime is required! "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "jqz-1eEnuIer"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Install] StyleGAN2 \n",
+ "# ================================================================ #\n",
+ "\n",
+ "%cd /content\n",
+ "!pip install typeguard;\n",
+ "!pip install psutil\n",
+ "!pip install humanize\n",
+ "!pip install tqdm\n",
+ "!rm -rf stylegan2 && git clone https://github.com/NVlabs/stylegan2.git;\n",
+ "%cd /content/stylegan2\n",
+ "\n",
+ "print(\"Installing\")\n",
+ "\n",
+ "from IPython.display import Image, clear_output\n",
+ "from google.colab import files\n",
+ "import sys\n",
+ "import pickle\n",
+ "import numpy as np\n",
+ "import PIL\n",
+ "import psutil\n",
+ "import humanize\n",
+ "import os\n",
+ "import time\n",
+ "from tqdm import tqdm\n",
+ "\n",
+ "from scipy import ndimage\n",
+ "\n",
+ "%tensorflow_version 1.x\n",
+ "sys.path.append('/content/stylegan2/dnnlib')\n",
+ "import dnnlib\n",
+ "import dnnlib.tflib as tflib\n",
+ "dnnlib.tflib.init_tf()\n",
+ "\n",
+ "entity_to_url = {\n",
+ " 'faces': 'https://drive.google.com/uc?id=1erg93hWnekh57m3cwsAnqJYfYVceVVSe',\n",
+ " 'celebs': 'https://drive.google.com/uc?id=1q8VldTeTbruoh34ih6GftOcybGNA0dcZ',\n",
+ " 'bedrooms': 'https://drive.google.com/uc?id=15EV9JBiQ7ifoi-B-DQAZF4sYPdCAsiCY',\n",
+ " 'cars': 'https://drive.google.com/uc?id=1QzWwIqJITrg5NWG7QyqrArhb_4UhStDy',\n",
+ " 'cats': 'https://drive.google.com/uc?id=1Fz12B8TSPiRtzCqjhFxTH_W-rIZ5rSGr',\n",
+ " 'anime': 'https://drive.google.com/uc?id=1z8N_-xZW9AU45rHYGj1_tDHkIkbnMW-R',\n",
+ " 'chruch': 'https://drive.google.com/uc?id=1-0JMXPdCQLIVxkDE_S9pO8t8mWoEvhHl',\n",
+ " 'horse': 'https://drive.google.com/uc?id=1-1oc3016pUDi2er1zEvjGcFy8FC-QAh3',\n",
+ " 'anime': 'https://drive.google.com/uc?id=1-91fGZSsZJPNlFytg5iHvVLqxKWDLFt8',\n",
+ " 'anime_portrait': 'https://drive.google.com/uc?id=1-Bw24cv9o7qjLtd8yq8bzzz9AjR9QAkL',\n",
+ " 'faces2': 'https://drive.google.com/uc?id=18rJYK9oF6D7C607Be1B_Fu53rjjHUAT1',\n",
+ " 'GOT': 'https://drive.google.com/uc?id=1-0LCuuUxUA0R6gdSd9prn5sP7T01iF0e',\n",
+ "}\n",
+ "\n",
+ "model_cache = {}\n",
+ "synthesis_kwargs = dict(output_transform=dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True), minibatch_size=20)\n",
+ "\n",
+ "def gen_pil_image(latents, zoom=1, psi=0.7):\n",
+ " fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True)\n",
+ " image = Gs.run(latents, None, randomize_noise=True, output_transform=fmt, truncation_psi=psi)\n",
+ " if zoom == 1:\n",
+ " return PIL.Image.fromarray(image[0])\n",
+ " else:\n",
+ " print(image[0].shape)\n",
+ " return PIL.Image.fromarray(ndimage.zoom(image[0],(zoom,zoom,1)))\n",
+ "\n",
+ "import google.colab.output\n",
+ "import random\n",
+ "import io\n",
+ "import base64\n",
+ "\n",
+ "def gen(l=None, psi=1):\n",
+ " if l is None:\n",
+ " l = [random.random()*2-1 for x in range(512)]\n",
+ " pimg = gen_pil_image(np.array(l).reshape(1,512), psi=psi)\n",
+ " bio = io.BytesIO()\n",
+ " pimg.save(bio, \"PNG\")\n",
+ " b = bio.getvalue()\n",
+ " return 'data:image/png;base64,'+str(base64.b64encode(b),encoding='utf-8')\n",
+ "\n",
+ "google.colab.output.register_callback('gen', gen)\n",
+ "\n",
+ "##\n",
+ "def fetch_model(name):\n",
+ " if model_cache.get(name):\n",
+ " return model_cache[name]\n",
+ " url = entity_to_url[name]\n",
+ " with dnnlib.util.open_url(url, cache_dir='cache') as f:\n",
+ " _G, _D, Gs = pickle.load(f)\n",
+ " model_cache[name] = Gs\n",
+ " return model_cache[name]\n",
+ "\n",
+ "def fetch_file(filename):\n",
+ " with open(filename,'rb') as f:\n",
+ " return pickle.load(f)\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "BPdx4NeDu1SX"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Render Model \n",
+ "# ================================================================ #\n",
+ "\n",
+ "#choose model here. default is ffhq\n",
+ "import os\n",
+ "Render_Model = \"anime\" #@param [\"faces\",\"faces2\",\"GOT\",\"celebs\",\"bedrooms\",\"cars\",\"cats\",\"chruch\",\"horse\",\"anime\"]\n",
+ "\n",
+ "\n",
+ "if Render_Model == \"faces\":\n",
+ " curr_model = \"faces\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
+ " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')\n",
+ "\n",
+ "if Render_Model == \"faces2\":\n",
+ " curr_model = \"faces2\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
+ " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')\n",
+ "\n",
+ "if Render_Model == \"celebs\":\n",
+ " curr_model = \"celebs\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
+ " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')\n",
+ "\n",
+ "if Render_Model == \"bedrooms\":\n",
+ " curr_model = \"bedrooms\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
+ " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')\n",
+ "\n",
+ "if Render_Model == \"cars\":\n",
+ " curr_model = \"cars\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
+ " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')\n",
+ "\n",
+ "if Render_Model == \"cats\":\n",
+ " curr_model = \"cats\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
+ " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')\n",
+ "\n",
+ "if Render_Model == \"chruch\":\n",
+ " curr_model = \"chruch\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
+ " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')\n",
+ "\n",
+ "if Render_Model == \"horse\":\n",
+ " curr_model = \"horse\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
+ " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')\n",
+ "\n",
+ "if Render_Model == \"anime\":\n",
+ " curr_model = \"anime\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
+ " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')\n",
+ "\n",
+ "if Render_Model == \"anime_portrait\":\n",
+ " curr_model = \"anime_portrait\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
+ " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')\n",
+ "\n",
+ "if Render_Model == \"GOT\":\n",
+ " curr_model = \"GOT\" # can be faces, celebs, bedrooms, cars, cats, anime\n",
+ " Gs = fetch_model(curr_model) # if you uploaded your own file, use fetch_file('path/to/file.pkl')"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "xYUOT5SAu_wz"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] StyleGAN2 \n",
+ "# ================================================================ #\n",
+ "\n",
+ "from IPython.display import HTML\n",
+ "\n",
+ "def get_latent_html(i):\n",
+ " return \"\"\"\n",
+ " L%03i: \n",
+ " \n",
+ "
\"\"\" % (i, i, i, (random.random()*2-1))\n",
+ "\n",
+ "def get_latents_html():\n",
+ " return '\\n'.join([get_latent_html(i) for i in range(512)])\n",
+ "\n",
+ "input_form = \"\"\"\n",
+ " \n",
+ " \n",
+ "\n",
+ "\n",
+ "
You have currently loaded %s model
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " %s\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "
\n",
+ "
\n",
+ " Generate from latents \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " psi: \n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " Mutate randomly \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " Mutation strength: \n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " Random image \n",
+ "
\n",
+ "
\n",
+ " Normalize latents \n",
+ "
\n",
+ "
\n",
+ "\n",
+ "
\n",
+ "
\n",
+ " Save latents \n",
+ " Load latents \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "
\n",
+ "\"\"\" % (curr_model, get_latents_html())\n",
+ "\n",
+ "javascript = \"\"\"\n",
+ " \n",
+ "\n",
+ "\"\"\"\n",
+ "\n",
+ "HTML(input_form + javascript)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "AMu9crpy-7yb"
+ },
+ "source": [
+ "### waifu2xLab "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Q1H1wCcM-1Vd"
+ },
+ "source": [
+ "GPU runtime is optional, but waifu2x could perform better on GPU. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "o-L3111Z-2a3"
+ },
+ "source": [
+ "waifu2xLab is a Google Colab implementation of tsurumeso 's waifu2x-chainer \n",
+ "\n",
+ " \n",
+ "\n",
+ "2D character picture (Kagamine Rin) is licensed under CC BY-NC by piapro [2]. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "0IOySews_Ine"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Clone waifu2x-chainer and Install Dependencies \n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "\n",
+ "waifu2x_path_1 = \"/content/tools/waifu2x\"\n",
+ "waifu2x_path_2 = waifu2x_path_1 + \"/waifu2x.py\"\n",
+ "input_path = \"/content/waifu2x/input\"\n",
+ "output_path = \"/content/waifu2x/output\"\n",
+ "\n",
+ "\n",
+ "if os.path.exists(waifu2x_path_1) and os.path.isdir(waifu2x_path_1) and os.path.exists(waifu2x_path_2) and os.path.isfile(waifu2x_path_2):\n",
+ " pass\n",
+ "else:\n",
+ " # Installing the required dependencies\n",
+ " # !pip install -q cupy-cuda100\n",
+ " !pip install -q futures\n",
+ " !pip install -q chainer\n",
+ "\n",
+ " # Cloning waifu2x-chainer from github\n",
+ " !git clone -l -s https://github.com/tsurumeso/waifu2x-chainer.git /content/tools/waifu2x\n",
+ "\n",
+ " # Creating input and output directory for waifu2x-chainer to work with\n",
+ " if os.path.exists(input_path) and os.path.exists(output_path):\n",
+ " pass\n",
+ " elif not os.path.exists(input_path) and os.path.exists(output_path):\n",
+ " os.makedirs(input_path)\n",
+ " elif os.path.exists(input_path) and not os.path.exists(output_path):\n",
+ " os.makedirs(output_path)\n",
+ " else:\n",
+ " os.makedirs(input_path)\n",
+ " os.makedirs(output_path)\n",
+ "\n",
+ " clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "d_OGARyM_L8P"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Upload Image or Get from URL \n",
+ "image_source = \"file_upload\" #@param [\"file_upload\", \"url\"]\n",
+ "url = \"\" # @param {type:\"string\"}\n",
+ "# @markdown > For the url, input a direct link to the file. (e.g: https://domain.moe/saber_waifu.jpg )\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "import google.colab.files\n",
+ "\n",
+ "\n",
+ "waifu2x_path_1 = \"/content/tools/waifu2x\"\n",
+ "waifu2x_path_2 = waifu2x_path_1 + \"/waifu2x.py\"\n",
+ "input_path = \"/content/waifu2x/input\"\n",
+ "output_path = \"/content/waifu2x/output\"\n",
+ "\n",
+ "\n",
+ "def IOFolderCheck():\n",
+ " if os.path.exists(input_path) and os.path.exists(output_path):\n",
+ " pass\n",
+ " elif not os.path.exists(input_path) and os.path.exists(output_path):\n",
+ " os.makedirs(input_path)\n",
+ " elif os.path.exists(input_path) and not os.path.exists(output_path):\n",
+ " os.makedirs(output_path)\n",
+ " elif not os.path.exists(input_path) and not os.path.exists(output_path):\n",
+ " os.makedirs(input_path)\n",
+ " os.makedirs(output_path)\n",
+ "\n",
+ "\n",
+ "if os.path.exists(waifu2x_path_1) and os.path.isdir(waifu2x_path_1) and os.path.exists(waifu2x_path_2) and os.path.isfile(waifu2x_path_2):\n",
+ " IOFolderCheck()\n",
+ "\n",
+ " %cd /content/waifu2x/input\n",
+ " clear_output()\n",
+ "\n",
+ "\n",
+ " if image_source == \"file_upload\":\n",
+ " uploaded = google.colab.files.upload()\n",
+ " else:\n",
+ " if url == \"\":\n",
+ " display(HTML(\"❌ The url field is empty! \"))\n",
+ " else:\n",
+ " !wget -q {url}\n",
+ " \n",
+ "\n",
+ " %cd /content\n",
+ " clear_output()\n",
+ "else:\n",
+ " display(HTML(\"❌ Unable to locate waifu2x! Make sure you have already run the first cell first! \"))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "pZJnNTad_W0I"
+ },
+ "outputs": [],
+ "source": [
+ "#============================= FORM ============================= #\n",
+ "#@markdown ← [Start] waifu2xLab \n",
+ "input = \"\" #@param {type:\"string\"}\n",
+ "#@markdown > If the \"input\" and \"output\" fields are empty, waifu2xLab will look for image(s) in \"/content/waifu2x/input\" and store the processed image(s) into \"/content/waifu2x/output\". By default, waifu2xLab will process anything inside the \"input\" folder.To process a single image, type in the absolute path of the file (e.g: /content/downloads/image.jpg).\n",
+ "output = \"\" #@param {type:\"string\"}\n",
+ "#@markdown > If left empty, the default output path will be used: /content/waifu2x/output\n",
+ "\n",
+ "#@markdown ---\n",
+ "processor = \"CPU\" #@param [\"CPU\", \"GPU\"]\n",
+ "mode = \"De-noise\" #@param [\"De-noise\", \"Upscale\", \"De-noise & Upscale\"]\n",
+ "tta = \"Disabled\" #@param [\"Enabled\", \"Disabled\"]\n",
+ "tta_level = \"8\" #@param [\"2\", \"4\", \"8\"]\n",
+ "# tta_level = 2 #@param {type:\"slider\", min:2, max:8, step:2}\n",
+ "denoise_level = 0 #@param {type:\"slider\", min:0, max:3, step:1}\n",
+ "upscale_ratio = 1 #@param {type:\"slider\", min:1, max:10, step:1}\n",
+ "output_quality = 100 #@param {type:\"slider\", min:1, max:100, step:1}\n",
+ "color_profile = \"RGB\" #@param [\"RGB\", \"YUV\"]\n",
+ "model = \"VGG7\" #@param [\"VGG7\", \"UpConv7\", \"ResNet10\", \"UpResNet10\"]\n",
+ "output_format = \"PNG\" #@param [\"PNG\", \"WEBP\"]\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "import google.colab.files\n",
+ "\n",
+ "\n",
+ "waifu2x_path_1 = \"/content/tools/waifu2x\"\n",
+ "waifu2x_path_2 = waifu2x_path_1 + \"/waifu2x.py\"\n",
+ "input_path = \"/content/waifu2x/input\"\n",
+ "output_path = \"/content/waifu2x/output\"\n",
+ "\n",
+ "\n",
+ "def IOFolderCheck():\n",
+ " if os.path.exists(input_path) and os.path.exists(output_path):\n",
+ " pass\n",
+ " elif not os.path.exists(input_path) and os.path.exists(output_path):\n",
+ " os.makedirs(input_path)\n",
+ " elif os.path.exists(input_path) and not os.path.exists(output_path):\n",
+ " os.makedirs(output_path)\n",
+ " elif not os.path.exists(input_path) and not os.path.exists(output_path):\n",
+ " os.makedirs(input_path)\n",
+ " os.makedirs(output_path)\n",
+ "\n",
+ "\n",
+ "# For now, the CPU core is hardcoded to use 4 cores.\n",
+ "# The same goes for GPU, only GPU = 0 will be used.\n",
+ "if processor == \"CPU\":\n",
+ " processor = \"\"\n",
+ "elif processor == \"GPU\":\n",
+ " processor = \"-g 0\"\n",
+ "\n",
+ "# Checking for which mode is chosen.\n",
+ "if mode == \"De-noise\":\n",
+ " mode = \"noise\"\n",
+ "\n",
+ " upscale_ratio = 1\n",
+ "elif mode == \"Upscale\":\n",
+ " mode = \"scale\"\n",
+ "\n",
+ " denoise_level = 0\n",
+ "elif mode == \"De-noise & Upscale\":\n",
+ " mode = \"noise_scale\"\n",
+ "\n",
+ "# Checking whether TTA is enabled or not.\n",
+ "if tta == \"Enabled\":\n",
+ " tta1 = \"-t\"\n",
+ " tta2 = \"-T\"\n",
+ "elif tta == \"Disabled\":\n",
+ " tta1 = \"\"\n",
+ " tta2 = \"\"\n",
+ " tta_level = \"\"\n",
+ "\n",
+ "# Checking for which arch/model is used and convert it into parameter number.\n",
+ "if model == \"VGG7\":\n",
+ " model = 0\n",
+ "elif model == \"UpConv7\":\n",
+ " model = 1\n",
+ "elif model == \"ResNet10\":\n",
+ " model = 2\n",
+ "elif model == \"UpResNet10\":\n",
+ " model = 3\n",
+ "\n",
+ "# Checking for the chosen color profile and convert it into parameter.\n",
+ "if color_profile == \"YUV\":\n",
+ " color_profile = \"y\"\n",
+ "elif color_profile == \"RGB\":\n",
+ " color_profile = \"rgb\"\n",
+ "\n",
+ "# Checking for which output format is chosen and convert it into parameter.\n",
+ "if output_format == \"PNG\":\n",
+ " output_format = \"png\"\n",
+ "elif output_format == \"WEBP\":\n",
+ " output_format = \"webp\"\n",
+ "\n",
+ "# Checking whether input and output fields are empty or not\n",
+ "# If they are empty, the default storing path will be used (/content/waifu2x/output/)\n",
+ "if input == \"\" and output == \"\":\n",
+ " input = input_path\n",
+ " output = output_path\n",
+ "elif input == \"\" and not output == \"\":\n",
+ " input = inpput_path\n",
+ "elif not input == \"\" and output == \"\":\n",
+ " output = output_path\n",
+ "\n",
+ "\n",
+ "if os.path.exists(waifu2x_path_1) and os.path.isdir(waifu2x_path_1) and os.path.exists(waifu2x_path_2) and os.path.isfile(waifu2x_path_2):\n",
+ " IOFolderCheck()\n",
+ "\n",
+ " %cd \"$waifu2x_path_1\"\n",
+ " clear_output()\n",
+ "\n",
+ " !python waifu2x.py {processor} -m {mode} {tta1} {tta2} {tta_level} -n {denoise_level} -s {upscale_ratio} -c {color_profile} -a {model} -e {output_format} -q {output_quality} -i \"{input}\" -o \"{output}\"\n",
+ "\n",
+ " %cd \"/content\"\n",
+ " clear_output()\n",
+ "else:\n",
+ " display(HTML(\"❌ Unable to locate waifu2x! Make sure you have already run the first cell first! \"))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "uQT6GEq9Na_E"
+ },
+ "source": [
+ "## ✧ Programming ✧ "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "FdDNhzc0NdeS"
+ },
+ "source": [
+ "### Visual Studio Code "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "QaKEKUrRNfHI"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] code-server
\n",
+ "# @markdown VS Code in the browser. Run VS Code on any machine anywhere and access it in the browser.
\n",
+ "# @markdown \n",
+ "# @markdown ⚙️ Install Configuration ⚙️ \n",
+ "TOKEN = \"\" \n",
+ "REGION = \"AP\"\n",
+ "USE_FREE_TOKEN = True #{type:\"boolean\"}\n",
+ "INSTALL_EXTENSION = \"ms-python.python ms-vscode.cpptools ritwickdey.LiveServer sidthesloth.html5-boilerplate tht13.python\" #@param {type:\"string\"}\n",
+ "USER_DATA_DIR = \"/content/tools/code-server/userdata\" #@param {type:\"string\"}\n",
+ "OPEN_FOLDER = \"/content/\" #@param {type: \"string\"} \n",
+ "TAG_NAME = \"3.11.1\" #@param {type: \"string\"}\n",
+ "#@markdown > See HERE to get the tag name.\n",
+ "PACKAGES = \"amd64\" #@param [\"x86_64\", \"amd64\"]\n",
+ "RUN_LATEST = True\n",
+ "PORT_FORWARD = \"argotunnel\" #[\"ngrok\", \"localhost\", \"argotunnel\"]\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os,sys, pathlib, zipfile, re, tarfile, shutil\n",
+ "import urllib.request\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " findProcess,\n",
+ " displayUrl,\n",
+ " findPackageR,\n",
+ " textAn\n",
+ ")\n",
+ "\n",
+ "loadingAn()\n",
+ "\n",
+ "os.makedirs('tools/code-server/', exist_ok=True)\n",
+ "os.makedirs('tools/temp', exist_ok=True)\n",
+ "binFile = ''\n",
+ "\n",
+ "# Downloading code-server\n",
+ "if not os.path.exists(\"tools/code-server/README.md\"):\n",
+ " os.system(\"apt install net-tools -y\")\n",
+ "\n",
+ " BASE_URL = r\"https://github.com/cdr/code-server/\"\n",
+ " rawRdata = findPackageR(\"cdr/code-server\",\n",
+ " f\"linux-{PACKAGES}.tar.gz\",\n",
+ " False if RUN_LATEST else TAG_NAME,\n",
+ " all_=True)\n",
+ " file_name = rawRdata['assets']['name']\n",
+ " urlF = rawRdata['assets']['browser_download_url']\n",
+ " output_file = \"tools/temp/code-server.tar.gz\"\n",
+ "\n",
+ " textAn(f\"Installing code-server {rawRdata['tag_name']} ...\", ty=\"twg\")\n",
+ " \n",
+ " urllib.request.urlretrieve(urlF, output_file)\n",
+ " with tarfile.open(output_file, 'r:gz') as tar_ref:\n",
+ " tar_ref.extractall('tools/temp/')\n",
+ " os.renames(\"tools/temp/\"+file_name[:-7], 'tools/code-server/')\n",
+ " try:\n",
+ " pathlib.Path(output_file).unlink()\n",
+ " except FileNotFoundError:\n",
+ " pass\n",
+ " try:\n",
+ " os.remove('tools/code-server/lib/libstdc++.so.6')\n",
+ " except FileNotFoundError:\n",
+ " pass\n",
+ " \n",
+ " binList = ['bin/code-server',\n",
+ " 'code-server']\n",
+ " for b in binList:\n",
+ " if os.path.exists('tools/code-server/'+b):\n",
+ " binFile = b\n",
+ " break\n",
+ " \n",
+ " # workspace settings\n",
+ " configScript = \"\"\"{\n",
+ " \"workbench.colorTheme\": \"Default Dark+\",\n",
+ " \"editor.minimap.enabled\": false\n",
+ "}\n",
+ "\"\"\"\n",
+ " os.makedirs(f'{OPEN_FOLDER}/.vscode', exist_ok=True)\n",
+ " with open(f'{OPEN_FOLDER}/.vscode/settings.json', 'w') as w:w.write(configScript)\n",
+ "\n",
+ " if INSTALL_EXTENSION:\n",
+ " perExtension = INSTALL_EXTENSION.split(' ')\n",
+ " for l in perExtension:\n",
+ " cmdE = f\"./{binFile} \" \\\n",
+ " f\"--user-data-dir {USER_DATA_DIR}\" \\\n",
+ " f\" --install-extension {l}\"\n",
+ " runSh(cmdE, cd=\"tools/code-server\", shell=True)\n",
+ "\n",
+ "\n",
+ "if not findProcess(\"node\", \"--extensions-dir\"):\n",
+ " cmdDo = f\"./{binFile} --auth none \" \\\n",
+ " f\" --port 5050 --user-data-dir {USER_DATA_DIR}\" \\\n",
+ " \" &\"\n",
+ " runSh(cmdDo, \n",
+ " cd=\"tools/code-server\",\n",
+ " shell=True)\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "server = PortForward_wrapper(\n",
+ " PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['code-server', 5050, 'http']], REGION.lower(), \n",
+ " [f\"{HOME}/.ngrok2/code-server.yml\", 30499]\n",
+ ").start('code-server', displayB=False)\n",
+ "displayUrl(server, EcUrl=f\"/?folder={OPEN_FOLDER}\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "-HjoEvVINmgx"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Port Forwarding
\n",
+ "# @markdown Type in whatever PORT you want and separate them with comma and space. `80, 8080, 4040`
\n",
+ "USE_FREE_TOKEN = True \n",
+ "TOKEN = \"\" \n",
+ "REGION = \"US\" #[\"US\", \"EU\", \"AP\", \"AU\", \"SA\", \"JP\", \"IN\"]\n",
+ "PORT_LIST = \"\" #@param {type:\"string\"}\n",
+ "PORT_FORWARD = \"argotunnel\" #[\"ngrok\", \"localhost\", \"argotunnel\"]\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, pathlib, zipfile, re\n",
+ "import urllib.request\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " findProcess,\n",
+ " displayUrl,\n",
+ " textAn\n",
+ ")\n",
+ "\n",
+ "loadingAn()\n",
+ "\n",
+ "os.makedirs('tools/', exist_ok=True)\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "m = []\n",
+ "splitPortList = PORT_LIST.split(',')\n",
+ "for p in splitPortList:\n",
+ " p = int(p)\n",
+ " m.append([f\"s{p}\", p, 'http'])\n",
+ "\n",
+ "Server = PortForward_wrapper(\n",
+ " PORT_FORWARD, TOKEN, USE_FREE_TOKEN, m, REGION.lower(), \n",
+ " [f\"{HOME}/.ngrok2/randomPortOpen.yml\", 45535]\n",
+ ")\n",
+ "\n",
+ "for l in m:\n",
+ " displayUrl(Server.start(l[0], displayB=False, v=False), \n",
+ " pNamU=f\"{l[0][1:]} -> \", cls=False)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "_wlFbVS6JcSL"
+ },
+ "source": [
+ "## ✧ Remote Connection ✧ "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "KFpBZnkQhQz2"
+ },
+ "source": [
+ "**!! NOT FOR CRYPTOCURRENCY MINING !!** "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "WaSgbPEch7KH"
+ },
+ "source": [
+ "### Chrome Remote Desktop "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "1-hL0LM7vRH8"
+ },
+ "source": [
+ "Original code written by PradyumnaKrishna (modified for MiXLab use)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "t4yNp3KmLtZ6"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Create user \n",
+ "username = \"MiXLab\" #@param {type:\"string\"}\n",
+ "password = \"123456qwerty\" #@param {type:\"string\"}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "\n",
+ "print(\"Creating user and setting it up...\")\n",
+ "\n",
+ "# Creation of user\n",
+ "os.system(f\"useradd -m {username}\")\n",
+ "\n",
+ "# Add user to sudo group\n",
+ "os.system(f\"adduser {username} sudo\")\n",
+ " \n",
+ "# Set password of user to 'root'\n",
+ "os.system(f\"echo '{username}:{password}' | sudo chpasswd\")\n",
+ "\n",
+ "# Change default shell from sh to bash\n",
+ "os.system(\"sed -i 's/\\/bin\\/sh/\\/bin\\/bash/g' /etc/passwd\")\n",
+ "\n",
+ "print(\"User created and configured.\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "Q6bl1b0EifVG"
+ },
+ "outputs": [],
+ "source": [
+ "#============================= FORM ============================= #\n",
+ "#@markdown ← [Start] Remote Desktop \n",
+ "#@markdown \n",
+ "#@markdown \tClick HERE (opens in new tab) and set up a computer first. \n",
+ "#@markdown \tAfter you have done setting up a computer, get the Debian Linux command / authcode and paste it into the field below. \n",
+ "#@markdown \tRun the cell and wait for it to finish. \n",
+ "#@markdown \tNow, go to HERE (opens in new tab) and you should see a machine pops up in there. \n",
+ "#@markdown \tClick on that machine to remote it and enter the pin. \n",
+ "#@markdown \n",
+ "CRP = \"\" #@param {type:\"string\"}\n",
+ "\n",
+ "#@markdown Enter a PIN that is equal to or more than 6 digits\n",
+ "Pin = 123456 #@param {type: \"integer\"}\n",
+ "\n",
+ "#@markdown > It takes about 4 to 5 minutes for the installation process.\n",
+ "#================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import subprocess\n",
+ "\n",
+ "\n",
+ "class CRD:\n",
+ " def __init__(self):\n",
+ " os.system(\"apt update\")\n",
+ " self.installCRD()\n",
+ " self.installDesktopEnvironment()\n",
+ " self.installGoogleChorme()\n",
+ " self.finish()\n",
+ "\n",
+ " @staticmethod\n",
+ " def installCRD():\n",
+ " print(\"Installing Chrome Remote Desktop...\")\n",
+ " subprocess.run(['wget', 'https://dl.google.com/linux/direct/chrome-remote-desktop_current_amd64.deb'], stdout=subprocess.PIPE)\n",
+ " subprocess.run(['dpkg', '--install', 'chrome-remote-desktop_current_amd64.deb'], stdout=subprocess.PIPE)\n",
+ " subprocess.run(['apt', 'install', '--assume-yes', '--fix-broken'], stdout=subprocess.PIPE)\n",
+ "\n",
+ " @staticmethod\n",
+ " def installDesktopEnvironment():\n",
+ " print(\"Installing Desktop Environment...\")\n",
+ " os.system(\"export DEBIAN_FRONTEND=noninteractive\")\n",
+ " os.system(\"apt install --assume-yes xfce4 desktop-base xfce4-terminal\")\n",
+ " os.system(\"bash -c 'echo \\\"exec /etc/X11/Xsession /usr/bin/xfce4-session\\\" > /etc/chrome-remote-desktop-session'\")\n",
+ " os.system(\"apt remove --assume-yes gnome-terminal\")\n",
+ " os.system(\"apt install --assume-yes xscreensaver\")\n",
+ " os.system(\"systemctl disable lightdm.service\")\n",
+ "\n",
+ " @staticmethod\n",
+ " def installGoogleChorme():\n",
+ " print(\"Installing Google Chrome...\")\n",
+ " subprocess.run([\"wget\", \"https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb\"], stdout=subprocess.PIPE)\n",
+ " subprocess.run([\"dpkg\", \"--install\", \"google-chrome-stable_current_amd64.deb\"], stdout=subprocess.PIPE)\n",
+ " subprocess.run(['apt', 'install', '--assume-yes', '--fix-broken'], stdout=subprocess.PIPE)\n",
+ "\n",
+ " @staticmethod\n",
+ " def finish():\n",
+ " print(\"Finalizing...\")\n",
+ " os.system(f\"adduser {username} chrome-remote-desktop\")\n",
+ " command = f\"{CRP} --pin={Pin}\"\n",
+ " os.system(f\"su - {username} -c '{command}'\")\n",
+ " os.system(\"service chrome-remote-desktop start\")\n",
+ " print(\"Finished Succesfully!\")\n",
+ "\n",
+ "\n",
+ "try:\n",
+ " if username:\n",
+ " if CRP == \"\":\n",
+ " print(\"Please enter the authcode from the Chrome Remote Desktop site!\")\n",
+ " elif len(str(Pin)) < 6:\n",
+ " print(\"Enter a PIN that is equal to or more than 6 digits!\")\n",
+ " else:\n",
+ " CRD()\n",
+ "except NameError as e:\n",
+ " print(\"Username variable not found! Create a user first!\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Th3Qyn2uttiW"
+ },
+ "source": [
+ "#### Optionals "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "vk2qtOTGIFsQ"
+ },
+ "outputs": [],
+ "source": [
+ "#@title **Google Drive Mount**\n",
+ "#@markdown Google Drive used as Persistance HDD for files. \n",
+ "#@markdown Mounted at `user` Home directory inside drive folder\n",
+ "#@markdown (If `username` variable not defined then use root as default).\n",
+ "\n",
+ "def MountGDrive():\n",
+ " from google.colab import drive\n",
+ "\n",
+ " ! runuser -l $user -c \"yes | python3 -m pip install --user google-colab\" > /dev/null 2>&1\n",
+ "\n",
+ " mount = \"\"\"from os import environ as env\n",
+ "from google.colab import drive\n",
+ "\n",
+ "env['CLOUDSDK_CONFIG'] = '/content/.config'\n",
+ "drive.mount('{}')\"\"\".format(mountpoint)\n",
+ "\n",
+ " with open('/content/mount.py', 'w') as script:\n",
+ " script.write(mount)\n",
+ "\n",
+ " ! runuser -l $user -c \"python3 /content/mount.py\"\n",
+ "\n",
+ "try:\n",
+ " if username:\n",
+ " mountpoint = \"/home/\"+username+\"/drive\"\n",
+ " user = username\n",
+ "except NameError:\n",
+ " print(\"username variable not found, mounting at `/content/drive' using `root'\")\n",
+ " mountpoint = '/content/drive'\n",
+ " user = 'root'\n",
+ "\n",
+ "MountGDrive()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "8icuQYnyKDLk"
+ },
+ "outputs": [],
+ "source": [
+ "#@title **SSH**\n",
+ "\n",
+ "! pip install colab_ssh --upgrade &> /dev/null\n",
+ "\n",
+ "Ngrok = False #@param {type:'boolean'}\n",
+ "Agro = False #@param {type:'boolean'}\n",
+ "\n",
+ "\n",
+ "#@markdown Copy authtoken from https://dashboard.ngrok.com/auth (only for ngrok)\n",
+ "ngrokToken = \"\" #@param {type:'string'}\n",
+ "\n",
+ "\n",
+ "def runNGROK():\n",
+ " from colab_ssh import launch_ssh\n",
+ " from IPython.display import clear_output\n",
+ " launch_ssh(ngrokToken, password)\n",
+ " clear_output()\n",
+ "\n",
+ " print(\"ssh\", username, end='@')\n",
+ " ! curl -s http://localhost:4040/api/tunnels | python3 -c \\\n",
+ " \"import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'][6:].replace(':', ' -p '))\"\n",
+ "\n",
+ "\n",
+ "def runAgro():\n",
+ " from colab_ssh import launch_ssh_cloudflared\n",
+ " launch_ssh_cloudflared(password=password)\n",
+ "\n",
+ "\n",
+ "try:\n",
+ " if username:\n",
+ " pass\n",
+ " elif password:\n",
+ " pass\n",
+ "except NameError:\n",
+ " print(\"No user found using username and password as 'root'\")\n",
+ " username='root'\n",
+ " password='root'\n",
+ "\n",
+ "\n",
+ "if Agro and Ngrok:\n",
+ " print(\"You can't do that\")\n",
+ " print(\"Select only one of them\")\n",
+ "elif Agro:\n",
+ " runAgro()\n",
+ "elif Ngrok:\n",
+ " if ngrokToken == \"\":\n",
+ " print(\"No ngrokToken Found, Please enter it\")\n",
+ " else:\n",
+ " runNGROK()\n",
+ "else:\n",
+ " print(\"Select one of them\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "OXsG6_pxeEFu"
+ },
+ "outputs": [],
+ "source": [
+ "#@title Package Installer { vertical-output: true }\n",
+ "run = False #@param {type:\"boolean\"}\n",
+ "#@markdown *Package management actions (gasp)*\n",
+ "action = \"Install\" #@param [\"Install\", \"Check Installed\", \"Remove\"] {allow-input: true}\n",
+ "\n",
+ "package = \"wget\" #@param {type:\"string\"}\n",
+ "system = \"apt\" #@param [\"apt\", \"\"]\n",
+ "\n",
+ "def install(package=package, system=system):\n",
+ " if system == \"apt\":\n",
+ " !apt --fix-broken install > /dev/null 2>&1\n",
+ " !killall apt > /dev/null 2>&1\n",
+ " !rm /var/lib/dpkg/lock-frontend\n",
+ " !dpkg --configure -a > /dev/null 2>&1\n",
+ "\n",
+ " !apt-get install -o Dpkg::Options::=\"--force-confold\" --no-install-recommends -y $package\n",
+ " \n",
+ " !dpkg --configure -a > /dev/null 2>&1 \n",
+ " !apt update > /dev/null 2>&1\n",
+ "\n",
+ " !apt install $package > /dev/null 2>&1\n",
+ "\n",
+ "def check_installed(package=package, system=system):\n",
+ " if system == \"apt\":\n",
+ " !apt list --installed | grep $package\n",
+ "\n",
+ "def remove(package=package, system=system):\n",
+ " if system == \"apt\":\n",
+ " !apt remove $package\n",
+ "\n",
+ "if run:\n",
+ " if action == \"Install\":\n",
+ " install()\n",
+ " if action == \"Check Installed\":\n",
+ " check_installed()\n",
+ " if action == \"Remove\":\n",
+ " remove()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "UoeBdz6_KE6a"
+ },
+ "outputs": [],
+ "source": [
+ "#@title **Colab Shutdown**\n",
+ "\n",
+ "#@markdown To Kill NGROK Tunnel\n",
+ "NGROK = False #@param {type:'boolean'}\n",
+ "\n",
+ "#@markdown To Unmount GDrive\n",
+ "GDrive = False #@param {type:'boolean'}\n",
+ "\n",
+ "#@markdown To Sleep Colab\n",
+ "Sleep = True #@param {type:'boolean'}\n",
+ "\n",
+ "if NGROK:\n",
+ " ! killall ngrok\n",
+ "\n",
+ "if GDrive:\n",
+ " with open('/content/unmount.py', 'w') as unmount:\n",
+ " unmount.write(\"\"\"from google.colab import drive\n",
+ "drive.flush_and_unmount()\"\"\")\n",
+ " \n",
+ " try:\n",
+ " if user:\n",
+ " ! runuser $user -c 'python3 /content/unmount.py'\n",
+ " except NameError:\n",
+ " print(\"Google Drive not Mounted\")\n",
+ "\n",
+ "if Sleep:\n",
+ " from time import sleep\n",
+ " sleep(43200)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "CKxGMNKUJloT"
+ },
+ "source": [
+ "### IceMW + noVNC "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "NXhG3KGGJqtf"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] IceWM \n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, random, string, urllib.request, time\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "runW = get_ipython()\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " displayUrl,\n",
+ " findProcess,\n",
+ " CWD,\n",
+ " textAn,\n",
+ ")\n",
+ "\n",
+ "# Defining Github latest tag so the code can fetch the latest release, if there is any\n",
+ "def latestTag(link):\n",
+ " import re\n",
+ " from urllib.request import urlopen\n",
+ " htmlF = urlopen(link+\"/releases/latest\").read().decode('UTF-8')\n",
+ " return re.findall(r'.+\\/tag\\/([.0-9A-Za-z]+)\".+/', htmlF)[0]\n",
+ "\n",
+ "\n",
+ "loadingAn()\n",
+ "\n",
+ "os.makedirs(\"tools/noVnc\", exist_ok=True)\n",
+ "\n",
+ "# Generating the password\n",
+ "try:\n",
+ " print(f\"Found old password! : {password}\")\n",
+ "except:\n",
+ " password = ''.join(random.choice(string.ascii_letters + string.digits) for i in range(20))\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "if not findProcess(\"Xtightvnc\", \":1\"):\n",
+ " textAn(\"Please wait while noVNC is being prepared...\")\n",
+ " os.makedirs(f'{HOME}/.vnc', exist_ok=True)\n",
+ " runW.system_raw('apt update -y')\n",
+ " runW.system_raw('apt install -y icewm firefox tightvncserver autocutsel xterm')\n",
+ " runW.system_raw(rf'echo \"{password}\" | vncpasswd -f > ~/.vnc/passwd')\n",
+ " data = \"\"\"\n",
+ "#!/bin/bash\n",
+ "xrdb $HOME/.Xresources\n",
+ "xsetroot -solid black -cursor_name left_ptr\n",
+ "autocutsel -fork\n",
+ "icewm-session &\n",
+ "\"\"\"\n",
+ " with open(f'{HOME}/.vnc/xstartup', 'w+') as wNow: wNow.write(data)\n",
+ " os.chmod(f'{HOME}/.vnc/xstartup', 0o755)\n",
+ " os.chmod(f'{HOME}/.vnc/passwd', 0o400)\n",
+ " \n",
+ " runSh('sudo vncserver :1 -geometry 1440x870 -economictranslate -dontdisconnect &', shell=True)\n",
+ "\n",
+ " BASE_URL = \"https://github.com/geek1011/easy-novnc\"\n",
+ " LATEST_TAG = latestTag(BASE_URL)\n",
+ " output_file = \"tools/noVnc/easy-noVnc_linux-64bit\"\n",
+ " file_name = f\"easy-novnc_linux-64bit\"\n",
+ " urlF = f\"{BASE_URL}/releases/download/{LATEST_TAG}/{file_name}\"\n",
+ "\n",
+ " try:\n",
+ " urllib.request.urlretrieve(urlF, output_file)\n",
+ " except OSError:\n",
+ " pass\n",
+ "\n",
+ " os.chmod(output_file, 0o755)\n",
+ "\n",
+ "if not findProcess(\"easy-noVnc_linux-64bit\", '--addr \"0.0.0.0:6080\"'):\n",
+ " cmdDo = \"./easy-noVnc_linux-64bit --addr 0.0.0.0:6080 --port 5901\" \\\n",
+ " \" &\"\n",
+ " runSh(cmdDo, cd=\"tools/noVnc/\", shell=True)\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['vnc', 6080, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/noVNC.yml\", 4455])\n",
+ "data = Server.start('vnc', displayB=False)\n",
+ "displayUrl(data, pNamU='noVnc : ', EcUrl=f'/vnc.html?autoconnect=true&password={password}&path=vnc&resize=scale&reconnect=true&show_dot=true')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "COqwo7iH6_vu"
+ },
+ "source": [
+ "### NoMachine "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "eypiLPD8UtD2"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] NoMachine \n",
+ "USE_FREE_TOKEN = False\n",
+ "TOKEN = \"\" # @param {type:\"string\"}\n",
+ "REGION = \"US\"\n",
+ "PORT_FORWARD = \"ngrok\"\n",
+ "# @markdown > You would need to provide your own ngrok Authtoken.Click here to register for a free ngrok account.Click here to copy your ngrok Authtoken.Click here to download NoMachine.\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import tarfile\n",
+ "import urllib.request\n",
+ "import shutil\n",
+ "import time\n",
+ "from IPython.display import HTML, clear_output\n",
+ "from subprocess import Popen\n",
+ "\n",
+ "APT_INSTALL = \"apt install -y \"\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "runW = get_ipython()\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " findProcess,\n",
+ " textAn,\n",
+ " displayUrl\n",
+ ")\n",
+ "\n",
+ "loadingAn()\n",
+ "\n",
+ "os.makedirs(\"tools/nomachine\", exist_ok=True)\n",
+ "os.makedirs(\"/root/.icewm\", exist_ok=True)\n",
+ "\n",
+ "# password ganarate\n",
+ "try:\n",
+ " print(f\"Found the old password! : {password}\")\n",
+ "except:\n",
+ " password = 'nomachine'\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "start = time.time()\n",
+ "if not os.path.exists(\"tools/nomachine/NX/bin/nxserver\"):\n",
+ " textAn(\"Please wait while noMachine is being prepared...\")\n",
+ "\n",
+ " runW.system_raw('apt update --quiet --force-yes')\n",
+ "\n",
+ " # Minimal install \n",
+ " runW.system_raw(\n",
+ " 'apt install --quiet --force-yes --no-install-recommends \\\n",
+ " icewm x11-xserver-utils firefox xterm pcmanfm')\n",
+ "\n",
+ " # icewm theme\n",
+ " with open('/root/.icewm/theme', 'w') as w:\n",
+ " w.write('Theme=\"NanoBlue/default.theme\"')\n",
+ " \n",
+ " # with open('/root/.icewm/toolbar', 'w') as w:\n",
+ " # w.write('prog \"chromium\" ! chromium-browser --no-sandbox')\n",
+ "\n",
+ " # nomachine\n",
+ " \n",
+ " staticUrl = \"https://raw.githubusercontent.com/shirooo39/MiXLab/master/resources/packages/nomachine/nomachine_6.9.2_1_x86_64.tar.gz\"\n",
+ " configUrl = \"https://raw.githubusercontent.com/shirooo39/MiXLab/master/resources/packages/nomachine/NXetc.tar.gz\"\n",
+ " \n",
+ " output_file = 'tools/nomachine/nm.tar.gz'\n",
+ " config_file = 'tools/nomachine/etc.tar.gz'\n",
+ " urllib.request.urlretrieve(staticUrl, output_file)\n",
+ " urllib.request.urlretrieve(configUrl, config_file)\n",
+ " \n",
+ " with tarfile.open(output_file, 'r:gz') as t:t.extractall('tools/nomachine')\n",
+ " runSh('./nxserver --install', cd='tools/nomachine/NX', shell=True)\n",
+ " runSh('./nxserver --stop', cd='tools/nomachine/NX/bin', shell=True)\n",
+ " \n",
+ " shutil.rmtree('tools/nomachine/NX/etc')\n",
+ " with tarfile.open(config_file, 'r:gz') as t:t.extractall('tools/nomachine/NX')\n",
+ " os.remove(config_file)\n",
+ " \n",
+ " os.remove(output_file)\n",
+ " runSh('./nxserver --startup', cd='tools/nomachine/NX/bin', shell=True)\n",
+ " runW.system_raw(\"echo root:$password | chpasswd\")\n",
+ "\n",
+ "end = time.time()\n",
+ "\n",
+ "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['nomachine', 4000, 'tcp']], REGION.lower(), [f\"{HOME}/.ngrok2/nomachine.yml\", 8459])\n",
+ "\n",
+ "data = Server.start('nomachine', displayB=False)\n",
+ "host, port = data['url'][7:].split(':')\n",
+ "user = os.popen('whoami').read()\n",
+ "\n",
+ "# Colors\n",
+ "bttxt = 'hsla(10, 50%, 85%, 1)'\n",
+ "btcolor = 'hsla(10, 86%, 56%, 1)'\n",
+ "btshado = 'hsla(10, 40%, 52%, .4)'\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "display(HTML(\"\"\"NoMachine Configuration
Username Password Protocol Host Port
\"\"\"+user+\"\"\" \"\"\"+password+\"\"\" NX \"\"\"+host+\"\"\" \"\"\"+port+\"\"\"
Click HERE to see how to use the configuration.\"\"\"))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "JM1Do14AKIdF"
+ },
+ "source": [
+ "### SSH + noVNC "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "5-jp3jmlKKk5"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] SSH \n",
+ "CREATE_VNC = True #@param {type:\"boolean\"}\n",
+ "CREATE_SSH = True #@param {type:\"boolean\"}\n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "#TOKEN = \"\" #@param {type:\"string\"}\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, random, string, urllib.request, time\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "runW = get_ipython()\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " displayUrl,\n",
+ " findProcess,\n",
+ " CWD,\n",
+ " textAn,\n",
+ ")\n",
+ "\n",
+ "loadingAn()\n",
+ "\n",
+ "# Generating password\n",
+ "try:\n",
+ " print(f\"Found the old password! : {password}\")\n",
+ "except:\n",
+ " password = ''.join(random.choice(string.ascii_letters + string.digits) for i in range(20))\n",
+ "\n",
+ "if CREATE_SSH:\n",
+ " USE_FREE_TOKEN = False\n",
+ "\n",
+ "# Setting up the root password\n",
+ "if CREATE_SSH and os.path.exists('/var/run/sshd') == False:\n",
+ " # Setting up the SSH Daemon\n",
+ " runSh('apt install -qq -o=Dpkg::Use-Pty=0 openssh-server pwgen')\n",
+ " runW.system_raw(\"echo root:$password | chpasswd\")\n",
+ " os.makedirs(\"/var/run/sshd\", exist_ok=True)\n",
+ " runW.system_raw('echo \"PermitRootLogin yes\" >> /etc/ssh/sshd_config')\n",
+ " runW.system_raw('echo \"PasswordAuthentication yes\" >> /etc/ssh/sshd_config')\n",
+ " runW.system_raw('echo \"LD_LIBRARY_PATH=/usr/lib64-nvidia\" >> /root/.bashrc')\n",
+ " runW.system_raw('echo \"export LD_LIBRARY_PATH\" >> /root/.bashrc')\n",
+ "\n",
+ " # Running the SSH Daemon\n",
+ " if not findProcess(\"/usr/sbin/sshd\", command=\"-D\"):\n",
+ " runSh('/usr/sbin/sshd -D &', shell=True)\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "if CREATE_VNC:\n",
+ " # Start = time.time()\n",
+ " textAn(\"Please wait while noVNC is being prepared...\")\n",
+ " os.makedirs(f'{HOME}/.vnc', exist_ok=True)\n",
+ " runW.system_raw('add-apt-repository -y ppa:apt-fast/stable < /dev/null')\n",
+ " runW.system_raw('echo debconf apt-fast/maxdownloads string 16 | debconf-set-selections')\n",
+ " runW.system_raw('echo debconf apt-fast/dlflag boolean true | debconf-set-selections')\n",
+ " runW.system_raw('echo debconf apt-fast/aptmanager string apt-get | debconf-set-selections')\n",
+ " runW.system_raw('apt install -y apt-fast')\n",
+ " runW.system_raw('apt-fast install -y xfce4 xfce4-goodies firefox tightvncserver autocutsel')\n",
+ " runW.system_raw(rf'echo \"{password}\" | vncpasswd -f > ~/.vnc/passwd')\n",
+ " data = \"\"\"\n",
+ "#!/bin/bash\n",
+ "xrdb $HOME/.Xresources\n",
+ "autocutsel -fork\n",
+ "startxfce4 &\n",
+ "\"\"\"\n",
+ " with open(f'{HOME}/.vnc/xstartup', 'w+') as wNow: wNow.write(data)\n",
+ " os.chmod(f'{HOME}/.vnc/xstartup', 0o755)\n",
+ " os.chmod(f'{HOME}/.vnc/passwd', 0o400)\n",
+ " runSh('sudo vncserver &', shell=True)\n",
+ " runSh(f'git clone https://github.com/novnc/noVNC.git {CWD}/noVNC')\n",
+ " runSh(\"bash noVNC/utils/launch.sh --listen 6080 --vnc localhost:5901 &\", shell=True)\n",
+ " # End = time.time()\n",
+ "\n",
+ "Server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['ssh', 22, 'tcp'], ['vnc', 6080, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/noVNC_SSH.yml\", 4455])\n",
+ "data = Server.start('ssh', displayB=False)\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "Host,port = data['url'][7:].split(':')\n",
+ "data2 = Server.start('vnc', displayB=False)\n",
+ "\n",
+ "if CREATE_VNC:\n",
+ " displayUrl(data2, pNamU='noVnc : ', EcUrl=f'/vnc.html?autoconnect=true&password={password}')\n",
+ "if CREATE_SSH:\n",
+ " display(HTML(\"\"\"SSH Configuration
Host Port Password
\"\"\"+Host+\"\"\" \"\"\"+port+\"\"\" \"\"\"+password+\"\"\"
Simple SSH Commands Terminal connect ssh root@\"\"\"+Host+\"\"\" -p \"\"\"+port+\"\"\" SOCKS5 proxy ssh -D 8282 -q -C -N root@\"\"\"+Host+\"\"\" -p \"\"\"+port+\"\"\"
Click HERE to see how to use the configuration.
\"\"\"))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "0vHRnizI9BXA"
+ },
+ "source": [
+ "### WeTTY "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "FMd-AFnVYZid"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] WeTTY \n",
+ "# @markdown Terminal access in browser over HTTP / HTTPS.\n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, tarfile, urllib.request\n",
+ "from IPython.display import clear_output\n",
+ "from subprocess import Popen\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " findProcess,\n",
+ " displayUrl\n",
+ ")\n",
+ "\n",
+ "loadingAn()\n",
+ "os.makedirs('tools/temp', exist_ok=True)\n",
+ "\n",
+ "if not os.path.exists(\"tools/wetty/wetty\"):\n",
+ " # Build WeTTy from source\n",
+ " # os.system(\"git clone https://github.com/butlerx/wetty.git tools/wetty\")\n",
+ " # Popen('npm install'.split(), cwd='tools/wetty').wait()\n",
+ " # Popen('npm run-script build'.split(), cwd='tools/wetty').wait()\n",
+ " # Popen('npm i -g'.split(), cwd='tools/wetty').wait()\n",
+ " # --------------------------------------------------\n",
+ " # Download a pre-built WeTTy package from github\n",
+ " wettyBF = 'https://raw.githubusercontent.com/shirooo39/MiXLab/master/resources/packages/wetty/wetty.tar.gz'\n",
+ " fileSN = 'tools/temp/wetty.tar.gz'\n",
+ " urllib.request.urlretrieve(wettyBF, fileSN)\n",
+ " with tarfile.open(fileSN, 'r:gz') as t:t.extractall('tools/')\n",
+ " os.remove(fileSN)\n",
+ "\n",
+ "if not findProcess(\"wetty\", \"--port\"):\n",
+ "# Popen(\n",
+ "# r'wetty --port 4343 --bypasshelmet \\\n",
+ "# -b \"/\" -c \"/bin/bash\"'.split(), \n",
+ "# cwd='/content')\n",
+ " Popen(\n",
+ " r'tools/wetty/wetty --port 4343 --bypasshelmet \\\n",
+ " -b \"/\" -c \"/bin/bash\"'.split(), \n",
+ " cwd='/content')\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['wetty', '4343', 'http']], REGION.lower, [f\"{HOME}/.ngrok2/wetty.yml\", 31199]).start('wetty', displayB=True)\n",
+ "displayUrl(server, pNamU='WeTTy : ')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "9JBIZh3OZBaL"
+ },
+ "source": [
+ "## ✧ System Tools ✧ "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "2zGMePbPQJWI"
+ },
+ "source": [
+ "### Glances "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "vLhOue7XQJWa"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] Glances \n",
+ "# @markdown Glances is a cross-platform system monitoring tool written in Python.
\n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, urllib.request\n",
+ "from IPython.display import clear_output\n",
+ "from subprocess import Popen\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " findProcess,\n",
+ " displayUrl\n",
+ ")\n",
+ "\n",
+ "loadingAn()\n",
+ "\n",
+ "if not os.path.exists(\"/usr/local/bin/glances\"):\n",
+ " os.system(\"pip3 install https://github.com/nicolargo/glances/archive/master.zip\")\n",
+ " os.system('pip3 install Bottle')\n",
+ " os.system(\"pip3 install 'glances[gpu,ip]'\")\n",
+ "\n",
+ "if not findProcess(\"glances\", \"--webserver\"):\n",
+ " Popen(\n",
+ " 'glances --webserver --port 61208 --time 0 --enable-process-extended \\\n",
+ " --byte --diskio-show-ramfs --fs-free-space \\\n",
+ " --disable-check-update'.split()\n",
+ " )\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['glances', '61208', 'http']], REGION.lower(), [f\"{HOME}/.ngrok2/Glances.yml\", 31499]).start('glances', displayB=True)\n",
+ "displayUrl(server, pNamU='Glances : ')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "eaUJNGmju5G6"
+ },
+ "source": [
+ "### netdata "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "WSUUUDXsUOkl"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] netdata \n",
+ "# @markdown netdata is a real-time system performance monitoring utility.
\n",
+ "USE_FREE_TOKEN = True\n",
+ "TOKEN = \"\"\n",
+ "REGION = \"US\"\n",
+ "Tunneling = \"argo_tunnel_(cloudflare)\" #@param [\"argo_tunnel_(cloudflare)\", \"localhost.run\", \"ngrok\"]\n",
+ "\n",
+ "if Tunneling == \"argo_tunnel_(cloudflare)\":\n",
+ " PORT_FORWARD = \"argotunnel\"\n",
+ "elif Tunneling == \"localhost.run\":\n",
+ " PORT_FORWARD = \"localhost\"\n",
+ "elif Tunneling == \"ngrok\":\n",
+ " PORT_FORWARD = \"ngrok\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, psutil, subprocess, shlex\n",
+ "from IPython.display import HTML, clear_output\n",
+ "import time\n",
+ "\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " shellCmd = \"wget -qq https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\" \\\n",
+ " f\" -O {HOME}/.ipython/mixlab.py\"\n",
+ " subprocess.run(shlex.split(shellCmd))\n",
+ "\n",
+ "from mixlab import (\n",
+ " loadingAn,\n",
+ " PortForward_wrapper,\n",
+ " runSh,\n",
+ " displayUrl,\n",
+ " textAn\n",
+ ")\n",
+ "\n",
+ "def CheckProcess(process, command):\n",
+ " for pid in psutil.pids():\n",
+ " try:\n",
+ " p = psutil.Process(pid)\n",
+ " if process in p.name():\n",
+ " for arg in p.cmdline():\n",
+ " if command in str(arg): \n",
+ " return True\n",
+ " else:\n",
+ " pass\n",
+ " else:\n",
+ " pass\n",
+ " except:\n",
+ " continue\n",
+ "\n",
+ "def Start_ServerMT():\n",
+ " if CheckProcess(\"netdata\", \"\") != True:\n",
+ " runSh('/usr/sbin/netdata', shell=True)\n",
+ "\n",
+ "loadingAn() \n",
+ "\n",
+ "if not os.path.isfile(\"/usr/sbin/netdata\"):\n",
+ " clear_output(wait=True)\n",
+ " textAn(\"Installing netdata...\")\n",
+ " # Start = time.time()\n",
+ " get_ipython().system_raw(\"bash <(curl -Ss https://my-netdata.io/kickstart.sh) --dont-wait --dont-start-it\")\n",
+ " # End = time.time()\n",
+ " Start_ServerMT()\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "server = PortForward_wrapper(PORT_FORWARD, TOKEN, USE_FREE_TOKEN, [['netdata', 19999, 'http']], REGION.lower, [f\"{HOME}/.ngrok2/netdata.yml\", 7044]).start('netdata', 'g')\n",
+ "displayUrl(server, pNamU='netdata : ')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "xzeZBOnhyKPy"
+ },
+ "source": [
+ "### speedtest "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "Az1Yh9WMyQwB"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] speedtest \n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import re\n",
+ "import csv\n",
+ "import sys\n",
+ "import math\n",
+ "import errno\n",
+ "import signal\n",
+ "import socket\n",
+ "import timeit\n",
+ "import datetime\n",
+ "import platform\n",
+ "import threading\n",
+ "import xml.parsers.expat\n",
+ "\n",
+ "try:\n",
+ " import gzip\n",
+ " GZIP_BASE = gzip.GzipFile\n",
+ "except ImportError:\n",
+ " gzip = None\n",
+ " GZIP_BASE = object\n",
+ "\n",
+ "__version__ = '2.1.1'\n",
+ "\n",
+ "class FakeShutdownEvent(object):\n",
+ " \"\"\"Class to fake a threading.Event.isSet so that users of this module\n",
+ " are not required to register their own threading.Event()\n",
+ " \"\"\"\n",
+ "\n",
+ " @staticmethod\n",
+ " def isSet():\n",
+ " \"Dummy method to always return false\"\"\"\n",
+ " return False\n",
+ "\n",
+ "# Some global variables we use\n",
+ "DEBUG = False\n",
+ "_GLOBAL_DEFAULT_TIMEOUT = object()\n",
+ "\n",
+ "# Begin import game to handle Python 2 and Python 3\n",
+ "try:\n",
+ " import json\n",
+ "except ImportError:\n",
+ " try:\n",
+ " import simplejson as json\n",
+ " except ImportError:\n",
+ " json = None\n",
+ "\n",
+ "try:\n",
+ " import xml.etree.cElementTree as ET\n",
+ "except ImportError:\n",
+ " try:\n",
+ " import xml.etree.ElementTree as ET\n",
+ " except ImportError:\n",
+ " from xml.dom import minidom as DOM\n",
+ " from xml.parsers.expat import ExpatError\n",
+ " ET = None\n",
+ "\n",
+ "try:\n",
+ " from urllib2 import (urlopen, Request, HTTPError, URLError,\n",
+ " AbstractHTTPHandler, ProxyHandler,\n",
+ " HTTPDefaultErrorHandler, HTTPRedirectHandler,\n",
+ " HTTPErrorProcessor, OpenerDirector)\n",
+ "except ImportError:\n",
+ " from urllib.request import (urlopen, Request, HTTPError, URLError,\n",
+ " AbstractHTTPHandler, ProxyHandler,\n",
+ " HTTPDefaultErrorHandler, HTTPRedirectHandler,\n",
+ " HTTPErrorProcessor, OpenerDirector)\n",
+ "\n",
+ "try:\n",
+ " from httplib import HTTPConnection, BadStatusLine\n",
+ "except ImportError:\n",
+ " from http.client import HTTPConnection, BadStatusLine\n",
+ "\n",
+ "try:\n",
+ " from httplib import HTTPSConnection\n",
+ "except ImportError:\n",
+ " try:\n",
+ " from http.client import HTTPSConnection\n",
+ " except ImportError:\n",
+ " HTTPSConnection = None\n",
+ "\n",
+ "try:\n",
+ " from httplib import FakeSocket\n",
+ "except ImportError:\n",
+ " FakeSocket = None\n",
+ "\n",
+ "try:\n",
+ " from Queue import Queue\n",
+ "except ImportError:\n",
+ " from queue import Queue\n",
+ "\n",
+ "try:\n",
+ " from urlparse import urlparse\n",
+ "except ImportError:\n",
+ " from urllib.parse import urlparse\n",
+ "\n",
+ "try:\n",
+ " from urlparse import parse_qs\n",
+ "except ImportError:\n",
+ " try:\n",
+ " from urllib.parse import parse_qs\n",
+ " except ImportError:\n",
+ " from cgi import parse_qs\n",
+ "\n",
+ "try:\n",
+ " from hashlib import md5\n",
+ "except ImportError:\n",
+ " from md5 import md5\n",
+ "\n",
+ "try:\n",
+ " from argparse import ArgumentParser as ArgParser\n",
+ " from argparse import SUPPRESS as ARG_SUPPRESS\n",
+ " PARSER_TYPE_INT = int\n",
+ " PARSER_TYPE_STR = str\n",
+ " PARSER_TYPE_FLOAT = float\n",
+ "except ImportError:\n",
+ " from optparse import OptionParser as ArgParser\n",
+ " from optparse import SUPPRESS_HELP as ARG_SUPPRESS\n",
+ " PARSER_TYPE_INT = 'int'\n",
+ " PARSER_TYPE_STR = 'string'\n",
+ " PARSER_TYPE_FLOAT = 'float'\n",
+ "\n",
+ "try:\n",
+ " from cStringIO import StringIO\n",
+ " BytesIO = None\n",
+ "except ImportError:\n",
+ " try:\n",
+ " from StringIO import StringIO\n",
+ " BytesIO = None\n",
+ " except ImportError:\n",
+ " from io import StringIO, BytesIO\n",
+ "\n",
+ "try:\n",
+ " import __builtin__\n",
+ "except ImportError:\n",
+ " import builtins\n",
+ " from io import TextIOWrapper, FileIO\n",
+ "\n",
+ " class _Py3Utf8Output(TextIOWrapper):\n",
+ " \"\"\"UTF-8 encoded wrapper around stdout for py3, to override\n",
+ " ASCII stdout\n",
+ " \"\"\"\n",
+ " def __init__(self, f, **kwargs):\n",
+ " buf = FileIO(f.fileno(), 'w')\n",
+ " super(_Py3Utf8Output, self).__init__(\n",
+ " buf,\n",
+ " encoding='utf8',\n",
+ " errors='strict'\n",
+ " )\n",
+ "\n",
+ " def write(self, s):\n",
+ " super(_Py3Utf8Output, self).write(s)\n",
+ " self.flush()\n",
+ "\n",
+ " _py3_print = getattr(builtins, 'print')\n",
+ " try:\n",
+ " _py3_utf8_stdout = _Py3Utf8Output(sys.stdout)\n",
+ " _py3_utf8_stderr = _Py3Utf8Output(sys.stderr)\n",
+ " except OSError:\n",
+ " # sys.stdout/sys.stderr is not a compatible stdout/stderr object\n",
+ " # just use it and hope things go ok\n",
+ " _py3_utf8_stdout = sys.stdout\n",
+ " _py3_utf8_stderr = sys.stderr\n",
+ "\n",
+ " def to_utf8(v):\n",
+ " \"\"\"No-op encode to utf-8 for py3\"\"\"\n",
+ " return v\n",
+ "\n",
+ " def print_(*args, **kwargs):\n",
+ " \"\"\"Wrapper function for py3 to print, with a utf-8 encoded stdout\"\"\"\n",
+ " if kwargs.get('file') == sys.stderr:\n",
+ " kwargs['file'] = _py3_utf8_stderr\n",
+ " else:\n",
+ " kwargs['file'] = kwargs.get('file', _py3_utf8_stdout)\n",
+ " _py3_print(*args, **kwargs)\n",
+ "else:\n",
+ " del __builtin__\n",
+ "\n",
+ " def to_utf8(v):\n",
+ " \"\"\"Encode value to utf-8 if possible for py2\"\"\"\n",
+ " try:\n",
+ " return v.encode('utf8', 'strict')\n",
+ " except AttributeError:\n",
+ " return v\n",
+ "\n",
+ " def print_(*args, **kwargs):\n",
+ " \"\"\"The new-style print function for Python 2.4 and 2.5.\n",
+ " Taken from https://pypi.python.org/pypi/six/\n",
+ " Modified to set encoding to UTF-8 always, and to flush after write\n",
+ " \"\"\"\n",
+ " fp = kwargs.pop(\"file\", sys.stdout)\n",
+ " if fp is None:\n",
+ " return\n",
+ "\n",
+ " def write(data):\n",
+ " if not isinstance(data, basestring):\n",
+ " data = str(data)\n",
+ " # If the file has an encoding, encode unicode with it.\n",
+ " encoding = 'utf8' # Always trust UTF-8 for output\n",
+ " if (isinstance(fp, file) and\n",
+ " isinstance(data, unicode) and\n",
+ " encoding is not None):\n",
+ " errors = getattr(fp, \"errors\", None)\n",
+ " if errors is None:\n",
+ " errors = \"strict\"\n",
+ " data = data.encode(encoding, errors)\n",
+ " fp.write(data)\n",
+ " fp.flush()\n",
+ " want_unicode = False\n",
+ " sep = kwargs.pop(\"sep\", None)\n",
+ " if sep is not None:\n",
+ " if isinstance(sep, unicode):\n",
+ " want_unicode = True\n",
+ " elif not isinstance(sep, str):\n",
+ " raise TypeError(\"sep must be None or a string\")\n",
+ " end = kwargs.pop(\"end\", None)\n",
+ " if end is not None:\n",
+ " if isinstance(end, unicode):\n",
+ " want_unicode = True\n",
+ " elif not isinstance(end, str):\n",
+ " raise TypeError(\"end must be None or a string\")\n",
+ " if kwargs:\n",
+ " raise TypeError(\"invalid keyword arguments to print()\")\n",
+ " if not want_unicode:\n",
+ " for arg in args:\n",
+ " if isinstance(arg, unicode):\n",
+ " want_unicode = True\n",
+ " break\n",
+ " if want_unicode:\n",
+ " newline = unicode(\"\\n\")\n",
+ " space = unicode(\" \")\n",
+ " else:\n",
+ " newline = \"\\n\"\n",
+ " space = \" \"\n",
+ " if sep is None:\n",
+ " sep = space\n",
+ " if end is None:\n",
+ " end = newline\n",
+ " for i, arg in enumerate(args):\n",
+ " if i:\n",
+ " write(sep)\n",
+ " write(arg)\n",
+ " write(end)\n",
+ "\n",
+ "\n",
+ "# Exception \"constants\" to support Python 2 through Python 3\n",
+ "try:\n",
+ " import ssl\n",
+ " try:\n",
+ " CERT_ERROR = (ssl.CertificateError,)\n",
+ " except AttributeError:\n",
+ " CERT_ERROR = tuple()\n",
+ "\n",
+ " HTTP_ERRORS = (\n",
+ " (HTTPError, URLError, socket.error, ssl.SSLError, BadStatusLine) +\n",
+ " CERT_ERROR\n",
+ " )\n",
+ "except ImportError:\n",
+ " ssl = None\n",
+ " HTTP_ERRORS = (HTTPError, URLError, socket.error, BadStatusLine)\n",
+ "\n",
+ "\n",
+ "class SpeedtestException(Exception):\n",
+ " \"\"\"Base exception for this module\"\"\"\n",
+ "\n",
+ "\n",
+ "class SpeedtestCLIError(SpeedtestException):\n",
+ " \"\"\"Generic exception for raising errors during CLI operation\"\"\"\n",
+ "\n",
+ "\n",
+ "class SpeedtestHTTPError(SpeedtestException):\n",
+ " \"\"\"Base HTTP exception for this module\"\"\"\n",
+ "\n",
+ "\n",
+ "class SpeedtestConfigError(SpeedtestException):\n",
+ " \"\"\"Configuration XML is invalid\"\"\"\n",
+ "\n",
+ "\n",
+ "class SpeedtestServersError(SpeedtestException):\n",
+ " \"\"\"Servers XML is invalid\"\"\"\n",
+ "\n",
+ "\n",
+ "class ConfigRetrievalError(SpeedtestHTTPError):\n",
+ " \"\"\"Could not retrieve config.php\"\"\"\n",
+ "\n",
+ "\n",
+ "class ServersRetrievalError(SpeedtestHTTPError):\n",
+ " \"\"\"Could not retrieve speedtest-servers.php\"\"\"\n",
+ "\n",
+ "\n",
+ "class InvalidServerIDType(SpeedtestException):\n",
+ " \"\"\"Server ID used for filtering was not an integer\"\"\"\n",
+ "\n",
+ "\n",
+ "class NoMatchedServers(SpeedtestException):\n",
+ " \"\"\"No servers matched when filtering\"\"\"\n",
+ "\n",
+ "\n",
+ "class SpeedtestMiniConnectFailure(SpeedtestException):\n",
+ " \"\"\"Could not connect to the provided speedtest mini server\"\"\"\n",
+ "\n",
+ "\n",
+ "class InvalidSpeedtestMiniServer(SpeedtestException):\n",
+ " \"\"\"Server provided as a speedtest mini server does not actually appear\n",
+ " to be a speedtest mini server\n",
+ " \"\"\"\n",
+ "\n",
+ "\n",
+ "class ShareResultsConnectFailure(SpeedtestException):\n",
+ " \"\"\"Could not connect to speedtest.net API to POST results\"\"\"\n",
+ "\n",
+ "\n",
+ "class ShareResultsSubmitFailure(SpeedtestException):\n",
+ " \"\"\"Unable to successfully POST results to speedtest.net API after\n",
+ " connection\n",
+ " \"\"\"\n",
+ "\n",
+ "\n",
+ "class SpeedtestUploadTimeout(SpeedtestException):\n",
+ " \"\"\"testlength configuration reached during upload\n",
+ " Used to ensure the upload halts when no additional data should be sent\n",
+ " \"\"\"\n",
+ "\n",
+ "\n",
+ "class SpeedtestBestServerFailure(SpeedtestException):\n",
+ " \"\"\"Unable to determine best server\"\"\"\n",
+ "\n",
+ "\n",
+ "class SpeedtestMissingBestServer(SpeedtestException):\n",
+ " \"\"\"get_best_server not called or not able to determine best server\"\"\"\n",
+ "\n",
+ "\n",
+ "def create_connection(address, timeout=_GLOBAL_DEFAULT_TIMEOUT,\n",
+ " source_address=None):\n",
+ " \"\"\"Connect to *address* and return the socket object.\n",
+ " Convenience function. Connect to *address* (a 2-tuple ``(host,\n",
+ " port)``) and return the socket object. Passing the optional\n",
+ " *timeout* parameter will set the timeout on the socket instance\n",
+ " before attempting to connect. If no *timeout* is supplied, the\n",
+ " global default timeout setting returned by :func:`getdefaulttimeout`\n",
+ " is used. If *source_address* is set it must be a tuple of (host, port)\n",
+ " for the socket to bind as a source address before making the connection.\n",
+ " An host of '' or port 0 tells the OS to use the default.\n",
+ " Largely vendored from Python 2.7, modified to work with Python 2.4\n",
+ " \"\"\"\n",
+ "\n",
+ " host, port = address\n",
+ " err = None\n",
+ " for res in socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM):\n",
+ " af, socktype, proto, canonname, sa = res\n",
+ " sock = None\n",
+ " try:\n",
+ " sock = socket.socket(af, socktype, proto)\n",
+ " if timeout is not _GLOBAL_DEFAULT_TIMEOUT:\n",
+ " sock.settimeout(float(timeout))\n",
+ " if source_address:\n",
+ " sock.bind(source_address)\n",
+ " sock.connect(sa)\n",
+ " return sock\n",
+ "\n",
+ " except socket.error:\n",
+ " err = get_exception()\n",
+ " if sock is not None:\n",
+ " sock.close()\n",
+ "\n",
+ " if err is not None:\n",
+ " raise err\n",
+ " else:\n",
+ " raise socket.error(\"getaddrinfo returns an empty list\")\n",
+ "\n",
+ "\n",
+ "class SpeedtestHTTPConnection(HTTPConnection):\n",
+ " \"\"\"Custom HTTPConnection to support source_address across\n",
+ " Python 2.4 - Python 3\n",
+ " \"\"\"\n",
+ " def __init__(self, *args, **kwargs):\n",
+ " source_address = kwargs.pop('source_address', None)\n",
+ " timeout = kwargs.pop('timeout', 10)\n",
+ "\n",
+ " HTTPConnection.__init__(self, *args, **kwargs)\n",
+ "\n",
+ " self.source_address = source_address\n",
+ " self.timeout = timeout\n",
+ "\n",
+ " def connect(self):\n",
+ " \"\"\"Connect to the host and port specified in __init__.\"\"\"\n",
+ " try:\n",
+ " self.sock = socket.create_connection(\n",
+ " (self.host, self.port),\n",
+ " self.timeout,\n",
+ " self.source_address\n",
+ " )\n",
+ " except (AttributeError, TypeError):\n",
+ " self.sock = create_connection(\n",
+ " (self.host, self.port),\n",
+ " self.timeout,\n",
+ " self.source_address\n",
+ " )\n",
+ "\n",
+ "\n",
+ "if HTTPSConnection:\n",
+ " class SpeedtestHTTPSConnection(HTTPSConnection,\n",
+ " SpeedtestHTTPConnection):\n",
+ " \"\"\"Custom HTTPSConnection to support source_address across\n",
+ " Python 2.4 - Python 3\n",
+ " \"\"\"\n",
+ " def __init__(self, *args, **kwargs):\n",
+ " source_address = kwargs.pop('source_address', None)\n",
+ " timeout = kwargs.pop('timeout', 10)\n",
+ "\n",
+ " HTTPSConnection.__init__(self, *args, **kwargs)\n",
+ "\n",
+ " self.timeout = timeout\n",
+ " self.source_address = source_address\n",
+ "\n",
+ " def connect(self):\n",
+ " \"Connect to a host on a given (SSL) port.\"\n",
+ "\n",
+ " SpeedtestHTTPConnection.connect(self)\n",
+ "\n",
+ " if ssl:\n",
+ " try:\n",
+ " kwargs = {}\n",
+ " if hasattr(ssl, 'SSLContext'):\n",
+ " kwargs['server_hostname'] = self.host\n",
+ " self.sock = self._context.wrap_socket(self.sock, **kwargs)\n",
+ " except AttributeError:\n",
+ " self.sock = ssl.wrap_socket(self.sock)\n",
+ " try:\n",
+ " self.sock.server_hostname = self.host\n",
+ " except AttributeError:\n",
+ " pass\n",
+ " elif FakeSocket:\n",
+ " # Python 2.4/2.5 support\n",
+ " try:\n",
+ " self.sock = FakeSocket(self.sock, socket.ssl(self.sock))\n",
+ " except AttributeError:\n",
+ " raise SpeedtestException(\n",
+ " 'This version of Python does not support HTTPS/SSL '\n",
+ " 'functionality'\n",
+ " )\n",
+ " else:\n",
+ " raise SpeedtestException(\n",
+ " 'This version of Python does not support HTTPS/SSL '\n",
+ " 'functionality'\n",
+ " )\n",
+ "\n",
+ "\n",
+ "def _build_connection(connection, source_address, timeout, context=None):\n",
+ " \"\"\"Cross Python 2.4 - Python 3 callable to build an ``HTTPConnection`` or\n",
+ " ``HTTPSConnection`` with the args we need\n",
+ " Called from ``http(s)_open`` methods of ``SpeedtestHTTPHandler`` or\n",
+ " ``SpeedtestHTTPSHandler``\n",
+ " \"\"\"\n",
+ " def inner(host, **kwargs):\n",
+ " kwargs.update({\n",
+ " 'source_address': source_address,\n",
+ " 'timeout': timeout\n",
+ " })\n",
+ " if context:\n",
+ " kwargs['context'] = context\n",
+ " return connection(host, **kwargs)\n",
+ " return inner\n",
+ "\n",
+ "\n",
+ "class SpeedtestHTTPHandler(AbstractHTTPHandler):\n",
+ " \"\"\"Custom ``HTTPHandler`` that can build a ``HTTPConnection`` with the\n",
+ " args we need for ``source_address`` and ``timeout``\n",
+ " \"\"\"\n",
+ " def __init__(self, debuglevel=0, source_address=None, timeout=10):\n",
+ " AbstractHTTPHandler.__init__(self, debuglevel)\n",
+ " self.source_address = source_address\n",
+ " self.timeout = timeout\n",
+ "\n",
+ " def http_open(self, req):\n",
+ " return self.do_open(\n",
+ " _build_connection(\n",
+ " SpeedtestHTTPConnection,\n",
+ " self.source_address,\n",
+ " self.timeout\n",
+ " ),\n",
+ " req\n",
+ " )\n",
+ "\n",
+ " http_request = AbstractHTTPHandler.do_request_\n",
+ "\n",
+ "\n",
+ "class SpeedtestHTTPSHandler(AbstractHTTPHandler):\n",
+ " \"\"\"Custom ``HTTPSHandler`` that can build a ``HTTPSConnection`` with the\n",
+ " args we need for ``source_address`` and ``timeout``\n",
+ " \"\"\"\n",
+ " def __init__(self, debuglevel=0, context=None, source_address=None,\n",
+ " timeout=10):\n",
+ " AbstractHTTPHandler.__init__(self, debuglevel)\n",
+ " self._context = context\n",
+ " self.source_address = source_address\n",
+ " self.timeout = timeout\n",
+ "\n",
+ " def https_open(self, req):\n",
+ " return self.do_open(\n",
+ " _build_connection(\n",
+ " SpeedtestHTTPSConnection,\n",
+ " self.source_address,\n",
+ " self.timeout,\n",
+ " context=self._context,\n",
+ " ),\n",
+ " req\n",
+ " )\n",
+ "\n",
+ " https_request = AbstractHTTPHandler.do_request_\n",
+ "\n",
+ "\n",
+ "def build_opener(source_address=None, timeout=10):\n",
+ " \"\"\"Function similar to ``urllib2.build_opener`` that will build\n",
+ " an ``OpenerDirector`` with the explicit handlers we want,\n",
+ " ``source_address`` for binding, ``timeout`` and our custom\n",
+ " `User-Agent`\n",
+ " \"\"\"\n",
+ "\n",
+ " printer('Timeout set to %d' % timeout, debug=True)\n",
+ "\n",
+ " if source_address:\n",
+ " source_address_tuple = (source_address, 0)\n",
+ " printer('Binding to source address: %r' % (source_address_tuple,),\n",
+ " debug=True)\n",
+ " else:\n",
+ " source_address_tuple = None\n",
+ "\n",
+ " handlers = [\n",
+ " ProxyHandler(),\n",
+ " SpeedtestHTTPHandler(source_address=source_address_tuple,\n",
+ " timeout=timeout),\n",
+ " SpeedtestHTTPSHandler(source_address=source_address_tuple,\n",
+ " timeout=timeout),\n",
+ " HTTPDefaultErrorHandler(),\n",
+ " HTTPRedirectHandler(),\n",
+ " HTTPErrorProcessor()\n",
+ " ]\n",
+ "\n",
+ " opener = OpenerDirector()\n",
+ " opener.addheaders = [('User-agent', build_user_agent())]\n",
+ "\n",
+ " for handler in handlers:\n",
+ " opener.add_handler(handler)\n",
+ "\n",
+ " return opener\n",
+ "\n",
+ "\n",
+ "class GzipDecodedResponse(GZIP_BASE):\n",
+ " \"\"\"A file-like object to decode a response encoded with the gzip\n",
+ " method, as described in RFC 1952.\n",
+ " Largely copied from ``xmlrpclib``/``xmlrpc.client`` and modified\n",
+ " to work for py2.4-py3\n",
+ " \"\"\"\n",
+ " def __init__(self, response):\n",
+ " # response doesn't support tell() and read(), required by\n",
+ " # GzipFile\n",
+ " if not gzip:\n",
+ " raise SpeedtestHTTPError('HTTP response body is gzip encoded, '\n",
+ " 'but gzip support is not available')\n",
+ " IO = BytesIO or StringIO\n",
+ " self.io = IO()\n",
+ " while 1:\n",
+ " chunk = response.read(1024)\n",
+ " if len(chunk) == 0:\n",
+ " break\n",
+ " self.io.write(chunk)\n",
+ " self.io.seek(0)\n",
+ " gzip.GzipFile.__init__(self, mode='rb', fileobj=self.io)\n",
+ "\n",
+ " def close(self):\n",
+ " try:\n",
+ " gzip.GzipFile.close(self)\n",
+ " finally:\n",
+ " self.io.close()\n",
+ "\n",
+ "\n",
+ "def get_exception():\n",
+ " \"\"\"Helper function to work with py2.4-py3 for getting the current\n",
+ " exception in a try/except block\n",
+ " \"\"\"\n",
+ " return sys.exc_info()[1]\n",
+ "\n",
+ "\n",
+ "def distance(origin, destination):\n",
+ " \"\"\"Determine distance between 2 sets of [lat,lon] in km\"\"\"\n",
+ "\n",
+ " lat1, lon1 = origin\n",
+ " lat2, lon2 = destination\n",
+ " radius = 6371 # km\n",
+ "\n",
+ " dlat = math.radians(lat2 - lat1)\n",
+ " dlon = math.radians(lon2 - lon1)\n",
+ " a = (math.sin(dlat / 2) * math.sin(dlat / 2) +\n",
+ " math.cos(math.radians(lat1)) *\n",
+ " math.cos(math.radians(lat2)) * math.sin(dlon / 2) *\n",
+ " math.sin(dlon / 2))\n",
+ " c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))\n",
+ " d = radius * c\n",
+ "\n",
+ " return d\n",
+ "\n",
+ "\n",
+ "def build_user_agent():\n",
+ " \"\"\"Build a Mozilla/5.0 compatible User-Agent string\"\"\"\n",
+ "\n",
+ " ua_tuple = (\n",
+ " 'Mozilla/5.0',\n",
+ " '(%s; U; %s; en-us)' % (platform.platform(),\n",
+ " platform.architecture()[0]),\n",
+ " 'Python/%s' % platform.python_version(),\n",
+ " '(KHTML, like Gecko)',\n",
+ " 'speedtest-cli/%s' % __version__\n",
+ " )\n",
+ " user_agent = ' '.join(ua_tuple)\n",
+ " printer('User-Agent: %s' % user_agent, debug=True)\n",
+ " return user_agent\n",
+ "\n",
+ "\n",
+ "def build_request(url, data=None, headers=None, bump='0', secure=False):\n",
+ " \"\"\"Build a urllib2 request object\n",
+ " This function automatically adds a User-Agent header to all requests\n",
+ " \"\"\"\n",
+ "\n",
+ " if not headers:\n",
+ " headers = {}\n",
+ "\n",
+ " if url[0] == ':':\n",
+ " scheme = ('http', 'https')[bool(secure)]\n",
+ " schemed_url = '%s%s' % (scheme, url)\n",
+ " else:\n",
+ " schemed_url = url\n",
+ "\n",
+ " if '?' in url:\n",
+ " delim = '&'\n",
+ " else:\n",
+ " delim = '?'\n",
+ "\n",
+ " # WHO YOU GONNA CALL? CACHE BUSTERS!\n",
+ " final_url = '%s%sx=%s.%s' % (schemed_url, delim,\n",
+ " int(timeit.time.time() * 1000),\n",
+ " bump)\n",
+ "\n",
+ " headers.update({\n",
+ " 'Cache-Control': 'no-cache',\n",
+ " })\n",
+ "\n",
+ " printer('%s %s' % (('GET', 'POST')[bool(data)], final_url),\n",
+ " debug=True)\n",
+ "\n",
+ " return Request(final_url, data=data, headers=headers)\n",
+ "\n",
+ "\n",
+ "def catch_request(request, opener=None):\n",
+ " \"\"\"Helper function to catch common exceptions encountered when\n",
+ " establishing a connection with a HTTP/HTTPS request\n",
+ " \"\"\"\n",
+ "\n",
+ " if opener:\n",
+ " _open = opener.open\n",
+ " else:\n",
+ " _open = urlopen\n",
+ "\n",
+ " try:\n",
+ " uh = _open(request)\n",
+ " if request.get_full_url() != uh.geturl():\n",
+ " printer('Redirected to %s' % uh.geturl(), debug=True)\n",
+ " return uh, False\n",
+ " except HTTP_ERRORS:\n",
+ " e = get_exception()\n",
+ " return None, e\n",
+ "\n",
+ "\n",
+ "def get_response_stream(response):\n",
+ " \"\"\"Helper function to return either a Gzip reader if\n",
+ " ``Content-Encoding`` is ``gzip`` otherwise the response itself\n",
+ " \"\"\"\n",
+ "\n",
+ " try:\n",
+ " getheader = response.headers.getheader\n",
+ " except AttributeError:\n",
+ " getheader = response.getheader\n",
+ "\n",
+ " if getheader('content-encoding') == 'gzip':\n",
+ " return GzipDecodedResponse(response)\n",
+ "\n",
+ " return response\n",
+ "\n",
+ "\n",
+ "def get_attributes_by_tag_name(dom, tag_name):\n",
+ " \"\"\"Retrieve an attribute from an XML document and return it in a\n",
+ " consistent format\n",
+ " Only used with xml.dom.minidom, which is likely only to be used\n",
+ " with python versions older than 2.5\n",
+ " \"\"\"\n",
+ " elem = dom.getElementsByTagName(tag_name)[0]\n",
+ " return dict(list(elem.attributes.items()))\n",
+ "\n",
+ "\n",
+ "def print_dots(shutdown_event):\n",
+ " \"\"\"Built in callback function used by Thread classes for printing\n",
+ " status\n",
+ " \"\"\"\n",
+ " def inner(current, total, start=False, end=False):\n",
+ " if shutdown_event.isSet():\n",
+ " return\n",
+ "\n",
+ " sys.stdout.write('.')\n",
+ " if current + 1 == total and end is True:\n",
+ " sys.stdout.write('\\n')\n",
+ " sys.stdout.flush()\n",
+ " return inner\n",
+ "\n",
+ "\n",
+ "def do_nothing(*args, **kwargs):\n",
+ " pass\n",
+ "\n",
+ "\n",
+ "class HTTPDownloader(threading.Thread):\n",
+ " \"\"\"Thread class for retrieving a URL\"\"\"\n",
+ "\n",
+ " def __init__(self, i, request, start, timeout, opener=None,\n",
+ " shutdown_event=None):\n",
+ " threading.Thread.__init__(self)\n",
+ " self.request = request\n",
+ " self.result = [0]\n",
+ " self.starttime = start\n",
+ " self.timeout = timeout\n",
+ " self.i = i\n",
+ " if opener:\n",
+ " self._opener = opener.open\n",
+ " else:\n",
+ " self._opener = urlopen\n",
+ "\n",
+ " if shutdown_event:\n",
+ " self._shutdown_event = shutdown_event\n",
+ " else:\n",
+ " self._shutdown_event = FakeShutdownEvent()\n",
+ "\n",
+ " def run(self):\n",
+ " try:\n",
+ " if (timeit.default_timer() - self.starttime) <= self.timeout:\n",
+ " f = self._opener(self.request)\n",
+ " while (not self._shutdown_event.isSet() and\n",
+ " (timeit.default_timer() - self.starttime) <=\n",
+ " self.timeout):\n",
+ " self.result.append(len(f.read(10240)))\n",
+ " if self.result[-1] == 0:\n",
+ " break\n",
+ " f.close()\n",
+ " except IOError:\n",
+ " pass\n",
+ "\n",
+ "\n",
+ "class HTTPUploaderData(object):\n",
+ " \"\"\"File like object to improve cutting off the upload once the timeout\n",
+ " has been reached\n",
+ " \"\"\"\n",
+ "\n",
+ " def __init__(self, length, start, timeout, shutdown_event=None):\n",
+ " self.length = length\n",
+ " self.start = start\n",
+ " self.timeout = timeout\n",
+ "\n",
+ " if shutdown_event:\n",
+ " self._shutdown_event = shutdown_event\n",
+ " else:\n",
+ " self._shutdown_event = FakeShutdownEvent()\n",
+ "\n",
+ " self._data = None\n",
+ "\n",
+ " self.total = [0]\n",
+ "\n",
+ " def pre_allocate(self):\n",
+ " chars = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'\n",
+ " multiplier = int(round(int(self.length) / 36.0))\n",
+ " IO = BytesIO or StringIO\n",
+ " try:\n",
+ " self._data = IO(\n",
+ " ('content1=%s' %\n",
+ " (chars * multiplier)[0:int(self.length) - 9]\n",
+ " ).encode()\n",
+ " )\n",
+ " except MemoryError:\n",
+ " raise SpeedtestCLIError(\n",
+ " 'Insufficient memory to pre-allocate upload data. Please '\n",
+ " 'use --no-pre-allocate'\n",
+ " )\n",
+ "\n",
+ " @property\n",
+ " def data(self):\n",
+ " if not self._data:\n",
+ " self.pre_allocate()\n",
+ " return self._data\n",
+ "\n",
+ " def read(self, n=10240):\n",
+ " if ((timeit.default_timer() - self.start) <= self.timeout and\n",
+ " not self._shutdown_event.isSet()):\n",
+ " chunk = self.data.read(n)\n",
+ " self.total.append(len(chunk))\n",
+ " return chunk\n",
+ " else:\n",
+ " raise SpeedtestUploadTimeout()\n",
+ "\n",
+ " def __len__(self):\n",
+ " return self.length\n",
+ "\n",
+ "\n",
+ "class HTTPUploader(threading.Thread):\n",
+ " \"\"\"Thread class for putting a URL\"\"\"\n",
+ "\n",
+ " def __init__(self, i, request, start, size, timeout, opener=None,\n",
+ " shutdown_event=None):\n",
+ " threading.Thread.__init__(self)\n",
+ " self.request = request\n",
+ " self.request.data.start = self.starttime = start\n",
+ " self.size = size\n",
+ " self.result = None\n",
+ " self.timeout = timeout\n",
+ " self.i = i\n",
+ "\n",
+ " if opener:\n",
+ " self._opener = opener.open\n",
+ " else:\n",
+ " self._opener = urlopen\n",
+ "\n",
+ " if shutdown_event:\n",
+ " self._shutdown_event = shutdown_event\n",
+ " else:\n",
+ " self._shutdown_event = FakeShutdownEvent()\n",
+ "\n",
+ " def run(self):\n",
+ " request = self.request\n",
+ " try:\n",
+ " if ((timeit.default_timer() - self.starttime) <= self.timeout and\n",
+ " not self._shutdown_event.isSet()):\n",
+ " try:\n",
+ " f = self._opener(request)\n",
+ " except TypeError:\n",
+ " # PY24 expects a string or buffer\n",
+ " # This also causes issues with Ctrl-C, but we will concede\n",
+ " # for the moment that Ctrl-C on PY24 isn't immediate\n",
+ " request = build_request(self.request.get_full_url(),\n",
+ " data=request.data.read(self.size))\n",
+ " f = self._opener(request)\n",
+ " f.read(11)\n",
+ " f.close()\n",
+ " self.result = sum(self.request.data.total)\n",
+ " else:\n",
+ " self.result = 0\n",
+ " except (IOError, SpeedtestUploadTimeout):\n",
+ " self.result = sum(self.request.data.total)\n",
+ "\n",
+ "\n",
+ "class SpeedtestResults(object):\n",
+ " \"\"\"Class for holding the results of a speedtest, including:\n",
+ " Download speed\n",
+ " Upload speed\n",
+ " Ping/Latency to test server\n",
+ " Data about server that the test was run against\n",
+ " Additionally this class can return a result data as a dictionary or CSV,\n",
+ " as well as submit a POST of the result data to the speedtest.net API\n",
+ " to get a share results image link.\n",
+ " \"\"\"\n",
+ "\n",
+ " def __init__(self, download=0, upload=0, ping=0, server=None, client=None,\n",
+ " opener=None, secure=False):\n",
+ " self.download = download\n",
+ " self.upload = upload\n",
+ " self.ping = ping\n",
+ " if server is None:\n",
+ " self.server = {}\n",
+ " else:\n",
+ " self.server = server\n",
+ " self.client = client or {}\n",
+ "\n",
+ " self._share = None\n",
+ " self.timestamp = '%sZ' % datetime.datetime.utcnow().isoformat()\n",
+ " self.bytes_received = 0\n",
+ " self.bytes_sent = 0\n",
+ "\n",
+ " if opener:\n",
+ " self._opener = opener\n",
+ " else:\n",
+ " self._opener = build_opener()\n",
+ "\n",
+ " self._secure = secure\n",
+ "\n",
+ " def __repr__(self):\n",
+ " return repr(self.dict())\n",
+ "\n",
+ " def share(self):\n",
+ " \"\"\"POST data to the speedtest.net API to obtain a share results\n",
+ " link\n",
+ " \"\"\"\n",
+ "\n",
+ " if self._share:\n",
+ " return self._share\n",
+ "\n",
+ " download = int(round(self.download / 1000.0, 0))\n",
+ " ping = int(round(self.ping, 0))\n",
+ " upload = int(round(self.upload / 1000.0, 0))\n",
+ "\n",
+ " # Build the request to send results back to speedtest.net\n",
+ " # We use a list instead of a dict because the API expects parameters\n",
+ " # in a certain order\n",
+ " api_data = [\n",
+ " 'recommendedserverid=%s' % self.server['id'],\n",
+ " 'ping=%s' % ping,\n",
+ " 'screenresolution=',\n",
+ " 'promo=',\n",
+ " 'download=%s' % download,\n",
+ " 'screendpi=',\n",
+ " 'upload=%s' % upload,\n",
+ " 'testmethod=http',\n",
+ " 'hash=%s' % md5(('%s-%s-%s-%s' %\n",
+ " (ping, upload, download, '297aae72'))\n",
+ " .encode()).hexdigest(),\n",
+ " 'touchscreen=none',\n",
+ " 'startmode=pingselect',\n",
+ " 'accuracy=1',\n",
+ " 'bytesreceived=%s' % self.bytes_received,\n",
+ " 'bytessent=%s' % self.bytes_sent,\n",
+ " 'serverid=%s' % self.server['id'],\n",
+ " ]\n",
+ "\n",
+ " headers = {'Referer': 'http://c.speedtest.net/flash/speedtest.swf'}\n",
+ " request = build_request('://www.speedtest.net/api/api.php',\n",
+ " data='&'.join(api_data).encode(),\n",
+ " headers=headers, secure=self._secure)\n",
+ " f, e = catch_request(request, opener=self._opener)\n",
+ " if e:\n",
+ " raise ShareResultsConnectFailure(e)\n",
+ "\n",
+ " response = f.read()\n",
+ " code = f.code\n",
+ " f.close()\n",
+ "\n",
+ " if int(code) != 200:\n",
+ " raise ShareResultsSubmitFailure('Could not submit results to '\n",
+ " 'speedtest.net')\n",
+ "\n",
+ " qsargs = parse_qs(response.decode())\n",
+ " resultid = qsargs.get('resultid')\n",
+ " if not resultid or len(resultid) != 1:\n",
+ " raise ShareResultsSubmitFailure('Could not submit results to '\n",
+ " 'speedtest.net')\n",
+ "\n",
+ " self._share = 'http://www.speedtest.net/result/%s.png' % resultid[0]\n",
+ "\n",
+ " return self._share\n",
+ "\n",
+ " def dict(self):\n",
+ " \"\"\"Return dictionary of result data\"\"\"\n",
+ "\n",
+ " return {\n",
+ " 'download': self.download,\n",
+ " 'upload': self.upload,\n",
+ " 'ping': self.ping,\n",
+ " 'server': self.server,\n",
+ " 'timestamp': self.timestamp,\n",
+ " 'bytes_sent': self.bytes_sent,\n",
+ " 'bytes_received': self.bytes_received,\n",
+ " 'share': self._share,\n",
+ " 'client': self.client,\n",
+ " }\n",
+ "\n",
+ " @staticmethod\n",
+ " def csv_header(delimiter=','):\n",
+ " \"\"\"Return CSV Headers\"\"\"\n",
+ "\n",
+ " row = ['Server ID', 'Sponsor', 'Server Name', 'Timestamp', 'Distance',\n",
+ " 'Ping', 'Download', 'Upload', 'Share', 'IP Address']\n",
+ " out = StringIO()\n",
+ " writer = csv.writer(out, delimiter=delimiter, lineterminator='')\n",
+ " writer.writerow([to_utf8(v) for v in row])\n",
+ " return out.getvalue()\n",
+ "\n",
+ " def csv(self, delimiter=','):\n",
+ " \"\"\"Return data in CSV format\"\"\"\n",
+ "\n",
+ " data = self.dict()\n",
+ " out = StringIO()\n",
+ " writer = csv.writer(out, delimiter=delimiter, lineterminator='')\n",
+ " row = [data['server']['id'], data['server']['sponsor'],\n",
+ " data['server']['name'], data['timestamp'],\n",
+ " data['server']['d'], data['ping'], data['download'],\n",
+ " data['upload'], self._share or '', self.client['ip']]\n",
+ " writer.writerow([to_utf8(v) for v in row])\n",
+ " return out.getvalue()\n",
+ "\n",
+ " def json(self, pretty=False):\n",
+ " \"\"\"Return data in JSON format\"\"\"\n",
+ "\n",
+ " kwargs = {}\n",
+ " if pretty:\n",
+ " kwargs.update({\n",
+ " 'indent': 4,\n",
+ " 'sort_keys': True\n",
+ " })\n",
+ " return json.dumps(self.dict(), **kwargs)\n",
+ "\n",
+ "\n",
+ "class Speedtest(object):\n",
+ " \"\"\"Class for performing standard speedtest.net testing operations\"\"\"\n",
+ "\n",
+ " def __init__(self, config=None, source_address=None, timeout=10,\n",
+ " secure=False, shutdown_event=None):\n",
+ " self.config = {}\n",
+ "\n",
+ " self._source_address = source_address\n",
+ " self._timeout = timeout\n",
+ " self._opener = build_opener(source_address, timeout)\n",
+ "\n",
+ " self._secure = secure\n",
+ "\n",
+ " if shutdown_event:\n",
+ " self._shutdown_event = shutdown_event\n",
+ " else:\n",
+ " self._shutdown_event = FakeShutdownEvent()\n",
+ "\n",
+ " self.get_config()\n",
+ " if config is not None:\n",
+ " self.config.update(config)\n",
+ "\n",
+ " self.servers = {}\n",
+ " self.closest = []\n",
+ " self._best = {}\n",
+ "\n",
+ " self.results = SpeedtestResults(\n",
+ " client=self.config['client'],\n",
+ " opener=self._opener,\n",
+ " secure=secure,\n",
+ " )\n",
+ "\n",
+ " @property\n",
+ " def best(self):\n",
+ " if not self._best:\n",
+ " self.get_best_server()\n",
+ " return self._best\n",
+ "\n",
+ " def get_config(self):\n",
+ " \"\"\"Download the speedtest.net configuration and return only the data\n",
+ " we are interested in\n",
+ " \"\"\"\n",
+ "\n",
+ " headers = {}\n",
+ " if gzip:\n",
+ " headers['Accept-Encoding'] = 'gzip'\n",
+ " request = build_request('://www.speedtest.net/speedtest-config.php',\n",
+ " headers=headers, secure=self._secure)\n",
+ " uh, e = catch_request(request, opener=self._opener)\n",
+ " if e:\n",
+ " raise ConfigRetrievalError(e)\n",
+ " configxml_list = []\n",
+ "\n",
+ " stream = get_response_stream(uh)\n",
+ "\n",
+ " while 1:\n",
+ " try:\n",
+ " configxml_list.append(stream.read(1024))\n",
+ " except (OSError, EOFError):\n",
+ " raise ConfigRetrievalError(get_exception())\n",
+ " if len(configxml_list[-1]) == 0:\n",
+ " break\n",
+ " stream.close()\n",
+ " uh.close()\n",
+ "\n",
+ " if int(uh.code) != 200:\n",
+ " return None\n",
+ "\n",
+ " configxml = ''.encode().join(configxml_list)\n",
+ "\n",
+ " printer('Config XML:\\n%s' % configxml, debug=True)\n",
+ "\n",
+ " try:\n",
+ " try:\n",
+ " root = ET.fromstring(configxml)\n",
+ " except ET.ParseError:\n",
+ " e = get_exception()\n",
+ " raise SpeedtestConfigError(\n",
+ " 'Malformed speedtest.net configuration: %s' % e\n",
+ " )\n",
+ " server_config = root.find('server-config').attrib\n",
+ " download = root.find('download').attrib\n",
+ " upload = root.find('upload').attrib\n",
+ " # times = root.find('times').attrib\n",
+ " client = root.find('client').attrib\n",
+ "\n",
+ " except AttributeError:\n",
+ " try:\n",
+ " root = DOM.parseString(configxml)\n",
+ " except ExpatError:\n",
+ " e = get_exception()\n",
+ " raise SpeedtestConfigError(\n",
+ " 'Malformed speedtest.net configuration: %s' % e\n",
+ " )\n",
+ " server_config = get_attributes_by_tag_name(root, 'server-config')\n",
+ " download = get_attributes_by_tag_name(root, 'download')\n",
+ " upload = get_attributes_by_tag_name(root, 'upload')\n",
+ " # times = get_attributes_by_tag_name(root, 'times')\n",
+ " client = get_attributes_by_tag_name(root, 'client')\n",
+ "\n",
+ " ignore_servers = list(\n",
+ " map(int, server_config['ignoreids'].split(','))\n",
+ " )\n",
+ "\n",
+ " ratio = int(upload['ratio'])\n",
+ " upload_max = int(upload['maxchunkcount'])\n",
+ " up_sizes = [32768, 65536, 131072, 262144, 524288, 1048576, 7340032]\n",
+ " sizes = {\n",
+ " 'upload': up_sizes[ratio - 1:],\n",
+ " 'download': [350, 500, 750, 1000, 1500, 2000, 2500,\n",
+ " 3000, 3500, 4000]\n",
+ " }\n",
+ "\n",
+ " size_count = len(sizes['upload'])\n",
+ "\n",
+ " upload_count = int(math.ceil(upload_max / size_count))\n",
+ "\n",
+ " counts = {\n",
+ " 'upload': upload_count,\n",
+ " 'download': int(download['threadsperurl'])\n",
+ " }\n",
+ "\n",
+ " threads = {\n",
+ " 'upload': int(upload['threads']),\n",
+ " 'download': int(server_config['threadcount']) * 2\n",
+ " }\n",
+ "\n",
+ " length = {\n",
+ " 'upload': int(upload['testlength']),\n",
+ " 'download': int(download['testlength'])\n",
+ " }\n",
+ "\n",
+ " self.config.update({\n",
+ " 'client': client,\n",
+ " 'ignore_servers': ignore_servers,\n",
+ " 'sizes': sizes,\n",
+ " 'counts': counts,\n",
+ " 'threads': threads,\n",
+ " 'length': length,\n",
+ " 'upload_max': upload_count * size_count\n",
+ " })\n",
+ "\n",
+ " try:\n",
+ " self.lat_lon = (float(client['lat']), float(client['lon']))\n",
+ " except ValueError:\n",
+ " raise SpeedtestConfigError(\n",
+ " 'Unknown location: lat=%r lon=%r' %\n",
+ " (client.get('lat'), client.get('lon'))\n",
+ " )\n",
+ "\n",
+ " printer('Config:\\n%r' % self.config, debug=True)\n",
+ "\n",
+ " return self.config\n",
+ "\n",
+ " def get_servers(self, servers=None, exclude=None):\n",
+ " \"\"\"Retrieve a the list of speedtest.net servers, optionally filtered\n",
+ " to servers matching those specified in the ``servers`` argument\n",
+ " \"\"\"\n",
+ " if servers is None:\n",
+ " servers = []\n",
+ "\n",
+ " if exclude is None:\n",
+ " exclude = []\n",
+ "\n",
+ " self.servers.clear()\n",
+ "\n",
+ " for server_list in (servers, exclude):\n",
+ " for i, s in enumerate(server_list):\n",
+ " try:\n",
+ " server_list[i] = int(s)\n",
+ " except ValueError:\n",
+ " raise InvalidServerIDType(\n",
+ " '%s is an invalid server type, must be int' % s\n",
+ " )\n",
+ "\n",
+ " urls = [\n",
+ " '://www.speedtest.net/speedtest-servers-static.php',\n",
+ " 'http://c.speedtest.net/speedtest-servers-static.php',\n",
+ " '://www.speedtest.net/speedtest-servers.php',\n",
+ " 'http://c.speedtest.net/speedtest-servers.php',\n",
+ " ]\n",
+ "\n",
+ " headers = {}\n",
+ " if gzip:\n",
+ " headers['Accept-Encoding'] = 'gzip'\n",
+ "\n",
+ " errors = []\n",
+ " for url in urls:\n",
+ " try:\n",
+ " request = build_request(\n",
+ " '%s?threads=%s' % (url,\n",
+ " self.config['threads']['download']),\n",
+ " headers=headers,\n",
+ " secure=self._secure\n",
+ " )\n",
+ " uh, e = catch_request(request, opener=self._opener)\n",
+ " if e:\n",
+ " errors.append('%s' % e)\n",
+ " raise ServersRetrievalError()\n",
+ "\n",
+ " stream = get_response_stream(uh)\n",
+ "\n",
+ " serversxml_list = []\n",
+ " while 1:\n",
+ " try:\n",
+ " serversxml_list.append(stream.read(1024))\n",
+ " except (OSError, EOFError):\n",
+ " raise ServersRetrievalError(get_exception())\n",
+ " if len(serversxml_list[-1]) == 0:\n",
+ " break\n",
+ "\n",
+ " stream.close()\n",
+ " uh.close()\n",
+ "\n",
+ " if int(uh.code) != 200:\n",
+ " raise ServersRetrievalError()\n",
+ "\n",
+ " serversxml = ''.encode().join(serversxml_list)\n",
+ "\n",
+ " printer('Servers XML:\\n%s' % serversxml, debug=True)\n",
+ "\n",
+ " try:\n",
+ " try:\n",
+ " try:\n",
+ " root = ET.fromstring(serversxml)\n",
+ " except ET.ParseError:\n",
+ " e = get_exception()\n",
+ " raise SpeedtestServersError(\n",
+ " 'Malformed speedtest.net server list: %s' % e\n",
+ " )\n",
+ " elements = root.getiterator('server')\n",
+ " except AttributeError:\n",
+ " try:\n",
+ " root = DOM.parseString(serversxml)\n",
+ " except ExpatError:\n",
+ " e = get_exception()\n",
+ " raise SpeedtestServersError(\n",
+ " 'Malformed speedtest.net server list: %s' % e\n",
+ " )\n",
+ " elements = root.getElementsByTagName('server')\n",
+ " except (SyntaxError, xml.parsers.expat.ExpatError):\n",
+ " raise ServersRetrievalError()\n",
+ "\n",
+ " for server in elements:\n",
+ " try:\n",
+ " attrib = server.attrib\n",
+ " except AttributeError:\n",
+ " attrib = dict(list(server.attributes.items()))\n",
+ "\n",
+ " if servers and int(attrib.get('id')) not in servers:\n",
+ " continue\n",
+ "\n",
+ " if (int(attrib.get('id')) in self.config['ignore_servers']\n",
+ " or int(attrib.get('id')) in exclude):\n",
+ " continue\n",
+ "\n",
+ " try:\n",
+ " d = distance(self.lat_lon,\n",
+ " (float(attrib.get('lat')),\n",
+ " float(attrib.get('lon'))))\n",
+ " except Exception:\n",
+ " continue\n",
+ "\n",
+ " attrib['d'] = d\n",
+ "\n",
+ " try:\n",
+ " self.servers[d].append(attrib)\n",
+ " except KeyError:\n",
+ " self.servers[d] = [attrib]\n",
+ "\n",
+ " break\n",
+ "\n",
+ " except ServersRetrievalError:\n",
+ " continue\n",
+ "\n",
+ " if (servers or exclude) and not self.servers:\n",
+ " raise NoMatchedServers()\n",
+ "\n",
+ " return self.servers\n",
+ "\n",
+ " def set_mini_server(self, server):\n",
+ " \"\"\"Instead of querying for a list of servers, set a link to a\n",
+ " speedtest mini server\n",
+ " \"\"\"\n",
+ "\n",
+ " urlparts = urlparse(server)\n",
+ "\n",
+ " name, ext = os.path.splitext(urlparts[2])\n",
+ " if ext:\n",
+ " url = os.path.dirname(server)\n",
+ " else:\n",
+ " url = server\n",
+ "\n",
+ " request = build_request(url)\n",
+ " uh, e = catch_request(request, opener=self._opener)\n",
+ " if e:\n",
+ " raise SpeedtestMiniConnectFailure('Failed to connect to %s' %\n",
+ " server)\n",
+ " else:\n",
+ " text = uh.read()\n",
+ " uh.close()\n",
+ "\n",
+ " extension = re.findall('upload_?[Ee]xtension: \"([^\"]+)\"',\n",
+ " text.decode())\n",
+ " if not extension:\n",
+ " for ext in ['php', 'asp', 'aspx', 'jsp']:\n",
+ " try:\n",
+ " f = self._opener.open(\n",
+ " '%s/speedtest/upload.%s' % (url, ext)\n",
+ " )\n",
+ " except Exception:\n",
+ " pass\n",
+ " else:\n",
+ " data = f.read().strip().decode()\n",
+ " if (f.code == 200 and\n",
+ " len(data.splitlines()) == 1 and\n",
+ " re.match('size=[0-9]', data)):\n",
+ " extension = [ext]\n",
+ " break\n",
+ " if not urlparts or not extension:\n",
+ " raise InvalidSpeedtestMiniServer('Invalid Speedtest Mini Server: '\n",
+ " '%s' % server)\n",
+ "\n",
+ " self.servers = [{\n",
+ " 'sponsor': 'Speedtest Mini',\n",
+ " 'name': urlparts[1],\n",
+ " 'd': 0,\n",
+ " 'url': '%s/speedtest/upload.%s' % (url.rstrip('/'), extension[0]),\n",
+ " 'latency': 0,\n",
+ " 'id': 0\n",
+ " }]\n",
+ "\n",
+ " return self.servers\n",
+ "\n",
+ " def get_closest_servers(self, limit=5):\n",
+ " \"\"\"Limit servers to the closest speedtest.net servers based on\n",
+ " geographic distance\n",
+ " \"\"\"\n",
+ "\n",
+ " if not self.servers:\n",
+ " self.get_servers()\n",
+ "\n",
+ " for d in sorted(self.servers.keys()):\n",
+ " for s in self.servers[d]:\n",
+ " self.closest.append(s)\n",
+ " if len(self.closest) == limit:\n",
+ " break\n",
+ " else:\n",
+ " continue\n",
+ " break\n",
+ "\n",
+ " printer('Closest Servers:\\n%r' % self.closest, debug=True)\n",
+ " return self.closest\n",
+ "\n",
+ " def get_best_server(self, servers=None):\n",
+ " \"\"\"Perform a speedtest.net \"ping\" to determine which speedtest.net\n",
+ " server has the lowest latency\n",
+ " \"\"\"\n",
+ "\n",
+ " if not servers:\n",
+ " if not self.closest:\n",
+ " servers = self.get_closest_servers()\n",
+ " servers = self.closest\n",
+ "\n",
+ " if self._source_address:\n",
+ " source_address_tuple = (self._source_address, 0)\n",
+ " else:\n",
+ " source_address_tuple = None\n",
+ "\n",
+ " user_agent = build_user_agent()\n",
+ "\n",
+ " results = {}\n",
+ " for server in servers:\n",
+ " cum = []\n",
+ " url = os.path.dirname(server['url'])\n",
+ " stamp = int(timeit.time.time() * 1000)\n",
+ " latency_url = '%s/latency.txt?x=%s' % (url, stamp)\n",
+ " for i in range(0, 3):\n",
+ " this_latency_url = '%s.%s' % (latency_url, i)\n",
+ " printer('%s %s' % ('GET', this_latency_url),\n",
+ " debug=True)\n",
+ " urlparts = urlparse(latency_url)\n",
+ " try:\n",
+ " if urlparts[0] == 'https':\n",
+ " h = SpeedtestHTTPSConnection(\n",
+ " urlparts[1],\n",
+ " source_address=source_address_tuple\n",
+ " )\n",
+ " else:\n",
+ " h = SpeedtestHTTPConnection(\n",
+ " urlparts[1],\n",
+ " source_address=source_address_tuple\n",
+ " )\n",
+ " headers = {'User-Agent': user_agent}\n",
+ " path = '%s?%s' % (urlparts[2], urlparts[4])\n",
+ " start = timeit.default_timer()\n",
+ " h.request(\"GET\", path, headers=headers)\n",
+ " r = h.getresponse()\n",
+ " total = (timeit.default_timer() - start)\n",
+ " except HTTP_ERRORS:\n",
+ " e = get_exception()\n",
+ " printer('ERROR: %r' % e, debug=True)\n",
+ " cum.append(3600)\n",
+ " continue\n",
+ "\n",
+ " text = r.read(9)\n",
+ " if int(r.status) == 200 and text == 'test=test'.encode():\n",
+ " cum.append(total)\n",
+ " else:\n",
+ " cum.append(3600)\n",
+ " h.close()\n",
+ "\n",
+ " avg = round((sum(cum) / 6) * 1000.0, 3)\n",
+ " results[avg] = server\n",
+ "\n",
+ " try:\n",
+ " fastest = sorted(results.keys())[0]\n",
+ " except IndexError:\n",
+ " raise SpeedtestBestServerFailure('Unable to connect to servers to '\n",
+ " 'test latency.')\n",
+ " best = results[fastest]\n",
+ " best['latency'] = fastest\n",
+ "\n",
+ " self.results.ping = fastest\n",
+ " self.results.server = best\n",
+ "\n",
+ " self._best.update(best)\n",
+ " printer('Best Server:\\n%r' % best, debug=True)\n",
+ " return best\n",
+ "\n",
+ " def download(self, callback=do_nothing, threads=None):\n",
+ " \"\"\"Test download speed against speedtest.net\n",
+ " A ``threads`` value of ``None`` will fall back to those dictated\n",
+ " by the speedtest.net configuration\n",
+ " \"\"\"\n",
+ "\n",
+ " urls = []\n",
+ " for size in self.config['sizes']['download']:\n",
+ " for _ in range(0, self.config['counts']['download']):\n",
+ " urls.append('%s/random%sx%s.jpg' %\n",
+ " (os.path.dirname(self.best['url']), size, size))\n",
+ "\n",
+ " request_count = len(urls)\n",
+ " requests = []\n",
+ " for i, url in enumerate(urls):\n",
+ " requests.append(\n",
+ " build_request(url, bump=i, secure=self._secure)\n",
+ " )\n",
+ "\n",
+ " def producer(q, requests, request_count):\n",
+ " for i, request in enumerate(requests):\n",
+ " thread = HTTPDownloader(\n",
+ " i,\n",
+ " request,\n",
+ " start,\n",
+ " self.config['length']['download'],\n",
+ " opener=self._opener,\n",
+ " shutdown_event=self._shutdown_event\n",
+ " )\n",
+ " thread.start()\n",
+ " q.put(thread, True)\n",
+ " callback(i, request_count, start=True)\n",
+ "\n",
+ " finished = []\n",
+ "\n",
+ " def consumer(q, request_count):\n",
+ " while len(finished) < request_count:\n",
+ " thread = q.get(True)\n",
+ " while thread.isAlive():\n",
+ " thread.join(timeout=0.1)\n",
+ " finished.append(sum(thread.result))\n",
+ " callback(thread.i, request_count, end=True)\n",
+ "\n",
+ " q = Queue(threads or self.config['threads']['download'])\n",
+ " prod_thread = threading.Thread(target=producer,\n",
+ " args=(q, requests, request_count))\n",
+ " cons_thread = threading.Thread(target=consumer,\n",
+ " args=(q, request_count))\n",
+ " start = timeit.default_timer()\n",
+ " prod_thread.start()\n",
+ " cons_thread.start()\n",
+ " while prod_thread.isAlive():\n",
+ " prod_thread.join(timeout=0.1)\n",
+ " while cons_thread.isAlive():\n",
+ " cons_thread.join(timeout=0.1)\n",
+ "\n",
+ " stop = timeit.default_timer()\n",
+ " self.results.bytes_received = sum(finished)\n",
+ " self.results.download = (\n",
+ " (self.results.bytes_received / (stop - start)) * 8.0\n",
+ " )\n",
+ " if self.results.download > 100000:\n",
+ " self.config['threads']['upload'] = 8\n",
+ " return self.results.download\n",
+ "\n",
+ " def upload(self, callback=do_nothing, pre_allocate=True, threads=None):\n",
+ " \"\"\"Test upload speed against speedtest.net\n",
+ " A ``threads`` value of ``None`` will fall back to those dictated\n",
+ " by the speedtest.net configuration\n",
+ " \"\"\"\n",
+ "\n",
+ " sizes = []\n",
+ "\n",
+ " for size in self.config['sizes']['upload']:\n",
+ " for _ in range(0, self.config['counts']['upload']):\n",
+ " sizes.append(size)\n",
+ "\n",
+ " # request_count = len(sizes)\n",
+ " request_count = self.config['upload_max']\n",
+ "\n",
+ " requests = []\n",
+ " for i, size in enumerate(sizes):\n",
+ " # We set ``0`` for ``start`` and handle setting the actual\n",
+ " # ``start`` in ``HTTPUploader`` to get better measurements\n",
+ " data = HTTPUploaderData(\n",
+ " size,\n",
+ " 0,\n",
+ " self.config['length']['upload'],\n",
+ " shutdown_event=self._shutdown_event\n",
+ " )\n",
+ " if pre_allocate:\n",
+ " data.pre_allocate()\n",
+ "\n",
+ " headers = {'Content-length': size}\n",
+ " requests.append(\n",
+ " (\n",
+ " build_request(self.best['url'], data, secure=self._secure,\n",
+ " headers=headers),\n",
+ " size\n",
+ " )\n",
+ " )\n",
+ "\n",
+ " def producer(q, requests, request_count):\n",
+ " for i, request in enumerate(requests[:request_count]):\n",
+ " thread = HTTPUploader(\n",
+ " i,\n",
+ " request[0],\n",
+ " start,\n",
+ " request[1],\n",
+ " self.config['length']['upload'],\n",
+ " opener=self._opener,\n",
+ " shutdown_event=self._shutdown_event\n",
+ " )\n",
+ " thread.start()\n",
+ " q.put(thread, True)\n",
+ " callback(i, request_count, start=True)\n",
+ "\n",
+ " finished = []\n",
+ "\n",
+ " def consumer(q, request_count):\n",
+ " while len(finished) < request_count:\n",
+ " thread = q.get(True)\n",
+ " while thread.isAlive():\n",
+ " thread.join(timeout=0.1)\n",
+ " finished.append(thread.result)\n",
+ " callback(thread.i, request_count, end=True)\n",
+ "\n",
+ " q = Queue(threads or self.config['threads']['upload'])\n",
+ " prod_thread = threading.Thread(target=producer,\n",
+ " args=(q, requests, request_count))\n",
+ " cons_thread = threading.Thread(target=consumer,\n",
+ " args=(q, request_count))\n",
+ " start = timeit.default_timer()\n",
+ " prod_thread.start()\n",
+ " cons_thread.start()\n",
+ " while prod_thread.isAlive():\n",
+ " prod_thread.join(timeout=0.1)\n",
+ " while cons_thread.isAlive():\n",
+ " cons_thread.join(timeout=0.1)\n",
+ "\n",
+ " stop = timeit.default_timer()\n",
+ " self.results.bytes_sent = sum(finished)\n",
+ " self.results.upload = (\n",
+ " (self.results.bytes_sent / (stop - start)) * 8.0\n",
+ " )\n",
+ " return self.results.upload\n",
+ "\n",
+ "\n",
+ "def ctrl_c(shutdown_event):\n",
+ " \"\"\"Catch Ctrl-C key sequence and set a SHUTDOWN_EVENT for our threaded\n",
+ " operations\n",
+ " \"\"\"\n",
+ " def inner(signum, frame):\n",
+ " shutdown_event.set()\n",
+ " printer('\\nCancelling...', error=True)\n",
+ " sys.exit(0)\n",
+ " return inner\n",
+ "\n",
+ "\n",
+ "def version():\n",
+ " \"\"\"Print the version\"\"\"\n",
+ "\n",
+ " printer('speedtest-cli %s' % __version__)\n",
+ " printer('Python %s' % sys.version.replace('\\n', ''))\n",
+ " sys.exit(0)\n",
+ "\n",
+ "\n",
+ "def csv_header(delimiter=','):\n",
+ " \"\"\"Print the CSV Headers\"\"\"\n",
+ "\n",
+ " printer(SpeedtestResults.csv_header(delimiter=delimiter))\n",
+ " sys.exit(0)\n",
+ "\n",
+ "\n",
+ "def parse_args():\n",
+ " \"\"\"Function to handle building and parsing of command line arguments\"\"\"\n",
+ " description = (\n",
+ " 'Command line interface for testing internet bandwidth using '\n",
+ " 'speedtest.net.\\n'\n",
+ " '------------------------------------------------------------'\n",
+ " '--------------\\n'\n",
+ " 'https://github.com/sivel/speedtest-cli')\n",
+ "\n",
+ " parser = ArgParser(description=description)\n",
+ " # Give optparse.OptionParser an `add_argument` method for\n",
+ " # compatibility with argparse.ArgumentParser\n",
+ " try:\n",
+ " parser.add_argument = parser.add_option\n",
+ " except AttributeError:\n",
+ " pass\n",
+ " parser.add_argument('--no-download', dest='download', default=True,\n",
+ " action='store_const', const=False,\n",
+ " help='Do not perform download test')\n",
+ " parser.add_argument('--no-upload', dest='upload', default=True,\n",
+ " action='store_const', const=False,\n",
+ " help='Do not perform upload test')\n",
+ " parser.add_argument('--single', default=False, action='store_true',\n",
+ " help='Only use a single connection instead of '\n",
+ " 'multiple. This simulates a typical file '\n",
+ " 'transfer.')\n",
+ " parser.add_argument('--bytes', dest='units', action='store_const',\n",
+ " const=('byte', 8), default=('bit', 1),\n",
+ " help='Display values in bytes instead of bits. Does '\n",
+ " 'not affect the image generated by --share, nor '\n",
+ " 'output from --json or --csv')\n",
+ " parser.add_argument('--share', action='store_true',\n",
+ " help='Generate and provide a URL to the speedtest.net '\n",
+ " 'share results image, not displayed with --csv')\n",
+ " parser.add_argument('--simple', action='store_true', default=False,\n",
+ " help='Suppress verbose output, only show basic '\n",
+ " 'information')\n",
+ " parser.add_argument('--csv', action='store_true', default=False,\n",
+ " help='Suppress verbose output, only show basic '\n",
+ " 'information in CSV format. Speeds listed in '\n",
+ " 'bit/s and not affected by --bytes')\n",
+ " parser.add_argument('--csv-delimiter', default=',', type=PARSER_TYPE_STR,\n",
+ " help='Single character delimiter to use in CSV '\n",
+ " 'output. Default \",\"')\n",
+ " parser.add_argument('--csv-header', action='store_true', default=False,\n",
+ " help='Print CSV headers')\n",
+ " parser.add_argument('--json', action='store_true', default=False,\n",
+ " help='Suppress verbose output, only show basic '\n",
+ " 'information in JSON format. Speeds listed in '\n",
+ " 'bit/s and not affected by --bytes')\n",
+ " parser.add_argument('--list', action='store_true',\n",
+ " help='Display a list of speedtest.net servers '\n",
+ " 'sorted by distance')\n",
+ " parser.add_argument('--server', type=PARSER_TYPE_INT, action='append',\n",
+ " help='Specify a server ID to test against. Can be '\n",
+ " 'supplied multiple times')\n",
+ " parser.add_argument('--exclude', type=PARSER_TYPE_INT, action='append',\n",
+ " help='Exclude a server from selection. Can be '\n",
+ " 'supplied multiple times')\n",
+ " parser.add_argument('--mini', help='URL of the Speedtest Mini server')\n",
+ " parser.add_argument('--source', help='Source IP address to bind to')\n",
+ " parser.add_argument('--timeout', default=10, type=PARSER_TYPE_FLOAT,\n",
+ " help='HTTP timeout in seconds. Default 10')\n",
+ " parser.add_argument('--secure', action='store_true',\n",
+ " help='Use HTTPS instead of HTTP when communicating '\n",
+ " 'with speedtest.net operated servers')\n",
+ " parser.add_argument('--no-pre-allocate', dest='pre_allocate',\n",
+ " action='store_const', default=True, const=False,\n",
+ " help='Do not pre allocate upload data. Pre allocation '\n",
+ " 'is enabled by default to improve upload '\n",
+ " 'performance. To support systems with '\n",
+ " 'insufficient memory, use this option to avoid a '\n",
+ " 'MemoryError')\n",
+ " parser.add_argument('--version', action='store_true',\n",
+ " help='Show the version number and exit')\n",
+ " parser.add_argument('--debug', action='store_true',\n",
+ " help=ARG_SUPPRESS, default=ARG_SUPPRESS)\n",
+ "\n",
+ " options = parser.parse_args(args=[])\n",
+ " if isinstance(options, tuple):\n",
+ " args = options[0]\n",
+ " else:\n",
+ " args = options\n",
+ " return args\n",
+ "\n",
+ "\n",
+ "def validate_optional_args(args):\n",
+ " \"\"\"Check if an argument was provided that depends on a module that may\n",
+ " not be part of the Python standard library.\n",
+ " If such an argument is supplied, and the module does not exist, exit\n",
+ " with an error stating which module is missing.\n",
+ " \"\"\"\n",
+ " optional_args = {\n",
+ " 'json': ('json/simplejson python module', json),\n",
+ " 'secure': ('SSL support', HTTPSConnection),\n",
+ " }\n",
+ "\n",
+ " for arg, info in optional_args.items():\n",
+ " if getattr(args, arg, False) and info[1] is None:\n",
+ " raise SystemExit('%s is not installed. --%s is '\n",
+ " 'unavailable' % (info[0], arg))\n",
+ "\n",
+ "\n",
+ "def printer(string, quiet=False, debug=False, error=False, **kwargs):\n",
+ "    \"\"\"Helper function to print a string with various features\"\"\"\n",
+ "\n",
+ " if debug and not DEBUG:\n",
+ " return\n",
+ "\n",
+ " if debug:\n",
+ " if sys.stdout.isatty():\n",
+ " out = '\\033[1;30mDEBUG: %s\\033[0m' % string\n",
+ " else:\n",
+ " out = 'DEBUG: %s' % string\n",
+ " else:\n",
+ " out = string\n",
+ "\n",
+ " if error:\n",
+ " kwargs['file'] = sys.stderr\n",
+ "\n",
+ " if not quiet:\n",
+ " print_(out, **kwargs)\n",
+ "\n",
+ "\n",
+ "def shell():\n",
+ " \"\"\"Run the full speedtest.net test\"\"\"\n",
+ "\n",
+ " global DEBUG\n",
+ " shutdown_event = threading.Event()\n",
+ "\n",
+ " signal.signal(signal.SIGINT, ctrl_c(shutdown_event))\n",
+ "\n",
+ " args = parse_args()\n",
+ "\n",
+ " # Print the version and exit\n",
+ " if args.version:\n",
+ " version()\n",
+ "\n",
+ " if not args.download and not args.upload:\n",
+ " raise SpeedtestCLIError('Cannot supply both --no-download and '\n",
+ " '--no-upload')\n",
+ "\n",
+ " if len(args.csv_delimiter) != 1:\n",
+ " raise SpeedtestCLIError('--csv-delimiter must be a single character')\n",
+ "\n",
+ " if args.csv_header:\n",
+ " csv_header(args.csv_delimiter)\n",
+ "\n",
+ " validate_optional_args(args)\n",
+ "\n",
+ " debug = getattr(args, 'debug', False)\n",
+ " if debug == 'SUPPRESSHELP':\n",
+ " debug = False\n",
+ " if debug:\n",
+ " DEBUG = True\n",
+ "\n",
+ " if args.simple or args.csv or args.json:\n",
+ " quiet = True\n",
+ " else:\n",
+ " quiet = False\n",
+ "\n",
+ " if args.csv or args.json:\n",
+ " machine_format = True\n",
+ " else:\n",
+ " machine_format = False\n",
+ "\n",
+ " # Don't set a callback if we are running quietly\n",
+ " if quiet or debug:\n",
+ " callback = do_nothing\n",
+ " else:\n",
+ " callback = print_dots(shutdown_event)\n",
+ "\n",
+ " printer('Retrieving speedtest.net configuration...', quiet)\n",
+ " try:\n",
+ " speedtest = Speedtest(\n",
+ " source_address=args.source,\n",
+ " timeout=args.timeout,\n",
+ " secure=args.secure\n",
+ " )\n",
+ " except (ConfigRetrievalError,) + HTTP_ERRORS:\n",
+ " printer('Cannot retrieve speedtest configuration', error=True)\n",
+ " raise SpeedtestCLIError(get_exception())\n",
+ "\n",
+ " if args.list:\n",
+ " try:\n",
+ " speedtest.get_servers()\n",
+ " except (ServersRetrievalError,) + HTTP_ERRORS:\n",
+ " printer('Cannot retrieve speedtest server list', error=True)\n",
+ " raise SpeedtestCLIError(get_exception())\n",
+ "\n",
+ " for _, servers in sorted(speedtest.servers.items()):\n",
+ " for server in servers:\n",
+ " line = ('%(id)5s) %(sponsor)s (%(name)s, %(country)s) '\n",
+ " '[%(d)0.2f km]' % server)\n",
+ " try:\n",
+ " printer(line)\n",
+ " except IOError:\n",
+ " e = get_exception()\n",
+ " if e.errno != errno.EPIPE:\n",
+ " raise\n",
+ " sys.exit(0)\n",
+ "\n",
+ " printer('Testing from %(isp)s (%(ip)s)...' % speedtest.config['client'],\n",
+ " quiet)\n",
+ "\n",
+ " if not args.mini:\n",
+ " printer('Retrieving speedtest.net server list...', quiet)\n",
+ " try:\n",
+ " speedtest.get_servers(servers=args.server, exclude=args.exclude)\n",
+ " except NoMatchedServers:\n",
+ " raise SpeedtestCLIError(\n",
+ " 'No matched servers: %s' %\n",
+ " ', '.join('%s' % s for s in args.server)\n",
+ " )\n",
+ " except (ServersRetrievalError,) + HTTP_ERRORS:\n",
+ " printer('Cannot retrieve speedtest server list', error=True)\n",
+ " raise SpeedtestCLIError(get_exception())\n",
+ " except InvalidServerIDType:\n",
+ " raise SpeedtestCLIError(\n",
+ " '%s is an invalid server type, must '\n",
+ " 'be an int' % ', '.join('%s' % s for s in args.server)\n",
+ " )\n",
+ "\n",
+ " if args.server and len(args.server) == 1:\n",
+ " printer('Retrieving information for the selected server...', quiet)\n",
+ " else:\n",
+ " printer('Selecting best server based on ping...', quiet)\n",
+ " speedtest.get_best_server()\n",
+ " elif args.mini:\n",
+ " speedtest.get_best_server(speedtest.set_mini_server(args.mini))\n",
+ "\n",
+ " results = speedtest.results\n",
+ "\n",
+ " printer('Hosted by %(sponsor)s (%(name)s) [%(d)0.2f km]: '\n",
+ " '%(latency)s ms' % results.server, quiet)\n",
+ "\n",
+ " if args.download:\n",
+ " printer('Testing download speed', quiet,\n",
+ " end=('', '\\n')[bool(debug)])\n",
+ " speedtest.download(\n",
+ " callback=callback,\n",
+ " threads=(None, 1)[args.single]\n",
+ " )\n",
+ " printer('Download: %0.2f M%s/s' %\n",
+ " ((results.download / 1000.0 / 1000.0) / args.units[1],\n",
+ " args.units[0]),\n",
+ " quiet)\n",
+ " else:\n",
+ " printer('Skipping download test', quiet)\n",
+ "\n",
+ " if args.upload:\n",
+ " printer('Testing upload speed', quiet,\n",
+ " end=('', '\\n')[bool(debug)])\n",
+ " speedtest.upload(\n",
+ " callback=callback,\n",
+ " pre_allocate=args.pre_allocate,\n",
+ " threads=(None, 1)[args.single]\n",
+ " )\n",
+ " printer('Upload: %0.2f M%s/s' %\n",
+ " ((results.upload / 1000.0 / 1000.0) / args.units[1],\n",
+ " args.units[0]),\n",
+ " quiet)\n",
+ " else:\n",
+ " printer('Skipping upload test', quiet)\n",
+ "\n",
+ " printer('Results:\\n%r' % results.dict(), debug=True)\n",
+ "\n",
+ " if not args.simple and args.share:\n",
+ " results.share()\n",
+ "\n",
+ " if args.simple:\n",
+ " printer('Ping: %s ms\\nDownload: %0.2f M%s/s\\nUpload: %0.2f M%s/s' %\n",
+ " (results.ping,\n",
+ " (results.download / 1000.0 / 1000.0) / args.units[1],\n",
+ " args.units[0],\n",
+ " (results.upload / 1000.0 / 1000.0) / args.units[1],\n",
+ " args.units[0]))\n",
+ " elif args.csv:\n",
+ " printer(results.csv(delimiter=args.csv_delimiter))\n",
+ " elif args.json:\n",
+ " printer(results.json())\n",
+ "\n",
+ " if args.share and not machine_format:\n",
+ " printer('Share results: %s' % results.share())\n",
+ "\n",
+ "\n",
+ "def main():\n",
+ " try:\n",
+ " shell()\n",
+ " except KeyboardInterrupt:\n",
+ " printer('\\nCancelling...', error=True)\n",
+ " except (SpeedtestException, SystemExit):\n",
+ " e = get_exception()\n",
+ " # Ignore a successful exit, or argparse exit\n",
+ " if getattr(e, 'code', 1) not in (0, 2):\n",
+ " msg = '%s' % e\n",
+ " if not msg:\n",
+ " msg = '%r' % e\n",
+ " raise SystemExit('ERROR: %s' % msg)\n",
+ "\n",
+ "\n",
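+ "# Illustrative usage sketch (added note, not part of upstream speedtest-cli):\n",
+ "# the Speedtest class above can also be driven programmatically, e.g.\n",
+ "#   st = Speedtest(secure=True)\n",
+ "#   st.get_best_server()\n",
+ "#   st.download()\n",
+ "#   st.upload()\n",
+ "#   print(st.results.dict())\n",
+ "# The shell() entry point below reproduces the normal CLI behaviour instead.\n",
+ "\n",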
+ "if __name__ == '__main__':\n",
+ " main()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "NgCsGSiDu1bY"
+ },
+ "source": [
+ "### Virtual Machine "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "qUU2tyDpSAB2"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Ubuntu VM updater \n",
+ "# ================================================================ #\n",
+ "\n",
+ "from IPython.display import HTML\n",
+ "\n",
+ "!apt update -qq -y &> /dev/null\n",
+ "!apt upgrade -qq -y &> /dev/null\n",
+ "!npm i -g npm &> /dev/null\n",
+ "\n",
+ "display(HTML(\"The system has been updated! \"))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "arzz5dBiSEDd"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Check VM status \n",
+ "Check_IP = True #@param {type:\"boolean\"}\n",
+ "Loop_Check = False #@param {type:\"boolean\"}\n",
+ "Loop_Interval = 4 #@param {type:\"slider\", min:1, max:15, step:1}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import time, requests\n",
+ "from IPython.display import clear_output\n",
+ "Loop = True\n",
+ "\n",
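+ "# top -bcn1 prints a single batch-mode snapshot of the running processes; -w512 widens the output so long command lines are not truncated.\n",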
+ "try:\n",
+ " while Loop == True:\n",
+ " clear_output(wait=True)\n",
+ " !top -bcn1 -w512\n",
+ " if Check_IP: print(\"\\nYour Public IP: \" + requests.get('http://ip.42.pl/raw').text)\n",
+ " if Loop_Check == False:\n",
+ " Loop = False\n",
+ " else:\n",
+ " time.sleep(Loop_Interval)\n",
+ "except:\n",
+ " clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "YBpux5mNSHhG"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Get VM specification \n",
+ "Output_Format = \"TEXT\" #@param [\"TEXT\", \"HTML\", \"XML\", \"JSON\"]\n",
+ "Short_Output = True #@param {type:\"boolean\"}\n",
+ "# ================================================================ #\n",
+ "\n",
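+ "# lshw reports the VM hardware; -short prints a condensed device tree, while the html/xml/json switches change the report format.\n",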
+ "import os\n",
+ "from google.colab import files\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "try:\n",
+ " Output_Format_Ext\n",
+ "except NameError:\n",
+ " get_ipython().system_raw(\"apt install lshw -qq -y\")\n",
+ "\n",
+ "if Short_Output:\n",
+ " Output_Format = \"txt\"\n",
+ " Output_Format2 = \"-short\"\n",
+ " Output_Format_Ext = \"txt\"\n",
+ "elif Output_Format == \"TEXT\":\n",
+ " Output_Format = \"txt\"\n",
+ " Output_Format2 = \"\"\n",
+ " Output_Format_Ext = \"txt\"\n",
+ "else:\n",
+ " Output_Format = Output_Format.lower()\n",
+ " Output_Format2 = \"-\"+Output_Format.lower()\n",
+ " Output_Format_Ext = Output_Format.lower()\n",
+ "\n",
+ "get_ipython().system_raw(\"lshw \" + Output_Format2 + \" > Specification.\" + Output_Format)\n",
+ "files.download(\"/content/Specification.\" + Output_Format_Ext)\n",
+ "get_ipython().system_raw(\"rm -f /content/Specification.\" + Output_Format_Ext)\n",
+ "display(HTML(\"Sending log to your browser... \"))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "nJlifxF8_yv1"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Check for GPU (GPU runtime is needed) \n",
+ "# @markdown You should never ever connect to GPU runtime if you do not have any use for GPU at all! \n",
+ "# ================================================================ #\n",
+ "\n",
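+ "# nvidia-smi --query-gpu prints CSV with the header on the first line, so gpu[1] below is the row holding the actual values.\n",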
+ "gpu = !nvidia-smi --query-gpu=gpu_name,driver_version,memory.total --format=csv\n",
+ "\n",
+ "print(\"\")\n",
+ "print(gpu[1])\n",
+ "print(\"\")\n",
+ "print(\"(If the output shows nothing, that means you are not connected to GPU runtime)\")\n",
+ "print(\"----------------------------------------------------------------------------------------------------\")\n",
+ "print(\"The Tesla T4 and P100 are fast and support hardware encoding. The K80 and P4 are slower.\")\n",
+ "print(\"Sometimes resetting the instance in the 'runtime' tab will give you a different GPU.\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "6sxlwKm9SLBa"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Crash the VM \n",
+ "# @markdown Run this cell to crash the VM. ONLY when needed!\n",
+ "# @markdown > You might need to run this cell when the VM is out of disk due to rclone caching.\n",
+ "# ================================================================ #\n",
+ "\n",
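+ "# Allocating a multi-terabyte string exhausts the VM's RAM on purpose, forcing Colab to kill and restart the runtime.\n",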
+ "some_str = ' ' * 5120000000000"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "OOpAjMjxsNd6"
+ },
+ "source": [
+ "# ✦ *EXPERIMENTAL* ✦ \n",
+ "\n",
+ "**Everything in this section is in an EXPERIMENTAL state and/or UNFINISHED and/or LEFT AS IS!**\n",
+ "\n",
+ "**Any issue regarding this section will be IGNORED!**"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "UdiQLlm5zX3_"
+ },
+ "source": [
+ "## FFMPEG 1 \n",
+ "GPU runtime needed! "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "EFOqhHG6hOVH"
+ },
+ "source": [
+ "### ***Required to use Scripts:*** Install FFmpeg, VCSI & Mkvtoolnix"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "G3JHGE0Jtzme"
+ },
+ "outputs": [],
+ "source": [
+ "#@markdown ← Click Here to Install FFmpeg, VCSI, Mkvtoolnix & HandBrake \n",
+ "\n",
+ "#@title Install FFmpeg, VCSI, Mkvtoolnix & HandBrake { vertical-output: true }\n",
+ "from IPython.display import clear_output\n",
+ "import os, urllib.request\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "pathDoneCMD = f'{HOME}/doneCMD.sh'\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/ttmg.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/totalleecher/\" \\\n",
+ " \"Google-Colab-CloudTorrent/master/res/ttmg.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/ttmg.py\")\n",
+ "\n",
+ "from ttmg import (\n",
+ " loadingAn,\n",
+ " textAn,\n",
+ ")\n",
+ "\n",
+ "loadingAn(name=\"lds\")\n",
+ "textAn(\"Installing Dependencies...\", ty='twg')\n",
+ "#os.system('pip install git+git://github.com/AWConant/jikanpy.git') //GPU Not supported\n",
+ "#os.system('add-apt-repository -y ppa:jonathonf/ffmpeg-4') //GPU Not supported\n",
+ "os.system('apt-get update')\n",
+ "os.system('apt-get install ffmpeg')\n",
+ "os.system('apt-get install mkvtoolnix')\n",
+ "os.system('pip install vcsi')\n",
+ "#os.system('sudo apt-get install synaptic')\n",
+ "#os.system('sudo apt install firefox')\n",
+ "os.system('sudo add-apt-repository ppa:stebbins/handbrake-releases -y')\n",
+ "os.system('sudo apt update -y')\n",
+ "os.system('sudo apt install --install-recommends handbrake-gtk handbrake-cli')\n",
+ "#os.system('sudo apt-get install furiusisomount')\n",
+ "\n",
+ "clear_output()\n",
+ "print(\"Install Finished\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ey6-UveDalxR"
+ },
+ "source": [
+ "### » Re-encode a Video to a Different Resolution (*H265*) - Needs GPU - Nvidia Tesla P100 or T4 (Supports Both Single & Batch Processing)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "tsY6jhC9SXvF"
+ },
+ "outputs": [],
+ "source": [
+ "#@title Check GPU\n",
+ "#@markdown Run this to connect to a Colab Instance, and see what GPU Google gave you.\n",
+ "\n",
+ "gpu = !nvidia-smi --query-gpu=gpu_name --format=csv\n",
+ "print(gpu[1])\n",
+ "print(\"The Tesla T4 and P100 are fast and support hardware encoding. The K80 and P4 are slower.\")\n",
+ "print(\"Sometimes resetting the instance in the 'runtime' tab will give you a different GPU.\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "Zam_JHPDalxc"
+ },
+ "outputs": [],
+ "source": [
+ "path = \"\" #@param {type:\"string\"}\n",
+ "save_txt = False #@param {type:\"boolean\"}\n",
+ "import os, uuid, re, IPython\n",
+ "import ipywidgets as widgets\n",
+ "import time\n",
+ "\n",
+ "from glob import glob\n",
+ "from IPython.display import HTML, clear_output\n",
+ "from google.colab import output, drive\n",
+ "\n",
+ "def mediainfo():\n",
+ " display(HTML(\" \"))\n",
+ "# print(path.split(\"/\")[::-1][0])\n",
+ " display(HTML(\" \"))\n",
+ "# media = !mediainfo \"$path\"\n",
+ "# media = \"\\n\".join(media).replace(os.path.dirname(path)+\"/\", \"\")\n",
+ " get_ipython().system_raw(\"\"\"mediainfo --LogFile=\"/root/.nfo\" \"$path\" \"\"\")\n",
+ " with open('/root/.nfo', 'r') as file:\n",
+ " media = file.read()\n",
+ " media = media.replace(os.path.dirname(path)+\"/\", \"\")\n",
+ " print(media)\n",
+ " get_ipython().system_raw(\"rm -f '/root/.nfo'\")\n",
+ " \n",
+ " if save_txt:\n",
+ " txt = path.rpartition('.')[0] + \".txt\"\n",
+ " if os.path.exists(txt):\n",
+ " get_ipython().system_raw(\"rm -f '$txt'\")\n",
+ " !curl -s https://pastebin.com/raw/TApKLQfM -o \"$txt\"\n",
+ " with open(txt, 'a+') as file:\n",
+ " file.write(\"\\n\\n\")\n",
+ " file.write(media)\n",
+ "\n",
+ "while not os.path.exists(\"/content/drive\"):\n",
+ " try:\n",
+ " drive.mount(\"/content/drive\")\n",
+ " clear_output(wait=True)\n",
+ " except:\n",
+ " clear_output()\n",
+ " \n",
+ "if not os.path.exists(\"/usr/bin/mediainfo\"):\n",
+ " get_ipython().system_raw(\"apt-get install mediainfo\")\n",
+ " \n",
+ "mediainfo()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "SHBPElqualx6"
+ },
+ "outputs": [],
+ "source": [
+ "import os, sys, re\n",
+ "#@markdown Encoder \n",
+ "Encoder = \"CPU\" #@param [\"GPU\", \"CPU\"]\n",
+ "codec = \"x264\" #@param [\"x264\", \"x265\"]\n",
+ "#@markdown Encoding all videos in folder \n",
+ "video_folder_path = '' #@param {type:\"string\"}\n",
+ "#@markdown ---\n",
+ "#@markdown Encoding selected videos \n",
+ "video_file_path1 = '' #@param {type:\"string\"}\n",
+ "video_file_path2 = '' #@param {type:\"string\"}\n",
+ "video_file_path3 = '' #@param {type:\"string\"}\n",
+ "video_file_path4 = '' #@param {type:\"string\"}\n",
+ "video_file_path5 = '' #@param {type:\"string\"}\n",
+ "\n",
+ "#counting\n",
+ "if video_file_path1 != \"\":\n",
+ " coa = 1\n",
+ "else:\n",
+ " coa = 0\n",
+ "\n",
+ "if video_file_path2 != \"\":\n",
+ " cob = 1\n",
+ "else:\n",
+ " cob = 0\n",
+ "\n",
+ "if video_file_path3 != \"\":\n",
+ " coc = 1\n",
+ "else:\n",
+ " coc = 0\n",
+ "\n",
+ "if video_file_path4 != \"\":\n",
+ " cod = 1\n",
+ "else:\n",
+ " cod = 0\n",
+ "\n",
+ "if video_file_path5 != \"\":\n",
+ " coe = 1\n",
+ "else:\n",
+ " coe = 0\n",
+ "\n",
+ "#@markdown ---\n",
+ "resolution = '360p' #@param [\"2160p\",\"1440p\",\"1080p\", \"720p\", \"480p\", \"360p\", \"240p\", \"same as input\"]\n",
+ "encode_setting = 'Advance' #@param [\"Advance\", \"HEVC\", \"HEVC 10 Bit\"]\n",
+ "file_type = 'mkv' #@param [\"mkv\", \"mp4\"]\n",
+ "rip_audio = False #@param {type:\"boolean\"}\n",
+ "rip_subtitle = False #@param {type:\"boolean\"}\n",
+ "\n",
+ "if rip_audio == False:\n",
+ " rip_audio_string = \"-acodec copy\"\n",
+ "else:\n",
+ " rip_audio_string = \"-an\"\n",
+ "\n",
+ "if rip_subtitle == False:\n",
+ " rip_subtitle_string = \"-scodec copy\"\n",
+ "else:\n",
+ " rip_subtitle_string = \"-sn\"\n",
+ "\n",
+ "\n",
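+ "# Map the selected resolution label to a target width; the scale filter below uses WIDTH:-1 so ffmpeg keeps the aspect ratio.\n",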
+ "if resolution == '2160p':\n",
+ " w = '3840'\n",
+ "elif resolution == '1440p':\n",
+ " w = '2560'\n",
+ "elif resolution == '1080p':\n",
+ " w = '1920'\n",
+ "elif resolution == '720p':\n",
+ " w = '1280'\n",
+ "elif resolution == '480p':\n",
+ " w = '854'\n",
+ "elif resolution == '360p':\n",
+ " w = '640'\n",
+ "elif resolution == '240p':\n",
+ " w = '426'\n",
+ "else:\n",
+ " w = ''\n",
+ "\n",
+ "if (w == '3840' or w == '2560' or w == '1920' or w == '1280' or w == '854' or w == '640' or w == '426'):\n",
+ " scale_string = \"-vf scale=\"+(w)+\":-1:flags=lanczos\" \n",
+ "else:\n",
+ " scale_string = \"\"\n",
+ "\n",
+ "ext = \".mp4\",\".MP4\",\".MTS\",\".mts\",\".m2ts\",\".mkv\",\".avi\",\".MOV\",\".mov\",\".wmv\",\".WMV\",\".flv\",\".mpg\",\".webm\",\".WEBM\"\n",
+ "# Remove any leftover temp file from a previous run before writing a new one\n",
+ "filePath = \"ffmpeg.txt\"\n",
+ "if os.path.exists(filePath):\n",
+ " os.remove(filePath)\n",
+ "\n",
+ "if video_folder_path == \"\":\n",
+ " #try:\n",
+ " f = open(\"ffmpeg.txt\", \"+w\")\n",
+ " x = (video_file_path1) + \"\\n\" + (video_file_path2) + \"\\n\" +(video_file_path3) + \"\\n\" +(video_file_path4) +\"\\n\" + (video_file_path5)\n",
+ " f.write(x)\n",
+ " f.close()\n",
+ " count = coa+cob+coc+cod+coe\n",
+ " #except:\n",
+ " #err = 1\n",
+ "\n",
+ "else:\n",
+ "#writing temp file\n",
+ " for file in os.listdir(video_folder_path):\n",
+ " if file.endswith(tuple(ext)):\n",
+ " \n",
+ " x = os.path.join(video_folder_path, file) \n",
+ " #print(x)\n",
+ " print(x, file=open(\"ffmpeg.txt\", \"+a\")) \n",
+ "\n",
+ "#counting line\n",
+ " thefilepath = \"ffmpeg.txt\"\n",
+ " count = len(open(thefilepath).readlines( ))\n",
+ "\n",
+ "#@markdown ---\n",
+ "#@markdown Advance Settings \n",
+ "#@markdown Video Setting \n",
+ "preset = 'slow' #@param [\"slow\", \"medium\", \"fast\", \"hq\", \"hp\", \"bd\", \"ll\", \"llhq\", \"llhp\", \"lossless\", \"losslesshp\"]\n",
+ "level = '5.2' #@param [\"default\",\"4.1\", \"5.1\", \"5.2\", \"6.2\"]\n",
+ "tier = 'main' #@param [\"default\",\"main\", \"high\"]\n",
+ "#@markdown Setting only for GPU Encoding\n",
+ "profile = 'main' #@param [\"main\", \"main10\", \"rext\"]\n",
+ "pixfmt = 'p010le' #@param [\"nv12\", \"yuv420p\", \"p010le\", \"yuv444p\", \"p016le\", \"yuv444p16le\"]\n",
+ "rc = 'vbr_hq' #@param [\"vbr\", \"cbr\", \"vbr_2pass\", \"ll_2pass_size\", \"vbr_hq\", \"cbr_hq\"]\n",
+ "rcla = '32' #@param [\"8\", \"16\", \"32\", \"64\"]\n",
+ "overall_bitrate = 2500 #@param {type:\"slider\", min:500, max:10000, step:100}\n",
+ "max_bitrate = 20000 #@param {type:\"slider\", min:500, max:50000, step:100}\n",
+ "buffer_size = 60000 #@param {type:\"slider\", min:500, max:90000, step:100}\n",
+ "deblock = -3 #@param {type:\"slider\", min:-6, max:6, step:1}\n",
+ "reframe = 5 #@param {type:\"slider\", min:1, max:6, step:1}\n",
+ "surfaces = 64 #@param {type:\"slider\", min:0, max:64, step:1}\n",
+ "#@markdown Setting only for CPU Encoding\n",
+ "profile_cpu = 'main10' #@param [\"main10\"]\n",
+ "pixfmt_cpu = 'yuv420p10le' #@param [\"yuv420p\",\"yuv420p10le\",\"yuv444p\",\"yuv444p16le\"]\n",
+ "threads = 16 #@param {type:\"slider\", min:0, max:16, step:1}\n",
+ "crf = 28 #@param {type:\"slider\", min:0, max:30, step:1}\n",
+ "\n",
+ "\n",
+ "if level != \"default\":\n",
+ " l_string = \"-level \"+str(level)\n",
+ "else:\n",
+ " l_string =\"\"\n",
+ "\n",
+ "if tier != \"default\":\n",
+ " t_string = \"-tier \"+str(tier)\n",
+ "else:\n",
+ " t_string = \"\"\n",
+ "\n",
+ "#tp = '1' #@param [\"0\", \"1\"]\n",
+ "#cq = '21' #@param {type:\"string\"}\n",
+ "#qm ='21' #@param {type:\"string\"}\n",
+ "#qmx = '27' #@param {type:\"string\"}\n",
+ "#qp = '23' #@param {type:\"string\"}\n",
+ "#qb = '25' #@param {type:\"string\"}\n",
+ "#qi = '21' #@param {type:\"string\"}\n",
+ "\n",
+ "#@markdown Audio Setting \n",
+ "\n",
+ "audio_output = 'No audio' #@param [\"None\", \"copy\", \"flac\", \"aac\", \"libopus\", \"eac3\", \"No audio\", \"same as input\"]\n",
+ "channel = 'same as input' #@param [\"DownMix 2CH\", \"same as input\"]\n",
+ "\n",
+ "if audio_output == \"same as input\":\n",
+ " audio_string = \"-acodec copy\"\n",
+ "elif audio_output == \"No audio\":\n",
+ " audio_string = \"-an\"\n",
+ "elif audio_output == \"None\":\n",
+ " audio_string = \"\"\n",
+ "else:\n",
+ " audio_string = \"-c:a \"+(audio_output)\n",
+ "\n",
+ "if channel == \"DownMix 2CH\":\n",
+ " channel_string =\"-ac 2\"\n",
+ "else:\n",
+ " channel_string =\"\"\n",
+ "\n",
+ "#@markdown Subtitle Setting \n",
+ "#@markdown Please use an .ass file for hardsub\n",
+ "hardsub = False #@param {type:\"boolean\"}\n",
+ "subtitle_option = 'same as input' #@param [\"None\",\"No sub\", \"Add custom sub\",\"same as input\"]\n",
+ "custom_subtitle_path = \"\" #@param {type:\"string\"}\n",
+ "\n",
+ "#@markdown Custom Added Setting \n",
+ "custom_command = \"\" #@param {type:\"string\"}\n",
+ "\n",
+ "\n",
+ "if hardsub == False:\n",
+ "\n",
+ " if subtitle_option == \"No sub\":\n",
+ " subtitle_string = \"-sn\"\n",
+ " elif subtitle_option == \"same as input\":\n",
+ " subtitle_string = \"-scodec copy\"\n",
+ " elif subtitle_option == \"None\":\n",
+ " subtitle_string = \"\"\n",
+ " else:\n",
+ " subtitle_string = \"-i \"+(custom_subtitle_path)\n",
+ "\n",
+ "else:\n",
+ " subtitle_string = \"ass=\"+(custom_subtitle_path)\n",
+ "#=================\n",
+ "if custom_command != \"\":\n",
+ " c_string = custom_command\n",
+ "else:\n",
+ " c_string = \"\"\n",
+ "#=================\n",
+ "\n",
+ "os.environ['ps'] = preset\n",
+ "os.environ['pf'] = profile\n",
+ "os.environ['pf_cpu'] = profile_cpu\n",
+ "os.environ['pfm'] = pixfmt\n",
+ "os.environ['pfmcpu'] = pixfmt_cpu\n",
+ "os.environ['br'] = str(overall_bitrate)\n",
+ "os.environ['max'] = str(max_bitrate)\n",
+ "os.environ['buff'] = str(buffer_size)\n",
+ "os.environ['de'] = str(deblock)\n",
+ "os.environ['ref'] = str(reframe)\n",
+ "os.environ['sur'] = str(surfaces)\n",
+ "os.environ['lv'] = l_string\n",
+ "os.environ['ti'] = t_string\n",
+ "os.environ['rc'] = rc\n",
+ "os.environ['rl'] = rcla\n",
+ "os.environ['thr'] = str(threads)\n",
+ "os.environ['crf'] = str(crf)\n",
+ "os.environ['res'] = resolution\n",
+ "#os.environ['tp'] = tp\n",
+ "#os.environ['cq'] = cq\n",
+ "#os.environ['qP'] = qp\n",
+ "#os.environ['qB'] = qb\n",
+ "#os.environ['qI'] = qi\n",
+ "#os.environ['qm'] = qm\n",
+ "#os.environ['qmx'] = qmx\n",
+ "os.environ['scs'] = str(scale_string)\n",
+ "os.environ['aus'] = audio_string\n",
+ "os.environ['chs'] = channel_string  # matches the $chs placeholder in the ffmpeg commands below\n",
+ "os.environ['sus'] = subtitle_string\n",
+ "os.environ['cus'] = str(c_string)\n",
+ "#=================\n",
+ "#Batch Encoding\n",
+ "if count != 0:\n",
+ " f=open('ffmpeg.txt')\n",
+ " lines=f.readlines()\n",
+ "\n",
+ " i = 0\n",
+ " while i < count:\n",
+ " video_file_path = lines[i]\n",
+ " video_file_path = video_file_path.rstrip(\"\\n\")\n",
+ " #print(video_file_path)\n",
+ "\n",
+ " delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ " testsplit = video_file_path.split(\"/\")\n",
+ " filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ " filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ " resolution_raw = re.search(\"[^p]{3,4}\", resolution)\n",
+ " output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "\n",
+ " os.environ['inputFile'] = video_file_path\n",
+ " os.environ['outputPath'] = output_file_path.group(0)\n",
+ " os.environ['fileName'] = filename_raw\n",
+ " os.environ['fileType'] = file_type\n",
+ " os.environ['resolutionWidth'] = resolution_raw.group(0)\n",
+ "\n",
+ " if Encoder == \"GPU\":\n",
+ " if codec == \"x265\":\n",
+ " if encode_setting == \"Advance\":\n",
+ "\n",
+ " !ffmpeg -hwaccel cuvid -i \"$inputFile\" -stats -flags +loop -c:v hevc_nvenc -preset:v \"$ps\" -rc \"$rc\" -2pass 1 -b:v \"$br\"k -maxrate \"$max\"k -bufsize \"$buff\"k -profile:v \"$pf\" $lv $ti -pix_fmt \"$pfm\" -rc-lookahead \"$rl\" -no-scenecut 1 -weighted_pred 1 -deblock:v \"$de\":\"$de\" -refs:v \"$ref\" -surfaces \"$sur\" $scs $aus $chs $sus $cus \"$outputPath\"/\"$fileName\"-\"$res\".\"$fileType\" \n",
+ " \n",
+ " elif encode_setting == \"HEVC\":\n",
+ " !ffmpeg -hwaccel cuvid -i \"$inputFile\" -stats -flags +loop -c:v hevc_nvenc -preset:v slow -rc vbr_hq -2pass 1 -b:v 2500k -maxrate 20M -bufsize 60M -cq 1 -forced-idr 1 -nonref_p 1 -pix_fmt p010le -rc-lookahead 32 -no-scenecut 1 -weighted_pred 1 -deblock:v -3:-3 -refs:v 5 -surfaces 64 $scs $aus $sus \"$outputPath\"/\"$fileName\"-\"$res\".\"$fileType\" \n",
+ "\n",
+ " else:\n",
+ " !ffmpeg -hwaccel cuvid -i \"$inputFile\" -stats -flags +loop -c:v hevc_nvenc -preset:v slow -rc vbr_hq -2pass 1 -b:v 2500k -maxrate 20M -bufsize 60M -cq 1 -forced-idr 1 -nonref_p 1 -profile:v main10 -pix_fmt p010le -rc-lookahead 32 -no-scenecut 1 -weighted_pred 1 -deblock:v -3:-3 -refs:v 5 -surfaces 64 $scs $aus $sus \"$outputPath\"/\"$fileName\"-\"$res\".\"$fileType\" \n",
+ " else:\n",
+ " !ffmpeg -hwaccel cuvid -i \"$inputFile\" -c:v h264_nvenc $scs $aus $sus \"$outputPath\"/\"$fileName\"-\"$res\".\"$fileType\" \n",
+ " \n",
+ " else:\n",
+ " if codec == \"x265\":\n",
+ " if encode_setting == \"Advance\":\n",
+ " !ffmpeg -i \"$inputFile\" -flags +loop -c:v libx265 -profile:v \"$pf_cpu\" $lv $ti -pix_fmt \"$pfmcpu\" -threads \"$thr\" -thread_type frame -preset:v \"$ps\" -crf \"$crf\" -x265-params \"rc-lookahead=40:bframes=4:b-adapt=2:ref=6:aq-mode=0:aq-strength=0:aq-motion=0:me=hex:subme=3:max-merge=3:weightb=1:no-fast-intra=1:tskip-fast=0:rskip=0:strong-intra-smoothing=0:b-intra=1:early-skip=0:sao=0:rd=1:psy-rd=0:deblock=-5,-5\" $scs $aus $chs $sus $cus \"$outputPath\"/\"$fileName\"-\"$res\".\"$fileType\" \n",
+ " \n",
+ " elif encode_setting == \"HEVC\":\n",
+ " !ffmpeg -i \"$inputFile\" -c:v libx265 -crf 28 -threads 6 -thread_type frame $scs $aus $sus \"$outputPath\"/\"$fileName\"-\"$res\".\"$fileType\" \n",
+ "\n",
+ " else:\n",
+ " !ffmpeg -i \"$inputFile\" -c:v libx265 -profile:v main10 -crf 28 -threads 6 -thread_type frame $scs $aus $sus \"$outputPath\"/\"$fileName\"-\"$res\".\"$fileType\" \n",
+ " else:\n",
+ " !ffmpeg -hide_banner -i \"$inputFile\" -c:v libx264 -preset \"$ps\" -crf \"$crf\" -threads \"$thr\" -strict experimental $scs $aus $sus \"$outputPath\"/\"$fileName\"-\"$res\".\"$fileType\" \n",
+ "\n",
+ " i += 1\n",
+ "\n",
+ " else:\n",
+ " print(\"All Finished\")\n",
+ " os.remove(filePath)\n",
+ "else:\n",
+ " print(\"Please input file or folder path\")\n",
+ "#End of Code V1.5 - Codemater - "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "GahMjYf8miNs"
+ },
+ "source": [
+ "### » Generate Thumbnails - Preview from Video "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "0nY7QbDIrnGl"
+ },
+ "outputs": [],
+ "source": [
+ "#@markdown ← Click Here to generate thumbnails for all videos in the input folder path \n",
+ "\n",
+ "import os\n",
+ "folder_path = \"\" #@param {type:\"string\"}\n",
+ "ext = \".mp4\",\".MP4\",\".MTS\",\".mts\",\".m2ts\",\".mkv\",\".avi\",\".MOV\",\".mov\",\".wmv\",\".WMV\",\".flv\",\".mpg\",\".webm\",\".WEBM\"\n",
+ "video_path = '' #@param {type:\"string\"}\n",
+ "\n",
+ "\n",
+ "#counting\n",
+ "if video_path != \"\":\n",
+ " count = 1\n",
+ "else:\n",
+ " count = 0\n",
+ "\n",
+ "# Remove any leftover temp file from a previous run before writing a new one\n",
+ "filePath = \"vcsi.txt\"\n",
+ "if os.path.exists(filePath):\n",
+ " os.remove(filePath)\n",
+ "\n",
+ "\n",
+ "\n",
+ "if (folder_path == \"\") and (video_path != \"\"):\n",
+ " #try:\n",
+ " f = open(\"vcsi.txt\", \"+w\")\n",
+ " f.write(video_path)\n",
+ " f.close()\n",
+ " count = 1\n",
+ "\n",
+ "elif (folder_path == \"\") and (video_path == \"\"):\n",
+ " count = 0\n",
+ "\n",
+ "else:\n",
+ "#writing temp file\n",
+ " for file in os.listdir(folder_path):\n",
+ " if file.endswith(tuple(ext)):\n",
+ " \n",
+ " x = os.path.join(folder_path, file) \n",
+ " #print(x)\n",
+ " print(x, file=open(\"vcsi.txt\", \"+a\")) \n",
+ "\n",
+ "#counting line\n",
+ " thefilepath = \"vcsi.txt\"\n",
+ " count = len(open(thefilepath).readlines( ))\n",
+ "\n",
+ "\n",
+ "import os, sys, re\n",
+ "from IPython.display import Image, display\n",
+ "os.makedirs(\"/content/drive/My Drive/Thumbnail\", exist_ok=True)\n",
+ "\n",
+ "output_file_type = 'png' #@param [\"png\", \"jpg\"]\n",
+ "creation_engine = 'vcsi' #@param [\"ffmpeg\", \"vcsi\"]\n",
+ "output_path = 'same folder' #@param [\"same folder\", \"My Drive/Thumbnail\"]\n",
+ "#@markdown E.g.: grid 3 = 3x3\n",
+ "grid = 4 #@param {type:\"slider\", min:1, max:20, step:1}\n",
+ "default_grid = True #@param {type:\"boolean\"}\n",
+ "time_stamp = False #@param {type:\"boolean\"}\n",
+ "\n",
+ "\n",
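+ "# vcsi flags used below: -t overlays a timestamp on each capture and -g NxN overrides the default contact sheet grid.\n",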
+ "if time_stamp == True:\n",
+ " t_string = \"-t\"\n",
+ "else:\n",
+ " t_string = \"\"\n",
+ "\n",
+ "if default_grid == False:\n",
+ " g_string = \"-g \" + str(grid) + \"x\" + str(grid) \n",
+ "else:\n",
+ " g_string = \"\"\n",
+ "\n",
+ "os.environ['ts'] = t_string\n",
+ "os.environ['gs'] = g_string\n",
+ "#Batch Encoding\n",
+ "if count != 0:\n",
+ " f=open('vcsi.txt')\n",
+ " lines=f.readlines()\n",
+ "\n",
+ " i = 0\n",
+ " while i < count:\n",
+ " video_file_path = lines[i]\n",
+ " video_file_path = video_file_path.rstrip(\"\\n\")\n",
+ " print(video_file_path)\n",
+ " \n",
+ " output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ " output_file_path_raw = output_file_path.group(0)\n",
+ " delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ " filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ " filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ " file_extension = re.search(\".{3}$\", filename)\n",
+ " file_extension_raw = file_extension.group(0)\n",
+ "\n",
+ " os.environ['inputFile'] = video_file_path\n",
+ " os.environ['outputPath'] = output_file_path_raw\n",
+ " os.environ['outputExtension'] = output_file_type\n",
+ " os.environ['fileName'] = filename_raw\n",
+ " os.environ['fileExtension'] = file_extension_raw\n",
+ "\n",
+ " if output_path == \"same folder\":\n",
+ " if creation_engine == 'ffmpeg':\n",
+ " !ffmpeg -hide_banner -i \"$inputFile\" -vframes 1 -q:v 0 -vf \"select=not(mod(n\\,200)),scale=-1:480,tile=3x2\" -an \"$outputPath\"/\"$fileName\"_thumbnails.\"$outputExtension\"\n",
+ "\n",
+ " if output_path == \"same folder\":\n",
+ " if creation_engine == 'vcsi':\n",
+ " !vcsi $ts $gs \"$inputFile\" -o \"$outputPath\"/\"$fileName\"_thumbnails.\"$outputExtension\"\n",
+ "\n",
+ " if not output_path == \"same folder\":\n",
+ " !vcsi $ts $gs \"$inputFile\" -o \"/content/drive/My Drive/Thumbnail\"/\"$fileName\"_thumbnails.\"$outputExtension\"\n",
+ "\n",
+ " i += 1\n",
+ "\n",
+ " else:\n",
+ " print(\"All Finished\")\n",
+ " os.remove(filePath)\n",
+ "else:\n",
+ " print(\"Please input a video file or folder path\")\n",
+ "#End of Code V1.2 - Codemater - "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "NQ0TxfKeghR8"
+ },
+ "source": [
+ "### » Misc."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "Ls4O5VLwief-"
+ },
+ "outputs": [],
+ "source": [
+ "#@title Convert *.mkv* ➔ *.mp4* (Lossless)\n",
+ "import os, sys, re\n",
+ "\n",
+ "video_file_path = \"\" #@param {type:\"string\"}\n",
+ "\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "output_file_path_raw = output_file_path.group(0)\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "\n",
+ "os.environ['inputFile'] = video_file_path\n",
+ "os.environ['outputFile'] = filename_raw\n",
+ "os.environ['outputPath'] = output_file_path_raw\n",
+ "\n",
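+ "# -c copy remuxes the existing streams into an MP4 container without re-encoding; -strict experimental permits codecs the MP4 muxer flags as experimental.\n",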
+ "!ffmpeg -hide_banner -i \"$inputFile\" -c copy -strict experimental \"$outputPath\"\"$outputFile\".mp4"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "iFBUeQhn7QTc"
+ },
+ "outputs": [],
+ "source": [
+ "#@title Convert Trim Video File (Lossless)\n",
+ "\n",
+ "import os, sys, re\n",
+ "\n",
+ "video_file_path = \"\" #@param {type:\"string\"}\n",
+ "start_time = \"00:00:00.000\" #@param {type:\"string\"}\n",
+ "end_time = \"00:01:00.000\" #@param {type:\"string\"}\n",
+ "\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "output_file_path_raw = output_file_path.group(0)\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "file_extension = re.search(\".{3}$\", filename)\n",
+ "file_extension_raw = file_extension.group(0)\n",
+ "\n",
+ "os.environ['inputFile'] = video_file_path\n",
+ "os.environ['outputPath'] = output_file_path_raw\n",
+ "os.environ['startTime'] = start_time\n",
+ "os.environ['endTime'] = end_time\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "os.environ['fileExtension'] = file_extension_raw\n",
+ "\n",
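+ "# Stream copy (-c copy) avoids re-encoding, so the trim is lossless but cut points snap to the nearest keyframes rather than the exact timestamps.\n",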
+ "!ffmpeg -hide_banner -i \"$inputFile\" -ss \"$startTime\" -to \"$endTime\" -c copy \"$outputPath\"/\"$fileName\"-TRIM.\"$fileExtension\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "nSeO98YQoTJe"
+ },
+ "outputs": [],
+ "source": [
+ "#@title Extract Audio from Video File (Lossless)\n",
+ "\n",
+ "import os, sys, re\n",
+ "\n",
+ "video_file_path = \"\" #@param {type:\"string\"}\n",
+ "output_file_extension = 'm4a' #@param [\"m4a\", \"mp3\", \"opus\", \"flac\", \"wav\"]\n",
+ "\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "\n",
+ "os.environ['inputFile'] = video_file_path\n",
+ "os.environ['outputPath'] = output_file_path.group(0)\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "os.environ['fileType'] = output_file_extension\n",
+ "\n",
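+ "# -vn drops the video and -c:a copy keeps the original audio untouched, so the chosen extension should match the source codec (for example m4a for AAC).\n",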
+ "!ffmpeg -hide_banner -i \"$inputFile\" -vn -c:a copy \"$outputPath\"/\"$fileName\"-audio.\"$fileType\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "CEHi5EMm9lXG"
+ },
+ "outputs": [],
+ "source": [
+ "#@title Crop Video\n",
+ "import os, sys, re\n",
+ "\n",
+ "video_file_path = \"\" #@param {type:\"string\"}\n",
+ "out_width = \"1280\" #@param {type:\"string\"}\n",
+ "out_height = \"200\" #@param {type:\"string\"}\n",
+ "starting_position_x = \"0\" #@param {type:\"string\"}\n",
+ "starting_position_y = \"300\" #@param {type:\"string\"}\n",
+ "\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "output_file_path_raw = output_file_path.group(0)\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "file_extension = re.search(\".{3}$\", filename)\n",
+ "file_extension_raw = file_extension.group(0)\n",
+ "\n",
+ "os.environ['inputFile'] = video_file_path\n",
+ "os.environ['outputPath'] = output_file_path_raw\n",
+ "os.environ['outWidth'] = out_width\n",
+ "os.environ['outHeight'] = out_height\n",
+ "os.environ['positionX'] = starting_position_x\n",
+ "os.environ['positionY'] = starting_position_y\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "os.environ['fileExtension'] = file_extension_raw\n",
+ "\n",
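+ "# crop=W:H:X:Y keeps a window of outWidth x outHeight whose top-left corner sits at (positionX, positionY) in the source frame.\n",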
+ "!ffmpeg -hide_banner -i \"$inputFile\" -filter:v \"crop=$outWidth:$outHeight:$positionX:$positionY\" \"$outputPath\"/\"$fileName\"-CROP.\"$fileExtension\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "ee5omyu53kv0"
+ },
+ "outputs": [],
+ "source": [
+ "#@title Extract Individual Frames from Video (*Lossless*)\n",
+ "import os, sys, re\n",
+ "\n",
+ "video_file_path = \"\" #@param {type:\"string\"}\n",
+ "start_time = \"00:00:00.000\" #@param {type:\"string\"}\n",
+ "end_time = \"00:01:00.000\" #@param {type:\"string\"}\n",
+ "frame_rate = \"23.976\" #@param {type:\"string\"}\n",
+ "\n",
+ "#@markdown This will create a folder in the same directory titled \"`Extracted Frames`\"\n",
+ "#@markdown * [*Example*](https://yuju.pw/y/36pP.png) *of output folder*\n",
+ "\n",
+ "\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "output_file_path_raw = output_file_path.group(0)\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "file_extension = re.search(\".{3}$\", filename)\n",
+ "file_extension_raw = file_extension.group(0)\n",
+ "\n",
+ "os.environ['inputFile'] = video_file_path\n",
+ "os.environ['outputPath'] = output_file_path_raw\n",
+ "os.environ['startTime'] = start_time\n",
+ "os.environ['endTime'] = end_time\n",
+ "os.environ['frameRate'] = frame_rate\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "os.environ['fileExtension'] = file_extension_raw\n",
+ "\n",
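+ "# -ss/-to bound the extraction window and -r resamples it to the given frame rate; frames are written as numbered PNGs into the Extracted Frames folder.\n",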
+ "!mkdir \"$outputPath\"/\"Extracted Frames\"\n",
+ "!ffmpeg -hide_banner -i \"$inputFile\" -ss \"$startTime\" -to \"$endTime\" -r \"$frameRate\"/1 \"$outputPath\"/\"Extracted Frames\"/frame%04d.png\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "qRVrWJDPFvYY"
+ },
+ "outputs": [],
+ "source": [
+ "#@markdown ← Verify Tracks for Video \n",
+ "import os, sys, re\n",
+ "video_file_path = \"\" #@param {type:\"string\"}\n",
+ "\n",
+ "!mkvmerge -i \"$video_file_path\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "IVoQDyfT06bN"
+ },
+ "outputs": [],
+ "source": [
+ "#@title Extract Subtitle from Video \n",
+ "import os, sys, re\n",
+ "\n",
+ "video_file_path = \"\" #@param {type:\"string\"}\n",
+ "output_file_type = 'idx/sub' #@param [\"srt\", \"ass\", \"idx/sub\"]\n",
+ "\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "output_file_path_raw = output_file_path.group(0)\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "file_extension = re.search(\".{3}$\", filename)\n",
+ "file_extension_raw = file_extension.group(0)\n",
+ "\n",
+ "os.environ['inputFile'] = video_file_path\n",
+ "os.environ['outputPath'] = output_file_path_raw\n",
+ "os.environ['outputExtension'] = output_file_type\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "os.environ['fileExtension'] = file_extension_raw\n",
+ "\n",
+ "if output_file_type == 'srt':\n",
+ " !ffmpeg -hide_banner -i \"$inputFile\" \"$outputPath\"/\"$fileName\".\"$outputExtension\"\n",
+ "\n",
+ "if output_file_type == 'ass':\n",
+ " !ffmpeg -hide_banner -i \"$inputFile\" \"$outputPath\"/\"$fileName\".\"$outputExtension\"\n",
+ "\n",
+ "if output_file_type == 'idx/sub':\n",
+ " !mkvextract \"$inputFile\" tracks 2:\"$outputPath\"/\"$fileName\".idx"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "aURlOf9BC1P3"
+ },
+ "outputs": [],
+ "source": [
+ "#@title Convert Audio Filetype (*mp3, m4a, ogg, flac, etc.*)\n",
+ "import os, sys, re\n",
+ "\n",
+ "audio_file_path = \"\" #@param {type:\"string\"}\n",
+ "output_file_type = \"mp3\" #@param [\"mp3\", \"ogg\", \"m4a\", \"opus\", \"flac\", \"alac\", \"wav\"]\n",
+ "\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", audio_file_path)\n",
+ "output_file_path_raw = output_file_path.group(0)\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", audio_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "file_extension = re.search(\".{3}$\", filename)\n",
+ "file_extension_raw = file_extension.group(0)\n",
+ "\n",
+ "os.environ['inputFile'] = audio_file_path\n",
+ "os.environ['outputPath'] = output_file_path_raw\n",
+ "os.environ['fileExtension'] = output_file_type\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "\n",
+ "!ffmpeg -hide_banner -i \"$inputFile\" \"$outputPath\"/\"$fileName\"converted.\"$fileExtension\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Ja95mvvq8oei"
+ },
+ "source": [
+ "### Extract HardSub (*Code still pending - Require python 3.7*)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "nsT83IDywPFe"
+ },
+ "outputs": [],
+ "source": [
+ "#@title\n",
+ "#@markdown ⬅️ Click Here to START server \n",
+ "\n",
+ "!sudo apt-get update \n",
+ "!sudo apt install tesseract-ocr\n",
+ "!sudo apt install libtesseract-dev\n",
+ "!sudo apt-get install tesseract-ocr-eng-mya\n",
+ "!sudo pip install pytesseract\n",
+ "!pip3 install opencv-python\n",
+ "!sudo apt-get install libopencv-dev\n",
+ "!pip install videocr\n",
+ "\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "clear_output()\n",
+ "\n",
+ "print(\"Server Started Successfully\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "EzF2X0m7FIku"
+ },
+ "outputs": [],
+ "source": [
+ "!pip install progressbar2 baidu-aip opencv-python-headless numpy"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "3kabgg9wFmjv"
+ },
+ "outputs": [],
+ "source": [
+ "!git clone https://github.com/fanyange/ocr_video_hardcoded_subtitles.git"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "D313rmPQFrQ3"
+ },
+ "outputs": [],
+ "source": [
+ "%cd /content/ocr_video_hardcoded_subtitles"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "40JsjJCBxWcn"
+ },
+ "outputs": [],
+ "source": [
+ "from videocr import get_subtitles\n",
+ "\n",
+ "if __name__ == '__main__': # This check is mandatory for Windows.\n",
+ " print(get_subtitles('video.mp4', lang='chi_sim+eng', sim_threshold=70, conf_threshold=65))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "KXWYsnPOxJVd"
+ },
+ "outputs": [],
+ "source": [
+ "get_subtitles(\n",
+ " video_path: str, lang='eng', time_start='0:00', time_end='',\n",
+ " conf_threshold=65, sim_threshold=90, use_fullframe=False)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "Dnkbv5UyGzMJ"
+ },
+ "outputs": [],
+ "source": [
+ "%cd /content"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "knnSIyZzG2gs"
+ },
+ "outputs": [],
+ "source": [
+ "!git clone https://github.com/aritra1999/Video-OCR"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "dJYInqIZHAPJ"
+ },
+ "outputs": [],
+ "source": [
+ "%cd /content/Video-OCR"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "anaBbX-VHEwk"
+ },
+ "outputs": [],
+ "source": [
+ "!pip install -r reuirements.txt\n",
+ "!python final.py"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "BKvHp7QUKMGL"
+ },
+ "outputs": [],
+ "source": [
+ "!git clone https://github.com/rflynn/mangold.git"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "Sc5dglFDKU80"
+ },
+ "outputs": [],
+ "source": [
+ "%cd /content/mangold"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "BUm3Yn42KbHD"
+ },
+ "outputs": [],
+ "source": [
+ "!python ocr1.py pitrain.png"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "CD36vcpf2FSb"
+ },
+ "source": [
+ "## FFMPEG 2 \n",
+ "GPU runtime needed! "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "RDHuIkoi6l9a"
+ },
+ "source": [
+ "### » Display Media File Metadata"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "Sv8au_RO6WUs"
+ },
+ "outputs": [],
+ "source": [
+ "import os, sys, re\n",
+ "\n",
+ "media_file_path = \"\" #@param {type:\"string\"}\n",
+ "\n",
+ "os.environ['inputFile'] = media_file_path\n",
+ "\n",
+ "!ffmpeg -i \"$inputFile\" -hide_banner"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "X4yIG_nqYAoH"
+ },
+ "source": [
+ "> *You can ignore the* \"`At least one output file must be specified`\" *error after running this.*\n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "66I2t2sQ2SMq"
+ },
+ "source": [
+ "### » Convert *Video File* ➔ *.mp4* (*Lossless*)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "o6fcC2wN2SM8"
+ },
+ "outputs": [],
+ "source": [
+ "import os, sys, re\n",
+ "\n",
+ "video_file_path = \"\" #@param {type:\"string\"}\n",
+ "\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "output_file_path_raw = output_file_path.group(0)\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "file_extension = re.search(\".{3}$\", filename)\n",
+ "file_extension_raw = file_extension.group(0)\n",
+ "\n",
+ "os.environ['inputFile'] = video_file_path\n",
+ "os.environ['outputPath'] = output_file_path_raw\n",
+ "os.environ['startTime'] = start_time\n",
+ "os.environ['endTime'] = end_time\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "os.environ['fileExtension'] = file_extension_raw\n",
+ "\n",
+ "!ffmpeg -hide_banner -i \"$inputFile\" -c copy -strict -2 \"$outputPath\"/\"$fileName\".mp4"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "NObEcBWAJoaz"
+ },
+ "source": [
+ "### » Convert *Video File* ➔ *.mkv* (*Lossless*)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "zsx4JFLRJoa0"
+ },
+ "outputs": [],
+ "source": [
+ "import os, sys, re\n",
+ "\n",
+ "video_file_path = \"\" #@param {type:\"string\"}\n",
+ "\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "output_file_path_raw = output_file_path.group(0)\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "file_extension = re.search(\".{3}$\", filename)\n",
+ "file_extension_raw = file_extension.group(0)\n",
+ "\n",
+ "os.environ['inputFile'] = video_file_path\n",
+ "os.environ['outputPath'] = output_file_path_raw\n",
+ "os.environ['startTime'] = start_time\n",
+ "os.environ['endTime'] = end_time\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "os.environ['fileExtension'] = file_extension_raw\n",
+ "\n",
+ "!ffmpeg -hide_banner -i \"$inputFile\" -c copy -strict -2 \"$outputPath\"/\"$fileName\".mkv"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "FpJXJiRl6-gK"
+ },
+ "source": [
+ "### » Trim Video File (*Lossless*)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "8rjW6Fcb2SN0"
+ },
+ "outputs": [],
+ "source": [
+ "import os, sys, re\n",
+ "\n",
+ "video_file_path = \"\" #@param {type:\"string\"}\n",
+ "start_time = \"00:00:00.000\" #@param {type:\"string\"}\n",
+ "end_time = \"00:01:00.000\" #@param {type:\"string\"}\n",
+ "\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "output_file_path_raw = output_file_path.group(0)\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "file_extension = re.search(\".{3}$\", filename)\n",
+ "file_extension_raw = file_extension.group(0)\n",
+ "\n",
+ "os.environ['inputFile'] = video_file_path\n",
+ "os.environ['outputPath'] = output_file_path_raw\n",
+ "os.environ['startTime'] = start_time\n",
+ "os.environ['endTime'] = end_time\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "os.environ['fileExtension'] = file_extension_raw\n",
+ "\n",
+ "!ffmpeg -hide_banner -i \"$inputFile\" -ss \"$startTime\" -to \"$endTime\" -c copy \"$outputPath\"/\"$fileName\"-TRIM.\"$fileExtension\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "SNDGdMRn3PA-"
+ },
+ "source": [
+ "### » Crop Video"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "KFcIThDuBii_"
+ },
+ "source": [
+ " Crop Variables Explanation:\n",
+ "\n",
+ "* `out_width` = The width of your cropped video file.\n",
+ "* `out_height` = The height of your cropped video file.\n",
+ "* `starting_position_x` & `starting_position_y` = These values define the x & y coordinates of the top left corner of your original video to start cropping from.\n",
+ "\n",
+ "###### *Example: For cropping the black bars from a video that looked like* [this](https://yuju.pw/y/312r.png):\n",
+ "* *For your starting coordinates* (`x` , `y`) *you would use* (`0` , `138`).\n",
+ "* *For* `out_width` *you would use* `1920`. *And for* `out_height` *you would use `804`.*\n",
+ "\n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "wuMEJdjV2SOT"
+ },
+ "outputs": [],
+ "source": [
+ "import os, sys, re\n",
+ "\n",
+ "video_file_path = \"\" #@param {type:\"string\"}\n",
+ "out_width = \"1920\" #@param {type:\"string\"}\n",
+ "out_height = \"804\" #@param {type:\"string\"}\n",
+ "starting_position_x = \"0\" #@param {type:\"string\"}\n",
+ "starting_position_y = \"138\" #@param {type:\"string\"}\n",
+ "\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "output_file_path_raw = output_file_path.group(0)\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "file_extension = re.search(\".{3}$\", filename)\n",
+ "file_extension_raw = file_extension.group(0)\n",
+ "\n",
+ "os.environ['inputFile'] = video_file_path\n",
+ "os.environ['outputPath'] = output_file_path_raw\n",
+ "os.environ['outWidth'] = out_width\n",
+ "os.environ['outHeight'] = out_height\n",
+ "os.environ['positionX'] = starting_position_x\n",
+ "os.environ['positionY'] = starting_position_y\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "os.environ['fileExtension'] = file_extension_raw\n",
+ "\n",
+ "!ffmpeg -hide_banner -i \"$inputFile\" -filter:v \"crop=$outWidth:$outHeight:$positionX:$positionY\" \"$outputPath\"/\"$fileName\"-CROP.\"$fileExtension\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "2f-THZmDoOaY"
+ },
+ "source": [
+ "### » Extract Audio from Video File (*Lossless*)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "JNckCucf2SOs"
+ },
+ "outputs": [],
+ "source": [
+ "import os, sys, re\n",
+ "\n",
+ "video_file_path = \"\" #@param {type:\"string\"}\n",
+ "output_file_extension = 'm4a' #@param [\"m4a\", \"mp3\", \"opus\", \"flac\", \"wav\"]\n",
+ "\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "\n",
+ "os.environ['inputFile'] = video_file_path\n",
+ "os.environ['outputPath'] = output_file_path.group(0)\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "os.environ['fileType'] = output_file_extension\n",
+ "\n",
+ "!ffmpeg -hide_banner -i \"$inputFile\" -vn -c:a copy \"$outputPath\"/\"$fileName\"-audio.\"$fileType\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "MSUasbRUDP3B"
+ },
+ "source": [
+ "### » Re-encode a Video to a Different Resolution"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "nd2LvSRZCxRe"
+ },
+ "outputs": [],
+ "source": [
+ "import os, sys, re\n",
+ "\n",
+ "video_file_path = '' #@param {type:\"string\"}\n",
+ "resolution = '1080p' #@param [\"2160p\", \"1440p\", \"1080p\", \"720p\", \"480p\", \"360p\", \"240p\"]\n",
+ "file_type = 'mp4' #@param [\"mkv\", \"mp4\"]\n",
+ "\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ "testsplit = video_file_path.split(\"/\")\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "resolution_raw = re.search(\"[^p]{3,4}\", resolution)\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "\n",
+ "os.environ['inputFile'] = video_file_path\n",
+ "os.environ['outputPath'] = output_file_path.group(0)\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "os.environ['fileType'] = file_type\n",
+ "os.environ['resolutionHeight'] = resolution_raw.group(0)\n",
+ "\n",
+ "!ffmpeg -hide_banner -i \"$inputFile\" -vf \"scale=-1:\"$resolutionHeight\"\" -c:a copy -strict experimental \"$outputPath\"/\"$fileName\"-\"$resolutionHeight\"p.\"$fileType\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "9UagRtLPyKoQ"
+ },
+ "source": [
+ "### » Extract Individual Frames from Video"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "jTnByMhAyKoF"
+ },
+ "outputs": [],
+ "source": [
+ "#@markdown This will create a folder in the same directory titled \"`Extracted Frames`\"\n",
+ "* [*Example*](https://yuju.pw/y/36pP.png) *of output folder*\n",
+ "import os, sys, re\n",
+ "\n",
+ "video_file_path = \"\" #@param {type:\"string\"}\n",
+ "start_time = \"00:00:00.000\" #@param {type:\"string\"}\n",
+ "end_time = \"00:01:00.000\" #@param {type:\"string\"}\n",
+ "frame_rate = \"23.976\" #@param {type:\"string\"}\n",
+ "\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "output_file_path_raw = output_file_path.group(0)\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "file_extension = re.search(\".{3}$\", filename)\n",
+ "file_extension_raw = file_extension.group(0)\n",
+ "\n",
+ "os.environ['inputFile'] = video_file_path\n",
+ "os.environ['outputPath'] = output_file_path_raw\n",
+ "os.environ['startTime'] = start_time\n",
+ "os.environ['endTime'] = end_time\n",
+ "os.environ['frameRate'] = frame_rate\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "os.environ['fileExtension'] = file_extension_raw\n",
+ "\n",
+ "!mkdir \"$outputPath\"/\"Extracted Frames\"\n",
+ "!ffmpeg -hide_banner -i \"$inputFile\" -ss \"$startTime\" -to \"$endTime\" -r \"$frameRate\"/1 \"$outputPath\"/\"Extracted Frames\"/frame%04d.png"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "9ZcgdPBT2SQK"
+ },
+ "source": [
+ "### » Generate Thumbnails - Preview from Video (3x2)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "J2u-Rha8miNy"
+ },
+ "outputs": [],
+ "source": [
+ "#@markdown Example of output image: https://yuju.pw/y/39i2.png \n",
+ "import os, sys, re\n",
+ "\n",
+ "video_file_path = \"\" #@param {type:\"string\"}\n",
+ "output_file_type = 'png' #@param [\"png\", \"jpg\"]\n",
+ "\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "output_file_path_raw = output_file_path.group(0)\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "file_extension = re.search(\".{3}$\", filename)\n",
+ "file_extension_raw = file_extension.group(0)\n",
+ "\n",
+ "os.environ['inputFile'] = video_file_path\n",
+ "os.environ['outputPath'] = output_file_path_raw\n",
+ "os.environ['outputExtension'] = output_file_type\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "os.environ['fileExtension'] = file_extension_raw\n",
+ "\n",
+ "!ffmpeg -hide_banner -i \"$inputFile\" -vframes 1 -q:v 2 -vf \"select=not(mod(n\\,200)),scale=-1:480,tile=3x2\" -an \"$outputPath\"/\"$fileName\"_thumbnails.\"$outputExtension\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "7-3O4en4C4IL"
+ },
+ "source": [
+ "### » Convert Audio Filetype (*mp3, m4a, ogg, flac, etc.*)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "2sKzNHSG2SQq"
+ },
+ "outputs": [],
+ "source": [
+ "import os, sys, re\n",
+ "\n",
+ "audio_file_path = \"\" #@param {type:\"string\"}\n",
+ "output_file_type = \"mp3\" #@param [\"mp3\", \"ogg\", \"m4a\", \"opus\", \"flac\", \"alac\", \"wav\"]\n",
+ "\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", audio_file_path)\n",
+ "output_file_path_raw = output_file_path.group(0)\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", audio_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "file_extension = re.search(\".{3}$\", filename)\n",
+ "file_extension_raw = file_extension.group(0)\n",
+ "\n",
+ "os.environ['inputFile'] = audio_file_path\n",
+ "os.environ['outputPath'] = output_file_path_raw\n",
+ "os.environ['fileExtension'] = output_file_type\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "\n",
+ "!ffmpeg -hide_banner -i \"$inputFile\" \"$outputPath\"/\"$fileName\"converted.\"$fileExtension\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "VRk2Ye1exWVA"
+ },
+ "source": [
+ "### » Extract + Upload Frames from Video "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "BIGsgarfxWVI"
+ },
+ "outputs": [],
+ "source": [
+ "import os, re, time, pathlib\n",
+ "import urllib.request\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "Auto_UP_Gdrive = False \n",
+ "AUTO_MOVE_PATH = \"/content\" \n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "pathDoneCMD = f'{HOME}/doneCMD.sh'\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/ttmg.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/biplobsd/\" \\\n",
+ " \"Google-Colab-CloudTorrent/master/res/ttmg.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/ttmg.py\")\n",
+ "\n",
+ "from ttmg import (\n",
+ " runSh,\n",
+ " findProcess,\n",
+ " loadingAn,\n",
+ " updateCheck,\n",
+ " ngrok\n",
+ ")\n",
+ "\n",
+ "video_file_path = \"\" #@param {type:\"string\"}\n",
+ "\n",
+ "output_file_path = re.search(\"^[\\/].+\\/\", video_file_path)\n",
+ "output_file_path_raw = output_file_path.group(0)\n",
+ "delsplit = re.search(\"\\/(?:.(?!\\/))+$\", video_file_path)\n",
+ "filename = re.sub(\"^[\\/]\", \"\", delsplit.group(0))\n",
+ "filename_raw = re.sub(\".{4}$\", \"\", filename)\n",
+ "file_extension = re.search(\".{3}$\", filename)\n",
+ "file_extension_raw = file_extension.group(0)\n",
+ "\n",
+ "os.environ['inputFile'] = video_file_path\n",
+ "os.environ['outputPath'] = output_file_path_raw\n",
+ "os.environ['fileName'] = filename_raw\n",
+ "os.environ['fileExtension'] = file_extension_raw\n",
+ "\n",
+ "!mkdir -p \"/content/frames\"\n",
+ "\n",
+ "for i in range(10):\n",
+ " clear_output()\n",
+ " loadingAn()\n",
+ " print(\"Uploading Frames...\")\n",
+ "\n",
+ "%cd \"/content/frames\"\n",
+ "!ffmpeg -hide_banner -ss 00:56.0 -i \"$inputFile\" -vframes 1 -q:v 1 -y \"/content/frames/frame1.png\"\n",
+ "!curl --silent -F \"reqtype=fileupload\" -F \"fileToUpload=@frame1.png\" https://catbox.moe/user/api.php -o frame1.txt\n",
+ "f1 = open('frame1.txt', 'r')\n",
+ "%cd \"/content\"\n",
+ "file_content1 = f1.read()\n",
+ "\n",
+ "%cd \"/content/frames\"\n",
+ "!ffmpeg -hide_banner -ss 02:20.0 -i \"$inputFile\" -vframes 1 -q:v 1 -y \"/content/frames/frame2.png\"\n",
+ "!curl --silent -F \"reqtype=fileupload\" -F \"fileToUpload=@frame2.png\" https://catbox.moe/user/api.php -o frame2.txt\n",
+ "%cd \"/content/frames\"\n",
+ "f2 = open('frame2.txt', 'r')\n",
+ "%cd \"/content\"\n",
+ "file_content2 = f2.read()\n",
+ "\n",
+ "clear_output()\n",
+ "print (\"Screenshot URLs:\")\n",
+ "print (\"1. \" + file_content1)\n",
+ "print (\"2. \" + file_content2)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "tozwpAhhnm69"
+ },
+ "source": [
+ "### MediaInfo "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "NTULRguzu0b0"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] MediaInfo \n",
+ "path_to_file = \"\" # @param {type:\"string\"}\n",
+ "save_output_to_file = False # @param {type:\"boolean\"}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, uuid, re, IPython\n",
+ "import ipywidgets as widgets\n",
+ "import time\n",
+ "from glob import glob\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "def mediainfo():\n",
+ " get_ipython().system_raw(\"\"\"mediainfo --LogFile=\"/root/.nfo\" \"$path_to_file\" \"\"\")\n",
+ " with open('/root/.nfo', 'r') as file:\n",
+ " media = file.read()\n",
+ " media = media.replace(os.path.dirname(path_to_file)+\"/\", \"\")\n",
+ " print(media)\n",
+ " get_ipython().system_raw(\"rm -f '/root/.nfo'\")\n",
+ " \n",
+ " if save_output_to_file:\n",
+ " txt = path.rpartition('.')[0] + \".txt\"\n",
+ " if os.path.exists(txt):\n",
+ " get_ipython().system_raw(\"rm -f '$txt'\")\n",
+ " with open(txt, 'a+') as file:\n",
+ " file.write(media)\n",
+ " \n",
+ "if not os.path.exists(\"/usr/bin/mediainfo\"):\n",
+ " get_ipython().system_raw(\"apt-get install mediainfo\")\n",
+ " \n",
+ "mediainfo()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Ts6zYXUdEfrz"
+ },
+ "source": [
+ "## Google Drive Downloader "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "B10h_KlyE_S5"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Install the Module \n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "!pip install googleDriveFileDownloader\n",
+ "\n",
+ "path1 = '/content/downloads'\n",
+ "path2 = '/content/downloads/Google Drive'\n",
+ "\n",
+ "if os.path.exists(path1) == False:\n",
+ " os.makedirs(path1)\n",
+ " if os.path.exists(path2) == False:\n",
+ " os.makedirs(path2)\n",
+ " elif os.path.exists(path2) == True:\n",
+ " None\n",
+ "elif os.path.exists(path1) == True:\n",
+ " if os.path.exists(path2) == False:\n",
+ " os.makedirs(path2)\n",
+ " elif os.path.exists(path2) == True:\n",
+ " None\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "mHTDvjRKEs9n"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] Google Drive Downloader \n",
+ "url = \"\" # @param {type:\"string\"}\n",
+ "output = \"\" # @param {type:\"string\"}\n",
+ "# @markdown > If the \"output\" field is empty, the default download path will be used (/content/downloads/Google Drive).
\n",
+ "# @markdown > This downloader is somewhat working.The only problem (for now) is that the downloaded file is not stored with the same name and appears to not have extension as well.\n",
+ "# ================================================================ #\n",
+ "\n",
+ "from IPython.display import clear_output\n",
+ "from googleDriveFileDownloader import googleDriveFileDownloader\n",
+ "\n",
+ "if url == '':\n",
+ " print(\"The url field is empty!\")\n",
+ "else:\n",
+ " if output == '':\n",
+ " output = '/content/downloads/Google Drive'\n",
+ " %cd \"$output\"\n",
+ " a = googleDriveFileDownloader()\n",
+ " a.downloadFile(url)\n",
+ " else:\n",
+ " %cd \"$output\"\n",
+ " a = googleDriveFileDownloader()\n",
+ " a.downloadFile(url)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "FWdEg4H9JlSp"
+ },
+ "source": [
+ "## HandBrake "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "E2seNDqYO8wg"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Install HandBrake \n",
+ "# ================================================================ #\n",
+ "\n",
+ "from IPython.display import clear_output\n",
+ "from os import makedirs\n",
+ "\n",
+ "makedirs(\"/content/temp/HandbrakeTemp\", exist_ok = True)\n",
+ "\n",
+ "!wget -qq https://github.com/vot/ffbinaries-prebuilt/releases/download/v4.2.1/ffmpeg-4.2.1-linux-64.zip \n",
+ "!rm -f ffmpeg-4.2.1-linux-64.zip\n",
+ "!add-apt-repository ppa:stebbins/handbrake-releases -y \n",
+ "!apt-get install -y handbrake-cli\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "CQdjykVdJw0a"
+ },
+ "outputs": [],
+ "source": [
+ "##################################################\n",
+ "#\n",
+ "# Code author: SKGHD\n",
+ "# https://github.com/SKGHD/Handy\n",
+ "#\n",
+ "##################################################\n",
+ "\n",
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] HandBrake \n",
+ "MODE = \"SINGLE\" #@param [\"SINGLE\", \"BATCH\"]\n",
+ "# @markdown > Select mode (batch conversion / single file)\n",
+ "# @markdown ---\n",
+ "SOURCE = \"\" # @param {type:\"string\"}\n",
+ "DESTINATION = \"\" # @param {type:\"string\"}\n",
+ "FORMAT = \"mkv\" # @param [\"mp4\", \"mkv\"]\n",
+ "RESOLUTION = \"480p\" # @param [\"480p\", \"576p\", \"720p\", \"1080p\"]\n",
+ "Encoder = \"x264\" # @param [\"x264\", \"x265\"]\n",
+ "Encoder_Preset = \"ultrafast\" # @param [\"ultrafast\", \"faster\", \"fast\", \"medium\", \"slow\", \"slower\"]\n",
+ "CQ = 30 #@param {type:\"slider\", min:10, max:30, step:1}\n",
+ "# @markdown > Choose Constant Quality Rate (higher quality / smaller file size)\n",
+ "Additional_Flags = \"\" # @param {type:\"string\"}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import smtplib\n",
+ "import os\n",
+ "\n",
+ "formats = ('.mkv','.mp4','.ts','.avi','.mov','.wmv')\n",
+ "\n",
+ "######## Renames the file ########\n",
+ "def fileName(fPath):\n",
+ " tName = fPath.split('/')[-1] \n",
+ " if tName.endswith('ts'):\n",
+ " tName = '[HandBrake] ' + tName[:-3] + f' [{RESOLUTION}] [{Encoder}].{FORMAT}' \n",
+ " else:\n",
+ " tName = '[HandBrake] ' + tName[:-4] + f' [{RESOLUTION}] [{Encoder}].{FORMAT}' \n",
+ " return tName\n",
+ "\n",
+ "def set_resolution():\n",
+ " global w,h,flags\n",
+ " if RESOLUTION == \"480p\":\n",
+ " w, h = \"854\" , \"480\"\n",
+ " if RESOLUTION == \"480p\":\n",
+ " w, h = \"1024\" , \"576\"\n",
+ " elif RESOLUTION == \"720p\":\n",
+ " w, h = \"1280\" , \"720\"\n",
+ " elif RESOLUTION==\"1080p\":\n",
+ " w, h = \"1920\" , \"1080\"\n",
+ "\n",
+ "def addFlags():\n",
+ " global flags\n",
+ " flags = f\" --encoder {Encoder} --all-audio -s '0,1,2,3' --cfr --optimize --quality={CQ} --width={w} --height={h} --format={FORMAT} --encoder-preset={Encoder_Preset} \"\n",
+ " if Additional_Flags != \"\":\n",
+ " flags += str(Additional_Flags)\n",
+ "\n",
+ "set_resolution()\n",
+ "addFlags()\n",
+ "\n",
+ "##### HandBrake and Rclone #####\n",
+ "def runner(path):\n",
+ " f_name = fileName(path)\n",
+ " hTemp=f\"/content/temp/HandbrakeTemp/{f_name}\"\n",
+ " !HandBrakeCLI -i \"$path\" -o \"$hTemp\" $flags\n",
+ "\n",
+ "\n",
+ " if os.path.isfile(hTemp):\n",
+ " print(f\"\\n\\n********** Successfully converted {f_name}\\n Now saving to Destination.....\")\n",
+ " if os.path.exists('/usr/bin/rclone'):\n",
+ " !rclone move \"$hTemp\" --user-agent \"Mozilla\" \"$DESTINATION\" --transfers 20 --checkers 20 --stats-one-line --stats=5s -v --tpslimit 95 --tpslimit-burst 40\n",
+ " else:\n",
+ " dest = DESTINATION+'/'+f_name\n",
+ " !mv \"$hTemp\" \"$dest\"\n",
+ " if os.path.isfile(DESTINATION+ '/' +f_name): \n",
+ " print(f\"\\n\\n********** Successfully saved {f_name} to Destination\")\n",
+ "\n",
+ "########## Check Mode ########\n",
+ "if MODE==\"BATCH\":\n",
+ " os.makedirs(DESTINATION, exist_ok=True)\n",
+ " if SOURCE.endswith('/'):\n",
+ " pass\n",
+ " else: SOURCE +='/'\n",
+ " filesList = os.listdir(SOURCE+'.')\n",
+ " if os.path.isfile(SOURCE+'processed_db.txt'):\n",
+ " pass\n",
+ " else:\n",
+ " with open((SOURCE+'processed_db.txt'), 'w') as fb:\n",
+ " fb.write(\"Do not delete this file until all files have been processed!\\n\")\n",
+ " fb.close()\n",
+ " with open((SOURCE+'processed_db.txt'), \"r+\") as filehandle:\n",
+ " processedList = [x.rstrip() for x in filehandle.readlines()]\n",
+ "\n",
+ " print('<<<<<<<<<<<<<<<<<< Starting Conversion in Batch mode. >>>>>>>>>>>>>>>>>>')\n",
+ "\n",
+ " for currentFile in filesList:\n",
+ " if currentFile.endswith(formats):\n",
+ " if currentFile not in processedList:\n",
+ " currentPath = SOURCE + currentFile \n",
+ " print(f'\\n\\n**************** Current File to process: {currentFile}')\n",
+ " runner(currentPath)\n",
+ " filehandle.write(currentFile+'\\n')\n",
+ " filehandle.close()\n",
+ " \n",
+ "\n",
+ "else:\n",
+ " if SOURCE.endswith(formats): \n",
+ " runner(SOURCE)\n",
+ " else: print(\"Are you sure you have selected the correct file?\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Rd6Br05y7_Ya"
+ },
+ "source": [
+ "## MEGA Downloader "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "LeGWoVGW8Eem"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Install the Module and Dependencies \n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "!pip install git+https://github.com/jeroenmeulenaar/python3-mega.git\n",
+ "\n",
+ "path1 = '/content/downloads'\n",
+ "path2 = '/content/downloads/MEGA'\n",
+ "\n",
+ "if os.path.exists(path1) == False:\n",
+ " os.makedirs(path1)\n",
+ " if os.path.exists(path2) == False:\n",
+ " os.makedirs(path2)\n",
+ " elif os.path.exists(path2) == True:\n",
+ " None\n",
+ "elif os.path.exists(path1) == True:\n",
+ " if os.path.exists(path2) == False:\n",
+ " os.makedirs(path2)\n",
+ " elif os.path.exists(path2) == True:\n",
+ " None\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "JiZ0tJd78LNQ"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] MEGA Downloader \n",
+ "url = \"\" # @param {type:\"string\"}\n",
+ "output = \"\" # @param {type:\"string\"}\n",
+ "# @markdown > If the \"output\" field is empty, the default download path will be used (/content/downloads/MEGA).
\n",
+ "# @markdown > Currently not working due to the module haven't been updated to work with the new MEGA link structure. \n",
+ "# ================================================================ #\n",
+ "\n",
+ "from IPython.display import clear_output\n",
+ "from mega import Mega\n",
+ "\n",
+ "if url == '':\n",
+ " print(\"The url field is empty!\")\n",
+ "else:\n",
+ " if output == '':\n",
+ " output = '/content/downloads/MEGA'\n",
+ " %cd /content/downloads/MEGA\n",
+ " m = Mega.from_ephemeral()\n",
+ " m.download_from_url(url)\n",
+ " else:\n",
+ " %cd \"$output\"\n",
+ " m = Mega.from_ephemeral()\n",
+ " m.download_from_url(url)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "7bNutSOeJ1kM"
+ },
+ "source": [
+ "## zippyshare Downloader "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "if-ge8tzJ305"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Install the Module and Dependencies \n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import sys\n",
+ "import requests\n",
+ "from bs4 import BeautifulSoup\n",
+ "\n",
+ "path1 = '/content/downloads'\n",
+ "path2 = '/content/downloads/zippyshare'\n",
+ "\n",
+ "if os.path.exists(path1) == False:\n",
+ " os.makedirs(path1)\n",
+ " if os.path.exists(path2) == False:\n",
+ " os.makedirs(path2)\n",
+ " elif os.path.exists(path2) == True:\n",
+ " None\n",
+ "elif os.path.exists(path1) == True:\n",
+ " if os.path.exists(path2) == False:\n",
+ " os.makedirs(path2)\n",
+ " elif os.path.exists(path2) == True:\n",
+ " None\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "tO22WPSLKdbH"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] zippyshare Downloader \n",
+ "mode = 'single' #@param [\"single\", \"batch\"]\n",
+ "# @markdown ---\n",
+ "direct_url = \"\" #@param {type:\"string\"}\n",
+ "store_path = \"\" #@param {type:\"string\"}\n",
+ "# @markdown > This downloader isn't working as it can't read from zippyshare's weird url (www(random_number).zippyshare)\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import sys\n",
+ "import requests\n",
+ "from bs4 import BeautifulSoup\n",
+ "\n",
+ "if mode == 'single':\n",
+ " if direct_url == '':\n",
+ " print(\"The URL field is empty!\")\n",
+ " else:\n",
+ " url = direct_url\n",
+ " url_content = requests.get(url).content\n",
+ " soup = BeautifulSoup(url_content,'html.parser')\n",
+ "\n",
+ " x = list(soup.find_all('script',type='text/javascript'))\n",
+ " xx = []\n",
+ "\n",
+ " for i in x:\n",
+ " xx.append(str(i))\n",
+ "\n",
+ " for j in xx:\n",
+ " if '51245' in j:\n",
+ " thing = j\n",
+ " break\n",
+ " else:\n",
+ " pass\n",
+ "\n",
+ " thing_stripped = thing.strip()\n",
+ "\n",
+ " ylist = thing_stripped.split('/')\n",
+ "\n",
+ " url_initial = url.split('/')[2]\n",
+ "\n",
+ " file_id = ylist[3]\n",
+ "\n",
+ " unique_code = ylist[4].strip(\" '\\\" ()+\")\n",
+ " unique_code0 = eval(unique_code)\n",
+ "\n",
+ " game_name = ylist[-2].strip('\";\\n}< ')\n",
+ " parsed_link = f'{url_initial}/d/{file_id}/{unique_code0}/{game_name}'\n",
+ " direct_url = parsed_link\n",
+ " \n",
+ " if store_path == '':\n",
+ " store_path = '/content/downloads/zippyshare'\n",
+ " !wget -P {store_path} {direct_url}\n",
+ " else:\n",
+ " !wget -P {store_path} {direct_url}\n",
+ "elif mode == 'batch':\n",
+ " print(\"Upload a download.txt file that contains a list of zippyshare links.\\n\")\n",
+ " files.upload()\n",
+ " clear_output()\n",
+ " if store_path == '':\n",
+ " store_path = '/content/downloads/zippyshare'\n",
+ " !plowdown {direct_url} -o {store_path}\n",
+ " else:\n",
+ " !plowdown {direct_url} -o {store_path}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "pUODCRACrvGC"
+ },
+ "source": [
+ "## Penetration Testing "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "5Lo-h1Cnrxou"
+ },
+ "source": [
+ "### hashcat \n",
+ "GPU runtime needed! "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "dWFBQvMVOJv0"
+ },
+ "source": [
+ "This block is unlikely going to make any progress as the learning curve of hashcat is quite steep..."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "LPxKv5DAr3KV"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Install hashcat \n",
+ "# ================================================================ #\n",
+ "\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "!apt install cmake build-essential -y && apt install checkinstall git -y && git clone https://github.com/hashcat/hashcat.git && cd hashcat && git submodule update --init && make && make install \n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "SeubAcoyxCsw"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] hashcat Bechmark \n",
+ "# ================================================================ #\n",
+ "\n",
+ "!hashcat -b"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "HwRqNJoYR4Us"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] hashcat \n",
+ "hash = \"\" # @param {type:\"string\"}\n",
+ "output = \"\" # @param {type:\"string\"}\n",
+ "# @markdown > The output field is currently there just as a placeholder.
\n",
+ "# @markdown ---\n",
+ "hash_type = 'WPA-EAPOL-PBKDF2' #@param [\"MD5\", \"SHA1\", \"WPA-EAPOL-PBKDF2\"]\n",
+ "attack_mode = 'dictionary' #@param [\"dictionary\", \"combination\", \"mask\", \"hybrid_wordlist_+_mask\", \"hybrid_mask_+_wordlist\"]\n",
+ "wordlist = \"\" # @param {type:\"string\"}\n",
+ "# @markdown > Enter the path to your wordlist (only used when the dictionary attack is chosen).
\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import IPython\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "if hash == '':\n",
+ " print(\"The hash field is empty!\")\n",
+ "\n",
+ "if output == '':\n",
+ " output = '/content/hashcat_output.txt'\n",
+ "\n",
+ "placeholder = 'This cell is not complete yet and could be dropped/abandoned at any time.'\n",
+ "\n",
+ "if hash_type == 'MD5' or hash_type == 'SHA1':\n",
+ " print(placeholder)\n",
+ "elif hash_type == 'WPA-EAPOL-PBKDF2':\n",
+ " hash_type = 2500\n",
+ " if attack_mode == 'dictionary':\n",
+ " attack_mode = 0\n",
+ " if wordlist == '':\n",
+ " print(\"The wordlist field is empty!\")\n",
+ " else:\n",
+ " !hashcat -m {hash_type} -a {attack_mode} {hash} {wordlist} -o {output} --force\n",
+ " elif attack_mode == 'combination' or attack_mode == 'mask' or attack_mode == 'hybrid_wordlist_+_mask' or attack_mode == 'hybrid_mask_+_wordlist':\n",
+ " print(placeholder)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "2EIvy2zb8re1"
+ },
+ "outputs": [],
+ "source": [
+ "!hashcat -m 2500 -a 0 /content/test.hccapx /content/downloads/rockyou.txt -d 1 -o /content/test.txt "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "gdgYuWnst4ed"
+ },
+ "source": [
+ "## ProxyBroker "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "SuLleS03tzjn"
+ },
+ "outputs": [],
+ "source": [
+ "!pip install proxybroker"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "1czv6VpwuJs8"
+ },
+ "outputs": [],
+ "source": [
+ "\"\"\"Find 10 working HTTP(S) proxies and save them to a file.\"\"\"\n",
+ "\n",
+ "import asyncio\n",
+ "from proxybroker import Broker\n",
+ "\n",
+ "\n",
+ "async def save(proxies, filename):\n",
+ " \"\"\"Save proxies to a file.\"\"\"\n",
+ " with open(filename, 'w') as f:\n",
+ " while True:\n",
+ " proxy = await proxies.get()\n",
+ " if proxy is None:\n",
+ " break\n",
+ " proto = 'https' if 'HTTPS' in proxy.types else 'http'\n",
+ " row = '%s://%s:%d\\n' % (proto, proxy.host, proxy.port)\n",
+ " f.write(row)\n",
+ "\n",
+ "\n",
+ "def main():\n",
+ " proxies = asyncio.Queue()\n",
+ " broker = Broker(proxies)\n",
+ " tasks = asyncio.gather(broker.find(types=['HTTP', 'HTTPS'], limit=10),\n",
+ " save(proxies, filename='proxies.txt'))\n",
+ " loop = asyncio.get_event_loop()\n",
+ "# loop.run_until_complete(tasks)\n",
+ "\n",
+ "\n",
+ "if __name__ == '__main__':\n",
+ " main()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "TxQiE-LXjnAb"
+ },
+ "source": [
+ "## Prawler "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "rq0caJ3njq08"
+ },
+ "outputs": [],
+ "source": [
+ "!pip install Prawler"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "Gl8Efvbzjs3T"
+ },
+ "outputs": [],
+ "source": [
+ "import Prawler\n",
+ "\n",
+ "proxy_list = Prawler.get_proxy_list(5, \"http\", \"elite\", \"US\")\n",
+ "\n",
+ "print(proxy_list)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "JyUn6Yn8lM_c"
+ },
+ "source": [
+ "## Free-Proxy "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "K8qprse5lLcb"
+ },
+ "outputs": [],
+ "source": [
+ "!pip install free-proxy"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "jR8t0j98lWG1"
+ },
+ "outputs": [],
+ "source": [
+ "from fp.fp import FreeProxy\n",
+ "\n",
+ "proxy = FreeProxy(country_id=['US', 'AU', 'CA', 'SG', 'JP', 'KR'], timeout=1, rand=False).get()\n",
+ "\n",
+ "print(proxy)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "jmlQ0JeXyH9j"
+ },
+ "source": [
+ "## madodl "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "PZrpgJGe59yp"
+ },
+ "source": [
+ "## code-server "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "yzLxqKex6BQ6"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Install code-server \n",
+ "# ================================================================ #\n",
+ "\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "!pip install colabcode\n",
+ "\n",
+ "# clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "-LB2nKez6XOz"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] code-server \n",
+ "# @markdown > Please note that while running this cell, you cannot run other cell until you stop this one first.\n",
+ "# ================================================================ #\n",
+ "\n",
+ "from colabcode import ColabCode\n",
+ "\n",
+ "# Run VSCode with password\n",
+ "# ColabCode(port=10000, password=\"12345\")\n",
+ "\n",
+ "# Run VSCode without password\n",
+ "ColabCode()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "eWs-zl2gNvwW"
+ },
+ "source": [
+ "## Create/Extract Archive "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "XO8dzdyyH5pT"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Create Archive \n",
+ "MODE = \"ZIP\" #@param [\"ZIP\", \"TAR\", \"7Z\"]\n",
+ "FILENAME = \"\" # @param {type:\"string\"}\n",
+ "PATH_TO_FILE = \"\" # @param {type:\"string\"}\n",
+ "ARCHIVE_PASSWORD = \"\" #@param {type:\"string\"}\n",
+ "\n",
+ "# option supports b k m g (bytes, kilobytes, megabytes, gigabytes)\n",
+ "SPLIT = \"no\" #@param [\"1g\", \"2g\", \"3g\", \"4g\", \"5g\", \"no\"]\n",
+ "\n",
+ "compress = 4#@param {type:\"slider\", min:0, max:9, step:0}\n",
+ "#@markdown > Use the character `|` to separate paths. (Example `path/to /1 | path/to/2`)\n",
+ "# ================================================================ #\n",
+ "\n",
+ "from pathlib import PurePosixPath\n",
+ "\n",
+ "pathList = PATH_TO_FILE.split('|')\n",
+ "if MODE == \"ZIP\":\n",
+ " if not FILENAME:\n",
+ " FILENAME = \"/content/NEW_FILE.ZIP\"\n",
+ " if ARCHIVE_PASSWORD:\n",
+ " passADD = f'--password \"{ARCHIVE_PASSWORD}\"'\n",
+ " else:\n",
+ " passADD = ''\n",
+ " splitC = f\"-s {SPLIT}\" if not 'no' in SPLIT else \"\" \n",
+ " for part in pathList:\n",
+ " pathdic = PurePosixPath(part.strip())\n",
+ " parent = pathdic.parent\n",
+ " partName = pathdic.parts[-1]\n",
+ " cmd = f'cd \"{parent}\" && zip {passADD} -{compress} {splitC} -v -r -u \"{FILENAME}\" \"{partName}\"'\n",
+ " !$cmd\n",
+ "elif MODE == \"TAR\":\n",
+ " if not FILENAME:\n",
+ " FILENAME = \"/content/NEW_FILE\"\n",
+ " cmd = f'GZIP=-{compress} tar -zcvf \"{FILENAME}.tar.gz\" {PATH_TO_FILE}'\n",
+ " !$cmd\n",
+ "else:\n",
+ " if not FILENAME:\n",
+ " FILENAME = \"/content/NEW_FILE\"\n",
+ " for part in pathList:\n",
+ " pathdic = PurePosixPath(part.strip())\n",
+ " parent = pathdic.parent\n",
+ " partName = pathdic.parts[-1]\n",
+ " cmd = f'cd \"{parent}\" && 7z a -mx={compress} \"{FILENAME}.7z\" \"{partName}\"'\n",
+ " !$cmd\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "k98WImeXH5pK"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Extract Archive \n",
+ "MODE = \"7Z\" # @param [\"UNZIP\", \"UNTAR\", \"UNRAR\", \"7Z\"]\n",
+ "PATH_TO_FILE = \"\" # @param {type:\"string\"}\n",
+ "extractPath = \"\" # @param {type:\"string\"}\n",
+ "ARCHIVE_PASSWORD = \"\" #@param {type:\"string\"}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os, urllib.request\n",
+ "HOME = os.path.expanduser(\"~\")\n",
+ "\n",
+ "if not os.path.exists(f\"{HOME}/.ipython/mixlab.py\"):\n",
+ " hCode = \"https://raw.githubusercontent.com/shirooo39/\" \\\n",
+ " \"MiXLab/master/resources/mixlab.py\"\n",
+ " urllib.request.urlretrieve(hCode, f\"{HOME}/.ipython/mixlab.py\")\n",
+ "\n",
+ "from mixlab import (\n",
+ " runSh,\n",
+ " checkAvailable,\n",
+ ")\n",
+ "\n",
+ "def extractFiles():\n",
+ " global extractPath\n",
+ " if ARCHIVE_PASSWORD:\n",
+ " passADD = f'-P {ARCHIVE_PASSWORD}'\n",
+ " else:\n",
+ " passADD = ''\n",
+ " if not extractPath:\n",
+ " extractPath = \"/content/extract\"\n",
+ " os.makedirs(extractPath, exist_ok=True)\n",
+ " if MODE == \"UNZIP\":\n",
+ " runSh('unzip '+passADD+f' \"{PATH_TO_FILE}\" -d \"{extractPath}\"', output=True)\n",
+ " elif MODE == \"UNRAR\":\n",
+ " runSh(f'unrar x \"{PATH_TO_FILE}\" \"{extractPath}\" '+passADD+' -o+', output=True)\n",
+ " elif MODE == \"UNTAR\":\n",
+ " runSh(f'tar -C \"{extractPath}\" -xvf \"{PATH_TO_FILE}\"', output=True)\n",
+ " else:\n",
+ " runSh(f'7z x \"{PATH_TO_FILE}\" -o{extractPath} '+passADD, output=True)\n",
+ "\n",
+ "extractFiles()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "wBCtu4fMAwRn"
+ },
+ "source": [
+ "## 4chan-downloader "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "0w-c_xBUBCXN"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Clone 4chan-downloader \n",
+ "# ================================================================ #\n",
+ "\n",
+ "import IPython\n",
+ "import os.path\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "if os.path.exists(\"/content/tools/4chan-downloader/inb4404.py\"):\n",
+ " print(\"Hey, Anon-kun/chan!\\n\\nDid you know that you already have cloned the 4chan-downloader?\\nNo need to do that again, you know...\\n\\n(How do I know that? Well, I can os.path.exists the file inb4404.py, so... yeah)\")\n",
+ "else:\n",
+ " !git clone https://github.com/Exceen/4chan-downloader.git /content/tools/4chan-downloader\n",
+ " clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "VkBNduaUBg6S"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] 4chan-downloader \n",
+ "automatically_clear_output = False #@param {type:\"boolean\"}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import IPython\n",
+ "import os.path\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "if os.path.exists(\"/content/tools/4chan-downloader/inb4404.py\"):\n",
+ " !python /content/tools/4chan-downloader/inb4404.py -h\n",
+ " if automatically_clear_output == True:\n",
+ " clear_output()\n",
+ "else:\n",
+ " print(\"Hey, Anon-kun/chan... I can't find the inb4404.py.\\n\\nHave you run the cell above this one?\\nIf you haven't already, run the cell above first.\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "um3eitPj0QWG"
+ },
+ "source": [
+ "## Instagram Scraper "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "dqFUrm7M3B4j"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Install Instagram Scraper \n",
+ "# ================================================================ #\n",
+ "\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "%pip install instagram-scraper\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "wk2bY_l00Sq3"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] Instagram Scraper \n",
+ "target_username = \"\" #@param {type:\"string\"}\n",
+ "# @markdown ---\n",
+ "# @markdown
In case if the account is private, you will need to authenticate using your account. \n",
+ "your_username = \"\" #@param {type:\"string\"}\n",
+ "your_password = \"\" #@param {type:\"string\"}\n",
+ "use_login = False #@param {type:\"boolean\"}\n",
+ "# @markdown ---\n",
+ "# @markdown
Options: \n",
+ "download_path = \"\" #@param {type:\"string\"}\n",
+ "download_mode = 'default' #@param [\"default\", \"image_only\", \"video_only\", \"story_only\", \"broadcast_only\"]\n",
+ "silent_mode = False #@param {type:\"boolean\"}\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import sys\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "path1 = \"/content/downloads/\"\n",
+ "path2 = \"/content/downloads/instagram-scraper/\"\n",
+ "silent = \"\"\n",
+ "\n",
+ "if download_path != \"\":\n",
+ " pass\n",
+ "elif download_path == \"\":\n",
+ " if os.path.exists(path1) == False:\n",
+ " if os.path.exists(path2) == False:\n",
+ " os.makedirs(path1)\n",
+ " os.makedirs(path2)\n",
+ " elif os.path.exists(path2) == True:\n",
+ " os.makedirs(path1)\n",
+ " elif os.path.exists(path1) == True:\n",
+ " if os.path.exists(path2) == False:\n",
+ " os.makedirs(path2)\n",
+ " elif os.path.exists(path2) == True:\n",
+ " download_path = path2\n",
+ "\n",
+ "if download_mode == \"default\":\n",
+ "\tdownload_mode = \"\"\n",
+ "elif download_mode == \"image_only\":\n",
+ "\tdownload_mode = \"image\"\n",
+ "elif download_mode == \"video_only\":\n",
+ "\tdownload_mode = \"video\"\n",
+ "elif download_mode == \"story_only\":\n",
+ "\tdownload_mode = \"story\"\n",
+ "elif download_mode == \"broadcast_only\":\n",
+ "\tdownload_mode = \"broadcast\"\n",
+ "\n",
+ "if silent_mode == True:\n",
+ "\tsilent = \"-q\"\n",
+ "else:\n",
+ "\tsilent = \"\"\n",
+ "\n",
+ "if target_username == \"\":\n",
+ " sys.exit(\"No target username to download is given.\")\n",
+ "else:\n",
+ " if use_login == True:\n",
+ " if your_username == \"\" and your_password == \"\":\n",
+ " sys.exit(\"The username and password fields are empty!\")\n",
+ " elif your_username == \"\" and your_password != \"\":\n",
+ " sys.exit(\"The username field is empty!\")\n",
+ " elif your_username != \"\" and your_password == \"\":\n",
+ " sys.exit(\"The password field is empty!\")\n",
+ " else:\n",
+ " !instagram-scraper \"$target_username\" -u \"$your_username\" -p \"$your_password\" -d \"$download_path\" -n -t \"$download_mode\" \"$silent\"\n",
+ " else:\n",
+ " !instagram-scraper \"$target_username\" -d \"$download_path\" -n -t \"$download_mode\" \"$silent_mode\"\n",
+ "\n",
+ "print(\"\")\n",
+ "print(\"==================================================\")\n",
+ "print(\"Downloaded files are stored in\", download_path + target_username)\n",
+ "print(\"==================================================\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "OgTz1FAtcHJH"
+ },
+ "outputs": [],
+ "source": [
+ "!instagram-scraper \"\" -u \"\" -p \"\" -d \"\" -n -t image"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "3PmqhlfgKj85"
+ },
+ "source": [
+ "## instaloader "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "XH3kLNW9KoRf"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Install instaloader \n",
+ "# ================================================================ #\n",
+ "\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "!pip3 install instaloader\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "avemAgewKydt"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] instaloader \n",
+ "target_username = \"\" #@param {type:\"string\"}\n",
+ "# @markdown ---\n",
+ "# @markdown
Options: \n",
+ "use_login = False #@param {type:\"boolean\"}\n",
+ "download_path = \"\" #@param {type:\"string\"}\n",
+ "# @markdown > If the download path is not specified, the default one will be used.\"/content/downloads/instaloader/username\"\n",
+ "# ================================================================ #\n",
+ "\n",
+ "import os\n",
+ "import sys\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "if download_path != \"\":\n",
+ " pass\n",
+ "elif download_path == \"\":\n",
+ " path1 = \"/content/downloads/\"\n",
+ " path2 = \"/content/downloads/instaloader/\"\n",
+ " if os.path.exists(path1) == False:\n",
+ " if os.path.exists(path2) == False:\n",
+ " os.makedirs(path1)\n",
+ " os.makedirs(path2)\n",
+ " elif os.path.exists(path2) == True:\n",
+ " os.makedirs(path1)\n",
+ " elif os.path.exists(path1) == True:\n",
+ " if os.path.exists(path2) == False:\n",
+ " os.makedirs(path2)\n",
+ " elif os.path.exists(path2) == True:\n",
+ " download_path = path2\n",
+ "\n",
+ "if target_username == \"\":\n",
+ " sys.exit(\"No target username to download is given.\")\n",
+ "else:\n",
+ " if use_login == True:\n",
+ " username = input(\"Enter your username: \")\n",
+ " username = \"--login=\" + username\n",
+ " %cd \"$download_path\"\n",
+ " clear_output()\n",
+ " !instaloader --fast-update \"$target_username\" \"$username\"\n",
+ " else:\n",
+ " %cd \"$download_path\"\n",
+ " clear_output()\n",
+ " !instaloader \"$target_username\"\n",
+ "\n",
+ "print(\"\")\n",
+ "print(\"==================================================\")\n",
+ "print(\"Downloaded files are stored in\", download_path + target_username)\n",
+ "print(\"==================================================\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "bpnK5DH0VBs6"
+ },
+ "outputs": [],
+ "source": [
+ "# Copy session from local to google drive\n",
+ "!cp -a /root/.config/instaloader/ /content/drive/MyDrive/instaloader-session"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "0Jej2XFqWI4O"
+ },
+ "outputs": [],
+ "source": [
+ "# Copy session from google drive to local\n",
+ "!cp -a /content/drive/MyDrive/instaloader-session /root/.config/instaloader"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "2b4Igr8g0duu"
+ },
+ "source": [
+ "## ecchi.iwara-dl "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "__PBrzCP0fPf"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Clone] ecchi.iwara-dl \n",
+ "# ================================================================ #\n",
+ "\n",
+ "import IPython\n",
+ "from IPython.display import HTML, clear_output\n",
+ "\n",
+ "!apt-get install -y jq\n",
+ "!apt-get install python3-bs4\n",
+ "!git clone https://github.com/hare1039/iwara-dl /content/tools/iwara-dl\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "lLU9f0EH0mZN"
+ },
+ "outputs": [],
+ "source": [
+ "!bash /content/tools/iwara-dl/iwara-dl.sh [-u [U]] [-p [P]] [-i [n]] [-rhftcsdn] [url [url ...]]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "iHyp2Bgkx1B2"
+ },
+ "source": [
+ "## UUP Dump "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "uoSecUJvx4za"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← Install the Requirements \n",
+ "# ================================================================ #\n",
+ "\n",
+ "import IPython\n",
+ "from IPython.display import clear_output\n",
+ "\n",
+ "!sudo apt-get install aria2 cabextract wimtools chntpw genisoimage\n",
+ "!git clone https://github.com/uup-dump/converter \"/content/tools/uup-dump/converter\"\n",
+ "\n",
+ "clear_output()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "6NqUFFnBx9C5"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# @markdown ← [Start] UUP Dump \n",
+ "script_location = \"\" #@param {type:\"string\"}\n",
+ "# @markdown > Only type in the script's path and exclude the script's name. Type in: /content/path/to/script Exclude: uup_download_linux.sh\n",
+ "# ================================================================ #\n",
+ "\n",
+ "if not script_location == \"\":\n",
+ " pass\n",
+ "else:\n",
+ " script_location = \"/content\"\n",
+ "\n",
+ "%cd \"$script_location\"\n",
+ "\n",
+ "!bash \"uup_download_linux.sh\"\n",
+ "\n",
+ "%cd \"/content\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "KDFWgCYE0ULQ"
+ },
+ "outputs": [],
+ "source": [
+ "# ============================= FORM ============================= #\n",
+ "# Custom commands goes here\n",
+ "# ================================================================ #\n",
+ "\n"
+ ]
+ },
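+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# A minimal sketch of the file/path checker mentioned in the TO DO note below.\n",
+ "# This helper is only an illustration and not part of the original notebook; the\n",
+ "# function name and behaviour are assumptions about how such a checker could look.\n",
+ "import os\n",
+ "import sys\n",
+ "\n",
+ "def ensure_path(path, create=False):\n",
+ "    \"\"\"Return the path if it exists; optionally create it, otherwise abort.\"\"\"\n",
+ "    if os.path.exists(path):\n",
+ "        return path\n",
+ "    if create:\n",
+ "        os.makedirs(path, exist_ok=True)\n",
+ "        return path\n",
+ "    sys.exit(\"Path does not exist: \" + path)\n",
+ "\n",
+ "# Example: make sure the default download folder exists before a downloader cell uses it.\n",
+ "# ensure_path(\"/content/downloads/\", create=True)"
+ ]
+ },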
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "6uDnLXX40fv6"
+ },
+ "source": [
+ "TO DO:\n",
+ "\n",
+ "- Add files and paths checker ot make sure they are exist"
+ ]
+ }
+ ],
+ "metadata": {
+ "colab": {
+ "collapsed_sections": [
+ "ygDyFQvR5Gci",
+ "XU_IUOV6owRg",
+ "uzuVvbfSo16m",
+ "e-0yDs4C0HkB",
+ "21-Wb8ywqQeJ",
+ "GaegjvHPPW9q",
+ "4sXeh7Tdx1v-",
+ "477G4hACPgqM",
+ "KTXERiVMIKgw",
+ "wkc0wCvPIUFh",
+ "YSEyWbWfY9qx",
+ "UFQoYxRKAclf",
+ "c1N141ZcEdwd",
+ "O0NHwsI_-d3W",
+ "4DdRcv08fzTG",
+ "o_uCXhC1S0GZ",
+ "nGKbLp4P8MXi",
+ "l8uIsoVrC6to",
+ "YMSrqjUm_bDN",
+ "N09EnjlB6wuV",
+ "bZ-Z0cUdz7IL",
+ "ERBVA5aIERou",
+ "bEYznPNQ61sm",
+ "1mctlRk1TTrc",
+ "dEq11jIB5oee",
+ "Ci0HTN9Xyxze",
+ "tL-ilxH0N_B9",
+ "QOyo5zf4suod",
+ "FejGUkxPhDmE",
+ "_GVSJ9jdn6lW",
+ "OJBVlUw-kKyt",
+ "yqY0BtjuGS78",
+ "nFrxKe_52fSj",
+ "Ssn-ZMNcv5UQ",
+ "iLcAVtWT4NTC",
+ "bQ73mxqlpNjb",
+ "UU-y9pOU4sRB",
+ "EpwNYbcfRvcl",
+ "5CWw65NugcjI",
+ "3AbFcLJr5PHk",
+ "pIk3H6xUic8a",
+ "LOmbPf7Tihne",
+ "paeY4yX7jNd1",
+ "j-PgCLYrZFbm",
+ "TgwoGxAitg0y",
+ "xmq_9AJCtvlV",
+ "nUI7G8OSSXbM",
+ "aStiEPlnDoeY",
+ "d7hdxEjc-ynr",
+ "Jbw2QIUB6JKR",
+ "e-OWHJwruE6V",
+ "AMu9crpy-7yb",
+ "uQT6GEq9Na_E",
+ "FdDNhzc0NdeS",
+ "_wlFbVS6JcSL",
+ "WaSgbPEch7KH",
+ "Th3Qyn2uttiW",
+ "CKxGMNKUJloT",
+ "COqwo7iH6_vu",
+ "JM1Do14AKIdF",
+ "0vHRnizI9BXA",
+ "9JBIZh3OZBaL",
+ "2zGMePbPQJWI",
+ "eaUJNGmju5G6",
+ "xzeZBOnhyKPy",
+ "NgCsGSiDu1bY",
+ "OOpAjMjxsNd6",
+ "UdiQLlm5zX3_",
+ "EFOqhHG6hOVH",
+ "ey6-UveDalxR",
+ "GahMjYf8miNs",
+ "NQ0TxfKeghR8",
+ "Ja95mvvq8oei",
+ "CD36vcpf2FSb",
+ "RDHuIkoi6l9a",
+ "66I2t2sQ2SMq",
+ "NObEcBWAJoaz",
+ "FpJXJiRl6-gK",
+ "SNDGdMRn3PA-",
+ "KFcIThDuBii_",
+ "2f-THZmDoOaY",
+ "MSUasbRUDP3B",
+ "9UagRtLPyKoQ",
+ "9ZcgdPBT2SQK",
+ "7-3O4en4C4IL",
+ "VRk2Ye1exWVA",
+ "tozwpAhhnm69",
+ "Ts6zYXUdEfrz",
+ "FWdEg4H9JlSp",
+ "Rd6Br05y7_Ya",
+ "7bNutSOeJ1kM",
+ "pUODCRACrvGC",
+ "5Lo-h1Cnrxou",
+ "gdgYuWnst4ed",
+ "TxQiE-LXjnAb",
+ "JyUn6Yn8lM_c",
+ "jmlQ0JeXyH9j",
+ "PZrpgJGe59yp",
+ "eWs-zl2gNvwW",
+ "wBCtu4fMAwRn",
+ "um3eitPj0QWG",
+ "3PmqhlfgKj85",
+ "2b4Igr8g0duu",
+ "iHyp2Bgkx1B2"
+ ],
+ "include_colab_link": true,
+ "name": "MiXLab",
+ "provenance": [],
+ "toc_visible": true
+ },
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.1"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}