Draft
Changes from all commits
109 commits
59e93d3
added vertex eval package to kernel
olex-snk Jul 4, 2025
180c5a5
added evaluation_adk_agent_with_vertex_ai notebook
olex-snk Jul 4, 2025
704b5d8
updated ipynb
olex-snk Jul 7, 2025
793b631
cleaned notebook
olex-snk Jul 7, 2025
3fcc052
fix style
olex-snk Jul 7, 2025
5cb2aca
revert
olex-snk Jul 7, 2025
6107516
style_fix
olex-snk Jul 7, 2025
1b0afd2
cleaned
olex-snk Jul 14, 2025
d71ed00
cleaned_and_style_fix
olex-snk Jul 14, 2025
c8dd5b4
reformatted_v1
olex-snk Jul 14, 2025
c2aa5b0
fixed_nest_asyncio_text
olex-snk Jul 14, 2025
b7c8dd4
added_check_for_bucket_exist
olex-snk Jul 14, 2025
bb446b3
added helper functions info
olex-snk Jul 14, 2025
4e5cb66
fixed_new_bucked_creation
olex-snk Jul 15, 2025
f4bb6b5
remove the necessity of nest_asyncio
takumiohym Jul 16, 2025
df73ec3
removed redunant description
olex-snk Aug 12, 2025
ca05f6f
Resolved merger conflict
olex-snk Jan 15, 2026
03cc401
fixed env and uuid generation
olex-snk Jan 15, 2026
f82ce63
cleaned
olex-snk Jan 15, 2026
5258916
WIP: testing modular approach
takumiohym Jan 2, 2026
47739f8
remove irrelevant files
takumiohym Jan 3, 2026
da402db
Integrated setup_env.sh into a single file
takumiohym Jan 3, 2026
7895a4f
add dev
takumiohym Jan 5, 2026
bff7edd
Add python version explicitly
takumiohym Jan 5, 2026
2627eb2
update make dev
takumiohym Jan 5, 2026
10bb9ee
rename sub projects
takumiohym Jan 5, 2026
43cbe37
remove unintended file
takumiohym Jan 5, 2026
a35c76a
Update README.md
takumiohym Jan 5, 2026
40fcac8
Update README.md
takumiohym Jan 5, 2026
d7db3ae
Update README.md
takumiohym Jan 5, 2026
0df8de7
clean up setup process and scripts
takumiohym Jan 5, 2026
430365b
Update README.md
takumiohym Jan 5, 2026
8ef968f
Update README.md
takumiohym Jan 5, 2026
dd0ee81
Updated requirements files
takumiohym Jan 6, 2026
53cb7fb
Updated Python version to 3.12
takumiohym Jan 6, 2026
7470601
rename contents to notebooks
takumiohym Jan 6, 2026
469a804
Update KFP custom module to Python 3.12
takumiohym Jan 6, 2026
071f140
Update README.md
takumiohym Jan 7, 2026
6887f29
Rename asl_agent to asl_genai
takumiohym Jan 8, 2026
3126f48
Removed requirements-common.txt
takumiohym Jan 8, 2026
5773835
update asl_genai requirements.txt
takumiohym Jan 13, 2026
e217900
remove sklean dependency
takumiohym Jan 13, 2026
d1783c5
fix pre-commit
takumiohym Jan 14, 2026
130b7a0
Add automated steps & moved to service account setup
takumiohym Feb 3, 2026
7477101
Clean up the precommit config
takumiohym Feb 3, 2026
a9f9e55
Merge pull request #774 from GoogleCloudPlatform/cleanup_precommit
takumiohym Feb 3, 2026
c78d80c
Add nltk download
takumiohym Feb 3, 2026
d81668a
Update setup message
takumiohym Feb 3, 2026
60cd652
Update README
takumiohym Feb 3, 2026
2560a52
Update Agent Engine notebook
takumiohym Feb 5, 2026
7a811d2
Update adk web instruction
takumiohym Feb 5, 2026
72568d0
cherry pick #775
takumiohym Feb 5, 2026
12b3b76
Merge pull request #769 from GoogleCloudPlatform/adk_web_update
takumiohym Feb 5, 2026
16aeebf
Merge pull request #770 from GoogleCloudPlatform/update_agent_engine
takumiohym Feb 6, 2026
6a3ad7a
disable idle shutdown
takumiohym Feb 8, 2026
937ca5e
create safety adk notebook
takumiohym Feb 5, 2026
b989789
Reflected feedback
takumiohym Feb 5, 2026
acd92b7
Merge pull request #771 from GoogleCloudPlatform/safety_adk
takumiohym Feb 10, 2026
47ce343
update gemini version
sanjanalreddy Feb 11, 2026
8655346
Merge pull request #777 from GoogleCloudPlatform/update_gemini_version
sanjanalreddy Feb 11, 2026
2f1decb
Merge branch 'new_env_test' into adk_agent_evaluation_vertex_ai
takumiohym Feb 12, 2026
cffe3c2
update requirement in asl_genai
takumiohym Feb 12, 2026
ca06827
Fixed gcloud config command
takumiohym Feb 12, 2026
f76669b
Update gcloud config list to gcloud config get-value
takumiohym Feb 16, 2026
1618a31
remove a tags
takumiohym Feb 16, 2026
77d1fcc
fix md syntax
takumiohym Feb 16, 2026
e13387e
Merge pull request #781 from GoogleCloudPlatform/fix_md_syntax
takumiohym Feb 16, 2026
33b4d19
Merge pull request #782 from GoogleCloudPlatform/fix_adk_md
takumiohym Feb 16, 2026
ac91957
Add telemetry API and ModelArmor API enablements
takumiohym Feb 16, 2026
c87b1f3
Merge branch 'new_env_test' of https://github.com/GoogleCloudPlatform…
takumiohym Feb 16, 2026
a71213b
Update default kernelspecs
takumiohym Feb 16, 2026
69c8fe8
Add GPU (T4) support on the setup script
takumiohym Feb 16, 2026
7be281c
Merge pull request #780 from GoogleCloudPlatform/update_gcloud_config
takumiohym Feb 16, 2026
a0304cb
Merge pull request #783 from GoogleCloudPlatform/update_kenelspec
takumiohym Feb 16, 2026
5cdfae6
Add GPU instruction on README.md
takumiohym Feb 17, 2026
d4a9dc1
add seaborn to asl_core
takumiohym Feb 17, 2026
50f3924
add load_ext google.cloud.bigquery
takumiohym Feb 17, 2026
d4f1779
downgrade setuptools for tensorboard compatibility
takumiohym Feb 17, 2026
68cc3ad
Add fsspec module
takumiohym Feb 17, 2026
2511fc7
Add gcsfs module
takumiohym Feb 17, 2026
20c574d
update cicd kfp container to aligh with the new env
takumiohym Feb 17, 2026
cb306e4
remove k8s modules
takumiohym Feb 17, 2026
4cec856
specified google-cloud-aiplatform version
takumiohym Feb 18, 2026
fee8315
Merge pull request #788 from GoogleCloudPlatform/remove_k8s
takumiohym Feb 18, 2026
28e38de
Merge pull request #785 from GoogleCloudPlatform/add_load_bq
takumiohym Feb 18, 2026
1032080
Merge pull request #784 from GoogleCloudPlatform/add_gpu
takumiohym Feb 18, 2026
6ec4eae
fix numpy convert issue
takumiohym Feb 18, 2026
3df9e0a
fix wrong space in markdown cells
takumiohym Feb 18, 2026
8bb73d4
fix shape for the updated maplotlib version
takumiohym Feb 18, 2026
7097d24
update image path in flowers.csv
takumiohym Feb 18, 2026
4eae802
update path in tfrecords_tfdata.ipynb
takumiohym Feb 18, 2026
9bc68f4
Disable running shutdown
takumiohym Feb 18, 2026
f75a197
update data path
takumiohym Feb 19, 2026
c43fe08
update numpy method
takumiohym Feb 19, 2026
f97e65c
fix word2vec issue due to python version change
takumiohym Feb 19, 2026
0484903
Merge branch 'new_env_test' of https://github.com/GoogleCloudPlatform…
takumiohym Feb 19, 2026
5dcbc63
reflected feedback & updated gitignore
takumiohym Feb 20, 2026
9c6ee00
Merge pull request #792 from GoogleCloudPlatform/bert_path_update
takumiohym Feb 20, 2026
2b64bc2
Merge pull request #787 from GoogleCloudPlatform/cicd_env_update
takumiohym Feb 20, 2026
81a1ccf
Merge pull request #790 from GoogleCloudPlatform/update_flower_filepath
takumiohym Feb 21, 2026
20139c8
Merge branch 'new_env_test' into adk_agent_evaluation_vertex_ai
olex-snk Feb 21, 2026
36b9061
Merge pull request #620 from GoogleCloudPlatform/adk_agent_evaluation…
olex-snk Feb 21, 2026
b76e3fd
fix rand generation
takumiohym Feb 23, 2026
62fc121
Merge remote-tracking branch 'origin' into new_env_test
takumiohym Mar 4, 2026
fedb6f7
fix setup flow regarding GPU
takumiohym Mar 9, 2026
62f7cdf
Merge branch 'new_env_test' of https://github.com/GoogleCloudPlatform…
takumiohym Mar 9, 2026
b988110
fix lab discrepancy on keras sequential API
takumiohym Feb 25, 2026
4a7dcda
fix semantic search notebook
takumiohym Mar 9, 2026
43f148d
Merge pull request #789 from GoogleCloudPlatform/modelmonitoring_nump…
takumiohym Mar 9, 2026
3 changes: 3 additions & 0 deletions .gitignore
@@ -149,3 +149,6 @@ dmypy.json
cython_debug/

*_kernel/

# Dataset
bert_dataset/
18 changes: 7 additions & 11 deletions .pre-commit-config.yaml
@@ -26,24 +26,20 @@ repos:
- repo: https://github.com/nbQA-dev/nbQA
rev: 1.9.1
hooks:
- id: nbqa-black
args: [-l80]
- id: nbqa-pyupgrade
args: [--py36-plus]
- id: nbqa-isort
args: [--profile=black, -l80]
- repo: https://github.com/PyCQA/isort
rev: 7.0.0
hooks:
- id: isort
args: ["--profile", "black", "-l80"]
- repo: https://github.com/psf/black-pre-commit-mirror
rev: 25.12.0
hooks:
- id: black
args: [-l 80]
- id: black-jupyter
args: [-l 80]
types_or: [python, pyi, jupyter]
- repo: https://github.com/psf/black
rev: 25.12.0
hooks:
- id: black
args: [-l 80]
types_or: [python, pyi, jupyter]
- repo: https://github.com/asottile/pyupgrade
rev: v3.21.2
hooks:
99 changes: 55 additions & 44 deletions Makefile
@@ -1,4 +1,4 @@
# Copyright 2021 Google LLC. All Rights Reserved.
# Copyright 2025 Google LLC. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -12,51 +12,62 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
all: clean install

kernels: \
object_detection_kernel \
pytorch_kfp_kernel \
tf_privacy_kernel
SHELL := /bin/bash
export PATH := $(HOME)/.local/bin:$(PATH)
SETUP_SCRIPT = ./scripts/setup_kernel.sh
PYTHON_VERSION = 3.12

PROJECTS = \
asl_core:asl_core:"ASL Core" \
asl_genai:asl_genai:"ASL Gen AI" \
asl_mlops:asl_mlops:"ASL MLOps"

.PHONY: all install clean setup-apt setup-ide setup build-kernels

all: setup build-kernels install-pre-commit

install: all

.PHONY: clean
clean:
@find . -name '*.pyc' -delete
@find . -name '*.pytest_cache' -delete
@find . -name '__pycache__' -delete
@find . -name '*egg-info' -delete

.PHONY: install
install:
@pip install --user -U pip
@pip install --user "Cython<3"
@pip install --user -e .
@pip install --user --no-deps -r requirements-without-deps.txt
@./scripts/setup_on_jupyterlab.sh
@pre-commit install
@sudo apt-get update
@sudo apt-get -y install graphviz

.PHONY: precommit
precommit:
@pre-commit run --all-files

.PHONY: object_detection_kernel
object_detection_kernel:
./kernels/object_detection.sh

.PHONY: pytorch_kfp_kernel
pytorch_kfp_kernel:
./kernels/pytorch_kfp.sh

.PHONY: tf_privacy_kernel
tf_privacy_kernel:
./kernels/tf_privacy.sh

.PHONY: keras_cv_kernel
keras_cv_kernel:
./kernels/keras_cv.sh

.PHONY: tests
tests:
pytest tests/unit
@find . -type d -name '*.egg-info' -exec rm -rf {} +

@for config in $(PROJECTS); do \
IFS=: read -r dir name disp <<< "$$config"; \
bash $(SETUP_SCRIPT) $$dir $$name "$$disp" remove; \
done

setup-apt:
$(eval TOKEN=$(shell curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token | sed 's/.*"access_token":"\([^"]*\)".*/\1/'))
@export CLOUDSDK_AUTH_ACCESS_TOKEN=$(TOKEN); \
export GOOGLE_APPLICATION_CREDENTIALS=""; \
sudo rm -f /etc/apt/sources.list.d/yarn.list /usr/share/keyrings/yarn.gpg; \
curl -fsSL https://dl.yarnpkg.com/debian/pubkey.gpg | sudo gpg --dearmor -o /usr/share/keyrings/yarn.gpg; \
echo "deb [signed-by=/usr/share/keyrings/yarn.gpg] https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list; \
sudo apt-get update

setup-ide:
@if command -v code-oss-cloud-workstations > /dev/null; then \
echo "Installing Workstation extensions..."; \
code-oss-cloud-workstations --install-extension ms-python.python --force; \
code-oss-cloud-workstations --install-extension ms-toolsai.jupyter --force; \
fi

setup: setup-apt setup-ide
sudo apt-get -y install graphviz
@command -v uv >/dev/null 2>&1 || curl -LsSf https://astral.sh/uv/install.sh | sh
uv python install $(PYTHON_VERSION)
uv tool install jupyter-core --with jupyter-client
@grep -q "local/bin" ~/.bashrc || echo 'export PATH="$$HOME/.local/bin:$$PATH"' >> ~/.bashrc

build-kernels:
@for config in $(PROJECTS); do \
IFS=: read -r dir name disp <<< "$$config"; \
bash $(SETUP_SCRIPT) $$dir $$name "$$disp"; \
done

install-pre-commit:
uv tool install pre-commit
uv tool run pre-commit install
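The new `clean` and `build-kernels` targets both loop over `PROJECTS`, splitting each colon-separated entry into a directory, kernel name, and display name before passing them to `scripts/setup_kernel.sh`. A minimal sketch of that parsing (the entry value is illustrative; in the Makefile the display-name field additionally carries literal quotes):

```shell
# Each PROJECTS entry has the form dir:name:display; `IFS=: read`
# splits it into three variables, as in the Makefile loops (requires bash
# for the <<< herestring).
config='asl_genai:asl_genai:ASL Gen AI'
IFS=: read -r dir name disp <<< "$config"
echo "dir=$dir name=$name disp=$disp"
```

Because the display name is the last field, it can safely contain spaces: `read` assigns the remainder of the line to the final variable.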
91 changes: 62 additions & 29 deletions README.md
@@ -1,60 +1,93 @@
# Advanced Solutions Lab

## Overview
This repository contains Jupyter notebooks meant to be run on Vertex AI. This is maintained by Google Cloud’s [Advanced Solutions Lab (ASL)](https://cloud.google.com/asl) team. [Vertex AI](https://cloud.google.com/vertex-ai) is the next generation AI Platform on the Google Cloud Platform.
The material covered in this repo will take a software engineer with no exposure to machine learning to an advanced level.

In particular, the notebooks in this repository cover
- A wide range of model architectures (DNN, CNN, RNN, transformers, SNGP, etc.) targeting many data modalities (tabular, image, text, time-series) implemented mainly in Tensorflow and Keras.
- Tools on Google Cloud’s Vertex AI for operationalizing Tensorflow, Scikit-learn and PyTorch models at scale (e.g. Vertex training, tuning, and serving, TFX and Kubeflow pipelines).

If you are new to machine learning or Vertex AI start here: [Introduction to TensorFlow](https://github.com/GoogleCloudPlatform/asl-ml-immersion/tree/master/notebooks/introduction_to_tensorflow)
This repository contains AI and Machine Learning contents meant to be run on Google Cloud. This is maintained by Google Cloud’s [Advanced Solutions Lab (ASL)](https://cloud.google.com/asl) team.

This repository contains 3 main modules covering various AI/ML topics:
- `asl_core`: A wide range of model architectures (DNN, CNN, RNN, transformers, SNGP, etc.) targeting many data modalities (tabular, image, text, time-series) implemented mainly in Tensorflow and Keras.
- `asl_mlops`: Tools on Google Cloud’s Vertex AI for operationalizing Tensorflow, Scikit-learn and PyTorch models at scale (e.g. Vertex training, tuning, and serving, TFX and Kubeflow pipelines).
- `asl_genai`: Generative AI and agent systems using Gemini and agentic frameworks like Google ADK.

## Repository Structure
All notebooks are in the notebooks folder. This folder is organized by different ML topics. Each folder contains a `labs` and a `solutions` folder. Use the `labs` notebooks to test your coding skills by filling in TODOs and refer to the notebooks in the `solutions` folder to verify your code.
Each module (`asl_core`, `asl_mlops`, `asl_genai`) has its own environment and materials, organized in its own directory.

All learning materials are in the contents folder. This folder is organized by different topics. Each folder contains a `labs` and a `solutions` folder. Use the `labs` notebooks to test your coding skills by filling in TODOs and refer to the notebooks in the `solutions` folder to verify your code.

We have three main folders described below:

```
├── kernels - contains kernel scripts needed for certain notebooks in lab folder
├── notebooks - contains labs and solutions notebook organized by topic
│ ├── bigquery
│ ├── building_production_ml_systems
│ ├── docker_and_kubernetes
│ ├── . . .
├── scripts - contains setup scripts for enabling and setting up services on Vertex AI
├── asl_core
│ ├── notebooks - contains learning materials organized by topic
│ │ ├── building_production_ml_systems
│ │ │ ├── labs
│ │ │ └── solutions
│ │ ├── end-to-end-structured
│ │ ├── image_models
│ │ ├── ...
│ ├── kernels - contains kernel scripts needed for certain notebooks
│ ├── scaffolds - contains sample code to accelerate AI/ML projects
│ ├── requirements.txt - dependencies for this module
├── asl_mlops
│ ├── ...
├── asl_genai
│ ├── ...
├── ...
```

For a more detailed breakdown of the notebooks in this repo, please refer to this [readme](https://github.com/GoogleCloudPlatform/asl-ml-immersion/blob/master/notebooks/README.md).


## Environment Setup (Vertex AI)

First, open [CloudShell](https://cloud.google.com/shell) and run the following instructions:
## Environment Setup
### Step 1. Run the Setup Script on Cloud Shell
This repository is tested on Vertex AI Workbench and Cloud Workstations. To begin, run the setup script in [Cloud Shell](https://shell.cloud.google.com) to configure essential project infrastructure (APIs, IAM, Buckets).

Run the setup script in [Cloud Shell](https://shell.cloud.google.com) to provision your environment.
```bash
git clone https://github.com/GoogleCloudPlatform/asl-ml-immersion.git
cd asl-ml-immersion
./scripts/setup_on_cloudshell.sh
bash scripts/setup_env.sh
```

Second, follow the instruction of [the official documentation](https://cloud.google.com/vertex-ai/docs/workbench/instances/create-console-quickstart) to set up a JupyterLab instance on [Vertex AI Workbench Instance](https://cloud.google.com/vertex-ai/docs/workbench/instances/introduction).
You will be prompted to select the environment to set up:
* **1) Vertex AI Workbench:** Setup Vertex AI Workbench.
* **2) Cloud Workstations:** Setup Cloud Workstations.
* **3) Setup both:** Setup both environments.
* **4) Skip:** Setup project infrastructure (APIs, IAM, Buckets) only.

By selecting options 1-3, you can set up the environment automatically; alternatively, select 4 and set up the environment manually following the official documentation:
* **Vertex AI Workbench:** [Create a user-managed notebook instance](https://cloud.google.com/vertex-ai/docs/workbench/instances/create-console-quickstart)
* **Cloud Workstations:** [Create a workstation](https://docs.cloud.google.com/workstations/docs/create-workstation)

**Note:** Accelerators (GPU/TPU) are not required in most of the labs, but some notebooks recommend using them.
Next, you will be asked whether you want to attach a GPU (NVIDIA T4) to the environment. Select `y` or `n` depending on your preference.

After creating a Vertex Workbench Instance, open the terminal *in your JupyterLab instance* and run the following commands:
**Note:** Accelerators (GPU/TPU) are not required in most of the notebooks, but some notebooks recommend using them.

### Step 2. Build the Environment
Once your environment is running, open it. Then run the commands below in a terminal **inside the environment** to clone this repository and build the environment (venvs and Jupyter kernels).

```bash
git clone https://github.com/GoogleCloudPlatform/asl-ml-immersion.git
cd asl-ml-immersion
export PATH=$PATH:~/.local/bin
make install
make
```

On Cloud Workstations, click `Open Folder` -> `asl-ml-immersion` to open the repository window. If the folder is already opened, press `Command + Shift + P` and type `Developer: Reload Window` to pick up the changes.

## Using the Environment
### Running a notebook
After the setup above, you can open a Jupyter notebook file and execute it on a module kernel (`ASL Core`, `ASL MLOps`, or `ASL Gen AI`). <br>
If a correct kernel is not pre-selected, click `Select Kernel` and select a correct one.

**On Cloud Workstations**, you can find a kernel under `Select Kernel` -> `Jupyter Kernels`. <br>
If you can't find `Jupyter Kernels`, click `Python Environment` -> `<- (Left Arrow)` to reload the environment.

**Note**: Some notebooks might require additional setup; please refer to the instructions in the specific notebooks.

After running these commands, you can open and execute a notebook on the base "Python 3" kernel.
### Running a command on Terminal
When running a command from the terminal, make sure to activate the venv of the relevant module.

For example (from the asl-ml-immersion directory):
```bash
source ./asl_genai/.venv/bin/activate
adk web ./asl_genai/notebooks/vertex_genai/solutions/adk_agents
```

## Contributions
Currently, only Googlers can contribute to this repo. See [CONTRIBUTING.md](https://github.com/GoogleCloudPlatform/asl-ml-immersion/blob/master/CONTRIBUTING.md) for more details on the contribution workflow.
File renamed without changes.
File renamed without changes.
File renamed without changes.
@@ -46,6 +46,7 @@
"%env PATH=/home/jupyter/.local/bin:{PATH}\n",
"\n",
"%load_ext tensorboard\n",
"%load_ext google.cloud.bigquery\n",
"warnings.filterwarnings(\"ignore\")\n",
"os.environ[\"TF_CPP_MIN_LOG_LEVEL\"] = \"2\""
]
@@ -1240,9 +1241,9 @@
"uri": "us-docker.pkg.dev/deeplearning-platform-release/gcr.io/workbench-notebooks:m132"
},
"kernelspec": {
"display_name": "Python 3 (ipykernel) (Local)",
"display_name": "ASL Core",
"language": "python",
"name": "conda-base-py"
"name": "asl_core"
},
"language_info": {
"codemirror_mode": {
@@ -884,9 +884,9 @@
"uri": "us-docker.pkg.dev/deeplearning-platform-release/gcr.io/workbench-notebooks:m131"
},
"kernelspec": {
"display_name": "Python 3 (ipykernel) (Local)",
"display_name": "ASL Core",
"language": "python",
"name": "conda-base-py"
"name": "asl_core"
},
"language_info": {
"codemirror_mode": {
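The kernelspec changes above switch the notebooks from the default `conda-base-py` kernel to a module-specific `asl_core` kernel. For Jupyter to resolve that kernel name, a `kernel.json` must exist on disk; a hypothetical sketch of its shape is below (the actual file, including the real interpreter path inside the module venv, is generated by `scripts/setup_kernel.sh`, not by this snippet):

```shell
# Write a hypothetical kernel.json for an "ASL Core" kernel; field names
# follow the standard Jupyter kernelspec format, but the argv path is a
# placeholder rather than the real venv interpreter.
cat <<'EOF' > /tmp/kernel.json
{
  "argv": ["python", "-m", "ipykernel_launcher", "-f", "{connection_file}"],
  "display_name": "ASL Core",
  "language": "python"
}
EOF
grep -o '"display_name": "ASL Core"' /tmp/kernel.json
```

The `display_name` is what appears in the notebook UI's kernel picker, while the `name` (the directory the file lives in, e.g. `asl_core`) is what the notebook metadata above refers to.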
@@ -48,6 +48,7 @@
"%env PATH=/home/jupyter/.local/bin:{PATH}\n",
"\n",
"%load_ext tensorboard\n",
"%load_ext google.cloud.bigquery\n",
"warnings.filterwarnings(\"ignore\")\n",
"os.environ[\"TF_CPP_MIN_LOG_LEVEL\"] = \"2\""
]
@@ -1244,9 +1245,9 @@
"uri": "us-docker.pkg.dev/deeplearning-platform-release/gcr.io/workbench-notebooks:m132"
},
"kernelspec": {
"display_name": "Python 3 (ipykernel) (Local)",
"display_name": "ASL Core",
"language": "python",
"name": "conda-base-py"
"name": "asl_core"
},
"language_info": {
"codemirror_mode": {
@@ -873,9 +873,9 @@
"uri": "us-docker.pkg.dev/deeplearning-platform-release/gcr.io/workbench-notebooks:m131"
},
"kernelspec": {
"display_name": "Python 3 (ipykernel) (Local)",
"display_name": "ASL Core",
"language": "python",
"name": "conda-base-py"
"name": "asl_core"
},
"language_info": {
"codemirror_mode": {
@@ -422,9 +422,9 @@
"uri": "us-docker.pkg.dev/deeplearning-platform-release/gcr.io/workbench-notebooks:m121"
},
"kernelspec": {
"display_name": "Python 3 (ipykernel) (Local)",
"display_name": "ASL Core",
"language": "python",
"name": "conda-base-py"
"name": "asl_core"
},
"language_info": {
"codemirror_mode": {
@@ -51,7 +51,8 @@
"source": [
"import os\n",
"\n",
"from google.cloud import bigquery"
"from google.cloud import bigquery\n",
"%load_ext google.cloud.bigquery"
]
},
{
@@ -67,7 +68,7 @@
"metadata": {},
"outputs": [],
"source": [
"PROJECT = !gcloud config list --format 'value(core.project)'\n",
"PROJECT = !gcloud config get-value project\n",
"PROJECT = PROJECT[0]\n",
"BUCKET = PROJECT\n",
"REGION = \"us-central1\"\n",
@@ -471,9 +472,9 @@
"uri": "us-docker.pkg.dev/deeplearning-platform-release/gcr.io/workbench-notebooks:m121"
},
"kernelspec": {
"display_name": "Python 3 (ipykernel) (Local)",
"display_name": "ASL Core",
"language": "python",
"name": "conda-base-py"
"name": "asl_core"
},
"language_info": {
"codemirror_mode": {
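Earlier in this file's diff, `gcloud config list --format 'value(core.project)'` is replaced with the more direct `gcloud config get-value project`; both print the active project ID on stdout, which the notebook captures into `PROJECT` via IPython's `!` shell syntax. A shell sketch of the same capture pattern, with `echo` standing in for `gcloud` (which is not assumed to be installed here):

```shell
# In the notebook: PROJECT = !gcloud config get-value project
# Shell equivalent of capturing the single-line project ID:
PROJECT=$(echo "my-sample-project")  # stand-in for: $(gcloud config get-value project)
echo "Active project: $PROJECT"
```

The notebook then indexes `PROJECT[0]` because IPython's `!` capture returns a list of output lines; the `$( ... )` form above yields the string directly.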