<div class="k-default-codeblock">

```
  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 254.7/254.7 kB 18.3 MB/s eta 0:00:00
Building wheels for collected packages: groundingdino
  Building wheel for groundingdino (setup.py) ... done
  Created wheel for groundingdino: filename=groundingdino-0.1.0-cp310-cp310-linux_x86_64.whl size=3038498 sha256=1e7306dfa5ebd4bebb340bfe814e13026800708bbc0223d37ae8963e90145fb2
  Stored in directory: /tmp/pip-ephem-wheel-cache-multbs74/wheels/6b/06/d7/b57f601a4df56af41d262a5b1b496359b13c323bf5ef0434b2
```

</div>
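The log above is the tail end of building Grounding DINO from source. A minimal sketch of the install step it corresponds to, assuming the official IDEA-Research repository (the exact command from the original cell is an assumption):

```python
# Install Grounding DINO from its GitHub repository; pip compiles a wheel,
# which is what produces the "Building wheel for groundingdino" log above.
!pip install -q git+https://github.com/IDEA-Research/GroundingDINO.git
```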
<div class="k-default-codeblock">

```
UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3609.)

final text_encoder_type: bert-base-uncased

UserWarning:
Error while fetching `HF_TOKEN` secret value from your vault: 'Requesting secret HF_TOKEN timed out. Secrets can only be fetched when running from the Colab UI.'.
You are not authenticated with the Hugging Face Hub in this notebook.
If the error persists, please let us know by opening an issue on GitHub (https://github.com/huggingface/huggingface_hub/issues/new).

FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
```

</div>
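The `final text_encoder_type: bert-base-uncased` line above is printed while the Grounding DINO model is constructed, since it loads a BERT text encoder to embed the prompt. A minimal sketch of that construction; the config and checkpoint paths below are placeholders, not the guide's exact values:

```python
from groundingdino.util.inference import Model as GroundingDINO

# Placeholder paths: point these at the cloned repo's SwinT config and a
# downloaded Grounding DINO checkpoint.
CONFIG_PATH = "GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py"
WEIGHTS_PATH = "groundingdino_swint_ogc.pth"

# Constructing the model loads the BERT text encoder and triggers the
# Hugging Face Hub warnings shown above.
grounding_dino = GroundingDINO(CONFIG_PATH, WEIGHTS_PATH)
```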
Let's load an image of a dog for this part!
```python
# Fetch and display the sample image. The `keras.utils.get_file` call matches
# the download log below; the rest of the loading code is an assumption.
filepath = keras.utils.get_file(
    origin="https://storage.googleapis.com/keras-cv/test-images/mountain-dog.jpeg"
)
image = np.array(keras.utils.load_img(filepath))

plt.figure(figsize=(10, 10))
plt.imshow(image / 255.0)
plt.axis("on")
plt.show()
```
<div class="k-default-codeblock">

```
Downloading data from https://storage.googleapis.com/keras-cv/test-images/mountain-dog.jpeg
1236492/1236492 ━━━━━━━━━━━━━━━━━━━━ 0s 0us/step

WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
```

</div>
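The output below comes from the text-prompted step itself: Grounding DINO turns a caption into bounding boxes, and SAM turns those boxes into a segmentation mask. A rough sketch of that step, assuming the `grounding_dino` model from above and a Keras SAM model held in `model` (both names, and the box-to-prompt reshaping, are assumptions rather than the guide's verbatim code):

```python
import numpy as np

# Grounding DINO: text prompt -> bounding boxes in xyxy pixel coordinates.
detections, _ = grounding_dino.predict_with_caption(
    image.astype(np.uint8), caption="dog"
)
boxes = np.array(detections.xyxy)

# SAM: box prompt -> mask logits. SAM consumes box prompts as pairs of corner
# points, hence the reshape to (batch, num_boxes, 2, 2).
outputs = model.predict(
    {
        "images": image[np.newaxis, ...],
        "boxes": boxes.reshape(1, -1, 2, 2),
    }
)
```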
<div class="k-default-codeblock">

```
FutureWarning: The `device` argument is deprecated and will be removed in v5 of Transformers.

UserWarning: torch.utils.checkpoint: the use_reentrant parameter should be passed explicitly. In version 2.4 we will raise an exception if use_reentrant is not passed. use_reentrant=False is recommended, but if you need to preserve the current default behavior, you can pass use_reentrant=True. Refer to docs for more details on the differences between the two variants.

UserWarning: None of the inputs have requires_grad=True. Gradients will be None

FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.

 1/1 ━━━━━━━━━━━━━━━━━━━━ 10s 10s/step
```

</div>
And that's it! We got a segmentation mask for our text prompt using the combination of Grounding DINO + SAM! Combining different models like this is a very powerful technique for expanding the range of applications!
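To draw the mask we first need it in binary form. A minimal sketch of extracting it from the SAM outputs, assuming the output dictionary exposes `"masks"` (mask logits) and `"iou_pred"` (per-mask quality scores); both key names are assumptions:

```python
# Pick the mask with the highest predicted IoU and threshold its logits.
best = np.argmax(outputs["iou_pred"][0])          # index of the best candidate mask
mask = np.array(outputs["masks"][0][best]) > 0.0  # logits -> boolean mask
```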
```python
# Overlay the predicted mask on the image. The plotting details here are
# assumptions; only `plt.axis("off")` and `plt.show()` are from the original cell.
plt.figure(figsize=(10, 10))
plt.imshow(image / 255.0)
plt.imshow(mask, cmap="Greens", alpha=0.4)
plt.axis("off")
plt.show()
```
<div class="k-default-codeblock">

```
WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
```

</div>