File tree
551 files changed (+818, -18140 lines changed)
- guides
- ipynb
- keras_cv
- keras_nlp
- keras_cv
- keras_hub
- keras_nlp
- md
- keras_cv
- keras_hub
- keras_nlp
- redirects
- api
- keras_cv
- bounding_box
- formats
- utils
- clip_to_image
- compute_iou
- convert_format
- to_dense
- to_ragged
- validate_format
- layers
- augmentation
- aug_mix
- auto_contrast
- channel_shuffle
- cut_mix
- fourier_mix
- grid_mask
- jittered_resize
- mix_up
- rand_augment
- random_augmentation_pipeline
- random_channel_shift
- random_color_degeneration
- random_cutout
- random_hue
- random_saturation
- random_sharpness
- random_shear
- solarization
- preprocessing
- equalization
- grayscale
- posterization
- resizing
- regularization
- drop_path
- dropblock2d
- squeeze_and_excite_2d
- stochastic_depth
- losses
- binary_focal_crossentropy
- ciou_loss
- focal_loss
- giou_loss
- iou_loss
- simclr_loss
- smoothl1_loss
- models
- backbones
- csp_darknet
- densenet
- efficientnet_lite
- efficientnet_v1
- efficientnet_v2
- mix_transformer
- mobilenet_v3
- resnet_v1
- resnet_v2
- vgg16
- vitdet
- yolo_v8
- tasks
- basnet_segmentation
- deeplab_v3_segmentation
- feature_extractor
- image_classifier
- retinanet
- segformer_segmentation
- segment_anything
- stable_diffusion
- yolo_v8_detector
- keras_hub
- base_classes
- backbone
- causal_lm_preprocessor
- causal_lm
- masked_lm_preprocessor
- masked_lm
- preprocessor
- seq_2_seq_lm_preprocessor
- seq_2_seq_lm
- task
- text_classifier_preprocessor
- text_classifier
- upload_preset
- layers
- fnet_encoder
- mlm_head
- mlm_mask_generator
- multi_segment_packer
- position_embedding
- sine_position_encoding
- start_end_packer
- token_and_position_embedding
- transformer_decoder
- transformer_encoder
- metrics
- perplexity
- modeling_layers
- alibi_bias
- cached_multi_head_attention
- fnet_encoder
- masked_lm_head
- position_embedding
- reversible_embedding
- rotary_embedding
- sine_position_encoding
- token_and_position_embedding
- transformer_decoder
- transformer_encoder
- models
- albert
- albert_backbone
- albert_masked_lm_preprocessor
- albert_masked_lm
- albert_text_classifier_preprocessor
- albert_text_classifier
- albert_tokenizer
- bart
- bart_backbone
- bart_seq_2_seq_lm_preprocessor
- bart_seq_2_seq_lm
- bart_tokenizer
- bert
- bert_backbone
- bert_masked_lm_preprocessor
- bert_masked_lm
- bert_text_classifier_preprocessor
- bert_text_classifier
- bert_tokenizer
- bloom
- bloom_backbone
- bloom_causal_lm_preprocessor
- bloom_causal_lm
- bloom_tokenizer
- deberta_v3
- deberta_v3_backbone
- deberta_v3_masked_lm_preprocessor
- deberta_v3_masked_lm
- deberta_v3_text_classifier_preprocessor
- deberta_v3_text_classifier
- deberta_v3_tokenizer
- distil_bert
- distil_bert_backbone
- distil_bert_masked_lm_preprocessor
- distil_bert_masked_lm
- distil_bert_text_classifier_preprocessor
- distil_bert_text_classifier
- distil_bert_tokenizer
- electra
- electra_backbone
- electra_tokenizer
- f_net
- f_net_backbone
- f_net_masked_lm_preprocessor
- f_net_masked_lm
- f_net_text_classifier_preprocessor
- f_net_text_classifier
- f_net_tokenizer
- falcon
- falcon_backbone
- falcon_causal_lm_preprocessor
- falcon_causal_lm
- falcon_tokenizer
- gemma
- gemma_backbone
- gemma_causal_lm_preprocessor
- gemma_causal_lm
- gemma_tokenizer
- gpt2
- gpt2_backbone
- gpt2_causal_lm_preprocessor
- gpt2_causal_lm
- gpt2_tokenizer
- llama3
- llama3_backbone
- llama3_causal_lm_preprocessor
- llama3_causal_lm
- llama3_tokenizer
- llama
- llama_backbone
- llama_causal_lm_preprocessor
- llama_causal_lm
- llama_tokenizer
- mistral
- mistral_backbone
- mistral_causal_lm_preprocessor
- mistral_causal_lm
- mistral_tokenizer
- opt
- opt_backbone
- opt_causal_lm_preprocessor
- opt_causal_lm
- opt_tokenizer
- pali_gemma
- pali_gemma_backbone
- pali_gemma_causal_lm_preprocessor
- pali_gemma_causal_lm
- pali_gemma_tokenizer
- phi3
- phi3_backbone
- phi3_causal_lm_preprocessor
- phi3_causal_lm
- phi3_tokenizer
- roberta
- roberta_backbone
- roberta_masked_lm_preprocessor
- roberta_masked_lm
- roberta_text_classifier_preprocessor
- roberta_text_classifier
- roberta_tokenizer
- xlm_roberta
- xlm_roberta_backbone
- xlm_roberta_masked_lm_preprocessor
- xlm_roberta_masked_lm
- xlm_roberta_text_classifier_preprocessor
- xlm_roberta_text_classifier
- xlm_roberta_tokenizer
- preprocessing_layers
- masked_lm_mask_generator
- multi_segment_packer
- random_deletion
- random_swap
- start_end_packer
- samplers
- beam_sampler
- contrastive_sampler
- greedy_sampler
- random_sampler
- samplers
- top_k_sampler
- top_p_sampler
- tokenizers
- byte_pair_tokenizer
- byte_tokenizer
- compute_sentence_piece_proto
- compute_word_piece_vocabulary
- sentence_piece_tokenizer
- tokenizer
- unicode_codepoint_tokenizer
- word_piece_tokenizer
- keras_nlp
- base_classes
- backbone
- causal_lm_preprocessor
- causal_lm
- masked_lm_preprocessor
- masked_lm
- preprocessor
- seq_2_seq_lm_preprocessor
- seq_2_seq_lm
- task
- text_classifier_preprocessor
- text_classifier
- upload_preset
- layers
- fnet_encoder
- mlm_head
- mlm_mask_generator
- multi_segment_packer
- position_embedding
- sine_position_encoding
- start_end_packer
- token_and_position_embedding
- transformer_decoder
- transformer_encoder
- metrics
- perplexity
- modeling_layers
- alibi_bias
- cached_multi_head_attention
- fnet_encoder
- masked_lm_head
- position_embedding
- reversible_embedding
- rotary_embedding
- sine_position_encoding
- token_and_position_embedding
- transformer_decoder
- transformer_encoder
- models
- albert
- albert_backbone
- albert_masked_lm_preprocessor
- albert_masked_lm
- albert_text_classifier_preprocessor
- albert_text_classifier
- albert_tokenizer
- bart
- bart_backbone
- bart_seq_2_seq_lm_preprocessor
- bart_seq_2_seq_lm
- bart_tokenizer
- bert
- bert_backbone
- bert_masked_lm_preprocessor
- bert_masked_lm
- bert_text_classifier_preprocessor
- bert_text_classifier
- bert_tokenizer
- bloom
- bloom_backbone
- bloom_causal_lm_preprocessor
- bloom_causal_lm
- bloom_tokenizer
- deberta_v3
- deberta_v3_backbone
- deberta_v3_masked_lm_preprocessor
- deberta_v3_masked_lm
- deberta_v3_text_classifier_preprocessor
- deberta_v3_text_classifier
- deberta_v3_tokenizer
- distil_bert
- distil_bert_backbone
- distil_bert_masked_lm_preprocessor
- distil_bert_masked_lm
- distil_bert_text_classifier_preprocessor
- distil_bert_text_classifier
- distil_bert_tokenizer
- electra
- electra_backbone
- electra_tokenizer
- f_net
- f_net_backbone
- f_net_masked_lm_preprocessor
- f_net_masked_lm
- f_net_text_classifier_preprocessor
- f_net_text_classifier
- f_net_tokenizer
- falcon
- falcon_backbone
- falcon_causal_lm_preprocessor
- falcon_causal_lm
- falcon_tokenizer
- gemma
- gemma_backbone
- gemma_causal_lm_preprocessor
- gemma_causal_lm
- gemma_tokenizer
- gpt2
- gpt2_backbone
- gpt2_causal_lm_preprocessor
- gpt2_causal_lm
- gpt2_tokenizer
- llama3
- llama3_backbone
- llama3_causal_lm_preprocessor
- llama3_causal_lm
- llama3_tokenizer
- llama
- llama_backbone
- llama_causal_lm_preprocessor
- llama_causal_lm
- llama_tokenizer
- mistral
- mistral_backbone
- mistral_causal_lm_preprocessor
- mistral_causal_lm
- mistral_tokenizer
- opt
- opt_backbone
- opt_causal_lm_preprocessor
- opt_causal_lm
- opt_tokenizer
- pali_gemma
- pali_gemma_backbone
- pali_gemma_causal_lm_preprocessor
- pali_gemma_causal_lm
- pali_gemma_tokenizer
- phi3
- phi3_backbone
- phi3_causal_lm_preprocessor
- phi3_causal_lm
- phi3_tokenizer
- roberta
- roberta_backbone
- roberta_masked_lm_preprocessor
- roberta_masked_lm
- roberta_text_classifier_preprocessor
- roberta_text_classifier
- roberta_tokenizer
- xlm_roberta
- xlm_roberta_backbone
- xlm_roberta_masked_lm_preprocessor
- xlm_roberta_masked_lm
- xlm_roberta_text_classifier_preprocessor
- xlm_roberta_text_classifier
- xlm_roberta_tokenizer
- preprocessing_layers
- masked_lm_mask_generator
- multi_segment_packer
- random_deletion
- random_swap
- start_end_packer
- samplers
- beam_sampler
- contrastive_sampler
- greedy_sampler
- random_sampler
- samplers
- top_k_sampler
- top_p_sampler
- tokenizers
- byte_pair_tokenizer
- byte_tokenizer
- compute_sentence_piece_proto
- compute_word_piece_vocabulary
- sentence_piece_tokenizer
- tokenizer
- unicode_codepoint_tokenizer
- word_piece_tokenizer
- keras_tuner
- errors
- hypermodels
- base_hypermodel
- hyper_efficientnet
- hyper_image_augment
- hyper_resnet
- hyper_xception
- hyperparameters
- oracles
- base_oracle
- bayesian
- grid
- hyperband
- random
- synchronized
- tuners
- base_tuner
- bayesian
- grid
- hyperband
- objective
- random
- sklearn
- guides
- keras_cv
- classification_with_keras_cv
- custom_image_augmentations
- cut_mix_mix_up_and_rand_augment
- generate_images_with_stable_diffusion
- object_detection_keras_cv
- retina_net_overview
- segment_anything_in_keras_cv
- semantic_segmentation_deeplab_v3_plus
- keras_hub
- classification_with_keras_hub
- getting_started
- segment_anything_in_keras_hub
- semantic_segmentation_deeplab_v3
- stable_diffusion_3_in_keras_hub
- transformer_pretraining
- upload
- keras_nlp
- getting_started
- transformer_pretraining
- upload
- keras_tuner
- custom_tuner
- distributed_tuning
- failed_trials
- getting_started
- tailor_the_search_space
- visualize_tuning
- keras_cv
- keras_nlp
- scripts
- templates
- api
- keras_cv
- layers
- preprocessing
- regularization
- metrics
- models
- keras_hub
- metrics
- models
- bert
- distil_bert
- roberta
- xlm_roberta
- keras_nlp
- layers
- metrics
- models
- bert
- distil_bert
- roberta
- xlm_roberta
- tokenizers
- utils
- examples/audio
- guides
- keras_cv
- keras_hub
- keras_nlp
- keras_tuner
- keras_cv
- keras_hub
- api
- base_classes
- layers
- metrics
- modeling_layers
- models
- preprocessing_layers
- samplers
- tokenizers
- utils
- presets
- keras_nlp
- keras_tuner
- api
- hypermodels
- oracles
- tuners