Description
What would you like to happen?
The `VertexAIImageEmbeddings` transform suggests using the embedding model `multimodalembedding@001`.
The multimodal embedding generator can have both image and contextual text embeddings generated at the same time.
However, if you try to submit with a `contextual_text` column:
```python
mm_embedding_transform = VertexAIImageEmbeddings(
    model_name=text_embedding_model_name,
    columns=['image', 'contextual_text'],
    # columns=['image'],  # image-only works
    dimension=1408,
    project=project_id)
```
it errors out with `AttributeError: 'str' object has no attribute '_gcs_uri'` (while running `Embedding/RunInference/BeamML_RunInference`), because the transform treats every configured column as an image and tries to read a GCS URI from the plain text string.
The feature request is to add support for generating image and contextual text embeddings in a single transform.
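A hypothetical sketch of the column routing the fixed transform could use: send the image column and the text column together in one embedding request, so the text value is never treated as an image. `FakeEmbeddingClient` and `embed_row` are illustrative stand-ins, not existing Beam or Vertex AI APIs:

```python
from typing import Dict, List, Optional


class FakeEmbeddingClient:
    """Stand-in for the multimodal embedding model (assumption, not a real API).

    Mirrors the idea that one request can return both an image embedding and a
    contextual-text embedding.
    """

    def get_embeddings(self,
                       image: Optional[str],
                       contextual_text: Optional[str],
                       dimension: int) -> Dict[str, List[float]]:
        result: Dict[str, List[float]] = {}
        if image is not None:
            # A real client would load the image from its GCS URI here.
            result["image_embedding"] = [0.0] * dimension
        if contextual_text is not None:
            result["text_embedding"] = [0.0] * dimension
        return result


def embed_row(row: Dict[str, str],
              image_column: str,
              text_column: Optional[str],
              client: FakeEmbeddingClient,
              dimension: int) -> Dict[str, List[float]]:
    # Route the columns by role: only image_column is treated as an image,
    # so no plain string ends up where a GCS image URI is expected.
    return client.get_embeddings(
        image=row.get(image_column),
        contextual_text=row.get(text_column) if text_column else None,
        dimension=dimension,
    )
```

With this routing, a row like `{'image': 'gs://bucket/img.png', 'contextual_text': 'a cat'}` yields both an image embedding and a text embedding from one call, which is the behavior this FR asks for.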
Issue Priority
Priority: 2 (default / most feature requests should be filed as P2)
Issue Components
- Component: Python SDK