Batch embedding sample does not work after update to gemini-embedding-001 #13393

Closed

@zeevox

Description

TL;DR: The recently updated batch text embedding sample fails with an error because the model gemini-embedding-001 is not supported for batch prediction.

Introduced in: #13388

In which file did you encounter the issue?

generative_ai/embeddings/batch_example.py

Did you change the file? If so, how?

Only to set the project ID and output bucket.

-PROJECT_ID = os.getenv("GOOGLE_CLOUD_PROJECT")
-OUTPUT_URI = os.getenv("GCS_OUTPUT_URI")
+PROJECT_ID = "gen-lang-client-0000171954"
+OUTPUT_URI = "gs://felixplore/"

Describe the issue

  1. Installed google-cloud-aiplatform v1.95 into a Python 3.13 virtual environment.
  2. Ran python3 batch_example.py and received the error message
    400 Do not support publisher model gemini-embedding-001
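For reference, the failing call reduces to roughly the following (a minimal sketch reconstructed from the traceback below; the project ID and bucket URIs are placeholders, not real resources, and actually running the function requires a GCP project with Vertex AI enabled):

```python
# Minimal repro sketch, reconstructed from the traceback in this report.
# PROJECT_ID, INPUT_URI and OUTPUT_URI are placeholders, not real resources.
PROJECT_ID = "your-project-id"
INPUT_URI = "gs://your-bucket/embeddings_input.jsonl"
OUTPUT_URI = "gs://your-bucket/embeddings_output/"


def embed_text_batch():
    # Imported lazily so the sketch can be read without the SDK installed;
    # running it requires google-cloud-aiplatform and GCP credentials.
    import vertexai
    from vertexai import language_models

    vertexai.init(project=PROJECT_ID, location="us-central1")
    model = language_models.TextEmbeddingModel.from_pretrained(
        "gemini-embedding-001"  # swapping in "text-embedding-005" succeeds
    )
    # Raises InvalidArgument: 400 Do not support publisher model
    # gemini-embedding-001 (observed with google-cloud-aiplatform v1.95).
    return model.batch_predict(
        dataset=[INPUT_URI],
        destination_uri_prefix=OUTPUT_URI,
    )
```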

Full error traceback

Creating BatchPredictionJob
Traceback (most recent call last):
  File "/home/zeevox/Projects/gemini-embed-debug/.venv/lib/python3.13/site-packages/google/api_core/grpc_helpers.py", line 76, in error_remapped_callable
    return callable_(*args, **kwargs)
  File "/home/zeevox/Projects/gemini-embed-debug/.venv/lib/python3.13/site-packages/grpc/_interceptor.py", line 277, in __call__
    response, ignored_call = self._with_call(
                             ~~~~~~~~~~~~~~~^
        request,
        ^^^^^^^^
    ...<4 lines>...
        compression=compression,
        ^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/home/zeevox/Projects/gemini-embed-debug/.venv/lib/python3.13/site-packages/grpc/_interceptor.py", line 332, in _with_call
    return call.result(), call
           ~~~~~~~~~~~^^
  File "/home/zeevox/Projects/gemini-embed-debug/.venv/lib/python3.13/site-packages/grpc/_channel.py", line 440, in result
    raise self
  File "/home/zeevox/Projects/gemini-embed-debug/.venv/lib/python3.13/site-packages/grpc/_interceptor.py", line 315, in continuation
    response, call = self._thunk(new_method).with_call(
                     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
        request,
        ^^^^^^^^
    ...<4 lines>...
        compression=new_compression,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/home/zeevox/Projects/gemini-embed-debug/.venv/lib/python3.13/site-packages/grpc/_channel.py", line 1198, in with_call
    return _end_unary_response_blocking(state, call, True, None)
  File "/home/zeevox/Projects/gemini-embed-debug/.venv/lib/python3.13/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
    raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
	status = StatusCode.INVALID_ARGUMENT
	details = "Do not support publisher model gemini-embedding-001"
	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B2a00:1450:4009:81f::200a%5D:443 {grpc_message:"Do not support publisher model gemini-embedding-001", grpc_status:3, created_time:"2025-05-29T12:11:57.38838994+01:00"}"
>

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/zeevox/Projects/gemini-embed-debug/batch_example.py", line 63, in <module>
    embed_text_batch()
    ~~~~~~~~~~~~~~~~^^
  File "/home/zeevox/Projects/gemini-embed-debug/batch_example.py", line 47, in embed_text_batch
    batch_prediction_job = textembedding_model.batch_predict(
        dataset=[input_uri],
        destination_uri_prefix=output_uri,
    )
  File "/home/zeevox/Projects/gemini-embed-debug/.venv/lib/python3.13/site-packages/vertexai/language_models/_language_models.py", line 1904, in batch_predict
    job = aiplatform.BatchPredictionJob.create(
        model_name=model_name,
    ...<2 lines>...
        model_parameters=model_parameters,
    )
  File "/home/zeevox/Projects/gemini-embed-debug/.venv/lib/python3.13/site-packages/google/cloud/aiplatform/jobs.py", line 620, in create
    return cls._submit_impl(
           ~~~~~~~~~~~~~~~~^
        job_display_name=job_display_name,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<29 lines>...
        wait_for_completion=True,
        ^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/home/zeevox/Projects/gemini-embed-debug/.venv/lib/python3.13/site-packages/google/cloud/aiplatform/jobs.py", line 1337, in _submit_impl
    return cls._submit_and_optionally_wait_with_sync_support(
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
        empty_batch_prediction_job=empty_batch_prediction_job,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<5 lines>...
        wait_for_completion=wait_for_completion,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/home/zeevox/Projects/gemini-embed-debug/.venv/lib/python3.13/site-packages/google/cloud/aiplatform/base.py", line 863, in wrapper
    return method(*args, **kwargs)
  File "/home/zeevox/Projects/gemini-embed-debug/.venv/lib/python3.13/site-packages/google/cloud/aiplatform/jobs.py", line 1406, in _submit_and_optionally_wait_with_sync_support
    gca_batch_prediction_job = api_client.create_batch_prediction_job(
        parent=parent,
        batch_prediction_job=gca_batch_prediction_job,
        timeout=create_request_timeout,
    )
  File "/home/zeevox/Projects/gemini-embed-debug/.venv/lib/python3.13/site-packages/google/cloud/aiplatform_v1/services/job_service/client.py", line 3926, in create_batch_prediction_job
    response = rpc(
        request,
    ...<2 lines>...
        metadata=metadata,
    )
  File "/home/zeevox/Projects/gemini-embed-debug/.venv/lib/python3.13/site-packages/google/api_core/gapic_v1/method.py", line 131, in __call__
    return wrapped_func(*args, **kwargs)
  File "/home/zeevox/Projects/gemini-embed-debug/.venv/lib/python3.13/site-packages/google/api_core/grpc_helpers.py", line 78, in error_remapped_callable
    raise exceptions.from_grpc_error(exc) from exc
google.api_core.exceptions.InvalidArgument: 400 Do not support publisher model gemini-embedding-001

To establish whether the issue lies with the model or with my setup, I changed the model name to the earlier text-embedding-005:

textembedding_model = language_models.TextEmbeddingModel.from_pretrained(
-    "gemini-embedding-001"
+    "text-embedding-005"
)

The batch prediction job then completed successfully, so I conclude the failure is specific to the model gemini-embedding-001.

Metadata

Labels

priority: p2 (Moderately-important priority; fix may not be included in next release)
samples (Issues that are directly related to samples)
triage me (I really want to be triaged)
type: bug (Error or flaw in code with unintended results or allowing sub-optimal usage patterns)
