Open
Labels
bug
Description
Priority
P1-Stopper
OS type
N/A
Hardware type
N/A
Installation method
- Pull docker images from hub.docker.com
- Build docker images from source
- Other
- N/A
Deploy method
- Docker
- Docker Compose
- Kubernetes Helm Charts
- Other
- N/A
Running nodes
N/A
What's the version?
main
Description
The test hangs and eventually times out. Per the raw log below, all validation steps up to and including "Validate Chinese mode" pass, but after the "Validate truncate mode" request is sent at 08:08:40 the next log line (`stop_service`) does not appear until 08:58:53, roughly 50 minutes later, at which point the job is torn down.
Reproduce steps
N/A
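No formal reproduce steps are listed, but the hanging request can be replayed from the raw log. This is a sketch: the host/port belong to this particular CI run and will differ on other deployments, and `--max-time` is an added safeguard (not in the original test) so a hung backend fails the request instead of blocking the shell.

```shell
# Endpoint and payload copied from the failing "Validate truncate mode..." step.
DOCSUM_URL="http://192.168.122.213:10507/v1/docsum"
PAYLOAD='{"messages":"Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5.", "max_tokens":32, "language":"en", "summary_type": "truncate", "chunk_size": 2000}'

# Only fire the request when explicitly asked to, on a machine that can
# reach the compose stack; --max-time caps the wait at 5 minutes.
if [ "${RUN_REPRO:-0}" = "1" ]; then
  curl -s --max-time 300 -X POST \
    -H 'Content-Type: application/json' \
    -d "$PAYLOAD" "$DOCSUM_URL"
fi
```

With the original script's plain `curl` (no timeout flags), a backend that stops responding makes the step block until the CI job's own timeout kills it, which matches the ~50-minute gap in the log.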
Raw log
2025-11-19T08:04:28.4244544Z + docker compose -f compose_doc-summarization.yaml up docsum-vllm -d
2025-11-19T08:04:28.5069115Z time="2025-11-19T08:04:28Z" level=warning msg="The \"offline_no_proxy\" variable is not set. Defaulting to a blank string."
2025-11-19T08:04:28.5072210Z time="2025-11-19T08:04:28Z" level=warning msg="The \"no_proxy\" variable is not set. Defaulting to a blank string."
2025-11-19T08:04:28.5075137Z time="2025-11-19T08:04:28Z" level=warning msg="The \"http_proxy\" variable is not set. Defaulting to a blank string."
2025-11-19T08:04:28.5077481Z time="2025-11-19T08:04:28Z" level=warning msg="The \"https_proxy\" variable is not set. Defaulting to a blank string."
2025-11-19T08:04:28.5080353Z time="2025-11-19T08:04:28Z" level=warning msg="The \"offline_no_proxy\" variable is not set. Defaulting to a blank string."
2025-11-19T08:04:28.5082573Z time="2025-11-19T08:04:28Z" level=warning msg="The \"no_proxy\" variable is not set. Defaulting to a blank string."
2025-11-19T08:04:28.5084911Z time="2025-11-19T08:04:28Z" level=warning msg="The \"http_proxy\" variable is not set. Defaulting to a blank string."
2025-11-19T08:04:28.5087051Z time="2025-11-19T08:04:28Z" level=warning msg="The \"https_proxy\" variable is not set. Defaulting to a blank string."
2025-11-19T08:04:28.5089176Z time="2025-11-19T08:04:28Z" level=warning msg="The \"http_proxy\" variable is not set. Defaulting to a blank string."
2025-11-19T08:04:28.5090767Z time="2025-11-19T08:04:28Z" level=warning msg="The \"no_proxy\" variable is not set. Defaulting to a blank string."
2025-11-19T08:04:28.5091458Z time="2025-11-19T08:04:28Z" level=warning msg="The \"https_proxy\" variable is not set. Defaulting to a blank string."
2025-11-19T08:04:28.5181247Z time="2025-11-19T08:04:28Z" level=warning msg="The \"no_proxy\" variable is not set. Defaulting to a blank string."
2025-11-19T08:04:28.5182709Z time="2025-11-19T08:04:28Z" level=warning msg="The \"https_proxy\" variable is not set. Defaulting to a blank string."
2025-11-19T08:04:28.5184245Z time="2025-11-19T08:04:28Z" level=warning msg="The \"http_proxy\" variable is not set. Defaulting to a blank string."
2025-11-19T08:04:28.5185657Z time="2025-11-19T08:04:28Z" level=warning msg="The \"http_proxy\" variable is not set. Defaulting to a blank string."
2025-11-19T08:04:28.5186933Z time="2025-11-19T08:04:28Z" level=warning msg="The \"https_proxy\" variable is not set. Defaulting to a blank string."
2025-11-19T08:04:28.5188151Z time="2025-11-19T08:04:28Z" level=warning msg="The \"no_proxy\" variable is not set. Defaulting to a blank string."
2025-11-19T08:04:28.5189485Z time="2025-11-19T08:04:28Z" level=warning msg="The \"https_proxy\" variable is not set. Defaulting to a blank string."
2025-11-19T08:04:28.5190763Z time="2025-11-19T08:04:28Z" level=warning msg="The \"http_proxy\" variable is not set. Defaulting to a blank string."
2025-11-19T08:04:28.5192064Z time="2025-11-19T08:04:28Z" level=warning msg="The \"VLLM_LLM_MODEL_ID\" variable is not set. Defaulting to a blank string."
2025-11-19T08:04:28.5193430Z time="2025-11-19T08:04:28Z" level=warning msg="The \"TENSOR_PARALLEL_SIZE\" variable is not set. Defaulting to a blank string."
2025-11-19T08:04:28.5194990Z time="2025-11-19T08:04:28Z" level=warning msg="The \"no_proxy\" variable is not set. Defaulting to a blank string."
2025-11-19T08:04:28.5196305Z time="2025-11-19T08:04:28Z" level=warning msg="The \"http_proxy\" variable is not set. Defaulting to a blank string."
2025-11-19T08:04:28.5197652Z time="2025-11-19T08:04:28Z" level=warning msg="The \"https_proxy\" variable is not set. Defaulting to a blank string."
2025-11-19T08:04:28.5496785Z Network docker_compose_default Creating
2025-11-19T08:04:28.6089152Z Network docker_compose_default Created
2025-11-19T08:04:28.6099567Z Container vllm-server Creating
2025-11-19T08:04:28.6669243Z Container vllm-server Created
2025-11-19T08:04:28.6675883Z Container docsum-vllm Creating
2025-11-19T08:04:28.6899651Z Container docsum-vllm Created
2025-11-19T08:04:28.6913203Z Container vllm-server Starting
2025-11-19T08:04:28.8764352Z Container vllm-server Started
2025-11-19T08:04:28.8765514Z Container vllm-server Waiting
2025-11-19T08:07:00.3790563Z Container vllm-server Healthy
2025-11-19T08:07:00.3791690Z Container docsum-vllm Starting
2025-11-19T08:07:00.5791577Z Container docsum-vllm Started
2025-11-19T08:07:00.5868904Z + sleep 30s
2025-11-19T08:07:30.5905678Z + validate_microservices
2025-11-19T08:07:30.5906918Z Validate vllm...
2025-11-19T08:07:30.5908177Z + URL=http://192.168.122.213:10507/v1/docsum
2025-11-19T08:07:30.5909857Z + echo 'Validate vllm...'
2025-11-19T08:07:30.5914219Z + validate_services http://192.168.122.213:12107/v1/completions text vllm-server vllm-server '{"model": "meta-llama/Meta-Llama-3-8B-Instruct", "prompt": "What is Deep Learning?", "max_tokens": 32, "temperature": 0}'
2025-11-19T08:07:30.5915133Z + local URL=http://192.168.122.213:12107/v1/completions
2025-11-19T08:07:30.5915437Z + local EXPECTED_RESULT=text
2025-11-19T08:07:30.5915725Z + local SERVICE_NAME=vllm-server
2025-11-19T08:07:30.5916011Z + local DOCKER_NAME=vllm-server
2025-11-19T08:07:30.5916550Z + local 'INPUT_DATA={"model": "meta-llama/Meta-Llama-3-8B-Instruct", "prompt": "What is Deep Learning?", "max_tokens": 32, "temperature": 0}'
2025-11-19T08:07:30.5921986Z ++ curl -s -o /dev/null -w '%{http_code}' -X POST -d '{"model": "meta-llama/Meta-Llama-3-8B-Instruct", "prompt": "What is Deep Learning?", "max_tokens": 32, "temperature": 0}' -H 'Content-Type: application/json' http://192.168.122.213:12107/v1/completions
2025-11-19T08:07:35.1660227Z + local HTTP_STATUS=200
2025-11-19T08:07:35.1661259Z + echo ===========================================
2025-11-19T08:07:35.1661760Z + '[' 200 -eq 200 ']'
2025-11-19T08:07:35.1662141Z + echo '[ vllm-server ] HTTP status is 200. Checking content...'
2025-11-19T08:07:35.1662658Z ===========================================
2025-11-19T08:07:35.1663052Z [ vllm-server ] HTTP status is 200. Checking content...
2025-11-19T08:07:35.1666317Z ++ curl -s -X POST -d '{"model": "meta-llama/Meta-Llama-3-8B-Instruct", "prompt": "What is Deep Learning?", "max_tokens": 32, "temperature": 0}' -H 'Content-Type: application/json' http://192.168.122.213:12107/v1/completions
2025-11-19T08:07:35.1668016Z ++ tee /home/sdp/opea-actions-runner/_work/GenAIComps/GenAIComps/tests/vllm-server.log
2025-11-19T08:07:39.7705476Z + local 'CONTENT={"id":"cmpl-144f59bf97e94cdda5bdcba7f47f1c22","object":"text_completion","created":1763539655,"model":"meta-llama/Meta-Llama-3-8B-Instruct","choices":[{"index":0,"text":" A Beginner'\''s Guide\nDeep learning is a subset of machine learning that involves the use of artificial neural networks to analyze and interpret data. These neural networks are designed","logprobs":null,"finish_reason":"length","stop_reason":null,"prompt_logprobs":null}],"service_tier":null,"system_fingerprint":null,"usage":{"prompt_tokens":6,"total_tokens":38,"completion_tokens":32,"prompt_tokens_details":null},"kv_transfer_params":null}'
2025-11-19T08:07:39.7709491Z + echo '{"id":"cmpl-144f59bf97e94cdda5bdcba7f47f1c22","object":"text_completion","created":1763539655,"model":"meta-llama/Meta-Llama-3-8B-Instruct","choices":[{"index":0,"text":"' A 'Beginner'\''s' 'Guide\nDeep' learning is a subset of machine learning that involves the use of artificial neural networks to analyze and interpret data. These neural networks are 'designed","logprobs":null,"finish_reason":"length","stop_reason":null,"prompt_logprobs":null}],"service_tier":null,"system_fingerprint":null,"usage":{"prompt_tokens":6,"total_tokens":38,"completion_tokens":32,"prompt_tokens_details":null},"kv_transfer_params":null}'
2025-11-19T08:07:39.7713058Z {"id":"cmpl-144f59bf97e94cdda5bdcba7f47f1c22","object":"text_completion","created":1763539655,"model":"meta-llama/Meta-Llama-3-8B-Instruct","choices":[{"index":0,"text":" A Beginner's Guide\nDeep learning is a subset of machine learning that involves the use of artificial neural networks to analyze and interpret data. These neural networks are designed","logprobs":null,"finish_reason":"length","stop_reason":null,"prompt_logprobs":null}],"service_tier":null,"system_fingerprint":null,"usage":{"prompt_tokens":6,"total_tokens":38,"completion_tokens":32,"prompt_tokens_details":null},"kv_transfer_params":null}
2025-11-19T08:07:39.7717124Z + echo '{"id":"cmpl-144f59bf97e94cdda5bdcba7f47f1c22","object":"text_completion","created":1763539655,"model":"meta-llama/Meta-Llama-3-8B-Instruct","choices":[{"index":0,"text":" A Beginner'\''s Guide\nDeep learning is a subset of machine learning that involves the use of artificial neural networks to analyze and interpret data. These neural networks are designed","logprobs":null,"finish_reason":"length","stop_reason":null,"prompt_logprobs":null}],"service_tier":null,"system_fingerprint":null,"usage":{"prompt_tokens":6,"total_tokens":38,"completion_tokens":32,"prompt_tokens_details":null},"kv_transfer_params":null}'
2025-11-19T08:07:39.7718993Z + grep -q text
2025-11-19T08:07:39.7723310Z + echo '[ vllm-server ] Content is as expected.'
2025-11-19T08:07:39.7723610Z [ vllm-server ] Content is as expected.
2025-11-19T08:07:39.7724069Z + sleep 1s
2025-11-19T08:07:40.7745184Z + echo 'Validate stream=True...'
2025-11-19T08:07:40.7746795Z Validate stream=True...
2025-11-19T08:07:40.7752269Z + validate_services http://192.168.122.213:10507/v1/docsum text docsum-vllm docsum-vllm '{"messages": "Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5.", "max_tokens":32, "language":"en"}'
2025-11-19T08:07:40.7758646Z + local URL=http://192.168.122.213:10507/v1/docsum
2025-11-19T08:07:40.7760137Z + local EXPECTED_RESULT=text
2025-11-19T08:07:40.7761357Z + local SERVICE_NAME=docsum-vllm
2025-11-19T08:07:40.7762316Z + local DOCKER_NAME=docsum-vllm
2025-11-19T08:07:40.7765189Z + local 'INPUT_DATA={"messages": "Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5.", "max_tokens":32, "language":"en"}'
2025-11-19T08:07:40.7767269Z ++ curl -s -o /dev/null -w '%{http_code}' -X POST -d '{"messages": "Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5.", "max_tokens":32, "language":"en"}' -H 'Content-Type: application/json' http://192.168.122.213:10507/v1/docsum
2025-11-19T08:07:50.3855838Z + local HTTP_STATUS=200
2025-11-19T08:07:50.3857372Z ===========================================
2025-11-19T08:07:50.3858913Z + echo ===========================================
2025-11-19T08:07:50.3860223Z + '[' 200 -eq 200 ']'
2025-11-19T08:07:50.3861494Z + echo '[ docsum-vllm ] HTTP status is 200. Checking content...'
2025-11-19T08:07:50.3863080Z [ docsum-vllm ] HTTP status is 200. Checking content...
2025-11-19T08:07:50.3869488Z ++ curl -s -X POST -d '{"messages": "Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5.", "max_tokens":32, "language":"en"}' -H 'Content-Type: application/json' http://192.168.122.213:10507/v1/docsum
2025-11-19T08:07:50.3871542Z ++ tee /home/sdp/opea-actions-runner/_work/GenAIComps/GenAIComps/tests/docsum-vllm.log
2025-11-19T08:08:00.0622349Z + local 'CONTENT={"id":"d2b5f1c4f0e0a79d636e0f07c9a2e3ee","text":" \nTEI is a toolkit for deploying and serving text embeddings and sequence classification models, allowing for high-performance extraction of popular models such as FlagEmbedding, Ember","prompt":"Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5.","output_guardrail_params":null}'
2025-11-19T08:08:00.0630303Z {"id":"d2b5f1c4f0e0a79d636e0f07c9a2e3ee","text":" \nTEI is a toolkit for deploying and serving text embeddings and sequence classification models, allowing for high-performance extraction of popular models such as FlagEmbedding, Ember","prompt":"Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5.","output_guardrail_params":null}
2025-11-19T08:08:00.0638077Z + echo '{"id":"d2b5f1c4f0e0a79d636e0f07c9a2e3ee","text":"' '\nTEI' is a toolkit for deploying and serving text embeddings and sequence classification models, allowing for high-performance extraction of popular models such as FlagEmbedding, 'Ember","prompt":"Text' Embeddings Inference '(TEI)' is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and 'E5.","output_guardrail_params":null}'
2025-11-19T08:08:00.0641304Z + echo '{"id":"d2b5f1c4f0e0a79d636e0f07c9a2e3ee","text":" \nTEI is a toolkit for deploying and serving text embeddings and sequence classification models, allowing for high-performance extraction of popular models such as FlagEmbedding, Ember","prompt":"Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5.","output_guardrail_params":null}'
2025-11-19T08:08:00.0643101Z + grep -q text
2025-11-19T08:08:00.0643439Z + echo '[ docsum-vllm ] Content is as expected.'
2025-11-19T08:08:00.0643710Z + sleep 1s
2025-11-19T08:08:00.0644106Z [ docsum-vllm ] Content is as expected.
2025-11-19T08:08:01.0674998Z + echo 'Validate stream=False...'
2025-11-19T08:08:01.0676602Z Validate stream=False...
2025-11-19T08:08:01.0682482Z + validate_services http://192.168.122.213:10507/v1/docsum text docsum-vllm docsum-vllm '{"messages":"Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5.", "max_tokens":32, "language":"en", "stream":false}'
2025-11-19T08:08:01.0688600Z + local URL=http://192.168.122.213:10507/v1/docsum
2025-11-19T08:08:01.0689874Z + local EXPECTED_RESULT=text
2025-11-19T08:08:01.0690307Z + local SERVICE_NAME=docsum-vllm
2025-11-19T08:08:01.0690603Z + local DOCKER_NAME=docsum-vllm
2025-11-19T08:08:01.0691788Z + local 'INPUT_DATA={"messages":"Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5.", "max_tokens":32, "language":"en", "stream":false}'
2025-11-19T08:08:01.0694225Z ++ curl -s -o /dev/null -w '%{http_code}' -X POST -d '{"messages":"Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5.", "max_tokens":32, "language":"en", "stream":false}' -H 'Content-Type: application/json' http://192.168.122.213:10507/v1/docsum
2025-11-19T08:08:10.4631747Z + local HTTP_STATUS=200
2025-11-19T08:08:10.4632631Z + echo ===========================================
2025-11-19T08:08:10.4633476Z + '[' 200 -eq 200 ']'
2025-11-19T08:08:10.4634567Z + echo '[ docsum-vllm ] HTTP status is 200. Checking content...'
2025-11-19T08:08:10.4635475Z ===========================================
2025-11-19T08:08:10.4636326Z [ docsum-vllm ] HTTP status is 200. Checking content...
2025-11-19T08:08:10.4645311Z ++ curl -s -X POST -d '{"messages":"Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5.", "max_tokens":32, "language":"en", "stream":false}' -H 'Content-Type: application/json' http://192.168.122.213:10507/v1/docsum
2025-11-19T08:08:10.4647536Z ++ tee /home/sdp/opea-actions-runner/_work/GenAIComps/GenAIComps/tests/docsum-vllm.log
2025-11-19T08:08:20.0167266Z + local 'CONTENT={"id":"38068711ae9f07d41bfe363dac05dd98","text":" \nTEI is a toolkit for deploying and serving text embeddings and sequence classification models, allowing for high-performance extraction of popular models such as FlagEmbedding, Ember","prompt":"Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5.","output_guardrail_params":null}'
2025-11-19T08:08:20.0176933Z + echo '{"id":"38068711ae9f07d41bfe363dac05dd98","text":"' '\nTEI' is a toolkit for deploying and serving text embeddings and sequence classification models, allowing for high-performance extraction of popular models such as FlagEmbedding, 'Ember","prompt":"Text' Embeddings Inference '(TEI)' is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and 'E5.","output_guardrail_params":null}'
2025-11-19T08:08:20.0183014Z + echo '{"id":"38068711ae9f07d41bfe363dac05dd98","text":" \nTEI is a toolkit for deploying and serving text embeddings and sequence classification models, allowing for high-performance extraction of popular models such as FlagEmbedding, Ember","prompt":"Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5.","output_guardrail_params":null}'
2025-11-19T08:08:20.0186349Z + grep -q text
2025-11-19T08:08:20.0188191Z {"id":"38068711ae9f07d41bfe363dac05dd98","text":" \nTEI is a toolkit for deploying and serving text embeddings and sequence classification models, allowing for high-performance extraction of popular models such as FlagEmbedding, Ember","prompt":"Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5.","output_guardrail_params":null}
2025-11-19T08:08:20.0189999Z + echo '[ docsum-vllm ] Content is as expected.'
2025-11-19T08:08:20.0190363Z + sleep 1s
2025-11-19T08:08:20.0190612Z [ docsum-vllm ] Content is as expected.
2025-11-19T08:08:21.0211988Z + echo 'Validate Chinese mode...'
2025-11-19T08:08:21.0213364Z Validate Chinese mode...
2025-11-19T08:08:21.0231080Z + validate_services http://192.168.122.213:10507/v1/docsum text docsum-vllm docsum-vllm '{"messages":"2024年9月26日,北京——今日,英特尔正式发布英特尔® 至强® 6性能核处理器(代号Granite Rapids),为AI、数据分析、科学计算等计算密集型业务提供卓越性能。", "max_tokens":32, "language":"zh", "stream":false}'
2025-11-19T08:08:21.0232637Z + local URL=http://192.168.122.213:10507/v1/docsum
2025-11-19T08:08:21.0233005Z + local EXPECTED_RESULT=text
2025-11-19T08:08:21.0233389Z + local SERVICE_NAME=docsum-vllm
2025-11-19T08:08:21.0233705Z + local DOCKER_NAME=docsum-vllm
2025-11-19T08:08:21.0234865Z + local 'INPUT_DATA={"messages":"2024年9月26日,北京——今日,英特尔正式发布英特尔® 至强® 6性能核处理器(代号Granite Rapids),为AI、数据分析、科学计算等计算密集型业务提供卓越性能。", "max_tokens":32, "language":"zh", "stream":false}'
2025-11-19T08:08:21.0236331Z ++ curl -s -o /dev/null -w '%{http_code}' -X POST -d '{"messages":"2024年9月26日,北京——今日,英特尔正式发布英特尔® 至强® 6性能核处理器(代号Granite Rapids),为AI、数据分析、科学计算等计算密集型业务提供卓越性能。", "max_tokens":32, "language":"zh", "stream":false}' -H 'Content-Type: application/json' http://192.168.122.213:10507/v1/docsum
2025-11-19T08:08:30.4520895Z + local HTTP_STATUS=200
2025-11-19T08:08:30.4521458Z + echo ===========================================
2025-11-19T08:08:30.4521832Z + '[' 200 -eq 200 ']'
2025-11-19T08:08:30.4522204Z + echo '[ docsum-vllm ] HTTP status is 200. Checking content...'
2025-11-19T08:08:30.4522634Z ===========================================
2025-11-19T08:08:30.4523084Z [ docsum-vllm ] HTTP status is 200. Checking content...
2025-11-19T08:08:30.4533203Z ++ curl -s -X POST -d '{"messages":"2024年9月26日,北京——今日,英特尔正式发布英特尔® 至强® 6性能核处理器(代号Granite Rapids),为AI、数据分析、科学计算等计算密集型业务提供卓越性能。", "max_tokens":32, "language":"zh", "stream":false}' -H 'Content-Type: application/json' http://192.168.122.213:10507/v1/docsum
2025-11-19T08:08:30.4535016Z ++ tee /home/sdp/opea-actions-runner/_work/GenAIComps/GenAIComps/tests/docsum-vllm.log
2025-11-19T08:08:39.2028562Z + local 'CONTENT={"id":"344c2ad5ad61a0dcc1e89e5b067e709b","text":"英特尔发布了新的处理器英特尔®至强®6性能核处理器(代号Granite Rapids),旨在为AI","prompt":"2024年9月26日,北京——今日,英特尔正式发布英特尔® 至强® 6性能核处理器(代号Granite Rapids),为AI、数据分析、科学计算等计算密集型业务提供卓越性能。","output_guardrail_params":null}'
2025-11-19T08:08:39.2029984Z + echo '{"id":"344c2ad5ad61a0dcc1e89e5b067e709b","text":"英特尔发布了新的处理器英特尔®至强®6性能核处理器(代号Granite' 'Rapids),旨在为AI","prompt":"2024年9月26日,北京——今日,英特尔正式发布英特尔®' 至强® 6性能核处理器(代号Granite 'Rapids),为AI、数据分析、科学计算等计算密集型业务提供卓越性能。","output_guardrail_params":null}'
2025-11-19T08:08:39.2031649Z {"id":"344c2ad5ad61a0dcc1e89e5b067e709b","text":"英特尔发布了新的处理器英特尔®至强®6性能核处理器(代号Granite Rapids),旨在为AI","prompt":"2024年9月26日,北京——今日,英特尔正式发布英特尔® 至强® 6性能核处理器(代号Granite Rapids),为AI、数据分析、科学计算等计算密集型业务提供卓越性能。","output_guardrail_params":null}
2025-11-19T08:08:39.2035186Z + echo '{"id":"344c2ad5ad61a0dcc1e89e5b067e709b","text":"英特尔发布了新的处理器英特尔®至强®6性能核处理器(代号Granite Rapids),旨在为AI","prompt":"2024年9月26日,北京——今日,英特尔正式发布英特尔® 至强® 6性能核处理器(代号Granite Rapids),为AI、数据分析、科学计算等计算密集型业务提供卓越性能。","output_guardrail_params":null}'
2025-11-19T08:08:39.2041545Z + grep -q text
2025-11-19T08:08:39.2062063Z [ docsum-vllm ] Content is as expected.
2025-11-19T08:08:39.2063124Z + echo '[ docsum-vllm ] Content is as expected.'
2025-11-19T08:08:39.2063403Z + sleep 1s
2025-11-19T08:08:40.2092978Z + echo 'Validate truncate mode...'
2025-11-19T08:08:40.2094466Z Validate truncate mode...
2025-11-19T08:08:40.2100252Z + validate_services http://192.168.122.213:10507/v1/docsum text docsum-vllm docsum-vllm '{"messages":"Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5.", "max_tokens":32, "language":"en", "summary_type": "truncate", "chunk_size": 2000}'
2025-11-19T08:08:40.2106348Z + local URL=http://192.168.122.213:10507/v1/docsum
2025-11-19T08:08:40.2107459Z + local EXPECTED_RESULT=text
2025-11-19T08:08:40.2108654Z + local SERVICE_NAME=docsum-vllm
2025-11-19T08:08:40.2109602Z + local DOCKER_NAME=docsum-vllm
2025-11-19T08:08:40.2113589Z + local 'INPUT_DATA={"messages":"Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5.", "max_tokens":32, "language":"en", "summary_type": "truncate", "chunk_size": 2000}'
2025-11-19T08:08:40.2116224Z ++ curl -s -o /dev/null -w '%{http_code}' -X POST -d '{"messages":"Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5.", "max_tokens":32, "language":"en", "summary_type": "truncate", "chunk_size": 2000}' -H 'Content-Type: application/json' http://192.168.122.213:10507/v1/docsum
2025-11-19T08:58:53.6967062Z ++ stop_service
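Since the log jumps straight from the hung request to `stop_service`, there is no record of what the backend was doing during the stall. A hypothetical helper like the following, run on the runner while the request is hanging (container names are taken from the log above), would capture that state before teardown:

```shell
# Hypothetical diagnostic helper; dumps the tail of each container's log
# so a stalled vllm backend can be told apart from a stuck wrapper.
dump_stack_logs() {
  for c in vllm-server docsum-vllm; do
    echo "===== $c ====="
    docker logs --tail 100 "$c" 2>&1 || echo "container $c not available"
  done
}

# Only meaningful where the compose stack is actually running.
if command -v docker >/dev/null 2>&1; then
  dump_stack_logs
fi
```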