
chore: using llama3.2 as default model for ollama provider (#644) #1541

Triggered via push: March 24, 2025 03:02
Status: Cancelled
Total duration: 2m 28s
Artifacts: none

Workflow file: release.yml (on: push)
Jobs:
- Generate docker metadata: 8s
- Build and Push Backend Image: 1m 37s
- Build and Push Frontend Image: 1m 52s
- E2E Test: 0s
- Deploy E2E Test Results: 0s

Annotations: 4 errors and 3 warnings
Errors:
- Build and Push Backend Image: Canceling since a higher priority waiting request for 'Build and Publish-refs/heads/main' exists
- Build and Push Backend Image: The operation was canceled.
- Build and Push Frontend Image: Canceling since a higher priority waiting request for 'Build and Publish-refs/heads/main' exists
- Build and Push Frontend Image: The operation was canceled.
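The cancellation errors above are what GitHub Actions emits when a `concurrency` group with `cancel-in-progress: true` preempts a run: a newer push to `main` entered the same "Build and Publish" group, so this in-flight run was cancelled. A minimal sketch of such a configuration, assuming the workflow is named "Build and Publish" (the group name is inferred from the error message; the actual release.yml may differ):

```yaml
# Hypothetical excerpt, not the project's actual release.yml.
name: Build and Publish

on:
  push:
    branches: [main]

# Runs that share the same group key are serialized; with
# cancel-in-progress enabled, a newer queued run cancels any
# in-progress run for the same ref, producing errors like
# "Canceling since a higher priority waiting request for
# 'Build and Publish-refs/heads/main' exists".
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
```

With this setup, only the latest push to `main` builds and publishes images; superseded runs are cancelled rather than racing to push stale tags.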
Warnings:
- Generate docker metadata: main does not conform to PEP 440. More info: https://www.python.org/dev/peps/pep-0440 (reported 3 times)
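The warnings arise because the docker metadata step tries to interpret the branch name `main` as a version string, and `main` is not a valid PEP 440 version. A quick way to see why, using a simplified subset of the PEP 440 grammar (the full specification also allows epochs like `1!2.0`, local versions like `1.0+abc`, and looser separators):

```python
import re

# Simplified subset of the PEP 440 version grammar: a release segment
# plus optional pre-, post-, and dev-release parts.
PEP440_SUBSET = re.compile(
    r"^\d+(\.\d+)*"     # release segment: 1, 1.2, 1.2.3, ...
    r"((a|b|rc)\d+)?"   # optional pre-release: 1.0a1, 2.0rc1
    r"(\.post\d+)?"     # optional post-release: 1.0.post1
    r"(\.dev\d+)?$"     # optional dev release: 1.0.dev2
)

def looks_like_pep440(version: str) -> bool:
    """Return True if `version` matches the simplified PEP 440 subset."""
    return PEP440_SUBSET.match(version) is not None

print(looks_like_pep440("1.2.3"))  # a normal release version
print(looks_like_pep440("main"))   # a branch name, not a version
```

A branch name like `main` fails the very first rule (a version must start with a numeric release segment), which is exactly what the three identical warnings report.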