
gemini embeddings #1217

Triggered via push: February 2, 2026 23:45
Status: Failure
Total duration: 17m 16s
Artifacts: 9

ci-workflow.yml

on: push
should_run (7s)
Matrix: integration
Matrix: unit
Matrix: versioned-internal
Matrix: ci
Matrix: lint
Matrix: versioned-external
Matrix: codecov
all-clear (3s)

Annotations

25 errors and 10 notices
unit (24.x)
Process completed with exit code 1.
should create a LlmChatCompletionMessage from response choices: test/unit/llm-events/google-genai/chat-completion-message.test.js#L75
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: LlmChatCompletionMessage { 'request.model': 'gemini-2.0-flash', 'response.model': 'gemini-2.0-flash', appName: 'New Relic for Node.js tests', completion_id: 'chat-summary-id', content: "I don't know!", id: '411f204921ae192090ce1021e5af1ff2d0ca', ingest_source: 'Node', is_response: true, role: 'model', sequence: 2, span_id: '2803d0b985f4f70f', token_count: 0, trace_id: 'aa9de474b6714d7520c1e79138fb3f78', vendor: 'gemini' } should loosely deep-equal { 'request.model': 'gemini-2.0-flash', 'response.model': 'gemini-2.0-flash', completion_id: 'chat-summary-id', content: "I don't know!", id: '411f204921ae192090ce1021e5af1ff2d0ca', ingest_source: 'Node', is_response: true, role: 'model', sequence: 2, span_id: '2803d0b985f4f70f', token_count: 0, trace_id: 'aa9de474b6714d7520c1e79138fb3f78', vendor: 'gemini' } at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/google-genai/chat-completion-message.test.js:75:14 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:internal/async_local_storage/async_context_frame:63:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { id: '411f204921ae192090ce1021e5af1ff2d0ca', appName: 'New Relic for Node.js tests', trace_id: 'aa9de474b6714d7520c1e79138fb3f78', span_id: '2803d0b985f4f70f', 'response.model': 'gemini-2.0-flash', 'request.model': 'gemini-2.0-flash', vendor: 'gemini', ingest_source: 'Node', role: 'model', sequence: 2, completion_id: 'chat-summary-id', is_response: true, content: "I don't know!", token_count: 0 }, expected: { id: '411f204921ae192090ce1021e5af1ff2d0ca', trace_id: 'aa9de474b6714d7520c1e79138fb3f78', span_id: '2803d0b985f4f70f', 'request.model': 'gemini-2.0-flash', vendor: 'gemini', ingest_source: 'Node', content: "I don't know!", sequence: 2, completion_id: 'chat-summary-id', role: 'model', token_count: 0, 'response.model': 'gemini-2.0-flash', is_response: true }, operator: 'deepEqual', diff: 'simple' }
should create a LlmChatCompletionMessage event: test/unit/llm-events/google-genai/chat-completion-message.test.js#L47
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: LlmChatCompletionMessage { 'request.model': 'gemini-2.0-flash', 'response.model': 'gemini-2.0-flash', appName: 'New Relic for Node.js tests', completion_id: 'chat-summary-id', content: 'Why is the sky blue?', id: '730bc01718dfd12c32f1aca2b0ad4c56a636', ingest_source: 'Node', is_response: false, role: 'user', sequence: 0, span_id: 'c33761130100d169', timestamp: 1770076056908, token_count: 0, trace_id: '054564991deb263a7321e31a2fe928b7', vendor: 'gemini' } should loosely deep-equal { 'request.model': 'gemini-2.0-flash', 'response.model': 'gemini-2.0-flash', completion_id: 'chat-summary-id', content: 'Why is the sky blue?', id: '730bc01718dfd12c32f1aca2b0ad4c56a636', ingest_source: 'Node', role: 'user', sequence: 0, span_id: 'c33761130100d169', timestamp: 1770076056908, token_count: 0, trace_id: '054564991deb263a7321e31a2fe928b7', vendor: 'gemini' } at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/google-genai/chat-completion-message.test.js:47:14 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:internal/async_local_storage/async_context_frame:63:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { id: '730bc01718dfd12c32f1aca2b0ad4c56a636', appName: 'New Relic for Node.js tests', trace_id: '054564991deb263a7321e31a2fe928b7', span_id: 'c33761130100d169', 'response.model': 'gemini-2.0-flash', 'request.model': 'gemini-2.0-flash', vendor: 'gemini', ingest_source: 'Node', role: 'user', sequence: 0, completion_id: 'chat-summary-id', is_response: false, content: 'Why is the sky blue?', timestamp: 1770076056908, token_count: 0 }, expected: { id: '730bc01718dfd12c32f1aca2b0ad4c56a636', trace_id: '054564991deb263a7321e31a2fe928b7', span_id: 'c33761130100d169', 'request.model': 'gemini-2.0-flash', vendor: 'gemini', ingest_source: 'Node', content: 'Why is the sky blue?', sequence: 0, completion_id: 'chat-summary-id', role: 'user', token_count: 0, 'response.model': 'gemini-2.0-flash', timestamp: 1770076056908 }, operator: 'deepEqual', diff: 'simple' }
should properly create a LlmChatCompletionSummary event: test/unit/llm-events/google-genai/chat-completion-summary.test.js#L40
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: LlmChatCompletionSummary { 'request.max_tokens': 1000000, 'request.model': 'gemini-2.0-flash', 'request.temperature': 1, 'response.choices.finish_reason': 'STOP', 'response.model': 'gemini-2.0-flash', 'response.number_of_messages': 2, 'response.usage.completion_tokens': 20, 'response.usage.prompt_tokens': 10, 'response.usage.total_tokens': 30, appName: 'New Relic for Node.js tests', duration: 0.13304, error: false, id: '74104da6b605e88988c141ebfaecd51f0abf', ingest_source: 'No... should loosely deep-equal { 'request.max_tokens': 1000000, 'request.model': 'gemini-2.0-flash', 'request.temperature': 1, 'response.choices.finish_reason': 'STOP', 'response.model': 'gemini-2.0-flash', 'response.number_of_messages': 2, 'response.usage.completion_tokens': 20, 'response.usage.prompt_tokens': 10, 'response.usage.total_tokens': 30, duration: 0.13304, id: '74104da6b605e88988c141ebfaecd51f0abf', ingest_source: 'Node', span_id: '41edc968c68a00f4', timestamp: 1770076056719, trace_id: 'f125cb... at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/google-genai/chat-completion-summary.test.js:40:14 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:internal/async_local_storage/async_context_frame:63:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { id: '74104da6b605e88988c141ebfaecd51f0abf', appName: 'New Relic for Node.js tests', trace_id: 'f125cb97c20c64871a55cd9dbd05b1c5', span_id: '41edc968c68a00f4', 'response.model': 'gemini-2.0-flash', 'request.model': 'gemini-2.0-flash', vendor: 'gemini', ingest_source: 'Node', duration: 0.13304, error: false, 'response.number_of_messages': 2, 'response.choices.finish_reason': 'STOP', 'request.max_tokens': 1000000, 'request.temperature': 1, timestamp: 1770076056719, 'response.usage.prompt_tokens': 10, 'response.usage.completion_tokens': 20, 'response.usage.total_tokens': 30 }, expected: { id: '74104da6b605e88988c141ebfaecd51f0abf', trace_id: 'f125cb97c20c64871a55cd9dbd05b1c5', span_id: '41edc968c68a00f4', 'request.model': 'gemini-2.0-flash', vendor: 'gemini', ingest_source: 'Node', duration: 0.13304, 'request.max_tokens': 1000000, 'request.temperature': 1, 'response.number_of_messages': 2, 'response.choices.finish_reason': 'STOP', 'response.usage.prompt_tokens': 10, 'response.usage.completion_tokens': 20, 'response.usage.total_tokens': 30, 'response.model': 'gemini-2.0-flash', timestamp: 1770076056719 }, operator: 'deepEqual
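The three google-genai failures above share one pattern: the constructed event carries an appName property (and, for the summary, error: false) that the expected fixture does not list, so the loose deep-equal comparison rejects two otherwise-identical objects. A minimal sketch of that mismatch, runnable with plain node:assert; the key/value pairs are copied from the diff above, while the subset-comparison at the end is only an illustrative workaround, not the repository's test code:

```js
// minimal sketch -- reproduces the deepEqual mismatch shown in the diff above
const assert = require('node:assert')

// Values taken from the failing diff; only the differing key (appName) matters.
const actual = {
  id: '411f204921ae192090ce1021e5af1ff2d0ca',
  appName: 'New Relic for Node.js tests', // present on the event, absent from the fixture
  role: 'model',
  is_response: true
}
const expected = {
  id: '411f204921ae192090ce1021e5af1ff2d0ca',
  role: 'model',
  is_response: true
}

// deepEqual throws because actual has the extra appName key...
assert.throws(() => assert.deepEqual(actual, expected))

// ...so either the fixture gains appName, or (hypothetically) the test compares
// only the keys it cares about:
const subset = Object.fromEntries(Object.keys(expected).map((k) => [k, actual[k]]))
assert.deepEqual(subset, expected)
```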
should properly serialize input when it is a array of array of numbers: test/unit/llm-events/openai/embedding.test.js#L68
TypeError: Cannot read properties of null (reading 'getDurationInMillis') at new LlmEmbedding (/home/runner/work/node-newrelic/node-newrelic/lib/llm-events-new/embedding.js:38:29) at new OpenAiLlmEmbedding (/home/runner/work/node-newrelic/node-newrelic/lib/llm-events-new/openai/embedding.js:15:5) at TestContext.<anonymous> (/home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/embedding.test.js:68:28) at Test.runInAsyncScope (node:async_hooks:214:14) at Test.run (node:internal/test_runner/test:1093:21) at process.processTicksAndRejections (node:internal/process/task_queues:103:5) at async Test.processPendingSubtests (node:internal/test_runner/test:788:7)
should properly serialize input when it is a array of numbers: test/unit/llm-events/openai/embedding.test.js#L68
TypeError: Cannot read properties of null (reading 'getDurationInMillis') at new LlmEmbedding (/home/runner/work/node-newrelic/node-newrelic/lib/llm-events-new/embedding.js:38:29) at new OpenAiLlmEmbedding (/home/runner/work/node-newrelic/node-newrelic/lib/llm-events-new/openai/embedding.js:15:5) at TestContext.<anonymous> (/home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/embedding.test.js:68:28) at Test.runInAsyncScope (node:async_hooks:214:14) at Test.run (node:internal/test_runner/test:1093:21) at process.processTicksAndRejections (node:internal/process/task_queues:103:5) at async Test.processPendingSubtests (node:internal/test_runner/test:788:7)
should properly serialize input when it is a array of strings: test/unit/llm-events/openai/embedding.test.js#L68
TypeError: Cannot read properties of null (reading 'getDurationInMillis') at new LlmEmbedding (/home/runner/work/node-newrelic/node-newrelic/lib/llm-events-new/embedding.js:38:29) at new OpenAiLlmEmbedding (/home/runner/work/node-newrelic/node-newrelic/lib/llm-events-new/openai/embedding.js:15:5) at TestContext.<anonymous> (/home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/embedding.test.js:68:28) at Test.runInAsyncScope (node:async_hooks:214:14) at Test.run (node:internal/test_runner/test:1093:21) at process.processTicksAndRejections (node:internal/process/task_queues:103:5) at async Test.processPendingSubtests (node:internal/test_runner/test:788:7)
should properly serialize input when it is a string: test/unit/llm-events/openai/embedding.test.js#L68
TypeError: Cannot read properties of null (reading 'getDurationInMillis') at new LlmEmbedding (/home/runner/work/node-newrelic/node-newrelic/lib/llm-events-new/embedding.js:38:29) at new OpenAiLlmEmbedding (/home/runner/work/node-newrelic/node-newrelic/lib/llm-events-new/openai/embedding.js:15:5) at TestContext.<anonymous> (/home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/embedding.test.js:68:28) at Test.runInAsyncScope (node:async_hooks:214:14) at Test.run (node:internal/test_runner/test:1093:21) at process.processTicksAndRejections (node:internal/process/task_queues:103:5) at async Test.processPendingSubtests (node:internal/test_runner/test:788:7)
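The four openai embedding failures are a single null dereference: the new LlmEmbedding constructor reads getDurationInMillis() from a segment that the test supplies as null (lib/llm-events-new/embedding.js:38). A minimal sketch of a guarded read, assuming a constructor shape implied by the stack trace; the class body and option names here are illustrative guesses, not the agent's actual implementation:

```js
// Illustrative only -- mirrors the failure mode implied by the stack trace,
// not the real lib/llm-events-new/embedding.js.
class LlmEmbedding {
  constructor({ segment, input } = {}) {
    // Optional chaining returns undefined instead of throwing when segment is null,
    // so tests that construct the event without a live segment no longer crash.
    this.duration = segment?.getDurationInMillis?.() ?? null
    this.input = typeof input === 'string' ? input : JSON.stringify(input)
  }
}

// The failing tests build the event with segment: null and various input shapes:
const event = new LlmEmbedding({ segment: null, input: [['a'], ['b']] })
console.log(event.duration) // null rather than a thrown TypeError
```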
unit (22.x)
Process completed with exit code 1.
should create a LlmChatCompletionMessage from response choices: test/unit/llm-events/google-genai/chat-completion-message.test.js#L75
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: LlmChatCompletionMessage { 'request.model': 'gemini-2.0-flash', 'response.model': 'gemini-2.0-flash', appName: 'New Relic for Node.js tests', completion_id: 'chat-summary-id', content: "I don't know!", id: 'cb5fe417893464393d7e4f30fb7b64f7aac4', ingest_source: 'Node', is_response: true, role: 'model', sequence: 2, span_id: '7e232f9ec9987c29', token_count: 0, trace_id: 'ced874726a009b9de774cc9bd846206a', vendor: 'gemini' } should loosely deep-equal { 'request.model': 'gemini-2.0-flash', 'response.model': 'gemini-2.0-flash', completion_id: 'chat-summary-id', content: "I don't know!", id: 'cb5fe417893464393d7e4f30fb7b64f7aac4', ingest_source: 'Node', is_response: true, role: 'model', sequence: 2, span_id: '7e232f9ec9987c29', token_count: 0, trace_id: 'ced874726a009b9de774cc9bd846206a', vendor: 'gemini' } at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/google-genai/chat-completion-message.test.js:75:14 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:internal/async_local_storage/async_hooks:91:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { id: 'cb5fe417893464393d7e4f30fb7b64f7aac4', appName: 'New Relic for Node.js tests', trace_id: 'ced874726a009b9de774cc9bd846206a', span_id: '7e232f9ec9987c29', 'response.model': 'gemini-2.0-flash', 'request.model': 'gemini-2.0-flash', vendor: 'gemini', ingest_source: 'Node', role: 'model', sequence: 2, completion_id: 'chat-summary-id', is_response: true, content: "I don't know!", token_count: 0 }, expected: { id: 'cb5fe417893464393d7e4f30fb7b64f7aac4', trace_id: 'ced874726a009b9de774cc9bd846206a', span_id: '7e232f9ec9987c29', 'request.model': 'gemini-2.0-flash', vendor: 'gemini', ingest_source: 'Node', content: "I don't know!", sequence: 2, completion_id: 'chat-summary-id', role: 'model', token_count: 0, 'response.model': 'gemini-2.0-flash', is_response: true }, operator: 'deepEqual', diff: 'simple' }
should create a LlmChatCompletionMessage event: test/unit/llm-events/google-genai/chat-completion-message.test.js#L47
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: LlmChatCompletionMessage { 'request.model': 'gemini-2.0-flash', 'response.model': 'gemini-2.0-flash', appName: 'New Relic for Node.js tests', completion_id: 'chat-summary-id', content: 'Why is the sky blue?', id: 'c4ae273eec2ecfc5728a9ec2b310565d799a', ingest_source: 'Node', is_response: false, role: 'user', sequence: 0, span_id: 'bfd107331348d8c2', timestamp: 1770076065245, token_count: 0, trace_id: '554cd91523b950de34303bd10389608d', vendor: 'gemini' } should loosely deep-equal { 'request.model': 'gemini-2.0-flash', 'response.model': 'gemini-2.0-flash', completion_id: 'chat-summary-id', content: 'Why is the sky blue?', id: 'c4ae273eec2ecfc5728a9ec2b310565d799a', ingest_source: 'Node', role: 'user', sequence: 0, span_id: 'bfd107331348d8c2', timestamp: 1770076065245, token_count: 0, trace_id: '554cd91523b950de34303bd10389608d', vendor: 'gemini' } at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/google-genai/chat-completion-message.test.js:47:14 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:internal/async_local_storage/async_hooks:91:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { id: 'c4ae273eec2ecfc5728a9ec2b310565d799a', appName: 'New Relic for Node.js tests', trace_id: '554cd91523b950de34303bd10389608d', span_id: 'bfd107331348d8c2', 'response.model': 'gemini-2.0-flash', 'request.model': 'gemini-2.0-flash', vendor: 'gemini', ingest_source: 'Node', role: 'user', sequence: 0, completion_id: 'chat-summary-id', is_response: false, content: 'Why is the sky blue?', timestamp: 1770076065245, token_count: 0 }, expected: { id: 'c4ae273eec2ecfc5728a9ec2b310565d799a', trace_id: '554cd91523b950de34303bd10389608d', span_id: 'bfd107331348d8c2', 'request.model': 'gemini-2.0-flash', vendor: 'gemini', ingest_source: 'Node', content: 'Why is the sky blue?', sequence: 0, completion_id: 'chat-summary-id', role: 'user', token_count: 0, 'response.model': 'gemini-2.0-flash', timestamp: 1770076065245 }, operator: 'deepEqual', diff: 'simple' }
should properly create a LlmChatCompletionSummary event: test/unit/llm-events/google-genai/chat-completion-summary.test.js#L40
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: LlmChatCompletionSummary { 'request.max_tokens': 1000000, 'request.model': 'gemini-2.0-flash', 'request.temperature': 1, 'response.choices.finish_reason': 'STOP', 'response.model': 'gemini-2.0-flash', 'response.number_of_messages': 2, 'response.usage.completion_tokens': 20, 'response.usage.prompt_tokens': 10, 'response.usage.total_tokens': 30, appName: 'New Relic for Node.js tests', duration: 0.091622, error: false, id: '1600e4a6c02da91ff4b634c3e67623af2e10', ingest_source: 'N... should loosely deep-equal { 'request.max_tokens': 1000000, 'request.model': 'gemini-2.0-flash', 'request.temperature': 1, 'response.choices.finish_reason': 'STOP', 'response.model': 'gemini-2.0-flash', 'response.number_of_messages': 2, 'response.usage.completion_tokens': 20, 'response.usage.prompt_tokens': 10, 'response.usage.total_tokens': 30, duration: 0.091622, id: '1600e4a6c02da91ff4b634c3e67623af2e10', ingest_source: 'Node', span_id: '36fb9df46b92c959', timestamp: 1770076065044, trace_id: '9d6f3... at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/google-genai/chat-completion-summary.test.js:40:14 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:internal/async_local_storage/async_hooks:91:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { id: '1600e4a6c02da91ff4b634c3e67623af2e10', appName: 'New Relic for Node.js tests', trace_id: '9d6f391af1f867193e8c021c30545f1d', span_id: '36fb9df46b92c959', 'response.model': 'gemini-2.0-flash', 'request.model': 'gemini-2.0-flash', vendor: 'gemini', ingest_source: 'Node', duration: 0.091622, error: false, 'response.number_of_messages': 2, 'response.choices.finish_reason': 'STOP', 'request.max_tokens': 1000000, 'request.temperature': 1, timestamp: 1770076065044, 'response.usage.prompt_tokens': 10, 'response.usage.completion_tokens': 20, 'response.usage.total_tokens': 30 }, expected: { id: '1600e4a6c02da91ff4b634c3e67623af2e10', trace_id: '9d6f391af1f867193e8c021c30545f1d', span_id: '36fb9df46b92c959', 'request.model': 'gemini-2.0-flash', vendor: 'gemini', ingest_source: 'Node', duration: 0.091622, 'request.max_tokens': 1000000, 'request.temperature': 1, 'response.number_of_messages': 2, 'response.choices.finish_reason': 'STOP', 'response.usage.prompt_tokens': 10, 'response.usage.completion_tokens': 20, 'response.usage.total_tokens': 30, 'response.model': 'gemini-2.0-flash', timestamp: 1770076065044 }, operator: 'deepEqual'
should properly serialize input when it is a array of array of numbers: test/unit/llm-events/openai/embedding.test.js#L68
TypeError: Cannot read properties of null (reading 'getDurationInMillis') at new LlmEmbedding (/home/runner/work/node-newrelic/node-newrelic/lib/llm-events-new/embedding.js:38:29) at new OpenAiLlmEmbedding (/home/runner/work/node-newrelic/node-newrelic/lib/llm-events-new/openai/embedding.js:15:5) at TestContext.<anonymous> (/home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/embedding.test.js:68:28) at Test.runInAsyncScope (node:async_hooks:214:14) at Test.run (node:internal/test_runner/test:1034:21) at process.processTicksAndRejections (node:internal/process/task_queues:105:5) at async Test.processPendingSubtests (node:internal/test_runner/test:744:7)
should properly serialize input when it is a array of numbers: test/unit/llm-events/openai/embedding.test.js#L68
TypeError: Cannot read properties of null (reading 'getDurationInMillis') at new LlmEmbedding (/home/runner/work/node-newrelic/node-newrelic/lib/llm-events-new/embedding.js:38:29) at new OpenAiLlmEmbedding (/home/runner/work/node-newrelic/node-newrelic/lib/llm-events-new/openai/embedding.js:15:5) at TestContext.<anonymous> (/home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/embedding.test.js:68:28) at Test.runInAsyncScope (node:async_hooks:214:14) at Test.run (node:internal/test_runner/test:1034:21) at process.processTicksAndRejections (node:internal/process/task_queues:105:5) at async Test.processPendingSubtests (node:internal/test_runner/test:744:7)
should properly serialize input when it is a array of strings: test/unit/llm-events/openai/embedding.test.js#L68
TypeError: Cannot read properties of null (reading 'getDurationInMillis') at new LlmEmbedding (/home/runner/work/node-newrelic/node-newrelic/lib/llm-events-new/embedding.js:38:29) at new OpenAiLlmEmbedding (/home/runner/work/node-newrelic/node-newrelic/lib/llm-events-new/openai/embedding.js:15:5) at TestContext.<anonymous> (/home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/embedding.test.js:68:28) at Test.runInAsyncScope (node:async_hooks:214:14) at Test.run (node:internal/test_runner/test:1034:21) at process.processTicksAndRejections (node:internal/process/task_queues:105:5) at async Test.processPendingSubtests (node:internal/test_runner/test:744:7)
should properly serialize input when it is a string: test/unit/llm-events/openai/embedding.test.js#L68
TypeError: Cannot read properties of null (reading 'getDurationInMillis') at new LlmEmbedding (/home/runner/work/node-newrelic/node-newrelic/lib/llm-events-new/embedding.js:38:29) at new OpenAiLlmEmbedding (/home/runner/work/node-newrelic/node-newrelic/lib/llm-events-new/openai/embedding.js:15:5) at TestContext.<anonymous> (/home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/embedding.test.js:68:28) at Test.runInAsyncScope (node:async_hooks:214:14) at Test.run (node:internal/test_runner/test:1034:21) at process.processTicksAndRejections (node:internal/process/task_queues:105:5) at async Test.processPendingSubtests (node:internal/test_runner/test:744:7)
unit (20.x)
Process completed with exit code 1.
should create a LlmChatCompletionMessage from response choices: test/unit/llm-events/google-genai/chat-completion-message.test.js#L75
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: LlmChatCompletionMessage { 'request.model': 'gemini-2.0-flash', 'response.model': 'gemini-2.0-flash', appName: 'New Relic for Node.js tests', completion_id: 'chat-summary-id', content: "I don't know!", id: '6aa917903960ed678580a0b4573f667f0011', ingest_source: 'Node', is_response: true, role: 'model', sequence: 2, span_id: '44a4b0cc05d278a6', token_count: 0, trace_id: '9af0ff440c6a26e9c43c7acf98edcaea', vendor: 'gemini' } should loosely deep-equal { 'request.model': 'gemini-2.0-flash', 'response.model': 'gemini-2.0-flash', completion_id: 'chat-summary-id', content: "I don't know!", id: '6aa917903960ed678580a0b4573f667f0011', ingest_source: 'Node', is_response: true, role: 'model', sequence: 2, span_id: '44a4b0cc05d278a6', token_count: 0, trace_id: '9af0ff440c6a26e9c43c7acf98edcaea', vendor: 'gemini' } at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/google-genai/chat-completion-message.test.js:75:14 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:async_hooks:346:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { id: '6aa917903960ed678580a0b4573f667f0011', appName: 'New Relic for Node.js tests', trace_id: '9af0ff440c6a26e9c43c7acf98edcaea', span_id: '44a4b0cc05d278a6', 'response.model': 'gemini-2.0-flash', 'request.model': 'gemini-2.0-flash', vendor: 'gemini', ingest_source: 'Node', role: 'model', sequence: 2, completion_id: 'chat-summary-id', is_response: true, content: "I don't know!", token_count: 0 }, expected: { id: '6aa917903960ed678580a0b4573f667f0011', trace_id: '9af0ff440c6a26e9c43c7acf98edcaea', span_id: '44a4b0cc05d278a6', 'request.model': 'gemini-2.0-flash', vendor: 'gemini', ingest_source: 'Node', content: "I don't know!", sequence: 2, completion_id: 'chat-summary-id', role: 'model', token_count: 0, 'response.model': 'gemini-2.0-flash', is_response: true }, operator: 'deepEqual' }
should create a LlmChatCompletionMessage event: test/unit/llm-events/google-genai/chat-completion-message.test.js#L47
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: LlmChatCompletionMessage { 'request.model': 'gemini-2.0-flash', 'response.model': 'gemini-2.0-flash', appName: 'New Relic for Node.js tests', completion_id: 'chat-summary-id', content: 'Why is the sky blue?', id: 'd7fe9024426ee6cbc38fa00ed40c85afd5fe', ingest_source: 'Node', is_response: false, role: 'user', sequence: 0, span_id: '84eb7a3bf47a1b64', timestamp: 1770076073240, token_count: 0, trace_id: 'a422e92393b8e1f9b61e7a816c1b11ce', vendor: 'gemini' } should loosely deep-equal { 'request.model': 'gemini-2.0-flash', 'response.model': 'gemini-2.0-flash', completion_id: 'chat-summary-id', content: 'Why is the sky blue?', id: 'd7fe9024426ee6cbc38fa00ed40c85afd5fe', ingest_source: 'Node', role: 'user', sequence: 0, span_id: '84eb7a3bf47a1b64', timestamp: 1770076073240, token_count: 0, trace_id: 'a422e92393b8e1f9b61e7a816c1b11ce', vendor: 'gemini' } at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/google-genai/chat-completion-message.test.js:47:14 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:async_hooks:346:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { id: 'd7fe9024426ee6cbc38fa00ed40c85afd5fe', appName: 'New Relic for Node.js tests', trace_id: 'a422e92393b8e1f9b61e7a816c1b11ce', span_id: '84eb7a3bf47a1b64', 'response.model': 'gemini-2.0-flash', 'request.model': 'gemini-2.0-flash', vendor: 'gemini', ingest_source: 'Node', role: 'user', sequence: 0, completion_id: 'chat-summary-id', is_response: false, content: 'Why is the sky blue?', timestamp: 1770076073240, token_count: 0 }, expected: { id: 'd7fe9024426ee6cbc38fa00ed40c85afd5fe', trace_id: 'a422e92393b8e1f9b61e7a816c1b11ce', span_id: '84eb7a3bf47a1b64', 'request.model': 'gemini-2.0-flash', vendor: 'gemini', ingest_source: 'Node', content: 'Why is the sky blue?', sequence: 0, completion_id: 'chat-summary-id', role: 'user', token_count: 0, 'response.model': 'gemini-2.0-flash', timestamp: 1770076073240 }, operator: 'deepEqual' }
should properly create a LlmChatCompletionSummary event: test/unit/llm-events/google-genai/chat-completion-summary.test.js#L40
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: LlmChatCompletionSummary { 'request.max_tokens': 1000000, 'request.model': 'gemini-2.0-flash', 'request.temperature': 1, 'response.choices.finish_reason': 'STOP', 'response.model': 'gemini-2.0-flash', 'response.number_of_messages': 2, 'response.usage.completion_tokens': 20, 'response.usage.prompt_tokens': 10, 'response.usage.total_tokens': 30, appName: 'New Relic for Node.js tests', duration: 0.207184, error: false, id: '6b0d91cfa7d48711ac7d4308d0e558f12840', ingest_source: 'N... should loosely deep-equal { 'request.max_tokens': 1000000, 'request.model': 'gemini-2.0-flash', 'request.temperature': 1, 'response.choices.finish_reason': 'STOP', 'response.model': 'gemini-2.0-flash', 'response.number_of_messages': 2, 'response.usage.completion_tokens': 20, 'response.usage.prompt_tokens': 10, 'response.usage.total_tokens': 30, duration: 0.207184, id: '6b0d91cfa7d48711ac7d4308d0e558f12840', ingest_source: 'Node', span_id: '8d498cf3e9e6138a', timestamp: 1770076073199, trace_id: '30260... at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/google-genai/chat-completion-summary.test.js:40:14 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:async_hooks:346:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { id: '6b0d91cfa7d48711ac7d4308d0e558f12840', appName: 'New Relic for Node.js tests', trace_id: '30260c04060211eaaa6ea4bbfc32df2f', span_id: '8d498cf3e9e6138a', 'response.model': 'gemini-2.0-flash', 'request.model': 'gemini-2.0-flash', vendor: 'gemini', ingest_source: 'Node', duration: 0.207184, error: false, 'response.number_of_messages': 2, 'response.choices.finish_reason': 'STOP', 'request.max_tokens': 1000000, 'request.temperature': 1, timestamp: 1770076073199, 'response.usage.prompt_tokens': 10, 'response.usage.completion_tokens': 20, 'response.usage.total_tokens': 30 }, expected: { id: '6b0d91cfa7d48711ac7d4308d0e558f12840', trace_id: '30260c04060211eaaa6ea4bbfc32df2f', span_id: '8d498cf3e9e6138a', 'request.model': 'gemini-2.0-flash', vendor: 'gemini', ingest_source: 'Node', duration: 0.207184, 'request.max_tokens': 1000000, 'request.temperature': 1, 'response.number_of_messages': 2, 'response.choices.finish_reason': 'STOP', 'response.usage.prompt_tokens': 10, 'response.usage.completion_tokens': 20, 'response.usage.total_tokens': 30, 'response.model': 'gemini-2.0-flash', timestamp: 1770076073199 }, operator: 'deepEqual' }
should properly serialize input when it is a array of array of numbers: test/unit/llm-events/openai/embedding.test.js#L68
TypeError [Error]: Cannot read properties of null (reading 'getDurationInMillis') at new LlmEmbedding (/home/runner/work/node-newrelic/node-newrelic/lib/llm-events-new/embedding.js:38:29) at new OpenAiLlmEmbedding (/home/runner/work/node-newrelic/node-newrelic/lib/llm-events-new/openai/embedding.js:15:5) at TestContext.<anonymous> (/home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/embedding.test.js:68:28) at Test.runInAsyncScope (node:async_hooks:206:9) at Test.run (node:internal/test_runner/test:783:21) at process.processTicksAndRejections (node:internal/process/task_queues:95:5) at async Test.processPendingSubtests (node:internal/test_runner/test:526:7)
should properly serialize input when it is a array of numbers: test/unit/llm-events/openai/embedding.test.js#L68
TypeError [Error]: Cannot read properties of null (reading 'getDurationInMillis') at new LlmEmbedding (/home/runner/work/node-newrelic/node-newrelic/lib/llm-events-new/embedding.js:38:29) at new OpenAiLlmEmbedding (/home/runner/work/node-newrelic/node-newrelic/lib/llm-events-new/openai/embedding.js:15:5) at TestContext.<anonymous> (/home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/embedding.test.js:68:28) at Test.runInAsyncScope (node:async_hooks:206:9) at Test.run (node:internal/test_runner/test:783:21) at process.processTicksAndRejections (node:internal/process/task_queues:95:5) at async Test.processPendingSubtests (node:internal/test_runner/test:526:7)
should properly serialize input when it is a array of strings: test/unit/llm-events/openai/embedding.test.js#L68
TypeError [Error]: Cannot read properties of null (reading 'getDurationInMillis') at new LlmEmbedding (/home/runner/work/node-newrelic/node-newrelic/lib/llm-events-new/embedding.js:38:29) at new OpenAiLlmEmbedding (/home/runner/work/node-newrelic/node-newrelic/lib/llm-events-new/openai/embedding.js:15:5) at TestContext.<anonymous> (/home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/embedding.test.js:68:28) at Test.runInAsyncScope (node:async_hooks:206:9) at Test.run (node:internal/test_runner/test:783:21) at process.processTicksAndRejections (node:internal/process/task_queues:95:5) at async Test.processPendingSubtests (node:internal/test_runner/test:526:7)
should properly serialize input when it is a string: test/unit/llm-events/openai/embedding.test.js#L68
TypeError [Error]: Cannot read properties of null (reading 'getDurationInMillis') at new LlmEmbedding (/home/runner/work/node-newrelic/node-newrelic/lib/llm-events-new/embedding.js:38:29) at new OpenAiLlmEmbedding (/home/runner/work/node-newrelic/node-newrelic/lib/llm-events-new/openai/embedding.js:15:5) at TestContext.<anonymous> (/home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/embedding.test.js:68:28) at Test.runInAsyncScope (node:async_hooks:206:9) at Test.run (node:internal/test_runner/test:783:21) at process.processTicksAndRejections (node:internal/process/task_queues:95:5) at async Test.processPendingSubtests (node:internal/test_runner/test:526:7)
all-clear
Process completed with exit code 1.
ci (lts/*)
Total Tests: 36 Suites 📂: 0 Passed ✅: 36 Failed ❌: 0 Canceled 🚫: 0 Skipped ⏭️: 0 Todo 📝: 0 Duration 🕐: 689.808ms
unit (24.x)
Total Tests: 5036 Suites 📂: 13 Passed ✅: 5008 Failed ❌: 7 Canceled 🚫: 0 Skipped ⏭️: 2 Todo 📝: 19 Duration 🕐: 62897.087ms
integration (24.x)
Total Tests: 1 Suites 📂: 0 Passed ✅: 1 Failed ❌: 0 Canceled 🚫: 0 Skipped ⏭️: 0 Todo 📝: 0 Duration 🕐: 450.891ms
integration (24.x)
Total Tests: 791 Suites 📂: 0 Passed ✅: 787 Failed ❌: 0 Canceled 🚫: 0 Skipped ⏭️: 4 Todo 📝: 0 Duration 🕐: 41525.180ms
unit (22.x)
Total Tests: 5036 Suites 📂: 13 Passed ✅: 5008 Failed ❌: 7 Canceled 🚫: 0 Skipped ⏭️: 2 Todo 📝: 19 Duration 🕐: 65524.332ms
unit (20.x)
Total Tests: 5033 Suites 📂: 13 Passed ✅: 5004 Failed ❌: 7 Canceled 🚫: 0 Skipped ⏭️: 3 Todo 📝: 19 Duration 🕐: 80068.873ms
integration (20.x)
Total Tests: 1 Suites 📂: 0 Passed ✅: 1 Failed ❌: 0 Canceled 🚫: 0 Skipped ⏭️: 0 Todo 📝: 0 Duration 🕐: 582.262ms
integration (20.x)
Total Tests: 791 Suites 📂: 0 Passed ✅: 786 Failed ❌: 0 Canceled 🚫: 0 Skipped ⏭️: 5 Todo 📝: 0 Duration 🕐: 123894.028ms
integration (22.x)
Total Tests: 1 Suites 📂: 0 Passed ✅: 1 Failed ❌: 0 Canceled 🚫: 0 Skipped ⏭️: 0 Todo 📝: 0 Duration 🕐: 543.456ms
integration (22.x)
Total Tests: 791 Suites 📂: 0 Passed ✅: 787 Failed ❌: 0 Canceled 🚫: 0 Skipped ⏭️: 4 Todo 📝: 0 Duration 🕐: 129793.856ms

Artifacts

Produced during runtime
Name                          Size      Digest
integration-tests-cjs-20.x    153 KB    sha256:4eb4c6004d85cdb72b248368ea8dcc2a6e8fb02320dcfb2361fad64183b51547
integration-tests-cjs-22.x    153 KB    sha256:c86db9754ab8cfae95cbceaed82d14b452526b9c3705545a9555114cee9f1a8d
integration-tests-cjs-24.x    153 KB    sha256:1fd12c9470582d9b4f4872155d493f94cfeb39eb7db2ba9f5b581886b95538ca
integration-tests-esm-20.x    89 KB     sha256:dce9c536436395e25d8bef76ff8215a1834a000d9d74eb267fc6e4f5f2265011
integration-tests-esm-22.x    89 KB     sha256:a34813f54311e10b4dbeaa73e85e94fc32fcc491b8695524d7e88be3ccd4818f
integration-tests-esm-24.x    89.9 KB   sha256:58142e356b815cbd4d8aaee850e27c832a8d76c3bdc225b127634dfb469e5f2e
versioned-tests-20.x          180 KB    sha256:a3f802bd2b0edc6d9c3c3f8e06b38b857e8f6027935c93ccb3a11c11a99b635a
versioned-tests-22.x          179 KB    sha256:0d5a2bda54f9b4aa9bd06b59190e80796e106207d6315676f5e334b7be95a7f6
versioned-tests-24.x          177 KB    sha256:c3697875c276ec8dbfad55f23b8c43ba735b640dcf11f1d93369bee33e956eab