LangChain chat msg and summary refactor #1224

Triggered via push: February 3, 2026 19:22
Status: Failure
Total duration: 16m 52s
Artifacts: 9

ci-workflow.yml

on: push
should_run (6s)
Matrix: integration
Matrix: unit
Matrix: versioned-internal
Matrix: ci
Matrix: lint
Matrix: versioned-external
Matrix: codecov
all-clear (2s)

Annotations

34 errors and 10 notices
should properly create a LlmChatCompletionSummary event: test/unit/llm-events/google-genai/chat-completion-summary.test.js#L40
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: GoogleGenAiLlmChatCompletionSummary { 'request.max_tokens': 1000000, 'request.model': 'gemini-2.0-flash', 'request.temperature': 1, 'response.choices.finish_reason': 'STOP', 'response.model': 'gemini-2.0-flash', 'response.number_of_messages': 2, 'response.usage.completion_tokens': 20, 'response.usage.prompt_tokens': 10, 'response.usage.total_tokens': 30, appName: 'New Relic for Node.js tests', duration: 0.097064, id: 'e39324b7afb6ac00bef21de8915d0a20246a', ingest_source: 'Node',... should loosely deep-equal { 'request.max_tokens': 1000000, 'request.model': 'gemini-2.0-flash', 'request.temperature': 1, 'response.choices.finish_reason': 'STOP', 'response.model': 'gemini-2.0-flash', 'response.number_of_messages': 2, 'response.usage.completion_tokens': 20, 'response.usage.prompt_tokens': 10, 'response.usage.total_tokens': 30, duration: 0.097064, id: 'e39324b7afb6ac00bef21de8915d0a20246a', ingest_source: 'Node', span_id: '9886a7a6d80d5ce0', timestamp: 1770146646432, trace_id: '1d7e5... 
at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/google-genai/chat-completion-summary.test.js:40:14 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:internal/async_local_storage/async_hooks:91:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { ingest_source: 'Node', id: 'e39324b7afb6ac00bef21de8915d0a20246a', span_id: '9886a7a6d80d5ce0', trace_id: '1d7e50f49ec66361243b5a07b5ec2a3d', vendor: 'gemini', appName: 'New Relic for Node.js tests', 'response.model': 'gemini-2.0-flash', 'request.model': 'gemini-2.0-flash', 'request.max_tokens': 1000000, 'request.temperature': 1, 'response.choices.finish_reason': 'STOP', 'response.number_of_messages': 2, timestamp: 1770146646432, duration: 0.097064, 'response.usage.prompt_tokens': 10, 'response.usage.completion_tokens': 20, 'response.usage.total_tokens': 30 }, expected: { id: 'e39324b7afb6ac00bef21de8915d0a20246a', trace_id: '1d7e50f49ec66361243b5a07b5ec2a3d', span_id: '9886a7a6d80d5ce0', vendor: 'gemini', ingest_source: 'Node', 'response.model': 'gemini-2.0-flash', duration: 0.097064, 'request.max_tokens': 1000000, 'request.temperature': 1, 'response.number_of_messages': 2, 'response.choices.finish_reason': 'STOP', 
'response.usage.prompt_tokens': 10, 'response.usage.completion_tokens': 20, 'response.usage.total_tokens': 30, 'request.model': 'gemini-2.0-flash', timestamp: 1770146646432 }, operator: 'deepEqual', diff: 'simple' }
should properly create a LlmEmbedding event: test/unit/llm-events/google-genai/embedding.test.js#L43
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: GoogleGenAiLlmEmbedding { 'request.model': 'gemini-2.0-flash', 'response.model': 'gemini-2.0-flash', appName: 'New Relic for Node.js tests', duration: 0.096506, id: 'd4842e7c9c3fea500724cb2823560b6ab754', ingest_source: 'Node', input: 'This is my test input', span_id: '857afaf490cb56e2', trace_id: '8cc67535d700504011b0187cbb0c8061', vendor: 'gemini' } should loosely deep-equal { 'request.model': 'gemini-2.0-flash', 'response.model': 'gemini-2.0-flash', duration: 0.096506, id: 'd4842e7c9c3fea500724cb2823560b6ab754', ingest_source: 'Node', input: 'This is my test input', span_id: '857afaf490cb56e2', trace_id: '8cc67535d700504011b0187cbb0c8061', vendor: 'gemini' } at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/google-genai/embedding.test.js:43:14 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:internal/async_local_storage/async_hooks:91:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { ingest_source: 'Node', id: 'd4842e7c9c3fea500724cb2823560b6ab754', span_id: '857afaf490cb56e2', trace_id: '8cc67535d700504011b0187cbb0c8061', vendor: 'gemini', appName: 'New Relic for Node.js tests', 'response.model': 
'gemini-2.0-flash', 'request.model': 'gemini-2.0-flash', duration: 0.096506, input: 'This is my test input' }, expected: { id: 'd4842e7c9c3fea500724cb2823560b6ab754', trace_id: '8cc67535d700504011b0187cbb0c8061', span_id: '857afaf490cb56e2', vendor: 'gemini', ingest_source: 'Node', 'response.model': 'gemini-2.0-flash', duration: 0.096506, 'request.model': 'gemini-2.0-flash', input: 'This is my test input' }, operator: 'deepEqual', diff: 'simple' }
should create a LlmChatCompletionMessage from response choices: test/unit/llm-events/openai/chat-completion-message.test.js#L399
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: OpenAiLlmChatCompletionMessage { 'response.model': 'gpt-4-0613', appName: 'New Relic for Node.js tests', completion_id: 'chat-summary-id', content: 'a lot', id: 'resp_id-2', ingest_source: 'Node', is_response: true, request_id: 'req-id', role: 'assistant', sequence: 2, span_id: 'f7c3e465efad5034', trace_id: 'a07c4ec6893c3d3869d9c04a7ca2db4c', vendor: 'openai' } should loosely deep-equal { 'response.model': 'gpt-4-0613', completion_id: 'chat-summary-id', content: 'a lot', id: 'resp_id-2', ingest_source: 'Node', is_response: true, request_id: 'req-id', role: 'assistant', sequence: 2, span_id: 'f7c3e465efad5034', trace_id: 'a07c4ec6893c3d3869d9c04a7ca2db4c', vendor: 'openai' } at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/chat-completion-message.test.js:399:16 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:internal/async_local_storage/async_hooks:91:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { ingest_source: 'Node', id: 'resp_id-2', span_id: 'f7c3e465efad5034', trace_id: 'a07c4ec6893c3d3869d9c04a7ca2db4c', vendor: 'openai', appName: 'New Relic for Node.js tests', request_id: 'req-id', 
'response.model': 'gpt-4-0613', completion_id: 'chat-summary-id', sequence: 2, is_response: true, role: 'assistant', content: 'a lot' }, expected: { id: 'resp_id-2', request_id: 'req-id', trace_id: 'a07c4ec6893c3d3869d9c04a7ca2db4c', span_id: 'f7c3e465efad5034', 'response.model': 'gpt-4-0613', vendor: 'openai', ingest_source: 'Node', content: 'a lot', role: 'assistant', sequence: 2, completion_id: 'chat-summary-id', is_response: true }, operator: 'deepEqual', diff: 'simple' }
should create a LlmChatCompletionMessage event: test/unit/llm-events/openai/chat-completion-message.test.js#L366
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: OpenAiLlmChatCompletionMessage { 'response.model': 'gpt-4-0613', appName: 'New Relic for Node.js tests', completion_id: 'chat-summary-id', content: 'What is a woodchuck?', id: 'resp_id-0', ingest_source: 'Node', request_id: 'req-id', role: 'user', sequence: 0, span_id: '65b069effecb2b1e', timestamp: 1770146646033, token_count: 0, trace_id: '7fb887f58c812fb7b650f719d6bfac45', vendor: 'openai' } should loosely deep-equal { 'response.model': 'gpt-4-0613', completion_id: 'chat-summary-id', content: 'What is a woodchuck?', id: 'resp_id-0', ingest_source: 'Node', request_id: 'req-id', role: 'user', sequence: 0, span_id: '65b069effecb2b1e', timestamp: 1770146646033, token_count: 0, trace_id: '7fb887f58c812fb7b650f719d6bfac45', vendor: 'openai' } at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/chat-completion-message.test.js:366:16 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:internal/async_local_storage/async_hooks:91:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { ingest_source: 'Node', id: 'resp_id-0', span_id: '65b069effecb2b1e', trace_id: '7fb887f58c812fb7b650f719d6bfac45', vendor: 'openai', 
appName: 'New Relic for Node.js tests', request_id: 'req-id', 'response.model': 'gpt-4-0613', completion_id: 'chat-summary-id', sequence: 0, role: 'user', timestamp: 1770146646033, content: 'What is a woodchuck?', token_count: 0 }, expected: { id: 'resp_id-0', request_id: 'req-id', trace_id: '7fb887f58c812fb7b650f719d6bfac45', span_id: '65b069effecb2b1e', 'response.model': 'gpt-4-0613', vendor: 'openai', ingest_source: 'Node', content: 'What is a woodchuck?', role: 'user', sequence: 0, completion_id: 'chat-summary-id', token_count: 0, timestamp: 1770146646033 }, operator: 'deepEqual', diff: 'simple' }
should create a LlmChatCompletionMessage from response choices: test/unit/llm-events/openai/chat-completion-message.test.js#L130
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: OpenAiLlmChatCompletionMessage { 'response.model': 'gpt-3.5-turbo-0613', appName: 'New Relic for Node.js tests', completion_id: 'chat-summary-id', content: 'a lot', id: 'res-id-2', ingest_source: 'Node', is_response: true, request_id: 'req-id', role: 'know-it-all', sequence: 2, span_id: 'b926a3d9f426732c', token_count: 0, trace_id: '0211bd5e19bd6024ca20b61fcbabdbdd', vendor: 'openai' } should loosely deep-equal { 'response.model': 'gpt-3.5-turbo-0613', completion_id: 'chat-summary-id', content: 'a lot', id: 'res-id-2', ingest_source: 'Node', is_response: true, request_id: 'req-id', role: 'know-it-all', sequence: 2, span_id: 'b926a3d9f426732c', token_count: 0, trace_id: '0211bd5e19bd6024ca20b61fcbabdbdd', vendor: 'openai' } at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/chat-completion-message.test.js:130:16 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:internal/async_local_storage/async_hooks:91:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { ingest_source: 'Node', id: 'res-id-2', span_id: 'b926a3d9f426732c', trace_id: '0211bd5e19bd6024ca20b61fcbabdbdd', vendor: 'openai', appName: 'New Relic 
for Node.js tests', request_id: 'req-id', 'response.model': 'gpt-3.5-turbo-0613', completion_id: 'chat-summary-id', sequence: 2, is_response: true, role: 'know-it-all', content: 'a lot', token_count: 0 }, expected: { id: 'res-id-2', request_id: 'req-id', trace_id: '0211bd5e19bd6024ca20b61fcbabdbdd', span_id: 'b926a3d9f426732c', 'response.model': 'gpt-3.5-turbo-0613', vendor: 'openai', ingest_source: 'Node', content: 'a lot', role: 'know-it-all', sequence: 2, completion_id: 'chat-summary-id', token_count: 0, is_response: true }, operator: 'deepEqual', diff: 'simple' }
should create a LlmChatCompletionMessage event: test/unit/llm-events/openai/chat-completion-message.test.js#L101
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: OpenAiLlmChatCompletionMessage { 'response.model': 'gpt-3.5-turbo-0613', appName: 'New Relic for Node.js tests', completion_id: 'chat-summary-id', content: 'What is a woodchuck?', id: 'res-id-0', ingest_source: 'Node', request_id: 'req-id', role: 'inquisitive-kid', sequence: 0, span_id: '4edc3567bb58d0e1', timestamp: 1770146645999, token_count: 0, trace_id: '4a5386c511ed3f1e96de3390ca858c6b', vendor: 'openai' } should loosely deep-equal { 'response.model': 'gpt-3.5-turbo-0613', completion_id: 'chat-summary-id', content: 'What is a woodchuck?', id: 'res-id-0', ingest_source: 'Node', request_id: 'req-id', role: 'inquisitive-kid', sequence: 0, span_id: '4edc3567bb58d0e1', timestamp: 1770146645999, token_count: 0, trace_id: '4a5386c511ed3f1e96de3390ca858c6b', vendor: 'openai' } at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/chat-completion-message.test.js:101:16 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:internal/async_local_storage/async_hooks:91:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { ingest_source: 'Node', id: 'res-id-0', span_id: '4edc3567bb58d0e1', trace_id: 
'4a5386c511ed3f1e96de3390ca858c6b', vendor: 'openai', appName: 'New Relic for Node.js tests', request_id: 'req-id', 'response.model': 'gpt-3.5-turbo-0613', completion_id: 'chat-summary-id', sequence: 0, role: 'inquisitive-kid', timestamp: 1770146645999, content: 'What is a woodchuck?', token_count: 0 }, expected: { id: 'res-id-0', request_id: 'req-id', trace_id: '4a5386c511ed3f1e96de3390ca858c6b', span_id: '4edc3567bb58d0e1', 'response.model': 'gpt-3.5-turbo-0613', vendor: 'openai', ingest_source: 'Node', content: 'What is a woodchuck?', role: 'inquisitive-kid', sequence: 0, completion_id: 'chat-summary-id', token_count: 0, timestamp: 1770146645999 }, operator: 'deepEqual', diff: 'simple' }
responses.create should properly create a LlmChatCompletionSummary event: test/unit/llm-events/openai/chat-completion-summary.test.js#L63
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: OpenAiLlmChatCompletionSummary { 'request.max_tokens': 1000000, 'request.model': 'gpt-4', 'request.temperature': 1, 'response.choices.finish_reason': 'completed', 'response.headers.llmVersion': '1.0.0', 'response.headers.ratelimitLimitRequests': '100', 'response.headers.ratelimitLimitTokens': '100', 'response.headers.ratelimitRemainingRequests': '10', 'response.headers.ratelimitRemainingTokens': '10', 'response.headers.ratelimitResetTokens': '100', 'response.model': 'gpt-4-0613', ... should loosely deep-equal { 'request.max_tokens': 1000000, 'request.model': 'gpt-4', 'request.temperature': 1, 'response.choices.finish_reason': 'completed', 'response.headers.llmVersion': '1.0.0', 'response.headers.ratelimitLimitRequests': '100', 'response.headers.ratelimitLimitTokens': '100', 'response.headers.ratelimitRemainingRequests': '10', 'response.headers.ratelimitRemainingTokens': '10', 'response.headers.ratelimitResetTokens': '100', 'response.model': 'gpt-4-0613', 'response.number_of_messages': ... 
at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/chat-completion-summary.test.js:63:14 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:internal/async_local_storage/async_hooks:91:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { ingest_source: 'Node', id: '901c7843ff5086b2b63fc22ab6716f43bffc', span_id: 'c71d827a5e6da671', trace_id: '4033c1d5c2d2a782435c617574ba4ecd', vendor: 'openai', appName: 'New Relic for Node.js tests', request_id: 'req-id', 'response.model': 'gpt-4-0613', 'request.model': 'gpt-4', 'request.max_tokens': 1000000, 'request.temperature': 1, 'response.organization': 'new-relic', 'response.number_of_messages': 2, timestamp: 1770146645890, duration: 0.052411, 'response.choices.finish_reason': 'completed', 'response.usage.prompt_tokens': 10, 'response.usage.completion_tokens': 20, 'response.usage.total_tokens': 30, 'response.headers.llmVersion': '1.0.0', 'response.headers.ratelimitLimitRequests': '100', 'response.headers.ratelimitLimitTokens': '100', 'response.headers.ratelimitResetTokens': '100', 'response.headers.ratelimitRemainingTokens': '10', 'response.headers.ratelimitRemainingRequests': '10' }, expected: { id: 
'901c7843ff5086b2b63fc22ab6716f43bffc', request_id: 'req-id', trace_id: '4033c1d5c2d2a782435c617574ba4ecd', span_id: 'c71d827a5e6da671', 'response.model': 'gpt-4-0613', vendor: 'openai', ingest_source: 'Node', duration: 0.052411, 'request.model':
chat.completions.create should properly create a LlmChatCompletionSummary event: test/unit/llm-events/openai/chat-completion-summary.test.js#L41
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: OpenAiLlmChatCompletionSummary { 'request.max_tokens': '1000000', 'request.model': 'gpt-3.5-turbo-0613', 'request.temperature': 'medium-rare', 'response.choices.finish_reason': 'stop', 'response.headers.llmVersion': '1.0.0', 'response.headers.ratelimitLimitRequests': '100', 'response.headers.ratelimitLimitTokens': '100', 'response.headers.ratelimitRemainingRequests': '10', 'response.headers.ratelimitRemainingTokens': '10', 'response.headers.ratelimitResetTokens': '100', 'response.mo... should loosely deep-equal { 'request.max_tokens': '1000000', 'request.model': 'gpt-3.5-turbo-0613', 'request.temperature': 'medium-rare', 'response.choices.finish_reason': 'stop', 'response.headers.llmVersion': '1.0.0', 'response.headers.ratelimitLimitRequests': '100', 'response.headers.ratelimitLimitTokens': '100', 'response.headers.ratelimitRemainingRequests': '10', 'response.headers.ratelimitRemainingTokens': '10', 'response.headers.ratelimitResetTokens': '100', 'response.model': 'gpt-3.5-turbo-0613', '... 
at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/chat-completion-summary.test.js:41:14 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:internal/async_local_storage/async_hooks:91:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { ingest_source: 'Node', id: '3599ac5976f31916018caf4e2fa940cf2eb1', span_id: 'bf9ccd37f8b9b9f4', trace_id: 'f1e9bb219d79a4f8aba3810c8db8c4b2', vendor: 'openai', appName: 'New Relic for Node.js tests', request_id: 'req-id', 'response.model': 'gpt-3.5-turbo-0613', 'request.model': 'gpt-3.5-turbo-0613', 'request.max_tokens': '1000000', 'request.temperature': 'medium-rare', 'response.organization': 'new-relic', 'response.number_of_messages': 3, timestamp: 1770146645884, duration: 0.097378, 'response.choices.finish_reason': 'stop', 'response.usage.prompt_tokens': 10, 'response.usage.completion_tokens': 20, 'response.usage.total_tokens': 30, 'response.headers.llmVersion': '1.0.0', 'response.headers.ratelimitLimitRequests': '100', 'response.headers.ratelimitLimitTokens': '100', 'response.headers.ratelimitResetTokens': '100', 'response.headers.ratelimitRemainingTokens': '10', 'response.headers.ratelimitRemainingRequests': '10' }, expected: { id: 
'3599ac5976f31916018caf4e2fa940cf2eb1', request_id: 'req-id', trace_id: 'f1e9bb219d79a4f8aba3810c8db8c4b2', span_id: 'bf9ccd37f8b9b9f4', 'response.model': 'gpt-3.5-turbo-0613', vendor: 'openai', ingest_source: 'Node', duration: 0.097
should properly create a LlmEmbedding event: test/unit/llm-events/openai/embedding.test.js#L43
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: OpenAiLlmEmbedding { 'request.model': 'gpt-3.5-turbo-0613', 'response.headers.llmVersion': '1.0.0', 'response.headers.ratelimitLimitRequests': '100', 'response.headers.ratelimitLimitTokens': '100', 'response.headers.ratelimitRemainingRequests': '10', 'response.headers.ratelimitRemainingTokens': '10', 'response.headers.ratelimitResetTokens': '100', 'response.model': 'gpt-3.5-turbo-0613', 'response.organization': 'new-relic', 'response.usage.total_tokens': 30, appName: 'New Relic for ... should loosely deep-equal { 'request.model': 'gpt-3.5-turbo-0613', 'response.headers.llmVersion': '1.0.0', 'response.headers.ratelimitLimitRequests': '100', 'response.headers.ratelimitLimitTokens': '100', 'response.headers.ratelimitRemainingRequests': '10', 'response.headers.ratelimitRemainingTokens': '10', 'response.headers.ratelimitResetTokens': '100', 'response.model': 'gpt-3.5-turbo-0613', 'response.organization': 'new-relic', 'response.usage.total_tokens': 30, duration: 0.095618, id: 'c1108b4eb80d5f9e... 
at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/embedding.test.js:43:14 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:internal/async_local_storage/async_hooks:91:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { ingest_source: 'Node', id: 'c1108b4eb80d5f9e65a11c624dcb505bdcc4', span_id: '4c43e923acdc6d0c', trace_id: '5fbd1fbc7836b9d74129b2ca6c9a6b2b', vendor: 'openai', appName: 'New Relic for Node.js tests', request_id: 'req-id', 'response.model': 'gpt-3.5-turbo-0613', 'request.model': 'gpt-3.5-turbo-0613', 'response.organization': 'new-relic', duration: 0.095618, input: 'This is my test input', 'response.usage.total_tokens': 30, 'response.headers.llmVersion': '1.0.0', 'response.headers.ratelimitLimitRequests': '100', 'response.headers.ratelimitLimitTokens': '100', 'response.headers.ratelimitResetTokens': '100', 'response.headers.ratelimitRemainingTokens': '10', 'response.headers.ratelimitRemainingRequests': '10' }, expected: { id: 'c1108b4eb80d5f9e65a11c624dcb505bdcc4', request_id: 'req-id', trace_id: '5fbd1fbc7836b9d74129b2ca6c9a6b2b', span_id: '4c43e923acdc6d0c', 'response.model': 'gpt-3.5-turbo-0613', vendor: 'openai', ingest_source: 'Node', duration: 0.095618, 
'request.model': 'gpt-3.5-turbo-0613', 'response.organization': 'new-relic', 'response.headers.llmVersion': '1.0.0', 'response.headers.ratelimitLimitRequests': '100', 'response.headers.ratelimitLimitTokens': '100', 'response.headers.rat
should properly create a LlmChatCompletionSummary event: test/unit/llm-events/google-genai/chat-completion-summary.test.js#L40
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: GoogleGenAiLlmChatCompletionSummary { 'request.max_tokens': 1000000, 'request.model': 'gemini-2.0-flash', 'request.temperature': 1, 'response.choices.finish_reason': 'STOP', 'response.model': 'gemini-2.0-flash', 'response.number_of_messages': 2, 'response.usage.completion_tokens': 20, 'response.usage.prompt_tokens': 10, 'response.usage.total_tokens': 30, appName: 'New Relic for Node.js tests', duration: 0.109704, id: 'c027c57914560f0a74b979c92105c653f78c', ingest_source: 'Node',... should loosely deep-equal { 'request.max_tokens': 1000000, 'request.model': 'gemini-2.0-flash', 'request.temperature': 1, 'response.choices.finish_reason': 'STOP', 'response.model': 'gemini-2.0-flash', 'response.number_of_messages': 2, 'response.usage.completion_tokens': 20, 'response.usage.prompt_tokens': 10, 'response.usage.total_tokens': 30, duration: 0.109704, id: 'c027c57914560f0a74b979c92105c653f78c', ingest_source: 'Node', span_id: '9b8d37e073c68079', timestamp: 1770146655047, trace_id: 'fa5d0... 
at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/google-genai/chat-completion-summary.test.js:40:14 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:internal/async_local_storage/async_context_frame:63:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { ingest_source: 'Node', id: 'c027c57914560f0a74b979c92105c653f78c', span_id: '9b8d37e073c68079', trace_id: 'fa5d084f62f2309f7048ede8ab622222', vendor: 'gemini', appName: 'New Relic for Node.js tests', 'response.model': 'gemini-2.0-flash', 'request.model': 'gemini-2.0-flash', 'request.max_tokens': 1000000, 'request.temperature': 1, 'response.choices.finish_reason': 'STOP', 'response.number_of_messages': 2, timestamp: 1770146655047, duration: 0.109704, 'response.usage.prompt_tokens': 10, 'response.usage.completion_tokens': 20, 'response.usage.total_tokens': 30 }, expected: { id: 'c027c57914560f0a74b979c92105c653f78c', trace_id: 'fa5d084f62f2309f7048ede8ab622222', span_id: '9b8d37e073c68079', vendor: 'gemini', ingest_source: 'Node', 'response.model': 'gemini-2.0-flash', duration: 0.109704, 'request.max_tokens': 1000000, 'request.temperature': 1, 'response.number_of_messages': 2, 'response.choices.finish_reason': 'STOP', 
'response.usage.prompt_tokens': 10, 'response.usage.completion_tokens': 20, 'response.usage.total_tokens': 30, 'request.model': 'gemini-2.0-flash', timestamp: 1770146655047 }, operator: 'deepEqual', diff: '
should properly create a LlmEmbedding event: test/unit/llm-events/google-genai/embedding.test.js#L43
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: GoogleGenAiLlmEmbedding { 'request.model': 'gemini-2.0-flash', 'response.model': 'gemini-2.0-flash', appName: 'New Relic for Node.js tests', duration: 0.115044, id: '2951139117c35863c1c5d41256e314da021d', ingest_source: 'Node', input: 'This is my test input', span_id: 'e383417a6588637e', trace_id: '459ad1b03085a261e5c1a12365e2eb70', vendor: 'gemini' } should loosely deep-equal { 'request.model': 'gemini-2.0-flash', 'response.model': 'gemini-2.0-flash', duration: 0.115044, id: '2951139117c35863c1c5d41256e314da021d', ingest_source: 'Node', input: 'This is my test input', span_id: 'e383417a6588637e', trace_id: '459ad1b03085a261e5c1a12365e2eb70', vendor: 'gemini' } at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/google-genai/embedding.test.js:43:14 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:internal/async_local_storage/async_context_frame:63:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { ingest_source: 'Node', id: '2951139117c35863c1c5d41256e314da021d', span_id: 'e383417a6588637e', trace_id: '459ad1b03085a261e5c1a12365e2eb70', vendor: 'gemini', appName: 'New Relic for Node.js tests', 
'response.model': 'gemini-2.0-flash', 'request.model': 'gemini-2.0-flash', duration: 0.115044, input: 'This is my test input' }, expected: { id: '2951139117c35863c1c5d41256e314da021d', trace_id: '459ad1b03085a261e5c1a12365e2eb70', span_id: 'e383417a6588637e', vendor: 'gemini', ingest_source: 'Node', 'response.model': 'gemini-2.0-flash', duration: 0.115044, 'request.model': 'gemini-2.0-flash', input: 'This is my test input' }, operator: 'deepEqual', diff: 'simple' }
should create a LlmChatCompletionMessage from response choices: test/unit/llm-events/openai/chat-completion-message.test.js#L399
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: OpenAiLlmChatCompletionMessage { 'response.model': 'gpt-4-0613', appName: 'New Relic for Node.js tests', completion_id: 'chat-summary-id', content: 'a lot', id: 'resp_id-2', ingest_source: 'Node', is_response: true, request_id: 'req-id', role: 'assistant', sequence: 2, span_id: '153b69b45893c756', trace_id: '5a188fce6916a32211fe23e5d64e31ac', vendor: 'openai' } should loosely deep-equal { 'response.model': 'gpt-4-0613', completion_id: 'chat-summary-id', content: 'a lot', id: 'resp_id-2', ingest_source: 'Node', is_response: true, request_id: 'req-id', role: 'assistant', sequence: 2, span_id: '153b69b45893c756', trace_id: '5a188fce6916a32211fe23e5d64e31ac', vendor: 'openai' } at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/chat-completion-message.test.js:399:16 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:internal/async_local_storage/async_context_frame:63:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { ingest_source: 'Node', id: 'resp_id-2', span_id: '153b69b45893c756', trace_id: '5a188fce6916a32211fe23e5d64e31ac', vendor: 'openai', appName: 'New Relic for Node.js tests', request_id: 
'req-id', 'response.model': 'gpt-4-0613', completion_id: 'chat-summary-id', sequence: 2, is_response: true, role: 'assistant', content: 'a lot' }, expected: { id: 'resp_id-2', request_id: 'req-id', trace_id: '5a188fce6916a32211fe23e5d64e31ac', span_id: '153b69b45893c756', 'response.model': 'gpt-4-0613', vendor: 'openai', ingest_source: 'Node', content: 'a lot', role: 'assistant', sequence: 2, completion_id: 'chat-summary-id', is_response: true }, operator: 'deepEqual', diff: 'simple' }
should create a LlmChatCompletionMessage event: test/unit/llm-events/openai/chat-completion-message.test.js#L366
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: OpenAiLlmChatCompletionMessage { 'response.model': 'gpt-4-0613', appName: 'New Relic for Node.js tests', completion_id: 'chat-summary-id', content: 'What is a woodchuck?', id: 'resp_id-0', ingest_source: 'Node', request_id: 'req-id', role: 'user', sequence: 0, span_id: '39963a07a634cae2', timestamp: 1770146654659, token_count: 0, trace_id: '087ab2ebc87d51d674c14087a055b381', vendor: 'openai' } should loosely deep-equal { 'response.model': 'gpt-4-0613', completion_id: 'chat-summary-id', content: 'What is a woodchuck?', id: 'resp_id-0', ingest_source: 'Node', request_id: 'req-id', role: 'user', sequence: 0, span_id: '39963a07a634cae2', timestamp: 1770146654659, token_count: 0, trace_id: '087ab2ebc87d51d674c14087a055b381', vendor: 'openai' } at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/chat-completion-message.test.js:366:16 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:internal/async_local_storage/async_context_frame:63:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { ingest_source: 'Node', id: 'resp_id-0', span_id: '39963a07a634cae2', trace_id: '087ab2ebc87d51d674c14087a055b381', vendor: 
'openai', appName: 'New Relic for Node.js tests', request_id: 'req-id', 'response.model': 'gpt-4-0613', completion_id: 'chat-summary-id', sequence: 0, role: 'user', timestamp: 1770146654659, content: 'What is a woodchuck?', token_count: 0 }, expected: { id: 'resp_id-0', request_id: 'req-id', trace_id: '087ab2ebc87d51d674c14087a055b381', span_id: '39963a07a634cae2', 'response.model': 'gpt-4-0613', vendor: 'openai', ingest_source: 'Node', content: 'What is a woodchuck?', role: 'user', sequence: 0, completion_id: 'chat-summary-id', token_count: 0, timestamp: 1770146654659 }, operator: 'deepEqual', diff: 'simple' }
should create a LlmChatCompletionMessage from response choices: test/unit/llm-events/openai/chat-completion-message.test.js#L130
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: OpenAiLlmChatCompletionMessage { 'response.model': 'gpt-3.5-turbo-0613', appName: 'New Relic for Node.js tests', completion_id: 'chat-summary-id', content: 'a lot', id: 'res-id-2', ingest_source: 'Node', is_response: true, request_id: 'req-id', role: 'know-it-all', sequence: 2, span_id: '9cf121fd7c65a251', token_count: 0, trace_id: '3c370957499a181567eadc71058fb7e7', vendor: 'openai' } should loosely deep-equal { 'response.model': 'gpt-3.5-turbo-0613', completion_id: 'chat-summary-id', content: 'a lot', id: 'res-id-2', ingest_source: 'Node', is_response: true, request_id: 'req-id', role: 'know-it-all', sequence: 2, span_id: '9cf121fd7c65a251', token_count: 0, trace_id: '3c370957499a181567eadc71058fb7e7', vendor: 'openai' } at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/chat-completion-message.test.js:130:16 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:internal/async_local_storage/async_context_frame:63:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { ingest_source: 'Node', id: 'res-id-2', span_id: '9cf121fd7c65a251', trace_id: '3c370957499a181567eadc71058fb7e7', vendor: 'openai', appName: 
'New Relic for Node.js tests', request_id: 'req-id', 'response.model': 'gpt-3.5-turbo-0613', completion_id: 'chat-summary-id', sequence: 2, is_response: true, role: 'know-it-all', content: 'a lot', token_count: 0 }, expected: { id: 'res-id-2', request_id: 'req-id', trace_id: '3c370957499a181567eadc71058fb7e7', span_id: '9cf121fd7c65a251', 'response.model': 'gpt-3.5-turbo-0613', vendor: 'openai', ingest_source: 'Node', content: 'a lot', role: 'know-it-all', sequence: 2, completion_id: 'chat-summary-id', token_count: 0, is_response: true }, operator: 'deepEqual', diff: 'simple' }
should create a LlmChatCompletionMessage event: test/unit/llm-events/openai/chat-completion-message.test.js#L101
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: OpenAiLlmChatCompletionMessage { 'response.model': 'gpt-3.5-turbo-0613', appName: 'New Relic for Node.js tests', completion_id: 'chat-summary-id', content: 'What is a woodchuck?', id: 'res-id-0', ingest_source: 'Node', request_id: 'req-id', role: 'inquisitive-kid', sequence: 0, span_id: '02a4419a25fe551f', timestamp: 1770146654635, token_count: 0, trace_id: '9417e99bc00db68988d8d15610cdc157', vendor: 'openai' } should loosely deep-equal { 'response.model': 'gpt-3.5-turbo-0613', completion_id: 'chat-summary-id', content: 'What is a woodchuck?', id: 'res-id-0', ingest_source: 'Node', request_id: 'req-id', role: 'inquisitive-kid', sequence: 0, span_id: '02a4419a25fe551f', timestamp: 1770146654635, token_count: 0, trace_id: '9417e99bc00db68988d8d15610cdc157', vendor: 'openai' } at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/chat-completion-message.test.js:101:16 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:internal/async_local_storage/async_context_frame:63:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { ingest_source: 'Node', id: 'res-id-0', span_id: '02a4419a25fe551f', trace_id: 
'9417e99bc00db68988d8d15610cdc157', vendor: 'openai', appName: 'New Relic for Node.js tests', request_id: 'req-id', 'response.model': 'gpt-3.5-turbo-0613', completion_id: 'chat-summary-id', sequence: 0, role: 'inquisitive-kid', timestamp: 1770146654635, content: 'What is a woodchuck?', token_count: 0 }, expected: { id: 'res-id-0', request_id: 'req-id', trace_id: '9417e99bc00db68988d8d15610cdc157', span_id: '02a4419a25fe551f', 'response.model': 'gpt-3.5-turbo-0613', vendor: 'openai', ingest_source: 'Node', content: 'What is a woodchuck?', role: 'inquisitive-kid', sequence: 0, completion_id: 'chat-summary-id', token_count: 0, timestamp: 1770146654635 }, operator: 'deepEqual', diff: 'simple' }
responses.create should properly create a LlmChatCompletionSummary event: test/unit/llm-events/openai/chat-completion-summary.test.js#L63
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: OpenAiLlmChatCompletionSummary { 'request.max_tokens': 1000000, 'request.model': 'gpt-4', 'request.temperature': 1, 'response.choices.finish_reason': 'completed', 'response.headers.llmVersion': '1.0.0', 'response.headers.ratelimitLimitRequests': '100', 'response.headers.ratelimitLimitTokens': '100', 'response.headers.ratelimitRemainingRequests': '10', 'response.headers.ratelimitRemainingTokens': '10', 'response.headers.ratelimitResetTokens': '100', 'response.model': 'gpt-4-0613', ... should loosely deep-equal { 'request.max_tokens': 1000000, 'request.model': 'gpt-4', 'request.temperature': 1, 'response.choices.finish_reason': 'completed', 'response.headers.llmVersion': '1.0.0', 'response.headers.ratelimitLimitRequests': '100', 'response.headers.ratelimitLimitTokens': '100', 'response.headers.ratelimitRemainingRequests': '10', 'response.headers.ratelimitRemainingTokens': '10', 'response.headers.ratelimitResetTokens': '100', 'response.model': 'gpt-4-0613', 'response.number_of_messages': ... 
at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/chat-completion-summary.test.js:63:14 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:internal/async_local_storage/async_context_frame:63:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { ingest_source: 'Node', id: '73a9a21f4828ae7c26645670c14dc9ede499', span_id: '08ae21e106b17047', trace_id: '896a2bc655badc83c706f721114437ab', vendor: 'openai', appName: 'New Relic for Node.js tests', request_id: 'req-id', 'response.model': 'gpt-4-0613', 'request.model': 'gpt-4', 'request.max_tokens': 1000000, 'request.temperature': 1, 'response.organization': 'new-relic', 'response.number_of_messages': 2, timestamp: 1770146654395, duration: 0.092402, 'response.choices.finish_reason': 'completed', 'response.usage.prompt_tokens': 10, 'response.usage.completion_tokens': 20, 'response.usage.total_tokens': 30, 'response.headers.llmVersion': '1.0.0', 'response.headers.ratelimitLimitRequests': '100', 'response.headers.ratelimitLimitTokens': '100', 'response.headers.ratelimitResetTokens': '100', 'response.headers.ratelimitRemainingTokens': '10', 'response.headers.ratelimitRemainingRequests': '10' }, expected: { id: 
'73a9a21f4828ae7c26645670c14dc9ede499', request_id: 'req-id', trace_id: '896a2bc655badc83c706f721114437ab', span_id: '08ae21e106b17047', 'response.model': 'gpt-4-0613', vendor: 'openai', ingest_source: 'Node', duration: 0.092402, 'request.model'
chat.completions.create should properly create a LlmChatCompletionSummary event: test/unit/llm-events/openai/chat-completion-summary.test.js#L41
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: OpenAiLlmChatCompletionSummary { 'request.max_tokens': '1000000', 'request.model': 'gpt-3.5-turbo-0613', 'request.temperature': 'medium-rare', 'response.choices.finish_reason': 'stop', 'response.headers.llmVersion': '1.0.0', 'response.headers.ratelimitLimitRequests': '100', 'response.headers.ratelimitLimitTokens': '100', 'response.headers.ratelimitRemainingRequests': '10', 'response.headers.ratelimitRemainingTokens': '10', 'response.headers.ratelimitResetTokens': '100', 'response.mo... should loosely deep-equal { 'request.max_tokens': '1000000', 'request.model': 'gpt-3.5-turbo-0613', 'request.temperature': 'medium-rare', 'response.choices.finish_reason': 'stop', 'response.headers.llmVersion': '1.0.0', 'response.headers.ratelimitLimitRequests': '100', 'response.headers.ratelimitLimitTokens': '100', 'response.headers.ratelimitRemainingRequests': '10', 'response.headers.ratelimitRemainingTokens': '10', 'response.headers.ratelimitResetTokens': '100', 'response.model': 'gpt-3.5-turbo-0613', '... 
at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/chat-completion-summary.test.js:41:14 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:internal/async_local_storage/async_context_frame:63:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { ingest_source: 'Node', id: '2dcb819a06bf7ad7988b9b8115fc2b5cf3cc', span_id: '68254ec267a62cce', trace_id: 'e58906095d8ad181451631aad8129647', vendor: 'openai', appName: 'New Relic for Node.js tests', request_id: 'req-id', 'response.model': 'gpt-3.5-turbo-0613', 'request.model': 'gpt-3.5-turbo-0613', 'request.max_tokens': '1000000', 'request.temperature': 'medium-rare', 'response.organization': 'new-relic', 'response.number_of_messages': 3, timestamp: 1770146654387, duration: 0.111448, 'response.choices.finish_reason': 'stop', 'response.usage.prompt_tokens': 10, 'response.usage.completion_tokens': 20, 'response.usage.total_tokens': 30, 'response.headers.llmVersion': '1.0.0', 'response.headers.ratelimitLimitRequests': '100', 'response.headers.ratelimitLimitTokens': '100', 'response.headers.ratelimitResetTokens': '100', 'response.headers.ratelimitRemainingTokens': '10', 'response.headers.ratelimitRemainingRequests': '10' }, expected: { id: 
'2dcb819a06bf7ad7988b9b8115fc2b5cf3cc', request_id: 'req-id', trace_id: 'e58906095d8ad181451631aad8129647', span_id: '68254ec267a62cce', 'response.model': 'gpt-3.5-turbo-0613', vendor: 'openai', ingest_source: 'Node', duration:
should properly create a LlmEmbedding event: test/unit/llm-events/openai/embedding.test.js#L43
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: OpenAiLlmEmbedding { 'request.model': 'gpt-3.5-turbo-0613', 'response.headers.llmVersion': '1.0.0', 'response.headers.ratelimitLimitRequests': '100', 'response.headers.ratelimitLimitTokens': '100', 'response.headers.ratelimitRemainingRequests': '10', 'response.headers.ratelimitRemainingTokens': '10', 'response.headers.ratelimitResetTokens': '100', 'response.model': 'gpt-3.5-turbo-0613', 'response.organization': 'new-relic', 'response.usage.total_tokens': 30, appName: 'New Relic for ... should loosely deep-equal { 'request.model': 'gpt-3.5-turbo-0613', 'response.headers.llmVersion': '1.0.0', 'response.headers.ratelimitLimitRequests': '100', 'response.headers.ratelimitLimitTokens': '100', 'response.headers.ratelimitRemainingRequests': '10', 'response.headers.ratelimitRemainingTokens': '10', 'response.headers.ratelimitResetTokens': '100', 'response.model': 'gpt-3.5-turbo-0613', 'response.organization': 'new-relic', 'response.usage.total_tokens': 30, duration: 0.103363, id: '753c90c6ce71b0fd... 
at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/embedding.test.js:43:14 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:internal/async_local_storage/async_context_frame:63:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { ingest_source: 'Node', id: '753c90c6ce71b0fd5fcc8714fd261b961cce', span_id: 'f8152221ff48fddc', trace_id: '6ce5baac6e83e297a1717fea6d9e6e4b', vendor: 'openai', appName: 'New Relic for Node.js tests', request_id: 'req-id', 'response.model': 'gpt-3.5-turbo-0613', 'request.model': 'gpt-3.5-turbo-0613', 'response.organization': 'new-relic', duration: 0.103363, input: 'This is my test input', 'response.usage.total_tokens': 30, 'response.headers.llmVersion': '1.0.0', 'response.headers.ratelimitLimitRequests': '100', 'response.headers.ratelimitLimitTokens': '100', 'response.headers.ratelimitResetTokens': '100', 'response.headers.ratelimitRemainingTokens': '10', 'response.headers.ratelimitRemainingRequests': '10' }, expected: { id: '753c90c6ce71b0fd5fcc8714fd261b961cce', request_id: 'req-id', trace_id: '6ce5baac6e83e297a1717fea6d9e6e4b', span_id: 'f8152221ff48fddc', 'response.model': 'gpt-3.5-turbo-0613', vendor: 'openai', ingest_source: 'Node', duration: 
0.103363, 'request.model': 'gpt-3.5-turbo-0613', 'response.organization': 'new-relic', 'response.headers.llmVersion': '1.0.0', 'response.headers.ratelimitLimitRequests': '100', 'response.headers.ratelimitLimitTokens': '100', 'response.hea
should properly create a LlmChatCompletionSummary event: test/unit/llm-events/google-genai/chat-completion-summary.test.js#L40
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: GoogleGenAiLlmChatCompletionSummary { 'request.max_tokens': 1000000, 'request.model': 'gemini-2.0-flash', 'request.temperature': 1, 'response.choices.finish_reason': 'STOP', 'response.model': 'gemini-2.0-flash', 'response.number_of_messages': 2, 'response.usage.completion_tokens': 20, 'response.usage.prompt_tokens': 10, 'response.usage.total_tokens': 30, appName: 'New Relic for Node.js tests', duration: 0.105766, id: '85de360af843c8632111aef8826aa9da9f6f', ingest_source: 'Node',... should loosely deep-equal { 'request.max_tokens': 1000000, 'request.model': 'gemini-2.0-flash', 'request.temperature': 1, 'response.choices.finish_reason': 'STOP', 'response.model': 'gemini-2.0-flash', 'response.number_of_messages': 2, 'response.usage.completion_tokens': 20, 'response.usage.prompt_tokens': 10, 'response.usage.total_tokens': 30, duration: 0.105766, id: '85de360af843c8632111aef8826aa9da9f6f', ingest_source: 'Node', span_id: '0d7e75dd31489595', timestamp: 1770146665678, trace_id: '5f124... 
at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/google-genai/chat-completion-summary.test.js:40:14 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:async_hooks:346:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { ingest_source: 'Node', id: '85de360af843c8632111aef8826aa9da9f6f', span_id: '0d7e75dd31489595', trace_id: '5f124307357864eed99db7053e1e52f4', vendor: 'gemini', appName: 'New Relic for Node.js tests', 'response.model': 'gemini-2.0-flash', 'request.model': 'gemini-2.0-flash', 'request.max_tokens': 1000000, 'request.temperature': 1, 'response.choices.finish_reason': 'STOP', 'response.number_of_messages': 2, timestamp: 1770146665678, duration: 0.105766, 'response.usage.prompt_tokens': 10, 'response.usage.completion_tokens': 20, 'response.usage.total_tokens': 30 }, expected: { id: '85de360af843c8632111aef8826aa9da9f6f', trace_id: '5f124307357864eed99db7053e1e52f4', span_id: '0d7e75dd31489595', vendor: 'gemini', ingest_source: 'Node', 'response.model': 'gemini-2.0-flash', duration: 0.105766, 'request.max_tokens': 1000000, 'request.temperature': 1, 'response.number_of_messages': 2, 'response.choices.finish_reason': 'STOP', 'response.usage.prompt_tokens': 10, 
'response.usage.completion_tokens': 20, 'response.usage.total_tokens': 30, 'request.model': 'gemini-2.0-flash', timestamp: 1770146665678 }, operator: 'deepEqual' }
should properly create a LlmEmbedding event: test/unit/llm-events/google-genai/embedding.test.js#L43
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: GoogleGenAiLlmEmbedding { 'request.model': 'gemini-2.0-flash', 'response.model': 'gemini-2.0-flash', appName: 'New Relic for Node.js tests', duration: 0.094095, id: '90859017e49b6842b3510d8c154e6cc5913d', ingest_source: 'Node', input: 'This is my test input', span_id: '43bcc278853b61a7', trace_id: '3e78695a61c426ebe509514225f256ba', vendor: 'gemini' } should loosely deep-equal { 'request.model': 'gemini-2.0-flash', 'response.model': 'gemini-2.0-flash', duration: 0.094095, id: '90859017e49b6842b3510d8c154e6cc5913d', ingest_source: 'Node', input: 'This is my test input', span_id: '43bcc278853b61a7', trace_id: '3e78695a61c426ebe509514225f256ba', vendor: 'gemini' } at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/google-genai/embedding.test.js:43:14 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:async_hooks:346:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { ingest_source: 'Node', id: '90859017e49b6842b3510d8c154e6cc5913d', span_id: '43bcc278853b61a7', trace_id: '3e78695a61c426ebe509514225f256ba', vendor: 'gemini', appName: 'New Relic for Node.js tests', 'response.model': 'gemini-2.0-flash', 
'request.model': 'gemini-2.0-flash', duration: 0.094095, input: 'This is my test input' }, expected: { id: '90859017e49b6842b3510d8c154e6cc5913d', trace_id: '3e78695a61c426ebe509514225f256ba', span_id: '43bcc278853b61a7', vendor: 'gemini', ingest_source: 'Node', 'response.model': 'gemini-2.0-flash', duration: 0.094095, 'request.model': 'gemini-2.0-flash', input: 'This is my test input' }, operator: 'deepEqual' }
should create a LlmChatCompletionMessage from response choices: test/unit/llm-events/openai/chat-completion-message.test.js#L399
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: OpenAiLlmChatCompletionMessage { 'response.model': 'gpt-4-0613', appName: 'New Relic for Node.js tests', completion_id: 'chat-summary-id', content: 'a lot', id: 'resp_id-2', ingest_source: 'Node', is_response: true, request_id: 'req-id', role: 'assistant', sequence: 2, span_id: 'a9bac4eba26ba796', trace_id: '271b4ea750b327805bd8910adb15a864', vendor: 'openai' } should loosely deep-equal { 'response.model': 'gpt-4-0613', completion_id: 'chat-summary-id', content: 'a lot', id: 'resp_id-2', ingest_source: 'Node', is_response: true, request_id: 'req-id', role: 'assistant', sequence: 2, span_id: 'a9bac4eba26ba796', trace_id: '271b4ea750b327805bd8910adb15a864', vendor: 'openai' } at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/chat-completion-message.test.js:399:16 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:async_hooks:346:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { ingest_source: 'Node', id: 'resp_id-2', span_id: 'a9bac4eba26ba796', trace_id: '271b4ea750b327805bd8910adb15a864', vendor: 'openai', appName: 'New Relic for Node.js tests', request_id: 'req-id', 'response.model': 'gpt-4-0613', 
completion_id: 'chat-summary-id', sequence: 2, is_response: true, role: 'assistant', content: 'a lot' }, expected: { id: 'resp_id-2', request_id: 'req-id', trace_id: '271b4ea750b327805bd8910adb15a864', span_id: 'a9bac4eba26ba796', 'response.model': 'gpt-4-0613', vendor: 'openai', ingest_source: 'Node', content: 'a lot', role: 'assistant', sequence: 2, completion_id: 'chat-summary-id', is_response: true }, operator: 'deepEqual' }
should create a LlmChatCompletionMessage event: test/unit/llm-events/openai/chat-completion-message.test.js#L366
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: OpenAiLlmChatCompletionMessage { 'response.model': 'gpt-4-0613', appName: 'New Relic for Node.js tests', completion_id: 'chat-summary-id', content: 'What is a woodchuck?', id: 'resp_id-0', ingest_source: 'Node', request_id: 'req-id', role: 'user', sequence: 0, span_id: '70385ee2309ebead', timestamp: 1770146665162, token_count: 0, trace_id: 'bb13a4e20e5a43b7a71d3d863a17cdac', vendor: 'openai' } should loosely deep-equal { 'response.model': 'gpt-4-0613', completion_id: 'chat-summary-id', content: 'What is a woodchuck?', id: 'resp_id-0', ingest_source: 'Node', request_id: 'req-id', role: 'user', sequence: 0, span_id: '70385ee2309ebead', timestamp: 1770146665162, token_count: 0, trace_id: 'bb13a4e20e5a43b7a71d3d863a17cdac', vendor: 'openai' } at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/chat-completion-message.test.js:366:16 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:async_hooks:346:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { ingest_source: 'Node', id: 'resp_id-0', span_id: '70385ee2309ebead', trace_id: 'bb13a4e20e5a43b7a71d3d863a17cdac', vendor: 'openai', appName: 'New Relic for 
Node.js tests', request_id: 'req-id', 'response.model': 'gpt-4-0613', completion_id: 'chat-summary-id', sequence: 0, role: 'user', timestamp: 1770146665162, content: 'What is a woodchuck?', token_count: 0 }, expected: { id: 'resp_id-0', request_id: 'req-id', trace_id: 'bb13a4e20e5a43b7a71d3d863a17cdac', span_id: '70385ee2309ebead', 'response.model': 'gpt-4-0613', vendor: 'openai', ingest_source: 'Node', content: 'What is a woodchuck?', role: 'user', sequence: 0, completion_id: 'chat-summary-id', token_count: 0, timestamp: 1770146665162 }, operator: 'deepEqual' }
should create a LlmChatCompletionMessage from response choices: test/unit/llm-events/openai/chat-completion-message.test.js#L130
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: OpenAiLlmChatCompletionMessage { 'response.model': 'gpt-3.5-turbo-0613', appName: 'New Relic for Node.js tests', completion_id: 'chat-summary-id', content: 'a lot', id: 'res-id-2', ingest_source: 'Node', is_response: true, request_id: 'req-id', role: 'know-it-all', sequence: 2, span_id: 'f56ee835912eff86', token_count: 0, trace_id: '4bf1ce6f8db30bffb16ecdc999d9a0fb', vendor: 'openai' } should loosely deep-equal { 'response.model': 'gpt-3.5-turbo-0613', completion_id: 'chat-summary-id', content: 'a lot', id: 'res-id-2', ingest_source: 'Node', is_response: true, request_id: 'req-id', role: 'know-it-all', sequence: 2, span_id: 'f56ee835912eff86', token_count: 0, trace_id: '4bf1ce6f8db30bffb16ecdc999d9a0fb', vendor: 'openai' } at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/chat-completion-message.test.js:130:16 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:async_hooks:346:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { ingest_source: 'Node', id: 'res-id-2', span_id: 'f56ee835912eff86', trace_id: '4bf1ce6f8db30bffb16ecdc999d9a0fb', vendor: 'openai', appName: 'New Relic for Node.js tests', 
request_id: 'req-id', 'response.model': 'gpt-3.5-turbo-0613', completion_id: 'chat-summary-id', sequence: 2, is_response: true, role: 'know-it-all', content: 'a lot', token_count: 0 }, expected: { id: 'res-id-2', request_id: 'req-id', trace_id: '4bf1ce6f8db30bffb16ecdc999d9a0fb', span_id: 'f56ee835912eff86', 'response.model': 'gpt-3.5-turbo-0613', vendor: 'openai', ingest_source: 'Node', content: 'a lot', role: 'know-it-all', sequence: 2, completion_id: 'chat-summary-id', token_count: 0, is_response: true }, operator: 'deepEqual' }
should create a LlmChatCompletionMessage event: test/unit/llm-events/openai/chat-completion-message.test.js#L101
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: OpenAiLlmChatCompletionMessage { 'response.model': 'gpt-3.5-turbo-0613', appName: 'New Relic for Node.js tests', completion_id: 'chat-summary-id', content: 'What is a woodchuck?', id: 'res-id-0', ingest_source: 'Node', request_id: 'req-id', role: 'inquisitive-kid', sequence: 0, span_id: 'd0ede5ad9c2c17b9', timestamp: 1770146665131, token_count: 0, trace_id: 'c6cdb733c7b27827727a9b2d434c102b', vendor: 'openai' } should loosely deep-equal { 'response.model': 'gpt-3.5-turbo-0613', completion_id: 'chat-summary-id', content: 'What is a woodchuck?', id: 'res-id-0', ingest_source: 'Node', request_id: 'req-id', role: 'inquisitive-kid', sequence: 0, span_id: 'd0ede5ad9c2c17b9', timestamp: 1770146665131, token_count: 0, trace_id: 'c6cdb733c7b27827727a9b2d434c102b', vendor: 'openai' } at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/chat-completion-message.test.js:101:16 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:async_hooks:346:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { ingest_source: 'Node', id: 'res-id-0', span_id: 'd0ede5ad9c2c17b9', trace_id: 'c6cdb733c7b27827727a9b2d434c102b', vendor: 
'openai', appName: 'New Relic for Node.js tests', request_id: 'req-id', 'response.model': 'gpt-3.5-turbo-0613', completion_id: 'chat-summary-id', sequence: 0, role: 'inquisitive-kid', timestamp: 1770146665131, content: 'What is a woodchuck?', token_count: 0 }, expected: { id: 'res-id-0', request_id: 'req-id', trace_id: 'c6cdb733c7b27827727a9b2d434c102b', span_id: 'd0ede5ad9c2c17b9', 'response.model': 'gpt-3.5-turbo-0613', vendor: 'openai', ingest_source: 'Node', content: 'What is a woodchuck?', role: 'inquisitive-kid', sequence: 0, completion_id: 'chat-summary-id', token_count: 0, timestamp: 1770146665131 }, operator: 'deepEqual' }
responses.create should properly create a LlmChatCompletionSummary event: test/unit/llm-events/openai/chat-completion-summary.test.js#L63
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: OpenAiLlmChatCompletionSummary { 'request.max_tokens': 1000000, 'request.model': 'gpt-4', 'request.temperature': 1, 'response.choices.finish_reason': 'completed', 'response.headers.llmVersion': '1.0.0', 'response.headers.ratelimitLimitRequests': '100', 'response.headers.ratelimitLimitTokens': '100', 'response.headers.ratelimitRemainingRequests': '10', 'response.headers.ratelimitRemainingTokens': '10', 'response.headers.ratelimitResetTokens': '100', 'response.model': 'gpt-4-0613', ... should loosely deep-equal { 'request.max_tokens': 1000000, 'request.model': 'gpt-4', 'request.temperature': 1, 'response.choices.finish_reason': 'completed', 'response.headers.llmVersion': '1.0.0', 'response.headers.ratelimitLimitRequests': '100', 'response.headers.ratelimitLimitTokens': '100', 'response.headers.ratelimitRemainingRequests': '10', 'response.headers.ratelimitRemainingTokens': '10', 'response.headers.ratelimitResetTokens': '100', 'response.model': 'gpt-4-0613', 'response.number_of_messages': ... 
at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/chat-completion-summary.test.js:63:14 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:async_hooks:346:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { ingest_source: 'Node', id: 'eadb3c2bfe465decaba1be12eb792f0bc8a7', span_id: '7a7079664db61bb8', trace_id: 'f365f30f69f08dc7be21211746e7c873', vendor: 'openai', appName: 'New Relic for Node.js tests', request_id: 'req-id', 'response.model': 'gpt-4-0613', 'request.model': 'gpt-4', 'request.max_tokens': 1000000, 'request.temperature': 1, 'response.organization': 'new-relic', 'response.number_of_messages': 2, timestamp: 1770146665001, duration: 0.051716, 'response.choices.finish_reason': 'completed', 'response.usage.prompt_tokens': 10, 'response.usage.completion_tokens': 20, 'response.usage.total_tokens': 30, 'response.headers.llmVersion': '1.0.0', 'response.headers.ratelimitLimitRequests': '100', 'response.headers.ratelimitLimitTokens': '100', 'response.headers.ratelimitResetTokens': '100', 'response.headers.ratelimitRemainingTokens': '10', 'response.headers.ratelimitRemainingRequests': '10' }, expected: { id: 'eadb3c2bfe465decaba1be12eb792f0bc8a7', request_id: 'req-id', 
trace_id: 'f365f30f69f08dc7be21211746e7c873', span_id: '7a7079664db61bb8', 'response.model': 'gpt-4-0613', vendor: 'openai', ingest_source: 'Node', duration: 0.051716, 'request.model': 'gpt-4', 'resp
chat.completions.create should properly create a LlmChatCompletionSummary event: test/unit/llm-events/openai/chat-completion-summary.test.js#L41
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: OpenAiLlmChatCompletionSummary { 'request.max_tokens': '1000000', 'request.model': 'gpt-3.5-turbo-0613', 'request.temperature': 'medium-rare', 'response.choices.finish_reason': 'stop', 'response.headers.llmVersion': '1.0.0', 'response.headers.ratelimitLimitRequests': '100', 'response.headers.ratelimitLimitTokens': '100', 'response.headers.ratelimitRemainingRequests': '10', 'response.headers.ratelimitRemainingTokens': '10', 'response.headers.ratelimitResetTokens': '100', 'response.mo... should loosely deep-equal { 'request.max_tokens': '1000000', 'request.model': 'gpt-3.5-turbo-0613', 'request.temperature': 'medium-rare', 'response.choices.finish_reason': 'stop', 'response.headers.llmVersion': '1.0.0', 'response.headers.ratelimitLimitRequests': '100', 'response.headers.ratelimitLimitTokens': '100', 'response.headers.ratelimitRemainingRequests': '10', 'response.headers.ratelimitRemainingTokens': '10', 'response.headers.ratelimitResetTokens': '100', 'response.model': 'gpt-3.5-turbo-0613', '... 
at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/chat-completion-summary.test.js:41:14 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:async_hooks:346:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { ingest_source: 'Node', id: '4581a4f5d8b5e2d92568ed3e3fd45ec7ee49', span_id: 'b8faac9682ee28c0', trace_id: '8af8fd8c99d21fcf3aa6825efa390c6c', vendor: 'openai', appName: 'New Relic for Node.js tests', request_id: 'req-id', 'response.model': 'gpt-3.5-turbo-0613', 'request.model': 'gpt-3.5-turbo-0613', 'request.max_tokens': '1000000', 'request.temperature': 'medium-rare', 'response.organization': 'new-relic', 'response.number_of_messages': 3, timestamp: 1770146664991, duration: 0.138829, 'response.choices.finish_reason': 'stop', 'response.usage.prompt_tokens': 10, 'response.usage.completion_tokens': 20, 'response.usage.total_tokens': 30, 'response.headers.llmVersion': '1.0.0', 'response.headers.ratelimitLimitRequests': '100', 'response.headers.ratelimitLimitTokens': '100', 'response.headers.ratelimitResetTokens': '100', 'response.headers.ratelimitRemainingTokens': '10', 'response.headers.ratelimitRemainingRequests': '10' }, expected: { id: 
'4581a4f5d8b5e2d92568ed3e3fd45ec7ee49', request_id: 'req-id', trace_id: '8af8fd8c99d21fcf3aa6825efa390c6c', span_id: 'b8faac9682ee28c0', 'response.model': 'gpt-3.5-turbo-0613', vendor: 'openai', ingest_source: 'Node', duration: 0.138829, 'request.mode
should properly create a LlmEmbedding event: test/unit/llm-events/openai/embedding.test.js#L43
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal: OpenAiLlmEmbedding { 'request.model': 'gpt-3.5-turbo-0613', 'response.headers.llmVersion': '1.0.0', 'response.headers.ratelimitLimitRequests': '100', 'response.headers.ratelimitLimitTokens': '100', 'response.headers.ratelimitRemainingRequests': '10', 'response.headers.ratelimitRemainingTokens': '10', 'response.headers.ratelimitResetTokens': '100', 'response.model': 'gpt-3.5-turbo-0613', 'response.organization': 'new-relic', 'response.usage.total_tokens': 30, appName: 'New Relic for ... should loosely deep-equal { 'request.model': 'gpt-3.5-turbo-0613', 'response.headers.llmVersion': '1.0.0', 'response.headers.ratelimitLimitRequests': '100', 'response.headers.ratelimitLimitTokens': '100', 'response.headers.ratelimitRemainingRequests': '10', 'response.headers.ratelimitRemainingTokens': '10', 'response.headers.ratelimitResetTokens': '100', 'response.model': 'gpt-3.5-turbo-0613', 'response.organization': 'new-relic', 'response.usage.total_tokens': 30, duration: 0.170297, id: 'c6c56a9666007111... 
at /home/runner/work/node-newrelic/node-newrelic/test/unit/llm-events/openai/embedding.test.js:43:14 at runInContextCb (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1251:20) at AsyncLocalStorage.run (node:async_hooks:346:14) at AsyncLocalContextManager.runInContext (/home/runner/work/node-newrelic/node-newrelic/lib/context-manager/async-local-context-manager.js:60:38) at wrapped (/home/runner/work/node-newrelic/node-newrelic/lib/transaction/tracer/index.js:273:37) at TransactionShim.applyContext (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:1254:66) at _applyRecorderSegment (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:819:16) at _doRecord (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:775:13) at wrapper (/home/runner/work/node-newrelic/node-newrelic/lib/shim/shim.js:712:22) at API.startSegment (/home/runner/work/node-newrelic/node-newrelic/api.js:914:10) { generatedMessage: true, code: 'ERR_ASSERTION', actual: { ingest_source: 'Node', id: 'c6c56a966600711171b3ea3f3b305a339225', span_id: 'd0f45dc1957699b4', trace_id: '06bef0f5ac9d0ce6592c393ff5387da2', vendor: 'openai', appName: 'New Relic for Node.js tests', request_id: 'req-id', 'response.model': 'gpt-3.5-turbo-0613', 'request.model': 'gpt-3.5-turbo-0613', 'response.organization': 'new-relic', duration: 0.170297, input: 'This is my test input', 'response.usage.total_tokens': 30, 'response.headers.llmVersion': '1.0.0', 'response.headers.ratelimitLimitRequests': '100', 'response.headers.ratelimitLimitTokens': '100', 'response.headers.ratelimitResetTokens': '100', 'response.headers.ratelimitRemainingTokens': '10', 'response.headers.ratelimitRemainingRequests': '10' }, expected: { id: 'c6c56a966600711171b3ea3f3b305a339225', request_id: 'req-id', trace_id: '06bef0f5ac9d0ce6592c393ff5387da2', span_id: 'd0f45dc1957699b4', 'response.model': 'gpt-3.5-turbo-0613', vendor: 'openai', ingest_source: 'Node', duration: 0.170297, 'request.model': 
'gpt-3.5-turbo-0613', 'response.organization': 'new-relic', 'response.headers.llmVersion': '1.0.0', 'response.headers.ratelimitLimitRequests': '100', 'response.headers.ratelimitLimitTokens': '100', 'response.headers.ratelimitResetTokens':
versioned-internal (24.x)
Process completed with exit code 4.
versioned-internal (22.x)
Process completed with exit code 4.
versioned-internal (20.x)
Process completed with exit code 4.
all-clear
Process completed with exit code 1.
ci (lts/*)
Total Tests: 36 Suites 📂: 0 Passed ✅: 36 Failed ❌: 0 Canceled 🚫: 0 Skipped ⏭️: 0 Todo 📝: 0 Duration 🕐: 613.215ms
integration (24.x)
Total Tests: 1 Suites 📂: 0 Passed ✅: 1 Failed ❌: 0 Canceled 🚫: 0 Skipped ⏭️: 0 Todo 📝: 0 Duration 🕐: 518.523ms
integration (24.x)
Total Tests: 791 Suites 📂: 0 Passed ✅: 787 Failed ❌: 0 Canceled 🚫: 0 Skipped ⏭️: 4 Todo 📝: 0 Duration 🕐: 50490.141ms
unit (22.x)
Total Tests: 5036 Suites 📂: 13 Passed ✅: 5001 Failed ❌: 14 Canceled 🚫: 0 Skipped ⏭️: 2 Todo 📝: 19 Duration 🕐: 66020.724ms
unit (24.x)
Total Tests: 5036 Suites 📂: 13 Passed ✅: 5001 Failed ❌: 14 Canceled 🚫: 0 Skipped ⏭️: 2 Todo 📝: 19 Duration 🕐: 68257.710ms
unit (20.x)
Total Tests: 5033 Suites 📂: 13 Passed ✅: 4997 Failed ❌: 14 Canceled 🚫: 0 Skipped ⏭️: 3 Todo 📝: 19 Duration 🕐: 79589.308ms
integration (22.x)
Total Tests: 1 Suites 📂: 0 Passed ✅: 1 Failed ❌: 0 Canceled 🚫: 0 Skipped ⏭️: 0 Todo 📝: 0 Duration 🕐: 538.844ms
integration (22.x)
Total Tests: 791 Suites 📂: 0 Passed ✅: 787 Failed ❌: 0 Canceled 🚫: 0 Skipped ⏭️: 4 Todo 📝: 0 Duration 🕐: 125610.307ms
integration (20.x)
Total Tests: 1 Suites 📂: 0 Passed ✅: 1 Failed ❌: 0 Canceled 🚫: 0 Skipped ⏭️: 0 Todo 📝: 0 Duration 🕐: 554.342ms
integration (20.x)
Total Tests: 791 Suites 📂: 0 Passed ✅: 786 Failed ❌: 0 Canceled 🚫: 0 Skipped ⏭️: 5 Todo 📝: 0 Duration 🕐: 117991.675ms

Artifacts

Produced during runtime
Name                        Size     Digest
integration-tests-cjs-20.x  153 KB   sha256:b1c6b92f6ddc0a02865360e7a12c9c62d5b3a31d8363a0eef46f357cb747a72c
integration-tests-cjs-22.x  153 KB   sha256:db3d670e3e294f91793a37f9fca7b4d9edb28298a802c5557767123e53fdf079
integration-tests-cjs-24.x  153 KB   sha256:5133f26b5560cc4d9c11d4ea65da55b573cd0f8a94ca009eef85cffcfe8b7513
integration-tests-esm-20.x  89 KB    sha256:2306ad1e456415d6bfa79ed08d27161e0d461e207070814e7a97934b1aee8679
integration-tests-esm-22.x  89 KB    sha256:7cf074f176f060c8fc7dd6f1f742ffcbbba7a06acafd866d31071f75e2087c84
integration-tests-esm-24.x  89.9 KB  sha256:cdffa6382c8e79b66122edbaa2307a7757f4ac6facfaa302ac5dd12edaf7aa3a
logs-20.x.tgz               539 KB   sha256:59ff87eafb9abb5f5de77491e7835db897a888400d64fc14925828cacce54bc6
logs-22.x.tgz               527 KB   sha256:ce6ce1ee1fdafb838a0c1ebdf3be5bd9cfe0b9861476c20749e3378f113ee9d8
logs-24.x.tgz               508 KB   sha256:11c328ad2848aa6ac43f85d73b2bc794724bc84db1e4b1a0ced43ecf86a4204c