@@ -303,10 +303,14 @@ export default class CallDiagnosticLatencies extends WebexPlugin {
* @returns - latency
*/
public getStayLobbyTime() {
return this.getDiffBetweenTimestamps(
'client.locus.join.response',
'internal.host.meeting.participant.admitted'
);
    // stayLobbyTime is sent with the client.lobby.exited event, so we may not have the timestamp yet
if (!this.latencyTimestamps.get('client.lobby.exited')) {
this.saveTimestamp({
Collaborator:

It doesn't feel right to be saving a timestamp in a getter for latency. In general, getters shouldn't change the state of a system, right? Could we maybe have a separate setter for the 'client.lobby.exited' timestamp? Also, I'm a bit confused why we need this in the first place: aren't the timestamps for all events saved already via the calls to saveTimestamp() from submitClientEvent()?
Contributor:

Looks like you're right, we do saveTimestamp before prepareDiagnosticMetricItem gets called. I've also tested with logs to confirm this.
key: 'client.lobby.exited',
});
}

return this.getDiffBetweenTimestamps('client.locus.join.response', 'client.lobby.exited');
}
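The side-effect-free getter the Collaborator suggests can be illustrated with a minimal, self-contained sketch. The class and method names here (`LatencyTracker`, `saveTimestamp`, `getDiffBetweenTimestamps`) loosely mirror the plugin but are hypothetical simplifications, not the SDK implementation:

```typescript
// Sketch of a getter that never mutates state: timestamps are recorded only
// through an explicit setter (e.g. from the event-submission path), and the
// getter returns undefined when a timestamp was never recorded.
class LatencyTracker {
  private latencyTimestamps = new Map<string, number>();

  // Explicit setter; called once per event, never from inside a getter.
  saveTimestamp({key, value = Date.now()}: {key: string; value?: number}): void {
    this.latencyTimestamps.set(key, value);
  }

  private getDiffBetweenTimestamps(a: string, b: string): number | undefined {
    const start = this.latencyTimestamps.get(a);
    const end = this.latencyTimestamps.get(b);

    return start !== undefined && end !== undefined ? end - start : undefined;
  }

  // Pure getter: no fallback write if 'client.lobby.exited' is missing.
  getStayLobbyTime(): number | undefined {
    return this.getDiffBetweenTimestamps('client.locus.join.response', 'client.lobby.exited');
  }
}

const tracker = new LatencyTracker();
tracker.saveTimestamp({key: 'client.locus.join.response', value: 1000});
tracker.saveTimestamp({key: 'client.lobby.exited', value: 1500});
console.log(tracker.getStayLobbyTime()); // 500
```

Returning `undefined` for a missing timestamp keeps the getter idempotent, which matters for the retry and keepalive paths discussed below in this review.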

/**
@@ -361,14 +361,18 @@ export const prepareDiagnosticMetricItem = (webex: any, item: any) => {
joinTimes.totalMediaJMT = cdl.getTotalMediaJMT();
joinTimes.interstitialToMediaOKJMT = cdl.getInterstitialToMediaOKJMT();
joinTimes.callInitMediaEngineReady = cdl.getCallInitMediaEngineReady();
joinTimes.stayLobbyTime = cdl.getStayLobbyTime();
joinTimes.totalMediaJMTWithUserDelay = cdl.getTotalMediaJMTWithUserDelay();
joinTimes.totalJMTWithUserDelay = cdl.getTotalJMTWithUserDelay();
break;

case 'client.media.tx.start':
audioSetupDelay.joinRespTxStart = cdl.getAudioJoinRespTxStart();
videoSetupDelay.joinRespTxStart = cdl.getVideoJoinRespTxStart();
break;

case 'client.lobby.exited':
joinTimes.stayLobbyTime = cdl.getStayLobbyTime();
Comment on lines +373 to +374

P2: Save the lobby-exit timestamp before populating stayLobbyTime.

When client.lobby.exited is sent through the public fetch/keepalive path (buildClientEventFetchRequestOptions() / setMetricTimingsAndFetch()), this new case computes cdl.getStayLobbyTime() before that event's timestamp has ever been recorded. submitClientEvent() records callDiagnosticLatencies.saveTimestamp({key: name}) in new-metrics.ts, but buildClientEventFetchRequestOptions() skips that and calls prepareDiagnosticMetricItem() directly, so the generated payload for client.lobby.exited ends up with joinTimes.stayLobbyTime unset. That means the new dedicated lobby-exit metric is silently missing in the unload-safe submission flow this SDK already exposes.

Contributor:

This appears to not be an actual issue. The buildClientEventFetchRequestOptions path gets called when the client is about to be closed; if the user did exit the lobby before that, we'll have the timestamp, and if not, it's OK and expected that stayLobbyTime will be undefined.
Comment on lines +373 to +374

P2: Preserve the original stayLobbyTime on retried lobby-exit events.

If a client.lobby.exited batch is retried after a NetworkOrCORSError, MetricsBatcher.rerequest() re-runs prepareItem() on the same event. This branch recomputes stayLobbyTime from the shared callDiagnosticLatencies map, which Meeting.leave() clears and the next meeting repopulates. In the sequence "meeting A lobby exit -> network failure -> join meeting B -> retry", merge() here will overwrite meeting A's previously prepared stayLobbyTime with meeting B's value, so the resent lobby-exit metric reports the wrong lobby duration.

break;
}

if (!isEmpty(joinTimes)) {
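One way to address the retry concern raised above is to compute stayLobbyTime only the first time an item is prepared, so that a rerequest() after a network error keeps the value captured when the event was originally queued. The sketch below is illustrative and self-contained; `MetricItem` and `prepareLobbyExit` are hypothetical simplifications of the real batcher types:

```typescript
// Sketch: fill stayLobbyTime only when it has not been prepared before, so a
// retried item is not overwritten with values from a later meeting.
type MetricItem = {
  eventPayload: {event: {name: string; joinTimes?: {stayLobbyTime?: number}}};
};

function prepareLobbyExit(
  item: MetricItem,
  computeStayLobbyTime: () => number | undefined
): MetricItem {
  const event = item.eventPayload.event;
  event.joinTimes = event.joinTimes ?? {};

  // A retried item keeps the duration that was valid when it was first queued;
  // only a fresh item gets a newly computed value.
  if (event.joinTimes.stayLobbyTime === undefined) {
    event.joinTimes.stayLobbyTime = computeStayLobbyTime();
  }

  return item;
}

const item: MetricItem = {eventPayload: {event: {name: 'client.lobby.exited'}}};
prepareLobbyExit(item, () => 1200); // first prepare: meeting A's lobby duration
prepareLobbyExit(item, () => 9999); // retry after joining meeting B: unchanged
console.log(item.eventPayload.event.joinTimes?.stayLobbyTime); // 1200
```

This keeps prepareItem() idempotent for retried batches without touching the shared latency map.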
@@ -142,9 +142,7 @@ describe('plugin-metrics', () => {
webex.internal.newMetrics.callDiagnosticLatencies.getDiffBetweenTimestamps = sinon
.stub()
.returns(10);
webex.internal.newMetrics.callDiagnosticLatencies.getU2CTime = sinon
.stub()
.returns(20);
webex.internal.newMetrics.callDiagnosticLatencies.getU2CTime = sinon.stub().returns(20);
webex.internal.newMetrics.callDiagnosticLatencies.getReachabilityClustersReqResp = sinon
.stub()
.returns(10);
@@ -165,7 +163,7 @@
registerWDMDeviceJMT: 10,
showInterstitialTime: 10,
getU2CTime: 20,
getReachabilityClustersReqResp: 10
getReachabilityClustersReqResp: 10,
},
});
assert.lengthOf(
@@ -189,9 +187,8 @@
webex.internal.newMetrics.callDiagnosticLatencies.getDownloadTimeJMT = sinon
.stub()
.returns(100);
webex.internal.newMetrics.callDiagnosticLatencies.getClickToInterstitialWithUserDelay = sinon
.stub()
.returns(43);
webex.internal.newMetrics.callDiagnosticLatencies.getClickToInterstitialWithUserDelay =
sinon.stub().returns(43);
webex.internal.newMetrics.callDiagnosticLatencies.getTotalJMTWithUserDelay = sinon
.stub()
.returns(64);
@@ -346,7 +343,7 @@ describe('plugin-metrics', () => {
webex.internal.newMetrics.callDiagnosticLatencies.getInterstitialToJoinOK = sinon
.stub()
.returns(7);
webex.internal.newMetrics.callDiagnosticLatencies.getStayLobbyTime = sinon
webex.internal.newMetrics.callDiagnosticLatencies.getStayLobbyTime = sinon
.stub()
.returns(1);
webex.internal.newMetrics.callDiagnosticLatencies.getTotalMediaJMTWithUserDelay = sinon
@@ -372,7 +369,6 @@
totalMediaJMT: 61,
interstitialToMediaOKJMT: 22,
callInitMediaEngineReady: 10,
stayLobbyTime: 1,
totalMediaJMTWithUserDelay: 43,
totalJMTWithUserDelay: 64,
},
Expand All @@ -382,6 +378,34 @@ describe('plugin-metrics', () => {
0
);
});

it('appends the correct join times to the request for client.lobby.exited', async () => {
webex.internal.newMetrics.callDiagnosticLatencies.getStayLobbyTime = sinon
.stub()
.returns(10);

const promise = webex.internal.newMetrics.callDiagnosticMetrics.submitToCallDiagnostics(
//@ts-ignore
{event: {name: 'client.lobby.exited'}}
);
await flushPromises();
clock.tick(config.metrics.batcherWait);

await promise;

//@ts-ignore
assert.calledOnce(webex.request);
assert.deepEqual(webex.request.getCalls()[0].args[0].body.metrics[0].eventPayload.event, {
name: 'client.lobby.exited',
joinTimes: {
stayLobbyTime: 10,
},
});
assert.lengthOf(
webex.internal.newMetrics.callDiagnosticMetrics.callDiagnosticEventsBatcher.queue,
0
);
});
});

describe('when the request fails', () => {