
Add request Content-Length to PerformanceResourceTiming entries #1777

Open
@kyledevans

Description

What problem are you trying to solve?

We would like the ability to display an upload throughput indicator in our UI (e.g. 13.1 Mbps). We are using the @azure/storage-blob SDK to upload files directly to an Azure Storage Blob account. Throughput calculations require: 1) start time, 2) duration, 3) payload size in bytes. The SDK internally splits files into chunks and uploads each chunk with a separate fetch call, which makes getting the raw data for the calculation very difficult.
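For concreteness, the arithmetic is simple once those three inputs are available; a minimal sketch (the helper name `throughputMbps` is mine, not part of any API):

```javascript
// Compute upload throughput in megabits per second from the two inputs
// that are hard to obtain today: payload size and transfer duration.
// (Start time matters for correlating entries, not for the division.)
function throughputMbps(payloadBytes, durationMs) {
  if (durationMs <= 0) throw new RangeError("duration must be positive");
  const bits = payloadBytes * 8;
  const seconds = durationMs / 1000;
  return bits / seconds / 1e6;
}

// A 16 MiB chunk uploaded in 10.24 s works out to about 13.1 Mbps:
console.log(throughputMbps(16 * 1024 * 1024, 10240).toFixed(1)); // "13.1"
```

The hard part, as described below, is obtaining `payloadBytes` and `durationMs` for a fetch that happens inside a third-party SDK.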

The PerformanceResourceTiming API has multiple properties related to the response that can be used to measure download performance, but there is no standard place that records the size of the request payload, so there is no equivalent way to measure uploads.

It seems clear (perhaps only to me) that request payload size is a key metric that belongs at the standards level. Third-party libraries such as the Azure SDK linked above should certainly try to expose this information, but having this metric at the standards level lets developers fill in the functionality gaps of third-party libraries. It also isn't a stretch to imagine how observability platforms would benefit from being able to record and visualize bottlenecks and issues for file uploads.

What solutions exist today?

Current solutions for calculating throughput require measuring the request payload size when initiating the fetch call and then finding the corresponding PerformanceResourceTiming entry. This can be difficult to achieve in practice because it requires directly measuring the size of the payload stream (or serialized JSON, or whatever) and then looking up the corresponding timing entry to get the start time and duration. It's even more difficult if the fetch call happens deep inside a third-party library.

Some solutions that come to mind:

  • Monkey-patch the fetch API so that I can get the payload size, then attempt to correlate that request with a PerformanceResourceTiming entry. yuck
  • Ditch the third-party library I'm using for uploads and manually implement file splitting. yuck
  • Try to convince my coworkers and higher-ups that, in the year 2024, calculating upload throughput is just too hard. This is the approach I'm going with for now.
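To illustrate how awkward the first workaround is, here is a rough sketch of the fetch monkey-patch (assumptions: at most one in-flight request per URL so a URL-keyed map suffices, and only fixed-size bodies are handled; streaming bodies and FormData would need more work):

```javascript
// Map from request URL to payload size in bytes, filled in by the patch
// and read back when the matching PerformanceResourceTiming entry arrives.
const requestSizes = new Map();

// Best-effort body measurement; streams and FormData are not measurable here.
function bodySizeInBytes(body) {
  if (body == null) return 0;
  if (typeof body === "string") return new TextEncoder().encode(body).byteLength;
  if (typeof Blob !== "undefined" && body instanceof Blob) return body.size;
  if (body instanceof ArrayBuffer) return body.byteLength;
  if (ArrayBuffer.isView(body)) return body.byteLength;
  return NaN;
}

// Monkey-patch: record the payload size before delegating to the real fetch.
const originalFetch = globalThis.fetch;
globalThis.fetch = function patchedFetch(input, init = {}) {
  const url = typeof input === "string" ? input : input.url;
  requestSizes.set(url, bodySizeInBytes(init.body));
  return originalFetch.call(globalThis, input, init);
};

// Correlate a timing entry (matched by entry.name === url) with the
// recorded size to get megabits per second.
function uploadMbps(entry) {
  const bytes = requestSizes.get(entry.name);
  return (bytes * 8) / ((entry.duration / 1000) * 1e6);
}
```

Even this sketch breaks down for retried requests, duplicate URLs, and bodies the SDK constructs internally, which is exactly why a standard property would help.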

How would you solve it?

Waves a magic wand: Add a property to PerformanceResourceTiming entries called requestContentLength that is hydrated from the Content-Length header on the request.

Waves Dumbledore's Elder Wand: Add an API to track fetch progress updates in the browser Performance APIs. This might be a bit over-ambitious but it sure would be nice to finally have a standard way to measure upload (and download) progress and throughput. Perhaps this solution isn't realistic.
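If the magic-wand property existed, consuming it would be straightforward. A hypothetical sketch (`requestContentLength` does not exist in any implementation today, so the observer wiring is illustrative only and is not invoked here):

```javascript
// Pure helper: derive Mbps from a resource timing entry, assuming the
// proposed (hypothetical) requestContentLength field is present.
function entryUploadMbps(entry) {
  if (!(entry.requestContentLength > 0) || !(entry.duration > 0)) return null;
  return (entry.requestContentLength * 8) / (entry.duration / 1000) / 1e6;
}

// In a browser, wiring this to a PerformanceObserver would then be a few
// lines, with no fetch patching and no URL bookkeeping:
function watchUploads(report) {
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      const mbps = entryUploadMbps(entry);
      if (mbps !== null) report(entry.name, mbps);
    }
  });
  observer.observe({ type: "resource", buffered: true });
  return observer;
}
```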

Anything else?

This feature request is about getting one key piece of information needed to calculate throughput. The actual throughput calculation is itself quite difficult, but that is outside the scope of this request. Library authors and application developers looking to calculate throughput will quickly see how deep the rabbit hole goes and steer clear of feature requests for throughput indicators.

This feature is attempting to simplify (even if only by a little bit) what is already a difficult task.

Some discussions I've found that are relevant:


Labels

addition/proposal (New features or enhancements)
needs implementer interest (Moving the issue forward requires implementers to express interest)
