Description
This was touched on in #7 and #8, and has an issue on the main spec in immersive-web/webxr#894, but I wanted to call it out in its own issue here to ensure that it gets handled/discussed appropriately.
Unlike WebGL, which uses a [-1, 1] depth range for its clip space coordinates, WebGPU uses a [0, 1] depth range. This means that if projection matrices designed for WebGL are returned from the API and used without modification, the results will be incorrect.
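For context, the difference amounts to a constant remap of clip-space z (z' = 0.5 * z + 0.5 * w), which can be folded into the projection matrix up front. A minimal sketch of that conversion, assuming column-major Float32Array matrices as WebXR already returns (GL_TO_GPU_DEPTH and convertProjectionMatrix are illustrative names, not proposed API):

// Constant correction matrix that remaps clip-space z from [-1, 1] to [0, 1]
// (column-major order, matching XRView.projectionMatrix).
const GL_TO_GPU_DEPTH = new Float32Array([
  1, 0, 0,   0, // column 0
  0, 1, 0,   0, // column 1
  0, 0, 0.5, 0, // column 2
  0, 0, 0.5, 1, // column 3
]);

// Returns GL_TO_GPU_DEPTH * glProjection, so the result outputs
// WebGPU-style [0, 1] depth. Illustrative helper only.
function convertProjectionMatrix(glProjection) {
  const out = new Float32Array(16);
  for (let col = 0; col < 4; ++col) {
    for (let row = 0; row < 4; ++row) {
      let sum = 0;
      for (let k = 0; k < 4; ++k) {
        sum += GL_TO_GPU_DEPTH[k * 4 + row] * glProjection[col * 4 + k];
      }
      out[col * 4 + row] = sum;
    }
  }
  return out;
}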
My proposal for this is that, assuming we have an "API mode switch" as discussed in #7, the projectionMatrix of every XRView produced by the session simply begins returning the correct projection matrix for the API. Something like so:
const session = await navigator.xr.requestSession('immersive-vr', {
requiredFeatures: ['webgpu'], // Not the finalized API shape!
});
session.requestAnimationFrame((time, frame) => {
const viewer = frame.getViewerPose(xrReferenceSpace);
for (let view of viewer.views) {
const projectionMat = view.projectionMatrix; // Is a [0, 1] range matrix because of 'webgpu' required feature.
// And so on...
}
});

Fairly obvious, I think, outside of determining the exact mechanism for specifying the graphics API.
For the sake of completeness: an alternative would be to begin returning a second, WebGPU-appropriate matrix alongside the current one (projectionMatrixGPU? projectionMatrixZeroToOne?) if we felt there was any benefit to having both. I don't see what that benefit would be, though, and it would create a pit of failure for developers porting their WebXR content from WebGL to WebGPU (sketched below). We could also just tell developers to do the math to transform between the two ranges themselves (e.g. applying the correction shown above), but that would be rather petty of us when we could solve the problem so easily on their behalf.
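To illustrate that pit of failure with the dual-attribute alternative: code ported from a WebGL-based renderer would keep compiling and running while silently reading the wrong matrix inside the same render loop shown above. (projectionMatrixGPU here is the hypothetical second attribute, not proposed API.)

for (const view of viewer.views) {
  // Ported WebGL-era code would naturally keep reading this...
  const projectionMat = view.projectionMatrix;       // [-1, 1] depth range
  // ...when a 'webgpu' session would actually need this instead:
  // const projectionMat = view.projectionMatrixGPU; // [0, 1] depth range (hypothetical)
}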