Description
The spec specifies the behavior of byteOffset when creating a new TypedArray:
- If offset modulo elementSize ≠ 0, throw a RangeError exception.
(https://tc39.github.io/ecma262/#sec-typedarray-buffer-byteoffset-length)
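A minimal repro of that rule: constructing a Float32Array (element size 4) at a byteOffset that is not a multiple of 4 throws at construction time, while an aligned offset works fine.

```javascript
// A Float32Array element is 4 bytes, so any byteOffset that is not a
// multiple of 4 is rejected by the constructor with a RangeError.
const buf = new ArrayBuffer(16)

const aligned = new Float32Array(buf, 4, 3) // ok: 4 % 4 === 0

let threw = false
try {
  new Float32Array(buf, 2, 3) // 2 % 4 !== 0
} catch (e) {
  threw = e instanceof RangeError
}
```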
Imo this is a big limitation for memory-efficient parsing of files. Take, for example, a binary STL file which holds a 3D model:
input.addEventListener('change', function (e) {
  const file = e.target.files[0]
  const reader = new FileReader()
  reader.onload = () => {
    const buffer = reader.result
    let offset = 0
    // First 80 bytes are a header with a description
    const description = String.fromCharCode.apply(null, new Uint8Array(buffer, offset, 80))
    offset += 80
    // Next 4 bytes: the number of triangles as a Uint32
    const numberOfTriangles = new Uint32Array(buffer, offset, 1)[0]
    offset += 4
    const normal = new Float32Array(buffer, offset, 3)
    offset += 12
    //
    // reading vertices (all aligned)...
    //
    // at some point you reach the 2-byte attribute byte count
    // and you go
    offset += 2
    // BAAAM, no alignment anymore -> RangeError
    const anotherNormal = new Float32Array(buffer, offset, 3)
  }
  reader.readAsArrayBuffer(file)
})
You could very easily wrap a Float32Array around every value you need and never touch the buffer or copy ANY data. However, at some point the Float32Array has to start at a byteOffset that is not aligned (see the code above).
This throws the RangeError specified in the spec. But why do we have this limitation?
A TypedArray is only a view on the underlying data. It shouldn't matter at which byte I place my stencil to make sense of the underlying data. As it stands, I have to copy over all the data in the buffer, which slows things down a lot when dealing with very big files.
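For what it's worth, DataView has no such alignment requirement, so unaligned floats can be read one at a time (at the cost of per-element calls and explicit endianness). A sketch, using a hand-filled buffer in place of the file data:

```javascript
// Reading three little-endian floats from a byteOffset that is NOT
// 4-aligned, via DataView (which permits any byteOffset).
const buffer = new ArrayBuffer(14)

// Hypothetical stand-in for file content: write floats starting at byte 2.
const writer = new DataView(buffer)
writer.setFloat32(2, 1.5, true)   // true = little-endian
writer.setFloat32(6, -2.25, true)
writer.setFloat32(10, 3.0, true)

// Read them back from the unaligned offset without copying the buffer.
const view = new DataView(buffer)
const normal = [
  view.getFloat32(2, true),
  view.getFloat32(6, true),
  view.getFloat32(10, true),
]
```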
- Why do we have this limitation?
- If there is no reason, can we allow offsets which are not aligned?
- If there is a reason, what is it and can we work around it?
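One partial workaround I can see, short of copying the whole file: copy only the unaligned slice into a fresh buffer with ArrayBuffer.prototype.slice, since a new buffer always starts at offset 0. A sketch with a hand-filled buffer standing in for the file:

```javascript
// Hypothetical source buffer with a float written at the unaligned byte 2.
const src = new ArrayBuffer(14)
new DataView(src).setFloat32(2, 7.5, true) // little-endian

// Copy just the 12 bytes we need; the copy starts at offset 0, so it is
// aligned and a Float32Array can be wrapped around it without a RangeError.
const copy = src.slice(2, 14)
const floats = new Float32Array(copy)
```

This still copies data, but only the misaligned region rather than the entire buffer.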