Use streams when calculating digest of data file #54
Open
Description
Currently there is an upper limit on the size of signable data files, determined by the server's available memory.
The problem is that `byte[] org.digidoc4j.DataFile.calculateDigestInternal(DigestAlgorithm digestAlgorithm)` computes the digest via `getBytes`, which loads the entire data file into memory. This is inefficient.
A more efficient solution would be to use `byte[] eu.europa.esig.dss.digest(final DigestAlgorithm digestAlgo, final InputStream inputStream)`.
In that case, the digest is calculated over a stream; internally it uses a 4096-byte buffer.
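To illustrate the streaming approach, here is a minimal sketch using only the standard-library `java.security.MessageDigest`: the file is read in fixed-size chunks and fed into the digest incrementally, so memory usage stays constant regardless of file size. The class and method names here are illustrative, not the actual digidoc4j or DSS API; the 4096-byte buffer mirrors the size mentioned above.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class StreamDigest {

    // Compute a digest by reading the stream in fixed-size chunks,
    // so only one small buffer (not the whole file) is held in memory.
    public static byte[] digest(InputStream in, String algorithm)
            throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance(algorithm);
        byte[] buffer = new byte[4096]; // same buffer size DSS uses internally
        int read;
        while ((read = in.read(buffer)) != -1) {
            md.update(buffer, 0, read);
        }
        return md.digest();
    }

    // Helper to render a digest as a lowercase hex string.
    public static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder(bytes.length * 2);
        for (byte b : bytes) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Demo on an in-memory stream; in practice this would be a FileInputStream.
        InputStream in = new ByteArrayInputStream("abc".getBytes("UTF-8"));
        // SHA-256("abc") is the well-known test vector
        // ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
        System.out.println(toHex(digest(in, "SHA-256")));
    }
}
```

Because `MessageDigest.update` accumulates state across calls, the result is identical to digesting the whole byte array at once, only without ever materializing it.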
Attached is a zip containing a patch with a possible fix (generated using git diff on the develop branch).
digidoc4j_datafile_streams.zip
Required for:
- systems with low memory
- systems with large files
- systems with large numbers of parallel users
Best regards,
Mart Simisker