
Optimize for large files #1

@mindplay-dk

Description


The parser currently requires the entire file to be read into memory first, and then splits the whole file body into an array of Unicode characters.
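
For reference, the current approach amounts to something like the following sketch. (This is a minimal illustration in Python; the file name and variable names are hypothetical, not taken from the actual codebase.)

```python
# Sketch of the current approach: the whole file is materialized
# in memory, then exploded into a per-character list, so peak
# memory grows with file size (several times over).
with open("migration.sql", encoding="utf-8") as f:
    body = f.read()   # entire file as one string
chars = list(body)    # one list entry per Unicode character
```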

This is okay for schema migrations (which is primarily what we needed this for), since file sizes tend to be manageable, but it is not a suitable approach for very large files.

Make it work with streams rather than strings and character arrays.
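
A streaming alternative could read fixed-size chunks and yield characters lazily, keeping memory usage bounded regardless of file size. A minimal sketch of the idea (the function name, chunk size, and file name are illustrative assumptions, not from this project):

```python
from typing import Iterator

def iter_chars(path: str, chunk_size: int = 8192) -> Iterator[str]:
    """Yield one Unicode character at a time without loading the whole file."""
    with open(path, encoding="utf-8") as f:
        while True:
            # In text mode, read(n) counts characters, so a chunk
            # boundary never splits a code point.
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield from chunk

# The parser would then consume this iterator instead of indexing
# into a pre-built character array:
for ch in iter_chars("migration.sql"):
    ...  # feed ch to the tokenizer / state machine
```

The main design consequence is that the parser loses random access: it has to be written as a state machine over a forward-only character stream (possibly with a small lookahead buffer), rather than seeking freely within a character array.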
