Temporary buffer size hardcoded in process_field_size_default is too small in some cases and can cause a seg fault #2954

@bwbusacker

Description

```c
void process_field_size_default(int offset, char *sfield, __u8 *buf, int size, char *datastr)
{
	__u8 cval;
	char description_str[256] = "0x";	/* <-- hardcoded 256-byte buffer */
	char temp_buffer[3] = { 0 };

	for (unsigned char i = 0; i < (unsigned char)size; i++) {
		cval = (buf[offset + i]);

		sprintf(temp_buffer, "%02X", cval);
		strcat(description_str, temp_buffer);
	}
	sprintf(datastr, "%s", description_str);
}
```

Micron's plugin uses this method to parse its log data. On some drives the hex string written into `description_str` requires between 256 and 512 bytes. If 256 bytes is an intended upper limit, the function should check the input size and fail out instead of overflowing the buffer and segfaulting. If 256 bytes is not a limit that needs to be enforced, then changing the definition of `description_str` from 256 to 512 bytes resolved the issue for us. Requesting that this method be revisited to enforce its limits, and possibly to allow up to 512 bytes instead of 256 or to allocate `description_str` dynamically based on the size that is needed.
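A bounds-checked variant along these lines would avoid the overflow. This is a minimal sketch, not a proposed patch for nvme-cli; the 512-byte buffer, the dropped unused `sfield` parameter, the added `datastr_len` argument, and the `-1` error return are all assumptions for illustration:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

typedef unsigned char __u8;

/* Each input byte expands to 2 hex characters; the output also needs
 * the "0x" prefix and a NUL terminator. Instead of writing past the
 * end of a fixed buffer, reject inputs that do not fit. */
static int process_field_size_checked(int offset, const __u8 *buf, int size,
				      char *datastr, size_t datastr_len)
{
	char description_str[512] = "0x";
	size_t pos = 2;	/* write position, just past "0x" */

	/* 2 hex digits per byte + "0x" + NUL must fit in both buffers */
	if ((size_t)size * 2 + 3 > sizeof(description_str) ||
	    (size_t)size * 2 + 3 > datastr_len)
		return -1;

	for (int i = 0; i < size; i++) {
		/* snprintf never writes past the remaining space */
		snprintf(description_str + pos, sizeof(description_str) - pos,
			 "%02X", buf[offset + i]);
		pos += 2;
	}
	snprintf(datastr, datastr_len, "%s", description_str);
	return 0;
}
```

With the size check up front, an oversized field becomes a recoverable error rather than stack corruption, and the per-iteration `snprintf` replaces the unbounded `strcat`.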

There are other methods in this file with similar hardcoded temporary arrays that could also be affected by a value exceeding the buffer size and will likewise segfault.
