Fix: parsing ident starting with underscore in certain dialects #1835


Merged: 4 commits merged into apache:main on May 10, 2025

Conversation

@MohamedAbdeen21 (Contributor) commented on Apr 30, 2025

The dialects that support underscore as a separator in numeric literals used to parse `._123` as a number, which I don't think is valid SQL. However, that means that something like `._abc` would be parsed as Number `._` and word `abc`, which is wrong.

This PR splits the tokenizer match branch for numbers and periods into two branches to make things easier, fixes the issue mentioned above, and adds tests.

CC: @mvzink
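
To make the inputs concrete, here is a minimal sketch (not part of the PR) that tokenizes the cases discussed above. The choice of `ClickHouseDialect` is an assumption, standing in for any dialect whose `supports_numeric_literal_underscores()` returns true, and the comments describe the pre-fix behavior reported in this PR rather than asserting the exact post-fix token stream.

```rust
use sqlparser::dialect::ClickHouseDialect;
use sqlparser::tokenizer::Tokenizer;

fn main() {
    // Assumption: this dialect allows '_' as a separator in numeric literals,
    // so "1_000" is meant to tokenize as a single number.
    let dialect = ClickHouseDialect {};

    let ok = Tokenizer::new(&dialect, "SELECT 1_000").tokenize();
    println!("{ok:?}");

    // Before this fix, the same code path also swallowed a leading "._":
    // "._123" came back as the single Number "._123", and "._abc" came back
    // as Number "._" followed by the word "abc", breaking "t._abc".
    let ident = Tokenizer::new(&dialect, "SELECT t._abc FROM t").tokenize();
    println!("{ident:?}");
}
```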

src/tokenizer.rs Outdated
Comment on lines 1292 to 1300
Some('_') => {
    self.tokenizer_error(
        chars.location(),
        "Unexpected underscore here".to_string(),
    )
}

Contributor:

I wonder if it's worth returning an error here, or whether we lose anything by allowing the tokenizer to continue? I'm guessing it's still possible for the tokenizer to return Token::Period here as well?

Contributor Author:

AFAIK an underscore after a dot can only be the start of an identifier, so it makes sense to error here.

Contributor Author:

Without the error, `._123` as a whole will be parsed as a number, which is not valid in any standard AFAICT.
`._abc` as a whole would also be parsed as a word, which doesn't make sense because the previous token was not a word.

src/tokenizer.rs Outdated
Comment on lines 1309 to 1340
let is_number_separator = |ch: char, next_char: Option<char>| {
    self.dialect.supports_numeric_literal_underscores()
        && ch == '_'
        && next_char.is_some_and(|c| c.is_ascii_digit())
};

s += &peeking_next_take_while(chars, |ch, next_ch| {
    ch.is_ascii_digit() || is_number_separator(ch, next_ch)
});

// Handle exponent part
if matches!(chars.peek(), Some('e' | 'E')) {
    let mut exp = String::new();
    exp.push(chars.next().unwrap());

    if matches!(chars.peek(), Some('+' | '-')) {
        exp.push(chars.next().unwrap());
    }

    if matches!(chars.peek(), Some(c) if c.is_ascii_digit()) {
        exp += &peeking_take_while(chars, |c| c.is_ascii_digit());
        s += &exp;
    }
}

// Handle "L" suffix for long numbers
let long = if chars.peek() == Some(&'L') {
    chars.next();
    true

Contributor:

Hmm, most of this logic looks to already be duplicated on the number-parsing code path, so that side effect would be undesirable, I think.

If I understood the issue being solved, it's only the case of ._ being parsed as a number. Would it be possible/more desirable to only update the existing logic to properly detect and handle that case, or is the current logic not well equipped to handle that sanely?

Contributor Author:

I know. I tried to fix it without splitting the branches, but I couldn't.

Feel free to have a go at it if possible.

Contributor Author:

The problem is that the match happens on a peek, and you need to consume the dot in order to peek the underscore.

What if the second peek isn't an underscore? You'd need to un-consume the dot for it to be parsed as part of the number.
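
A standalone illustration of that constraint (a sketch, not the tokenizer's actual code): Peekable only exposes one character of lookahead and has no way to push a consumed character back, so once the dot is consumed to inspect the next character, it cannot simply be un-consumed.

```rust
// Minimal sketch of the single-character lookahead limitation.
fn classify(input: &str) -> &'static str {
    let mut chars = input.chars().peekable();
    if chars.peek().copied() == Some('.') {
        // To see what follows the dot we have to consume it first...
        chars.next();
        match chars.peek().copied() {
            Some(c) if c.is_ascii_digit() => "number literal starting with '.'",
            Some('_') => "dot followed by an identifier start",
            _ => "plain period",
        }
        // ...but the dot is gone now: there is no way to un-consume it, so
        // handing it back to the number branch requires restructuring the code.
    } else {
        "not a dot"
    }
}

fn main() {
    assert_eq!(classify(".5"), "number literal starting with '.'");
    assert_eq!(classify("._abc"), "dot followed by an identifier start");
    assert_eq!(classify(". "), "plain period");
}
```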

Contributor:

Does the peekable_clone help in this case, maybe, in order to look ahead without consuming characters?
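
For reference, lookahead by cloning a Peekable (a generic sketch of the idea; the tokenizer's own peekable_clone helper may look different) can peek past the dot without consuming anything:

```rust
use std::iter::Peekable;
use std::str::Chars;

// Sketch: look two characters ahead by cloning the Peekable. The clone
// borrows the same underlying &str; the original iterator is untouched.
fn dot_then_digit(chars: &Peekable<Chars<'_>>) -> bool {
    let mut ahead = chars.clone();
    matches!(ahead.next(), Some('.')) && matches!(ahead.next(), Some(c) if c.is_ascii_digit())
}

fn main() {
    assert!(dot_then_digit(&".5x".chars().peekable()));
    assert!(!dot_then_digit(&"._abc".chars().peekable()));
}
```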

Contributor Author:

Of course, but I'd prefer not to clone the entire token stream, especially since the duplicated code is only a few lines.

Contributor Author:

Would you prefer pulling the duplicated code into functions, if possible?

Contributor:

Oh, is it really the case that cloning the peekable iterator would clone the entire stream? (IIRC it should only clone a struct with pointer offsets.) The functionality is used in a few places in the tokenizer, so I think it would be preferable to avoid duplicating the logic. (If it turns out to be an expensive thing to do, then maybe we need to do a pass through the tokenizer to avoid that pattern overall, but that would be unrelated to this PR, of course.)
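
Indeed, a quick sanity check (a sketch, not from the PR): the char iterator borrows the string, so cloning it, or a Peekable around it, copies a few machine words plus the peeked slot, never the text itself.

```rust
use std::mem::size_of_val;

fn main() {
    let sql = "SELECT 1_000_000 + t._abc FROM t".to_string();
    let chars = sql.chars().peekable();

    // Cloning copies the iterator struct (pointer + length + peeked slot),
    // not the SQL text it points into.
    let lookahead = chars.clone();
    println!("iterator size: {} bytes", size_of_val(&lookahead));
    println!("text size:     {} bytes", sql.len());
}
```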

Contributor Author:

updated

The dialects that support underscore as a separator in numeric literals
used to parse ._123 as a number, meaning that an identifier like
._abc would be parsed as Number `._` and word `abc`, which is
obv wrong.

This PR splits the tokenizer branch for numbers and periods into
two branches to make things easier, fixes the issue mentioned above
and adds tests.

@iffyio (Contributor) left a comment:

LGTM! Thanks @MohamedAbdeen21!
cc @alamb

@iffyio merged commit 052ad4a into apache:main on May 10, 2025 (9 checks passed)