Conversation

@andrewstrohman (Contributor) commented Nov 11, 2025

When a pointer to a BTF struct is NULL, every type of matchArgs selector should fail to match, because the argument was not really resolved.

When we fail to dereference during resolve, report the "depth" at which the failure occurred, and expose this in the event, so that users can distinguish between bogus and legitimate arg values.

When investigating how to plumb this to the event, I found a bug regarding how returnCopy works. I've added a fix for that as well.

@tdaudi (Contributor) left a comment

Hello!

Thank you for taking the time to write this PR.
I opened an issue for this a few months ago; see #3728.

I don't know if you will share my point of view, but while thinking about this I found 3 problems:

  • First, what happens if the null structure is not the last resolved field? How can you track where it ends?
  • Also, what happens if you are resolving int *value and the ptr is null?
  • And if you are resolving a struct (e.g. type: file), you need to let extract_arg end correctly, because at this point you have no idea whether extract_arg returned a legitimate 0 or a null struct. But when resolving its fields, you will figure out that it is actually NULL.

The potential solution I was thinking of was to take all the return values of probe_read and find a way to send the error to userspace if it is < 0. So in the case of a null pointer, you can have a new error code that the kernel does not cover, for instance -100, dedicated to it.
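
A minimal sketch of how that suggestion could look in the BPF code (the helper name and the use of -100 are illustrative only, not Tetragon's actual implementation):

#include <linux/types.h>
#include <bpf/bpf_helpers.h>

/* Dedicated code for a NULL pointer, per the suggestion above. */
#define RESOLVE_ERR_NULL_PTR (-100)

/* Hypothetical wrapper: forward bpf_probe_read()'s negative return values
 * to userspace instead of silently ignoring them. */
static long deref_field(void *dst, __u32 size, const void *src)
{
	if (!src)
		return RESOLVE_ERR_NULL_PTR; /* NULL pointer detected up front */
	return bpf_probe_read(dst, size, src); /* < 0 if the read would fault */
}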

return 1;
/* NULL pointer */
if (*data->arg == 0) {
*data->null_ptr_found = true;
Contributor

If the last btf_config resolves to a null pointer, you would not reach that code, right?

Contributor Author

Yes, this is a good point. I think my new placement for error detection takes care of this problem.

- index: 1
  type: "uint8"
  btfType: "mystruct"
  resolve: "sub.v8"
Contributor

I think 2 test cases should be added. This one is good, but if sub alone is resolved, does it raise an error too? For instance, if type: file is null, does Tetragon display null as well?

@andrewstrohman (Contributor Author) Nov 14, 2025

I think 2 test cases should be added.

You go on to describe what sounds like one test case. What is the other test case you're thinking about here?

but if sub alone is resolved, does it raise an error too? For instance, if type: file is null, does Tetragon display null as well?

As I mentioned in #3728, I think that we should break this up into two independent tasks. One is to handle the dereferencing in extract_arg_depth, and the other is to handle issues for the terminal types reached by resolve (in read_arg), which would happen even without the use of the resolve feature. With pointer types, read_arg() has some issues when the pointer is NULL. Because of this, I don't want to test scenarios where we resolve to a NULL pointer and pass that NULL pointer to read_arg() (yet).

Also, for the example you have above:

what happens if you are resolving int *value and the ptr is null?

From my testing, I see that read_arg() is passed a pointer to an int in this scenario. If we want to resolve the int value that the pointer points to, I think we need to call extract_arg_depth one more time in order to dereference, because read_arg() takes the integer "by value" instead of by pointer to an integer. So, I don't want to test this scenario until we make this work in the non-NULL case.

So, I think that I should add a test for when the first pointer is NULL, and when the second to last pointer is NULL, but I don't want to test the scenario where the last pointer is NULL, because that problem is not specific to resolve, so I want to solve that issue independently.

@netlify netlify bot commented Nov 12, 2025

Deploy Preview for tetragon ready!

🔨 Latest commit: e900a2c
🔍 Latest deploy log: https://app.netlify.com/projects/tetragon/deploys/691f8105a5cbb60008975550
😎 Deploy Preview: https://deploy-preview-4327--tetragon.netlify.app

@andrewstrohman (Contributor Author) commented Nov 13, 2025

Thank you for taking the time to write this PR. I opened an issue for this a few months ago; see #3728.

Thanks for bringing this to my attention, and for your feedback/ideas here.

  • First, what happens if the null structure is not the last resolved field? How can you track where it ends?

I've changed things so that I keep track of which level of depth we are unable to dereference. I use a sentinel value of -1 to indicate that no issues were detected. This is then used as an indicator that selectors against this arg shouldn't match. I'm thinking that perhaps this depth-of-failure information could be put in the event, not only to distinguish resolve failures from successes, but also to show at what depth the failure occurred.
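
In rough terms, the dereference loop now does something like this (a simplified sketch with illustrative names and layout, not the actual extract_arg_depth code):

#include <linux/types.h>
#include <bpf/bpf_helpers.h>

#define MAX_RESOLVE_DEPTH 8 /* illustrative bound to keep the loop finite */

/* Follow a chain of pointer fields. Returns -1 (the sentinel) when every
 * dereference succeeded, otherwise the 1-based depth of the failed one. */
static int resolve_chain(unsigned long addr, const __u32 *offsets, int depth,
			 unsigned long *out)
{
	unsigned long next;
	int i;

	for (i = 0; i < depth && i < MAX_RESOLVE_DEPTH; i++) {
		if (bpf_probe_read(&next, sizeof(next),
				   (void *)(addr + offsets[i])) < 0)
			return i + 1; /* remember where dereferencing failed */
		addr = next;
	}
	*out = addr;
	return -1; /* sentinel: no issues were detected */
}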

  • Also, what happens if you are resolving int *value and the ptr is null?

I've investigated this, and it seems that this scenario doesn't work even in the non-NULL case. So, I want to fix that first, before considering how to handle the NULL case. This is what I'm trying:

struct mystruct {
	uint8_t  v8;
	uint16_t v16;
	uint32_t v32;
	uint64_t v64;
	struct mysubstruct sub;
	struct mysubstruct *subp;
	uint32_t *v32p;
};

In the userspace program I'm testing against, I do this:

struct mystruct s = {0};
uint32_t value = 3;
s.v32p = &value;

The policy's arg config:

    - index: 1
      type: "uint32"
      btfType: "mystruct"
      resolve: "v32p"

Is this the scenario you're describing here?

It seems like we might need one more iteration of calling extract_arg_depth, to dereference again, to make this work as expected. Put another way, I think we are acting on a pointer as if it were an integer.
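
A plain C illustration of that mismatch (ordinary userspace code just to show the distinction; it is not Tetragon code):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct mystruct {
	uint32_t *v32p;
};

int main(void)
{
	uint32_t value = 3;
	struct mystruct s = { .v32p = &value };
	uint32_t no_deref, one_more_deref;

	/* Stopping the resolve at the pointer field and reading 4 bytes there
	 * yields the low bytes of the pointer itself... */
	memcpy(&no_deref, &s.v32p, sizeof(no_deref));

	/* ...whereas one more dereference yields the intended value, 3. */
	memcpy(&one_more_deref, s.v32p, sizeof(one_more_deref));

	printf("%u vs %u\n", no_deref, one_more_deref);
	return 0;
}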

  • And if you are resolving a struct (e.g. type: file), you need to let extract_arg end correctly, because at this point you have no idea whether extract_arg returned a legitimate 0 or a null struct. But when resolving its fields, you will figure out that it is actually NULL.

I'll look into this more closely, but from reading the code it seems that we don't handle the type: file NULL case correctly, even outside of resolving. At first glance, I'm not seeing checks for probe_read() return values even though this type requires dereferencing. Overall, I'm thinking that my strategy is to fix issues in read_arg() where we should be watching the return value of probe_read() (types that require dereferencing), if such issues exist, independently from catching dereference problems in extract_arg_depth. I think the combination of these two methods will cover all bases. What do you think about this approach?

The potential solution I was thinking of was to take all the return values of probe_read and find a way to send the error to userspace if it is < 0.

Thanks for this idea. I think this is the right approach, and I have started moving in this direction.

So in the case of a null pointer, you can have a new error code that the kernel does not cover, for instance -100, dedicated to it.

I'm not sure if I follow this. I think we need to communicate the error out of band of the arg value or else we will have a collision. Perhaps we can report the depth at which dereferencing failed in the event, in order to distinguish between resolve success and failure.

I'm trying to figure out what the right behavior is for when we cannot resolve due to a NULL pointer. My current approach is to indicate that there was a resolve error, but not let that prevent the event from firing, unless there is a matchArgs selector associated with the arg. In this method, I would communicate in the event, somehow, that the arg was not actually resolved. But I realized, from the issue that you created about this, that NULL string pointers prevent an event from firing, regardless of selectors. So maybe that's a more appropriate outcome.

@kkourt (Contributor) commented Nov 13, 2025

I'm trying to figure out what the right behavior is for when we cannot resolve due to a NULL pointer. My current approach is to indicate that there was a resolve error, but not let that prevent the event from firing, unless there is a matchArgs selector associated with the arg. In this method, I would communicate in the event, somehow, that the arg was not actually resolved.

The above makes sense to me.

But I realized, from the issue that you created about this, that NULL string pointers prevent an event from firing, regardless of selectors. So maybe that's a more appropriate outcome.

I don't think that's the appropriate outcome.

I'll use the example from the issue for clarity:

  lsmhooks:
  - hook: "bprm_check_security"
    args:
    - index: 0
      type: "string"
      resolve: "executable.f_path.dentry.d_name.name"
    selectors:
    - matchActions:
      - action: Post

In this case (that is, where there are no matchArgs), I think we should generate an event (perhaps with an empty argument, or even a special value to indicate that some resolution went wrong).

@andrewstrohman andrewstrohman force-pushed the pr/andrewstrohman/resolve-null branch 17 times, most recently from 765d55d to f56102b Compare November 18, 2025 23:32
@andrewstrohman andrewstrohman force-pushed the pr/andrewstrohman/resolve-null branch 3 times, most recently from 73648d1 to 4767048 Compare November 19, 2025 05:18
We have been ignoring errors returned from bpf_probe_read() when
dereferencing during resolve. bpf_probe_read() returns a negative value
when it detects that a seg fault would occur. This is problematic both
in terms of exposing bogus argument values in the event and also when
performing filtering on behalf of the matchArgs selector.

This change notes when we were unable to dereference so that arg
filtering does not happen against a bogus value. We send the depth at
which the dereference failed to userspace, so that the event can reflect
that there was a resolve failure, and at what depth during resolve that
error occurred. If the depth is 0, this indicates no issues with
resolve/dereference.

Note that this approach leaves the bogus argument value in the message
transmitted to userspace and in the event.

Signed-off-by: Andy Strohman <[email protected]>
This indicates if a seg fault was encountered during argument
resolution.

0 indicates no resolve issue. Otherwise a non-zero value indicates the
depth at which dereferencing failed.

Signed-off-by: Andy Strohman <[email protected]>
The previous commit altered a .proto file. This commit reflects the
outcome of running "make protogen".

Signed-off-by: Andy Strohman <[email protected]>
@andrewstrohman andrewstrohman force-pushed the pr/andrewstrohman/resolve-null branch from 4767048 to e900a2c Compare November 20, 2025 20:58
A depth of 0 indicates no resolve error. Any non-zero value indicates
the depth at which the resolution failed.

Signed-off-by: Andy Strohman <[email protected]>
This program will be used to test how resolve handles NULL pointers.

Signed-off-by: Andy Strohman <[email protected]>
Test that the resolve error depth is expected when NULL pointers are
encountered.

Test that matchArgs will not match when resolve fails.

Signed-off-by: Andy Strohman <[email protected]>
With this change, GetIndex() has the same semantic meaning regardless of
hook type. It returns the index of the Arg within the spec.

This fixes a returnCopy issue where we were overwriting the wrong arg
when we merge the arg value after the retprobe event.

Signed-off-by: Andy Strohman <[email protected]>
@andrewstrohman andrewstrohman force-pushed the pr/andrewstrohman/resolve-null branch from e900a2c to 5bcb17c Compare November 21, 2025 00:27
This test confirms that we don't mix up argument indexes within the
function signature vs argument indexes within the spec. It does this by
providing a different ordering of args in the spec than the function
signature.

We overwrite a returnCopy arg's value with retprobe. This test confirms
that we will overwrite the correct arg.

Signed-off-by: Andy Strohman <[email protected]>
@andrewstrohman andrewstrohman changed the title resolve NULL pointer handle resolve of NULL pointers Nov 21, 2025
@andrewstrohman andrewstrohman marked this pull request as ready for review November 21, 2025 01:32
@andrewstrohman andrewstrohman requested a review from a team as a code owner November 21, 2025 01:32
@olsajiri olsajiri requested review from olsajiri and tdaudi and removed request for tdaudi November 21, 2025 16:56
@andrewstrohman andrewstrohman marked this pull request as draft November 21, 2025 21:08
@andrewstrohman (Contributor Author) commented Nov 21, 2025

I'm converting back to draft, as I just realized a problem with my approach. I need to think about tracepoints and usdt a bit more.

@olsajiri (Contributor):
hi, just some thoughts for discussion.. I don't feel strongly about it, I'm not sure what the right solution is ;-)

do we need that error-depth number in the final event? I wonder whether, when you see it in the event, in most cases you'll already know what failed.. in which case using some flag instead would be enough?

and if there's a reason to keep it (I guess long derefs would justify that), could we maybe store the part of the deref chain (string) that failed instead of the depth? (we have that deref chain info already)

and how about using a metric instead, something like deref_fails_cnt[policy,sensor,arg-idx,deref-chain].. as I'm not sure this info should be part of the final event.. but I could easily be wrong

@andrewstrohman andrewstrohman marked this pull request as ready for review November 22, 2025 02:10
@andrewstrohman (Contributor Author):
hi, just some thoughts for discussion.. I don't feel strongly about it, I'm not sure what the right solution is ;-)

Thanks for the review and helping me think this through.

do we need that error-depth number in the final event? I wonder whether, when you see it in the event, in most cases you'll already know what failed.. in which case using some flag instead would be enough?

I'd be OK with just a boolean flag that indicates that resolving failed. My primary goal here is to indicate that the argument value is bogus. I can't just remove the bogus arg value from the event, because that would shift some arg values to the left of what was configured in the spec, causing a mismatch with expectations.

I think I went down this path of recording which dereference failed, because of @tdaudi's earlier comment. He said:

First, what happens if the null structure is not the last resolved field? How can you track where it ends?

@tdaudi, did I understand your comment correctly? Is this something that is important to you? If this is not what you meant, or you don't feel strongly about this, I think we should just go with a boolean flag. I initially thought this might be nice to expose, but it's not important to me.

If users really want to know where the NULL dereference happened, they can resolve the intermediate pointers to see which one was NULL, if they have the args to spare (we have a max of 5). This way, they can also run a selector against it too, whereas just reporting the depth does not allow them to filter based on it.

and if there's a reason to keep it (I guess long derefs would justify that), could we maybe store the part of the deref chain (string) that failed instead of the depth? (we have that deref chain info already)

When you say "long derefs" here, I think you mean multiple dereferences per resolve. Please let me know if I misunderstood. I agree that if we decide to pinpoint the exact deference that failed it would be better to go all the way and show them where it failed via a prefix of the resolve configuration string. Just reporting the depth requires the user to know too much (and do extra investigation) in order to be able to interpret it in a useful way. But I'm currently leaning toward just changing it to be a boolean.

and how about using a metric instead, something like deref_fails_cnt[policy,sensor,arg-idx,deref-chain].. as I'm not sure this info should be part of the final event.. but I could easily be wrong

When you say "this info", I think you mean the depth of dereference failure here. I think we agree that we need some signal in the event that indicates that the arg value is bogus. Please correct me if I misunderstand your position here.

I would prefer your first or second suggestion over this, because the user cannot definitively know which event the counter increment was related to.

When we fix read_arg() so that NULL pointers won't prevent events (see here for background), we can use this same boolean flag to indicate the dereference failure, unifying the two sources of dereference failures.

@tdaudi (Contributor) commented Nov 22, 2025

@tdaudi, did I understand your comment correctly? Is this something that is important to you? If this is not what you meant, or you don't feel strongly about this, I think we should just go with a boolean flag. I initially thought this might be nice to expose, but it's not important to me.

If users really want to know where the NULL dereference happened, they can resolve the intermediate pointers to see which one was NULL, if they have the args to spare (we have a max of 5). This way, they can also run a selector against it too, whereas just reporting the depth does not allow them to filter based on it.

My initial idea was to have a convenient way to know which field in the resolve path is null, to avoid spending time on updating the policy and testing again. But for production, I think it is interesting to know where the resolve fails when the null value is unexpected. Knowing where the null appears helps with understanding the reason and avoids the difficulty of having to reproduce it, especially if it occurs rarely.

  a = (&e->a0)[arg_index];

- extract_arg(config, index, &a);
+ extract_arg(config, index, &a, &e->resolve_err_depth[index]);
Contributor

so if we fail to resolve, we still continue and store the 'unresolved data', which is likely null, right? I think we can skip it

but how about we make this more generic: we start each argument's data with a '__u32 error' status, and in case of failure we write just the error (!= 0) as the argument value.. in case of resolve failure we encode the depth into it

then on the userspace side getArg would just read the first uint32 from the event reader to get the argument status,
so all the extra retrieval of the error depth from each probe type would not be needed; also the argument index will probably be clear, because you have the error for the argument you are about to read, or skip reading if it's != 0

we could use this to store other errors that happen during argument encoding, of which there are plenty

and in the final event, instead of the depth value, I'd add an error string that in case of resolve failure would contain something like: "failed to resolve current->file->f_inode"

seems like this could save some cycles and prevent bogus argument values, because IIUC if we bail out of argument storing in the middle on the kernel side, the userspace side still tries to read the whole part
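
A rough sketch of the per-argument layout this would imply (purely hypothetical names and shape, not an existing Tetragon structure):

#include <linux/types.h>

/* Each argument in the event buffer would begin with a status word; the
 * payload follows only when error == 0. On the userspace side, getArg
 * would read this __u32 first and skip the payload when it is non-zero. */
struct arg_encoding {
	__u32 error;	/* 0 = ok; e.g. the resolve depth on a resolve failure */
	__u8 data[];	/* argument payload, present only when error == 0 */
};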
