Expose setting CPU SamplingProfiler rate #83635


Closed
wants to merge 9 commits into from

Conversation

pedrobsaila
Contributor

Fixes #82939

@ghost ghost added area-Tracing-coreclr community-contribution Indicates that the PR has been added by a community member labels Mar 18, 2023
@@ -56,6 +56,17 @@ static
void
sample_profiler_enable (void);

static
void
ep_sample_event_pipe_callback(
Member

Separate prototype isn't needed.

@tommcdon
Member

@noahfalk @davmason @brianrob PTAL

@tommcdon tommcdon requested a review from lateralusX April 10, 2023 15:08
@tommcdon
Member

while (offset < filter_data_size) {
	candidateKey = filter_data_char + offset;
	if (strcmp(candidateKey, sampleProfilerIntervalMSKey) == 0) {
		ep_sample_profiler_set_sampling_rate(strtoull(candidateKey + 25, NULL, 10) * 1000000);
Contributor

We should check that strtoull returns a value greater than 0. If someone passes in a bad value that cannot be parsed it would return 0 and the sample thread would sample continuously and busy lock the app.
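A minimal sketch of that check, assuming the parsing stays in the provider callback. The helper and its name are illustrative; only ep_sample_profiler_set_sampling_rate comes from the PR:

#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

// As used in the PR (ms value converted to ns).
extern void ep_sample_profiler_set_sampling_rate (uint64_t rate_ns);

// Hypothetical helper: apply "SampleProfilerIntervalMS=<n>" only when <n> is a
// sane, non-zero number. A parse failure or an explicit 0 would otherwise make
// the sample thread sample continuously and busy lock the app.
static void
apply_interval_ms (const char *value)
{
	char *end = NULL;
	errno = 0;
	uint64_t interval_ms = strtoull (value, &end, 10);

	// Reject empty strings, trailing garbage, overflow, and zero.
	if (end == value || *end != '\0' || errno == ERANGE || interval_ms == 0)
		return; // keep the current sampling rate

	ep_sample_profiler_set_sampling_rate (interval_ms * 1000000ull); // ms -> ns
}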

Member

@lateralusX lateralusX Apr 11, 2023

We should probably also add logic to make sure the value falls within some accepted limits, so maybe between 1000 samples/second (the current default, and with the current sampling solution I'm not sure we want to allow anything more frequent than that) and some lower limit that makes sense. You might want to sample quite infrequently in some cases, so I'm not sure what the lower limit should be; maybe we should just cap it at some sane value, like 1 sample/second, since it is still a sample profiler that you configure. If you would like even more infrequent samples, then maybe that should be handled by a custom provider and not the sample profiler.

Member

Agreed that keeping a sane range is a good idea. For example, ETW will not allow you to sample any more frequently than every 0.125 seconds, if I recall correctly. I'm not super concerned about the other side of the range because there isn't a risk of affecting the availability of the app - there is a risk of not getting data. I think it's probably safe to allow any large value, though at a certain point it doesn't really provide much value for the scenarios that we are currently tracking.
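A sketch of such a clamp; the [1 ms, 1000 ms] interval range (1000 Hz down to 1 Hz) is only illustrative, not a decided limit:

#include <stdint.h>

// Illustrative bounds only: 1 ms (1000 Hz, the current default) up to 1000 ms (1 Hz).
#define MIN_INTERVAL_MS 1ull
#define MAX_INTERVAL_MS 1000ull

static uint64_t
clamp_interval_ms (uint64_t interval_ms)
{
	if (interval_ms < MIN_INTERVAL_MS)
		return MIN_INTERVAL_MS;
	if (interval_ms > MAX_INTERVAL_MS)
		return MAX_INTERVAL_MS;
	return interval_ms;
}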

@ghost ghost added the needs-author-action An issue or pull request that requires more info or actions from the author. label Apr 11, 2023
@davmason
Contributor

Thanks for opening this, @pedrobsaila! Sorry it went unnoticed for so long

@lateralusX
Member

lateralusX commented Apr 11, 2023

@pedrobsaila thanks for the contribution!

Couple of thoughts:

  • The sample frequency is a global setting, currently only set during EventPipe init. If we make it possible to change it from sessions, then we will affect the sample frequency for all running sessions, so that is a side effect to be aware of when implementing this.
  • We might need some locking to correctly handle the values of _sampling_rate_in_ns and _time_period_is_set, probably best done in ep_sample_profiler_set_sampling_rate, or alternatively by locking in the sample profiler callback using EP_LOCK_ENTER/EP_LOCK_EXIT (see the sketch after this list).
  • Since _sampling_rate_in_ns can now be read/written by parallel threads (including the sample thread) without holding the config lock, it needs to be read/written using volatile load/store functions whenever the config lock is not held. The same goes for _time_period_is_set, but if we make sure it is only read/written while holding the config lock, there is no need for that.
  • Right now the setting, SampleProfilerIntervalMS, is expressed in ms; an alternative used by other tools (like Linux perf) is to express frequency as number of samples/second (in Hz), so our current default would for example be 1000 Hz, i.e. 1000 samples/second. If we would like to express it that way instead, we should name the property SampleProfilerFrequency. @davmason, @noahfalk, thoughts?
  • We should probably have an EventPipe env var setting for this as well, setting the default for the whole process in case you don't want the default 1000 Hz sample rate.
  • We should add limits for the frequency, making sure it falls within some accepted range of values.
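A rough sketch of the locking/volatile point above, using C11 atomics as a stand-in for EventPipe's volatile load/store helpers and the EP_LOCK_ENTER/EP_LOCK_EXIT config lock (the real code would use those; the names below are otherwise placeholders):

#include <stdatomic.h>
#include <stdint.h>

// Stand-in for the static in ep-sample-profiler.c; _time_period_is_set would
// stay a plain field if it is only touched while holding the config lock.
static _Atomic uint64_t _sampling_rate_in_ns = 1000000; // 1 ms default

void
ep_sample_profiler_set_sampling_rate (uint64_t nanoseconds)
{
	// The sample thread reads this concurrently without the config lock, so
	// the write must be a volatile/atomic store; release ordering is enough
	// because the reader only needs a fully written value.
	atomic_store_explicit (&_sampling_rate_in_ns, nanoseconds, memory_order_release);
}

// Read side, e.g. when the sample thread picks its next sleep duration.
uint64_t
current_sampling_rate_ns (void)
{
	return atomic_load_explicit (&_sampling_rate_in_ns, memory_order_acquire);
}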

@brianrob
Member

Agree with @lateralusX's comments. One thing on how to express the value - there is prior art here in both directions. ETW uses a sampling rate (e.g. every 1ms), and perf uses Hz. From my perspective, either is fine, but I think the current APIs support the former.

	void* callback_data)
{
	if (filter_data) {
		ep_char8_t *filter_data_char = (ep_char8_t *)((uintptr_t)(filter_data->ptr));
Member

Do I read that this is implemented as filter data that is passed to the provider? If so, I have a bit of a concern around how to de-duplicate requests when there are multiple sessions. Right now, this is a single value. If we are going to allow this to be specified on provider enable, we should probably have some behavior that de-conflicts requests. I'm not sure if that means taking the most frequent rate of all session requests, or something else. Another option is to consider having an API outside of enablement that sets this, at which point it can remain a global, and that is expressed at the API level.

@noahfalk
Member

noahfalk commented Apr 12, 2023

If we make it possible to change it from sessions, then we will affect the sample frequency for all running sessions, so that is a side effect to be aware of implementing this.

This was a significant reason why I didn't make the current implementation configurable. We tried to do configurable polling intervals for EventCounters but I don't feel like it has worked out that well. Things are OK as long as only one tool is doing work but as soon as a 2nd one shows up with different expectations about the interval things get very messy.

Sorry I missed the discussion on #82939 when it happened earlier, or I would have interjected at that point. How would folks feel about a different approach to solving this problem that won't cause multiple sessions to compete over setting a global sample frequency? Today we have Informational-level events that are emitted at 1000 Hz; what if we added a couple of new events at a lower severity level, but filtered with keywords, for emitting samples at 1 Hz, 10 Hz, and 100 Hz? For example, a tool that wants to subscribe to 10 Hz CPU samples would set level=critical,keywords=10HzCpuSamples. Internally we can dial the sampling rate up or down based on the highest requested frequency, but each session would always get exactly the frequency of events it asked for.

@davmason
Contributor

How would folks feel about a different approach to solving this problem that won't cause multiple sessions to compete over setting a global sample frequency?

How often do we expect to have multiple sessions all requesting samples? I was thinking CPU sampling is fairly high overhead and you wouldn't normally run two profilers at once.

When I think about potential objections that profilers would have over having the interval set to something other than what they specified, it seems like the perf impact on the process is the main concern and how many events they get is a lesser one. Then even if we do fancy things to make sure everybody gets samples at the right rate it will still have a big impact on the process if we are sampling at a high frequency.

I'm willing to go with the community vote if other people disagree, but it doesn't seem that bad to have it globally configurable and if we had to do something more intricate I would vote for highest frequency winning. Our policy could be that setting the sample rate means we will sample at least at that interval, though it could be more often.

@brianrob
Member

@noahfalk, I think your proposal of multiple keywords is interesting. At the same time, I think it's fine to just keep a global sampling rate. I think the key is to change how it's set, so that it's clear that it's not a per-session thing - it's a global.

@pedrobsaila
Contributor Author

pedrobsaila commented Apr 12, 2023

So if I can summarize the above discussions:

  • I should validate that the value SampleProfilerIntervalMS falls within some acceptable range: 1000 samples/s down to 1 sample/s
  • SampleProfilerIntervalMS should be read from a global configuration: any idea for the naming? Maybe DOTNET_EventPipeSampleProfilerIntervalMS?
  • Do I keep the session setting?
    • If yes:
      • I should take the highest value proposed by any session
      • Implement locking for _sampling_rate_in_ns and _time_period_is_set
      • Implement volatile read/write for _sampling_rate_in_ns and _time_period_is_set

@ghost ghost removed the needs-author-action An issue or pull request that requires more info or actions from the author. label Apr 12, 2023
@noahfalk
Member

How often do we expect to have multiple sessions all requesting samples? I was thinking CPU sampling is fairly high overhead and you wouldn't normally run two profilers at once

CPU sampling at 1000 Hz is high overhead, but at 1 or 10 Hz it's probably negligible. I agree that today it would be uncommon to have multiple sessions because of the overhead, but once we give users a lower-overhead option I don't see why they wouldn't change their behavior in response. I'd imagine that a major use-case is to create long-lived, low-overhead profiling tools that monitor in the background. I expect those tools will mostly run in addition to other tools rather than replace them. So any scenario today with one profiler attached might in the future be two profilers with different desired sampling rates.

When I think about potential objections that profilers would have over having the interval set to something other than what they specified, it seems like the perf impact on the process is the main concern and how many events they get is a lesser one

I think number of events is going to be important to them for a few reasons:

  • Long-lived profilers are capable of producing large volumes of data which has to be transmitted and stored. If a solution is designed to handle X data/sec and without warning it is subjected to 100X that amount of data it could fail entirely, have significant performance impact from the tooling, or just shock the users with a staggeringly large bill.
  • When analyzing the data there is usually an assumption that every sample represents a uniform time interval. If the sample interval can change without notice any statistics based on uniform sampling will be invalid.

Assuming we do extra work to notify profilers what sample rate they are getting then yes, it is possible for them to contend with this by doing further down-sampling, but that is extra complexity in every tool. The need to re-sample won't be apparent from testing in isolation so I think there is a pretty good chance profiling tool authors may miss the requirement entirely.

I think the key is to change how it's set, so that it's clear that it's not a per-session thing - it's a global.

I'm not sure if you are proposing a global that is updated based on session-specific requests as the current PR has it, or a global that is configured via some other mechanism independent of EventPipe sessions? Either way, it's not yet clear to me how a profiling tool author can make a robust tool when the sampling rate may differ from the rate they requested/expected given the current design. I do think we could keep adjusting the design until we made it work, but I worry that once all those adjustments happen the final solution will be more complex, more work, and more error prone than offering a few alternate fixed-rate events.

These are some of the questions that come to mind:

  1. How will a profiler know that the rate has changed so that it can do downsampling?
  2. How do we expect dynamic rate information is encoded in the NetTrace file given the current format only encodes the value once in the header?
  3. The current proposed policy in the PR is last-set-value wins; I don't think that will work because tools can't up-sample missing data. Highest-value wins seems viable.
  4. Assuming we have a highest-value-wins policy, what happens when the session requesting that highest value disconnects? Ideally I think we'd want a recomputation of the new highest value at that point, otherwise attaching a short-lived high-sample-rate tool to an app that also has a long-lived low rate tool will permanently put the app into a high perf overhead state. The previously requested rates for all open sessions won't be info that is directly passed to a provider callback but presumably we can grovel for that data using internal native EventPipe APIs.
  5. Do we allow rates above 1000Hz? Ideally I'd suggest no. If we do, it's a breaking change for all the existing tools and they need to add downsampling logic.

So if I can summarize the above discussions...

I don't think we've reached a consensus on the direction yet, which would be needed to answer these finer-grained implementation questions, but I do expect the discussion will get us there. Thanks all!

@noahfalk
Member

@brianrob @davmason - I think the discussion is waiting on your feedback at the moment.

@lateralusX
Member

lateralusX commented Apr 19, 2023

A couple of thoughts:

  • Currently the sample profiler runs in one thread that sleeps for the sample frequency interval; when it wakes up it does a STW (stop the world), takes all threads' stack traces, writes them into EventPipe, does a RTW (restart the world), and sleeps until the next sample interval. So running multiple profiling sessions doesn't change how the current sample profiler works: it will still wake every 1 ms, STW, unwind all threads, write events into EventPipe, and restart the threads. What it does increase is the amount of data distributed through EventPipe, since the same sampling data will be duplicated into each session requesting it; that is what adds overhead when running multiple profiling sessions, while the rest of the activities around sampling (STW, unwind thread stacks, write events, RTW) stay constant.

  • Based on the above discussions I get the feeling that the sampling interval needs to be set per session, and then the sample thread should sample based on the session with the highest frequency and only write events into the sessions that have "expired" their next sampling interval. That is how other tools (like perf) work: each session sets its own sampling frequency. For that to work, EventPipe probably needs to be more aware of the sample profiler, since EventPipe will need to decide which sessions should get sample events and know the current frequency of the sample profiler as well as the frequency of each session. The frequency the sample thread runs at could also become a little complicated if we allow arbitrary sample frequencies within a range. For example, if one session requests 250 Hz (4 ms) and another 100 Hz (10 ms), and the profiler thread runs at the highest frequency, 250 Hz, it won't be able to correctly handle the 100 Hz session, so it would need different heuristics to handle frequencies that don't follow a common pattern. There is also a difference between sleeping and doing the full sampling, so an alternative could be for the sample profiler thread to wake up more frequently, ask EventPipe whether any sessions need sampling at the current tick, and if not, sleep again. In the above scenario the sleep time could be a constant 500 Hz and most of the wake-ups wouldn't do any sampling at all. Alternatively the sampling thread needs to vary its sleep time to make sure all frequencies are correctly covered, and there is always the option to limit which frequencies can be used so that the sample thread can use a constant sleep time and always sample when waking up. Having sessions run at different sample rates that don't follow a fixed pattern can also increase the effective sampling rate: in the above example, running at 250 Hz would give you 250 samples per second, but if we also need the 100 Hz session to be accurate we would need to sample at 4 ms, 8 ms, 10 ms, 12 ms, 16 ms, 20 ms, 24 ms, 28 ms, 30 ms, 32 ms, ..., so in reality the sample profiler needs to run at a higher frequency than 250 Hz to make sure the 100 Hz session gets the right frequency of samples.

  • From what I understand, the main problem (Expose setting CPU SamplingProfiler rate #82939) is that the default sampling rate is too high for some workloads. The ideal would be to set it per session as discussed above, but making that a solid solution will (as this discussion exemplifies) need additional thought and might end up being a non-trivial implementation. Could a short-term alternative be to implement the ability to adjust the default sample frequency through an env variable, like DOTNET_SampleProfilerFrequency (in Hz or ms depending on what we decide to use), replacing the current 1 ms default during startup (a minimal sketch follows below)? That way workloads would at least have a way to decide the default sampling frequency. There is still no way to present this to tools, since there is nothing in the nettrace file that includes that information, but I'm not sure whether tools currently rely on our 1000 Hz default and would break if another default is used. The sampling frequency is still only a best effort in the runtime, since we set the sampling thread's sleep time from the frequency (1 ms) but don't account for the overhead of doing the sampling, meaning we already run at some undefined value < 1000 Hz.
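If the short-term env-var route were taken, a minimal sketch of reading it at startup could look like the following; DOTNET_SampleProfilerFrequency is only the name suggested above, not an existing setting, and the real code would go through the EventPipe config helpers rather than getenv:

#include <stdint.h>
#include <stdlib.h>

extern void ep_sample_profiler_set_sampling_rate (uint64_t rate_ns);

// Hypothetical startup hook: override the 1 ms default from the environment.
static void
apply_sample_profiler_env_default (void)
{
	const char *value = getenv ("DOTNET_SampleProfilerFrequency"); // suggested name only
	if (!value)
		return;

	uint64_t hz = strtoull (value, NULL, 10);
	if (hz == 0 || hz > 1000)
		return; // ignore unparsable or out-of-range values, keep the 1000 Hz default

	ep_sample_profiler_set_sampling_rate (1000000000ull / hz); // Hz -> ns interval
}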

@noahfalk
Member

Thanks @lateralusX!

Based on above discussions I get the feeling that sampling interval needs to be set per session and then let the sample thread sample based on the session with highest frequency and only write the events into the sessions that "expired" their next sampling interval

I'd be happy with that type of plan.

The frequency the sample thread runs at could also become a little complicated if we decide to allow any sample frequencies within a specific range

Yeah, I was attempting to simplify this issue of sampling intervals that aren't multiples of one another by restricting the set of allowed values. I was guessing that multiples of 10 provided enough flexibility for tools without adding an unnecessarily large set of options. I'm not inherently opposed to other finer-grained options if folks believe they are needed, as long as the design addresses how we implement it, how we keep track of the rate in the nettrace file, and how we prevent multiple sessions from adversely affecting each other. Multiples have the nice property that the number of samples is always the max of the session sampling rates, but even if we had a looser bound where number_of_samples = max(1000, sum_of_sample_rate_in_all_sessions) that still feels OK. It should mean that, at worst, starting a new sampling session with rate X might increase overhead by X additional samples/sec.

There is also a diff between sleeping and doing the full sampling, so an alternative could be for the sample profile thread to wake up more frequently, ask EventPipe if there are any sessions that would need sampling at current tick, and if not, sleep again.

I'm not sure if you are proposing these extra wake-ups that take no samples to make the implementation simpler or to create a more normalized load over time? It wouldn't be my preference, but as long as the wake-up frequency doesn't exceed 1000 Hz I could take comfort that it's not a regression from the status quo. My ideal solution for wake-ups would be that we do as few as necessary to still deliver the requested samples. In the general case I assume that means calculating the next wake-up time for each session and then sleeping for the min() of those times.

Could an short term alternative be to implement ability to adjust default sample frequency through a env variable, like DOTNET_SampleProfilerFrequency

It makes me nervous if nothing is constraining the usage. The most likely usage I see for low-overhead sampling is that some monitoring tool wants to do continuous profiling in production over long periods of time (a completely reasonable goal). The moment such an env var exists, nothing stops those monitoring tools from setting it at startup and treating it as a permanent solution. Eventually devs might try to use other tracing tools just as they do today, but now those tools either don't work at all or they give very misleading results. In a simple world there would be no confusion, because users would always set the env var with full knowledge and acceptance of the consequences and no one would ever be surprised later. However, the scenarios I anticipate in practice for non-trivial projects involve different engineers and tools, each with limited knowledge of the others' actions, and one person/tool owner can easily make a choice without understanding or properly accounting for the impact it will have. Once the impact is eventually discovered, it takes effort to root-cause the issue, and once understood the decision can still be difficult to back out because people have started depending on that ongoing monitoring. So from my perspective the quick solution comes with substantial risks and I am advocating that it's not a good tradeoff.

@pedrobsaila - Sorry that sorting out the design is taking longer than I had hoped and the discussion is backtracking from what presumably looked like a more settled plan earlier. Do you have preferences for how you hope this design will go, or a specific goal you are aiming for with this work?

@lateralusX
Member

lateralusX commented Apr 21, 2023

I'm not sure if you are proposing these extra wake-ups that take no samples to make the implementation simpler or that would create more normalized load over time? It wouldn't be my preference but as long as the wake-up frequency doesn't exceed 1000Hz I could take comfort that its not a regression from the status quo. My ideal solution for wakeups would be that we do as few as necessary to still deliver the requested samples. In the general case I assume that means calculating the next wake up time for each session and then sleeping for the min() of those times.

Yes, the extra wake-ups were just to check whether there is a need to sample; if no session has hit its frequency, nothing is done and the sample profiler thread can go back to sleep. As part of that it could look through all sessions' frequencies and decide the minimum time to sleep before it needs to wake up again, making sure every running session's sample frequency is correctly handled. Adding/removing sampling sessions probably needs to wake up the sample profiler thread, since it will need to recalculate its sleep time, meaning it probably needs to wait on an event instead of just sleeping as it currently does.

@lateralusX
Member

lateralusX commented Apr 21, 2023

how we keep track of the rate in the nettrace file

Looks like we currently have a field in the nettrace file header that carries the sampling rate expressed in nanoseconds. In ep_file_alloc we do:

instance->sampling_rate_in_ns = (uint32_t)ep_sample_profiler_get_sampling_rate ();

so we should be able to set that field to the session's real sampling frequency, and tools should be able to tell what sampling frequency the current nettrace file used.

@pedrobsaila
Contributor Author

@pedrobsaila - Sorry that sorting the design is taking longer than I had hoped and the discussion is backtracking from what presumably looked like a more settled plan earlier.

No worries. It's better to take the time to refine the solution well than to rush into something bad for the users.

Do you have preferences how you hope this design will go or a specific goal you are aiming for with this work?

I don't have a strong opinion about the design because this is the first time I've worked on EventPipe, so I lack experience in this area. For the time being, I would rather follow the interesting discussion ^^. If we settle on a solution in the end, I would be happy to contribute to it.

@noahfalk
Member

I chatted offline with @brianrob and @davmason, and together with @lateralusX's suggestions above I think we are all converging on a similar design. Let me describe it here and then folks can either confirm this sounds good or raise issues/questions/suggestions:

  • Each session will be allowed to have its own sampling rate, specified using the SampleProfilerIntervalMS filter argument. Any integer value from [1,1000] is legal.
  • We continue to have a single CPU sampling event definition, but if sessions request different sampling rates then they will receive this event at different times. Each session receives the event at the sampling interval that it requested, regardless of the intervals requested by other sessions. For example if session A requests 10Hz events and session B requests 20 Hz events then at T=0 the event is sent to both sessions, at T=50ms it is sent to session B only, and at T=100ms it is again sent to both. Having the provider decide which sessions to send an event to is not something that is typically achievable for a managed EventSource, but because the sampling provider is implemented internally to EventPipe it has access to APIs that can let it do this.
  • We can set the sampling_rate_in_ns field inside the nettrace file to the requested rate, but I learned from @brianrob that TraceEvent actually ignores that value and uses the observed timestamp interval between CPU sample events instead. (So my worry about how the interval is persisted appears to have been unnecessary)

I looked at the code a bit to offer some suggestions on how to implement this design, but @pedrobsaila, if anything doesn't feel like it makes sense or you think there is a better way to do it, I'm happy to chat. It's entirely possible I overlooked things or made mistakes.

To track session sampling rates:

  • In the provider_callback, the EventPipeSessionID parameter can be cast to EventPipeSession* (I don't think there is any value in us treating the session as an opaque ID in this callback, it's just a historical artifact of how the code evolved)
  • You can convert the EventPipeSessionID to an index between [0,63] using ep_session_get_index().
  • Store the interval in an array indexed by the session index
  • During the disable callback use the session index to know which entry to remove
  • Signal an AutoResetEvent to wake up the wait loop, which might need to recalculate the wait time based on a session interval being added/removed (see the sketch below)
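A rough sketch of that bookkeeping; the array, the helper names, and the wake-up wrapper are placeholders, and only ep_session_get_index() and the 64-session limit come from the description above:

#include <stdint.h>

#define MAX_SESSIONS 64

// Requested interval per session index; 0 means the session is not sampling.
static uint64_t _session_interval_ns [MAX_SESSIONS];

// Placeholder for signaling whatever auto-reset event the sample thread waits on.
extern void sample_thread_wake (void);

// Called from the provider enable callback after parsing the filter data.
static void
session_sampling_enabled (uint32_t session_index, uint64_t interval_ns)
{
	if (session_index >= MAX_SESSIONS)
		return;
	_session_interval_ns [session_index] = interval_ns;
	sample_thread_wake (); // the sleep time may need to shrink
}

// Called from the provider disable callback with the same index.
static void
session_sampling_disabled (uint32_t session_index)
{
	if (session_index >= MAX_SESSIONS)
		return;
	_session_interval_ns [session_index] = 0;
	sample_thread_wake (); // the sleep time may be allowed to grow
}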

The wait loop:

  • Inside the sampling_thread loop you would need to iterate through the cached array of session intervals and calculate how far in the future each one needs to be triggered.
  • For any that need to be triggered now, do that (see below). If the loop woke up because the wait event was signaled, it's possible that no sessions are ready to be triggered.
  • Wait on the AutoResetEvent with a timeout set for whichever session(s) will need to trigger first in the future (a sketch of this loop follows below)
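And a sketch of the wait-loop arithmetic, reusing the hypothetical _session_interval_ns array from the previous sketch and adding a per-session "next due" timestamp; all names are placeholders:

#include <stdint.h>

#define MAX_SESSIONS 64

extern uint64_t _session_interval_ns [MAX_SESSIONS]; // from the previous sketch
static uint64_t _session_next_due_ns [MAX_SESSIONS];

// Returns the bitmask of sessions due for a sample at now_ns and, through
// wait_ns, how long to sleep before the next session becomes due.
static uint64_t
compute_due_sessions (uint64_t now_ns, uint64_t *wait_ns)
{
	uint64_t due_mask = 0;
	uint64_t min_wait = UINT64_MAX;

	for (uint32_t i = 0; i < MAX_SESSIONS; ++i) {
		uint64_t interval = _session_interval_ns [i];
		if (interval == 0)
			continue; // session is not sampling

		if (_session_next_due_ns [i] <= now_ns) {
			due_mask |= (uint64_t)1 << i;                 // sample this session now
			_session_next_due_ns [i] = now_ns + interval; // schedule the next one
		}

		uint64_t wait = _session_next_due_ns [i] - now_ns;
		if (wait < min_wait)
			min_wait = wait;
	}

	*wait_ns = min_wait; // UINT64_MAX means "no sampling sessions, wait until signaled"
	return due_mask;
}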

To write sampling events to selected sessions rather than all sessions:

  • Inside the sampling_thread loop calculate which sessions are scheduled to receive the sampling event, encode that data in a 64bit mask and pass the mask to ep_rt_sample_profiler_write_sampling_event_for_threads using a new parameter.
  • Continue propagating that mask along until it eventually reaches the call to ep_write_sample_profile_event
  • In ep_write_sample_profile_event instead of calling write_event_2, call some new function (write_event_filtered?) that takes the 64 bit session mask. write_event_filtered would be implemented similar to write_event_2 except all the rundown stuff isn't needed and the for loop that iterates through sessions should only check the sessions that have bits set in the mask.

Does this sound good?

@lateralusX
Member

lateralusX commented Apr 25, 2023

  • In ep_write_sample_profile_event instead of calling write_event_2, call some new function (write_event_filtered?) that takes the 64 bit session mask. write_event_filtered would be implemented similar to write_event_2 except all the rundown stuff isn't needed and the for loop that iterates through sessions should only check the sessions that have bits set in the mask.

I think we can just extend write_event_2 (an internal function in ep.c) with a 64-bit variable that contains the bitmask of sessions to write the event into. We can have a define that can be used as the default, enabling all sessions for the event. Inside write_event_2 we can extend this check:

if ((ep_volatile_load_allow_write () & ((uint64_t)1 << i)) == 0)

to also check the passed-in bitmask to make sure the session should be written into for this specific call, for example:
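A sketch of that extended check; session_mask is the hypothetical new parameter and EP_SESSION_MASK_ALL a hypothetical all-sessions default, while ep_volatile_load_allow_write and the per-session bit layout are as quoted above:

#include <stdint.h>

// Hypothetical default argument: write into every enabled session.
#define EP_SESSION_MASK_ALL UINT64_MAX

// Fragment from inside write_event_2, assuming a new uint64_t session_mask parameter.
for (uint32_t i = 0; i < 64; ++i) { // 64 == max number of EventPipe sessions
	// Skip sessions that are not currently writable, or not selected by the caller.
	if ((ep_volatile_load_allow_write () & session_mask & ((uint64_t)1 << i)) == 0)
		continue;
	// ... existing per-session event write ...
}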

Does this sound good?

The plan sounds reasonable to me. One thing to keep in mind is that we don't want to take locks in the sample profiler loop to sync changes, so there will probably be an inherent race between enabling/disabling profiling sessions and the sample profiler's calculations. We might still need to issue memory barriers to make sure we don't see changed memory out of order. There is always the underlying filter on the event that takes place inside EventPipe, so that will take care of the case where a new session gets created on an old profiling session slot just after the sample profiler has calculated its bitmask. There is also the case where a new profiling session with a different sample rate ends up on the same slot as an old profiling session just after the sample profiler has calculated its bitmask, and probably a couple of other scenarios. As long as we can tolerate potential small frequency variations in the first sample events sent to a profiling session, it will work. The current sample profiler implementation is "best effort" anyway, since the frequency sets the sleep time of the sampling thread but doesn't account for the time it takes to sample.

@noahfalk
Member

As long as we can tolerate potential smaller frequency variations on the first sample events send to a profiling session, it will work

Yeah I'm not expecting that level of error is likely to cause any practical problems. In the unlikely case that it was a real issue we could adjust the implementation to handle those corner cases more rigorously.

@tommcdon
Member

Hello @pedrobsaila! Thanks for the community contribution and I hope that the feedback has been helpful on steering it in the right direction. It's been a while since there has been activity on this PR. Would you like to change the status to "Draft" while work is being done on it, or do you feel we are close to addressing the feedback?

@tommcdon tommcdon added the needs-author-action An issue or pull request that requires more info or actions from the author. label May 22, 2023
@pedrobsaila
Contributor Author

sorry I've been quite busy this last month. Just resumed working on it. I'll make it a draft until it's ready for review.

@ghost ghost removed the needs-author-action An issue or pull request that requires more info or actions from the author. label May 22, 2023
@pedrobsaila pedrobsaila marked this pull request as draft May 22, 2023 20:17
@tommcdon
Member

sorry I've been quite busy this last month. Just resumed working on it. I'll make it a draft until it's ready for review.

Great to hear!

@ghost ghost closed this Jun 21, 2023
@ghost

ghost commented Jun 21, 2023

Draft Pull Request was automatically closed for 30 days of inactivity. Please let us know if you'd like to reopen it.

@ghost ghost locked as resolved and limited conversation to collaborators Jul 22, 2023
This pull request was closed.