
Conversation

@ke-zhang-rd
Contributor

Description

Let the user's log handler have WARNING as its default

Motivation and Context

There are two groups of handlers: the user's file/stdout handler and the Elastic handler. The two can be configured independently. Here the user's handler is updated to use ERROR as the default while still allowing customization.

How Has This Been Tested?

@danielballan
Member

I was thinking we would do this in NSLS-II/nslsii#49. I think leaving the default at NOTSET is the right behavior for the library, and we should keep our facility-specific preferences in the nslsii repo.

@codecov-io

codecov-io commented Jan 15, 2019

Codecov Report

Merging #1140 into master will increase coverage by 0.05%.
The diff coverage is 100%.


@@            Coverage Diff             @@
##           master    #1140      +/-   ##
==========================================
+ Coverage   89.32%   89.38%   +0.05%     
==========================================
  Files          51       51              
  Lines        8552     8553       +1     
==========================================
+ Hits         7639     7645       +6     
+ Misses        913      908       -5
Impacted Files Coverage Δ
bluesky/log.py 75.32% <100%> (+6.9%) ⬆️

Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 8852741...9069d57.

@danielballan
Member

On second thought, we may also want to expose the level here.

@tacaswell
Contributor

👍 to adding level as an input to this function.

@danielballan
Member

I think exposing level here has no downside. I think the default of that level needs more thought. The current default (NOTSET) means that the logger controls what gets through. If I type

RE.log.setLevel('DEBUG')

I immediately see log messages. With this change, what's the easiest way to get that functionality?
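
For reference, a minimal sketch of that current behavior (names are illustrative; 'bluesky' stands in for the loggers in question):

import logging

# A logger left at NOTSET defers to its ancestors' effective level
# (WARNING at the root by default), so DEBUG records are dropped until
# setLevel is called on the logger itself.
log = logging.getLogger('bluesky')
handler = logging.StreamHandler()  # stand-in for the handler this PR touches
handler.setFormatter(logging.Formatter('%(name)s %(levelname)s: %(message)s'))
log.addHandler(handler)

log.debug('dropped: effective level is WARNING, inherited from the root')
log.setLevel('DEBUG')   # the RE.log.setLevel('DEBUG') pattern
log.debug('emitted immediately')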

@teddyrendahl
Collaborator

For what it's worth, I find it best practice to always set the level on the handler. By setting the RE.log level, you affect any Handler that may or may not be subscribed.

If this is agreed upon, it seems like RE.log might want to be switched to affect the handler and/or the bluesky.log.handler?
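
A minimal sketch of that handler-level pattern, with illustrative names and a FileHandler standing in for an always-on handler:

import logging

# The logger is opened wide and each handler enforces its own verbosity,
# so tuning the console handler does not disturb any other handler.
log = logging.getLogger('bluesky')
log.setLevel(logging.DEBUG)          # let everything reach the handlers

console = logging.StreamHandler()
console.setLevel(logging.DEBUG)      # chatty, for interactive debugging
log.addHandler(console)

archive = logging.FileHandler('bluesky.log')  # stand-in for an always-on handler
archive.setLevel(logging.WARNING)    # unaffected by the console's level
log.addHandler(archive)

log.debug('console only')
log.warning('console and archive')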

@danielballan
Member

For what it's worth, I find it best practice to always set the level on the handler. By setting the RE.log level, you affect any Handler that may or may not be subscribed.

That's a solid point and a major shortcoming of what we currently recommend and make easy in bluesky, ophyd, and caproto. I think we want:

  1. The ability to attach a handler that has its own opinions about verbosity
  2. The ability for the user to send debug output to stdout or a file, and to narrow down which components they want logs from. A common example is in caproto: getting logs from just one PV rather than the whole system.

We need to settle on a good recommendation for how users should do (2) without interfering with (1). Our current recommendation -- adjusting the levels of the loggers, like pv.log.setLevel(...) -- is not suitable.

If this is agreed upon, it seems like RE.log might want to be switched to affect the handler and/or the bluesky.log.handler?

I don't think this is quite right either. RE.log, PV.log and friends currently point to Logger instances, and I don't think we have to change that. Instead, I think we need to recommend that users add handlers to specific loggers rather than to the top-level logger. Something like this:

set_handler(level='DEBUG')
pv.log.addHandler(caproto.current_handler)

(I'm using caproto for this argument because its fine-grained logs make it easy to imagine practical, interesting examples.)

But then that prompts the question of how to get DEBUG logs from some component and INFO logs from a different component. Perhaps instead of adding level to set_handler, we should have a family of handlers in a SimpleNamespace:

pv1.log.addHandler(caproto.current_handler.debug)
pv1.context.log.addHandler(caproto.current_handler.info)

That could potentially be elided with a convenience method that knows about the global handlers:

pv1.handle_logs('DEBUG')
pv1.context.handle_logs('DEBUG')
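
To make that concrete, a hypothetical sketch of what such a SimpleNamespace family might look like (current_handler and its construction are assumptions here, not existing caproto API):

import logging
from types import SimpleNamespace

# One pre-built handler per level, collected in a SimpleNamespace so users
# can opt a specific granular logger in at the verbosity they want.
def _make_handler(level):
    handler = logging.StreamHandler()
    handler.setLevel(level)
    handler.setFormatter(
        logging.Formatter('%(name)s %(levelname)s: %(message)s'))
    return handler

current_handler = SimpleNamespace(
    debug=_make_handler(logging.DEBUG),
    info=_make_handler(logging.INFO),
    warning=_make_handler(logging.WARNING),
)

# Usage, per the snippet above:
# some_logger.addHandler(current_handler.debug)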

@teddyrendahl
Collaborator

I don't think this is quite right either. RE.log, PV.log and friends currently point to Logger instances, and I don't think we have to change that.

This is correct; I think my initial suggestion was off base. What I was really thinking of is: wasn't there some RE.set_debug_mode or RE.debug? Maybe I'm going crazy. If not, I think it might be a good idea to have an optional function that someone could run requiring no knowledge of logging.

we should have a family of handlers in a SimpleNamespace:

I kind of like this ... you could even start adding logs to the family of handlers automatically in your debug mode

for obj in RE._obj_seen:
    if hasattr(obj, 'log'):
        for handler in RE.handlers:
            obj.log.addHandler(handler)

@danielballan
Member

danielballan commented Jan 19, 2019

@teddyrendahl I don't want to rule out the possibility that you might be "going crazy" :-D but you are correct, there is RE.verbose. (That operated by disabling the logger, which doesn't solve any of our problems.)

Usability aside, our viable options seem to be:

  1. Adjust levels on the loggers (current approach)
  2. Leave the loggers at NOTSET. Make a family of handlers (one per level) and add the appropriate handler specifically to each granular logger of interest, not to the top-level loggers.
  3. Apply handlers to the top-level loggers, but use logging Filters to opt out of messages that are not interesting. (Thanks @ke-zhang-rd for investigating this.)

(1) is a nonstarter because it would block our ability to run always-on Big Brother loggers in the background (e.g. for Elastic). The loggers themselves must be left at NOTSET.
(2) is the "family of handlers in a SimpleNamespace" thing that I pitched above and @teddyrendahl sounded a positive note on as well.
(3) To quote @ke-zhang-rd from our DMs: "Filters are designed for the case where ~90% of the records entering the filter return True and only ~10% return False, not the case where only 10% of records get logged and the other 90% are discarded, which wastes too much time." I think that's right. Adding Filters employs yet another concept and yet another layer, and I don't think it solves the problem any better than (2).
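
For comparison, a minimal sketch of what option (3) would look like; the pvname record attribute is a hypothetical piece of context:

import logging

# A Filter attached to a top-level handler that opts out of
# uninteresting records.
class OnlyThisPV(logging.Filter):
    def __init__(self, pvname):
        super().__init__()
        self.pvname = pvname

    def filter(self, record):
        # Keep only records about the PV of interest; drop the rest.
        return getattr(record, 'pvname', None) == self.pvname

handler = logging.StreamHandler()
handler.addFilter(OnlyThisPV('my:pv'))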

you could even start adding logs to the family of handlers automatically in your debug mode

That's a very interesting idea. I would like to avoid adding a "BLUESKY DEBUG MODE" per se, but I like what your code illustrates: the RunEngine always propagates its logging handlers (if any) to each obj it sees. (It should probably then remove them inside finally:.)
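
A sketch of that propagation-with-cleanup idea, assuming hypothetical RE._obj_seen bookkeeping and using the RunEngine's own logger handlers:

def run_with_propagated_handlers(RE, plan):
    added = []
    try:
        for obj in RE._obj_seen:          # assumed bookkeeping attribute
            if hasattr(obj, 'log'):
                for handler in RE.log.handlers:
                    obj.log.addHandler(handler)
                    added.append((obj.log, handler))
        return RE(plan)
    finally:
        # Undo the propagation so handlers do not accumulate across runs.
        for logger, handler in added:
            logger.removeHandler(handler)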

Per @klauer's thoughts here, we should first make a list of scenarios and then evaluate if the proposed scheme(s) make them easy/possible for the user to implement. Here are some ideas.

  1. Track a specific PV through its search/connect/read+write lifecycle, just inside caproto.
  2. Turn on RE msg logging (akin to what we used to recommend RE.msg_hook = print for before we had good loggers).
  3. Turn on RE document logging but not msg logging.
  4. Turn on verbose ophyd logging for a specific Device.
  5. Propagate the handlers set up for that specific Device down to the caproto PVs inside it, so we see logs from ophyd and all PV traffic only related to that Device.

I have chosen these because I believe only one of them (the second one) is even possible currently. I would like to see a demonstration that they are possible by implementing the "SimpleNamespace of handlers" along with some handler-propagation code along the lines of @teddyrendahl's code snippet. Then we can begin a second discussion around adding new API to make the usage easier in the spirit of RE.verbose or my pv1.handle_logs('DEBUG') suggestion above.

@teddyrendahl
Collaborator

Leave the loggers at NOTSET. Make a family of handlers (one per level) and add the appropriate handler specifically to each granular logger of interest, not to the top-level loggers.

Do you see every library having its own family? caproto.handlers, bluesky.handlers, etc. could get messy. Or do you want to move this into some sort of logging utility module? The latter might be nice so we don't have to copy the general logging setup between repositories.

@klauer
Contributor

klauer commented Jan 23, 2019

I think ignoring the existence of LoggerAdapter and, perhaps, logging Filters could be a mistake at this critical point in deciding the new logging architecture.

Take, for example, a caproto client with a PV. The client PV could have a LoggerAdapter that adds contextual information (as suggested in the Python logging cookbook) such as logging_record.pvname, logging_record.address, and so on. With this information on the log records themselves - from which the formatted message users see is generated - it is completely within the user's power to filter based on pvname, IOC host, the contents of the log messages themselves, time, or any arbitrary condition that's needed. User-friendly filters and wrappers could easily be written.

The logger name-based method works when all of the information one needs to put a message in a bucket is in the logger name itself. Additionally, the howto linked above even says not to do what we are doing (or proposing to do): "While it might be tempting to create Logger instances on a per-connection basis, this is not a good idea because these instances are not garbage collected."

Even if the filter argument above isn't terribly compelling, I can't see a scenario in which additional information tacked on by LoggerAdapters could ever be a bad thing.
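
A minimal sketch of that stdlib LoggerAdapter pattern; the logger name, pvname, and address are illustrative:

import logging

# Contextual fields ride along on each record, so filters can key on
# pvname or address instead of on logger names.
logger = logging.getLogger('caproto.ch')
logger.setLevel(logging.DEBUG)
adapter = logging.LoggerAdapter(
    logger, {'pvname': 'my:pv', 'address': ('127.0.0.1', 5064)})

handler = logging.StreamHandler()
handler.setFormatter(
    logging.Formatter('[%(pvname)s %(address)s] %(levelname)s: %(message)s'))
# Filters may be plain callables; this one keys on a record attribute:
handler.addFilter(lambda record: getattr(record, 'pvname', '') == 'my:pv')
logger.addHandler(handler)

adapter.debug('search request sent')  # record carries pvname and address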

@danielballan
Member

Having hashed out these ideas in caproto and adopted the good suggestions above --- minus my eccentric "family of handlers" suggestion, which fortunately never saw the light of day :-D --- we are finally circling back to bluesky to apply those ideas here. See #1185.
