force-pushed from 8ff308d to 0bf59f3
force-pushed from 0bf59f3 to c5e91fc
I think we need both. Having 1: make sure that the base64-encoded payload unpacks to a byte array,
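The check described here — that a base64-encoded payload really unpacks to a byte array — can be sketched as a small validation helper. This is an illustrative sketch, not code from the spec; the function name `decode_payload` is hypothetical.

```python
import base64
import binascii


def decode_payload(payload: str) -> bytes:
    """Decode a base64 payload, rejecting anything that is not valid base64.

    Hypothetical helper for illustration; not part of the spec under review.
    """
    try:
        # validate=True rejects non-alphabet characters instead of
        # silently discarding them, so malformed input fails loudly.
        return base64.b64decode(payload, validate=True)
    except binascii.Error as exc:
        raise ValueError(f"payload is not valid base64: {exc}") from exc


assert decode_payload("aGVsbG8=") == b"hello"
```

The `validate=True` flag matters: without it, `b64decode` drops invalid characters and can return garbage bytes instead of raising.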
force-pushed from 913f922 to fdfbf0a
force-pushed from fdfbf0a to ffaa83f
stvoutsin
left a comment
Overall I think the core spec looks solid, just a few thoughts/questions:
- I generally find descriptive names slightly better instead of `Dixon` and `Keene`, but maybe that's just me.
- Is it worth expanding how third-party support would work? How are plugins discovered at runtime?
- What type flows between the parsers in the chain? If a parser calls `Keene` in open-notebook mode and gets back a URL, what does the next parser receive? Should this be clarified?
- How do the `Keene` error codes propagate to the user? What would a user see from a 500 error in `Keene`?
- Is it worth clarifying the choice of `dict[str, byte[]]` over `dict[str, str]`?
- For cases where the file exists, should we return 409 instead of 403?
- The current `ensure_running_lab` hook redirects users to the JupyterHub spawner form so they can choose their image and size. As I understand it, this is changing and we now choose the default image/size for them. Are we 100% sure that is what users want, or is it possible they'd prefer to be able to choose a specific image?
- If this is delegating the templating to Times Square, will it be able to handle arbitrary GitHub repos? Is that functionality already there, or planned to be, in Times Square? Is Times Square available on all envs where Ghostwriter will be?
- What is the "consumer" of the render mode for `Keene`?
- Regarding the `Dixon`/`Keene` split: if they end up with in-memory Python communication, i.e. two modules, I'd treat them as the same service, which is close to the architecture you already have?
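On the `dict[str, byte[]]` vs `dict[str, str]` question: assuming the spec means Python's `dict[str, bytes]`, the practical difference is whether base64 decoding happens at the API boundary or is left to each consumer. A minimal sketch of the decode-at-the-boundary option (the payload shape and key names here are hypothetical, not from the spec):

```python
import base64

# Hypothetical wire payload: file name -> base64-encoded contents (str).
wire: dict[str, str] = {
    "notebook.ipynb": base64.b64encode(b'{"cells": []}').decode("ascii"),
}

# Decoding once at the boundary yields dict[str, bytes]; downstream code
# then handles raw file contents directly, including content that is not
# valid UTF-8 text (images, pickles, etc.).
files: dict[str, bytes] = {
    name: base64.b64decode(data) for name, data in wire.items()
}

assert files["notebook.ipynb"] == b'{"cells": []}'
```

With `dict[str, str]`, every consumer would have to know which values are still base64-encoded; `dict[str, bytes]` makes the "already decoded" state explicit in the type.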
Updated with