Alternative mechanisms for passing KG_URL? #41
Comments
Interesting. So this implies you'd like to hit a different gateway server on a per-notebook basis, correct? Or are you simply wanting to launch multiple notebook servers within the same user session, each pointing at a different gateway? (I suspect the latter.) Per my note in your other issue, with Notebook 6, nb2kg is embedded, so a notebook server can be started using the `--gateway-url` option.
Hi Kevin, It's actually the former: a different gateway server on a per-notebook basis. Thanks for the update on Notebook 6. I don't think it solves my use case, however; I'd still like to be able to choose the endpoint dynamically. Should we continue this discussion in the notebook repository?
I think here is fine. Can you elaborate on why you need to hit a different gateway server for each active notebook?
Essentially I would like to have remote kernels, each on a different machine. But rather than using ZMQ over the network, Kernel Gateway is nice because it allows routing the traffic over HTTPS/WSS. In the simplest case you would have one notebook, one kernel gateway, and one kernel local to that gateway, which achieves a remote kernel. Although it's easy to imagine that at some point you would have multiple kernels on the remote machines with some more sophisticated routing logic in the nb2kg layer. Is that helpful?
Thanks. Have you looked at Enterprise Gateway? It derives from KG, but launches kernels across resource-managed clusters. Currently supported clusters are Hadoop YARN, Kubernetes, Docker/Swarm, and "Distributed", which uses SSH and round-robins over a predefined set of hosts.
Hi Kevin, Yes! Enterprise Gateway was what I initially looked at. I decided not to go down that route for a few reasons:
Overall, using Kernel Gateway directly seems like a slightly simpler architecture and means I don't have to pull in features from EG that I won't be using. What do you think?
EG is completely compatible with KG (as far as the websocket personality goes). If you don't have kernel.json files configured for the managed cluster (i.e., they don't contain a process-proxy stanza), then you get local behavior: same protocol, etc. However, it sounds like you want to use the HTTP Personality, so you're right, you need KG (for that reason only). That said, NB2KG is intended for single-destination use. If this was possible, how would you obtain the different URLs and associate each with the notebook/kernel instances? And how would you get the correct list of available kernel specs to launch, since these must reside on the same destination? Just trying to better understand in order to determine if a different approach can be taken.
Appreciate you taking the time to dive into this use case :) Agreed on that point. As for "if this was possible": I imagine a lot of these associations (notebook/kernel id -> KG url) are going to have to be available in memory on the notebook server side, so when a request comes through we can route it to the right place. Honestly, I'd have to get my hands dirty doing some implementation to give you a better answer there.
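The in-memory association described above could be sketched roughly as follows. All names here (`KernelGatewayRouter`, `register`, `url_for`) are hypothetical illustrations; nothing like this exists in nb2kg today:

```python
# Hypothetical sketch: an in-memory table mapping kernel IDs to the
# gateway each kernel was started on, consulted before proxying a request.
class KernelGatewayRouter:
    def __init__(self, default_url):
        self.default_url = default_url
        self._kernel_urls = {}  # kernel_id -> gateway base URL

    def register(self, kernel_id, gateway_url):
        """Record which gateway a kernel was started on."""
        self._kernel_urls[kernel_id] = gateway_url

    def url_for(self, kernel_id):
        """Resolve the gateway to proxy a kernel request to."""
        return self._kernel_urls.get(kernel_id, self.default_url)


router = KernelGatewayRouter("http://localhost:8888")
router.register("abc-123", "https://gpu-host:8889")
```

A real implementation would also need to persist or rebuild this table across notebook server restarts, which is part of what makes the proposal non-trivial.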
I appreciate your patience. I'm still struggling with the functional aspect of this. In the original post, you stated the following:
What I'm not following is how a given manager obtains its set of URLs and then knows when to use a specific URL to a) obtain a list of kernels and b) start a kernel selected from that list.

Are you really needing to use the HTTP Personality, or do you primarily want kernels running on different hosts? I'm assuming the latter (but I've assumed wrong once already 😃), and you can achieve kernels on different (yet specific) hosts using EG (but not KG). With EG and the DistributedProcessProxy, you could, today, configure kernelA for hostA, kernelB for hostB, etc. When we add parameterized kernels (in jupyter_server), you could then get prompted for a set of hosts and decide to run a given kernel on a runtime-specified host. In addition, assuming the plan holds, that particular functionality (remote kernel and all) would be baked into jupyter_server along with the installation of a specific kernel provider.

Can you give a concrete example of a given user needing to hit different gateway servers from within the same session, and how that user would discover and convey the URL for kernel selection, startup, and its lifecycle management? Thanks.
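For context, the per-host configuration mentioned above is done through EG kernelspecs carrying a process-proxy stanza. A simplified sketch of a kernel.json pinned to a specific host might look like the following (the host name is illustrative, and the `argv` is simplified; real EG distributed kernelspecs invoke EG's kernel launcher scripts rather than `ipykernel_launcher` directly):

```json
{
  "display_name": "Python 3 on hostA",
  "language": "python",
  "metadata": {
    "process_proxy": {
      "class_name": "enterprise_gateway.services.processproxies.distributed.DistributedProcessProxy",
      "config": {
        "remote_hosts": ["hostA"]
      }
    }
  },
  "argv": ["python", "-m", "ipykernel_launcher", "-f", "{connection_file}"]
}
```

With one such kernelspec per host, "kernelA for hostA, kernelB for hostB" becomes a matter of which spec the user selects.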
You're spot on: 99% of the time the default behavior will be used. The HTTP Personality is a really cool feature that I think customers will like. It was another point in favor of Kernel Gateway.
Yes, this is basically the functionality that I'm aiming for. We'll be building extensions to allow users to pick which host to run a kernel on, etc. Apologies for not giving you that context up front; for this issue I was only looking at the work required in the context of nb2kg changes.
OK, thanks for the response. Since you have some need for the HTTP Personality, I'm wondering if you could treat those cases as one-offs, as they are completely different beasts (and not really for use from a standard notebook client). You could obtain the behavior you desire today using the existing `KG_URL` environment variable for those. I'm curious about examples of when a given user would want a kernel run on a different host than where their current kernel is running. Just wondering if a Kubernetes environment might better suit your needs.
I have a general use case that relates to this thread: an academic running their own local Jupyter notebook server, which suits their needs most of the time, but who sometimes needs to connect to a teaching-related Enterprise Gateway for course activities (with that EG offering several kernels for different courses which the user should be able to select from) and a research EG for research work. As well as access to multiple remote gateways, they also need access to locally run kernels.
Thanks for the comment, Tony. It seems there have been a few similar requests, so that gets me noodling ideas. 🤔

We need a way to address the static nature of `KG_URL`. One thing that comes to mind, once the Kernel Providers model is adopted in jupyter_server, would be to have a "Gateway Provider" where it essentially emulates a kernelspec, but its information is really a URL to a Gateway server.

What I'm not sure about is how the websocket channels would be accessed since, today, that interaction doesn't involve the kernel provider. Hmm. I think the kernel manager that is produced by the GatewayProvider would need to have information on it to direct websocket creation against, but this would be one-off code in the channels handler, which isn't cool. I have to study this area to see how best to do this.

Given the current Jupyter framework, this doesn't seem as straightforward as it should be, but I think we have much better options with kernel providers than today because they enable the ability to have different KernelManager class instances simultaneously (which is something @dleen was driving at previously). As a result, I think it would be good to somehow take the kernel manager instance into consideration when handling the websocket interactions as well.
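The "Gateway Provider" idea above might be sketched like this. Everything here is speculative: `GatewayProvider`, `find_kernels`, and `launch` are hypothetical names loosely modeled on the kernel-providers discussion, not an existing API:

```python
# Speculative sketch of a "Gateway Provider": a kernel provider whose
# advertised kernelspecs each carry the URL of a Gateway server.
class GatewayProvider:
    id = "gateway"

    def __init__(self, gateways):
        # Mapping of display name -> gateway base URL, e.g.
        # {"teaching-eg": "https://eg-teaching:8888"}
        self.gateways = gateways

    def find_kernels(self):
        """Advertise one pseudo-kernelspec per configured gateway."""
        for name, url in self.gateways.items():
            yield name, {"display_name": name,
                         "metadata": {"gateway_url": url}}

    def launch(self, name):
        """A real implementation would POST to the gateway's /api/kernels
        and return a kernel manager that remembers the URL, so the
        channels handler could direct websocket creation against it."""
        url = self.gateways[name]
        return {"gateway_url": url, "kernel_name": name}


provider = GatewayProvider({"teaching-eg": "https://eg-teaching:8888",
                            "research-eg": "https://eg-research:8888"})
```

The open question in the comment, how the websocket channels learn which URL to use, shows up here as the `gateway_url` the launch result would need to carry back to the channels handler.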
Hi Kevin, thanks for spending a few cycles noodling this :-) (I don't really have much understanding of how the Jupyter plumbing works; it's something that I keep meaning to start reading up on...)

One of the things that struck me was that if multiple Enterprise Gateway URLs are supported, this would provide a workaround for local kernels: run a gateway on localhost and connect to that.

I can imagine many more cases where folk might want to connect to local Jupyter servers most of the time, but then occasionally call out to, e.g., a hosted GPU-backed kernel. There are other extensions out there that can support this, I think, but I suspect SSH tunnels may be involved. In a lot of edu settings:
The second point actually raises a whole host of issues when it comes to auth. If the notebook server was accessed via an institutional JupyterHub, you can presumably set checks on the Enterprise Gateway that traffic is coming from a recognised domain. But if a user is running a local server, whilst they may have some institutional auth cookies set by their institutional website, I don't see how these could be used to help connect to the EG. In which case, the external kernel connection menu would need to be able to prompt for credentials to connect to the EG, or you'd maybe need to install an extension that enabled you to set credentials.

Hmm... that may make more sense from a user perspective: a notebook extension that allows you to maintain multiple connection records, each with an EG URL and some sort of credential. But institutional IT folk probably wouldn't like username/password combinations being saved into an app... (Security/auth is another thing I don't really understand the mechanics of properly! :-(
Actually, I believe this kind of functionality would be addressed with the kernel providers approach. With that, we have the ability to launch kernels both locally and remotely. Many of Enterprise Gateway's remote kernel offerings have been prototyped as kernel providers. They all derive from Remote Kernel Provider, and kernels are launched against the various resource managers, etc. These would then be available directly from Notebook (well, JupyterServer running a Notebook/Lab extension).

The issue EG resolves is that when going from Notebook to EG, only EG needs to be "in the cluster". For example, in Kubernetes, one can hit EG in k8s while the notebook server is outside k8s. However, for things like Hub in k8s, where each notebook server is in k8s, then a KubernetesKernelProvider makes more sense. With that, you can have "cheap" kernels for prototyping and "expensive" kernels for high-end analysis.

I really like the idea of having the ability to support multiple EGs simultaneously, but I'm not able to ferret out how the WS channels issue would be resolved.
Ah, right... so a user would see this as "just adding a kernel". I can add a local py2 kernel, or an R kernel, or a remote EG kernel. And I could then open one notebook with a local kernel, one with a remote kernel, etc.? For the remote kernel, how would auth work?
Correct. You'd get a mixture of "flavors". With kernel providers, EG wouldn't really come into play; the "remoteness" is baked into the provider. It's up to the provider how authentication works. I suspect in most cases (true for those in gateway-experiments) it's generally assumed that the cluster in which the kernels are launched is already behind a firewall, as is the notebook server instance in this case (i.e., the kernel-launching server), and authentication has already been performed. Things could probably use some improvements in this area, and having an EG server to bounce things through really helps, but we're then constrained to the singleton nature of things (which sucks) right now.
Hi,

Right now the only way to specify the kernel gateway endpoint is via the environment variable `KG_URL`. I have a use case where I'd like to dynamically load the kernel gateway endpoint at runtime.

I was thinking of implementing a mechanism where the endpoint would be provided by a manager. The default manager would be an environment-variable manager which has the current `KG_URL` behavior for backwards compatibility. But users would be able to implement their own managers and specify which one to use via an argument like `--NotebookApp.endpoint_manager_class=nb2kg.managers.MyCustomKgUrlManager`.

What do you think of a change like this?
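To make the proposal concrete, a hypothetical sketch of what such a manager hierarchy might look like follows. All class names and the path-prefix routing scheme are illustrative assumptions, not existing nb2kg code; only the `KG_URL` environment variable itself comes from the current implementation:

```python
# Hypothetical sketch of a pluggable endpoint-manager hierarchy.
import os


class EndpointManager:
    """Base class: resolves the kernel gateway URL for a request."""

    def get_kg_url(self, notebook_path=""):
        raise NotImplementedError


class EnvKgUrlManager(EndpointManager):
    """Default manager: preserves today's KG_URL behavior."""

    def get_kg_url(self, notebook_path=""):
        return os.environ.get("KG_URL", "http://127.0.0.1:8888")


class MyCustomKgUrlManager(EndpointManager):
    """Example custom manager: choose a gateway by notebook path prefix."""

    def __init__(self, routes, default_url):
        self.routes = routes          # e.g. {"research/": "https://gw-research:8888"}
        self.default_url = default_url

    def get_kg_url(self, notebook_path=""):
        for prefix, url in self.routes.items():
            if notebook_path.startswith(prefix):
                return url
        return self.default_url
```

The notebook server would instantiate whichever class the `--NotebookApp.endpoint_manager_class` argument names and call `get_kg_url()` wherever it currently reads `KG_URL`, which keeps the environment-variable path as the unchanged default.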