
Alternative mechanisms for passing KG_URL? #41

Open

dleen opened this issue Jul 17, 2019 · 17 comments

@dleen
Contributor

dleen commented Jul 17, 2019

Hi,

Right now the only way to specify the kernel gateway endpoint is via the environment variable KG_URL. I have a use case where I'd like to dynamically load the kernel gateway endpoint at runtime.

I was thinking of implementing a mechanism where the endpoint would be provided by a manager. The default manager would be an environment-variable manager that preserves the current KG_URL behavior for backwards compatibility. But users would be able to implement their own managers and specify which one to use via an argument like --NotebookApp.endpoint_manager_class=nb2kg.managers.MyCustomKgUrlManager.
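
A minimal sketch of what such a pluggable manager might look like, purely for illustration (none of these class or trait names exist in nb2kg today; they are placeholders for the proposal):

```python
# Hypothetical sketch only -- nb2kg has no EndpointManager today; the class
# and trait names below are invented to illustrate the proposed extension point.
import os

from traitlets import Unicode
from traitlets.config import LoggingConfigurable


class EndpointManager(LoggingConfigurable):
    """Base class: resolve the kernel gateway URL at runtime."""

    def get_endpoint(self):
        raise NotImplementedError


class EnvEndpointManager(EndpointManager):
    """Default manager, preserving today's KG_URL behavior."""

    def get_endpoint(self):
        return os.environ.get('KG_URL', 'http://127.0.0.1:8888')


class MyCustomKgUrlManager(EndpointManager):
    """Example custom manager: look the URL up in some external registry."""

    discovery_url = Unicode('http://internal-registry/kg', config=True)

    def get_endpoint(self):
        # e.g., query a service registry or config store for the gateway URL
        ...
```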

What do you think of a change like this?

@kevin-bates
Member

Interesting. So this implies you'd like to hit a different gateway server on a per notebook basis - correct? Or are you simply wanting to launch multiple notebook servers within the same user session - each pointing at a different gateway? (I suspect the latter)

Per my note in your other issue, with Notebook 6, nb2kg is embedded so a notebook server can be started using --gateway-url <url>, bypassing the need to set KG_URL in the env. Does this solve your use case?
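
For reference, a minimal sketch of the config-file equivalent (this assumes the GatewayClient.url trait from Notebook 6's built-in gateway support; the hostname is a placeholder):

```python
# jupyter_notebook_config.py -- equivalent to launching with
#   jupyter notebook --gateway-url=https://my-gateway:8888
# The URL below is a placeholder for your gateway endpoint.
c.GatewayClient.url = "https://my-gateway:8888"
```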

@dleen
Contributor Author

dleen commented Jul 18, 2019

Hi Kevin,

It's actually the former - a different gateway server on a per notebook basis.

Thanks for the update on Notebook 6. I don't think it solves my use case, however. I'd still like to be able to choose the endpoint dynamically.

Should we continue this discussion in the notebook repository?

@kevin-bates
Member

I think here is fine. Can you elaborate on why you need to hit a different gateway server for each active notebook?

@dleen
Contributor Author

dleen commented Jul 18, 2019

Essentially I would like to have remote kernels each on different machines. But rather than using ZMQ over the network, kernel gateway is nice because it allows routing the traffic over HTTPS/WSS. In the simplest case you would have 1 notebook - 1 kernel gateway - 1 local (to the kernel gateway) kernel - achieving a remote kernel. Although it's easy to imagine at some point you would have multiple kernels on the remote machines with some more sophisticated routing logic in the nb2kg layer.

Is that helpful?

@kevin-bates
Member

Thanks. Have you looked at Enterprise Gateway? It derives from KG, but launches kernels across resource-managed clusters. Currently supported clusters are Hadoop YARN, Kubernetes, Docker/Swarm, and "Distributed" which uses SSH and round-robins over a predefined set of hosts.

@dleen
Contributor Author

dleen commented Jul 18, 2019

Hi Kevin,

Yes! Enterprise Gateway was what I initially looked at. I decided not to go down that route for a few reasons:

  • I only needed single user support.
  • I didn't need any of the cluster support that EG has.
  • I liked the simplicity of pure HTTPS/WSS over the network.
  • I liked how each kernel gateway instance behaves as a full notebook server.
  • I liked Kernel Gateway's feature for serving HTTP requests from notebooks.

Overall using kernel gateway directly seems like a slightly simpler architecture and means I don't have to pull in features from EG which I won't be using.

What do you think?

@kevin-bates
Member

EG is completely compatible with KG (as far as the websocket personality goes). If you don't have kernel.json files configured for the managed cluster (i.e., they don't contain a process-proxy stanza), then you get local behavior. Same protocol, etc. However, it sounds like you want to use the HTTP Personality, so you're right, you need KG (for that reason only).

That said, NB2KG is intended for single-destination use. The MappingKernelManager that NB2KG (and the Gateway package) extends, and with which the configuration is associated, is a singleton. In addition, the available kernel specs come from the gateway server.

If this were possible, how would you obtain the different URLs and associate each with the notebook/kernel instances? And how would you get the correct list of available kernel specs to launch, since these must reside on the same destination?

Just trying to better understand in order to determine if a different approach can be taken.

@dleen
Contributor Author

dleen commented Jul 19, 2019

Appreciate you taking the time to dive into this use case :)

Agreed about MappingKernelManager. I was imagining it would be changed from a singleton into a map from some identifier (the kernel id?) to instances of the manager. In the single-destination use case, each entry in the map would point to the same manager, maintaining today's behavior. Or some variant of this.

As for "if this were possible": I imagine a lot of these associations (notebook/kernel id -> KG URL) would have to be available in memory on the notebook server side, so that when a request comes through we can route it to the right place. Honestly, I'd have to get my hands dirty with some implementation to give you a better answer there.
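
To make that concrete, here is a rough illustrative sketch of the bookkeeping (all names are hypothetical, not existing nb2kg classes):

```python
# Illustrative only: track which gateway each kernel was started on so that
# later requests (interrupt, restart, websocket) can be routed correctly.
class GatewayRoutingTable:
    def __init__(self, default_url):
        self._default_url = default_url      # today's single-KG behavior
        self._kernel_to_gateway = {}         # kernel_id -> gateway URL

    def register(self, kernel_id, gateway_url=None):
        self._kernel_to_gateway[kernel_id] = gateway_url or self._default_url

    def url_for(self, kernel_id):
        # Fall back to the default so single-destination setups keep working.
        return self._kernel_to_gateway.get(kernel_id, self._default_url)

    def unregister(self, kernel_id):
        self._kernel_to_gateway.pop(kernel_id, None)
```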

@kevin-bates
Member

I appreciate your patience. I'm still struggling with the functional aspect of this. In the original post, you stated the following:

I was thinking of implementing a mechanism where the endpoint would be provided by a manager. ... users would be able to implement their own managers and specify which one to use via an argument like --NotebookApp.endpoint_manager_class=nb2kg.managers.MyCustomKgUrlManager.

What I'm not following is how a given manager obtains its set of URLs, and then knows when to use a specific URL to a) obtain a list of kernels and b) start a kernel selected from the list corresponding to that URL.

Are you really needing to use the HTTP Personality offered by Kernel Gateway, or do you want a more classic notebook (and default) behavior where the user submits blocks of code (i.e., a cell) for execution?

I'm assuming the latter (but I've assumed wrong once already 😃) and you can achieve kernels on different (yet specific) hosts using EG (but not KG). With EG and the DistributedProcessProxy, you could, today, configure kernelA for hostA, kernelB for hostB, etc. When we add parameterized kernels (in jupyter_server), you could then get prompted for a set of hosts and decide to run a given kernel on a runtime-specified host. In addition, assuming the plan holds, that particular functionality (remote kernel and all) would be baked into jupyter_server along with the installation of a specific kernel provider.

Can you give a concrete example of a given user needing to hit different gateway servers from within the same session, and how that user would discover and convey the url for kernel selection, startup and its lifecycle management?

Thanks.

@dleen
Contributor Author

dleen commented Jul 19, 2019

Are you really needing to use the HTTP Personality offered by Kernel Gateway, or do you want a more classic notebook (and default) behavior where the user submits blocks of code (i.e., a cell) for execution?

You're spot on - 99% of the time will be the default behavior. The HTTP Personality is a really cool feature that I think customers will like. It was another point in favor of Kernel Gateway.

When we add parameterized kernels (in jupyter_server), you could then get prompted for a set of hosts and decide to run a given kernel on a runtime-specified host. In addition, assuming the plan holds, that particular functionality (remote kernel and all) would be baked into jupyter_server along with the installation of a specific kernel provider.

Yes, this is basically the functionality that I'm aiming for. We'll be building extensions to allow users to pick which host to run a kernel on etc. Apologies for not giving you that context up front - for this issue I was only looking at the work required in the context of nb2kg changes.

@kevin-bates
Member

ok - thanks for the response. Since you have some need for HTTP Personality, I'm wondering if you could treat those cases as one-offs as they are completely different beasts (and not really for use from a standard notebook client).

You could obtain the behavior you desire today using the DistributedProcessProxy and different kernelspecs - each of which specify a different host (i.e., the kernelA, kernelB example). Such an example is presented here, focusing on the remote_hosts entries.
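
For readers without the linked docs handy, a hedged sketch of that idea: each kernelspec's kernel.json can pin Enterprise Gateway's DistributedProcessProxy to a specific host via its process_proxy stanza (the argv below is a placeholder; EG's packaged kernelspecs provide the real launchers):

```python
# Sketch of a per-host kernel.json for EG's DistributedProcessProxy,
# expressed as a Python dict. Only the process_proxy stanza matters here;
# the argv is a placeholder.
kernel_json_for_hostA = {
    "display_name": "Python 3 on hostA",
    "language": "python",
    "metadata": {
        "process_proxy": {
            "class_name": "enterprise_gateway.services.processproxies."
                          "distributed.DistributedProcessProxy",
            "config": {"remote_hosts": ["hostA.example.com"]},
        }
    },
    "argv": ["<launcher>", "{connection_file}"],  # placeholder
}
```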

I'm curious about examples of when a given user would want a kernel run on a different host than where their current kernel is running. Just wondering if a Kubernetes environment might better suit your needs.

@psychemedia

psychemedia commented Feb 5, 2020

I have a general use case that relates to this thread: an academic running their own local Jupyter notebook server, which suits their needs most of the time, but who sometimes needs to connect to a teaching-related Enterprise Gateway for course activities (with that EG offering several kernels for different courses, which the user should be able to select from) and to a research EG for research work. As well as access to multiple remote gateways, they also need access to locally run kernels.

@kevin-bates
Member

Thanks for the comment Tony. It seems there have been a few similar requests, so that gets me noodling ideas. 🤔

We need a way to address the static nature of --gateway-url (NB >= 6) and make it more dynamic.

One thing that comes to mind, once the Kernel Providers model is adopted in jupyter_server, would be a "Gateway Provider" that essentially emulates a kernelspec, but whose information is really a URL to a Gateway server.

A find_kernels() call via this provider would trigger a redirect to that "kernel's" Gateway to get a list of its kernelspecs. We'd then prefix these kernelspecs with an indication of where they came from. When one of the kernels associated with a GatewayProvider is selected, the provider forwards the start request to that Gateway via its kernel manager, which knows what to do.
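
As a thought experiment only (the kernel provider API was still in flux, so every name below is invented), the prefixing/forwarding idea might look something like this, using the standard /api/kernelspecs and /api/kernels REST endpoints:

```python
# Hypothetical sketch of a "Gateway Provider" -- all class/method names are
# invented. It fetches the remote gateway's kernelspecs, prefixes them so the
# user can see which gateway each came from, and forwards start requests.
import requests


class GatewayKernelProvider:
    id = "gateway"

    def __init__(self, gateway_url):
        self.gateway_url = gateway_url.rstrip("/")

    def find_kernels(self):
        # Ask the remote gateway for its kernelspecs and re-label them.
        resp = requests.get(f"{self.gateway_url}/api/kernelspecs")
        resp.raise_for_status()
        for name, spec in resp.json().get("kernelspecs", {}).items():
            yield f"{self.id}/{name}", spec      # e.g. "gateway/python3"

    def launch(self, prefixed_name):
        # Forward the start request to the gateway that owns this kernelspec.
        resp = requests.post(f"{self.gateway_url}/api/kernels",
                             json={"name": prefixed_name.split("/", 1)[1]})
        resp.raise_for_status()
        return resp.json()                       # kernel model from the gateway
```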

What I'm not sure about is how the websocket channels would be accessed since, today, that interaction doesn't involve the kernel provider. Hmm. I think the kernel manager produced by the GatewayProvider would need to carry information that websocket creation could be directed against, but that would mean one-off code in the channels handler, which isn't cool. I'll have to study this area to see how best to do this.

Given the current Jupyter framework, this doesn't seem as straightforward as it should be, but I think we have much better options with kernel providers than we do today, because they make it possible to have different KernelManager class instances simultaneously (which is something @dleen was driving at previously). As a result, I think it would be good to somehow take the kernel manager instance into consideration when handling the websocket interactions as well.

@psychemedia

Hi Kevin

Thanks for spending a few cycles noodling this :-) (I don't really have much understanding of how the Jupyter plumbing works; it's something that I keep meaning to start reading up on...).

One of the things that struck me was that if multiple Enterprise Gateway URLs are supported, this would provide a workaround for local kernels: run a gateway on localhost and connect to that. I can imagine many more cases where folk might want to connect to local Jupyter servers most of the time, but then occasionally call out to e.g. a hosted GPU-backed kernel.

There are other extensions out there that can support this, I think, but I suspect SSH tunnels may be involved. In a lot of edu settings:

  1. IT folk may be unwilling to let folk SSH in, especially if they want to start tunnels...
  2. anything more complicated than selecting a kernel from a menu is too complicated.

The second point actually raises a whole host of issues when it comes to auth. If the notebook server was accessed via an institutional JupyterHub, you can presumably set checks on the Enterprise Gateway that traffic is coming from a recognised domain. But if a user is running a local server, whilst they may have some institutional auth cookies set by their institutional website, I don't see how these could be used to help connect to the EG.

In which case, the external kernel connection menu would need to be able to prompt for credentials to connect to the EG, or you'd maybe need to install an extension that enabled you to set credentials. Hmm... that may make more sense from a user perspective - a notebook extension that allows you to maintain multiple connection records with the EG URL and some sort of credential? But institutional IT folk probably wouldn't like username/password combinations being saved into an app... Hmmm... (Security/auth is another thing I don't really understand the mechanics of properly! :-( )

@kevin-bates
Member

One of the things that struck me was that if multiple Enterprise Gateway URLs are supported, this would provide a workaround for local kernels: run a gateway on localhost and connect to that. I can imagine many more cases where folk might want to connect to local Jupyter servers most of the time, but then occasionally call out to e.g. a hosted GPU-backed kernel.

Actually, I believe this kind of functionality would be addressed by the kernel providers approach. With that, we have the ability to launch kernels both local and remote. Many of Enterprise Gateway's remote kernel offerings have been prototyped as kernel providers. They all derive from Remote Kernel Provider, and kernels are launched against the various resource managers, etc. These would then be available directly from Notebook (well, Jupyter Server running a Notebook/Lab extension). The issue EG resolves is that when going from Notebook to EG, only EG needs to be "in the cluster". For example, in Kubernetes, one can hit EG in k8s while the notebook server is outside k8s. However, for things like Hub in K8s, where each Notebook server is in K8s, a KubernetesKernelProvider makes more sense. With that, you can have "cheap" kernels for prototyping and "expensive" kernels for high-end analysis.

I really like the idea of having the ability to support multiple EGs simultaneously, but I'm not able to ferret out how the WS channels issue would be resolved.

@psychemedia

I believe this kind of functionality would be addressed by the kernel providers approach. With that, we have the ability to launch kernels both local and remote. Many of Enterprise Gateway's remote kernel offerings have been prototyped as kernel providers

Ah.. Right.. so, a user would see this as "just adding a kernel". I can add a local py2 kernel, or R kernel, or a remote EG kernel. And I could then open one notebook with a local kernel, one with a remote kernel etc?

For the remote kernel, how would auth work?

@kevin-bates
Member

Correct. You'd get a mixture of "flavors". With kernel providers, EG wouldn't really come into play. The "remoteness" is baked into the provider.

It's up to the provider how authentication works. I suspect in most cases (true for those in gateway-experiments) it's generally assumed that the cluster in which the kernels are launched is already behind a firewall, as is the notebook server instance in this case (i.e., the kernel-launching server), and that authentication has already been performed.

Things could probably use some improvements in this area and having an EG server to bounce things through really helps, but we're then constrained to the singleton nature of things (which sucks) right now.
