-
One other possibility, given that we already include Flask as a dependency, is to set up some sort of web interface, perhaps as a third component. Flask includes Jinja as its template engine, which makes developing web front ends straightforward (I think Jinja may already be one of our dependencies). We might not even need to create another Flask daemon; we could just integrate the client API into the WFM interface. Flask also provides some nice abstractions, like Blueprints, that could be useful if we wanted to do something like that. As for visualizing the DAG, I'm not sure, but we might be able to include some JavaScript library that runs in the browser (no need for node.js). That's just an idea though; I don't know if a web interface is exactly what we need here.
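To make the Blueprint idea a bit more concrete, here is a minimal sketch of what registering client routes on the existing WFM Flask app could look like. The names here (`client_bp`, `create_app`, the `/status` route) are purely illustrative and not part of the current code base.

```python
from flask import Blueprint, Flask, jsonify

# Hypothetical blueprint grouping the client-facing routes.
client_bp = Blueprint('client', __name__, url_prefix='/client')

@client_bp.route('/status')
def workflow_status():
    # Placeholder response; in practice this would query the WFM
    # for the real state of the user's workflow.
    return jsonify({'state': 'RUNNING'})

def create_app():
    # The existing WFM app could register the blueprint directly,
    # so no separate Flask daemon would be needed.
    app = Flask(__name__)
    app.register_blueprint(client_bp)
    return app
```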
-
I've compiled a list of questions for the CLI backend.

What server is the workflow manager running on? Is it a LANL HPC cluster, Darwin, Summit, cloud resources, or some other resource? This will need to be saved in the client's configuration options, since our approach for connecting to the machine will vary depending on the resource.

How will we send commands to the workflow manager running on a server from the client? For internal LANL usage at least, we'll need users to have Kerberos authentication enabled on their machines. I think the easiest way to handle this is to leverage ssh: if a user is logged in via Kerberos, LANL machines are already configured to send the Kerberos ticket with each ssh command, so we can send commands through ssh's remote command execution.

How do we get data from the workflow manager back to the CLI? The approach above has a big limitation: we can send commands to the workflow manager, but we can't easily get data back. We could use scp for this purpose, but running scp repeatedly seems like a bad idea; we'd have tons of small files that have to go somewhere, and writing a bunch of stuff to a network file system is asking for trouble. However, ssh's remote command execution returns the stdout and stderr of the command. We could use this by having the forwarding application print a jsonpickle object to stdout upon completion; the CLI would then take the JSON and decode it back into an object. We'd need the object definition on both sides, but it's very doable (see the sketch below).

What do we do about latency?
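To illustrate the ssh-plus-jsonpickle idea, here is a rough sketch, not a working implementation: the host name, the `beeflow-forward status` command, and the `WorkflowStatus` class are all hypothetical, and it assumes the same class definition is importable on both the client and the server side.

```python
import subprocess

import jsonpickle

# Hypothetical shared definition; the forwarding application encodes one of
# these with jsonpickle, and the CLI needs the same class to decode it.
class WorkflowStatus:
    def __init__(self, state, tasks):
        self.state = state
        self.tasks = tasks

def query_wfm(host, command='beeflow-forward status'):
    """Run a command on the WFM host over ssh and decode its jsonpickle output.

    Relies on the user's Kerberos ticket being sent automatically with the
    ssh command, as described above, so no password prompt is expected.
    """
    result = subprocess.run(['ssh', host, command],
                            capture_output=True, text=True, check=True)
    # The forwarding application prints a jsonpickle-encoded object to stdout.
    return jsonpickle.decode(result.stdout)
```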
-
Here are some questions about the client frontend. What will it look like? What framework will it use? What will the functionality be like?
-
The minimum to meet the milestone is to have a GUI that does what the present client does and allows the user to visualize the current state of their workflow as a DAG. However, do we want to add the capability to build a CWL workflow? Or at least plan for it?
-
I know this most likely belongs in a later timeframe, but I wanted to mention it while thinking of it. In the design document @rstyd mentions sending workflows to any location. Should we also have some means of setting up those locations from the client, such as starting the TM, WFM, scheduler, or any other modules needed? Or at least verifying that they have been started?
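If we do add that, one lightweight way to at least verify the components are up would be a client-side port check against each daemon. Here is a rough sketch; the component names and port numbers are placeholders that would really come from the client configuration.

```python
import socket

# Placeholder ports; the real values would come from the client configuration.
COMPONENT_PORTS = {'wfm': 5000, 'tm': 5050, 'scheduler': 5100}

def check_components(host, timeout=3):
    """Return a dict mapping each component name to True if its port accepts connections."""
    status = {}
    for name, port in COMPONENT_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                status[name] = True
        except OSError:
            status[name] = False
    return status
```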
-
We need to work out a plan for the enhanced client.
Currently we have a few options:
On the existing workflow submission tool side, Tim has mentioned we could work with SAW; he said they've been looking to add CWL support to their tool. SAW has a bundled editor, which is nice. The one issue with this approach is that it would require bolting on the capabilities to submit and manage BEE jobs. We could accomplish that with an Eclipse plugin, but judging from my experience with Eclipse, the interface would probably be pretty clunky.
There's also Rabix Composer. It's no longer supported, but since it carries an Apache 2.0 license, I believe we could fork it and enhance it with the features we'd need. This could be a very powerful solution since it includes a built-in editor and facilities for visualizing workflows; we could reuse the workflow visualization to display the current state of a workflow running on a system. The big downside to this approach is that we'd be inheriting a legacy code base that we'd be responsible for, which is not great.
Finally, we could consider putting together our own client. From my research so far, the best solution appears to be an Electron app (Rabix Composer is also based on Electron). Electron is a JavaScript-based framework for developing native desktop apps. This would be the most flexible solution, but it could require a lot of work, though it still may be less work than adapting Rabix would be. The other issue, which we'd also have to look at with Rabix Composer, is that Electron apps are known for using a lot of RAM. We'll have to see whether that's a problem; there may be ways to mitigate it.
Let me know if you have any other ideas.