Add/modify/delete ThirdPartyResources. #8
Conversation
val request = new Request.Builder()
  .post(body)
  .url(s"${client.getMasterUrl()}" +
    s"apis/apache.io/v1/namespaces/default/sparkjobs/")
TODO: parameterize this.
By "parameterize" do you mean make this configurable? Like --conf spark.kubernetes.TPREndpoint=......
I just meant that we should pick up the namespace and the apiVersion to construct that string. I don't see the point of making the TPREndpoint something the user can pass in; there is no reason for a Spark user to know that detail, IMO.
Oh, got it! Yeah, I agree.
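The parameterization agreed on above might look something like the following sketch. The object and method names here are invented for illustration; only the apache.io/v1 group and the sparkjobs path come from the diff.

```scala
// Sketch: derive the TPR endpoint from the API version and namespace
// instead of hard-coding "apis/apache.io/v1/namespaces/default/sparkjobs/".
// Names (TprEndpoints, sparkJobsUrl) are illustrative, not from the PR.
object TprEndpoints {
  val ApiVersion = "apache.io/v1"

  def sparkJobsUrl(masterUrl: String,
                   namespace: String,
                   name: Option[String] = None): String = {
    // Collection URL for POST; append the resource name for PATCH/DELETE.
    val base =
      s"${masterUrl.stripSuffix("/")}/apis/$ApiVersion/namespaces/$namespace/sparkjobs/"
    base + name.getOrElse("")
  }
}
```

For example, `TprEndpoints.sparkJobsUrl(client.getMasterUrl().toString, "default")` would replace the string concatenation in the diff, with the namespace picked up from configuration rather than hard-coded.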
val httpClient = this.getHTTPClient(client.asInstanceOf[BaseClient])
def createJobObject(name: String, executors: Int, image: String): Unit = {
  val json = ("apiVersion" -> ApiVersion) ~
Very similar to what I've done, except for the representation of the resource. I was thinking more along the lines of:
type Specification = Map[String, String]
case class SparkJobResource(apiVersion: String, kind: String, metadata: Metadata, spec: Specification)
case class Metadata(name: String, uid: Option[String], labels: Option[Map[String, String]], annotations: Option[Map[String, String]])
Then we can create the JSON from an instance of SparkJobResource like:
val sparkJobObject = SparkJobResource(apiVersion="apache.io/v1", ........)
sparkJobObject.toJson
This object can then be converted into JSON through whatever implicit formatter is defined. I believe we can trim the case classes down further. This way we enjoy all the good features that come with Scala case classes, as well as easy conversion of the case class to JSON and from JSON back to the object (mostly useful if we will be extracting some info from the resource in the program, or patching it).
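The round-tripping proposed above could be sketched as follows. To keep the example self-contained, toJson is a hand-rolled toy renderer and the Metadata class is trimmed down; a real implementation would presumably delegate serialization to a JSON library's implicit formats, as the comment suggests.

```scala
// Sketch of the proposed case-class representation of the SparkJob TPR.
// The toJson below is a toy renderer so this compiles standalone; swap in
// a JSON library (e.g. json4s) for real use. Field choices are assumptions.
case class Metadata(name: String, uid: Option[String] = None)

case class SparkJobResource(apiVersion: String, kind: String,
                            metadata: Metadata, spec: Map[String, String]) {
  private def quote(s: String) = "\"" + s + "\""
  private def obj(fields: Map[String, String]) =
    fields.map { case (k, v) => s"${quote(k)}:${quote(v)}" }.mkString("{", ",", "}")

  // Render the resource as a flat JSON object.
  def toJson: String = {
    val meta = obj(Map("name" -> metadata.name) ++ metadata.uid.map("uid" -> _))
    s"""{"apiVersion":${quote(apiVersion)},"kind":${quote(kind)},""" +
      s""""metadata":$meta,"spec":${obj(spec)}}"""
  }
}
```

Usage follows the shape in the comment: build an instance, then call toJson on it, e.g. `SparkJobResource("apache.io/v1", "SparkJob", Metadata("job-1"), Map("image" -> "spark:2.1")).toJson`.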
Using the case classes definitely makes sense. If you have sorted out the CLA/licence issues, you could merge in the contents of your work as a commit on this PR. Otherwise, if you feel that your change is more comprehensive, you could put up your PR and I can merge my commits into it.
.url(s"${client.getMasterUrl()}" +
  s"apis/apache.io/v1/namespaces/default/sparkjobs/${name}")
.build()
val response = httpClient.newCall(request).execute()
I'm guessing this is a synchronous call?
Yes, the DELETE call will be synchronous.
I meant the REST call. But from your response, I'm guessing it is as well.
Yes, IIRC only calls to delete RCs/RSs and Pods are asynchronous, given that one needs to wait for graceful termination. This delete should be executed synchronously.
Force-pushed from ee3eb24 to 14913e9.
adding jobState and consuming resource class in k8s cluster backend
adding jobstate and consuming resource class in k8scluster backend
adding support for imagePullSecret when pulling from private registry
some minor refactoring and changes to pod building logic
moving SparkJobResource and refactoring
Force-pushed from 852c4a2 to 0509679.
@@ -154,9 +154,7 @@ export MAVEN_OPTS="${MAVEN_OPTS:--Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCac
# Store the command as an array because $MVN variable might have spaces in it.
# Normal quoting tricks don't work.
# See: http://mywiki.wooledge.org/BashFAQ/050
# BUILD_COMMAND=("$MVN" -T 1C clean package -DskipTests $@)
This reverts a previous change made on k8s-support.
@iyanuobidele @erikerlandson PTAL.
@foxish LGTM.
Merging now, and updating doc. The up-to-date image based on this is:
What changes were proposed in this pull request?
Addresses #3
Implements POST/PATCH/DELETE for the SparkJob TPR.
It uses the image foxish/kube-spark, which is pretty much the same as that in https://github.com/foxish/openshift-spark, with a more up-to-date spark distribution baked in.
Run with "foxish/kube-spark" as the spark image.
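The POST/PATCH/DELETE operations described above map onto the TPR endpoint roughly as in this sketch. The paths follow the hard-coded ones visible in the diff; the object and method names are invented for illustration.

```scala
// Sketch: the three operations this PR implements against the SparkJob TPR,
// relative to the Kubernetes master URL. Helper names are illustrative.
object SparkJobTprCalls {
  private val base = "apis/apache.io/v1/namespaces/default/sparkjobs"

  // POST to the collection URL creates a new SparkJob resource.
  def createPath: String = s"$base/"
  // PATCH to a named URL updates an existing resource.
  def patchPath(name: String): String = s"$base/$name"
  // DELETE to a named URL removes the resource (synchronously, per the
  // discussion in this thread).
  def deletePath(name: String): String = s"$base/$name"
}
```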