
Add/modify/delete ThirdPartyResources. #8

Merged: foxish merged 3 commits into k8s-support from tpr-fix on Dec 8, 2016

Conversation

foxish (Owner) commented Nov 23, 2016:

What changes were proposed in this pull request?

Addresses #3
Implements POST/PATCH/DELETE for the SparkJob TPR.

It uses the image foxish/kube-spark, which is essentially the same as the one in https://github.com/foxish/openshift-spark, but with a more up-to-date Spark distribution baked in.
Run with "foxish/kube-spark" as the Spark image.

foxish (Owner, Author) commented on the diff on Nov 23, 2016:

    val request = new Request.Builder()
      .post(body)
      .url(s"${client.getMasterUrl()}" +
        s"apis/apache.io/v1/namespaces/default/sparkjobs/")
foxish (Owner, Author):

TODO: parameterize this.

iyanuobidele (Collaborator):

By "parameterize", do you mean making this configurable? Something like --conf spark.kubernetes.TPREndpoint=......

foxish (Owner, Author):

I just meant that we should pick up the namespace and the apiVersion to construct that string. I don't see the point of making the TPR endpoint something the user can pass in; there's no reason for a Spark user to know that detail, IMO.

iyanuobidele (Collaborator):

Oh, got it! Yeah, I agree.
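
A minimal sketch of that idea; the helper name and its defaults are hypothetical, not code from this PR:

    // Hypothetical helper: derive the endpoint from the master URL, the
    // apiVersion, and the namespace, rather than hard-coding them.
    def sparkJobEndpoint(masterUrl: String,
                         apiVersion: String = "apache.io/v1",
                         namespace: String = "default"): String =
      s"${masterUrl.stripSuffix("/")}/apis/$apiVersion/namespaces/$namespace/sparkjobs/"

    // e.g. .url(sparkJobEndpoint(client.getMasterUrl.toString))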


    val httpClient = this.getHTTPClient(client.asInstanceOf[BaseClient])
    def createJobObject(name: String, executors: Int, image: String): Unit = {
      val json = ("apiVersion" -> ApiVersion) ~
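
The ~ here is the json4s JsonDSL operator, which chains key -> value pairs into a JSON object. A self-contained sketch of the same pattern; the field values are illustrative, not the PR's actual spec:

    import org.json4s.JsonDSL._
    import org.json4s.jackson.JsonMethods._

    // ~ combines key -> value pairs into a JObject; nesting works the same way.
    val json =
      ("apiVersion" -> "apache.io/v1") ~
      ("kind" -> "SparkJob") ~
      ("metadata" -> (("name" -> "spark-job-example") ~ ("namespace" -> "default"))) ~
      ("spec" -> (("image" -> "foxish/kube-spark") ~ ("num-executors" -> 2)))

    // Serialize to a compact JSON string suitable for a request body.
    val body: String = compact(render(json))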
iyanuobidele (Collaborator) commented Nov 23, 2016:
Very similar to what I've done, except for the representation of the resource; I was thinking more along the lines of:

    type Specification = Map[String, String]
    case class SparkJobResource(apiVersion: String, kind: String, metadata: Metadata, spec: Specification)
    case class Metadata(name: String, uid: Option[String], labels: Option[Map[String, String]], annotations: Option[Map[String, String]])

Then we can create the JSON by creating an instance of SparkJobResource, like:

    val sparkJobObject = SparkJobResource(apiVersion = "apache.io/v1", ........)
    sparkJobObject.toJson

The object can then be converted to JSON through whatever implicit formatter is defined.

I believe we can trim the case classes down further....

This way we get all the good features that come with Scala case classes, as well as easy conversion of the case class to JSON and from JSON back to the object (mostly useful if we will be extracting information from the resource in the program, or patching it as well).
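
A minimal sketch of that round trip, assuming json4s (consistent with the ~ DSL used in the diff); the class shapes mirror the proposal above, and all field values are illustrative:

    import org.json4s.{DefaultFormats, Formats}
    import org.json4s.jackson.Serialization

    case class Metadata(name: String,
                        uid: Option[String] = None,
                        labels: Option[Map[String, String]] = None,
                        annotations: Option[Map[String, String]] = None)
    case class SparkJobResource(apiVersion: String,
                                kind: String,
                                metadata: Metadata,
                                spec: Map[String, String])

    implicit val formats: Formats = DefaultFormats

    val job = SparkJobResource(
      apiVersion = "apache.io/v1",
      kind = "SparkJob",
      metadata = Metadata(name = "spark-job-example"),
      spec = Map("image" -> "foxish/kube-spark", "num-executors" -> "2"))

    val json: String = Serialization.write(job)              // case class -> JSON
    val parsed = Serialization.read[SparkJobResource](json)  // JSON -> case class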

foxish (Owner, Author):

Using the case classes definitely makes sense. If you have sorted out the CLA/license issues, you could merge the contents of your work as a commit on this PR. Otherwise, if you feel that your change is more comprehensive, you could put up your PR and I can merge my commits into it.


.url(s"${client.getMasterUrl()}" +
s"apis/apache.io/v1/namespaces/default/sparkjobs/${name}")
.build()
val response = httpClient.newCall(request).execute()
iyanuobidele (Collaborator):

I'm guessing this is a synchronous call?

foxish (Owner, Author):

Yes, the DELETE call will be synchronous.

iyanuobidele (Collaborator):

I meant the REST call, but from your response I'm guessing it is as well.

foxish (Owner, Author):

Yes, IIRC only calls to delete RCs/RSs and Pods are asynchronous, given that one needs to wait for graceful termination. This delete should execute synchronously.
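
For reference, OkHttp makes the distinction explicit: execute() blocks the calling thread, while enqueue() returns immediately and fires a callback later. A minimal sketch assuming okhttp3; the URL is a placeholder:

    import java.io.IOException
    import okhttp3.{Call, Callback, OkHttpClient, Request, Response}

    val httpClient = new OkHttpClient()
    val request = new Request.Builder()
      .url("https://master.example/apis/apache.io/v1/namespaces/default/sparkjobs/example")
      .delete()
      .build()

    // Synchronous: execute() blocks until the response (or an error) arrives.
    val response: Response = httpClient.newCall(request).execute()
    response.close()

    // Asynchronous alternative: enqueue() returns immediately; the callback
    // fires on one of OkHttp's dispatcher threads.
    httpClient.newCall(request).enqueue(new Callback {
      override def onFailure(call: Call, e: IOException): Unit = e.printStackTrace()
      override def onResponse(call: Call, response: Response): Unit = response.close()
    })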

adding jobState and consuming resource class in k8s cluster backend

adding support for imagePullSecret when pulling from private registry

some minor refactoring and changes to pod building logic

moving SparkJobResource and refactoring
@foxish force-pushed the tpr-fix branch 4 times, most recently from 852c4a2 to 0509679 on December 8, 2016 at 12:05
@@ -154,9 +154,7 @@ export MAVEN_OPTS="${MAVEN_OPTS:--Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCac
# Store the command as an array because $MVN variable might have spaces in it.
# Normal quoting tricks don't work.
# See: http://mywiki.wooledge.org/BashFAQ/050
# BUILD_COMMAND=("$MVN" -T 1C clean package -DskipTests $@)
foxish (Owner, Author):

This reverts a previous change made on k8s-support.

foxish (Owner, Author) commented Dec 8, 2016:

@iyanuobidele @erikerlandson PTAL.
Will merge later today if there are no objections.

iyanuobidele (Collaborator):

@foxish LGTM.

foxish (Owner, Author) commented Dec 8, 2016:

Merging now, and updating the doc. The up-to-date image based on this is:
foxish/kube-spark:0.18

foxish merged commit a8fbf54 into k8s-support on Dec 8, 2016