
The future of this project/Wishlist #40

Open
luxas opened this issue Dec 30, 2015 · 40 comments

Comments

@luxas
Owner

luxas commented Dec 30, 2015

I want this to be a kind of tracking issue for things the community (you) want from this project.
Note: I do not promise to make all features one may wish for/propose, but I would like to know what you think.

Also, feel free to comment here if you are using this project and it is working, that will probably give more fuel to this project :)

I'm working on merging this functionality into mainline Kubernetes, and feedback here would be greatly appreciated for that, so the Kubernetes folks know which features matter.
Here is the ARM tracking issue: kubernetes/kubernetes#17981

@nsteinmetz

Hi,

Not sure it's doable/feasible, as I'm not completely familiar with k8s yet:

  • Could we access the pods through something other than just the master, to avoid the SPOF? I saw a presentation of Openshift, and it seemed we could expose the API or a proxy on several nodes
  • A UI could be interesting too - I know it's on your roadmap
  • Monitoring (at master / node / pod level)
  • ...

My first 2 cents for this project for 2016 ;-)
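On the SPOF point, one lightweight approach (a hypothetical sketch, not something this project ships — the IPs, port, and file path below are placeholders) is to put a plain TCP load balancer such as nginx's stream module in front of several apiserver endpoints, so clients don't depend on a single master address:

```shell
# Hypothetical sketch: a TCP load balancer in front of two apiservers.
# The upstream IPs and listen port are placeholders, not from this project.
cat > /tmp/apiserver-lb.conf <<'EOF'
stream {
    upstream kube_apiservers {
        server 192.168.0.10:8080;
        server 192.168.0.11:8080;
    }
    server {
        listen 6443;
        proxy_pass kube_apiservers;
    }
}
EOF
```

An nginx built with the stream module could then include this file and spread kubectl traffic across the masters.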

@luxas
Owner Author

luxas commented Jan 2, 2016

Haha, your typo of Openshift was a good one.
Well, I guess you mean this doc. A lite variant of that would be great to have here, and I'm going to try to implement one in future versions. I wanted it for v0.6.0, but I gave up on it. It just didn't work.

The UI is probably coming soon

As for monitoring, I'll investigate whether the Heapster addon is worth running on Pis; I think it's quite resource-heavy.
It isn't worth it if all it reports is that it's consuming all the resources ;)
However, it would be nice to have!

@nsteinmetz

Oops!

Yes, I meant routers; I didn't remember the name. service-loadbalancer does indeed seem like an alternative. HAProxy seems to be everywhere now; I would have thought of something lighter, like a basic nginx (I don't know HAProxy well, but it seems overkill for my needs).

For the monitoring, there is indeed the resource issue you mention.

@luxas
Owner Author

luxas commented Jan 9, 2016

@nsteinmetz Check out v0.6.3! Released today. It's mostly bugfixes and such, but it may be useful.

@lohmander

Regarding the monitoring, my PR (kubernetes/dashboard#232) to the dashboard might be a good option since it just uses the cAdvisor instance already running on each node to poll usage metrics. Just sayin...

@luxas
Owner Author

luxas commented Jan 19, 2016

Works fine for me if you rebase your PR onto the beta version once it's released.
Then we'll have the "official" beta and your features on top of it

@luxas
Owner Author

luxas commented Jan 31, 2016

FYI, there's an official ARM etcd image for Kubernetes now :)
kubernetes/kubernetes#19216 was pushed some days ago
Wanna try? docker pull gcr.io/google_containers/etcd-arm:2.2.1

Soon, I think kubernetes/kubernetes#19769 will be merged, and then we'll get official binaries and some images for every release >= 1.2.0-alpha.7

So although there haven't been many commits on this project lately, ARM support is moving forward fast 👍

RancherOS has announced they are working on ARM support: rancher/os#735
When it's working (probably a month away), this project will include it

@ibuildthecloud

Just FYI. What we are working on at Rancher is first RancherOS, then Rancher (our full container platform), and finally k8s. In the end we want Rancher running Kubernetes on RancherOS on ARM (specifically ARM64 servers, but rpis will work too). We expect this to take at least another 2-3 months. But I'm watching this project closely.

@luxas
Owner Author

luxas commented Feb 4, 2016

@ibuildthecloud As soon as we have RancherOS support for ARM, we'll be able to run Kubernetes on ARM and ARM64: kubernetes/kubernetes#19769 (I hope we're gonna merge this before v1.2) and kubernetes/kubernetes#17981
Of course, it will take some time to build up a stable rootfs, but it should be possible to run Kubernetes on ARM in docker from the first boot of RancherOS on ARM

BTW, I made a PR some time ago to enhance Kubernetes setup on RancherOS: kubernetes/kubernetes#19109 and kubernetes/kubernetes#19193. It was so frustrating to not be able to download kubectl from the release page for controlling Kubernetes on RancherOS. I'm also making the k8s-in-docker method better (e.g. now serviceAccounts are working)

@luxas
Owner Author

luxas commented Feb 15, 2016

Well, v0.6.5 is here!

And things have evolved on the mainline side too.
kubernetes/kubernetes#19769 is merged, and that means there's no need to build k8s binaries anymore... I will probably still do it for some future releases though

We will probably have linux/arm64 (and linux/ppc64le) binaries available in v1.3.0-alpha.1, because in v1.3 they'll use go1.5 that has support for those new platforms.

On the dashboard side, I will use official ARM builds only (gcr.io/google_containers/kubernetes-dashboard-arm:version).
Sorry @lohmander, but that's the best way.

RancherOS is running on my Pi 2, and I'll work with @ibuildthecloud to get things working OOTB

I'll start working on v0.7.0 of this project now, which probably will include many, many changes.
Some technical things I have in mind:

  • Kubernetes 1.2.0
  • go1.5.3 or go1.6, registry v2.3.0, etcd v2.3.0 or v2.2.5
  • flannel VXLAN backend, try to switch to iptables proxying again (hopefully fixed in v1.2)
  • build flannel statically => minimize image size
  • get rid of --containerized => more like the "native" kubelet
  • experimental heapster running alongside dashboard
  • see if RancherOS on ARM is ready for this project

But a non-technical thing would be to include some kind of catalog with prebuilt apps that are ready-to-run. E.g. only type kube-config run-app gogs or something.

Right now I have two apps running on my local cluster, gogs image updated 24.1 and owncloud image updated 27.1
Test them out if you want, and suggest more apps that we may run!
@nsteinmetz @w0mbat @saturnism @larmog @kyletravis @lavvy @sokoow @gyoho @TimCook1
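As a hypothetical sketch of what a catalog entry behind a command like `kube-config run-app gogs` could expand to: a plain template fed to kubectl. The port, labels, and file path below are assumptions for illustration, not taken from the luxas/gogs image:

```shell
# Hypothetical catalog template for gogs; port/labels are assumptions.
cat > /tmp/gogs-app.yaml <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: gogs
spec:
  replicas: 1
  selector:
    app: gogs
  template:
    metadata:
      labels:
        app: gogs
    spec:
      containers:
      - name: gogs
        image: luxas/gogs
        ports:
        - containerPort: 3000
EOF
# The catalog command would then roughly do:
#   kubectl create -f /tmp/gogs-app.yaml
```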

@lavvy

lavvy commented Feb 15, 2016

Thumbs up @lucas. One question: could the app catalog have a GUI frontend with a git repo backend? I think it would make a lot of sense. It may have to integrate with the dashboard, though I don't know how you're planning it.

@larmog
Contributor

larmog commented Feb 16, 2016

Great Job Lucas!
I installed v0.6.5 yesterday on top of v0.6.3 and HypriotOS. The Docker upgrade took some time on each node, which made the docker service start time out.

I’ve tried the new dashboard with heapster (larmog/heapster-armhf image) but something is broken in cadvisor and docker subcontainers. I found some issue reports from people running debian (on x86_64) with the same problem. I guess we have to wait for 1.2 release.

Have you had a look at https://helm.sh/ package manager for Kubernetes?

I'm running Gogs, Drone and MySQL on a 6-node mixed Pi cluster using NFS volumes. I'm not happy with the NFS setup, so I will try to add some nodes and set up glusterfs.

I really would like to try out RancherOS if I can find the time.

I’ll try to help if there’s anything I can do. I’m more of a dev guy than ops, but I’m learning :)

/Regards


@luxas
Owner Author

luxas commented Feb 19, 2016

@lavvy No, it won't have a UI. It's just some .yaml template files.

@larmog Yeah, the delay upgrading from docker-1.9 to docker-1.10 is expected. As you may have seen, I've uploaded heapster to dev and I'm testing it now, so that it integrates with the dashboard. Maybe we'll have to wait for v1.2 and a newer dashboard release.

Yeah, I've heard/read about helm, but haven't used it. It's an interesting project (but only for x86_64 😄)

I have also thought about glusterfs, but haven't had time to port it to ARM yet. Probably v0.7.5-ish if I do it for this project.

RancherOS took a lot of hacking and had so many rough edges that I'm waiting for rancher/os#760 before I really start to think about it for this project.

I've read your blog posts, and they're very nice. Haven't tried drone yet. Interesting. Keep up the good work :)

@lavvy

lavvy commented Feb 19, 2016

+1 billion for glusterfs in this project @lucas :)

@larmog
Contributor

larmog commented Feb 24, 2016

Hi,
+1 for "get rid of --containerized => more like the "native" kubelet"

@lavvy I'm now running glusterfs 3.5.2-2 instead of NFS and it just works: http://bit.ly/1XJRAJO

@lavvy

lavvy commented Feb 24, 2016

Wow, really good. I wish @lucas could support this upstream. It seems too technical for us noobs.
I believe persistent storage is essential, especially for private cloud infrastructure like k8s on ARM.

@luxas
Owner Author

luxas commented Feb 24, 2016

@larmog Thanks for your work again! I read your post yesterday, and it was amazing! I'm gonna investigate it more, but now I have a rough idea of what it looks like, thanks to your work.

+1 for "get rid of --containerized => more like the "native" kubelet"

I'm gonna do this when upstream supports it. There are known issues for the time being.

However, all this takes time. Now I'm busy with many things, building v0.7.0 and validating v1.2.0 for ARM, testing everything on RancherOS and porting all this to upstream Kubernetes at the same time, so it will take some time before glusterfs is running smoothly OOTB from this project. But anyway, just follow @larmog's guide and you're done in no time at all! 😄

@lavvy Please ping me with @luxas. @lucas is a completely different person whom I don't know, and he probably doesn't want these conversations in his email.

@lucas

lucas commented Feb 24, 2016

Thanks @luxas, much appreciated :)

@lavvy

lavvy commented Feb 24, 2016

Oh, I'm sorry, thanks for the hint. Noted

@larmog
Contributor

larmog commented Feb 24, 2016

@luxas is there a reason why the flannel config uses udp as Backend instead of host-gw?
I've switched to host-gw. See: http://machinezone.github.io/research/networking-solutions-for-kubernetes/

@luxas
Owner Author

luxas commented Feb 25, 2016

No, there isn't a specific reason; it was just the default.
I planned to include vxlan, but I'll switch to host-gw as the default, thanks to your performance-measurement link.
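For reference, flannel picks its backend up from a well-known etcd key at startup, so switching backends is just a config change. A hedged sketch (the network CIDR and file path are placeholders, not this project's actual values):

```shell
# Hedged sketch: flannel reads this JSON from etcd before it starts.
# host-gw routes between nodes without udp/vxlan encapsulation, which is
# why it benchmarks faster. The CIDR below is a placeholder.
cat > /tmp/flannel-config.json <<'EOF'
{ "Network": "10.1.0.0/16", "Backend": { "Type": "host-gw" } }
EOF
# On the machine running etcd (v2 API, as flannel used at the time):
#   etcdctl set /coreos.com/network/config "$(cat /tmp/flannel-config.json)"
```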

@larmog
Contributor

larmog commented Feb 25, 2016

Sorry @luxas my fault, I was too quick when I said that I got it working.
I checked the logs this morning and flannel still uses udp. I'll try to put some more work into it, and maybe create an issue to track it.

@luxas
Owner Author

luxas commented Feb 29, 2016

Progress! Now I've got Kubernetes v1.2.0-alpha.8 running on my Pis.
I'm using the official binaries I've built in kubernetes/kubernetes#19769.
Also heapster is somehow working.

I'm using host-gw as @larmog suggested, and flanneld is built statically, resulting in a 72 MB smaller kubernetesonarm/flannel image. Yay!

@luxas
Owner Author

luxas commented Mar 4, 2016

+1 for "get rid of --containerized => more like the "native" kubelet"

@larmog (and others) Please test this:

$ docker run -d --net=host kubernetesonarm/etcd
$ docker run \
    --volume=/sys:/sys:ro \
    --volume=/var/lib/docker/:/var/lib/docker:rw \
    --volume=/var/lib/kubelet/:/var/lib/kubelet:shared \
    --volume=/var/run:/var/run:rw \
    --net=host \
    --pid=host \
    --privileged=true \
    -d \
    kubernetesonarm/hyperkube \
    /hyperkube kubelet \
        --pod_infra_container_image=kubernetesonarm/pause \
        --hostname-override="127.0.0.1" \
        --address="0.0.0.0" \
        --api-servers=http://localhost:8080 \
        --config=/etc/kubernetes/manifests-multi \
        --cluster-dns=10.0.0.10 \
        --cluster-domain=cluster.local \
        --allow-privileged=true --v=2

This spins up a single node cluster (do kube-config disable && kube-config delete-data first).
Of course, you may hack the /usr/lib/systemd/system/k8s-master.service too.

Test mounting a secret inside a pod, using the downward API and such things. If these tests pass, I'll switch to this more "native" approach in v0.7.0. It requires docker-1.10+, but since v0.6.5 all machines should be running docker-1.10

@luxas
Owner Author

luxas commented Mar 5, 2016

Yeah, now it works! Got native kubelet support with: aefa2bc

@larmog
Contributor

larmog commented Mar 5, 2016

@luxas great, I had problems mounting secrets with your first suggested solution.

I'll test if changing:

- -v /var/lib/kubelet:/var/lib/kubelet:rw \
+ -v /var/lib/kubelet:/var/lib/kubelet:shared \

makes a difference

@larmog
Contributor

larmog commented Mar 6, 2016

I ran some tests and it looks great. It seems that a bunch of errors from unmounting and removing pods are gone. I saw that it's still the 0.6.2 image, so I'll restart all nodes in "native" mode. I'll even try to enable heapster in the dashboard and see how it looks. Awesome job @luxas :D

@luxas
Owner Author

luxas commented Mar 6, 2016

Yes, I haven't published images newer than v0.6.2 to Docker Hub, because not much has changed.
But soon, when v1.2 is released, we'll have brand new images!

@DorianGray
Contributor

Hey all, joining the party late. Can we start documenting each thing that needs to get done as a separate issue? I'd love to help with implementation and testing.

I'm planning on getting this running on odroid c2's personally, also have rasp pi 2's for testing.

@luxas
Owner Author

luxas commented Mar 22, 2016

And there we go! v0.7.0 released. See the release notes
Please test it and give feedback 😄

@luxas
Owner Author

luxas commented Mar 22, 2016

BTW, @larmog if you want to send an initial PR with your elk things, we could try to make general scripts for it. It would also be fun to look into glusterfs for the next release (PS: an image is pushed here)

@larmog
Contributor

larmog commented Mar 23, 2016

Congrats on a great job, @luxas.
My own work feels like a mess right now. I started with elk, and then the c2 board arrived, which gave me some headaches. When I was done with the c2 and tried to run gluster on it, it failed. My cluster is now a mix of different OSes, boards and versions, and I need to clean up and start over.

I'll be back with PR for elk.

PS. the C2 with eMMC is lightning fast compared to Pi2. Only wish it had a newer kernel.

@nsteinmetz

Hi,

Congrats on v0.7.0 to @luxas and all the other participants; I was less active these last weeks/months, but with all this new stuff, I need to free up some time to test and use it.

I may even buy some new cards to replace my RPI1s :-P

@DorianGray
Contributor

@larmog I had to make relatively few changes to https://github.com/sterburg/kubernetes-glusterfs-server to get it working on ARM. It works pretty well, but it's annoying that kubernetes-gluster requires the fuse kmod on the host and the glusterfs-fuse driver in the hyperkube container to mount volumes. I found it difficult to get working with the registry without moving the registry out of the kube-system namespace (might be a good idea anyway), because the kubernetes volume plugin only allows searching for endpoints in the client pod's namespace. It also, imo, should have replicas set to nodes/2 by default...
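To make the namespace limitation concrete: the glusterfs volume plugin resolves an Endpoints object by name in the pod's own namespace, so the client pod and these endpoints must live together. A hedged sketch (the endpoints name, server IPs, volume name, and file path are placeholders):

```shell
# Hedged sketch: the Endpoints object the glusterfs volume plugin looks up.
# It must exist in the SAME namespace as the pod mounting the volume.
# IPs and names below are placeholders.
cat > /tmp/glusterfs-endpoints.yaml <<'EOF'
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster   # referenced by name from pod volumes
subsets:
  - addresses:
      - ip: 192.168.0.20
    ports:
      - port: 1
  - addresses:
      - ip: 192.168.0.21
    ports:
      - port: 1
EOF
# A pod in the same namespace could then declare a volume roughly like:
#   volumes:
#   - name: data
#     glusterfs:
#       endpoints: glusterfs-cluster
#       path: myvol
```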

Also, I need to get eMMC cards... all of mine are SD, and all of these reinstalls are killing me slowly.

@larmog
Contributor

larmog commented Apr 8, 2016

@luxas I have got elk somewhat running, and sure, I can create a PR, but what to do with the docker images? I've forked (@)pires' projects, just changing the base images to ARM and rebuilding them using drone.io. I can use my images in the PR, but if you want them in kubernetesonarm, then we need to think of something else.

@luxas
Owner Author

luxas commented Apr 10, 2016

@nsteinmetz I'm glad that you're back in business :)

Many things have happened since v0.7.0!
I have hacked a lot on Kubernetes mainline. From v1.3.0-alpha.2 on, these will be the images (most of them are available already):

  • gcr.io/google_containers/etcd-arm:2.2.1
  • gcr.io/google_containers/flannel-arm:0.5.5
  • gcr.io/google_containers/hyperkube-arm:v1.2.0-alpha.2
    • With this image, you may follow the official docker guide just by replacing the image name! No need for --pod_infra_container_image anymore.
  • gcr.io/google_containers/pause-arm:2.0
  • gcr.io/google_containers/skydns-arm:1.0
  • gcr.io/google_containers/kube2sky-arm:1.15
  • gcr.io/google_containers/exechealthz-arm:1.0
  • gcr.io/google_containers/kubernetes-dashboard-arm:v1.0.1
  • gcr.io/google_containers/pause-arm64:2.0
  • gcr.io/google_containers/skydns-arm64:1.0
  • gcr.io/google_containers/kube2sky-arm64:1.15
  • gcr.io/google_containers/exechealthz-arm64:1.0
  • gcr.io/google_containers/kubernetes-dashboard-arm64:v1.0.1

That's a very long list, and the long-term goal for this project is to merge things with mainline; a huge step has been taken in the two weeks since I released v0.7.0

Feel free to test these images out! From a Google Kubernetes perspective, they are experimental (no commercial support, of course), but they work thanks to my PRs. For more information and links, see kubernetes/kubernetes#17981

@larmog I want the images to be in images/kubernetesonarm/ so we keep control here. A week ago I already uploaded kibana. I'm struggling to build glibc for alpine on my own, so I don't have to depend on others' uploads. But we'll see. If you want, upload a PR with the initial things, and we'll work on it together.

@DorianGray The official hyperkube image doesn't have glusterfs support for the time being, which is a bit sad. Is there any better (and easier) way of dealing with storage other than glusterfs?

The fact that kubernetes-gluster only works per-namespace is quite a big limitation. I'll take a look at the official docs.

@larmog
Contributor

larmog commented Apr 10, 2016

@luxas I agree that building everything on your own and not being dependent on others is the way to go. But that means more work ;). I'll create a PR as soon as I have something working.

I got tired of handling all the nodes manually and started using Ansible. I can really recommend starting to use something like Ansible from kube-config, so that you can manage the whole cluster from one central point.

@luxas and @DorianGray: a simpler way of handling storage is of course NFS, and you can export an NFS volume from gluster, so maybe that's the way to go? OpenShift uses NFS, so maybe we should start there? Running gluster-server as a DaemonSet and using NFS for mounting?

@luxas
Owner Author

luxas commented Apr 10, 2016

I got tired of handling all the nodes manually and started using Ansible. I can really recommend starting to use something like Ansible from kube-config, so that you can manage the whole cluster from one central point.

Then you should watch as progress is being made in https://github.com/kubernetes/kube-deploy

a simpler way of handling storage is of course NFS, and you can export an NFS volume from gluster, so maybe that's the way to go

We should take a look at it, at least. How fast is glusterfs? IIRC, I've heard someone say ~2 MB/s on a Raspberry Pi. Maybe plain NFS would be faster.
We will take advantage of DaemonSets. I'm planning to move out at least kube-proxy to a DaemonSet for the next release.

@larmog
Contributor

larmog commented May 6, 2016

Hi,
I have EFK (elasticsearch, fluentd, kibana) currently running on kubernetes-on-arm. It works, but it requires multiple nodes, and I've placed es-data on an odroid-c2 board. It also uses glusterfs, and I'm running the kubelet and kube-proxy natively (not as containers). I don't know how useful it might be as an addon, but the code can be found here: @kodbasen. I have ten nodes running, and they generate quite a lot of log data (storing journald and container logs). It probably requires something other than an SD card for storage.

I'm willing to help if it is something for this project. If not, those who want can take a look at @kodbasen. There is probably room for a lot of improvements ;)

@luxas
Owner Author

luxas commented Aug 6, 2016

Hi everyone!
I'm working on a new release: v0.8.0
It will feature v1.3.4, helm, automatic DNS and dashboard, and a lot of other cool things, and it's a complete rewrite of v0.7.0!

With this release, everything is cross-compiled

Now, kubernetes-on-arm is based on docker-multinode, and is built from "official" hyperkube arm images.

Feel free to ask things about it!

BTW, I've been maintaining mainline Kubernetes since April, so now I'm improving Kubernetes directly at first hand, then porting things to ARM.

In v1.4, the focus lies on easy cluster deployment, and I'm working a lot on it.
If it goes really well, Kubernetes will be only one command away after that!

I'm planning to make a follow-up v0.8.5 (or v0.9.0) later on, with additional features and hopefully some performance improvements.

Thanks for following this project!
