
Configure build disk capacity #757

Closed
jithine opened this issue Oct 6, 2017 · 8 comments

jithine commented Oct 6, 2017

Context

As a pipeline owner, I want the ability to configure my build to be run on a machine with higher disk capacity than what is allocated by default.

Objective

Disk capacity allocated by default (20G) is not suitable for very large Docker images. Users should be allowed to specify a larger disk capacity for their builds.

Requirements

  1. The user specifies their disk capacity requirement in screwdriver.yaml
  2. The requirement can be specified at the job level or at the pipeline level
  3. If the executor cannot honor the user's requirement, this should be communicated to the user (an entry in the build log should be okay)
  4. We could provide presets to the user, eg: HIGH_DISK/HIGH, which could potentially be combined with other resource requirements (eg: memory/cpu). Eg: HIGH could mean high capacity for cpu, disk & memory; HIGH_DISK could mean high capacity for disk and default for everything else.

minzcmu commented Nov 7, 2017

PR to add disk space to hyperd daemon config has been merged. Waiting for it to be released.
hyperhq/hyperd#663


minzcmu commented Nov 14, 2017

Update 11/14

V1.0 with the above change has been released.
http://download.hypercontainer.io/

Will test it out today.


minzcmu commented Nov 15, 2017

Summary 11/14

It turned out that my PR won't solve our problem: it bumps the pool size for all devices, but what we want is to bump the size for a single device/VM. :(

Read through their code and did some hacking to get it to work:

  1. change size inside /var/lib/hyper/devicemapper/metadata/base to 20G
    {"device_id":1,"size":21474836480,"transaction_id":1,"initialized":true,"deleted":false}
  2. sudo service hyperd stop, remove previous devices created by hyper from /dev/mapper and then sudo service hyperd start
  3. Now all the new vm will be created with the new size.
  4. But when I exec into the container, the root volume is still 10G, while lsblk gives 20G for the device:
root@ubuntu-1022953525:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda         10G  141M  9.9G   2% /
devtmpfs         54M     0   54M   0% /dev
tmpfs            59M     0   59M   0% /dev/shm
rootfs           54M   19M   35M  36% /etc/hostname
share_dir       1.0M  4.0K 1020K   1% /etc/hosts
root@ubuntu-1022953525:/# lsblk
NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda    8:0    0 19.3G  0 disk /
  5. After I installed xfs tools and ran xfs_growfs /dev/sda, the root is 20G
root@ubuntu-1022953525:/# xfs_growfs /dev/sda
meta-data=/dev/sda               isize=512    agcount=16, agsize=163824 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=2621184, imaxpct=25
         =                       sunit=16     swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 2621184 to 5062846
root@ubuntu-1022953525:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda         20G  474M   19G   3% /
devtmpfs         54M     0   54M   0% /dev
tmpfs            59M     0   59M   0% /dev/shm
rootfs           54M   19M   35M  36% /etc/hostname
share_dir       1.0M  4.0K 1020K   1% /etc/hosts
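The numbers in the transcripts above line up: the size field written into the devicemapper metadata is exactly 20 GiB, and the before/after xfs_growfs block counts match the old 10G root and the 19.3G device lsblk reports. A quick sanity check (plain Python, all values copied from the output above):

```python
# Values copied from the transcripts above; GiB = 1024**3 bytes.
GIB = 1024 ** 3

# "size" field written into /var/lib/hyper/devicemapper/metadata/base
base_size = 21474836480
print(base_size == 20 * GIB)            # True -> exactly 20 GiB

# xfs_growfs grew the data section from 2621184 to 5062846 blocks of 4096 bytes
print(round(2621184 * 4096 / GIB, 1))   # 10.0 -> the old 10G root
print(round(5062846 * 4096 / GIB, 1))   # 19.3 -> matches lsblk's 19.3G
```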

TO DO

This is pretty hacky. I will ping people from hyperd to see if there is an easier way to achieve this.


minzcmu commented Nov 15, 2017

Updates 11/14

I talked to the hyperd team. @bergwolf helped me investigate the problem and opened another PR to fix it!
hyperhq/hyperd#677

We can build our own binaries after that PR is merged.


minzcmu commented Nov 22, 2017

Updates 11/22

We have an internal pipeline to build the rpm files for hyperd and push them to Artifactory.
Tested, works great!

PR to bump disk space to 20G has been merged by Chestery. Will be working with Chestery to roll it out to the K8S cluster today.


Filbird commented Sep 11, 2018

Update 9/10

We have a pending PR in the k8s-vm repository to parse the new beta.screwdriver.cd/disk annotation and set tolerations and node affinity selectors depending on the value.

Similar to the other resource configurations, it will be up to the cluster admin to dictate how the annotation should be consumed. For our purposes, labeling nodes with disk=high works for delegating builds to the right resources.

Accompanying this change is a PR in the guide to document the new annotation.


Filbird commented Sep 12, 2018

Update 9/11

We implemented a parseAnnotations function in executor-base to strip out the beta.screwdriver.cd and screwdriver.cd prefixes from relevant annotations.
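The behavior can be sketched like this (illustrative Python only — the actual parseAnnotations lives in executor-base and is JavaScript; the function signature and return shape here are assumptions):

```python
# Illustrative sketch only -- not the real executor-base implementation.
# Strip the beta.screwdriver.cd/ and screwdriver.cd/ prefixes so both
# spellings of an annotation resolve to the same option name.
def parse_annotations(annotations):
    prefixes = ('beta.screwdriver.cd/', 'screwdriver.cd/')
    parsed = {}
    for key, value in annotations.items():
        for prefix in prefixes:
            if key.startswith(prefix):
                key = key[len(prefix):]
                break
        parsed[key] = value
    return parsed

print(parse_annotations({'beta.screwdriver.cd/disk': 'HIGH'}))  # {'disk': 'HIGH'}
print(parse_annotations({'screwdriver.cd/cpu': 'LOW'}))         # {'cpu': 'LOW'}
```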

Alongside this change we have announced the deprecation of the beta.screwdriver.cd prefix within our guide.

Additionally, we have decided to rename the label from disk to screwdriver.cd/disk to make it less generic.

screwdriver-cd/executor-k8s-vm#42


Filbird commented Sep 28, 2018

Summary

disk is now a configurable resource. This resource differs from cpu and ram, however, in that it is not a native container compute resource in the context of Kubernetes. https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/

As such, the implementation is currently exclusive to the k8s-vm executor. Nodes are pre-selected to support high-disk builds. Those nodes are labeled with screwdriver.cd/disk=high and have hyperd configurations set to enable a higher base storage size. From the executor's perspective, it simply needs to append the appropriate nodeAffinity selector to ensure that the pod is scheduled on one of these nodes.
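The appended scheduling constraint looks roughly like the following pod-spec fragment (standard Kubernetes nodeAffinity syntax; the exact shape the executor generates may differ):

```yaml
# Sketch of a nodeAffinity constraint keyed on the screwdriver.cd/disk label;
# the executor's generated spec may differ in detail.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: screwdriver.cd/disk
              operator: In
              values: ["high"]
```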

@Filbird Filbird closed this as completed Sep 28, 2018