---
# Settings
mainfont: Hoefler Text
# mainfont: "Linux Libertine O"
fontsize: "10pt"
lang: "en-US"
geometry: "a4paper, left=28mm, right=28mm, top=17mm, bottom=17mm"
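# These settings are consumed by the pandoc/XeLaTeX template from the
# upstream mrzool/cv-boilerplate. As a rough sketch (hypothetical paths;
# check this fork's Makefile for the actual recipe), rendering might be:
#   pandoc details.yml --template=templates/template.tex \
#     --pdf-engine=xelatex -o output/cv.pdf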
# Personal details
name: Tomasz Klosinski
address:
  - "Not valid"
  - "Anymore"
phone: "+41 XYZ"
email: XYZ
# Insert URLs without http://
linkedin: tomaszklosinski
# gpg:
# key: "E75959B0"
# url: "http://pgp.mit.edu/pks/lookup?op=get&search=0x0F79558EE75959B0"
# Sections
# intro:
# ""
skill:
  - Docker & Kubernetes
  - Python Development
  - Cloud Computing
  - Monitoring & Observability
  - Linux System Administration
interests:
  - Cloud-Native Infrastructure
  - Software Engineering
  - Automation & Configuration Management
  - Continuous Delivery (CI/CD)
  - Open Source Solutions
languages:
  - language: Polish
    proficiency: Native
  - language: Python
    proficiency: Native
  - language: English
    proficiency: C1
  - language: German
    proficiency: B1
experience:
  - years: "2019--Present"
    employer: Move Digital AG
    job: Senior DevOps Engineer
    city: Zurich, Switzerland
    description:
      "At Move Digital I have been working on:
      - Upgrading and refactoring the Docker-based CI workflow (CircleCI v2)
      - Maintaining Docker images (Dockerfiles), Docker Compose stack configurations & Docker Swarm clusters
      - Refactoring Docker Compose stack files to support multiple versions for multiple use-cases & environments
      - Automating deployments from GitHub, through CircleCI builds/tests and a Docker Hub image repository, to Docker Swarm dev/stage/prod clusters
      - Building an ELK logging stack with Kibana dashboards
      - Building Prometheus-based monitoring and performance metrics collection, with Grafana dashboard visualisations, for Node apps, RabbitMQ, Elasticsearch and Redis
      - Managing Exoscale VMs, AWS services and physical servers in a local datacenter with Ansible, following the Infrastructure as Code paradigm"
- years: "2018--2019"
employer: Nectar FInancial AG
job: Senior DevOps Engineer
city: Altendorf (SZ), Switzerland
description:
"At Nectar I worked on:
- Upgrading and refactoring docker-based CI workflow (CircleCi v2)
- Maintaining Docker images (Dockerfiles), Docker Compose stacks configuration & Docker Swarm clusters
- Refactoing Docker Compose stack files to support multiple versions for multiple use-cases & environments
- Automating deployments from Github via CircleCI building/testing via Docker Hub images repo to Docker Swarm dev/stage/prod clusters
- Building ELK logging stack with Kibana dashboards
- Building Prometheus-based monitoring and performance metrics collection and Grafana dashboards visualisations for Node apps, RabbitMQ, Elasticsearch, Redis
- Manage Exoscale VMs, AWS services and physical servers in a local datacenter with Ansible using Infrastructure as a Code paradigm"
- years: "2016--2017"
employer: Flynt Bank AG
job: Senior DevOps Engineer
city: Zug, Switzerland
description:
"At Flynt I have worked on:
- Building a logging and tracing/metrics collection system based on Elasticsearch/Fluentd/Kibana and Cassandra/KairosDB/Fluentd/Grafana and auditing system based on auditd.
- Automation of VMs provisioning (VMware), bootstrapping CentOS 7 system configuration and deploying services and applications on top of them
- Developing ~100 Ansible roles for Linux services like Cassandra, Elasticsearch, HAProxy and Nginx load balancers and our scala/akka-based banking/financial applications.
- Designing and implementing a package delivery workflow - from fetching jars from Nexus and Ansible code from GitLab to deploying a new version on multi-staged environments.
- Interviewing new DevOps candidates."
- years: "2015--2016"
employer: CERN
job: Project Associate
city: Geneva, Switzerland
description:
"At CERN Control Center I have been working in the support team of over 2000 (OpenStack) VMs, desktops and servers.
Most of the servers that I have been taking care of were Linux machines providing mission critical software
(code base of over 10 MLOC of Java and 500 kLOC of C++ and C, structured in roughly 1000 projects) for the operation of LHC and other accelerators.
Along with my daily support and troubleshooting tasks for around 70 accelerator operators and 200 software developers,
I have been involved in numerous activities in the System Administration Modernisation project."
- years: "2014–-2015"
employer: Linux Polska
job: Solution Architect
city: Warsaw, Poland
description:
"At Linux Polska I have worked on a continuous delivery system for building rpm packages.
I have contributed also to the organization of the biggest Open Source event in Poland
(which Linux Polska is co-organizing every year in May): \\href{http://opensourceday.com/}{Open Source Day}."
- years: "2011--2014"
employer: IMPAQ
job: Software / System Engineer
city: Warsaw, Poland
description:
"My career as a System Engineer at IMPAQ can be split into two periods.
For first half year I have been member of a support team of RHEL-based telco applications for international cellular networks.
For another 1.5+ year I have worked for Machine-to-Machine, Cloud Computing and Big Data Business Practice (Department)."
- years: "2010--2011"
employer: Outbox
job: Junior Consultant
city: Warsaw, Poland
description:
"At Outbox I have worked on the development of the CRM system based on Oracle PeopleSoft platform for Telekomunikacja Polska SA."
education:
  - year: "2010--2012"
    subject: Information Technology (Databases)
    degree: M.A.
    institute: Polish-Japanese Academy of Information Technology
    city: Warsaw, Poland
  - year: "2008--2011"
    subject: Business Management
    degree: B.A.
    institute: University of Lodz
    city: Lodz, Poland
  - year: "2007--2010"
    subject: Computer Science
    degree: B.A.
    institute: University of Lodz
    city: Lodz, Poland
certification:
  - title: "Red Hat Certified Engineer"
    license: 140-054-446
    url: "https://www.redhat.com/rhtapps/certification/verify/?certId=140-054-446"
    dates: "2014--2017"
  - title: "Red Hat Certified System Administrator"
    license: 140-054-446
    url: "https://www.redhat.com/rhtapps/certification/verify/?certId=140-054-446"
    dates: "2014--2017"
project:
  - name: "Process Management and Deployment (PoC)"
    tags:
      - Ansible
      - Python
      - Bash
      - monit
    occupation: "CERN, BE-CO-IN"
    years: "08/2015--Present"
    description:
      "* One of our most challenging projects in the CERN BE-CO-IN section was to design and implement
      a Proof of Concept platform that could replace the legacy service management solution
      while keeping the interface and the release-deploy-configure workflow backward compatible for users.
      The platform consisted of a Python- and Ansible-based tool for deploying the accelerator controls applications,
      adding/removing services to/from an applications database (to make them persistent across installations),
      and generating configuration for service monitoring and management (based on monit).
      Additionally, we provided a consistent solution for supplying user-specific checks for a service or group of services."
- name: "System Administration Modernisation Project"
tags:
- Ansible
- git
- auditd
- kickstart
- PXE
- monit"
occupation: "CERN, BE-CO-IN"
years: "04/2015--Present"
description:
"* My responsibilities in the project have been mostly related to the transformation of an old bash/awk/Python script-based
configuration management to a modern Ansible-based solution. One of the main goals of the project was to replace
inconsistent (and in some cases non-existing) version controlling systems of various parts of system configuration with a single git tree.
* Some of the most interesting challenges I have faced included developing Ansible playbooks for:\\\
** converting Python-based variable database into a set of Ansible variables\\\
** converting specific sudo and SSH configuration for all machines\\\
** implementing a filesystem monitoring system based on auditd, rsyslog, Elasticsearch, Logalike and Kibana (dashboards)\\\
** massive analysis, comparison, fixing errors and making more consistent of kickstart files for all 2000+ machines (included also changes in PXE servers and generating kickstart files based on Ansible inventory file and CERN's internal sources of data about hosts)\\\
** creating monit configuration for process management and monitoring."
- name: "Accelerator Project Exploitation Tools - Tracing"
tags:
- Ansible
- monit
- Elasticsearch
- Logalike
- Logstash
- Kibana
- nginx
- monit
- Java
occupation: "CERN, BE-CO-DO"
years: "05/2015--Present"
description:
"* My main responsibility in DO has been system administration activities regarding the Elasticsearch/Logalike/Kibana clusters,
that served as a database, search and dashboarding system for the accelerator controls applications and Linux servers' logging.\\\
* Among numerous tasks in this area, I have been working on:\\\
** preparation of the operation systems for the smooth clusters operation\\\
** upgrade to ES 2.0 and Kibana 4\\\
** creation of nginx reverse proxy configuration\\\
** creation of monit configuration and checks for all ELK services\\\
** a little bit of Java 8 programming (Logalike extension)."
- name: "Automated Delivery Pipeline for building RPM packages"
tags:
- Jenkins
- git
- Bash
- RPM
- GitLab
- Koji
- Vagrant
- Chef
occupation: "Linux Polska"
years: "02/2015--03/2016"
description:
"* At Linux Polska I have had the opportunity to design and implement Automated Delivery Pipeline for building RPM packages based set of bash scripts
combined into consistent chain of jobs (and eventual rollbacks) in Jenkins.
My responsibilities in the project included maintenance of the crucial services: Jenkins, GitLab.
I have developed also multitude of mostly bash (and sometimes Python) scripts. I have managed also our git repositories.
For building RPM packages we have used set of open source tools (mostly from Fedora Project):
koji, mock, mash/createrepo, revisor/pungi, Sigul.
Our testing environment was based on Vagrant, Packer, RHEV's virtual machines spawned during
the testing phase and removed after it success or failure. We have monitored all services using Zabbix."
- name: "Big Data, Cloud Computing and DevOps POC projects"
tags:
- OpenStack
- GlusterFS
- KVM
- Chef
- Hadoop
occupation: "IMPAQ"
years: "07/2012--01/2014"
description:
"(1) In the very early stage of its development, I've worked on establishing and maintenance a (Ubuntu Server and KVM-based)
OpenStack 12-nodes cluster integrated with a GlusterFS cluster (as a replacement for NFS for instance migration between nodes).
Installation and configuration of the machines had been automated with Opscode Chef.
I have really enjoyed it, although at that time it was too early for adoption for OpenStack in production.
Similar deployment has been prepared by me also for Eucalyptus cloud (except the OS difference: in this project I have used CentOS).\\\
(2) Successful evaluation of Chef as a DevOps configuration management tool, resulted in adoption of it in of our clients project.
I have written a chef cookbook for automatic orchestration of development and testing environment of Java web application
run on Apache Tomcat and using Oracle XE 11g. It was an application for one of the biggest Polish insurance companies.\\\
(3) Finally, I have worked on few Proof of Concept projects evaluating Cloudera Hadoop.
I was responsible for installation, configuration (using Chef) and maintenance of the cluster and additional tools.
The projects outcome was a set of MapReduce jobs (written in Java) and deployed on 20 nodes Hadoop cluster,
operating on 10TB of data. Additionally, we have used Flume for integration with Twitter API.
MR Job's configuration was managed with Oozie.
I've also integrated R Statistical Language with the cluster and I have written dozens of shell scripts for file manipulations on HDFS. "
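# A commented-out template for adding further project entries; the
# placeholder values below are hypothetical, and the field names mirror
# the schema used by the entries above:
# - name: "Project title"
#   tags:
#     - Tool or technology
#   occupation: "Employer, team"
#   years: "MM/YYYY--MM/YYYY"
#   description:
#     "* One-paragraph summary of the project and your role."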
---