From fda32bc69cca48326f318880a4920e590e3e93a1 Mon Sep 17 00:00:00 2001 From: aviskase Date: Thu, 4 Jan 2024 20:23:04 +0000 Subject: [PATCH] deploy: f8c5064eb08f2095a5859d871f37211e5b49d553 --- 404.html | 4 ++-- articles/2015/12/23/how-to-watch-swf-in-linux/index.html | 4 ++-- .../index.html | 4 ++-- .../2016/04/24/testing-knowledge-transfer-outline/index.html | 4 ++-- articles/2016/05/02/coverage-metrics/index.html | 4 ++-- articles/2016/05/09/thoughts-on-dear-evil-tester/index.html | 4 ++-- .../06/test-planning-questions-by-google-outline/index.html | 4 ++-- .../2016/07/17/course-intro-to-devops-by-udacity/index.html | 4 ++-- .../2016/07/27/notes-for-course-intro-to-linux/index.html | 4 ++-- articles/2016/07/30/thoughts-on-the-a-word/index.html | 4 ++-- articles/2017/03/19/enabling-l2tpipsec-in-ubuntu/index.html | 4 ++-- articles/2018/01/06/2018-and-mega-mind-map/index.html | 4 ++-- articles/2018/01/23/testers-in-this-world/index.html | 4 ++-- articles/2018/03/19/mega-mind-map-version-2/index.html | 4 ++-- .../03/20/remember-the-milk-for-task-management/index.html | 4 ++-- .../2019/07/30/amazing-marvin-for-task-management/index.html | 4 ++-- articles/2019/08/08/lasha-tumbai-or-rm-rf-ru/index.html | 4 ++-- articles/2019/08/26/revision-testers-in-this-world/index.html | 4 ++-- articles/2019/09/02/your-api-is-your-public-image/index.html | 4 ++-- articles/2019/09/04/stereotype-rant/index.html | 4 ++-- .../index.html | 4 ++-- articles/2019/09/15/soap-testing-101/index.html | 4 ++-- articles/2019/09/24/new-domain/index.html | 4 ++-- articles/2019/09/30/achievement-unlocked-rsta/index.html | 4 ++-- .../05/internal-struggle-with-language-gymnastics/index.html | 4 ++-- .../2019/10/13/how-to-test-api-usability-part-1/index.html | 4 ++-- .../2019/10/19/how-to-test-api-usability-part-2/index.html | 4 ++-- .../index.html | 4 ++-- articles/2019/11/11/weekly-hmms-__init__/index.html | 4 ++-- .../index.html | 4 ++-- .../weekly-hmms-fowler-collaborations-html-and-cts/index.html | 4 ++-- articles/2019/11/25/why-i-dont-use-postman/index.html | 4 ++-- .../11/30/weekly-hmms-watching-reading-learning/index.html | 4 ++-- .../12/07/weekly-hmms-communities-laptops-linux/index.html | 4 ++-- .../12/14/weekly-hmms-advent-of-code-nginx-apis/index.html | 4 ++-- .../index.html | 4 ++-- .../26/lunchlearn-linting-openapi-description-docs/index.html | 4 ++-- articles/2020/02/01/hmms-january/index.html | 4 ++-- .../07/api-testing-in-python-requests-vs-bravado/index.html | 4 ++-- articles/2020/03/08/hmms-february/index.html | 4 ++-- articles/2020/03/31/hmms-march/index.html | 4 ++-- .../2020/04/22/using-insomnia-for-api-exploration/index.html | 4 ++-- articles/2020/05/02/hmms-april/index.html | 4 ++-- articles/2020/06/02/hmms-may/index.html | 4 ++-- articles/2020/06/18/robust-apis-are-weird/index.html | 4 ++-- articles/2020/07/01/hmms-june/index.html | 4 ++-- articles/2020/07/03/knowledge-management-tools/index.html | 4 ++-- articles/2020/08/04/hmms-july/index.html | 4 ++-- articles/2020/08/08/be-chicken/index.html | 4 ++-- .../2020/08/20/postmortem-borking-ubuntu-again/index.html | 4 ++-- .../08/30/remembering-javascript-and-typescript/index.html | 4 ++-- articles/2020/09/01/hmms-august/index.html | 4 ++-- articles/2020/09/15/conference-notes-asc-2020/index.html | 4 ++-- articles/2020/10/04/from-pelican-to-hugo/index.html | 4 ++-- articles/2020/10/18/blog-redesign-phase-1/index.html | 4 ++-- .../index.html | 4 ++-- articles/2020/11/03/hmms-october/index.html | 4 ++-- articles/2020/11/30/hmms-november/index.html | 4 ++-- 
.../07/crash-course-into-api-related-terminology/index.html | 4 ++-- articles/2020/12/20/using-openapi-cli-intro/index.html | 4 ++-- articles/2020/12/30/hmms-2020/index.html | 4 ++-- .../01/10/using-openapi-cli-for-api-exploration/index.html | 4 ++-- .../using-openapi-cli-during-api-design-part-one/index.html | 4 ++-- .../using-openapi-cli-during-api-design-part-two/index.html | 4 ++-- .../08/16/using-openapi-cli-custom-preprocessing/index.html | 4 ++-- articles/2021/09/06/using-openapi-cli-custom-rules/index.html | 4 ++-- articles/2023/12/15/the-navys-it-challenges/index.html | 4 ++-- ...9959de045977e43a3521ef920460d6a3016b82e46fecd52f047418.css | 1 - ...9e3c994ba757b870d5cd72f0513c62d420900272db6a26240265bb.css | 1 + index.html | 4 ++-- pages/about/index.html | 4 ++-- pages/archive/index.html | 4 ++-- pages/i-dont-track-you/index.html | 4 ++-- 73 files changed, 143 insertions(+), 143 deletions(-) delete mode 100644 css/main.min.11d7659db89959de045977e43a3521ef920460d6a3016b82e46fecd52f047418.css create mode 100644 css/main.min.66d509a01b9e3c994ba757b870d5cd72f0513c62d420900272db6a26240265bb.css diff --git a/404.html b/404.html index a09100eb..ab0ff95a 100644 --- a/404.html +++ b/404.html @@ -1,4 +1,4 @@ Whoops! | aviskase -

Whoops!

404 =(

\ No newline at end of file diff --git a/articles/2015/12/23/how-to-watch-swf-in-linux/index.html b/articles/2015/12/23/how-to-watch-swf-in-linux/index.html index c37f0d9a..26b7fc4b 100644 --- a/articles/2015/12/23/how-to-watch-swf-in-linux/index.html +++ b/articles/2015/12/23/how-to-watch-swf-in-linux/index.html @@ -2,9 +2,9 @@

How to watch SWF in Linux

23 Dec 2015
 1 min

Sometimes people use Jing to record videos for bug reports. This pest saves them as SWF files. So, here is a simple note on how to open these videos in Linux.

It’s really easy. Firefox can open them (if the Shockwave plugin is present, of course). Just download that nasty video, open it in FF, and you’re done.

But there is a catch. I don’t know how it is on Windows, but on Linux you have to edit MIME types. To do that, create a file ~/.mime.types with this content:

application/x-shockwave-flash  swf swfl
 

That’s all! This way is the easiest; note that it works only for the owner of the home directory where the file was created.

If you want, you can make this setting global instead. But be careful: everything will be reset after an upgrade. Open the file:

$ sudo nano /usr/share/mime/packages/freedesktop.org.xml
 

and replace this line:

<mime-type type="application/vnd.adobe.flash.movie">
 

with this one:

<mime-type type="application/x-shockwave-flash">
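After editing the global file, the MIME database usually has to be regenerated for the change to take effect. On distributions that ship the standard shared-mime-info package, this should do it:

$ sudo update-mime-database /usr/share/mime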
diff --git a/articles/2016/01/09/list-of-articles-and-videos-on-api-and-web-services-testing/index.html b/articles/2016/01/09/list-of-articles-and-videos-on-api-and-web-services-testing/index.html
index 149358b6..ddae0ea5 100644
--- a/articles/2016/01/09/list-of-articles-and-videos-on-api-and-web-services-testing/index.html
+++ b/articles/2016/01/09/list-of-articles-and-videos-on-api-and-web-services-testing/index.html
@@ -2,8 +2,8 @@
 

List of articles and videos on API and web services testing

9 Jan 2016
 3 mins

There was a great list on the now discontinued site qahelp.net. I managed to save it through the Yandex cache (even the Google cache and the Web Archive couldn’t help).

Difference between API and web services

SOAP and REST

API and web services testing

Deep dive into REST API

API security testing

Service virtualization

Introduction to microservices

Microservices testing

More on API

older   ·   ·   ·   newer
\ No newline at end of file diff --git a/articles/2016/04/24/testing-knowledge-transfer-outline/index.html b/articles/2016/04/24/testing-knowledge-transfer-outline/index.html index 1f642617..28120f83 100644 --- a/articles/2016/04/24/testing-knowledge-transfer-outline/index.html +++ b/articles/2016/04/24/testing-knowledge-transfer-outline/index.html @@ -1,6 +1,6 @@ Testing knowledge transfer (outline) | aviskase -

Testing knowledge transfer (outline)

24 Apr 2016
 1 min

There is an excellent blog post (ru) by Elena Poplouhina listing what not to forget to tell project newcomers about testing. So I’ve made an English translation with some corrections. Here it is as an outline.

older   ·   ·   ·   newer
\ No newline at end of file diff --git a/articles/2016/05/02/coverage-metrics/index.html b/articles/2016/05/02/coverage-metrics/index.html index 032bf075..53caf408 100644 --- a/articles/2016/05/02/coverage-metrics/index.html +++ b/articles/2016/05/02/coverage-metrics/index.html @@ -2,8 +2,8 @@

Coverage metrics

2 May 2016
 4 mins

There are two strange metrics on the project where I work: how much is tested and how much works. Every time I have to update these two numbers, my head explodes. I don’t like them at all.

What does “how much is tested” mean?

Traditionally, we write the percentage of completed checks out of all checks in the checklist. I don’t like this tradition, because checks are different: here you just need to assert an error’s text message, and there you need to reproduce that error under certain conditions.

Here is my solution. Every check should have a corresponding number, like a story point. Let’s call it a test point. Checking the text costs 1 tp, and reproducing an error — 3 tp. That way the whole checklist costs 4 tp. While going through the checklist, a tester writes down how much is tested for every check. For example, the text is asserted — 1 tp. But reproducing is done only for several scenarios — so only 2 tp. In total, we completed 3 out of 4 test points.
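To make the arithmetic concrete, here is the same calculation as a tiny shell sketch (the numbers are from the example above):

# asserting the text = 1 tp, reproducing the error = 3 tp
total_tp=$((1 + 3))   # the whole checklist costs 4 tp
done_tp=$((1 + 2))    # text asserted (1 tp) + partial reproduction (2 tp)
echo "completed $done_tp of $total_tp test points"   # prints: completed 3 of 4 test points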

What does “how much works” mean?

Based on what other testers are doing, it is the number of “green” (or passed) checks out of all checks.

Consider this situation: we are testing some kind of import. The test session starts, and voilà, there is no “Import” button. That’s a blocker, and no more checks can be done. What do we write for “how much works”? Zero. A developer fixes the button. Checks are flying, everything is perfect, and we write 100% (a miracle).

Here is a question: is it correct that we put zero after the first session? After all, it was only a button; everything else was already working. I think it is. But we should rename this metric, because it shows not how much works, but how much is available to a user, and to what extent.

And what do we get as a result?

Checklists are measured in test points, and the availability metric is renamed. Now watch carefully: we are throwing away percentages. After all, checklists are different, so why should we use percentages when they can’t reflect reality? 20% of a small task with 10 tp is not bad; 80% of a task with 1000 tp can ruin a release. And strictly speaking, this method uses an interval scale, not a ratio scale. You can’t multiply and divide on an interval scale and, therefore, can’t calculate percentages.

Guru talks

Michael Bolton writes a lot about choosing the right scales and using them properly. Recently there was yet another article. His position is that in testing even an interval scale is too much; nominal and ordinal scales are more appropriate. There is an excellent example in that article, so I’ll just leave it here:

  • ⚪ Level 0: we know nothing at all about this area of the product.
  • 😶 Level 1: we have done a very cursory evaluation of this area. Smoke- or sanity-level; we’ve visited this feature and had a brief look at it, but we don’t really know very much about it; we haven’t probed it in any real depth.
  • 😐 Level 2: we’ve had a reasonable look at this area, although we haven’t gone all the way deep. We’ve examined the common, the core, the critical, the happy paths, the handling of everyday errors or exceptions. We’re pretty familiar with this area. We’ve done the kind of testing that would expose some significant bugs, if they were there.
  • 😎 Level 3: we’ve really kicked this area harshly and hard. We’ve looked at unusual and complex conditions or states. We’ve probed deeply for subtle or hidden bugs. We’ve exposed the product to the extreme, the exceptional, the rare, the improbable. We’ve looked for bugs that are deep in the corners or hidden in the dark. If there were a serious bug, we’re pretty sure we would have found it by now.
older   ·   ·   ·   newer
\ No newline at end of file diff --git a/articles/2016/05/09/thoughts-on-dear-evil-tester/index.html b/articles/2016/05/09/thoughts-on-dear-evil-tester/index.html index 7302a1cc..54d1480b 100644 --- a/articles/2016/05/09/thoughts-on-dear-evil-tester/index.html +++ b/articles/2016/05/09/thoughts-on-dear-evil-tester/index.html @@ -5,8 +5,8 @@ The first part is kick-ass. You can just quote a random sentence from it and it will be great. For instance, I really want to use this one as a personal motto:">

Thoughts on “Dear Evil Tester”

9 May 2016
 2 mins

Recently I read the book “Dear Evil Tester” by Alan Richardson. The book has three parts: published letters, unpublished letters, and essays.

The first part is kick-ass. You can just quote a random sentence from it and it will be great.

For instance, I really want to use this one as a personal motto:

The only principle I’m prepared to absolutely commit to, with absolute certainty, is that I can change my mind.

And this one clearly shows the moment when everything starts flying to hell:

If a person has the power to cause the project to fail, then they can say ‘testing is not required’, at the point they make the decision to doom the project.

Oh, how often it happens: a check is “done”, and then no, not a single step back, there will be no time for retesting and rechecking:

And as we all know, once a ‘Test Case’ is done, it can never be undone.

Gurus being gurus, so classic:

Enumerate everything that you do, and only you do, and then define ‘true’ Exploratory Testing as the specific combination of items that you enumerated

The second part wasn’t as interesting. Yep, there are more practical and philosophical ideas, but they weren’t evil enough. The same applies to the third part.

Nevertheless, it’s the book you’ll want to reread — cute devil’s doodles and provocative style are awesome. And you can buy it on Leanpub.

older   ·   ·   ·   newer
\ No newline at end of file diff --git a/articles/2016/06/06/test-planning-questions-by-google-outline/index.html b/articles/2016/06/06/test-planning-questions-by-google-outline/index.html index 3881cb5d..44bd9774 100644 --- a/articles/2016/06/06/test-planning-questions-by-google-outline/index.html +++ b/articles/2016/06/06/test-planning-questions-by-google-outline/index.html @@ -1,6 +1,6 @@ Test planning: questions by Google (outline) | aviskase -

Test planning: questions by Google (outline)

6 Jun 2016
 1 min

Google Testing Blog made all testers happy one more time. The article provides a comprehensive list of questions to be asked before writing a test plan (or a test strategy). So I’ve made an outline to share.

older   ·   ·   ·   newer
\ No newline at end of file diff --git a/articles/2016/07/17/course-intro-to-devops-by-udacity/index.html b/articles/2016/07/17/course-intro-to-devops-by-udacity/index.html index dfda6924..0cf41944 100644 --- a/articles/2016/07/17/course-intro-to-devops-by-udacity/index.html +++ b/articles/2016/07/17/course-intro-to-devops-by-udacity/index.html @@ -5,8 +5,8 @@ The course is concise and comprehensive. Here are some notes I’ve made. DevOps DevOps is the practice of operations and development engineers participating together in the entire service life-cycle, from design through the development process to production support.">

Course “Intro to DevOps” by Udacity

17 Jul 2016
 4 mins

I love MOOCs: Coursera, Udacity, Stepic. There are enough courses to watch for an entire lifetime. I’ve just watched the course Intro to DevOps by Udacity.

The course is concise and comprehensive. Here are some notes I’ve made.


DevOps

DevOps is the practice of operations and development engineers participating together in the entire service life-cycle, from design through the development process to production support.

DevOps is also characterized by operations staff making use of many of the same techniques as developers for their systems work.

CommitStrip — what DevOps is not

Components that make up DevOps — CAMS:

  • Culture — agile communications, lean, respect
  • Automation — deployment, testing, integration
  • Measurement — monitoring, useful logs, biz metrics, usefulness of tools & processes
  • Sharing — shared view of goals, problems, and benefits

If you can’t measure it, you can’t improve it.

Solving the environment problem

  1. Golden image
    • more work up front — large install image must be regenerated for any change
    • much faster installation/boot
  2. Configuration management
    • lighter build process — integration is done at install/initial boot time
    • slower start up process
  3. Combination of 1 & 2

Monitoring

Monitoring data sources:

  • external probing, test queries
  • application levels stats (queries per second, latency)
  • environment stats (JVM memory profile)
  • host/container stats (load average, disk errors)

Monitoring data products:

  • alerting
  • performance analysis
  • capacity prediction
  • growth measurement
  • debugging metrics

Monitoring systems

Additional resources

Notable books

Notable presentations

  • What DevOps means to me — an explanation of the components that make up CAMS (Culture, Automation, Measurement, Sharing), as well as additional thoughts on what DevOps is and is not — by John Willis
  • dev2ops — delivering change in a DevOps and cloud world
  • the agile admin — blog on topics of DevOps, agile operations, cloud computing, infrastructure automation, Web security (especially AppSec), transparency, open source, monitoring, Web performance optimization, and more
  • The DevOps checklist — this checklist comprises 48 items you can use to gauge the maturity of your software delivery competency, and form a baseline to measure your future improvements
  • DevOps — A Crash Course by Matt Stratton. A lot of links to good resources on DevOps topics.

Additional resources by Nutanix

  • Nagios and Zabbix — comprehensive solutions for monitoring large infrastructure, but maybe too big and complex for small projects
  • Graphite — open-source database and a graphing solution for storing and displaying monitoring data
  • InfluxDB — an open-source distributed time series database for metrics, events, and analytics
  • StatsD — simple daemon for easy stats aggregation, by Etsy. Read about the philosophy behind it in the article by its creators — Measure Anything, Measure Everything (see the sketch after this list)
  • Grafana — metrics dashboard and graph editor for Graphite and InfluxDB
  • PagerDuty — incident resolution life-cycle management platform that integrates with over 100 other systems to streamline the process for large organizations
  • Logstash — log storage and search system; works well with the Kibana graphing and visualization software
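Since StatsD speaks a tiny plain-text protocol over UDP, you can poke it straight from the shell. A minimal sketch, assuming a StatsD daemon is listening on its default port 8125 on localhost, and using a made-up metric name:

# increment a counter named myapp.logins by 1
# (the wire format is <bucket>:<value>|<type>; "c" means counter)
# /dev/udp is a bash-only pseudo-device, so run this in bash
echo -n "myapp.logins:1|c" > /dev/udp/127.0.0.1/8125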
older   ·   ·   ·   newer
\ No newline at end of file diff --git a/articles/2016/07/27/notes-for-course-intro-to-linux/index.html b/articles/2016/07/27/notes-for-course-intro-to-linux/index.html index 5a1c3502..86fea6b9 100644 --- a/articles/2016/07/27/notes-for-course-intro-to-linux/index.html +++ b/articles/2016/07/27/notes-for-course-intro-to-linux/index.html @@ -1,7 +1,7 @@ Notes for course “Intro to Linux” | aviskase -

Notes for course “Intro to Linux”

27 Jul 2016
 2 mins

If there is a Linux course on a platform, I’ll always watch it. Those who know me are aware that I use some kind of Debian-based distribution full time — I’m not a hardcore fan, but I like it here. One might ask: why am I watching all these courses when they are mostly for beginners? The answer is simple: repetition is the mother of learning, plus there are always tricks that you have forgotten or that become more interesting with time.

So it happened with Intro to Linux (ru) on the Russian platform Stepic. First, I got a one-month license for any JetBrains IDE by solving some exercises, and that’s cool. Second, the cute guys from the Bioinformatics Institute made me adore tmux and almost persuaded me to look at vim. Almost.

And now, as usual, some notes so as not to forget. If something looks like magic: read books or watch some course!


Run program in background:

program &
 

Check if link is available:

wget --spider somelink
 

Download files by links from a text file:

wget -i some-textfile
 

Using arguments in scripts (a tiny demo script follows the list):

  • $# — number of arguments
  • $0 — script name
  • $1 — the first argument
  • $2 — the second argument
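A throwaway script to see these variables in action; save it as args.sh, make it executable, and run it with a couple of arguments:

#!/bin/bash
# args.sh: print the positional-argument variables
echo "script name:         $0"
echo "number of arguments: $#"
echo "first argument:      $1"
echo "second argument:     $2"

For example, ./args.sh foo bar prints the script name, 2, foo, and bar.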

How much space does something occupy:

du [--max-depth <depth> -h] <path>
diff --git a/articles/2016/07/30/thoughts-on-the-a-word/index.html b/articles/2016/07/30/thoughts-on-the-a-word/index.html
index a3ea3291..be79a975 100644
--- a/articles/2016/07/30/thoughts-on-the-a-word/index.html
+++ b/articles/2016/07/30/thoughts-on-the-a-word/index.html
@@ -2,8 +2,8 @@
 

Thoughts on “The ‘A’ Word”

30 Jul 2016
 5 mins

Alan Page is known as one of the authors of “How We Test Software at Microsoft”. But he has another good book, called “The ‘A’ Word”. You can buy it on LeanPub.

The book is about automation in testing, but not how to do it — it’s about how to think about it. It’s short, just 58 pages, but very dense with ideas and Alan’s opinions.

As I am not qualified to give an opinion on automation topics (because I don’t have much experience with them), I’ve just gathered some notes for future reference. The sections correspond to the book’s chapters.


Sometimes, testers use programming skills to help their testing. Sometimes, that code automates some application functionality. That’s it.

Testing: failing to succeed

There is a very famous concept called “Orders of Ignorance”, introduced by Phillip Glen Armour (more here). The chapter’s idea is that tests are mostly done at the 0OI level, but we should never forget about 2OI tests. 0OI is a lack of ignorance (I know), and 2OI is a lack of awareness (I don’t know what I don’t know).

0OI tests are knowledge-proving tests, while 2OI tests are knowledge-acquiring tests.

The robots are taking over

Humans fail when they don’t use automation to solve problems impossible or impractical for manual efforts.

Automation fails when it tries to do or verify something that’s more suited for a human evaluation.

To automate …?

Good testers test first — or at the very least they think of tests first.

Great testers first think about how they’re going to approach a testing problem, then figure out what’s suitable for automation, and what’s not suitable.

You should automate 100% of the tests that should be automated

Alan’s heuristic when to automate: “I’m Bored”

The coding tester

Summary:

  • the role of a coder-tester is not to automate everything
  • testers do not need to have a computer science degree
  • testers do not need to be able to program
  • programming knowledge does not destroy “a proper tester angle”
  • background similar to customer’s does not make you a customer

GUI shmooey

For 95% of all software applications, directly automating the GUI is a waste of time.

For the record, I typed 99% above first, then chickened out.

Design for GUI automation

Alan’s main points for disliking GUI automation:

  • It’s (typically) fragile — tests tend to break / stop working / work unsuccessfully often
  • It rarely lasts through multiple versions of a project (another aspect of fragility)
  • It’s freakin’ hard to automate UI (and keep track of state, verify, etc.)
  • Available tools are weak to moderate (this is arguable, depending on what you want to do with the tools — I’m particularly pleased, for example, with what good testers are able to do with selenium and web driver).

I love GUI automation that can automatically explore variations of a GUI-based task flow.

I like GUI automation in stress or performance testing.

It’s (probably) a design problem

  • Record & Playback automation is a non-starter
  • Basic verification that would be hit by anyone walking through the basics of the application isn’t worth automation
  • Tests that do exactly the same thing every time are not valuable
  • Always think forward
  • Plan for failure and ensure that all test failures tell you exactly what is wrong
  • Tests should be reliable
  • There is always a better alternative to Sleep statements
  • UI is fragile; its testability should be designed

In the middle

Alan’s brainstorming technique: first spend a reasonable amount of time focusing on the extremes — because often, some great ideas for “the middle” come out of that brainstorming.

Test design for automation

The first step — and the most important one — is to think about how you’re going to test.

From that initial test design effort, you can deduce what aspects of testing could be accomplished more efficiently with automation (and without).

Orchestrating test automation

Designing good tests is one of the hardest tasks in software development.

LOL — UR AUTOMASHUN SUCKZ!

Your tests don’t suck:

  • when you treat their code like production code
    • code reviews
    • static analysis
    • running with the debugger to ensure they are doing what you think they are
    • trust: if a test fails, it’s a product bug, not a test bug
  • when they execute automatically
  • when failures are handled automatically
    • bugs are entered automatically — including logs, call stacks, screen shots, trace information, and other relevant info
    • when bug is fixed, it’s checked automatically
    • generation of “Test Result Report”

Musings on test design

Some tests can only be run via some sort of test automation.

Some tests can only be done via human interaction.

You can’t effectively think about automated testing separately from human testing.

In my world, there are no such things as automated testing, exploratory testing, manual testing, etc.

There is only testing

Beyond regression tests & testing with code

Useful tests are tests that provide new information.

An automation strategy that only performs regression testing is short-sighted and incomplete.

How to make tests useful:

  • model-based testing
  • introducing some randomness
  • data driven testing
  • scaled fault injection
  • fuzzing

More on test design

Test Design ideas are endless.

To be a good test designer (and tester), you need a lot of testing ideas, and you need to know how and when to apply them.

older   ·   ·   ·   newer
\ No newline at end of file diff --git a/articles/2017/03/19/enabling-l2tpipsec-in-ubuntu/index.html b/articles/2017/03/19/enabling-l2tpipsec-in-ubuntu/index.html index 5b0d1088..c716a23c 100644 --- a/articles/2017/03/19/enabling-l2tpipsec-in-ubuntu/index.html +++ b/articles/2017/03/19/enabling-l2tpipsec-in-ubuntu/index.html @@ -2,9 +2,9 @@

Enabling L2TP/IPSec in Ubuntu

19 Mar 2017
 1 min

Linux is like that: you can do anything, but sometimes it’s not easy for a common user. As for me, I hate writing config files for VPN, because network-manager is awesome. But sometimes it’s not easy to make it work.

The biggest problem for me is L2TP/IPSec. There is an excellent article on how to enable it using network-manager-l2tp. But as online articles have a tendency to be removed, I want to save these instructions here.


Install the prerequisites:

sudo apt install \  
 intltool \  
 libtool \  
 network-manager-dev \  
diff --git a/articles/2018/01/06/2018-and-mega-mind-map/index.html b/articles/2018/01/06/2018-and-mega-mind-map/index.html
index a4de0da3..aa8fb2fa 100644
--- a/articles/2018/01/06/2018-and-mega-mind-map/index.html
+++ b/articles/2018/01/06/2018-and-mega-mind-map/index.html
@@ -2,8 +2,8 @@
 

2018 and mega mind map

6 Jan 2018
 1 min

Well, it’s been a while. I don’t want to make so-called “new year resolutions”, but it’s better to add a repeating task in RTM to write something here xD

2018 started shaky. I left my first real place of work — Quality Lab — my alma mater of testing. This decision was heartbreaking, yet expected. On the bright side, now I have time to condition my brain into normal mode again: for the last 1.5 years I haven’t been productive in studying and reading.

The first step is my mega mind map. God bless Freeplane, it’s awesome. Actually, it was a bit ugly at first, but now it’s looking good ;) So, MMM. It’s versioned with git, and here is the repo. I have an ambitious goal of documenting all testing techniques and approaches. I hope I won’t drop it as usual.

Here is a first iteration:

  • ACC (Attributes / Components / Capabilities)
  • Decomposition
  • APV aka ДПЗ (Actions / Parameters / Values)
  • Value analysis

Mega mind map version 1

older   ·   ·   ·   newer
\ No newline at end of file diff --git a/articles/2018/01/23/testers-in-this-world/index.html b/articles/2018/01/23/testers-in-this-world/index.html index 8034985d..3f617649 100644 --- a/articles/2018/01/23/testers-in-this-world/index.html +++ b/articles/2018/01/23/testers-in-this-world/index.html @@ -1,6 +1,6 @@ Testers in this world | aviskase -

Testers in this world

23 Jan 2018
 3 mins

Every tester has seen those articles: “Testing is dead”, “Manual testing is dead”, “Testing is not dead”, “Automation is not testing”, “Company XXX has no testers and is happy about it”, etc. They can get on your nerves. I love my craft, but sometimes something is nibbling at the back of my mind. Something that keeps me wondering: maybe I should move to a different role? So, I unscrambled this “something” and found an explanation for why these thoughts are there and why I won’t leave testing =)

Disclaimer: these thoughts are mine, and I don’t aim to promote them or impose them on other people. As a normal human being, I understand that my opinion may change with new experience and knowledge. This article is a documented reflection on my experience so far. It’ll probably be fun to read in 10-20 years (if I’m still a tester) xD


Ideal world of software development

Imagine an ideal world of software development. The scope is clear. Everything is done on time, in a relaxed manner, without rush. Everybody is motivated to create the best product. Everybody has a happy life outside of work. No stress. Users are eager to help too. And all this isn’t just for one product, it’s true for all software in the world. Great, isn’t it?

Now, when I think about this ideal world, I can’t find a place for testers in it. There is just no place for bugs. Developers and analysts have all the time they need to design and build a product without real bugs. Remember, they don’t just have time, they’re also motivated, so they definitely test the product. They do it themselves… so there is no need for a separate tester role.

Let’s visualize this tester-less world as a perfect disk:

Ideal world of development

Real world of software development

But our current world is not ideal. No one has enough time. Overtime. Burnout. Problems outside work. Toxic environments. No motivation. Some rogue manager keeps adding features out of scope. And bugs, bugs everywhere. This world is distorted; some products are a bit better than others, but none is perfect.

Real world of development

Testers in the real world of software development

And that’s where testers come in. We are like clay, like sealing foam. We patch this non-ideal world. We make it less distorted.

Real world of development with testers

All products are distorted in different ways, and that’s why you can see testers doing all kinds of things. Some are just “manual monkeys.” At the other extreme are those who automate test cases written by others. Most are somewhere in between. There are projects where testers wear the hat of an analyst. Or support. Or both. Or PM. Some, maybe gurus, don’t test at all: they mentor a team to test on its own and make sure that quality is efficiently ensured by others.

Here is a fun fact. That patched version is still a lie. Testers are also not perfect. Thus, our world looks more like this:

Even more real world of development with testers

We can’t patch all holes. But we are here to try.

older   ·   ·   ·   newer
\ No newline at end of file diff --git a/articles/2018/03/19/mega-mind-map-version-2/index.html b/articles/2018/03/19/mega-mind-map-version-2/index.html index 21cd3d1d..478c5e61 100644 --- a/articles/2018/03/19/mega-mind-map-version-2/index.html +++ b/articles/2018/03/19/mega-mind-map-version-2/index.html @@ -1,6 +1,6 @@ Mega mind map: version #2 | aviskase -

Mega mind map: version #2

19 Mar 2018
 1 min

My not-so-mega mind map has grown a little bit. I finally added exploratory testing tours and ideas on how to make test cases less rigid, all that great stuff from “Exploratory Software Testing” by James A. Whittaker.

Mega mind map version 2

older   ·   ·   ·   newer
\ No newline at end of file diff --git a/articles/2018/03/20/remember-the-milk-for-task-management/index.html b/articles/2018/03/20/remember-the-milk-for-task-management/index.html index f4fd697c..cb9b916c 100644 --- a/articles/2018/03/20/remember-the-milk-for-task-management/index.html +++ b/articles/2018/03/20/remember-the-milk-for-task-management/index.html @@ -2,8 +2,8 @@

Remember The Milk for task management

20 Mar 2018
 1 min

Today I received a letter from Remember The Milk that I had won a free year of Pro. That’s great, yet I feel a bit sad because somehow everyone talks about every other todo app and not RTM.

I’ve been using RTM for several years: first with a free account, later with Pro. And it’s the second time I’ve won Pro :) RTM is great. For me, the four killer features are:

  • Start dates and due dates
  • Very customizable repeat options
  • Really smart add syntax (you can input everything as a one-liner and all task fields will be populated)
  • Smart lists (basically, saved searches with operators like tag: and logic control with AND/OR/NOT/parentheses)

You can implement any imaginable task management system with it. I use a setup based on M. Dorofeev’s approach. Of course, specialized apps are probably a bit easier if you strictly follow the rules of one “true” system, but with RTM you can do whatever the heck you want, any time you want. There is a bizarre fashion in development to “box” users in with constraints, to give them no options. And for me, RTM is a breath of sane, free air.

older   ·   ·   ·   newer
\ No newline at end of file diff --git a/articles/2019/07/30/amazing-marvin-for-task-management/index.html b/articles/2019/07/30/amazing-marvin-for-task-management/index.html index f05829c1..fe54fde3 100644 --- a/articles/2019/07/30/amazing-marvin-for-task-management/index.html +++ b/articles/2019/07/30/amazing-marvin-for-task-management/index.html @@ -2,8 +2,8 @@

Amazing Marvin for task management

30 Jul 2019
 10 mins

I’m an infrequent blogger, so how weird is it that my last post was about RTM? I was a loyal RTM user for quite a while… Well, I have a new love now. His name is Marvin.

Marvin is still young, yet powerful. He has some problems and rough edges, but the development couple (yes, just two people!) is the most responsive and creative force I’ve seen. I’m still learning and tweaking my system there, but I want to describe the first working iteration, to be able to improve and compare later.

Enabled strategies

Strategies are like extensions: enable more to get more abilities.

Essential

Because these are essential, I’ll explain their usage later.

  • Category Context (big square, also beneath title, show full path)

  • Task Notes

  • Labels

  • Timers

  • Backburner (no setup)

  • Planning Ahead

  • Smart Lists

  • Custom Sidebar

  • Top Mini List

  • Custom Sections

  • Dependencies

  • End Dates (show end dates below the task)

  • Start Dates

  • Work Session Scheduler

  • Saved Items (Templates)

  • Smart List Day Alerts

  • Auto-schedule Due Tasks (cutoff = 1 day)

  • Staleness Warning (period = 40 days)

  • Email to Marvin

  • Zapier Integration

  • Review Date

  • Weekly Review

Extras

  • Eat that Frog (just 2 frog levels) — nice to have, but I don’t use it as intended (I tend to assign a frog to the bad tasks, but I don’t do them first)

  • Task Reminders (create automatically) — a very important feature, but less powerful than in RTM for now (cannot create multiple reminders). Though I have a feeling that with a proper review system I don’t actually need reminders that much now.

  • Duration Estimates — I’m experimenting with having estimates for all tasks, but so far I don’t feel it increases productivity.

  • Time Tracking (show > in title) — goes hand in hand with duration estimates, feels needed but not essential

  • Beat The Clock — same as time tracking, still experimenting with it

  • Project Focus Picker — just started to use it; at least it works as an “eye bugger” that pushes me to work on projects

  • Suggested Task — never used it really, but something about it feels good ><

  • The Wall — using it occasionally, would like to have block division by section

  • Day Progress Bar — I don’t know why I enabled it

  • Procrastination Count (default) — important but not essential

  • Missing Next Steps Warning — important, but not used much at the moment

  • Day Note (with archive) — nice to have, but I’m not very good at keeping the habit of writing (as can be guessed from this blog’s update frequency)

  • Calendar, Calendar Sync, All-Day Items, Top Mini List — I’ll move my whole calendar workflow to AM as soon as these work with Google Calendar and Outlook. Until then, I have to go to the calendars themselves.

  • Dashboard — I like it, but not sure that I need it

  • Reward Tasks — awesome feature which I’ve never used. Dunno why.

Planning and scheduling cycle

My PS cycle has three phases:

  • Monthly planning

  • Weekly planning

  • Daily scheduling

Notice the difference between planning and scheduling? This is because AM makes a distinction between these processes that is a bit head-scratching at first, but really powerful. In short, planning is about assigning a start date and an end date (a “soft deadline”), and scheduling is about assigning a do date (“when should I do this task”). There can also be a due date; it’s not quite clear whether it belongs to the planning or the scheduling category. I think both, because I use the Auto-schedule Due Tasks strategy. For example, if something is due tomorrow, the task will get a do date of today.

Monthly planning

I have a recurring task Plan tasks for the next month, which is set up to run monthly on the 31st. To complete this task, I go to Planning > Monthly and plan tasks only for the next month while working from the Master List. I don’t want to overplan too far into the future. One thing that could change: maybe I should work from some smart list, but so far I don’t have too many tasks in the ML.

Weekly planning

This is the Weekly Review strategy, with a checklist scheduled for Sundays. The checklist is a combination of weekly planning and everyday review stuff.

Reformulate task names left in today pool

Done on: Daily view

This is a part of the Jedi techniques, whose goal is to rename tasks you didn’t complete for some reason. That way, the next day they will look “fresher” or more inviting to you.

Review calendars for 2 weeks ahead: add tasks if needed

Done on: external sites, tasks are added to inbox

  • birthdays

  • special all day events like holidays

  • work meetings

Basically, it’s something that is not really actionable, but has a day and duration. This is what I want to do in AM in the future, when calendar sync works better:

  • Smart list with all calendar events in the next 2 weeks (to use for weekly planning)

  • Top Mini List strategy showing upcoming birthdays in the next 3 days

  • Depending on how AM will show calendar events (probably as tasks which have to be completed, which is a bit unnecessary for me), maybe all of them should be shown in Top Mini List

Right now what I use is a Custom Sidebar with links to my Google and Outlook calendars.

Reflect on completed week: do I need to do more?

Done on: Archive, tasks are added to inbox

Go to Archive and check what was done this week. It’s a bit cumbersome, because it shows tasks per month and not per week. In RTM I used a smart list for that, but AM does not support searching completed tasks (yet).

Another way is to click through the daily views, but for me it’s too many clicks)

Review start dates for backburner tasks (smartlist)

Done on: Master List, work from smartlist

Smartlist: any start date, on backburner

I use the backburner only for tasks which depend on others or have a start date in the future. I try to be very strict with start dates and set them only if it really makes sense. For example, getting a vaccine booster shot in Dec 2028 has a start date in Sept 2028, because I don’t want this task to pollute my planning all these years. Another use for start dates is sub-projects, like a 3-day learning session that is part of a project without a start date (because I want to do some preparatory tasks before the session starts).

So, in order to keep the backburner in check, I review it once a week. Now that I think about it, maybe I should have an alert for tasks which don’t have a start date and aren’t dependent, but are in the backburner…? But more on alerts later =)

Review all projects: add new tasks if needed (smartlist)

Done on: Master List, work from smartlist

Smartlist: Projects

I had to create a smart list to show all projects, because I wanted to see the backburner ones too.

Recall this day: write down everything missed (triggers/ backwards-day-recall)

Done on: Sidebar, tasks added to inbox

Two links in Sidebar:

  • link to mind map containing triggers

  • link to timer set for 20m

Triggers are things which can be used to recall what was forgotten. For example, one of the subtrees in my mindmap contains all types of utilities or all kinds of cleaning which could be done.

Backwards day recall is another technique for recalling things. You sit down and try to remember the day in detail, backwards: from now to the morning.

Empty inboxes: paper, GMail, Outlook, Joplin, AM

Categorize everything and clean up all the inboxes I have. First come paper notes, then emails (Gmail + Outlook), then the note-taking application (for now it’s Joplin), and finally the AM category Inbox.

If you see something that should be planned for this month, set it right away.

Plan tasks for the next week (selecting from this month)

Done on: Planning > Weekly, working from This month list

Because everything was planned for this month, I can just bring relevant tasks to next week.

Schedule tasks for Monday by checking next week list

Done on: Daily view for tomorrow, working from This week list

Daily scheduling

Because there is no “Daily review” strategy yet, I have a recurring task for that. It repeats every Wed and Fri, just because I’m still getting accustomed to doing it consistently. When I’m ready, it will repeat every day except Sunday, when the weekly review is done.

Checklist is a subset of weekly review:

  • reformulate

  • reflect on Completed Today

  • recall this day

  • empty inboxes

  • schedule tasks for the next day

Extras

Categories

Main:

  • Inbox

  • Work

  • Household — tasks related to house or family

  • Hobbies — anything related to my hobbies, learning, and reading

  • Reputation — quite new for me, this is for tasks related to my external image: participating in open source projects, buying birthday gifts, writing the blog, answering some emails. Sometimes there is no clear distinction between hobbies and this category, so it’s fluid.

  • Health

  • Productivity — tasks like the everyday review or cleaning up overflowing inboxes. I suppose calendar sync will go here too.

Sections

I use the Custom Sections strategy:

  • Morning

  • Work — linked to a smartlist which finds all tasks/projects in #Work

  • Outside — linked to a smartlist which finds all tasks/projects that have the @outside label

  • Evening

  • Bonus

Morning and Evening tasks are essential, while Bonus ones are nice to do. Outside tasks are for stuff where I need to go somewhere, like shopping errands. I’m still not set on these categories, except for Work; this one will definitely stay.

Alerts

I use the Smart List Day Alerts strategy for finding and fixing potential planning problems.

  • New items pulled from backburner — reminder to check items with *new

  • Stale — review items with *stale

  • To review (waiting or pinged) — some tasks are ready to review

  • This week unestimated — add estimation for all tasks, smartlist: Tasks, no time estimate, &thisWeek scheduleDate today == ||

Review

“Review Date” is not the best name for how I use this strategy: I use it for tasks which are not done by me.

  • tag waiting (3 days) — for long-waiting tasks

  • tag ping (1 day) — reminds me to ask someone every day whether they have finished the task

Occasional tasks

I haven’t been able to set this up correctly in AM yet, so I’m using some hacks around it. Basically, these are tasks which I want to do every 15-40 days, without specifying an exact day. One of them is Productivity system review. It has a note with questions which I ask while going through all my tasks and projects:

  1. Is it really mine? Maybe delegate?

  2. Is there any real profit from this task?

  3. Maybe it’s possible to do some other task so that this one becomes obsolete?

  4. Is there any easier way to do it?

  5. Do I really still need to do it?

Goal is to remove or reformulate tasks.

Ending thoughts

Of course, this is just a small part of the AM experience. I like being able to create work sessions for working on projects in pomodoro-style chunks. Templates are awesome, and I use them for mindful book reading projects (reading, making notes, transferring them to Joplin). The gamification abilities are cute, and I will explore them more when I’m more comfortable and procrastinating less.

I’ve noticed that features which were very important to me in RTM, like tagging, are not so needed here. Here you can use categories, sections, do/due/end dates, and projects to achieve similar goals. This granularity and specificity are the most awesome aspect of AM!

older   ·   ·   ·   newer
\ No newline at end of file diff --git a/articles/2019/08/08/lasha-tumbai-or-rm-rf-ru/index.html b/articles/2019/08/08/lasha-tumbai-or-rm-rf-ru/index.html index 79068a28..deaac35f 100644 --- a/articles/2019/08/08/lasha-tumbai-or-rm-rf-ru/index.html +++ b/articles/2019/08/08/lasha-tumbai-or-rm-rf-ru/index.html @@ -1,6 +1,6 @@ Lasha Tumbai or rm -rf RU | aviskase -

Lasha Tumbai or rm -rf RU

8 Aug 2019
 1 min

Just a quick note: I’ve switched to the Pelican site generator, because ruby-shuby decided not to work. Unfortunately, switching while preserving multi-language support is cumbersome. So, I’ve decided to ditch the RU version. For what it’s worth, I moved to Canada and am no longer really interested in trying to “promote” myself in Russian-speaking communities.

Yeah, yeah, “promote”, says a person who hasn’t written anything for ages. Right.

To whoever read this blog in Russian: sorry. And, actually, I know that there was more traffic from the CIS than from any other region >< But two languages make writing consistently too complicated and too easy to procrastinate on (hysterical laugh).

This is the end.

older   ·   ·   ·   newer
\ No newline at end of file diff --git a/articles/2019/08/26/revision-testers-in-this-world/index.html b/articles/2019/08/26/revision-testers-in-this-world/index.html index 088ac741..eaf0c247 100644 --- a/articles/2019/08/26/revision-testers-in-this-world/index.html +++ b/articles/2019/08/26/revision-testers-in-this-world/index.html @@ -2,9 +2,9 @@
\ No newline at end of file diff --git a/articles/2019/09/02/your-api-is-your-public-image/index.html b/articles/2019/09/02/your-api-is-your-public-image/index.html index b2db0b00..ff9236fd 100644 --- a/articles/2019/09/02/your-api-is-your-public-image/index.html +++ b/articles/2019/09/02/your-api-is-your-public-image/index.html @@ -1,6 +1,6 @@ Your API is your public image | aviskase -

Your API is your public image

2 Sep 2019
 7 mins

Disclaimer: this is a translation of an article written 2 years ago for a corporate blog. I didn’t do a word-for-word translation because the original article went through an editor whose style was not that close to mine. Too watered down and “official.” Also, some examples don’t make sense in English. Still, I didn’t update it too radically. Bear in mind that at the moment of writing I was testing SOAP services and Excel-based import/export at a big government project, so most of the examples relate to that experience.

Sometimes you’ll see a block like that. It will contain my current thoughts on the subject or comments.


First, what is an API?

An API (Application Programming Interface) is an interface which helps apps communicate with each other. Just as humans interact with apps via buttons and dialogs (the user interface, UI), apps interact with each other via APIs.

Types of API

One way to classify APIs is as public or private. A private API is used for interactions inside your system, for example:

  • sync between mobile or desktop app and a server
  • app uses server’s computational resources (e.g., an image stylization app sends image and selected style to the server, where stylization will be done)
  • communications between a web app and server
  • communications between micro-services

The primary risks for private APIs are functional and performance problems. Here, customers can only speculate about why the app works somehow wrong-y.

With public APIs, communication endpoints go beyond your system boundaries. Either you use someone’s API (social networks, maps, etc.) or you provide your API to external developers.

Let’s talk about public APIs a bit more. For some companies, providing APIs is the core business (e.g., payment processing: Stripe, Rebilly); for others, it’s just a nice-to-have addition to the main services. Whatever the case, public APIs open a window into how your internal development process works. And you won’t be able to hide behind a fancy UI and an eloquent support team.

If you publish a public API with bad documentation, a versioning mess, and tons of functional issues, make no mistake: external developers can (and maybe should!) assume that your whole system is developed in such a manner. Will they build their services around such a system and attract new users? Nope. Will they persuade their bosses and friends against using and/or buying your product? Probably. Don’t forget that people who are not so tech-y value developers’ opinions a lot. And of course, those developers could also give you a bad reputation by complaining on social networks or forums. Therefore, before publishing even the tiniest API, you should think about its quality.

Four ways to fuck up a public API

There are different techniques for assessing API quality (for example, the hierarchy of needs). Let’s talk about the four main ways to be an awful API:

Broken functionality

Sounds banal, but a service should work. And it should provide the functionality it was created for. On one of my projects, there was an embarrassing situation with an export. We tested the API with different objects under various conditions, but only with a small number of objects. All was fine until we found a bug in exports with lots of data. The thing is, the key purpose of this service was to provide the ability to do massive exports; therefore, the service didn’t fulfill its main reason for existence.

You need to check available operations in the context of other operations. For example, we released an import operation for objects A. It required an id of object B in the request body, but import and export of B were unreleased at the moment. As a result, it was impossible to import A at all.

Another possible problem: do you consider the region where the API will be used? Obviously, support for Cyrillic is not that important for a purely US-oriented product. But if you work globally, do not forget to check non-ASCII characters! Even though Unicode seems to be the default, I did find bugs like this one: a user uploaded a file with the name Документ_1.pdf, yet it was saved as _1.pdf.
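A quick way to probe for this class of bug is to push a non-ASCII file name through the API and check what comes back. A minimal curl sketch; the endpoint and form field here are made up, so substitute your own:

# create a file with a Cyrillic name and upload it via a hypothetical endpoint
touch "Документ_1.pdf"
curl -F "file=@Документ_1.pdf" https://api.example.com/v1/files
# then fetch the file’s metadata and verify the stored name was not mangled into "_1.pdf"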

One more example. We had a service for chunked file download which “ate” the last byte of the last chunk. It was a highly critical problem, because this service was part of a system where these files were used as supporting documents for legal agreements.

Unreliability

A service is reliable if it works when it’s expected to be working and provides timely feedback in case of any problems.

The worst performance problems I encountered were with export services. One of them was working perfectly fine until the biggest organization in the system started using it and crashing application servers. Hot-fix after hot-fix, optimizations, a new version; nothing helped. And we couldn’t disable that service or completely rewrite its public API because of contractual obligations.

So, what if your service experiences problems? How will external users learn about it? Will there be any alert about temporary issues or downtime? Any resolution time frames? Usually, there is a special web page with answers to these questions, with a table like this:

API status table

And beware: this page should not be on the same infrastructure as the services it reports on! It would be quite embarrassing if it went down at the same time as your services.

Crappy usability

When we hear the word “usability”, we usually think about the GUI: buttons and dialogs. I think GUI usability is somewhat overrated: even in the ugliest app you can guess your way by trial and error. With an API, that won’t work:

  • No public documentation? Users will never even know that the API exists.
  • Public documentation is there, but there is no info about actual endpoints? Users won’t be able to call the API.
  • Public documentation is there, but written in such a manner that you can’t understand a thing without knowing the internal docs? Again, users won’t be happy at all.
  • Spelling mistakes? Not that critical in plain text, but they can be quite awful in schemas. A real support ticket:

Your developers drink too much and it impairs their accuracy. There is an epic fail in a scheme with the name of the element Pressure: the first letter is a Cyrillic character and it breaks all client code generation.

Cyrillic Р/р (pronounced like “r”) looks exactly like Latin P/p.

  • The service works fine, but error messages are not that informative? Users won’t understand how to fix an error (and will probably open a support ticket, so you’ll needlessly spend time resolving a non-existent issue).

Unhelpful error message

  • You have both a UI and an API? Don’t forget to check that they correspond to each other. The most common problem is when constraints on UI fields don’t match the same fields in the API: for example, the UI accepts a maximum of 50 characters for a name and the API only 20, which leads to errors when trying to export anything created in the UI.

  • Don’t forget about versioning (in the API and its documentation). The older your services are, the more careful you should be with incompatible changes. Documentation should always be up to date: sounds obvious, but we had a big fuck-up when someone accidentally published documentation for an upcoming API version, and external developers started trying to use the new features, didn’t find them, and bombarded the support team with “nothing works again” tickets.

Security holes

When you publish an API, you also increase the potential attack surface for hackers. First of all, think about authorization and authentication processes. Typically, there are special access tokens for API users. Maybe simple developer tokens will be enough for your case; maybe you’ll need to use flows like OAuth. In some cases, you should sign requests and responses.

Oftentimes there are several APIs: for example, a test API (for internal developers and testers) and an open public API. You should make sure that the test API is secured well enough. There are known cases where web crawlers accidentally found test endpoints and happily showed them in search results.

If you give external developers access to the test API, you’d better treat this API as a high security risk. One time I found a Stack Overflow question with a code snippet containing authorization keys and actual endpoints for our system.

One open source project I used had a different issue. A test API was used by developers to help with testing: adding money to an account, changing an account’s status to premium, etc. It was hidden and secure… until someone released a version with these APIs enabled in production. That’s bad :)

Conclusion

After reading all of this, you might think that public APIs are too risky, challenging, and expensive. Perhaps it’s better not to provide one? Maybe. But global connectivity is a trend. A stable and useful API can facilitate your profits: it can increase your user base via external apps or advertise your workplace to professionals. It’s demanding work, but it pays off.

And even if you don’t and won’t have a public API, think about your private ones. We should care about our own developers, shouldn’t we?

older   ·   ·   ·   newer
\ No newline at end of file diff --git a/articles/2019/09/04/stereotype-rant/index.html b/articles/2019/09/04/stereotype-rant/index.html index 6d81834b..58b55c7a 100644 --- a/articles/2019/09/04/stereotype-rant/index.html +++ b/articles/2019/09/04/stereotype-rant/index.html @@ -1,6 +1,6 @@ Stereotype Rant | aviskase -

Stereotype Rant

4 Sep 2019
 2 mins

We live in the world of memes and funny pictures. Nothing wrong with that. Some companies started posting humorous content too, and, again, that’s totally fine: I myself prefer these to dry corporate pitches. But it’s a real shame when you see a post like this from a test agency:

What makes you feel more powerful:

  • Money
  • Status
  • Finding bugs in other people’s code

It would be dishonest to say I never found this joke amusing. I did, probably for my first half-year as a tester, give or take. And I still hear that "achievement unlocked" ding in my head when I find some peculiar or very critical issue.

But I most certainly don’t feel powerful. Experience with software development comes with a grain of salt and a certain sadness. Each bug you discover means:

  • there are even more unknown problems lurking somewhere
  • you have to worry about code correctness instead of thinking about the value
  • processes are leaky, and if I'm not in a position to have any impact on them, I'd probably feel powerless

So, sorry, but no, that post won't receive a "like" from me. And I'm quite angry and sad seeing such posts coming from a test agency, a company that shouldn't be promoting stereotypes about testers as those guys who are happy to "break stuff" and are always "at war with devs." C'mon! In the case of this particular company, I understand that it's most probably marketing shit that wasn't approved by the actual testers working there. But it often happens that we, testers, either don't care or aren't vocal enough about pointing out these harmful mistakes to juniors and non-dev people.


Using Google Apps scripts for productivity improvements

8 Sep 2019
 2 mins

Google Apps Script is probably the most useful automation tool I've used. It can serve as "Excel macros" for Sheets, handle form processing, and much more. Here I want to share three small scripts I made to improve productivity and task management.

Mark all emails as read

If you've ever been bothered by all the archived but still unread emails in Gmail, this script can help you. It is based on a script by Mike Crittenden.

I don’t really need it right now, because I have a filter which marks all incoming emails as read right away:

Matches: larger:1
 Do this: Mark as read
 

But it was effective for cleaning up.

Tasks recurring randomly for Amazing Marvin

I use Amazing Marvin for task & project management, and currently it doesn't support randomly recurring items. In fact, no app I've tried supports that. It's a shame, because there are several use cases for it: "spontaneous" cleaning & organizing, fun activities, idea reviews.

If you are able to import tasks (for example, via email), you can check this script. The most important thing is the TASKS list. Each item should have range_start and range_end. For example, range_start = 2 and range_end = 9 mean that the task will be created two to nine days after the last created date: if the task with this id was last created on September 10, the next one will be created sometime between September 12 and September 19.

The script ensures the task will be created at some point within this range; just make sure it's triggered to run daily.
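
The original is a Google Apps Script, but the core scheduling idea is tiny. Here it is sketched in Python, with made-up task data and state kept in memory for brevity (the real script persists its state between runs):

# Made-up task data for illustration; the real script stores similar state.
import datetime
import random

TASKS = [
    {"id": "water-plants", "range_start": 2, "range_end": 9,
     "last_created": datetime.date(2019, 9, 10), "next_date": None},
]

def run_daily(today):
    """Meant to be fired by a daily trigger."""
    for task in TASKS:
        if task["next_date"] is None:
            # Pick the next creation date once, somewhere in the allowed range.
            offset = random.randint(task["range_start"], task["range_end"])
            task["next_date"] = task["last_created"] + datetime.timedelta(days=offset)
        if task["next_date"] <= today:
            print(f"create task {task['id']}")  # e.g. send an import email to AM
            task["last_created"], task["next_date"] = today, None

run_daily(datetime.date.today())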

Create a task when new package release is available on PyPI

I have a weird project that can start only after a particular release of one Python package. It's not very urgent, so there is no hurry, but I don't want to check for releases manually.

This script checks the package's RSS feed on libraries.io, and if a new version is available, it sends an email to AM to create a task.


I didn't go into details about how to set up these scripts, so if you have any questions, feel free to comment.


SOAP testing 101

15 Sep 2019
 9 mins

Disclaimer: this is a translation of an article written 2 years ago for a corporate blog. I didn't do a word-for-word translation because the original article went through an editor whose style was not that close to mine. Too watered down and "official." Also, some examples don't make sense in English. Still, I didn't update it too radically. Bear in mind that at the time of writing I was testing SOAP services and Excel-based import/export on a big government project, so most of the examples relate to that experience.

Sometimes you'll see a block like this. It contains my current thoughts or comments on the subject.


SOAP (Simple Object Access Protocol) is a standardized protocol for communication between a server and a client. Typically it's used over HTTP(S), but it can operate over other application-level protocols like SMTP or FTP.

Testing SOAP services is not drastically different from any other API testing, but you need to learn some specifics and use better-suited tools. This article provides a small checklist of know-how and skills that you can use as a guide for getting started and improving your work.

Theory

SOAP is a protocol, so you need to read about the protocol itself as well as standards and other protocols it uses and, when the time comes, about its extensions.

XML

XML is a markup language similar to HTML. Every message sent via SOAP is an XML document in which you can easily see how the data is structured:

<?xml version="1.0"?>
 <note>
   <to>aviskase</to>
   <from>universe</from>
 </note>
 

New domain

24 Sep 2019
 1 min

Yay, I’ve bought my own domain!

First I thought about a fancy-schmancy .me or .io. Or maybe aviska.se. But I went with the simple aviskase.com. Not that I'm particularly invested in SEO and stuff, but all the articles recommend being boring. Also, I used the comparable products heuristic: most of the blogs I'm subscribed to have the same dull TLD.

BTW, if anyone was ever subscribed to the RSS feed, sorry for the whole feed regenerating =(


Achievement unlocked: RSTA

30 Sep 2019
 3 mins

The landscape of testing-related courses is problematic. On one hand, there are lots of courses. On the other hand, there are few I would consider: they are either lackluster and certificate-centered, or entry-level only, or mind-bogglingly expensive. And about that last bit: many say that if a company provides a training budget, costs don't matter. Well, maybe you can think so if that company is one of FAANG. But I work in a small one, and I don't want to throw its (our?) money away.

Nevertheless, there are always some courses and trainings you just want to attend. For example, those from Satisfice, Inc. That's why I applied as soon as I saw a conveniently timed online version of Rapid Software Testing Applied. And, BTW, the price was decent.

I don't want to write about the course agenda and curriculum: google is your friend. I will neither sing praises to its appeal and importance nor criticize its materials. We can argue as much as we like about whether testers should be able to code, but knowing who James Bach and Michael Bolton are can be a mark of competency (necessary but not sufficient). And given that knowledge, I think it's quite obvious what you can expect from such a course. I also won't describe what I learned: the most useful attainments are tacit and will surface in future work and articles.

So, this article is about the technical side. My previous place of work, Quality Lab, provides trainings, and from there I developed an interest in studying and comparing the processes used in education.

RSTA was held September 18-20, completely online. It was nice that we used Mattermost for communications; I had used this open source messenger before. It's always scary how courses handle Linux users: sometimes they require Skype (which became quite awful), or webinars are streamed with god knows what. Here everything was OK; this linuxhead's feelings were not offended.

After the course we received all materials, not just slides and recordings, but also:

  • Agenda & log
  • Recordings
  • Class materials (like slides and articles to read)
  • Session reports with attachments (with comments by instructor)
  • Bug list (with comments and attachments)
  • Group chats archive

The last part is really awesome. I, like a fool, sat and copied all the important messages. I even woke up during the night realizing I had forgotten to save some PDFs. And it turned out not to be a problem at all, because I have the whole archive now. Super.

One thing I would change is the duration: three days for "Applied" is too fast. You'd want more practice. For example, double all assignments, so that the first pass is "learning" and the second is "reinforcement and revision". Reporting assignments would be a great addition too. Also, it would be interesting to intensify student cooperation: working in teams was possible but wasn't required. What if there were one obligatory paired-testing assignment?..

Our group, as I understand, was smaller than usual. For me it was an upside, because I read all the assignments and bug reports. As usual, some students were more active than others: a big shoutout to them for the questions and discussions!

The overall atmosphere was pleasant. I've noticed that in some other courses instructors are present only as talking heads in pre-recorded videos and names in the ads. Not in this case: James answered questions himself and commented on assignments and bugs; peer advisors only helped.

Active students, an honest instructor, and peer advisors are the most significant qualities for me. We go to a training to get out of our bubble; the more communication and sharing we get, the more valuable the experience is. And RSTA definitely fulfilled this expectation.


Internal struggle with language gymnastics

5 Oct 2019
 5 mins

During the RSTA class I asked a question about language and communications:

On one hand, I wholly agree with the notion that we should be attentive with words. Like checking vs testing, “quality assurance”, and all those other things. I find myself as they say “nit-picking” quite often.

On the other hand, I also follow the rule "if your team understands you, words don't matter". Especially if you come to an established project with an existing vocabulary. It can be bizarre and absurd (my favorite was "defect validation" for the process of checking bug fixes), but it's familiar to everyone, so you'll likely spend more time trying to "reteach" than working. And you'll probably fail.

So… how to live with this contradiction?

And later:

As I understand you, the contradiction is resolved by considering how much harm it can create?

That makes sense. Pain and risk analysis in practice. Even though I said that the "defect validation" absurdity was my favorite, it wasn't harmful to our thinking processes: we usually didn't even call it that and used the abbreviation "DV". Or sometimes QA (grrr). I noticed that the majority of non-dev colleagues don't even know what the A stands for. It's probably the reason why some testers come up with new reinterpretations like "Advocate" or "Assistant". BTW, here is a funny example.

While James argued that there is no contradiction, it is there. It's my internal contradiction. I feel these sides of me constantly struggling; to an outside viewer it can look like inconsistency.

I haven't found a resolution to this conflict yet, but it occurred to me that it stems from the clash between testers' characteristic nit-pickiness and my amateur linguistics studies.

You see, when it comes to languages in general, my position is 100% on the side of "all is fine as long as we understand each other". Why?

Pronunciation

I'm not a native English speaker. And I'm not gifted with the ability to perfectly emulate native pronunciation. In fact, sometimes I have to speak with an even heavier Russian accent than my usual one, because it's easier for some people to understand.

Now, if we go even further, what is native pronunciation? Standard American? Royal UK? Or Canadian, eh? So even if I could speak one proper variant, it still wouldn't be really native in many contexts.

Vocabulary

An obvious point, with the same examples as before. But that's English, the de facto new Latin. Let me tell you an anecdote from Russian. I was born in northern Kazakhstan, in a city on the border with Russia. You'd imagine there wouldn't be many vocabulary differences. Yet the moment you cross the border, everyone can tell you're from Kazakhstan by one extremely common word: "sotka" (a contraction of "a cell phone"). In Russia they use "mobilka" (a contraction of "a mobile phone"). This reminds me of "smoke" vs "sanity" testing a lot.

Words origins

Some words with the same spelling can have completely distinct origins. For example, “bear”:

  • the animal came from Proto-Indo-European *bʰerH- (grey, brown) or *ǵʰwer- (wild animal)
  • "to sustain" came from Proto-Indo-European *bʰer- (to carry)

And the opposite situation, when words with different spelling and meaning came from the same origin, e.g. “suit” and “suite”.

People tend to make wrong assumptions about modern spelling and pronunciation. We think we understand words and their relative closeness. We don't. Therefore, many everyday conclusions about them are also faulty. Those who try to be on the safe side refer to dictionaries, but here is the catch: dictionaries are opinionated slices of a language. That's why there are so many of them: general, jargon-specific, etymological. There is no single source of truth you can safely refer to. Isn't it funny when you read news about some big-name dictionary finally including a word that has been in use for ages?

By the way, the Russian "тестер", while being a direct calque of "tester", is a name for devices like multimeters. Electrical engineers coined the term in Russian before our role was invented (so we are named "тестировщик" ~ "testist"). It's quite funny considering how much emphasis there is on testing being a human activity.

Evolution of languages

All of the aforementioned are just smaller parts of overall language evolution. For some reason we are accustomed to perceiving a language as something static, with rigid unchanging rules. Maybe because it's easier to teach that way in school? Nothing could be further from the truth, because languages are perpetually in flux, for both historical and geographical reasons. It used to happen naturally, without many obstacles, but now we have schools, official authorities like the Académie française, and the beloved grammar nazis.

One of the best examples is the accentual system in Russian (word stress). Whoever has tried to learn how to pronounce Russian words is certainly baffled by how illogical it is. Natives make mistakes all the time. The reason is simple: the current system is in transition from the highly ordered and easy Proto-Slavic accent to a new accent that will someday also be ordered, but different. Right now the transition is around the 30% mark, and it wreaks havoc among speakers. What makes things worse are all those regulatory bodies and opinionated people who try to control this process and make you speak in an already obsolete way.


I won't be able to stop arguing about some words. Even those who say they are sick and tired of the common testing debates still nit-pick other concepts. It's a part of human nature, common not only in the testing community: recently I read an article by Troy Hunt about which way of API versioning is better. And the most valuable lesson from it is:

Unfortunately this very often gets philosophical and loses sight of what the real goal should be: building software that works


How to test API usability: part 1

13 Oct 2019
 8 mins

Disclaimer: this is a translation of an article written 2 years ago for a corporate blog. Bear in mind that at the time of writing I was testing SOAP services and Excel-based import/export on a big government project, so most of the examples relate to that experience.


Usability is one of the most crucial quality attributes of an API. Let’s talk about why, when, and how we can assess this characteristic.

Today (hopefully) no one doubts the necessity of usability testing for GUIs. Yet, according to ISO 9241, usability is the effectiveness, efficiency, and satisfaction with which specified users achieve specified goals in particular environments. There is no mention of menus, fonts, or button colors. Hence, we can evaluate the usability of any product, be it a mobile app, a vacuum cleaner, or an API.

For testing API usability, we can use methods developed in the field of HCI, the same ones used for GUIs. Generally, these methods fall into two categories: analytical and empirical.

Analytical methods

Analytical methods involve exploration based on expert knowledge. Loosely speaking, you and/or the whole dev team try to evaluate the API and find hypothetical usability problems without user input.

Heuristic evaluation

The easiest way is to use heuristics. There is no strict list of criteria to check; it all depends on what kind of API you have (e.g., a library or a REST service).

For instance, a paper on a structural analysis of usability problem categories mentions this set of heuristics:

  • Complexity. An API should not be too complex. Complexity and flexibility should be balanced. Use abstraction.
  • Naming. Names should be self-documenting and used consistently.
  • Caller's perspective. Make the code readable, e.g. makeTV(Color) is better than makeTV(true) (see the sketch after this list).
  • Documentation. Provide documentation and examples.
  • Consistency and conventions. Design consistent APIs (order of parameters, call semantics) and obey conventions (get/set methods).
  • Conceptual correctness. Help programmers to use an API properly by using correct elements.
  • Method parameters and return type. Do not use many parameters. Return values should indicate a result of the method. Use exceptions when exceptional processing is demanded.
  • Parametrized constructor. Always provide default constructor and setters rather than a constructor with multiple parameters.
  • Factory pattern. Use factory pattern only when inevitable.
  • Data types. Choose correct data types. Do not force users to use casting. Avoid using strings if better type exists.
  • Concurrency. Anticipate concurrent access in mind.
  • Error handling and exceptions. Define class members as public only when necessary. Exceptions should be handled near where they occur. Error messages should convey sufficient information.
  • Leftovers for client code. Make the user type as little code as possible.
  • Multiple ways to do one thing. Do not provide multiple ways to achieve one thing.
  • Long chain of references. Do not use long complex inheritance hierarchies.
  • Implementation vs. interface. Interface dependencies should be preferred as they are more flexible.
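
To illustrate the caller's perspective heuristic, here is a toy sketch in Python (the TV example comes from the heuristics list above; the code around it is mine):

# The call site should read well on its own: make_tv(True) tells the reader
# nothing, while make_tv(Color.BLACK_AND_WHITE) is self-documenting.
from enum import Enum

class Color(Enum):
    COLOR = "color"
    BLACK_AND_WHITE = "black and white"

def make_tv(color: Color) -> str:
    return f"a {color.value} TV"

# make_tv(True)                        # what would True even mean here?
print(make_tv(Color.BLACK_AND_WHITE))  # the intent is obvious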

Let's try to apply some of these heuristics. There was a time when every new tester came to me during onboarding and asked about the error message "House with ID <> was not found." I told them to use the internal system id instead of the global FIAS id (the Russian index system for buildings). And every one of them looked startled and answered that there is no such parameter in the API request! Well, the problem was that you had to put it into the same parameter, named FIASHouseGUID. For some reason, when the system was designed, no one thought that the better name would have been HouseID, since it could be filled either with a FIAS id or with an internal id. Even though the current name was misleading (the naming heuristic), it was no longer possible to change it without breaking backward compatibility.

The next example is about error handling. One service I tested had a very common error, "Access is denied." There were numerous reasons for it: no entitling documents, documents not in the "published" status, another organization had already created the same object. The causes were different, but the error message was the same; users couldn't guess what their problem was.

There are other, more "serious" heuristics for APIs. They often target specific technical details, and you need to be able to code to understand them. For example, the criteria from Joshua Bloch. Or the usability research by Microsoft that determined which constructor style is better: a default constructor with setters and getters, or a constructor with required parameters. The results showed that the first style was preferable; this became a heuristic for library design.

Cognitive dimensions

These are distinct criteria used predominantly for evaluating the usability of notations, user interfaces, and programming languages, or, generally speaking, information artifacts. In my view, they intersect with some heuristics, but there is a difference: heuristics are contextually selected by experts, whereas cognitive dimensions are a more or less stable set of principles. You can read about the main set, described by Thomas R. G. Green and Marian Petre, on Wikipedia.

Some companies customize cognitive dimensions for their needs, like this framework suggested by the Visual Studio usability group:

  • Abstraction level. The minimum and maximum levels of abstraction exposed by the API, and the minimum and maximum levels usable by a targeted developer.
  • Learning style. The learning requirements posed by the API and the learning styles available to a targeted developer.
  • Working framework. The size of the conceptual chunk (developer working set) needed to work effectively.
  • Work-step unit. How much of a programming task must/can be completed in a single step.
  • Progressive evaluation. To what extent partially completed code can be executed to obtain feedback on code behavior.
  • Premature commitment. The number of decisions that developers have to make when writing code for a given scenario and the consequences of those decisions.
  • Penetrability. How the API facilitates exploration, analysis, and understanding of its components, and how targeted developers go about retrieving what is needed.
  • API elaboration. The extent to which the API must be adapted to meet the needs of targeted developers.
  • API viscosity. The barriers to change inherent in the API and how much effort a targeted developer needs to expend to make a change.
  • Consistency. How much of the rest of an API can be inferred once part of it is learned.
  • Role expressiveness. How apparent the relationship is between each component exposed by an API and the program as a whole.
  • Domain correspondence. How clearly the API components map to the domain and any special tricks that the developer needs to be aware of to accomplish some functionality.

Here is an example of domain correspondence. The service's main entity was a house. A common apartment building can have several entryways, each leading to a set of apartments. But in Kaliningrad this doesn't apply: a typical address there can look like "2-4 Green Street," where the entryways are house 2 and house 4. This bizarre (and initially unknown) domain model broke the whole logic behind the API design. For instance, we had to allow users to add house-level metering devices to entryways when an entryway is actually a house.

Cognitive walkthrough

While the first two methods are based on checking an API against some list of criteria, a cognitive walkthrough is closer to scenario-based testing. Essentially, an expert comes up with typical API usage scenarios and attempts to perform them.

Cognitive walkthrough example

You can combine this method with heuristics. When we analyzed our services, we found consistency problems: when you sent a request to create an entity, some services responded with the entity version id, while others provided the root id. Moreover, most services required an entity id for the creation of other entities, and again, it could be either the root or the version id. It didn't look that bad until we tried walking through a business scenario:

  1. Create an entitling document
  2. Create a metering device providing document root id

In the existing API workflow, you had to do it in 3 steps instead of 2:

  1. Create an entitling document → server responds with document version id
  2. Retrieve the document using provided version id and get document root id from the response
  3. Create a metering device providing document root id

This middle step is objectively unnecessary and generates additional server load. Here, using a cognitive walkthrough, we also detected a violation of the "minimal working code size" heuristic. The sketch below shows how the workflow feels in client code.
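
Here is the 3-step workflow sketched in Python. The endpoints and field names are hypothetical (the real services were SOAP), but the shape of the problem is the same:

import requests

BASE = "https://api.example.com"  # made-up host

# Step 1: create the document; the server answers with a *version* id only.
doc = requests.post(f"{BASE}/documents", json={"type": "entitling"}).json()

# Step 2: the extra round trip, needed only to translate version id -> root id.
root_id = requests.get(f"{BASE}/documents/{doc['version_id']}").json()["root_id"]

# Step 3: only now can the metering device be created.
requests.post(f"{BASE}/metering-devices", json={"document_root_id": root_id})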

API peer review

Heuristics and walkthroughs are great methods, but they can be quite subjective. For better objectivity, use group evaluations, where several people analyze the API. You can read about how and why this method finds usability problems that are rarely found by empirical methods in this Microsoft paper.

Peer reviews involve these four roles:

  • A usability expert who organizes and moderates the evaluation process from the usability perspective
  • The owner of the specific API chunk under review
  • The owner of the API unit (or system) where the reviewed chunk resides, who knows the context of API usage and its interactions with other APIs
  • 3-4 people (usually just other developers) who will complete some task that will be used to actually evaluate usability

During the planning process, a usability expert and a chunk owner should discuss:

  • Key tasks to be completed during the review (e.g., how to create a document using an API)
  • Code examples to be reviewed
  • Who are the other participants (they can be selected based on specific criteria, like knowledge of SOAP services and Java)
  • Place and time for review session

You should start a peer review session by explaining how the meeting will proceed and communicating basic information about the evaluated API chunk. Next, you distribute the code examples and discuss them, asking two main questions:

  • Do you understand what this code does and what its objective is?
  • Is this objective achieved in logical and rational manner?

Based on the answers, the usability expert asks for details. For example, hearing that the naming is weird, the expert should ask why the person thinks so and what name would be better.

The final step is to analyze the problems. This is where the API unit owner can help identify the most significant issues and determine whether they can be resolved.


That’s the end of part one. Empirical methods are covered in part two.


How to test API usability: part 2

19 Oct 2019
 6 mins

This is part two of a two-parter. Check out part one.

Empirical methods

The distinction between analytical and empirical methods is that the latter investigates how real users will use the product.

But don't assume that empirical methods are by default better than analytical ones: both are important because they discover different problems. This research showed that heuristics were more efficient at finding documentation and structural problems, whereas empirical methods were more useful for finding UX and runtime-specific issues.

Barchart with comparison of different issue types found via different methods

Monitoring

Monitoring is used to gather usage statistics. For web services, it's rather easy. For instance, you can discover that one API endpoint is never called. Then you should consider the causes: is it missing from the documentation, or does no one need it? Monitoring also helps to map scenarios: which kinds of requests, to which services, and in what order happen most often.

And don't forget to monitor not only successful requests but also failures. Imagine some business error is piling up: maybe you need to reconsider the API design or error handling? The sketch below shows the simplest form of such counting.
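
A minimal sketch of usage-and-failure counting from an access log in Python; the log format here (method, path, status as the first three fields) is an assumption:

from collections import Counter

calls, failures = Counter(), Counter()

with open("access.log") as log:  # hypothetical log, one request per line
    for line in log:
        method, path, status = line.split()[:3]
        calls[(method, path)] += 1
        if status.startswith(("4", "5")):
            failures[(method, path)] += 1

print("most called:", calls.most_common(3))
print("most failing:", failures.most_common(3))
# Documented endpoints that never show up in 'calls' deserve a closer look.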

Another thing to monitor is data volumes. Analysts on the project I worked on assumed that type A documents would be more common than type B documents, so the service was better optimized for the first type. It was quite a surprise when we did a simple SQL count and found out that there were 600 thousand type A documents, while type B accounted for 80 million. After that discovery, we had to prioritize tasks related to service B much higher.

Support tickets

If you have a support team, you're in luck: analyze the tickets, pick out those related to usability, and identify the most serious issues. Previously I wrote about accidental Cyrillic characters instead of Latin ones in a service schema: that problem surfaced specifically via support.

Moreover, support tickets offer insight into the most common tools and workflows used with your API. Once we had an external developer who generated the soapAction dynamically from the root request structure by trimming the word Request: for example, importHouseRequest gave importHouse. But one of our services, named importPaymentDocumentRequest, expected soapAction=importPaymentDocumentData instead of importPaymentDocument (which is what the developer would expect). On the one hand, the user's solution was poor: you'd better use the WSDL. On the other hand, maybe they didn't have a choice, and we probably should have asked ourselves why the naming wasn't consistent. The trick is sketched below.
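
A sketch of that developer's trimming trick in Python; the service names are from the story, everything else about their code is guesswork:

def guess_soap_action(root_element: str) -> str:
    """Derive soapAction by trimming a trailing 'Request'."""
    suffix = "Request"
    return root_element[:-len(suffix)] if root_element.endswith(suffix) else root_element

print(guess_soap_action("importHouseRequest"))            # importHouse: works
print(guess_soap_action("importPaymentDocumentRequest"))  # importPaymentDocument,
# but the service actually expected importPaymentDocumentData, so the trick broke.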

Surveys

Not everyone has a support service. Or perhaps it doesn't give you enough information. In that case, surveying API users is helpful. There is no point in giving examples: this topic is highly contextual. But you can start with the basics: "What do you like?", "What don't you like?", and "What would you like to change?".

User sessions

User sessions are the most expensive and cumbersome usability evaluation method. You need to find people based on a typical user profile, give them some tasks, watch the process, and analyze results.

Each company administers sessions in its own way. Some perform remote sessions; others invite developers on site. In both cases, developers can use their own laptops and favorite IDEs: first, it's closer to real-world conditions; second, it minimizes the stress of an unfamiliar environment.

Yet there are more exotic setups. A developer is led into a room with a one-way mirror (yup, like in the movies). A usability expert sits behind the mirror and observes the developer's actions as well as what's happening on the dev's computer screen. The developer can ask questions, but the expert answers only in rare cases. In my opinion, it's too sterile.

Generally, all API related user sessions have two phases. The first phase is a developer workshop with tasks like:

  • Solve a problem on paper, without an IDE (to get an idea of how developers would design the API on their own).
  • Practical tasks for API usage (e.g., write a code for file upload using a service).
  • Read and review a code snippet to assess its clarity and readability (use printouts to make this task more challenging).
  • Debug a faulty code snippet (this helps to study how a user will handle and correct an error).

The second phase is an interview where you ask:

  • Name the three biggest issues you encountered during the workshop. How did you overcome them (documentation, support, Stack Overflow, a friend's help)?
  • How much time did you spend looking for additional information outside official documentation?
  • Did you encounter unexpected error messages? If yes, did they help you correct a problem?
  • Name at least three ways to improve official documentation.
  • Name at least three ways to improve API design.

Personas

Personas are used in both analytical and empirical methods. All you need is to figure out which characteristics best describe your users (in the case of an API, developers). These descriptions tend to be humanized by assigning a name and a photo and adding information about fears and preferences. You can wear a "persona hat" while applying heuristics, or rely on personas while selecting developers for user sessions.

Typical developers’ personas:

  • Systematic developers don't trust the API and write code defensively. They are usually deductive and write in C++, C, or even assembly.
  • Pragmatic developers are more common and work both in deductive and inductive manners. Typically they code desktop and mobile apps in Java or C#.
  • Opportunistic developers concentrate on solving business problems in an exploratory and inductive fashion. Guess what language they prefer? JavaScript.

Now, I want to point out that the aforementioned language discrimination is not my invention. If you're lucky, perhaps you'll find the original article by a Visual Studio usability expert where these quirky definitions were introduced. Unfortunately, I was able to get only its first page in the Wayback Machine, so you have to take my word for it. Nevertheless, I hope this example encourages you to create your own personas.

We can also combine personas with cognitive dimensions. Create a radar chart with 12 axes, where each axis is a cognitive dimension. Next, plot the current values for your API and the values the persona expects. This chart is great for seeing how well the existing API corresponds to user values (a plotting sketch follows below).

Radar chart with comparison of developer expectations vs current state of API

The developer from the example chart (blue line) prefers an API with a high level of consistency (10) and hates writing boilerplate (4). As we can see, the current state of the API (black line) doesn't satisfy these criteria.
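
For completeness, here is how such a chart can be drawn with matplotlib; the dimension subset and all the scores are invented for illustration:

import numpy as np
import matplotlib.pyplot as plt

dims = ["Abstraction", "Learning style", "Premature commitment",
        "Penetrability", "Consistency", "Viscosity"]
persona = [6, 7, 3, 8, 10, 4]  # what the developer expects
current = [4, 5, 6, 5, 6, 7]   # where the API is today

angles = np.linspace(0, 2 * np.pi, len(dims), endpoint=False).tolist()
angles += angles[:1]  # close the polygon

ax = plt.subplot(polar=True)
for values, label in [(persona, "persona"), (current, "current API")]:
    ax.plot(angles, values + values[:1], label=label)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(dims, fontsize=8)
ax.legend()
plt.show()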

Summing up

Readers comfortable with GUI usability testing will say: "That's exactly the same stuff!" And you're right: there is nothing supernatural about API usability. Even though it's called an application programming interface, programs have yet to learn how to find other APIs and use them automatically; they still need us meatbags. That's why almost everything applied to GUI usability evaluation is reusable for APIs, with some adjustments.

Now, what about the best method? None, apply them all! According to this research, each method can identify unique issues.

Venn diagram showing how different methods overlap in finding different issues

If you are tight on resources, I suggest using the least expensive methods: heuristics, cognitive dimensions, walkthrough, and support tickets. Even the simplest techniques can drive API improvements.

Someone will argue that API usability is not that important: "we don't have time for that, it's a dev thingy." But developers didn't create style guides just to be fancy; they accelerate reaching shippable quality. We care about hidden code quality, so we need to care even more about externally visible code like APIs.


A tester’s guide on hunting for API related sources

26 Oct 2019
 4 mins

You've become interested in APIs. Or you're not a fan (yet), but you have to test them. Whatever the cause, you'll want to develop a mental model of this vast field. And model construction demands a generous supply of information to consume and digest.

I prefer to seek knowledge in these five areas:

  • standards
  • dev experience
  • usability
  • tech writers
  • vendors and companies

In this article, I'll explain a bit why each area matters and give a bunch of sources. I won't go into detail about why I selected these sources in particular: some just came to mind first. Treat them as examples, and if you know must-reads I haven't mentioned, I'm happy to learn about them.

Standards

Standards are the foundation. And reading abridged explanations in Wikipedia or someone’s blog is never enough. True understanding requires the ability to read and reason based on original documents. Details do matter for APIs.

For web services, the most valuable sources are RFCs and API description and schema specifications. While some of them are community-driven, others, like GraphQL or gRPC, are backed by companies.

Dev experience

A bare-bones foundation is important, but let's add some meat. While you can speculate about the practical differences between GraphQL and REST+HTTP/2, you'll learn faster from those who develop and use APIs for a living. I'm talking about developers, of course. For some inexplicable reason, some so-called "professionals" still perpetuate the myth that testers can't understand devs' books and articles, so prove those haters wrong!

Usability

I like API usability very much. Compared to performance or security, this theme is often overlooked. HCI is the whole field of study with real research and statistics magic, which may feel overwhelming at first. Here is a selection of papers to begin with.

By the way, there is a term in HCI: developer experience. DX is like UX, but when the user is a developer.

Tech writers

I insist that without proper documentation an otherwise perfect API is still shit. And who knows docs better than technical writers? APIs are a hot topic for them, simply because it's a higher-paying field.

As a side note, it's curious from a tester's viewpoint that tech writers have the same holy war about how "technical" they should be. Cute, isn't it?

Vendors and companies

Almost all IT companies now have blogs and even conferences, and those prove to be an excellent source. I'd suggest paying attention to both API tool vendors and big product companies.

Though, be skeptical: the former try to sell you their tools, whereas the latter tend to show off.

Testers

Some of you are probably asking: where are the recommendations on testers? Well… fuck testers.

Don't even bother attentively reading testers' blogs about APIs. Don't repeat my mistake! I've lost an unimaginable amount of time doing that. Almost all of them fall into three categories:

  1. How to apply well-known testing methods and techniques to APIs.
  2. Basic theory on how APIs work.
  3. Tutorials on using <insert a library or a tool name> to test API. All those RestAssured, Karate, you name it.

Don't get me wrong, I understand that I'm also an offender and write similar articles. There is some value in them. For me, it's a way to sort my thoughts. And reading them still helps to make sense of what to look for. Moreover, if you've just started your tester's journey and aren't yet comfortable with test theory, stories about its application are useful. Nevertheless, I can go on a day-long rant about how learning only from testers stupefies you.

So, heed my advice. I bet you already follow all those testers and see API-related stuff from them once in a while. Skim through; there could be some interesting info, but don't rely on them. Fun fact: the most popular talks at the Heisenbug conference are from non-testers. Testing can never be an idea by itself; you need practical application to other fields to give testing sense and direction. That's why studying those fields proves productive and enlightening. Always hunt for other sources!



Weekly hmms: __init__

11 Nov 2019
 3 mins

__init__

This post initializes a series of weekly ponderings, interesting links, and other hmms. Think of it as a typical "Five for Friday," but without the number constraint and with more emphasis on what affects my thought process. These posts should come out on Fridays, but because I forgot to commit, this one is late.

Oh, and if you have no idea what __init__ is, that’s from Python.

“Write the Docs” podcast: episode 25

I've just started plunging in and out of tech writers' media, and so far it keeps amusing me. WTD #25 is interesting for two things.

First, this episode covers research on how developers use API documentation. Unlike the predominant web-services-based papers, this research focuses on C++. One discovery was that developers prefer checking header files (a.k.a. interface definitions) over reading the implementation or the documentation.

Second, I noticed again that we testers and API tech writers share an identity crisis: given enough time, developers can do our work. One guy on the podcast sounded relieved when he heard that not all the docs and comments at Google are written by devs. Maybe Alan and Brent should pitch modern testing principles to tech writers too?

Read the damn code

This week a testing Slack group had an almost holy war about testers looking at code. The consensus seemed to be "access to code is awesome"; nevertheless, there were other opinions:

Seeing the code will give you a strong bias. It will be more difficult to search for the unknowns.

Oh my. I don’t do it as often as I should, but I love checking commits:

  • Skimming through commit messages for the current build.
  • Checking what was touched, and how, for implemented tasks.
  • Reading commits associated with bug fixes.

And every time I uncover some unknowns. Just very recent examples:

  • Caught that the task implementing API blah-blah also had commits for API meh-meh. Not only would I not have known about these changes without the Git God, but those changes were also scheduled for later, with a different design.
  • Identified code duplication and asked the dev to fix the bug in the remaining dups or refactor the code (hehe, sorry, Eric).
  • Suspected that a bug fix was incomplete, asked the dev, he confirmed and refixed it. Not spending time on build & install & test is priceless.

So, dear testers, read the damn code. Stop behaving like special snowflakes whose minds will be forever damaged if you learn to code a bit. I learned Pascal at school. My father taught himself on paper using journal articles. It's not rocket science. The way modern society is going, coding is close to becoming a part of common literacy. No one asks for enterprise-level skill, but as long as your system isn't written in Brainfuck, even the basics should be fine.


Weekly hmms: valuations, job titles, conferences, APIs

16 Nov 2019
 3 mins

To assume I could push anything on Fridays was too optimistic. Thus, I’ll commit to Saturdays.

Let’s start.

SaaS valuations

My current company, NetGovern, operates with an open-book management mindset. Apart from other information, all employees know company finances so we can better understand what is our real position and where we plan to go next.

It's tough to wrap your head around balance sheets or cash flows, especially for me: I was born in 1992, a year after Kazakhstan declared independence from the Soviet Union. Suddenly, generations raised in a planned economy plunged into the free market (it didn't go well). I come from a culture of mixed and uncertain economic literacy; hence, all these stocks, investments, growth rates, and margins don't flow naturally into my brain.

Nevertheless, I don't shy away from these topics, because learning anything is the best hobby. This week, we read a paper explaining how to value typical SaaS companies (how to calculate a price).

I’ve created a mind map (or an outline, Mindomo supports both at the same time) with the main info:

Some tech writers avoid the word “writer” in their titles

The last myth from the podcast “10 myths about API documentation” is: people will respect you more if the word “writer” isn’t in your job title.

Insecure tech writers prefer to call themselves:

  • Information developers
  • User assistance developers
  • Information strategists
  • Content strategists
  • Technical communications professionals
  • Content engineers

First, these are so funny. Second, the testing field has the same tendency. My official job title is "QA & escalations analyst," whatever that means. Testers don't avoid the word "test," but they do try to shoehorn in hardcoreness with the words "developer," "engineer," or "analyst."

OnlineTestConf

It's no secret that I'm not a fan of testing-only conferences. It's impossible not to roll your eyes when you see the mind-blowing prices for a STAR conference and find popular workshops like "Learning Git" there. Pass.

Still, I check out some occasionally. Heisenbug is amazing. But most of the talks are in Russian, sorry folks.

The next available conference is OnlineTestConf, December 3rd-4th, 2019. The previous one was passable, and this one seems to have a similar structure. Still, nothing beats free and online. There will be big names, yet what caught my eye was the "Adidas Testing Platform" talk. The Adidas API guidelines are awesome, and learning more about their processes would be cool.

Me and APIs

Should this become a regular column? Perhaps.

Though I love APIs, my knowledge is unsatisfactory and insufficient for me to be confident in my new undertaking: writing API design guidelines. Without such a document, your APIs will soon become an inconsistent mess. You need conventions even for the most basic stuff, like capitalization or pluralization. We should have created this doc a year ago; consolidation and standardization will require painful fixes to existing clients, but better sooner than later.

As I said, I'm not an expert on API design. And I'm not a good writer either. I'm a tester whose job description is to find ways to improve quality. The next months will be fun: doing research, poking devs with sticks, and asking stupid questions in the "APIs You Won't Hate" Slack group.


Weekly hmms: Fowler, collaborations, HTML, and CTS

23 Nov 2019
 3 mins

New week, new hmms!

Martin Fowler and exploratory testing

All the testers' Slack groups, forums, and blogs erupted this week with tidings of joy: Martin Fowler wrote a post about exploratory testing. Of course, it seems a bit late and cursory, but at least now we have a very respected source to point to.

Collaborations

Mob programming

Finally found time to read the "Mob Programming Guidebook" by Maaret Pyhäjärvi. I'm not entirely sure about applying it at the moment, but it looks like an interesting method for facilitating knowledge sharing.

And the contribution-style roles can be reused as hats in other activities:

  • driver (intelligent input device)
  • sponsor (supporting others from a unique position)
  • nose (noticing things about the code)
  • navigator (translating ideas into code)
  • researcher (having better information available)
  • automationist (recognizing repetition)
  • conductor (enhancing others' contributions)
  • mobber (always contributing in different ways)
  • rear admiral (helping designated navigator do better and learn)
  • archivist (improving team visibility)

Tips for code review

The Google Testing blog is still alive and has posted a nice short guide about being a good code reviewer. The best thing is that there are tips not only for reviewers but also for authors. Likewise, they could and should be applied to issue reports and other communications.

Validation for email inputs

Standards are weird. We had a tiny funny problem this week with an OpenAPI spec and a format: email. The context:

  • SwaggerUI generates input fields, and with this format it will have type=email.
  • When I don’t care about cross-browser stuff, I use Chrome.
  • Our middleware seems to validate email formats, somewhere very deep in dependencies.
  • We also have a custom validation.

First, I checked a request with a garbage email. It failed at the middleware layer. Then I checked an email with non-ASCII characters, and this time our own validation failed. That meant the middleware validation had let it pass; its checks were better than ours.

The dev who worked on fixing the bug noticed an interesting behavior: if you open SwaggerUI in Firefox, it adds red highlighting to an email input field when the value contains non-ASCII characters. According to the MDN docs, this is a known problem stemming from the HTML5 spec itself: the specification proposes a simple regex for validation!

Chrome does not complain about internationalized values. This means Firefox implemented the HTML5 spec too faithfully, introducing confusing behavior. Oops. You can see the demo below.
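
You can reproduce the mismatch without a browser. Here is the spec's pattern (copied from the WHATWG HTML standard, to the best of my checking) run against an internationalized address in Python:

import re

# The "valid e-mail address" pattern from the WHATWG HTML spec; it is
# ASCII-only by construction, so internationalized addresses fail it.
HTML5_EMAIL = re.compile(
    r"^[a-zA-Z0-9.!#$%&'*+/=?^_`{|}~-]+@[a-zA-Z0-9]"
    r"(?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?"
    r"(?:\.[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*$"
)

for email in ["me@example.com", "юзер@example.com"]:
    print(email, "->", bool(HTML5_EMAIL.match(email)))
# me@example.com -> True, юзер@example.com -> False: the same rejection
# Firefox shows as red highlighting.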

Carpal tunnel syndrome

I haven't been diagnosed with CTS (yet), but my posture at home is awful, so my right hand hurts. I don't have a proper desk, and usually I sit with a laptop at a round table, so there is not enough room to position my "mouse" hand.

A year ago I bought a cheap Anker vertical mouse. It's great: even with my shitty habits, I had no pain. But it's wired, and soon I'll need a wireless one. Logitech is always my first choice there, because their devices don't drain batteries too fast. Unfortunately, their only vertical mouse is unreasonably expensive, so I decided to experiment with their M570 trackball.

It's cool that you don't need to move your hand at all, but the pain has returned: it looks like a vertical mouse works better for small spaces. For a trackball, you should be able to fully rest your hand somewhere. Though the more expensive trackball model can be adjusted vertically. Anyway, I'll get a real desk of my own soon; we'll see how it goes.


Why I don’t use Postman

25 Nov 2019
 3 mins

Since I'm an API person, many people would expect me to use Postman. It's the most well-known tool for HTTP-based APIs, and it's so ubiquitous that some use it even for SOAP (not the best idea ever).

I did use Postman. My gist with installation scripts for Linux was so popular that the Postman support team reached out to me when it started causing non-obvious issues with updates. I also used the tool for internship courses. Yet I won't anymore, and let me explain why.

I’m not a tech writer

Postman isn't just a tool for testing: it's often used for writing documentation. I'm a tester, so this use case doesn't apply to my day-to-day work. Maybe it's useful for your organization, though I don't approve of the inability to self-host.

I’m confused by GUI-driven tools

I still have PTSD from SoapUI. It's the best exploratory testing tool for SOAP services, but, damn, scenarios more complicated than request+response are brain busters. Many people don't have problems with GUI-driven stuff:

  • JMeter for load testing
  • UI git apps
  • Point&click tools for web automation

For me, all of these are torture devices for anything more involved than two clicks. It's something psychological: if a GUI tool requires tinkering with LEGO-like loop/if steps or pre/post-scripts, my nerve cells start audibly screeching. I'm not a command-line nerd, and I use these tools for certain activities, but not for complex automation.

I can code

Here comes the obvious part: I am comfortable with code. Yeah, shitty smelly code; nevertheless, I find it way more intuitive to write a for loop purely in a programming language than to glue together pieces of JavaScript with GUI-level bits.

My language of choice is Python, because it's very easy to scribble down a working script. It also has packages for any API need imaginable, be it Requests, Bravado, Zeep, or Yandex.Tank. A typical throwaway script is sketched below.
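
Here is the kind of throwaway script I mean; the endpoint and payload are hypothetical, the pattern is the point:

import requests

BASE = "https://api.example.com"  # made-up host

# The same loop that takes minutes of LEGO-style clicking in a GUI tool:
for i in range(10):
    resp = requests.post(f"{BASE}/houses", json={"name": f"test house {i}"})
    resp.raise_for_status()
    print(resp.json()["id"])  # assuming the API returns the created id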

I use Insomnia

Until recently I still used Postman a bit. I switched to Insomnia for mostly emotional reasons:

  • Insomnia is open-source.
  • Postman is bloated with features I don’t need.
  • There is too much Postman around. They even organize a conference now! I wonder if there is a certification somewhere already.

Insomnia is a case where less is more. I hope its recent acquisition by a bigger company won't be detrimental, but since it's open source, we can always fork it.

Though it’s not the only tool I use for exploratory API testing. My general patterns are:

  • Rapid data creation or testing simple requests: curl
  • Requests with bigger payloads: SwaggerUI or Insomnia
  • Chained, looped, or other complex stuff: reusing bravado-based adapters from our automation framework

And for actual automation, it’s code and code only.


Your brain is your brain. Your context is your context. Mine resulted in avoiding GUI-driven tools. Research, try different approaches, and don’t simply default to the most popular choice.

\ No newline at end of file diff --git a/articles/2019/11/30/weekly-hmms-watching-reading-learning/index.html b/articles/2019/11/30/weekly-hmms-watching-reading-learning/index.html index 39e6dec6..b8d8b826 100644 --- a/articles/2019/11/30/weekly-hmms-watching-reading-learning/index.html +++ b/articles/2019/11/30/weekly-hmms-watching-reading-learning/index.html @@ -8,9 +8,9 @@ A great API will empower you to be really lazy.">

Weekly hmms: watching, reading, learning

30 Nov 2019
 2 mins

Watching

During lunch, I watch videos from conferences. This week my favorites were:

The second one has a nice quotable passage:

A good API will let you be lazy.

A great API will empower you to be really lazy.

Reading

Started reading:

  • The Design of Web APIs by Arnaud Lauret. I’ve got a hard copy, yay!

  • Universal Principles of Design by William Lidwell, Kritina Holden, and Jill Butler. There is a newer shorter edition, but I decided to try out the original one first. It was available in the local library, but I had to wait almost two months in the request queue: the book is old yet still relevant.

Learning

Alan and Brent talk a lot about basing feature development on testable hypotheses. Brent mentioned that he doesn’t even use the word “requirements” anymore.

And just in time came a Coursera newsletter with a link to the Hypothesis-Driven Development course, which is part four of five in the agile development specialization. So, ideally, you shouldn’t start with it. Nevertheless, I’ve finished the first week’s materials, and it looks like the time spent will be worthwhile.

Though, there are some problems:

  • The instructor seems to be selling his “venture design process.” That doesn’t make the methodology inherently bad, but keep it in mind.
  • The way he talks makes it hard for me to concentrate. Too choppy.

The course is short, so I’ll give it a chance (perhaps to the whole specialization).

\ No newline at end of file diff --git a/articles/2019/12/07/weekly-hmms-communities-laptops-linux/index.html b/articles/2019/12/07/weekly-hmms-communities-laptops-linux/index.html index e9218098..8dab154d 100644 --- a/articles/2019/12/07/weekly-hmms-communities-laptops-linux/index.html +++ b/articles/2019/12/07/weekly-hmms-communities-laptops-linux/index.html @@ -2,8 +2,8 @@

Weekly hmms: communities, laptops & Linux

7 Dec 2019
 4 mins

This time, the proper title would be “Weekly Arghs.”

Testing communities

When I decided to become a tester, the first-ever topic I encountered was discussions (rants, really) about the “ISO/IEC/IEEE 29119 Software Testing” set of standards. I was fresh out of university, so I had no legal access to these documents, and naturally, I sided with the opponents. Good or bad (mostly bad), I don’t care: if you wish to have guidelines or standards for the profession, they must be freely accessible to everyone. The end.

But that’s not the thing I want to talk about. It’s just an example of my first deep dive into testing: it was a sign of what was to come. Basically, everything divides our community: terminology, schools of thought, views. Dammit, testing in itself is a tiny thingy; we mostly reuse and apply knowledge and methods from other fields. Nevertheless, we keep constantly bickering with each other.

I’m not a saint either. I rant, joke, correct, discuss, persuade, and roll my eyes. Maybe it comes with the profession. Or it’s a normal thing for any community. What amazes me is that such a concentration of disagreements exists around the pettiest, smallest topics. Sometimes people even talk about the same ideas but with different examples, and it results in confrontations (direct or indirect).

Citing recent cases wouldn’t be fair to the individuals involved. Let’s just say: I participate in several Slack workspaces. And it’s quite common for a discussion in one to provoke posts in others with opposite reactions like “good work” and “it drives me insane.”

Though, I want to quote one:

Testing vs Checking is like Russian revolution — senseless and merciless.

You can replace “testing vs checking” with whatever you want, and it will hold true for a lot of prevalent discussions.

Laptops & Linux

I’ve got a new laptop: Dell XPS15 7590. My old one is a Lenovo G560, almost 9 years old. It served me well, but it got too hard to keep even just PyCharm and Firefox running side by side. I squeezed every last bit out of it: maxing out the memory to 8GB, adding an SSD, using Fedora with a lighter desktop environment (XFCE). Still, its time has come.

XPS is a good choice for several reasons:

  • extensible: I’ll be able to add more memory (32GB), change SSD or WiFi card, add HDD (but with a smaller battery)
  • Linux friendly
  • good battery
  • there was an awesome discount, which is significant because I didn’t want to go wild with the work budget

But there are some disappointments. Let’s start with the objective one: keyboard.

This is my old keyboard: it has Numpad, all keys, and even separate volume controls.

Lenovo G560 keyboard

This is the new one:

Dell XPS15 keyboard

I begrudgingly accepted not having a Numpad and volume control keys, yet I didn’t even consider that there wouldn’t be separate Home, End, PgUp, and PgDn keys! Also, no right Fn key. Wow. What bewilders me more is that a previous model did have separate keys, but people complained about it on the Dell forum. You guys, I hate you. You’re exactly like the people in that XKCD comic strip.

The next disappointment is the display. Beware, this will be unexpected… I don’t like HiDPI. I’m so accustomed to shitty displays that better ones feel wrong and uncomfortable. Colors and images are fine, but text, yikes. Letters are not crisp enough; there is something smudgy about them.

I can already hear that it’s because I’m on Linux. NOPE. This laptop came with Windows 10 (I left it in a dual boot), but the text is blurry there too. What’s more, I think Linux actually does a better job with antialiasing, because old Windows apps look particularly horrible.

Maybe someone will say that I should try 4K or a Mac with Retina. Thanks, but no. I’ve seen those displays too. First, I don’t understand why people keep using them without any scaling: the font size is unbearably small. Second, I still don’t like them.

So yeah, completely subjective feelings with physical problems: my eyes are way more strained now. Habits die hard.

Oh, and I also switched from XFCE to GNOME, and this drives me even crazier. My first Linux was Ubuntu with GNOME 2, and, surprisingly, the UX in XFCE is closer to it than GNOME 3’s. I’m too lazy to reinstall again, so I guess I’ll try to relearn.

\ No newline at end of file diff --git a/articles/2019/12/14/weekly-hmms-advent-of-code-nginx-apis/index.html b/articles/2019/12/14/weekly-hmms-advent-of-code-nginx-apis/index.html index 3baa2818..90ab678a 100644 --- a/articles/2019/12/14/weekly-hmms-advent-of-code-nginx-apis/index.html +++ b/articles/2019/12/14/weekly-hmms-advent-of-code-nginx-apis/index.html @@ -2,8 +2,8 @@

Weekly hmms: advent of code, NGINX, APIs

14 Dec 2019
 1 min

Advent of code

Last week I forgot to mention that I started solving the Advent of Code challenges. This is my first year, so I don’t even try to have elegant solutions. Old trusty Python, lists and dicts galore.

The reason is simple: the goal is to go through all 25 days. Being consistent is not easy, and today I had to solve two days because of yesterday’s Christmas party.

NGINX

The news about the police raid on the NGINX office is surprising and unsurprising at the same time. Before that, there were VKontakte and Euroset; for some reason, being honestly successful is not allowed in Russia.

APIs

Tom Johnson released a podcast with Arnaud Lauret. Even though it’s a tech writing perspective, I strongly recommend that testers check it out. The laments about not being able to influence design are fun to hear: it’s almost like Agile, DevOps, and WhatNotOps left tech writers out.

Also, I found two interesting API-related newsletters: “API Developer Weekly” and “Net API Notes”. If you look at the signup page for the first one, I bet you will notice that it doesn’t mention testers. I wonder, is it because their team follows an MT-like approach and includes testers in the dev set? Hope so.

\ No newline at end of file diff --git a/articles/2019/12/21/weekly-hmms-api-practices-yegor-bugaenko-culture/index.html b/articles/2019/12/21/weekly-hmms-api-practices-yegor-bugaenko-culture/index.html index 0a9eafdb..d56b6dae 100644 --- a/articles/2019/12/21/weekly-hmms-api-practices-yegor-bugaenko-culture/index.html +++ b/articles/2019/12/21/weekly-hmms-api-practices-yegor-bugaenko-culture/index.html @@ -2,9 +2,9 @@

Weekly hmms: API practices, Yegor Bugaenko, culture

21 Dec 2019
 4 mins

API practices if you hate your customers

A fun article from ACM Queue magazine about how to alienate customers from using your API. I like the “anti-tutorial” form: it makes bad patterns and behaviors easier to recall.

One such form I filled out required me to describe the application I planned to write. <…> Sadly, I didn’t have a particular application in mind. I was going to explore the API and write a few simple diff --git a/articles/2019/12/26/lunchlearn-linting-openapi-description-docs/index.html b/articles/2019/12/26/lunchlearn-linting-openapi-description-docs/index.html index 1281e86a..235cc4f1 100644 --- a/articles/2019/12/26/lunchlearn-linting-openapi-description-docs/index.html +++ b/articles/2019/12/26/lunchlearn-linting-openapi-description-docs/index.html @@ -2,9 +2,9 @@

Lunch&Learn: linting OpenAPI description docs

26 Dec 2019
 5 mins

The last yearly company retrospective showed that we want to be better at knowledge sharing. One of the suggested formats is Lunch&Learn, kinda like Google’s Testing on the Toilet: a short non-mandatory meeting during lunchtime (30 min) with a shared recording afterwards.

I’ve started to use them for API-related topics. Tools, terminology, and other small but important things that would be beneficial to most of the development team (FYI, testers are part of the dev team, duh). Also, it’s an opportunity to organize my brain, so I’ll be posting notes here for future reference. One thing, though: these posts are not supposed to be comprehensive; they contain only information applicable to my team!

Validation vs. linting

Anyone remotely familiar with coding knows that validators and linters are important for successful development. In short, validators check “does X conform to the standards/specification,” whereas linters complain about style and design issues. Usually, tools combine both in some way.

Here I’ll cover linters for OpenAPI description documents. But you can use linting for everything, from obvious source code files to shell scripts or markdown documents.

Why we need a linter for OpenAPI description docs

We’ve started writing style guidelines for our APIs. Usually, I am the nitpicky person who reviews description docs, but I am lazy: I choose to automate myself out of the job. I don’t want to spend time checking for obvious style violations like capitalization or error definitions. And by the way, it would speed up the process.

Because of that, our current requirements for the linter are:

  • easy to set up and run locally (so that devs and testers could use it without pain)
  • can be run via Jenkins
  • possible to customize with our own rules
  • support for OpenAPI 2
  • has a future: development is active and there is at least the possibility of OpenAPI 3 support

Current implicit validation

Right now we have description doc validation during integration tests, where we use the bravado library for client generation. By default, it does some validation on doc load, and if the doc is invalid, bravado will complain. Internally it uses swagger_spec_validator. Here is a small script to show what its typical exception output looks like.

import argparse
 from pathlib import Path
 from swagger_spec_validator import validate_spec_url
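
A minimal sketch of how the rest of such a script might look (the argument handling is my assumption; validate_spec_url is the actual library call):

# Sketch only: validate a local description doc passed as an argument
parser = argparse.ArgumentParser(description="Validate an OpenAPI 2 description doc")
parser.add_argument("doc", type=Path, help="path to the description document")
args = parser.parse_args()

try:
    # validate_spec_url expects a URL, so convert the local path to file://
    validate_spec_url(args.doc.resolve().as_uri())
    print("Description doc is valid")
except Exception as error:
    print(f"Validation failed: {error}")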
 
diff --git a/articles/2020/02/01/hmms-january/index.html b/articles/2020/02/01/hmms-january/index.html
index 1514f8b6..4f906469 100644
--- a/articles/2020/02/01/hmms-january/index.html
+++ b/articles/2020/02/01/hmms-january/index.html
@@ -1,7 +1,7 @@
 Hmms: January | aviskase
-

Hmms: January

1 Feb 2020
 3 mins

Hi! The weekly hmms are transformed into monthly hmms, mostly because writing mandatory weekly posts is too cumbersome and leaves no time for more thematic writing. Of course, there are other reasons too. I had a cold for almost all the winter holidays (TWICE!), and being ill isn’t the best motivation ever.

Knowledge bits

I didn’t read or watch much, but some of the interesting findings were:

Work things

The company I work for was recognized as one of Montreal’s top employers. While I’m quite skeptical of such ratings and awards, I know for sure that we are better than some other winners.

Gaming

Ouch. Even though I was around computers from birth, I was never a gamer. I played only:

  • old Tomb Raiders (up until missions with driving: I suck at it),
  • Lineage2 (started playing because classmates were talking about it, but it’s impossible to solo there)
  • Dofus / Wakfu (to learn French)

Strictly speaking, my approach is more of a “watcher.” I watched my father play Wolfenstein, Doom, and Quake. And I’m diff --git a/articles/2020/02/07/api-testing-in-python-requests-vs-bravado/index.html b/articles/2020/02/07/api-testing-in-python-requests-vs-bravado/index.html index 1ea5976b..2d4a8113 100644 --- a/articles/2020/02/07/api-testing-in-python-requests-vs-bravado/index.html +++ b/articles/2020/02/07/api-testing-in-python-requests-vs-bravado/index.html @@ -1,7 +1,7 @@ API testing in Python: requests vs bravado | aviskase -

API testing in Python: requests vs bravado

7 Feb 2020
 7 mins

This article is written as a result of collaboration with TestProject. While many of you know me as a GUI-driven tools hater, that’s just my preference, so if something works for you and your company, that’s the only thing that matters. There are no best practices and there are no best tools for everyone.

What I really admire the TestProject team for is their strategy of creating a knowledge-sharing community. diff --git a/articles/2020/03/08/hmms-february/index.html b/articles/2020/03/08/hmms-february/index.html index cd117015..79cc506c 100644 --- a/articles/2020/03/08/hmms-february/index.html +++ b/articles/2020/03/08/hmms-february/index.html @@ -2,9 +2,9 @@

Hmms: February

8 Mar 2020
 2 mins

It’s March already, time to summarize February!

API trainings

I did a small company-wide training about APIs. Here are the main deck and the supplementary deck left over from the dev lunch&learn. Anyone is free to use them, and if you have questions, just ping me somewhere. They don’t have notes, so they can be pretty cryptic, but I am inclined to keep improving them for use with new hires.

The most important feedback was from members of marketing and tech writing teams. I failed to properly address “why’s” diff --git a/articles/2020/03/31/hmms-march/index.html b/articles/2020/03/31/hmms-march/index.html index b7f7b4e4..e6e6a7db 100644 --- a/articles/2020/03/31/hmms-march/index.html +++ b/articles/2020/03/31/hmms-march/index.html @@ -8,9 +8,9 @@ Another chose to build their own API gateway because no offerings existed that would operate in their preferred Windows-based server environment.">

Hmms: March

31 Mar 2020
 2 mins

Coronavirus-free edition!

Top X videos and readings

Meme: APIs, APIs everywhere

Another chose to build their own API gateway because no offerings existed that would operate in their preferred Windows-based server environment. diff --git a/articles/2020/04/22/using-insomnia-for-api-exploration/index.html b/articles/2020/04/22/using-insomnia-for-api-exploration/index.html index b03b73cc..ee4d7354 100644 --- a/articles/2020/04/22/using-insomnia-for-api-exploration/index.html +++ b/articles/2020/04/22/using-insomnia-for-api-exploration/index.html @@ -5,9 +5,9 @@ Let me show you some basic features. We will use OpenWeatherMap API. Workspaces Workspaces are collections of thematically combined requests.">

Using Insomnia for API exploration

22 Apr 2020
 3 mins

One of the tools I use almost daily is Insomnia. It’s a great alternative to the P-everyone-knows-that-one. Insomnia is easy to use on Linux, has plugins, and its UI is clean and simple.

Let me show you some basic features. We will use OpenWeatherMap API.

Workspaces

Workspaces are collections of thematically combined requests. Some of my workspaces are service-specific, while others contain everything related to a particular client use case or event (i.e., cross-service).

Our first examples: get current weather and forecast for Montreal. OpenWeatherMap API requires an API key, diff --git a/articles/2020/05/02/hmms-april/index.html b/articles/2020/05/02/hmms-april/index.html index 4c850a48..9ea7bf8a 100644 --- a/articles/2020/05/02/hmms-april/index.html +++ b/articles/2020/05/02/hmms-april/index.html @@ -8,9 +8,9 @@ API The Docs hosts bite-sized virtual events with discussion opportunities.">

Hmms: April

2 May 2020
 2 mins

Learning

One of the good things to emerge during the corona times is more educational opportunities.

For example, you can (and perhaps should) check the “Advanced Distributed Systems Design” course by Udi Dahan.

Another way to satisfy knowledge thirst is to attend a virtual conference:

  • API The Docs hosts bite-sized virtual events with discussion opportunities.
  • AsyncAPI has finished already, but you can watch it on YouTube.
  • OnlineTestConf is soon. I’m particularly excited about “The Lessons we can Learn from the Aviation Industry” by Conor Fitzgerald.

New font

Any “IT person” should have a favorite monospaced font. Except for those weirdos who prefer non-monospaced. I wasn’t very original in this matter and used Fira Code because it has diff --git a/articles/2020/06/02/hmms-may/index.html b/articles/2020/06/02/hmms-may/index.html index 0fede2d9..67e9dab2 100644 --- a/articles/2020/06/02/hmms-may/index.html +++ b/articles/2020/06/02/hmms-may/index.html @@ -2,9 +2,9 @@

Hmms: May

2 Jun 2020
 2 mins

This was a slow month in the sense of discovering new things.

I finished the “Advanced Distributed Systems Design” course. CQRS, DDD, messaging patterns: doesn’t sound like this course is in any way helpful for testers. Yet being just a tester is dull. Exploring such courses helps to diminish the hardest order of ignorance “I don’t know what I don’t know”. And as I’m wearing an diff --git a/articles/2020/06/18/robust-apis-are-weird/index.html b/articles/2020/06/18/robust-apis-are-weird/index.html index 97a5a53f..aea67fba 100644 --- a/articles/2020/06/18/robust-apis-are-weird/index.html +++ b/articles/2020/06/18/robust-apis-are-weird/index.html @@ -2,9 +2,9 @@

Robust APIs are weird

18 Jun 2020
 4 mins

My first full-time API testing experience was for SOAP services. There you learn what XSD is. You learn to love it.

Of course, you do! With server-side validation based on a schema enabled, you need not worry about stupid testing like checking what happens when you send a 100-character string where the expected maximum length is 50. Just make sure that the XSD is correct.
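
As a sketch of what that buys you (a hypothetical maxLength rule, checked with lxml in Python):

from lxml import etree

# A hypothetical XSD fragment: the schema itself enforces the length limit
XSD = b"""<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="name">
    <xs:simpleType>
      <xs:restriction base="xs:string">
        <xs:maxLength value="50"/>
      </xs:restriction>
    </xs:simpleType>
  </xs:element>
</xs:schema>"""

schema = etree.XMLSchema(etree.fromstring(XSD))
too_long = etree.fromstring(b"<name>" + b"x" * 100 + b"</name>")
print(schema.validate(too_long))  # False: the 100-character case rejects itself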

After that witchcraft, testing RESTish APIs feels like going back in time. To the very manual times. But then you learn about JSON Schema (RAML, OpenAPI, etc.) and you are happy again! Yay, we can turn on server-side validation and shove off stupid testing again.

The problem is that JSON and XML are different beasts. Assuming at face value that whatever is defined in the schema should be blindly validated can be wrong.

Let me explain why. And don’t worry, I made the same mistake myself.

Here is a simple XML:

<?xml version="1.0"?>
 <item>
   <name>Ring of the Wild Hunt</name>
   <tugrik>10</tugrik>
diff --git a/articles/2020/07/01/hmms-june/index.html b/articles/2020/07/01/hmms-june/index.html
index 2a0ce746..20f7ea72 100644
--- a/articles/2020/07/01/hmms-june/index.html
+++ b/articles/2020/07/01/hmms-june/index.html
@@ -5,8 +5,8 @@
 Link soup Let’s start with the good news: OpenAPI 3.1.0 RC is out!. Webhooks and reconciliation with JSON Schema are the highlights of the version. I hope tooling support won’t lag this time.
 I really liked the article by Daniel Jarjoura on communications.">

Hmms: June

1 Jul 2020
 2 mins

Time for the list of things I found interesting/amusing this month.

Let’s start with the good news: OpenAPI 3.1.0 RC is out! Webhooks and reconciliation with JSON Schema are the highlights of the version. I hope tooling support won’t lag this time.

I really liked the article by Daniel Jarjoura on communications. For example, I didn’t know that crowd brainstorming has been proven to be less effective.

Cool little apps that were recommended to me:

Positioning change

And, finally, the regular column “I kinda don’t like testers.” This month I tracked all testers-centric communities and articles to determine how useful these sources of information are for me. The result was as expected: the noise-to-signal ratio was appalling. I’m tired of:

  • Rehashing of the same ideas by the same authors for years.
  • Questions that will never pass Stack Overflow filter.
  • “Should testers learn to code?” debate specifically.
  • Being overly dramatic about the slightest differences in opinion. The schools of testing, yadda-yadda.
  • Over-glorifying the tester’s role and knowledge (common in certain groups).
  • Jokes and memes about us (testers) vs. them (devs) (typical amongst juniors and junior oriented resources).

Thus, I made some decisions:

1. I left the RST and MoT (and its abolished sister group) Slack communities. Stayed only in the ABT, because it’s smaller and more enjoyable.

2. Won’t hunt for any more testers-centric resources. Mind you, testers-centric ≠ testing-centric. Whatever I had in RSS + ABT podcast is more than enough.

45 RSS feeds in “Testing” category

3. Won’t save articles into the reading queue “just in case” if it’s obvious from the title and the first paragraph that the author is repeating themselves.

4. Change the “About me” page to be less role-specific. I wrote it a long time ago.

5. Do I contribute to the same noise? Perhaps. Can’t promise, but I’ll try not to. Obviously, these Hmms aren’t really original content, but written reflection is a great way to process information, sorry!

P.S. The saddest YouTube recommendation

YouTube recommended that I rewatch the opening sequence of “Valerian and the City of a Thousand Planets”. The movie is forgettable, but this small part is a masterful tearjerker, especially now.

\ No newline at end of file diff --git a/articles/2020/07/03/knowledge-management-tools/index.html b/articles/2020/07/03/knowledge-management-tools/index.html index 7b73b599..cc962724 100644 --- a/articles/2020/07/03/knowledge-management-tools/index.html +++ b/articles/2020/07/03/knowledge-management-tools/index.html @@ -2,8 +2,8 @@

Knowledge management tools

3 Jul 2020
 6 mins

As a knowledge worker, I spend a lot of time searching for and experimenting with approaches to knowledge capture and storage. Currently, I’m playing with a shiny new app, so I figured it’s the perfect moment to systematize and reflect on the tools and techniques I’ve used until now.

Handwritten

I was more into handwriting during school and university, and there are still tons of notebooks with course notes back in Kazakhstan. I followed the same process and structure for extra studies like Coursera, and the most important thing I learned was that these notes were never ever opened or referenced later. A useless waste of stationery.

One of the last examples from English&French practice:

Old handwritten notebook

Don’t get me wrong, I love stationery. There was a time when we visited the biggest stationery store almost every day after classes to gawk at cute new notebooks and pens. If my handwriting weren’t that atrocious, perhaps I’d be more into bullet journals and scrapbooks, but alas, that still wouldn’t make them useful for anything besides creativity outbursts.

Now I use paper only for transient scribbles consumed right away or during the day, like this problem-solving for Advent of Code.

Untranslatable scribbles

Though, I prefer Rocketbook, because it’s like a mini-whiteboard and I get to enjoy all the fancy colors. Here is a note from an API design brainstorming session:

Brainstorming in Rocketbook

Digital

If handwritten notes are useless, digital ones are too chaotic, because I experimented with too many formats.

Non-structured

The first iterations were a transposition of handwritten notes into digital via Google Docs.

Notes in the Google Docs

They are as ineffective as the real notebooks. Type once and forget.

The direct opposite are plain text notes in Sublime with PlainNotes plugin. I use them almost every day for:

  • transient notes like “did this during the smoke test”
  • common copy pastes

Sublime with PlainNotes

Mind maps

At some point I learned about mind mapping and tried almost all installable applications for that, including:

  • XMind: UX is great, but features in the free tier are meh, especially in the last versions. Likes to eat memory.
  • Freeplane: ugly. Very ugly. Unbearably ugly. But free, fast, and has tons of features.
  • Mindomo: my latest selection, not free, yet it has a feature no one else has (more on that later).

As of now I have an assortment of mind maps in various formats, like Freeplane or XMind:

XMind mind map overview

I liked mind maps for the capturing phase, but for some inexplicable reason, I found them uncomfortable for referencing. They met the same fate as the old course notes.

Outliners

After that realization I discovered outliners: think of them as a marriage between mind maps and bullet lists. You can follow the same tree-like structure and fold/unfold branches, yet with more capabilities for formatting and longer text.

The most well-advertised outliner is WorkFlowy, but I never tried it because I’m an awful human being and prefer free stuff xD So the next best alternatives are Dynalist and Checkvist. Both are fine. There are hundreds of others, some of them open source and/or crazy and/or vim-like, but as a UX junkie, I used those two the most. The reasons to stop were:

  • opposite of mind maps: hard to capture, easy to reference
  • web-based

That’s why I moved to Mindomo: it has a killer feature, switching between mind map and outline views.

Notebooks

Around the same time as I discovered mind mapping, I saw the need for long-form note storage. Yeah, yeah, Evernote rules the stage. I had it, but the web version became slower and slower, and there was no Linux client, so I played with others. It’s hard to remember their names, though; I bet I tested more than a dozen.

For example, Simplenote. Nice, but very basic.

Or the extremely Chinese WizNote.

The last one, ditched just a month ago, was Joplin. It’s very good, and I do recommend it: open source, supports Markdown, has a web clipper and an Android app with WebDAV sync. If all you need is a more or less suitable Evernote alternative, it’s OK. Spoilers: I needed more.

Before we go to the last section, notable mentions:

  • LaTeX and RST formats: used them for a while, but Markdown is way easier and better supported.
  • OneNote: I heard it’s fine if you are a Windows user. Well, I am not :)
  • Google Keep: perfect inbox for quick notes, shopping, and other lists that I need to access from the phone:

Google Keep with transient lists

Zettels

OK, that’s the wrong term, but it best encapsulates the difference from common notebooks. Zettelkasten is a note-taking method that relies on linking. An overly simplified process is:

  • create basic and the smallest possible notes (one concept per note)
  • link notes with each other
  • group notes thematically

There are different variations and similar methods, but all of them are based on the ability to form concept maps (non-hierarchical storage). Check out Andy Matuschak’s notes, for example.

The most crucial part is to trace links and backlinks for each note:

  • links: note A links to notes B, C, D
  • backlinks: note A is linked from notes E, F, J

Usually it’s easy to add links but cumbersome to add backlinks, because as soon as you link A to B, you need to open B and add a link back to A. That’s the wiki approach.
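
As a sketch, this is roughly the bookkeeping such an app does for you (assuming a folder of Markdown notes with [[wikilink]]-style links):

import re
from collections import defaultdict
from pathlib import Path

WIKILINK = re.compile(r"\[\[([^\]|#]+)")  # matches [[Note]] and [[Note|alias]]

def build_backlinks(vault):
    """For every note, collect the notes that link to it."""
    backlinks = defaultdict(set)
    for note in vault.glob("*.md"):
        for target in WIKILINK.findall(note.read_text(encoding="utf-8")):
            backlinks[target.strip()].add(note.stem)
    return backlinks

for target, sources in build_backlinks(Path("vault")).items():
    print(f"{target} is linked from: {', '.join(sorted(sources))}")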

There are apps that simplify the process, like Obsidian. It has other neat features: link visualization with a graph, note/file transclusion (aka “embedding”), and tagging.

I’ve started slowly migrating my non-work notes there, and it’s hard. You get used to rigid folder-based hierarchies. You don’t have to stop using them, but loosening them makes sense with approaches like @nickmilo’s (which I follow loosely).

Non-text content

Only digital:

  • Inkscape: my go-to drawing app since I’m better with vector graphics.
  • Krita: when I learn how to draw with my graphic tablet xD
  • Excalidraw: discovered via Obsidian’s Discord. Will use it for graphs.
  • LibreOffice Impress: not a drawing app, but for those situations when I need a presentation.
  • Shutter: screenshots with annotations, more than enough for everyday usage. Or ShareX when on Windows.
  • SimpleScreenRecorder for short screencasts. People who use GIF for that: burn in hell.

Summary

Let’s summarize. Handwritten notes:

  • use whiteboarding format (can easily “edit” by cleaning)
  • transient, not for long-term storage
  • for generating ideas or designs
  • for small tasks during the day

One important exception is recipes. All are handwritten on small cards and stored in a box. The reason is to be able to pull one out and stick it to the fridge with a magnet. I don’t have many recipes anyway, so searching and storage aren’t a problem.

For digital:

  • Mindomo when I really want a mind map. Something tells me I won’t.
  • Sublime PlainNotes for transient notes and copy pastes.
  • Google Keep for semi-transient lists and inbox.
  • Obsidian for anything else, except the overly work-related (I prefer to experiment more on the free tier before paying for a commercial license).

And for non-text content, imagine the entire section transcluded here ;)

The last important bit: almost all my data is backed up in some way. Google Drive, Yandex.Disk, and OneDrive. I once lost a hard drive with my end-of-semester projects and 300GB of carefully collected music (which I never recovered). Make backups. And back up some backups.

\ No newline at end of file diff --git a/articles/2020/08/04/hmms-july/index.html b/articles/2020/08/04/hmms-july/index.html index 55a41d17..4b477867 100644 --- a/articles/2020/08/04/hmms-july/index.html +++ b/articles/2020/08/04/hmms-july/index.html @@ -2,8 +2,8 @@

Hmms: July

4 Aug 2020
 2 mins

Oops, I have nothing to share

Since migrating to Obsidian for knowledge management, I’ve lost track of the interesting resources I read, because they are represented as atomic cross-referencing notes.

Perhaps I should try to be more consistent and always write literature notes. Alas, my current graph looks like this (obviously, I didn’t have time to transfer older notes from other systems):

Obsidian graph view as of July 2020

In general, my topics of interest for this month were:

  • jobs to be done framework
  • jobs stories vs. user stories
  • characters vs. personas
  • API design approaches
  • relearning basic Lean & Kanban principles like value stream mapping or my beloved 3Ms

No longer a tester

My official title has changed to engineer. But don’t expect me to stop whining about the testers’ community; I still talk to them xD

Anyway, the title change was a cumbersome thing. On one hand, I don’t care that much. As long as people know me and understand that I’m that weird glue person who pokes into every process where waste is evident, I’m fine. On the other hand, the title matters for stupid things like initial introductions to a new team or hierarchical structures. Would I prefer something less generic than “engineer,” which may accidentally emphasize the “implementer” hat instead of the “analyst/designer” one? Yup. Did I have imperfect titles before? Yup. So, whatever.

First learnings from wearing the implementer hat

I am mostly a Python coder, though I have some almost-forgotten experience with C++ and Java. But now I need to work with TypeScript/Node.js code, and it’s been… interesting. The biggest problem coming from Python is that in JS there are a million ways to do the same thing, which for me results in endless googling about which way is more idiomatic and recommended. The entire ecosystem around TS, JS, and Node seems more confusing. But, no arguments here, the type system in TS is nuts. Like an additional language.

I also enjoy working in Visual Studio Code, especially with its automagic connection inside Docker containers. This blog is written in PyCharm, but now I’m considering switching to VSCode because it has more Markdown plugins.

Another dev thingy: the blog used to be built with Travis CI. Recently I switched to GitHub Actions, and it was fairly easy to do. I’d suggest that anyone who wants to learn DevOps-ish stuff experiment on GitHub, since it’s all in one place and their docs are user-friendly.

\ No newline at end of file diff --git a/articles/2020/08/08/be-chicken/index.html b/articles/2020/08/08/be-chicken/index.html index 7b4bc4b1..d9624880 100644 --- a/articles/2020/08/08/be-chicken/index.html +++ b/articles/2020/08/08/be-chicken/index.html @@ -8,8 +8,8 @@ Pig replies: “Hm, maybe, what would we call it?”">

Be chicken!

8 Aug 2020
 2 mins

The article on API change culture uses the well-known “the chicken and the pig” fable as inspiration:

A Pig and a Chicken are walking down the road.

The Chicken says: “Hey Pig, I was thinking we should open a restaurant!”

Pig replies: “Hm, maybe, what would we call it?”

The Chicken responds: “How about ‘ham-n-eggs’?”

The Pig thinks for a moment and says: “No thanks. I’d be committed, but you’d only be involved.”

I bet I won’t say anything new or profound, but it always weirds me out that the pig is considered somewhat more important than the chicken. Oh, right, because it must die to be useful.

Well, I grew up in the 90s in Kazakhstan. Like many people, we relied on our village relatives for food, so ham and egg production is not an elegant metaphor but a real-life experience for me.

Growing pigs is a slow process with a big one-time payoff. It’s not free: you need to feed them and clean the place. And you’d better have a pair of pigs so that production can be at least somewhat renewable. One committed pig is a splendid feast in the short term and hunger in the long term. Don’t forget the emotional cost: pigs don’t die by themselves. You will hear it. You will smell it.

A chicken produces eggs continuously. You don’t even need a rooster. Feeding is less involved, and you can practice a free-range approach to optimize costs. With a rooster, eggs can be fertilized, and production might grow even more.

Thus, in my opinion, chicken-style contributors are better for a project. They bring sustainability. They lay small eggs: incremental improvements. They are the force of kaizen.

Pigs, on the other hand, are one-time Big Bang doers. The external contractor who came in, said you need to redo the entire workflow, got their paycheck, and left. That new employee who said, “let’s rewrite the system from scratch!” Don’t get me wrong, sometimes big changes are healthy and bring value, but they are also riskier. You need a bunch of chickens to support the process.

Another possible consequence for pigs is burnout. You did something big and you can’t do anything else anymore. Oops.

I often gravitate towards pig tendencies since it’s easier. It’s also better for the CV and performance evaluations, isn’t it? ;) But in the end, just as some say “we prefer being generalists,” I’d say: let’s be chickens who oink occasionally.

\ No newline at end of file diff --git a/articles/2020/08/20/postmortem-borking-ubuntu-again/index.html b/articles/2020/08/20/postmortem-borking-ubuntu-again/index.html index 6b4a4f2b..88f6eba1 100644 --- a/articles/2020/08/20/postmortem-borking-ubuntu-again/index.html +++ b/articles/2020/08/20/postmortem-borking-ubuntu-again/index.html @@ -8,8 +8,8 @@ Summary. On Tuesday at roughly 02:30 I wasn’t able to log into Ubuntu: GNOME crashed before being able to display login screen.">

Postmortem: borking Ubuntu (again)

20 Aug 2020
 4 mins

Since I had to deal with an incident recovery this week, I thought, why not use it to practice writing postmortems?

A postmortem is an excellent way to reflect on and learn from disasters. Check Google’s SRE book for more information.

So, let’s begin.

Summary. On Tuesday at roughly 02:30 I wasn’t able to log into Ubuntu: GNOME crashed before it could display the login screen.

Impact. Had to do a clean Ubuntu installation; most of the workday was lost.

Root causes. Incorrect and incomplete purge of an accidental upgrade to Pop!_OS. The upgrade itself was caused by adding ppa:system76/pop and not noticing that it contains all the packages that make Pop!_OS a different distribution.

Action items:

  • Buy an additional external drive and set up a regular /home backup procedure.
  • Document all necessary tweaks for running Ubuntu on Dell XPS 15.

Lessons learned:

  • Always use a separate partition for /home. Currently Ubuntu recommends otherwise; that’s why I didn’t create one.
  • Don’t ever upgrade or purge in the middle of the night (duh!).
  • Pay attention to PPA’s content.
  • One big external drive per household is a bottleneck.

Timeline (approximate):

  1. Saturday. While exploring shiny new themes for Ubuntu, I stumbled upon the Pop theme.
  2. Saturday. Since I prefer using a PPA to installing from source, I added the ppa:system76/pop repository and installed pop-theme.
  3. Monday 20:00. Ubuntu showed a notification about available updates. I didn’t check them and simply agreed.
  4. Monday 23:00. Rebooted into Windows.
  5. Tuesday 01:30. Rebooted into Ubuntu to leave it ready for the next work day. Noticed that dual monitor settings and other customizations were gone.
  6. Tuesday 01:40. Checked apt; it showed a warning that some GNOME packages hadn’t been upgraded properly. Thus, I decided to rerun sudo apt upgrade manually.
  7. Tuesday 01:50. The system suggested rebooting (accepted).
  8. Tuesday 02:00. After rebooting, it became obvious that it was no longer pure Ubuntu. Several places showed that it was now Pop!_OS.
  9. Tuesday 02:10. Googled how to downgrade back to Ubuntu (Pop!_OS lacks certain features). Didn’t read much and used the advice to run sudo ppa-purge ppa:system76/pop (was sleepy).
  10. Tuesday 02:20. The purge threw an exception in the middle and screwed up the system, GNOME in particular.
  11. Tuesday 02:30. After rebooting, I was not able to log in to the UI. Root shell recovery mode worked.
  12. Tuesday 09:00. Spent time researching steps for fixing; decided it would take more time than reinstalling the system.
  13. Tuesday. Booted from a LiveUSB. Discovered that there was no separate /home partition.
  14. Tuesday. There were no recent backups, so I had to do one manually. The external hard drive didn’t have enough space at the moment, and I needed to move data from it to another computer first.
  15. Tuesday. After cleaning up the external drive, I attempted to create tar archives with permission preservation. I went with two archives: one for VMs, another for everything else in /home.
  16. Tuesday. Creating the VMs’ archive failed several times. Had to split it into more archives based on directories.
  17. Tuesday. Copied all archives to the external drive and checked them on the other computer. The archives for VMs were in a subdirectory, and the system failed to open it (this computer has a half-broken Manjaro; I can’t find time to fix it).
  18. Tuesday. Copied the VMs’ archive to a USB drive. Checking failed because I forgot that this drive was formatted as ext4.
  19. Tuesday. Tired and not wanting to bother with proper permissions setup, I formatted the USB drive to NTFS, copied the archives again, and checked that they were accessible.
  20. Tuesday. Created separate partitions for / and /home and reinstalled Ubuntu.
  21. Tuesday. Transferred the backups to the fresh system, unpacked them, and installed the necessary apps.
  22. Tuesday. Checked that dual-boot Windows was still OK after all the manipulations.

I’ve been using Ubuntu, Fedora, or ElementaryOS since early university. You’d think I’d have enough experience and caution not to do stupid things like purging GUI packages while still in the GUI ;) But on the other hand, I always had a separate /home partition and got used to rapidly recovering or changing distributions. Even though it sounds like typical Windows user behavior, I see one important difference: generally, I do know what got broken. I did it myself. I’m just not patient enough to fix all the problems ¯\_(ツ)_/¯

\ No newline at end of file diff --git a/articles/2020/08/30/remembering-javascript-and-typescript/index.html b/articles/2020/08/30/remembering-javascript-and-typescript/index.html index 9e605a6b..45c4d0e7 100644 --- a/articles/2020/08/30/remembering-javascript-and-typescript/index.html +++ b/articles/2020/08/30/remembering-javascript-and-typescript/index.html @@ -2,8 +2,8 @@

Remembering JavaScript and TypeScript

30 Aug 2020
 2 mins

Until recently, I had been coding mostly in Python. I’m not an expert, but have a fair understanding of PEPs and practices.

But now I work on a JavaScript/TypeScript project, and my knowledge is definitely lacking. I followed these languages only at university, when there was still a chance that I’d have to do web development. I distinctly remember the moment of downloading the “You Don’t Know JS: this & Object Prototypes” book and thinking, “nope, that’s it, I don’t want to deal with such a language.”

Anyway, I’m in dire need of relearning JS and TS. After my experience with Python, I know better how important it is to lint, follow a style guide, and type, type, type it all. For learning I prefer books and articles, but I’ve also noticed a growing appreciation for videos. Yes, they are less efficient and longer, but you can use them during some “brainless” time and revisit interesting topics afterwards in a more thorough fashion.

Here is the list of resources I’m using now:

On a side note, I think I’ll be migrating this blog from Pelican to Hugo (learning Go, why not) or something JS-y for some nice added practice. Though the actual reason is that it’s built with an older Pelican version, which is very slow, so I have to spend time on either upgrading or migrating anyway.

\ No newline at end of file diff --git a/articles/2020/09/01/hmms-august/index.html b/articles/2020/09/01/hmms-august/index.html index 605a10ce..d753a99a 100644 --- a/articles/2020/09/01/hmms-august/index.html +++ b/articles/2020/09/01/hmms-august/index.html @@ -2,8 +2,8 @@

Hmms: August

1 Sep 2020
 2 mins

This month’s hmms were supposed to be easy peasy to do: just grab all the new notes created in Obsidian, curate a bit, and, voilà, done. I even wrote a small Python tool.

Yeah, right. I keep forgetting that it’s almost impossible to get a file’s creation date on Linux. There is a hacky way for ext4, but in my case it didn’t work: according to the script, almost all notes were created on Aug 18 o_O. OK, lesson learned: from now on my notes will have YAML front matter with a creation date =(
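
So the tool has to read dates from the notes themselves now; a minimal sketch, assuming a created key in the front matter:

from datetime import date
from pathlib import Path
from typing import Optional

import yaml  # PyYAML

def note_created(path: Path) -> Optional[date]:
    """Read the creation date from a note's YAML front matter, if any."""
    text = path.read_text(encoding="utf-8")
    if not text.startswith("---"):
        return None
    front_matter = yaml.safe_load(text.split("---")[1])  # between the first two ---
    return front_matter.get("created")  # PyYAML parses ISO dates into date objects

recent = [
    note.name
    for note in Path("vault").glob("*.md")
    if (created := note_created(note)) and created >= date(2020, 8, 1)
]
print(recent)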

After sifting through the files manually, it looks like these were the resources that grabbed my attention the most:

\ No newline at end of file diff --git a/articles/2020/09/15/conference-notes-asc-2020/index.html b/articles/2020/09/15/conference-notes-asc-2020/index.html index 6f3392ee..2bd6f0d9 100644 --- a/articles/2020/09/15/conference-notes-asc-2020/index.html +++ b/articles/2020/09/15/conference-notes-asc-2020/index.html @@ -2,8 +2,8 @@

Conference notes: ASC 2020

15 Sep 2020
 4 mins

The OpenAPI Initiative’s API specifications conference (ASC) was the first paid event I’ve attended. Usually I watch free online events or past videos available on YouTube or Vimeo, but this time I decided to silence my inner Scrooge McDuck, since there were relevant topics and the price was reasonable.

And I didn’t regret it! I would love to attend a future event, offline or online: the community is golden. There were tons of activities and discussions. The obvious example was the day-two keynote, where the discussion in the chat was almost as active as (if not more active than) the streamed panel discussion.

The recurring theme in all breakout sessions was API governance. I found it funny that many participants said they try to avoid the word “governance” because it scares or alienates developers. In the case of my current company, all our work is about giving customers the means to do information governance. And while we certainly need to become better at governing some of our own processes, at least we’re not scared of the word.

Talks that caught my attention (aside from keynotes):

  • Bridging Systems and Subcultures: A Swagger Origin Story - Zeke Sikelianos, GitHub
    • Was distracted by work for the first half of the talk, so I’d want to rewatch it.
  • Communicating Warning Information in HTTP APIs - André Cedik, shipcloud.io
    • Schedule conflict; definitely to watch later.
  • Open APIs Wide Open - David Biesack, Apiture
    • Explains the power of x- in OpenAPI definition docs.
  • Managing API Specs at Scale - Jay Dreyer, Target Corporation
    • Inspiring to see how big companies govern their APIs; and you know, that’s not rocket science, someone just has to do it!
  • From 0 to OpenAPI: Describing a 10-year old API with OpenAPI @ GitHub - Marc-André Giroux & Andrew Hoglund, GitHub
    • Schedule conflict; definitely to watch later.
  • Going AsyncAPI: The Good, The Bad, and The Awesome - Ben Gamble, Ably
    • I expected a bit more in depth explanation of practical usage for this specification, but it was still useful.
  • JSON Schema At Home in the OpenAPI Specification - Core concepts, Vocabularies, and Drafts for 2020 - Ben Hutton, JSON Schema
    • Rewatch and rewatch and rewatch: OpenAPI and JSON Schema are going to be compatible soon (yay!)
  • Create Delightful SDKs from OpenAPI - Lorna Mitchell, Vonage
    • In short, do not ship auto-generated clients. It’s like… deploying auto-generated servers.
  • The Augmented API Design Reviewer - Arnaud Lauret, Natixis
    • I’m a fan of Arnaud: his book sits on my table and his presentations are probably the best I’ve ever seen (visually and informationally).
  • The Vocabulary of APIs: Adaptive Linting for API Style Detection and Enforcement - Tim Burks, Google Nicole Gizzo, Google
    • Schedule conflict; definitely to watch later.
  • Contract as Code as Contract: Using Rust to Unify Specification and Implementation - Adam Leventhal & David Pacheco, Oxide Computer Company
    • Schedule conflict; definitely to watch later.
  • Contracts for Collaboration Driven Development - Alianna Inzana, SmartBear
    • Schedule conflict; definitely to watch later.
  • Get Rid of CRUD: Revealing Intent With Your API - Michael Brown, Microsoft
    • The talk that got me into the conference in the first place: this is what part of our development team is working on right now. Kudos to Michael for explaining bits of it to me before the conference!
  • Don’t Make It Hard for Us! What Makes a “Good” API? - Matthew Adams & Carmel Eve, endjin
    • Good conversational session for chat, felt almost like a panel.
  • API Specification and Microservices Communication Patterns with gRPC - Kasun Indrasiri, WSO2 & Danesh Kuruppu, WSO2
    • You may have noticed it already, but even though the conference was organized by the OpenAPI Initiative, all types of specifications were covered, and gRPC is one of them.

Other stuff to mention:

  • Got my first conference t-shirt! It’s even the right size xD
  • There were some technical hiccups; on the first day I missed the first several minutes of each session.
  • Surprisingly, there were several people from Montreal and Quebec. The local testing community seems way less active.

Oh, right, my usual rubric, whining about testers:

  1. Amusing (or not) observation: there was no “tester” option in the event registration form. Developers, managers, marketing, DevOps, whatever. But no testers. Oops.
  2. When people talk about API governance, they usually mention developers, product managers, tech writers. But almost never testers. I wonder why?
  3. Gotcha, no whining here! I’ve met a like-minded tester! She was quite active, and I was able to deduce that she was a tester just by the questions she posted. Can’t deny, there is a certain “world view” you get by wearing this hat long enough. The despair.
\ No newline at end of file diff --git a/articles/2020/10/04/from-pelican-to-hugo/index.html b/articles/2020/10/04/from-pelican-to-hugo/index.html index bf2d37ab..c0ee47df 100644 --- a/articles/2020/10/04/from-pelican-to-hugo/index.html +++ b/articles/2020/10/04/from-pelican-to-hugo/index.html @@ -5,9 +5,9 @@ Why and where Reasons: Pelican got a new major version 4, which required some migration work on my side either way. When I’m fixing spelling and style I prefer checking on rendered local site and not raw markdown.">

From Pelican to Hugo

4 Oct 2020
 4 mins

As I mentioned in one of my last articles, I was planning to migrate this blog from Pelican to another static site generator.

Why and where

Reasons:

  • Pelican got a new major version 4, which required some migration work on my side either way.
  • When I’m fixing spelling and style, I prefer checking the rendered local site and not raw Markdown. But live reload became too slow and frustrating (every change took approximately 1-3 minutes).
  • I just love to switch gears occasionally. This blog was originally built with Jekyll (until mid-2019).

So, I opened StaticGen to check what was trendy. My final choice was between Gatsby and Hugo.

Gatsby                                                   | Hugo
JS-based, good for enforcing much-needed JS deep dive    | Go-based, simply cool.
Very flexible, tons of plugins, data pulling via GraphQL | Somewhat flexible. Still not sure about plugins.
Praised for fast loading, especially if you have JS      | Fast building. Extremely fast building.
Often used for API dev portals                           |

As you have guessed by the title, I went with Hugo because:

  • I don’t need JS on the site itself
  • I’m more interested in fast builds than loading
  • I heard that Go is the dumbest language in the world, might as well experience some of it xD

Migration shenanigans

Pelican and Hugo have very different template formats. Rewriting took most of the time, but as you can see, there are no visual changes. I also had to write the Atom feed template almost from scratch and test it to make sure that the IDs stayed the same and the whole feed wouldn’t be completely regenerated (like what happened during the last migration).

But Hugo templates are way more flexible than Pelican’s. What I used to do with plugins is achieved with core features. I also fell in love with shortcodes and Markdown render hooks, which allow further customization. For example, I used to have a Pelican plugin that detects links to external resources and adds a special external CSS class to them. Now the whole plugin is replaced with one link render hook that I can easily customize later:

<a href="{{ .Destination | safeURL }}"{{ if strings.HasPrefix .Destination "http" }} class="external"{{ end }}>{{ .Text | safeHTML }}</a>
 

The second problem was the Markdown sources. They had a special Pelican front matter format, not compatible with any format supported by Hugo (YAML, TOML, JSON, Org). Yikes. Same for the !!! note admonition syntax, kbd auto-detection (all replaced by shortcodes), and internal links. I don’t have too much content, so most of the work was done via grep and manually replacing stuff.
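
For flavor, a rough sketch of that kind of one-off conversion (the metadata handling is my assumption; real posts needed more care with quoting and edge cases):

from pathlib import Path

def pelican_to_yaml(source):
    """Turn a Pelican 'Key: value' header block into YAML front matter."""
    header, _, body = source.partition("\n\n")
    lines = ["---"]
    for line in header.splitlines():
        key, _, value = line.partition(":")
        # Naive: values containing colons or quotes would need proper escaping
        lines.append(f"{key.strip().lower()}: {value.strip()}")
    lines.append("---")
    return "\n".join(lines) + "\n\n" + body

for post in Path("content").glob("**/*.md"):
    post.write_text(pelican_to_yaml(post.read_text(encoding="utf-8")), encoding="utf-8")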

The third problem was images. The simple solution would be to put them all in the static folder, but that’s not a recommended approach because you wouldn’t be able to use image processing features. Thus, you should go with assets or page bundles. I prefer bundles because they create a clear distinction of where each image belongs.

Changes to the site

While I was trying to preserve as much as I could, there are some intentional changes:

  • No more tagging and categories. I never used them; when I needed to find anything, I just used search. Perhaps it will also force me to make titles more explicit.
  • No Google tag (aka analytics). While it had been respecting all possible “Do Not Track” settings, I ditched it completely. At first, it was fun to watch graphs, incoming sources, and play with smart goals, but after a while I stopped caring. Search console is more than enough.
  • No Disqus comments. Not much value in them anyway (maybe I should actually use Twitter? LOL.)

Results

I’m quite happy with the migration. It is blazing fast, as advertised: the GitHub Actions build went from 2-3 minutes down to 30 seconds. And local live reload is faster than it takes me to turn my head to the browser tab.

There is only one problem left. I used an img shortcode to generate responsive images, but for some reason browsers select too low a resolution on my site. I tried to understand why, yet couldn’t: I had less trouble hacking my way through GnuCash reporting in Scheme (which I don’t know) than trying to fix the damn CSS. The current style sheet was borrowed from some theme and reworked over time, but it probably doesn’t follow modern best practices. The Hugo documentation introduced me to Tailwind CSS and PostCSS; and while I hate CSS, the time for a redesign has come.

\ No newline at end of file diff --git a/articles/2020/10/18/blog-redesign-phase-1/index.html b/articles/2020/10/18/blog-redesign-phase-1/index.html index 8e61fb12..5c6be8d7 100644 --- a/articles/2020/10/18/blog-redesign-phase-1/index.html +++ b/articles/2020/10/18/blog-redesign-phase-1/index.html @@ -5,8 +5,8 @@ Oh boi. First phase took the entire week. Planning For planning and progress tracking, I used a GitHub project. This is a basic Kanban board with two “To do” columns per phase. The first phase was about making a base dirty skeleton, whereas the second phase will add bells and whistles.">

Blog redesign: phase 1

18 Oct 2020
 3 mins

After migrating to Hugo, I decided to do a redesign to freshen up colors and introduce Hugo Pipes.

Oh boi. The first phase took the entire week.

Planning

For planning and progress tracking, I used a GitHub project. This is a basic Kanban board with two “To do” columns per phase. The first phase was about making a base dirty skeleton, whereas the second phase will add bells and whistles.

GitHub project cards can have automated transitions between columns: for example, if you close an issue, it will go to “Done.” Unfortunately, when you work from a branch, issues won’t be closed by commits until you merge the development branch into the main one. In my case the main branch is used for output hosting, so the redesign branch won’t be merged there; thus, I had to close issues by hand. Meh.

R&D

From the Hugo docs I learned about Tailwind CSS and PostCSS. Tailwind CSS is kinda like Bootstrap, but without imposed styling; basically, a Lego constructor of utility classes. PostCSS is a tool for transforming CSS with various plugins like LESS/Sass-style syntaxes, autoprefixer, purger, etc. This repo and this one show how to integrate those tools into Hugo.

Design inspiration came when I stumbled upon Joshua Comeau’s blog. Yeah, I’ve imitated a lot of its typography and color choices xD It also provided some tutorials, for example, how to style ordered lists. I spent some time trying to find a similar free humanist sans-serif font; Nunito seems close, readable, and has Cyrillic characters (just in case). As for a monospaced font, the choice was obvious: JetBrains Mono.

Tailwind CSS has a plugin for typography; I didn’t use it, but copied its rem-based margins, line-heights, and font sizes for some elements.

For layout I used grid and flexbox. The last time I did any CSS, these two weren’t a common thing; you had to do a million divs with floats and other weird things. I also wanted to use grid for minor bleeding elements as described here, but I didn’t like the result: vertical margins don’t collapse when you apply grid to child elements.

BTW, raindrop.io helped tremendously with gathering links and sources. I completely migrated there from Pocket.

It has a similar “save for later” experience, but also supports collections and comments. For example, for this project I created a collection “blog redesign” and added a comment for each link saved there to remember later why it was saved in the first place.

Additional resources:

Next steps

Basic sections for next work are:

  • Bringing back old features: external vs. internal links, older/newer links.
  • New features: dark theme, automatic generation of Open Graph image, header-level links.
  • Cleaning up: splitting CSS into components, using plugin for font-face generation, gathering all hard-coded values into variables.
  • Fixing bugs: display of code elements in links and headers (yup, you can spot it in this article) and others.

I’m also thinking about ditching Tailwind CSS at this point. It’s great for rapid prototyping, but it quickly became a mess of styling split between inline classes and the style sheet.

And one more thing. I no longer like the selected color palette, lol. Links are too bright, block quotes are weird, the header is too cheesy. People say there are 2 hard problems in computer science: cache invalidation, naming things, and off-by-1 errors. I think “choosing colors” is the “naming” equivalent in design.

\ No newline at end of file diff --git a/articles/2020/10/25/automap-api-operation-handlers-in-components-based-project/index.html b/articles/2020/10/25/automap-api-operation-handlers-in-components-based-project/index.html index 09aa670a..2030e335 100644 --- a/articles/2020/10/25/automap-api-operation-handlers-in-components-based-project/index.html +++ b/articles/2020/10/25/automap-api-operation-handlers-in-components-based-project/index.html @@ -5,10 +5,10 @@ For example, the typical layers-based layout would be: . ├── common │ └── utils.ts ├── routers │ ├── locations.">