In this Appendix, we present the main questions of the survey (a star indicates that answering the question is required) and the possible answers to the closed-ended questions. The introductory and demographic questions are not included.
Q1* Please describe in a few short sentences or bullet points your high-level process of creating and/or evaluating design artifacts. (open-ended)

Q2* Are there any tools or platforms that you use during that process (please list them again, even if you mentioned them above)? (open-ended)

Q3* What are the usual obstacles or limitations you encounter when you want to ensure that a design artifact meets users' needs as much as possible? (open-ended)
Q4* Which of the following types of platforms or tools sound familiar to you?
- Accessibility checkers, or similar
- Heuristics / guidelines / design patterns
- Standards, e.g., ISO or DIN
- Best practices
- Competitive benchmarks
- Statistical tools
- Automated UX/UI analysis (e.g., EyeQuant, Feng-GUI)
- User testing platforms
- On-site behavior analytics
Q5 Of the platforms and tools above, which ones do you actually use? (open-ended)

Q6 If any, which machine-learning- or AI-based platforms or tools do you use in your process? (open-ended)
Q7 Within this list of prototyping tools, please check the ones you already know and/or use (if you check "use", please also check "know"). If you additionally use any specific plug-ins, please list them in the comments box below.
- Adobe XD
- Axure
- Balsamiq
- Figma
- Autoflow (for Figma)
- InVision
- Justinmind
- Mockplus
- Origami Studio
- Sketch
- uizard.io
Q8 For the purpose of this research, we understand a user behavior model (UBM) to be: a system which can, based on previously learned user behavior but without involving actual users, instantly and automatically produce a visual design/prototype, analysis, or validation. Can you think of any platforms or tools that fit this description? If yes, please list them here: (open-ended)
Q9 Generative tools based on UBMs or AI:
- MenuOptimizer (by Aalto University)
- GRIDS (by Aalto University)
- UI-image-to-GUI-skeleton
- Paper2Wire
- MS Sketch2Code
- UIZard Design Assistant
Q10 Automated (no users) usability validation/evaluation:
- Kobold (Usability Smells Finder)
- Qualidator
- USEFul
- WaPPU
Q11 UI/UX-related metrics/KPIs:
- AIM (Aalto Interface Metrics)
- Zyro (heatmap prediction)
- Chrome/Firefox plug-ins (webpage stats)
- Clicks/CTR prediction (Outbrain, UBR4CTR)
- Eye-gaze/region-of-interest prediction
- Web Vitals & performance metrics (https://web.dev/vitals-tools/)
Q12 Design guidelines organization/validation tools:
- material.io (Material Design guidelines)
- XCUITest for Xcode (Apple HIG)
- Test.ai (Apple HIG AI bots)
Q13 GOMS/KLM tools:
- CogTool
- Cogulator
- KLM calculator
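The KLM tools listed under Q13 estimate expert task-completion time by summing standard operator times for a sequence of low-level actions. As an illustration of what a "KLM calculator" computes (a minimal sketch, not any listed tool's implementation; the operator times are commonly cited Keystroke-Level Model averages, and the operator sequence is a hypothetical example):

```python
# Minimal Keystroke-Level Model (KLM) estimator.
# Operator times in seconds; exact values vary by source and typist skill.
KLM_TIMES = {
    "K": 0.28,  # keystroke (average typist)
    "P": 1.10,  # point at a target with the mouse
    "B": 0.10,  # mouse button press or release
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_estimate(ops: str) -> float:
    """Sum the operator times for a sequence such as 'MHPBB'."""
    return round(sum(KLM_TIMES[op] for op in ops), 2)

# Hypothetical task: mentally prepare, home to mouse, point at a button, click.
print(klm_estimate("MHPBB"))  # 1.35 + 0.40 + 1.10 + 0.10 + 0.10 = 3.05
```

Tools such as CogTool build on this idea but derive the operator sequence from a storyboarded UI rather than a hand-written string.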
Q14 HTML/CSS and accessibility validators:
- W3C validator (https://validator.w3.org/)
- WAVE Web Accessibility Evaluation Tool
- AChecker
- SortSite
- Level Access Web Accessibility Tools
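The validators listed under Q14 automate rule checks that would otherwise be performed manually. As a toy illustration of the simplest kind of rule such tools apply (a hedged sketch using only Python's standard library, not any vendor's actual implementation), the following flags `<img>` elements that lack a non-empty `alt` attribute, which corresponds to WCAG success criterion 1.1.1 on non-text content:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Count <img> tags without a non-empty alt attribute."""

    def __init__(self):
        super().__init__()
        self.missing_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            alt = dict(attrs).get("alt")
            if not alt:  # missing or empty alt text
                self.missing_alt += 1

checker = AltTextChecker()
checker.feed('<img src="a.png"><img src="b.png" alt="Logo">')
print(checker.missing_alt)  # 1
```

Real validators apply hundreds of such rules and also check properties (contrast, focus order) that require rendering the page.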
Q15 Behavior simulation/GUI testing automation:
- Selenium
- Appium
- Katalon (on top of Selenium and Appium)
- Tricentis Tosca
- Eggplant
- Linux Desktop Testing Project
- iMacros
- Robot Framework
Q16 Behavior-driven development:
- Behat (PHP framework)
- Capybara
- Cucumber (Gherkin)
- TestComplete
- Twist
- Ranorex Studio (has GUI object recognition)
Q17 Model-driven development:
- IBM Rational Rhapsody
- WebRatio
- CaseComplete
- Appian
- AXIOM
- Mendix
Q18 In case you use any of the methods mentioned in questions Q9 through Q17, for which part(s) of your work do you use which method, and in which way? (open-ended)
Q19* Referring to the tools from questions Q7 through Q17 that you don't use, what are the reasons? Please check all that apply.
- never heard of those
- required usage fees
- I don't see the added value
- they don't produce accurate or useful results
- too cumbersome
- required input not available or too complicated to create
- missing stakeholder buy-in
- technical limitations or difficulties
- Other