Litestar is a powerful, flexible, yet opinionated ASGI framework focused on building APIs. It offers high-performance data validation and parsing, dependency injection, first-class ORM integration, authorization primitives, and much more of what is needed to get applications up and running. Litestar's project outreach is among the best I have seen in any project. The maintainers are transparent about progress, and reach out to both users and contributors to encourage usage and contributions. Because Litestar is an ASGI framework that manages the majority of an application, it requires a large amount of effort and trust from its users. For example, the maintainers regularly post updates on the Python subreddit for major milestones and releases. They hold regular live office hours, with recordings also posted on YouTube. They also maintain a very large pool of "Good First Issues" for contributors to start on; as of writing, they have 20 such issues open. Because Litestar is relatively new compared to the very popular FastAPI, the maintainers have sought to prove the project's sustainability to their users. For example, in the past few years, one of their goals was to increase the bus factor to a minimum of 5. They have since achieved this, and the project has been stronger than ever. This is in contrast to FastAPI, which infamously has only a single maintainer, who declines to take on more maintainers or accept PRs.
Gradle is a build tool designed specifically to meet the requirements of building Java applications. Once it is set up, building an application is as simple as running a single command on the command line. Gradle performs well and is also useful for managing dependencies via its advanced dependency management system. Learned about Gradle through a really helpful tutorial. I learned how to write basic Bash scripts via tutorialspoint, and had to implement Batch scripts to perform environmental checks for all files tracked by git: ensuring they end with a newline, contain no prohibited line endings (`\r\n`), and have no trailing whitespace. Some interesting bugs were encountered when attempting to use pipes in Batch files, particularly one that prevents delayed expansion of variables from being evaluated as usual. This is because piped lines are executed in the cmd-line context rather than the batch context, so variables are not evaluated as they normally would be. A more detailed analysis of the bug was done by a Stack Overflow user. As I explored Codecov to determine why it would intermittently fail in GitHub Actions, I developed a greater appreciation for the role of code coverage analysis in ensuring software quality. I found its integration with popular CI/CD platforms seamless, making it easier to track and improve code coverage across projects. The visualization tools, such as the sunburst graph and diff coverage reports, were especially helpful in identifying areas that needed more testing attention. Furthermore, learning about Codecov's ability to enforce coverage thresholds and generate pull request comments reinforced the importance of maintaining high-quality test suites. Vue is a progressive JavaScript framework that simplifies the creation of responsive and efficient web applications.
Its reactive data binding and component-based architecture promote modular programming, resulting in more maintainable and scalable code. Learning about Vue's component-based architecture also expanded my understanding of modular programming and how it can lead to more maintainable and scalable code. Pug is a templating engine that integrates well with Vue, allowing for cleaner and more concise HTML through the use of whitespace and indentation for structure. By removing the need for closing tags, Pug attempts to make code more readable and organized. Its support for variables, mixins, and inheritance facilitates code reusability and modular design, improving the overall structure and readability of templates. Cypress is an end-to-end testing framework that simplifies the process of writing and executing tests for web applications. Its intuitive syntax, real-time reloading, and support for network stubbing improve debugging and development efficiency, emphasizing thorough testing. I found its syntax and API intuitive and user-friendly, making the process of writing and executing tests more enjoyable; I was particularly impressed with the real-time reloading feature, which allows for faster debugging and development, simplifying E2E testing. Bloch's Builder pattern is a design pattern that simplifies object instantiation in Java, particularly for classes with numerous constructor parameters, while maintaining immutability and improving readability. This was a particularly useful design pattern when refactoring the `CliArguments.java` class. Polymorphism is a core object-oriented programming concept in Java that allows objects to adopt multiple forms and behaviors based on their context. It promotes code cleanliness and extensibility, and reduces coupling between components, resulting in more flexible and modular applications that can evolve and scale easily.
By leveraging polymorphism, I was able to reduce the amount of logic in the main method of `RepoSense.java`. Discrete event simulation (DES) is a method used to model real-world systems that can be decomposed into a set of logically separate processes autonomously progressing through time. This design was well suited for a CLI Wizard, as it allows maintaining a deque of prompts to be shown to the user, while also allowing new prompts to be added to the deque depending on the user's responses. In RepoSense, a variety of git commands are utilized to get information about the repository. Through undertaking DevOps tasks, I was also exposed to other interesting git commands; some I was not aware of before are listed below. Researched interesting solutions for free URL shortening, looking into 3 main ways to do it. Read an in-depth write-up in the GitHub issue here.

CS3281 - 2024 Batch
CATcher
MarkBind
RepoSense
TEAMMATES
Key Contributions
Automatic deployment #272, Release changelog automation #285
Refactor certain filters into its own service #259, Refactor sorting #261, Refactor milestone filters #264
Add filters to url #314
Keep filters when switching repo #281
Project: stdlib-js
My Contributions
My Learning Record
Project: Litestar
My Contributions
My Learning Record
MarkBind
Node Package Manager (npm)

A default package manager for Node.js.

Aspects learnt:
- `npm install`, `npm update`, `npm run <script>`, etc., and how they helped streamline the development process.
- `"scripts"`, `"dependencies"`, and how to manage them.

Resources:
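As a concrete illustration of the two `package.json` fields mentioned above (the script names and dependency versions here are hypothetical, not MarkBind's actual ones), `"scripts"` maps command names to shell commands so that `npm run lint` invokes the linter, while `"dependencies"` pins the packages the project needs:

```json
{
  "scripts": {
    "build": "node build.js",
    "lint": "stylelint \"**/*.css\"",
    "test": "jest"
  },
  "dependencies": {
    "vue": "^2.6.0"
  }
}
```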
Stylelint

A CSS linter that helps enforce conventions and avoid errors.

Aspects learnt:
- Writing a `.stylelintrc.js` file, a configuration object for Stylelint to suit our own needs.

Resources:
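A minimal sketch of such a configuration object (the base config and rules shown are illustrative, not the project's actual settings):

```javascript
// .stylelintrc.js -- Stylelint picks up this exported configuration object.
module.exports = {
  // Inherit a common base ruleset, then override individual rules below.
  extends: 'stylelint-config-standard',
  rules: {
    'declaration-no-important': true, // disallow `!important`
    'color-hex-length': 'long',       // require e.g. #ffffff over #fff
  },
};
```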
Commander.js

A JavaScript library that provides a framework for building command-line interfaces (CLIs) in Node.js applications.

Aspects learnt:

Resources:
Github Actions

A CI/CD platform allowing developers to automate workflows directly within their GitHub repository.

Aspects learnt:
- Writing `.yml` files in `.github/workflows`.
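For instance, a minimal workflow file (hypothetical and simplified, not one of the project's real workflows) that runs the linter on every push and pull request might look like:

```yaml
# .github/workflows/lint.yml
name: lint
on: [push, pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: npm ci
      - run: npm run lint
```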
DevOps
Gradle
Bash and Batch Scripting

Wrote scripts performing environmental checks on all files tracked by git, ensuring each ends with a newline and contains no prohibited line endings (`\r\n`) or trailing whitespace.

Codecov
Frontend
Vue
Pug
Cypress
Backend
Bloch’s Builder Pattern
This pattern was particularly useful when refactoring the `CliArguments.java` class, as it had a large number of constructor parameters and also required flexible construction, since some of the fields were optional. The pattern facilitates immutability and reduces the risk of introducing errors in complex Java classes. Read more on Oracle's blog.

Polymorphism
Polymorphism reduced the amount of logic in the main method of `RepoSense.java`: a `RunConfigurationDecider` returns the appropriate `RunConfiguration` based on the `CliArguments`, where the config can come from `getRepoConfigurations()`.
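A condensed sketch of how the two ideas fit together, using heavily simplified stand-ins for the real `CliArguments` and `RunConfiguration` classes (the field and method names below are illustrative, not RepoSense's actual API):

```java
// Simplified stand-in for RepoSense's CliArguments, built via Bloch's Builder.
final class CliArguments {
    private final String repoUrl;   // required field
    private final boolean viewOnly; // optional field with a default

    private CliArguments(Builder b) {
        this.repoUrl = b.repoUrl;
        this.viewOnly = b.viewOnly;
    }

    static final class Builder {
        private String repoUrl;
        private boolean viewOnly = false;

        Builder repoUrl(String url) { this.repoUrl = url; return this; }
        Builder viewOnly(boolean v) { this.viewOnly = v; return this; }
        CliArguments build() { return new CliArguments(this); }
    }

    String getRepoUrl() { return repoUrl; }
    boolean isViewOnly() { return viewOnly; }
}

// Polymorphism: main() only ever sees the RunConfiguration interface and
// calls run() on it, without branching on the concrete type.
interface RunConfiguration { String run(); }

class AnalysisRunConfiguration implements RunConfiguration {
    public String run() { return "analyzing repos"; }
}

class ViewOnlyRunConfiguration implements RunConfiguration {
    public String run() { return "serving existing report"; }
}

class RunConfigurationDecider {
    static RunConfiguration getRunConfiguration(CliArguments args) {
        return args.isViewOnly()
                ? new ViewOnlyRunConfiguration()
                : new AnalysisRunConfiguration();
    }
}
```

The builder keeps `CliArguments` immutable while letting optional fields be set in any order; the decider keeps the branching in one place, so the main method stays free of per-mode logic.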
Discrete Event Simulator (DES)
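The CLI Wizard idea described earlier (maintaining a deque of prompts, with follow-up prompts inserted depending on the user's responses) can be sketched as follows; the prompt strings are hypothetical, not RepoSense's actual wizard prompts:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Minimal sketch of a prompt-driven CLI wizard: prompts are consumed from
// the front of a deque, and a response may enqueue follow-up prompts.
class Wizard {
    private final Deque<String> prompts = new ArrayDeque<>();
    private final List<String> asked = new ArrayList<>();

    Wizard() {
        prompts.add("Use anonymization?");
        prompts.add("Output directory?");
    }

    // Processes every prompt in order; answering "yes" to the anonymization
    // prompt inserts an extra follow-up prompt at the front of the deque.
    List<String> run(String answerToFirst) {
        while (!prompts.isEmpty()) {
            String prompt = prompts.poll();
            asked.add(prompt);
            if (prompt.equals("Use anonymization?") && answerToFirst.equals("yes")) {
                prompts.addFirst("Which fields to anonymize?");
            }
        }
        return asked;
    }
}
```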
Misc
Git Commands/Functionalities
git shortlog
- Summarizes `git log` output, where each commit is grouped by author and title. This is used in RepoSense to easily count the commits by each user.

git grep
- A powerful tool that looks for specified patterns in the tracked files in the work tree, blobs registered in the index file, or blobs in given tree objects. Patterns are lists of one or more search expressions separated by newline characters; an empty string as a search expression matches all lines. Utilized to write RepoSense scripts that perform environmental checks on all files tracked by git, ensuring they end with a newline and that no prohibited line endings (`\r\n`) or trailing whitespaces are present. Used the git docs to learn how to use git grep
properly and what its various flags do.

.mailmap
- If the file .mailmap exists at the top level of the repository, it can be used to map author and committer names and email addresses to canonical real names and email addresses. This is useful for mapping the multiple identities of an author or committer, and provides a way to share the mapping with all other users of the repository. Used the git docs to learn how to configure git mailmap properly.
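A throwaway repository is enough to see `git shortlog` and `git grep` in action (the file names and author below are made up for the demo):

```shell
set -e
# Create a throwaway repo with one commit by a known author.
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.name "Alice"
git config user.email "alice@example.com"
printf 'hello world\n' > a.txt
git add a.txt
git commit -q -m "add a.txt"

# git shortlog -sn: commit counts grouped by author, most commits first.
git shortlog -sn HEAD

# git grep -n: search tracked files for a pattern, printing line numbers.
git grep -n "hello" -- a.txt
```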
Supabase is a prominent open-source alternative to Firebase, aiming to replicate Firebase's features using enterprise-grade open-source tools. It offers a robust platform for developers to build scalable and reliable applications with ease.
Supabase Auth, part of the Supabase ecosystem, is a user management and authentication server written in Go. It facilitates key functionalities such as JWT issuance, Row Level Security with PostgREST, comprehensive user management, and a variety of authentication methods including email/password, magic links, phone numbers, and external providers like Google, Apple, Facebook, and Discord. Originating from Netlify's Auth codebase, it has since evolved significantly in terms of features and capabilities.
Below is a summary of my contributions to Supabase, on both Supabase/supabase and Supabase/gotrue (to be renamed to supabase/auth):
Date | Achievement |
---|---|
12/23 | Merged PR: [#19825] Update SIGNED_IN event documentation (#19974) |
12/23 | Created issue for discovered security vulnerability: signUp leaking existing user role #1365 |
12/23 | Merged PR: fix: sanitizeUser leaks user role (#1366) |
12/23 | Created PR: [#880] Add function to get user by email identities (#1367) |
12/23 | Merged PR: fix: add check for max password length (#1368) |
12/23 | Discussion on potential solutions for: Email rate limit is triggered even in scenarios where an email doesn't end up being sent (#1236) |
Through my contributions to Supabase, I've gained significant insights and knowledge:
While my experience contributing to Supabase was largely positive, I identified areas for enhancement:
From my engagement with Supabase, I've identified practices that could benefit NUS-OSS projects, particularly the use of Docker for simplifying project setup and ensuring consistency across development environments. This saved a lot of time by avoiding complicated manual setups, and allowed me to focus on resolving the issues.
A JavaScript-based diagramming and charting tool that renders Markdown-inspired text definitions to create and modify diagrams dynamically.
While setting up the MermaidJS code base, I realised that the recommended VS Code extension for Vitest (community-made) was deprecated and had been replaced by an updated version maintained by the Vitest team. I filed an issue and made a PR to update this (merged).
While understanding the codebase to solve this PR (to be solved), which involved adding functionality to git diagrams, I realised that there was an undocumented feature merged a few versions ago. I filed an issue and added this to the documentation (merged).
I am in the process of converting gitGraph functions from JS to TS in this PR; this is how Mermaid maintains an internal structure of what should be rendered. This will be followed up by another PR to change the language parser from Jison to Langium, which provides nicer features for users.
I'm still in the midst of learning this, but I've learned that parsers can be generated using programs such as Bison and Langium. Mermaid is built on Jison, a Bison implementation in JS that has been unofficially deprecated, and the project has been trying to move away from it to a maintained alternative, Langium. I will be trying to learn Bison and rewrite some parts of the git graph parser to make it more flexible, allowing me to implement new features.
Resources: GNU Bison Documentation
Over the semester, I worked on various aspects of MarkBind such as new features and bug fixes. Some of the notable works (finished or in progress) include:
Week | Achievements |
---|---|
2 | Raised Issue: Broken link for emoticon shortcut in UG |
2 | Authored PR: Correct broken UG external link |
3 | Authored PR: Use a more noticeable color for highlight words in fenced code |
4 | Investigated Printing related issues: Incorrect behavior for minimal panel transition, The collapsed page nav appears in the print view, Lower (white) navbar gets printed on mobile, #728, Support a way to generate table of content |
5 | Authored PR: Add line-numbers when soft-wrapping |
6 | Authored PR: Add line-numbers when wrapping is needed for printing |
6 | Raised Issue: Code highlighting not visible in printing |
7 | Authored PR: Add SortableTable plugin (work in progress) |
7 | Reviewed PR: Fix print code highlight |
Week | Achievements |
---|---|
1 | Merged PR: [#2073] Refactor RepoConfigCsvParser::processLine method to avoid arrowhead style code #2080 |
2 | Reviewed PR: [#2003] Suppress Console Warning #2088 |
2 | Reviewed PR: [#1224] Update .stylelintrc.json to check for spacing #2094 |
2 | Submitted Issue: Suggestions on improvement for memory performance regarding Regex matching #2091 |
2 | Submitted Issue: Suggestions for reducing runtime and memory usage for StreamGobbler #2095 |
3 | Submitted Issue: Refactor parser package for greater organisation of classes #2103 |
3 | Merged PR: [#1958] Use syntax coloring for code blocks in docs #2099 |
4 | Merged PR: [#2103] Refactor parser package for greater organisation of classes #2104 |
5 | Merged PR: [#2076] Refactor RepoConfiguration to simplify constructor complexity #2078 |
5 | Submitted Issue: Refactor CliArguments to conform to RepoConfiguration's Builder Pattern #2117 |
5 | Submitted Issue: Implement Proper Deep Cloning for RepoConfiguration and CliArguments #2119 |
5 | Submitted Issue: Parameter Verification for RepoConfiguration and CliArguments #2121 |
6 | Reviewed PR: Fix Blurry Favicon #2129 |
6 | Drafted PR: [#2119] Implement Proper Deep Cloning for RepoConfiguration and CliArguments #2124 |
7 | Submitted Issue: Dockerisation of RepoSense #2145 |
7 | Merged PR: [#2117] Refactor CliArguments to conform to RepoConfiguration's Builder Pattern #2118 |
8 | Reviewed PR: [#944] Implement authorship analysis #2140 |
10 | Merged PR: [#2120] Update RepoSense contributors in documentation #2138 |
10 | Submitted Issue: Migrate to Java 11 Syntax and Features #2177 |
10 | Reviewed PR: [#2158] Add More Documentation for Title Component #2159 |
10 | Reviewed PR: [#2151] Update Stylelint #2153 |
10 | Reviewed PR: [#2151] Update LoadingOverlay and Minor Versions of Node Dependencies #2152 |
12 | Reviewed PR: [#2176] Move from Vue CLI to Vite #2178 |
13 | Reviewed PR: [#2001] Extract c-file-type-checkbox from Summary, Authorship and Zoom #2173 |
13 | Allow CI to pass if Codecov fails #2189 |
Reading Week | Merged PR: [#2177] Migrate to Java 11 Syntax and Features #2183 |
Reading Week | Merged PR: [#2184] Fix Inconsistent Line Number Colours #2185 |
Date | Achievements |
---|---|
Nov 27, 2023 | Merged PR: Fix deadline extensions update issue #12601 |
Dec 11, 2023 | Submitted Issue: Copying feedback session: mark session name as mandatory field when copying feedback session |
Dec 11, 2023 | Submitted Issue: Copy course modal: Mandatory fields not highlighted #12653 |
This semester, I focused my efforts on upgrading the outdated dependencies in CATcher. This was a challenging task, as I had to understand the project's dependencies. Even though the current state of the repository is not what I had hoped to achieve, I hope future devs reading this might find some insights on how to approach this task.
From working with CATcher and SourceAcademy, I have gained a more solid understanding of how package managers work. Before this, I only saw Node and npm as something we had to install before we could start developing our project. In reality, package managers are a crucial aspect of any project, since they manage its dependencies.
In CATcher, I focused on identifying the dependencies that needed to be updated. I learned that dependencies are managed in the `package.json` file, and that the `package-lock.json` file is used to lock dependencies to specific versions. This is important, as it ensures that the project is reproducible across different machines. Unfortunately, a lockfile is not used in CATcher, but I believe CATcher will benefit from using one in the future.
Beyond that, I have learnt how to use `npm outdated`, `npm-check`, etc. to identify outdated dependencies. I also learnt how to use `npm ls` to print the dependency tree of a given package, which was useful in identifying the dependencies that needed to be updated.
Resources
I also assisted with the TSLint-to-ESLint migration in CATcher. While the PR was initially done by a mentee, I stayed involved and had to understand the changes made as well, including how to configure ESLint to work with the project. This was a challenging task, but I am glad that I managed to complete it.
Resources:
Beyond work done in CATcher, I also worked on SourceAcademy, where I implemented an i18n framework. i18next is a powerful library that allows for easy translation of text in a project. During my implementation of the i18n framework in SourceAcademy, I referenced several implementations of i18n across established open-source repos, such as HospitalRun and freeCodeCamp, for best practices. From these references, I learned how to structure the i18n files and the various translation resources to make it easy for future translators to add new translations.
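The resource-structure idea can be sketched without i18next itself (this is a hand-rolled simplification, not the real i18next API, and the keys and strings are made up): each language contributes one flat object of translation strings, so adding a language means adding one object, and a lookup function falls back to a default language for missing keys.

```typescript
// Hand-rolled sketch of per-language translation bundles (NOT i18next's API).
type Resources = Record<string, Record<string, string>>;

const resources: Resources = {
  en: { "login.title": "Sign in", "login.button": "Go" },
  zh: { "login.title": "登录", "login.button": "前往" },
};

// Resolve a key for a language, falling back to English, then to the key
// itself so missing translations remain visible rather than crashing.
function t(lang: string, key: string): string {
  return resources[lang]?.[key] ?? resources["en"][key] ?? key;
}
```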
Resources:
I had contributed to CATcher as part of IWM, but I had never really approached the Angular aspects of the project.
Essentially, the core ideas behind Angular involve components, each consisting of a TypeScript class with an `@Component` decorator, an HTML template, and styles. The other key concepts include event binding and property binding, which link the template to the TypeScript class. Knowing these essentials allowed me to fix WATcher PR #57.
Another key part of Angular is its dependency injection system and services. Angular allows us to provide dependencies at different levels of the application and to control how those dependencies are instantiated.
Finally, as part of fixing "Remove label-filter-bar as module export #92", I also learned how related components are organized and grouped into modules. Each module is self-contained and provides a certain set of functionality and components related to that module, thereby achieving separation of concerns.
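The injection idea can be illustrated without Angular at all (a hand-rolled sketch with hypothetical class names, not WATcher's actual code): the component declares the dependency it needs in its constructor, and whoever constructs it decides which concrete implementation to supply.

```typescript
// Hand-rolled constructor injection (no Angular involved): the component
// depends on an abstraction, not on a concrete service class.
interface LabelService {
  getLabels(): string[];
}

class GithubLabelService implements LabelService {
  getLabels(): string[] { return ["bug", "feature"]; }
}

class MockLabelService implements LabelService {
  getLabels(): string[] { return ["test-label"]; }
}

class LabelFilterBar {
  // The dependency arrives through the constructor; the component never
  // decides which implementation it gets.
  constructor(private service: LabelService) {}
  render(): string { return this.service.getLabels().join(", "); }
}
```

In Angular, the framework's injector performs this wiring based on the providers registered at the module or component level; the benefit is the same either way, e.g. swapping in `MockLabelService` for tests.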
Resources:
After having 2 separate hotfixes pushed in a single semester, I started to look more deeply into ensuring the robustness of our application. During these 2 hotfixes, bugs were uncovered only during manual testing. However, manual tests are time-consuming to conduct, and we need a way to automate them. E2E tests simulate user interactions such as clicks and typing, and are a useful way to ensure our end product performs as expected.
During this semester, one of the high priority issues was to migrate our E2E solution away from Protractor. As such, I have investigated Cypress and Playwright as two potential E2E solutions.
When performing the migration from Protractor to Playwright, I learned about the different strategies with which E2E tests can be conducted. Typically, we would want to conduct E2E tests against our production server, since that is what our end users will be using. However, since CATcher depends a lot on GitHub's API for its functionality, we are unable to perform automated tests against GitHub. A second strategy is to mock the functions that hit GitHub's API and test solely the functionality and behaviour of the app. This made me realize that there are separate test and production versions of CATcher.
I have also looked into whether it is possible to perform E2E testing against the production server, since one of the bugs fixed in the hotfixes could only have been caught if we had not adopted a mocking strategy. One of the key feasibility concerns I had with testing against the GitHub API was simulating user authentication, because authenticating with GitHub requires multi-factor authentication, which is difficult to achieve with automated E2E testing. One potential way to bypass MFA would be to use TOTP codes, which can be generated programmatically. More research will be needed in this area.
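As a feasibility sketch of the TOTP idea (RFC 6238, which is what authenticator apps implement): given the shared secret, the current code is derived from the 30-second time-step counter with HMAC-SHA1, so an E2E test holding the secret could compute it programmatically. This is a minimal standalone implementation, not CATcher code.

```typescript
import { createHmac } from "crypto";

// RFC 6238 TOTP: HMAC-SHA1 over the big-endian 30-second time-step counter,
// dynamically truncated (RFC 4226) to `digits` decimal digits.
function totp(secret: Buffer, unixSeconds: number, digits = 6): string {
  const counter = Math.floor(unixSeconds / 30);
  const msg = Buffer.alloc(8);
  msg.writeBigUInt64BE(BigInt(counter));
  const hash = createHmac("sha1", secret).update(msg).digest();
  const offset = hash[hash.length - 1] & 0x0f; // dynamic truncation offset
  const code = (hash.readUInt32BE(offset) & 0x7fffffff) % 10 ** digits;
  return code.toString().padStart(digits, "0");
}
```

The values below are the published RFC 6238 SHA-1 test vectors (secret `"12345678901234567890"`), which makes the sketch easy to verify.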
Resources:
I also picked up GitHub Actions when contributing to the CI/CD pipeline in "Enable linting in Github workflow #81". I learned how GitHub Actions workflows are set up and how they can be triggered on pushes to main/master and on pull requests.
Furthermore, I learnt how matrix strategies can run the same job with different parameters, such as different operating systems and Node versions.
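A matrix strategy fragment (simplified; the OS and Node versions are illustrative, not the project's actual matrix) expands one job definition into one run per combination:

```yaml
jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
        node: [16, 18]          # 3 OSes x 2 Node versions = 6 runs
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node }}
      - run: npm test
```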
Resources:
Part of working with the CATcher source code was frequently encountering Observables and Observers. RxJS supports `Observer`s and `Observable`s, allowing updates to some `Observable` to be received by any `Observer` subscribed to it. With this pattern, we can trigger updates in many dependent objects automatically and asynchronously when some object's state changes.
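The pattern can be shown without RxJS (a hand-rolled, heavily simplified observable; the real RxJS API is much richer, with operators, completion, and error channels):

```typescript
// Hand-rolled sketch of the Observer pattern (NOT the real RxJS API).
type Observer<T> = (value: T) => void;

class SimpleObservable<T> {
  private observers: Observer<T>[] = [];

  // Register an observer; returns an unsubscribe function, loosely
  // mirroring how an RxJS Subscription is torn down.
  subscribe(observer: Observer<T>): () => void {
    this.observers.push(observer);
    return () => {
      this.observers = this.observers.filter((o) => o !== observer);
    };
  }

  // Push a new value to every currently subscribed observer.
  next(value: T): void {
    for (const o of this.observers) o(value);
  }
}
```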
Resources:
The other key concepts include event bindings and property binding that link the template to the TypeScript class. Knowing these essentials allowed me to fix WATcher PR#57.
Another key part of Angular is its dependency injection system and services. Angular allows us to provide dependencies at different levels of the application and to control how those dependencies are instantiated.
Finally, as part of fixing "Remove label-filter-bar as module export #92", I also learned how related components are organized and grouped into modules. Each module is self-contained and provides a certain set of functionality and components related to that module, thereby achieving separation of concerns.
Resources:
After having two separate hotfixes pushed in a single semester, I started to look more deeply into ensuring the robustness of our application. During these two hotfixes, bugs were only uncovered during manual testing. However, manual testing is time consuming, so we need a way to automate it. E2E tests simulate user interactions such as clicks and typing, and are a useful way to ensure our end product performs as expected.
During this semester, one of the high-priority issues was to migrate our E2E solution away from Protractor. As such, I investigated Cypress and Playwright as two potential E2E solutions.
While performing the migration from Protractor to Playwright, I learned about the different strategies with which E2E tests can be conducted. Typically, we would want to run E2E tests against our production server, since that is what our end users will be using. However, since CATcher depends heavily on GitHub's API for its functionality, we are unable to perform automated tests against GitHub. A second strategy is to mock the functions that hit GitHub's API and test only the functionality and behaviour of the app itself. This made me realize that there are distinct test and production versions of CATcher.
I also looked into whether it is possible to perform E2E testing against the production server, since one of the bugs fixed in the hotfixes could only have been caught if we had not adopted a mocking strategy. One of the key feasibility concerns with testing against the GitHub API was simulating user authentication, because authenticating with GitHub requires multi-factor authentication, which is difficult to achieve in automated E2E testing. One potential way to work around MFA is to use TOTP codes, which can be generated programmatically. More research will be needed in this area.
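To illustrate the mocking strategy, here is a simplified TypeScript sketch (the interface and class names are hypothetical, not CATcher's actual code): the app depends on a narrow service interface, and the test build swaps in a mock that returns canned data instead of calling GitHub, so no authentication or network access is needed.

```typescript
// Hypothetical, simplified sketch of the mocking strategy -- not CATcher's real code.
interface Issue {
  id: number;
  title: string;
}

// The app depends only on this narrow interface.
interface GithubService {
  fetchIssues(repo: string): Promise<Issue[]>;
}

// Test build: canned data, no network, no MFA needed.
class MockGithubService implements GithubService {
  async fetchIssues(repo: string): Promise<Issue[]> {
    return [{ id: 1, title: `Sample issue for ${repo}` }];
  }
}

// E2E tests exercise app behaviour through the interface,
// unaware of whether the real or mock service is behind it.
async function countIssues(service: GithubService, repo: string): Promise<number> {
  const issues = await service.fetchIssues(repo);
  return issues.length;
}
```

The production build would provide an implementation that actually hits GitHub's API; the test build provides the mock.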
Resources:
I also picked up GitHub Actions when contributing to the CI/CD pipeline in Enable linting in Github workflow #81. I learned how GitHub Actions workflows are set up and how they can be triggered on pushes to main/master as well as on pull requests.
Furthermore, I learnt how matrix strategies can run the same job with different parameters, such as different operating systems and different Node versions.
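As a sketch, a matrix strategy in a workflow file might look like this (the workflow, job names, and version numbers are illustrative, not CATcher's actual configuration):

```yaml
name: CI
on:
  push:
    branches: [main, master]
  pull_request:

jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
        node-version: [14, 16]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm ci
      - run: npm test
```

This expands into six jobs (three operating systems times two Node versions), each running the same steps.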
Resources:
Part of working with the CATcher source code was frequently encountering Observables and Observers. RxJS supports Observers and Observables, allowing updates to an Observable to be received by any Observer subscribed to it. With this pattern, we can trigger updates in many dependent objects automatically and asynchronously when some object's state changes.
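The pattern can be sketched in plain TypeScript with a deliberately simplified stand-in for RxJS's Subject (this is an illustration of the observer pattern, not the real RxJS API):

```typescript
// Simplified observer pattern, mirroring the shape of RxJS's Subject.
type Observer<T> = (value: T) => void;

class SimpleSubject<T> {
  private observers: Observer<T>[] = [];

  // Like Observable.subscribe: register an observer for future values.
  subscribe(observer: Observer<T>): void {
    this.observers.push(observer);
  }

  // Like Subject.next: push a new value to every subscriber.
  next(value: T): void {
    this.observers.forEach((observer) => observer(value));
  }
}

const issueCount$ = new SimpleSubject<number>();
const seen: number[] = [];
issueCount$.subscribe((n) => seen.push(n));
issueCount$.next(1);
issueCount$.next(2);
// seen is now [1, 2]
```

In real RxJS the same shape appears as `new Subject<number>()` with `subscribe` and `next`, plus operators for transforming the stream.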
Resources:
Source Academy is an online experiential environment for teaching students computational thinking, used by the School of Computing at NUS and by Uppsala University in Sweden to teach introductory programming modules. The frontend is built using React and Redux.
In this project, I have authored and merged two PRs. They are listed as follows:
Fix double window prompt when uploading users #2943
In this PR, I fixed a long-standing UI bug where two file prompts showed up upon clicking an "upload csv" button. To solve this, I first reproduced the issue in my local development environment, and then identified the cause, which happened to be the incorrect use of a <FileInput> React component within a <CSVReader> component. The components were imported from a theming library and a CSV parser library respectively.
i18n framework #2946
In this PR, I laid the groundwork for future internationalization work on Source Academy. Source Academy started out as a project in NUS but has plans to go international, as seen by its use in Uppsala University in Sweden. As such, adding i18n to the project will be crucial for its future.
In this PR, I introduced the react-i18next library, and defined data structures that allow future translators to easily add new translations and languages.
React & Redux
Source Academy is built with React and Redux, and as such, I had to learn how to work with these two libraries. While I have used React and Redux before, I had not seen how they can be used in a large-scale project like Source Academy. There, I saw how Redux and Redux Toolkit were used to create a typesafe global state shared across the entire application, and I appreciate how well structured the code in the repository is.
i18next
i18next is a powerful library that enables internationalization in React projects, allowing text to be translated easily. During my implementation of the i18n framework in Source Academy, I referenced i18n implementations across established open-source repos such as HospitalRun and FreeCodeCamp for best practices. From these references, I learned how to structure the i18n files and translation resources to make it easy for future translators to add new translations.
Furthermore, the i18n framework that I contributed has strong type safety and only allows keys that are defined in the translation files, making it easy for future developers to see which keys are allowed in which file. I am grateful to the Source Academy maintainers for their guidance on this implementation.
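The type-safety idea can be sketched in plain TypeScript (a simplified illustration; the actual framework builds on react-i18next's type augmentation rather than this exact shape, and the keys below are made up):

```typescript
// Translation resources; `as const` lets TypeScript see the literal keys.
const en = {
  'login.welcome': 'Welcome',
  'login.submit': 'Log in',
} as const;

// Only keys that exist in the resource object are valid.
type TranslationKey = keyof typeof en;

function t(key: TranslationKey): string {
  return en[key];
}

const label = t('login.submit'); // OK
// t('login.typo');              // compile-time error: not a valid key
```

Because `TranslationKey` is derived from the resource object itself, adding a new translation entry automatically makes its key usable, and misspelled keys are caught at compile time.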
Practices and tools from SourceAcademy that could be adopted by CATcher
Source Academy utilises Yarn as its package manager. Yarn covers largely the same functionality as npm, but is generally faster and more reliable. As such, we could consider moving CATcher over to Yarn as well.
Furthermore, I was particularly impressed with their testing setup for ensuring that new changes are not breaking. They use Jest together with an interactive UI test runner that lets the developer see which tests are failing and why. This is something CATcher could consider adopting as well.
Date | Role | Description | Key Achievements |
---|---|---|---|
13 Jun 2023 | Mentor | Question: @angular/common version to use #1191 | Provided guidance on how to resolve the issue |
12 Jun 2023 | PR author | Fix peer dependencies #1193 | |
19 Jul 2023 | Issue Reporter | Documentation for CATcher's parser #1204 | Created issue to discuss documentation for CATcher's parser |
19 Sep 2023 | PR author | Upgrade to Angular 11 #1203 |
Date | Role | Description | Key Achievements |
---|---|---|---|
29 Jun 2023 | PR Reviewer | Detail page detail list #131 | |
29 Jun 2023 | PR Reviewer | Add wrap for username in issues-viewer's card-view #147 | |
29 Jun 2023 | PR Reviewer | Disable milestone filter if there are no milestones #149 | |
29 Jun - 18 Jul | PR Reviewer | Add reset labels feature #150 | Mentored PR author to improve code quality and readability |
30 Jun 2023 | PR Reviewer | Show loading spinner on switch repository #151 | |
28 Oct 2023 | PR Reviewer | Option to Limit Repository Access #215 |
Date | Role | Description | Key Achievements |
---|---|---|---|
Week 2 | PR Reviewer | Uncaught error when invalid link is clicked #1239 | |
Week 4 | PR Reviewer | Default branch to main #1234 | |
Week 4 | PR Author | Angular 12 #1242 | |
Week 5 | PR Reviewer | Faulty list view when back navigating #1243 | |
Week 6 | PR Reviewer | Fix markdown blockquote preview difference #1245 | |
Week 6 | Issue Reporter | Migrate to ESLint #1247 | |
Week 7 | PR Reviewer | Assisted in the creation of new CATcher release | |
Week 7 - 8 | Issue Reporter & PR Author | Address neglected dependencies in CATcher | |
Week 11 | PR Reviewer | Add documentation for CATcher's parser #1240 | |
Week 13 | PR Reviewer | Add login redirect #1256 | |
Reading Week | PR Reviewer & Co-author | Migrate from TSLint to ESLint #1250 | Had to step in to complete the PR |
Reading Week | PR Author | Upgrade to Angular 13 #1249 | |
Reading Week | PR Author | Fix e2e regression caused by changes in AuthService #1275 |
Date | Role | Description | Key Achievements |
---|---|---|---|
Week 4 | PR Reviewer | Show list of hidden users #235 | |
Week 4 | PR Reviewer | Build in Github Actions #239 | |
Week 5 | PR Reviewer | Remove unused session-fix-confirmation component #250 | |
Week 5 | PR Reviewer | Remove unused models #253 | |
Week 6 | PR Reviewer | Upgrade to Angular 11 #252 | |
Recess | PR Reviewer | Fix zone testing import error #269 |
Date | Role | Description | Key Achievements |
---|---|---|---|
Week 13 | PR Author | Fix double window prompt when uploading users #2943 | Fixed a bug |
Week 13 | PR Author | i18n framework #2946 | Laid the groundwork for internationalization in source academy |
EntityManagers do not always immediately execute the underlying SQL statement. For example, when we create and persist a new entity, the createdAt timestamp is not updated in the entity object in our application until we call flush().
This is because calling flush() ensures that all outstanding SQL statements are executed and that the persistence context and the database are synchronized.
Persistent entities are entities known to the persistence provider, Hibernate in this case. An entity (object) can be made persistent by saving it to, or reading it from, a session. Any changes (e.g., calling a setter) made to persistent entities are automatically persisted into the database.
We can stop Hibernate from tracking and automatically updating entities by calling detach(Entity) or evict(Entity). This results in the entity becoming detached. While detached, Hibernate no longer tracks changes made to the entity. To save the changes to the database or make the entity persistent again, we can use merge(Entity).
While using the new SQL database, we often find ourselves needing to refer to a related entity, for example FeedbackSessionLogs.setStudent(studentEntity). This would often require us to query the database for the object and then call the setter, which is inefficient, especially if we already have information such as studentEntity's primary key.
Hibernate provides a getReference() method which returns a proxy to an entity that contains only the primary key; other fields are fetched lazily. When creating the proxy, Hibernate does not query the database. Here is an article that goes through different scenarios using references, showing which operations cause Hibernate to perform a SELECT query and which do not. It also includes some information on cached entities in Hibernate.
It is important to note that, since Hibernate does not check that the entity actually exists in the database when creating the proxy, the proxy might contain a primary key that does not exist in the database. The application should be designed to handle such scenarios when using references. Here is more information on the difference between getReference() and find().
In unit testing, a single component is isolated and tested by replacing its dependencies with stubs/mocks. This allows us to test only the behaviour of the SUT.
Mockito provides multiple methods that help verify the behaviour of the SUT and also determine how the mocked dependencies should behave.
verify(): this method allows us to verify that a method of a mocked class is called. It can be combined with other methods like times(x), which allows us to verify that the method is called exactly x times.
Argument matchers: Mockito provides matchers like anyInt() and anyString(), and allows us to define custom matchers using argThat(). These argument matchers can be used to ensure that the correct arguments are being passed to the other dependencies. This is useful if the method under test does not return a value useful for determining its correctness.
when() and thenReturn(): these methods allow us to define the behaviour of dependencies that are not under test. For example, when(mockLogic.someMethod(args)).thenReturn(value) makes it such that when the SUT invokes someMethod() with args on the mockLogic class, value will be returned by someMethod(args).
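Although Mockito itself is Java-based, the semantics of when/thenReturn and verify can be illustrated with a tiny hand-rolled mock. The sketch below is in TypeScript and is not Mockito's API; it only mirrors the stub-then-verify idea:

```typescript
// Hand-rolled mock illustrating when/thenReturn and verify semantics.
// Not Mockito -- just the same two ideas: stubbed return values and call recording.
class MockMethod<A, R> {
  private stubs = new Map<string, R>();
  private calls: A[] = [];

  // Like when(mock.method(args)).thenReturn(value).
  whenCalledWith(args: A, thenReturn: R): void {
    this.stubs.set(JSON.stringify(args), thenReturn);
  }

  // The SUT would call this instead of the real dependency.
  invoke(args: A): R | undefined {
    this.calls.push(args);
    return this.stubs.get(JSON.stringify(args));
  }

  // Like verify(mock, times(n)).method(args): count matching invocations.
  verifyTimes(args: A): number {
    const key = JSON.stringify(args);
    return this.calls.filter((c) => JSON.stringify(c) === key).length;
  }
}

const someMethod = new MockMethod<[string], number>();
someMethod.whenCalledWith(['session-1'], 42);
const result = someMethod.invoke(['session-1']); // stubbed value: 42
```

Mockito does the same recording and stubbing automatically via dynamic proxies, which is why verify() can be checked after the SUT has run.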
Learnt how the different features provided by GCP and other third parties come together to make TEAMMATES possible.
Most of the information is from the Platform Guide in the teammates-ops repo.
GetCourseJoinStatusAction and PutDataBundleDocumentsAction
FeedbackSessionsDb and FeedbackQuestionsDb
As part of the v9-migration, I had to familiarise myself with the Hibernate ORM. This was my first time using Hibernate, as I was only familiar with the Eloquent ORM from Laravel and the ORM from Django. ORMs are extremely beneficial as they translate between the data representations used in the application and those in the database. They also make code more readable by simplifying complex queries, and they make transitioning between database engines seamless should the need arise.
Aspects Learnt:
Using persist and merge to insert or update an entity respectively
Resources
TEAMMATES uses Solr for full-text search, with the search functionality structured to work with both the Datastore and SQL databases.
Aspects Learnt:
Resources:
Having only used SQLite and MySQL in the past, I had to familiarise myself with PostgreSQL, as it is the SQL database used in TEAMMATES.
Aspects Learnt:
Resources:
Having had no experience utilising Angular prior to working on TEAMMATES, I was introduced to several neat features that Angular has to offer.
Aspects Learnt:
Angular's component-based architecture makes it easy to build and maintain large applications. Each component is encapsulated with its own functionality and is responsible for a specific UI element. This modularity allowed me to quickly understand and contribute to the project, as I could focus on individual components without being overwhelmed by the entire codebase.
Angular's dependency injection system is a design pattern in which a class receives its dependencies from external sources rather than creating them itself. This approach simplifies the development of large applications by making it easier to manage and test components.
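Constructor injection, the pattern Angular's injector automates, can be sketched in plain TypeScript (a simplified illustration with made-up class names; in Angular the framework would construct and supply the dependency via @Injectable and providers):

```typescript
// The dependency is described by an interface, not a concrete class.
interface Logger {
  log(message: string): void;
}

class ConsoleLogger implements Logger {
  log(message: string): void {
    console.log(message);
  }
}

// The class receives its dependency from outside instead of constructing it,
// so tests can inject a fake Logger without touching this class.
class CourseService {
  constructor(private logger: Logger) {}

  addCourse(name: string): string {
    this.logger.log(`added ${name}`);
    return name;
  }
}

const service = new CourseService(new ConsoleLogger());
```

Swapping ConsoleLogger for a recording fake in tests is what makes injected components easy to test in isolation.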
Angular offers the trackBy function, which I used in conjunction with the *ngFor directive to manage lists more efficiently. Normally, *ngFor can be inefficient because it re-renders the entire list when the data changes. However, by implementing trackBy, Angular can track each item's identity and only re-render items that have actually changed. This reduces the performance cost, especially in large lists where only a few items change.
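A trackBy function is just a pure function from index and item to a stable identity (a minimal sketch; the component and template wiring are omitted, and the Student shape is illustrative):

```typescript
interface Student {
  id: number;
  name: string;
}

// Used in a template as: *ngFor="let s of students; trackBy: trackById"
// Angular re-renders a row only when the identity returned here changes,
// rather than whenever the array reference changes.
function trackById(index: number, student: Student): number {
  return student.id;
}
```

Returning a stable key like a database id (rather than the object reference or the index) is what lets Angular match old and new list items across change detection cycles.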
When deploying the staging environment for the ARF upgrade, I worked with and gained familiarity with the deployment workflow, as well as several GCP tools and the gcloud SDK.
Aspects Learnt
Resources:
Snapshot testing with Jest is an effective strategy to ensure that user interfaces remain consistent despite code changes. It's important for developers to maintain updated snapshots and commit these changes as part of their regular development process.
Snapshot tests are particularly useful for detecting unexpected changes in the UI. By capturing the "snapshot" of an output, developers can compare the current component render against a stored version. If changes occur that aren't captured in a new snapshot, the test will fail, signaling the need for a review.
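The core idea behind snapshot testing can be sketched without Jest (a simplified illustration; Jest serializes the rendered component, stores snapshots in files, and updates them via `jest --updateSnapshot`):

```typescript
// Minimal illustration of the snapshot idea: serialize the output once,
// then fail whenever a later render no longer matches the stored snapshot.
function render(user: { name: string }): string {
  return `<span class="badge">${user.name}</span>`;
}

// The "stored" snapshot, captured on the first run and committed to the repo.
const storedSnapshot = '<span class="badge">Alice</span>';

function matchesSnapshot(current: string, stored: string): boolean {
  return current === stored;
}

const ok = matchesSnapshot(render({ name: 'Alice' }), storedSnapshot); // true
```

If `render` is later changed (say, the class name is edited), the comparison fails, forcing the developer to either fix the regression or deliberately update the snapshot.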
Mockito is a popular Java-based framework used primarily for unit testing. It allows developers to isolate the units of code they are testing, to focus solely on the component of software that is being tested.
Mockito allows developers to create mock implementations of dependencies for a particular class. This way, developers can isolate the behavior of the class itself without needing the actual dependencies to be active. By using mock objects instead of real ones, tests can be simplified as they don’t have to cater to the complexities of actual dependencies, such as database connections or external services. Mockito also provides tools to verify that certain behaviors happened during the test. For example, it can verify that a method was called with certain parameters or a certain number of times.
Resources:
E2E Testing allows us to ensure that the application functions as expected from the perspective of the user. This type of testing simulates real user scenarios to validate the complete functionality of the application. Common tools for conducting E2E testing include Selenium, Playwright, and Cypress.
Throughout the semester, I migrated several E2E tests and created some new ones as part of the ARF project, which exposed me to the Page Object Model. This pattern makes tests easier to write and maintain, and enhances code reusability, since the same page object can be reused across related test cases.
E2E tests may be the most complicated type of test to write, as they involve multiple components of the application, testing it as a whole rather than in isolated components. As such, pinpointing the source of errors or failures can be difficult. E2E tests can also be flaky, passing in one run and failing in others, due to reasons such as timing issues, concurrency problems, or subtle bugs that occur only under specific circumstances. However, E2E testing is still highly useful, as it identifies issues in the interaction between integrated components and simulates real user scenarios.
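The Page Object Model mentioned above can be sketched as follows (a simplified TypeScript sketch with a made-up page-driver interface; a real test would wrap a Selenium or Playwright page object instead):

```typescript
// Minimal stand-in for a browser page driver (Selenium/Playwright in practice).
interface PageDriver {
  fill(selector: string, value: string): void;
  click(selector: string): void;
}

// Page object: the selectors and interactions for one page live in one place,
// so tests read at the level of user intent, and a selector change is a one-spot fix.
class LoginPage {
  constructor(private page: PageDriver) {}

  loginAs(email: string, password: string): void {
    this.page.fill('#email', email);
    this.page.fill('#password', password);
    this.page.click('#login-button');
  }
}

// A recording fake lets us exercise the page object without a browser.
const actions: string[] = [];
const fakeDriver: PageDriver = {
  fill: (sel, value) => actions.push(`fill ${sel}=${value}`),
  click: (sel) => actions.push(`click ${sel}`),
};
new LoginPage(fakeDriver).loginAs('a@example.com', 'pw');
```

Tests then call `loginPage.loginAs(...)` rather than repeating selector-level steps, which is what makes the same page object reusable across related test cases.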
Resources:
EnrollStudentsAction
, SearchAccountRequestsAction
, AccountRequestSearchIndexingWorkerAction
CoursesLogic
GetFeedbackSessionSubmittedGiverSetAction
and CoursesDb
GetSessionResponseStatsActionIT
AdminNotificationsE2ETest
and AdminSearchPageE2ETest
Gradle is a very flexible build automation tool used for everything from testing and formatting to builds and deployments. Unlike other build automation tools like Maven, where build scripts are written in XML (a widely disliked feature), Gradle build scripts are written in a domain-specific language based on Groovy or Kotlin, both JVM-based languages. This means Gradle can interact seamlessly with Java libraries.
Gradle is also much more performant than alternatives like Maven, thanks to features such as incremental builds, build caching, and the long-lived Gradle daemon.
RepoSense recently added hot reload for the frontend as a Gradle task, which made frontend development a lot more productive. Unfortunately, the feature wasn't available on Linux, because the package we were using (Apache Ant's condition package) could not specifically detect it. Migrating to Gradle's own platform package, recently taken out of incubation, allowed us to support all three prominent operating systems.
References:
Like Gradle, GitHub Actions helps automate workflows like CI/CD and project management, and can be triggered by a variety of events (pull requests, issues, releases, forks, etc.). It also has a growing library of reusable actions that make workflows a lot easier to set up. I was surprised that there is some nice tooling support for GitHub Actions in IntelliJ.
GitHub Actions allows users to run CI on a variety of operating systems, such as Ubuntu XX.04, macOS and Windows Server (which is virtually the same as Windows 10/11 but with better hardware support and more stringent security features).
GitHub also provides a variety of APIs to interact with these objects. One quirk I came across: posting a single comment on a pull request needs to go through the issues endpoint instead of the pulls endpoint (the pulls comments endpoint requires a code reference). This doesn't cause problems since issues and pulls never have identical IDs.
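A tiny helper makes the quirk concrete: a general (non-review) comment on PR number N is posted to the issues endpoint (the owner/repo values below are placeholders):

```javascript
// General PR comments go through /issues/, not /pulls/ - the
// /pulls/ comments endpoint is for review comments anchored to code.
function generalPrCommentUrl(owner, repo, prNumber) {
  return `https://api.github.com/repos/${owner}/${repo}/issues/${prNumber}/comments`;
}

generalPrCommentUrl("octocat", "hello-world", 42);
// → "https://api.github.com/repos/octocat/hello-world/issues/42/comments"
```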
The GitHub deployments API also returns deployment information in pages, which is a sensible thing to do but can cause slight inconvenience when long-running PRs have more deployments than fit in a page.
Actions and APIs also have some great documentation:
Git exploded in popularity in large part due to Git hosting providers like GitHub. GitLab and Bitbucket are also commonly used Git hosts. RepoSense has thus far largely supported only GitHub, but there is a clear incentive to support other commonly used remotes. This is made a little challenging by differences in conventions between the sites:
| Host | base_url | Commit View | Blame View |
|---|---|---|---|
| GitHub | github.com | {base_url}/{org}/{repo_name}/commit/{commit_hash} | {base_url}/{org}/{repo_name}/blame/{branch}/{file_path} |
| GitLab | gitlab.com | {base_url}/{org}/{repo_name}/-/commit/{commit_hash} | {base_url}/{org}/{repo_name}/-/blame/{branch}/{file_path} |
| Bitbucket | bitbucket.org | {base_url}/{org}/{repo_name}/commits/{commit_hash} | {base_url}/{org}/{repo_name}/annotate/{branch}/{file_path} |
For example, Bitbucket uses the term 'annotate' instead of 'blame' because the word 'blame' is insufficiently positive.
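One way to encode the commit-view conventions from the table above is a small per-host map of URL builders (a hedged sketch; the names are illustrative, not RepoSense's actual code):

```javascript
// Commit-view URL patterns per host, following the table above.
const COMMIT_VIEW = {
  github:    (org, repo, hash) => `https://github.com/${org}/${repo}/commit/${hash}`,
  gitlab:    (org, repo, hash) => `https://gitlab.com/${org}/${repo}/-/commit/${hash}`,
  bitbucket: (org, repo, hash) => `https://bitbucket.org/${org}/${repo}/commits/${hash}`,
};

COMMIT_VIEW.gitlab("my-org", "my-repo", "abc123");
// → "https://gitlab.com/my-org/my-repo/-/commit/abc123"
```

Keeping the differences in one lookup table means adding a new host is a one-line change rather than a scattering of conditionals.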
In investigating the output of `git remote -v`, I noticed there were two entries (fetch and push) for each remote name, which initially confused me. The point of separating fetch and push remotes is to support triangular workflows.

We are probably familiar with the common workflow for updating a branch on a forked repo: first pull updates from the upstream master, then make changes and push to our own fork. This requires remembering to fetch from and push to separate repos. With triangular workflows, fetch and push can apply to different repos under the same remote name, which makes this process much more convenient.
Cypress is a frontend testing tool for applications that run in the browser, with tests that are easy to read and write. It uses browser automation (similar to Selenium) and comes with a browser and relevant dependencies out of the box, so it's very easy to set up. Cypress also provides a dashboard for convenient monitoring of test runs.
https://docs.cypress.io/guides/overview/why-cypress#In-a-nutshell
Bash scripts can be run in a GitHub Actions workflow, which greatly expands the scope of things you can do with actions. Bash is quite expressive (I hadn't realised just how much it could do). Some cool things I learned you could do:

- `"$*"` expands to the parameter values joined by the first character of `IFS` (the Internal Field Separator).
- You can invoke `python3` with the `-c` flag to perform more complex processing with a single-line Python program.
- You can redirect stdout and stderr to separate files (e.g. `ls 1> out 2> err`).

Being relatively new to frontend tools, I found Vue.js to be quite interesting. Vue allows code reusability and abstraction through components. While I didn't work extensively on the frontend, what I learned from the bits that I did work on was quite cool:
Vue state: I found it interesting that you could have computed properties that are accessed the same way as properties, but can depend on other properties and can dynamically update when these properties change. This is often more elegant than using a Vue watcher to update a field. You can even have computed setters that update dependent properties when set. A watcher, however, can be more appropriate when responses to changing data are expensive or need to be done asynchronously.
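The computed getter/setter idea can be mimicked in plain JavaScript with property accessors (only an analogy; Vue's `computed()` adds caching and reactive dependency tracking on top):

```javascript
// Accessor-based "computed" property: reads derive from other fields,
// and writes update the fields they depend on (like a Vue computed setter).
function withFullName(state) {
  return Object.defineProperty(state, "fullName", {
    get() { return `${state.first} ${state.last}`; },
    set(v) { [state.first, state.last] = v.split(" "); },
  });
}

const user = withFullName({ first: "Ada", last: "Lovelace" });
user.fullName;                  // "Ada Lovelace" - derived on read
user.fullName = "Grace Hopper"; // setter updates the underlying fields
user.first;                     // "Grace"
```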
Vue custom directives: directives are a way to reuse lower-level DOM logic. A directive can define Vue life-cycle hooks and later be bound to components (it can actually be any JavaScript object literal). For implementing lazy loading, I needed to use the vue-observe-visibility directive (an external library) with slight modifications to its hooks to be compatible with Vue 3.
References:
Pug is a templating language that compiles to HTML. It is less verbose and much more maintainable than HTML, and also allows basic presentation logic with conditionals, loops and case statements.
There are a lot of these, and most remain mere quirks, but some result in actual unintended bugs in production (often in edge cases). It was interesting to see this in our contribution bar logic. A technique sometimes used to extract the integer part of a number is `parseInt` (it's even suggested in a Stack Overflow answer), and it turns out we were using it to calculate the number of contribution bars to display for a user. This works for most values, but breaks when numbers become very large or very small (less than about 10^-6), because `parseInt` first coerces its argument to a string, which switches to exponential notation at those magnitudes. In this unlikely situation, we'd display anywhere from 1 to 9 extra bars (moral: use `Math.floor` instead!).
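The failure mode is easy to reproduce in any JavaScript console:

```javascript
// parseInt coerces its argument to a string first; very small numbers
// stringify in exponential notation, so the exponent is silently dropped.
const tiny = 5e-8;
String(tiny);       // "5e-8"
parseInt(tiny);     // 5   - a wildly wrong "integer part"
Math.floor(tiny);   // 0   - the intended result

// The same happens at the large end: 1e21 stringifies as "1e+21".
parseInt(1e21);     // 1
Math.floor(1e21);   // 1e+21
```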
An investigation into string representations in browsers led me down a rabbit hole of JavaScript runtimes and engines, and ultimately improved my understanding of JavaScript in general. Different browsers have different JS engines: Chrome uses V8, Firefox uses SpiderMonkey (the original JS engine, written by Brendan Eich), Edge used to use Chakra but now also uses V8, Safari uses JavaScriptCore (the engine inside WebKit), etc. Engines often differ significantly in terms of the pipeline for code execution, garbage collection, and more.
The V8 engine, as an example, first parses JavaScript into an Abstract Syntax Tree (AST), which the Ignition interpreter compiles into bytecode and executes. Code that is revisited often during interpretation is marked "hot" and compiled further into highly efficient machine code by the TurboFan optimising compiler. This technique of optimising compilation based on run-time profiling, Just-In-Time (JIT) compilation, is also used in other engines like SpiderMonkey and the JVM.
The engine runs whatever is on the browser's call stack. JavaScript runs in a single thread, and asynchronous work is done through callbacks queued in a task queue. The main script runs first, with things like promises and timeouts inserting tasks into the queue. Tasks wait for the stack to be empty before executing; microtasks are more urgent, lower-overhead tasks that run as soon as the stack empties, even between tasks. Page re-renders are also blocked by code running on the stack, and long delays between re-renders are undesirable. Using callbacks, and hence not monopolising the stack, allows re-rendering to happen more regularly, improving responsiveness. The precise behaviour of task (and microtask) de-queueing can actually differ between browsers, which causes considerable headache.
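The queueing order described above is directly observable: synchronous code finishes first, then queued microtasks run, then tasks from the task queue:

```javascript
// Demonstrates sync → microtask → task ordering in the event loop.
const order = [];

setTimeout(() => order.push("task (setTimeout)"), 0);          // task queue
Promise.resolve().then(() => order.push("microtask (then)"));  // microtask queue
order.push("sync");                                            // runs immediately

setTimeout(() => {
  console.log(order);
  // ["sync", "microtask (then)", "task (setTimeout)"]
}, 10);
```

Even though the `setTimeout` delay is 0, its callback still runs after the promise's `.then`, because microtasks drain before the next task is dequeued.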
References:
Discussions over PRs, issues and generally attempting to solve issues, were a great way to explore design considerations. Here is a non-exhaustive list of interesting points that came up this semester:
In-house vs External Library
In implementing new functionality or extending existing functionality (Git interface for example), there is usually a question of whether it would be easier to maintain features in-house, or use external libraries. It might be a good idea to maintain core functionality in-house since we'd want more fine-grained control over these features and new features can be added/fixed quickly as needed. At the same time, external libraries save time and cost of learning about and solving possibly complex problems.
External libraries can however introduce vulnerabilities (several incidents of dependency sabotage with npm packages like colors.js and node-ipc hit fairly close to home over the course of the semester). Hence, selecting libraries should be a well-deliberated process, including considerations like how active the project is and the diversity of its maintainers.
Recency vs Ubiquity
In maintaining versions of dependencies, it is often important to weigh upgrading to a new version to get the newest features against possibly alienating users who don't already have that version. Neither is necessarily better than the other; the right choice likely depends on the nature of the product. A new product for developers would probably have users who want the bleeding edge of features. On the other hand, products that already have a large user base and are aimed at less technical users might favour ubiquitous versions. Since RepoSense is aimed at users of all skill levels, including novice developers, we often default to the latter approach.
In a similar vein, it might be important to make sure that new features don't break backward compatibility, so that the end-user won't face significant hindrances when upgrading. At the same time, the need to be backward compatible can be a root of evil, introducing all manner of hacks and fixes. This highlights the importance of foresight in the early stages of development. Also, deciding when to drop backward compatibility with a significant version bump can be a non-trivial decision; doing so should come with thorough migration documentation (sparse documentation for the Vue 2 -> Vue 3 migration caused a lot of developer grievances).
Isolated Testing
While it's fairly obvious that modularity in tests is important and that components should be tested in isolation with unchanging inputs, it is easy to let lapses slip in, in the form of hidden dependencies that prevent components from being isolated, or inputs that are not actually static. Some of these issues came up over the course of the semester, and it struck me just how easy it was for them to go unnoticed. For the most part, there aren't language-level features that enforce coupling rules, since many of these dependencies are quite implicit.
This had me thinking about the importance of being explicit in crucial sections of code, as described below.
Being Explicit
It is important that programmers make the behaviour of crucial sections of code as explicit as possible. One way of doing this is through good naming of methods and variables, and grouping statements semantically into methods or classes. Large chunks of code are detrimental and allow implicit slips in behaviour that can go unnoticed. We might even want to create small, special-purpose classes to make it clear that an important subroutine or behaviour deserves its own abstraction.
At the same time, high reliance on object orientation can lead to too many classes, each class doing trivial things and with high coupling between the classes leading to spaghetti logic that doesn't do very much to alleviate implicit behaviour. There exists a delicate middle ground characterised by semantically well partitioned code.
Behavioural Consistency
The earlier section on JavaScript quirks was the result of overly accommodating feature integration during the early stages of development. It has become a cautionary tale of sorts about the importance of consistency and predictability in behaviour. In adding new features, it was personally very tempting to allow small inconsistencies in behaviour in favour of simplicity of implementation. While simplicity is a desirable outcome, I'd argue that consistency is more important (small inconsistencies can run away into larger, un-fixable differences).
Consistency can be with respect to various things. For example, we might want that identical inputs behave the same under similar conditions (differing in non-semantic respects) or that similar inputs (differing in non-semantic respects) behave the same under the identical conditions, etc.
`git bisect` is a very nice way to find problematic commits. Given a bad commit and a previously good commit, `git bisect` does a binary search (either automatically with a test, or with manual intervention) to find the commit where the issue was introduced.

GitHub search supports the `involves:USER` filter to find all items that involve a user. This was very useful for updating progress.md
. More features here.

Pandoc is a Haskell library and command-line tool for converting between various document formats. It is a powerful tool that can convert between many different formats, including Markdown, LaTeX, HTML, and many others. It is also extensible, allowing for the creation of custom readers and writers for new formats. Pandoc has 31.8k stars on GitHub and is used by many people and organizations for its powerful and flexible document conversion capabilities.
The Haskell tooling ecosystem (GHC, Cabal, Stack, the Haskell LSP) makes writing Haskell quite enjoyable. In particular, Haskell's type inference and parametric polymorphism make it easy to understand and modify code in a general and well-abstracted way. The language also enables strong editor tooling, which helps make development a smooth experience.
An open source platform providing free resources to learn coding.
Merged fix(curriculum): update instructions for step 110 for rpg project #53564
Awaiting Review fix(client): Add live image URL validation for portfolio images #53617
Learnt how we can use the Image() HTML object to verify whether an image URL is live.
Learnt that earlier test cases can affect later ones, so I should run all test cases in order to check for problems with loading and saving state.
Learnt to check that I haven't forgotten logic for loading saved state (e.g. adding a portfolio section in user settings, then loading that portfolio section).
Learnt to use VS Code to access code in WSL, and to git clone the repository inside WSL rather than on Windows.
FreeCodeCamp has a live Discord server and forums with active and dedicated contributors.
Setting up was difficult, and while the instructions could be more clearly separated for Windows and Mac users, it was good that they were detailed.
As with all open source projects, getting help or code reviews can take time. I was fortunate that my first PR was an easy fix and was reviewed within 15 minutes, but my second PR is still awaiting review. Nonetheless, the contributors are helpful and pointed out the cause of my CI/CD issues.
Week | Achievements |
---|---|
1 | Submitted Issue: Tasks To Self-Test Knowledge Unhide Activity Dashboard #221 |
1 | Submitted Issue: Bug when not entering anything into Select repo dialog #224 |
1 | Submitted Issue: Show a list of hidden users at the end #225 |
1 | Submitted Issue: Error toast shown on selecting p.Low or priority.Low label on WATcher repository #227 |
2 | Submitted Issue: Move Activity Dashboard from prototype to release #232 |
4 | Commented on Issue: Bypass logging in if viewing public repos only #236 |
4 | Commented on Issue: Hiding labels do not work as expected #240 |
Week | Achievements |
---|---|
1 | Reviewed PR: Hide 0 issue columns #223 |
1-4 | Reviewed PR: Keep filters option when switching repos #226 |
2 | Reviewed PR: Prevent redirection when repo not set #228 |
2-4 | Reviewed PR: Fix label filter not working #230 |
4 | Reviewed PR: Improve activity dashboard design #233 |
4 | Commented on PR: Refactor test cases (In progress) #234 |
4 | Reviewed PR: Show list of hidden users #235 |
4 | Reviewed PR: Remove unused services #238 |
5 | Commented on PR: Refactor Label model #254 |
6 | Reviewed PR: Add shareable repo-specific URL #255 |
6 | Reviewed PR: Refactor certain filters into its own service #259 |
7 | Reviewed PR: Remove test cases for permissions service #260 |
7 | Reviewed PR: Automatic deployment #272 |
7 | Reviewed PR: Enable pre push hook for npm run test #288 |
8 | Reviewed PR: Refactor milestones to save by name #289 |
Week | Achievements |
---|---|
4 | Contributed PR: Build in GitHub Actions #239 |
Week | Achievements |
---|---|
1 | Commented on PR: Redirect invalid routes to 404 not found page #1238 |
One of the largest takeaways from working with MarkBind in the last semester has been Vue.js, an open-source front-end framework that MarkBind uses to build its UI components. Having previously known only React.js, Vue.js is a handy addition to my arsenal. The basics of Vue.js were rather simple to pick up. By reading the Vue.js documentation and referencing examples of Vue components already implemented in MarkBind, I quickly understood the use of `<template>`, `<style>` and `<script>`. Through MarkBind's Developer Guide, I learnt how to easily create different kinds of Vue components and implement them in MarkBind.
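For reference, the three blocks fit together in a single-file component roughly like this (a minimal sketch; the component name and contents are illustrative, not MarkBind's actual code):

```vue
<!-- Minimal sketch of a Vue single-file component (illustrative only) -->
<template>
  <nav class="breadcrumb"><slot></slot></nav>
</template>

<script>
export default {
  name: 'Breadcrumb',
};
</script>

<style scoped>
.breadcrumb { font-size: 0.9rem; }
</style>
```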
As I implemented my first Vue component, Add autogenerated breadcrumbs component #2193, I delved deeper into Vue, exploring the use of `data()` to manage the internal state of Vue components and the `methods` option to define methods used within the component. I also learnt more about Vue lifecycle hooks; I used the `mounted` hook to allow the Breadcrumb component to query the SiteNav to figure out the hierarchy of the current page.
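These pieces of the Options API can be sketched as follows (a simplified, hypothetical component, not MarkBind's actual code):

```javascript
// Hedged sketch of a Vue Options API component, illustrating data()
// for internal state, methods for component logic, and the mounted
// lifecycle hook. All names here are illustrative.
const Breadcrumb = {
  data() {
    // Each component instance gets its own copy of this state.
    return { crumbs: [] };
  },
  methods: {
    addCrumb(title) {
      this.crumbs.push(title);
    },
  },
  mounted() {
    // Runs after the component is inserted into the DOM -- a natural
    // place to inspect surrounding elements (e.g. a site nav) and
    // derive the current page's hierarchy from them.
    this.addCrumb('Home');
  },
};
```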
As I continued working on improving MarkBind's frontend, I learnt more about Vue's `<transition>` component, in particular its transition hooks. While working on Fix Quiz expanding between questions #2184, I came to realize how useful these hooks were for creating seamless transitions in different situations. I relied heavily on the Vue.js documentation and Stack Overflow posts while researching Vue's transition hooks.
When implementing the new Breadcrumb and Collapse/Expand All Buttons components, I had to use `Document.querySelector()` and related methods extensively. I was new to these and had to research how they work, what happens when no matching element is found, and how to handle edge cases. Practicing these while implementing the two components made me more proficient, and as a side effect I gained a deeper understanding of how the DOM works.
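The missing-element edge case can be sketched like this (illustrative selectors and function name; `querySelector()` returns `null` when nothing matches, so guard before chaining further calls):

```javascript
// Defensive DOM querying: handle the case where no element matches.
function getNavLinkTexts(root) {
  const nav = root.querySelector('.site-nav'); // may be null
  if (!nav) {
    return []; // handle the missing-element edge case explicitly
  }
  return Array.from(nav.querySelectorAll('a'), (a) => a.textContent);
}
```

A nice side effect of taking `root` as a parameter is that the function works against any object exposing the `querySelector` interface, which also makes it easy to exercise without a browser.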
Resources:
Jest and Vue Test Utils were new to me coming into MarkBind. MarkBind uses Jest together with Vue Test Utils for its snapshot tests, which compare Vue components against their expected snapshots. As I updated and implemented Vue components, I had to update and create the relevant test suites to ensure the components worked as expected. I explored mounting components and attaching them to a document so that other components could interact with them.
Resources:
As MarkBind is undergoing a migration to TypeScript, I put in some time to learn basic TypeScript. This was important as, mid-way through the semester, many of the files were being migrated. It has also helped me review PRs that deal with the TypeScript migration and PRs that affect the TypeScript files in MarkBind.
Resources:
When updating the looks of old components and creating new ones, I did some research into what makes a website visually pleasing. My most interesting finds were about the use of golden ratios in design and choosing complementary colours with tools such as Canva's Colour Wheel. I also learnt the different meanings of different icons through exploration and the discussions on Update Breadcrumb icons #2265 and the Add CollapseExpandButtons component.
I also internalized how to create transitions and effects that fit the theme of the project; MarkBind has a more minimal theme. I did this while updating component designs in Tweak sitenav design #2204 and Update Question/Quiz component design #2131.
As I progressed to managing the project, I started reviewing and merging PRs. Initially, while reviewing smaller PRs, I had little trouble understanding the code and seeing where it could be improved. However, as I reviewed more complex PRs, I began having difficulty understanding the changes quickly. I came across the Rubber Duck Debugging method, a way to understand code more simply. Using it helped me work through the code line by line and handle complex changes more manageably, understanding them better.
As I worked on bump nunjucks to 3.2.4 #2411, I was initially not confident about what to look out for when upgrading dependencies. After working on it, I understood how to look out for breaking changes and to find out how the project uses the dependency, in order to upgrade it confidently without breaking things.
I gained a more in-depth understanding of GitHub Actions while working on Add install setuptools to ci #2530, utilizing conditional runs for the macOS platform, which required a `brew install` to get the CI to run properly; that step would throw errors on other platforms that do not use Homebrew.
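A conditional step of this kind can be sketched roughly as follows in a workflow file (the step name and package are illustrative, not the actual MarkBind CI configuration):

```yaml
# Run a Homebrew-based install only on the macOS runner,
# so the step is skipped on platforms without Homebrew.
- name: Install setuptools (macOS only)
  if: runner.os == 'macOS'
  run: brew install python-setuptools
```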
While researching how to improve code cleanliness in my projects, I found that Husky could be used not only to maintain code cleanliness but also for things like running tests. Husky has become a mainstay in all my JS projects together with ESLint, Prettier and lint-staged. I spent some time understanding how Husky changed the way it should be used, deprecating configuration within `package.json` in favour of the `.husky` folder.
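For instance, a hook now lives as a small script file in the `.husky` folder rather than in `package.json` (the commands below are illustrative):

```shell
# .husky/pre-commit -- executed by Git before each commit
npx lint-staged   # lint only the staged files
npm test          # optionally run the test suite as well
```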
As I researched AWS SageMaker for my lightning talk and used it during my internship, I came to understand more about AWS SageMaker and its benefits for hosting AI models in the cloud. AWS SageMaker is beneficial for smaller players, or for applications whose usage comes in sporadic bursts, as it reduces the upfront cost of expensive AI infrastructure. SageMaker also offers many services that help simplify the development and deployment of AI models.
As I researched micro-frontends for my internship, I gained a deeper understanding of them. Micro-frontends are to the front-end what micro-services are to the backend. They allow the front-end to be split up, which brings many benefits: from allowing teams to manage their own vertical stack by owning their own micro-frontend, to reducing bundle size, micro-frontends are beneficial to large teams. I also worked on a POC of migrating parts of the application using Webpack 5 Module Federation for Next.js, which allowed me to fully appreciate it.
CS3281: Overall, I believe that because I was the least experienced (or at least I felt I was), I was also able to learn a whole lot from this module, especially front-end-wise.
CS3282: I still feel like I have much more to learn, but at least I do feel a bit more experienced and confident that I know what I am doing, at least somewhat.
While I used Angular to make a PR for TEAMMATES before the semester started, I still had a lot more to learn about it, such as front-end unit testing (especially since that initial PR had no tests at the time), which I learnt when I eventually made that PR in the real TEAMMATES repo. Due to the bindings, I had to pay especially close attention to the component testing scenarios of a component with inputs and outputs, and of a component inside a test host.
However, that was mostly component and snapshot testing. To also learn how to test services, I did testing for the feedback responses service, and I found that testing services is largely similar to, yet much simpler than, testing components.
Beyond testing, I also learned how to create services themselves in this onboarding task commit where I created the service to get feedback response statistics from the backend. I also learned how to integrate this service with the actual page component in order to actually obtain statistics to display using RxJS.
As for components and their templates, I learned more about how to use Angular's HTML templates to direct inputs to and outputs from a component, through property binding and event binding respectively. I also learned how the custom structural directive `tmIsLoading` worked in this PR while debugging: I initially caused the loading spinner to always display when I was in fact trying to display something else (it turned out I had reused the boolean variable that controls the spinner, so don't be like me; check the usages of any variable you reuse). I also learned how to use `<ng-container>` and `<ng-template>` in that same PR, particularly with structural directives like `ngIf`.
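The pattern can be sketched roughly like this (the component and variable names are illustrative, not the actual TEAMMATES code):

```html
<!-- ng-container groups elements without adding an extra DOM node;
     ng-template holds content that is only rendered when selected,
     here via ngIf's else branch. -->
<ng-container *ngIf="isLoading; else loaded">
  <app-spinner></app-spinner>
</ng-container>
<ng-template #loaded>
  <p>{{ statistics }}</p>
</ng-template>
```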
Resources:
To integrate Angular services that make asynchronous requests with components, I had to learn about Observables and Subscriptions from RxJS. I also had to learn other parts of RxJS, like the `pipe` and `first` operators, for the component testing mentioned earlier, because `EventEmitter` objects used for event binding function like RxJS `Observable` objects.
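The idea behind the `first` operator can be sketched in plain JavaScript (a simplification, not RxJS's actual implementation; the names are illustrative):

```javascript
// React only to the first value a source emits, then stop.
// `source` is a function that calls its callback once per emission.
function first(source) {
  return (observer) => {
    let done = false;
    source((value) => {
      if (!done) {
        done = true;
        observer(value); // deliver only the first emission
      }
    });
  };
}
```

In a test, this is why piping an `EventEmitter`-like stream through `first` lets you assert on a single emitted value without worrying about later emissions.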
Resources:
While I have taken some online web development courses in my free time, I had never touched web development in a real project, only backend and mobile application development. Thus, doing some front-end work benefitted me a lot. For example, I was able to put my largely untested (and, back then, slowly fading) knowledge of HTML and Bootstrap to use, such as in my onboarding task commits, where I (re-)learned how to align everything nicely using the Bootstrap grid system (sorry if this is really basic), and in TEAMMATES PR #11628. After doing the front-end parts of the onboarding task, I decided to go into the back-end for the deadline extensions feature so that I could learn TEAMMATES front to back; perhaps I should have stayed in the front-end for that feature too to learn more. Still, virtually all my non-deadline-extensions PRs were front-end related, so maybe I was still able to learn as much as I could about the front-end.
Resources:
I learned how to use these to do front-end unit testing in Angular as previously mentioned, particularly things like `expect` to check that values are as expected, `spyOn` to mock services, `beforeEach` for common test setup code, and related attributes/functions (`toBeTruthy()`, etc.).
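While Jest provides spies out of the box, the core idea behind `spyOn` can be sketched in plain JavaScript (a simplification, not Jest's implementation; names are illustrative):

```javascript
// Replace a method with a wrapper that records calls and optionally
// returns a fake result, and allow restoring the original afterwards.
function createSpy(obj, method, fakeImpl) {
  const original = obj[method];
  const calls = [];
  obj[method] = (...args) => {
    calls.push(args); // record each call's arguments
    return fakeImpl ? fakeImpl(...args) : original.apply(obj, args);
  };
  return { calls, restore: () => { obj[method] = original; } };
}
```

A test can then assert on `calls` much the way `expect(spy).toHaveBeenCalledWith(...)` would, which is what makes spies useful for mocking services.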
Also, I learned about snapshot testing. I initially had no idea this existed before (sorry if this is basic), and yet it seems to be pretty widely used (?) so learning of its existence seemed important.
Resources:
I learned how to use D3 to display charts. I used this to create the feedback responses statistics chart.
Resources:
I was looking into the issue Instructor: Edit rubric question: reorder options using drag and drop #8933; I initially wanted to submit a PR before my exams started, but unfortunately had no time to do so. Regardless, I was able to look into how I could do it after my exams, when I have time.
I looked through the code base to see how drag and drop is implemented in other question types, such as multiple choice questions, and found out that we use the CDK Drag and Drop module from Angular Material. Angular Material allows Material Design components to be added to Angular. From what I understand, Material Design provides a sort of library or system of customizable front-end components offering pre-made UI functionality. I have actually used it previously in my own Android side projects, though this is my first time using the drag and drop component (or similar), because it is currently not available on Android. I have also never used Material Design within Angular before.
The nice thing about Angular Material is that it hides all the underlying code away; all that is minimally necessary is to add the `cdkDrag` Angular directive. Unfortunately, from what I see, the drag and drop functionality provided by Angular Material does not work very well for table columns, which are the main focus of the issue. In general, tables seem not to be well supported by Angular Material drag and drop, judging by how tables are missing from the official documentation. Fortunately, there are workarounds, like this post from Stack Overflow and its linked StackBlitz project, or this blog post. However, these solutions do not produce satisfactory results, at least to me. When columns are dragged along rows, the animations and "previews" do not show up for the rest of the rows, only for the row that was clicked on (such as the header). On the other hand, it does work well for dragging rows along columns. I suspect this has to do with how tables work in HTML: a column is essentially not a single element but is split into multiple table cell elements, unlike a table row, which is a single row element. This means Angular Material drag and drop works well with rows, adding animations/previews, but not with columns. I believe that to enable this for table columns, it may be necessary to implement it from scratch after all, manually checking the location of the mouse and changing the columns appropriately to provide the animations/"previews" while dragging, or other similar implementations.
Still, this was interesting and I did learn things. I also believe that, with this, adding drag and drop for the table rows would be pretty simple, if necessary. I could also look through how drag and drop is currently done in Angular for inspiration on how to do it for the columns, or maybe it actually is possible without implementing the functionality myself.
Resources:
I have previously used Firebase Cloud Firestore, a NoSQL database. I remember noticing Datastore when I used Firestore, but I told myself to look at it another time, and it seems the time was now. Overall, I found out more about Datastore and how it works, like how it is also a NoSQL database, and I found similarities between entities and documents, and between kinds and collections, which helped me understand it quickly.
For the deadline extensions feature, we had to maintain maps from email addresses to deadlines within the feedback session entities. I learned that this was not a standard Datastore value type, so a possible way of storing it would be as a Blob. I also learned that, within Objectify, this can be done through the `Serialize` annotation.
In order to validate requests to update the deadline maps, we needed to check whether the emails in the requests actually existed for the corresponding course. One way would be to load every single `CourseStudent` entity and every `Instructor` entity. However, I learned that this costs a certain amount, and the cost scales with every entity read. I found out about projection queries, whose cost scales with the number of queries rather than the number of entities read in each query. This was more economical, so I chose to do this instead. Strangely, I do not think projection queries are documented in Objectify, so I had to refer to Stack Overflow to find out how to do projection queries within Objectify.
I also learned that projection queries need indices. I initially, wrongly, thought this applied only to the projected properties, not to other properties in the same query that were, say, filtered. I had also read that every property of each entity kind already has a built-in index of its own, so I wrongly assumed I did not need to write any more indices for my projection queries. However, Fergus (I believe?) pointed out that this was wrong, and looking at it again, it does make more sense for all properties used in a query, both projections and filters, to require a composite index altogether. This came with a downside, though, as I found out that indices cost money to maintain due to their storage costs.
Resources:
I have also only previously used Google Cloud Functions and Firebase Cloud Functions. I remember noticing App Engine when I used either of them and telling myself to look at it another time, so getting to learn it by joining TEAMMATES, like Datastore, was a great thing.
I think the main thing I learned was task queues, though unfortunately they have already been phased out. I am at least hoping this knowledge transfers to what I believe is Google Cloud's new equivalent service, Cloud Tasks. Regardless, I had to use task queues to run the workers created by Samuel, which handle deadline extension entities for the deadline extensions feature.
Resources:
With the migration to SQL, we started to use Hibernate as our object-relational mapping (ORM) tool. While I was familiar with SQL, especially after taking CS2102, I was not very familiar with ORMs; I had only previously used Django's built-in ORM. I felt Hibernate was extremely verbose compared to Django's ORM. Regardless, I had to use my new Hibernate knowledge to review PRs that used it, while tolerating its verbose syntax. Apparently, this is a popular ORM for Java, so I may encounter it elsewhere and should get used to its syntax.
Resources:
Wilson mentioned Liquibase in our group chat, and we needed to change the database for the account request form (ARF) feature, so I decided to look into it. I never got to dive deep because I was not the one who ended up integrating it fully into TEAMMATES (I think it was Nicolas?). Also, the way it was integrated meant that only the release lead generates the Liquibase changelog, so I did not really get to touch it. Still, I did learn a bit about Liquibase; from what I saw, it is quite similar to how Django manages its database migrations with its changelogs.
Resources:
Back when I was doing CS3281, running tests through Gradle involved only one task, which ran our integration tests (intended to be unit tests). Now that I have come back to TEAMMATES for CS3282, running tests through Gradle involves more than one task: one for the unit tests and one for the integration tests. When either task fails, the other no longer runs. I learned how to prevent that by looking at the Gradle documentation, and I tried to integrate this into TEAMMATES's GitHub Actions by submitting an issue for it (#12900). I also learned a lot more about how Gradle works: how to create tasks, projects, even multi-project builds, settings, build scripts, initialization scripts, etc. It was pretty interesting. I must admit I previously only used Gradle when it was all set up for me, so all I needed to do was run commands. Now I think I can set up a Gradle project, and even adjust the settings and build scripts.
Resources:
CS3281: Overall, I believe that because I was the least experienced (or at least I felt I was), I was also able to learn a whole lot from this module, especially front-end-wise.
CS3282: I still feel like I have much more to learn, but at least I do feel a bit more experienced and confident that I know what I am doing, at least somewhat.
While I used Angular to make a PR for TEAMMATES before the semester started, I think I still had a lot more to learn about it, like front-end unit testing (especially this because that initial PR had no tests at that point in time) which I was able to learn when I eventually actually made that PR in the real TEAMMATES repo. Due to the bindings, I had to pay especially close attention to the component testing scenarios of a component with inputs and outputs and a component inside a test host.
However, that was mostly component and snapshot testing. In order to also learn how to do testing for services, I also did testing for the feedback responses service. Though, I learned that testing services seemed largely similar to and yet much simpler than testing components.
Beyond testing, I also learned how to create services themselves in this onboarding task commit where I created the service to get feedback response statistics from the backend. I also learned how to integrate this service with the actual page component in order to actually obtain statistics to display using RxJS.
As for components or their templates, I learned about more about how to use Angular's HTML templates in order to direct inputs to and outputs from a component through property binding and event binding respectively. I also learned about how the custom structural directive tmIsLoading
worked in this PR as I was debugging when I initially wrongly caused the loading spinner to always display when I was in fact trying to display something else (eventually found out it was because I used the same boolean variable used to display the spinner, so don't be like me; check the usages of any variable you reuse). I also learned how to use <ng-container>
and <ng-template>
in that same PR, particularly with structural directives like ngIf
.
Resources:
In order to integrate Angular services that used asynchronous requests with components, I had to learn about Observables and Subscriptions from RxJS. I also had to learn other things from RxJS like the operators pipe
or first
for the previously mentioned component testing I did due to the fact that EventEmitter
objects used for event binding apparently functioned like RxJS Observable
objects.
Resources:
While I have taken some online web development courses in my free time before, I have actually never touched web development in a real project, only backend and mobile application development. Thus, doing some front-end work benefitted me a lot. For example, I was able to use my initially largely untested (and back then, slowly fading) knowledge of HTML and/or Bootstrap to some use such as in my onboarding task commits where I (re-)learned how to align everything nicely using the Bootstrap grid system (sorry if this is really basic) or in TEAMMATES PR #11628. Actually, after doing the front-end stuff in the onboarding task, I decided to go into the back-end for the deadline extensions feature so that I could learn TEAMMATES front to back, but perhaps I should have stayed in the front-end for the deadline extensions feature too to learn more. Still, virtually all my non-deadline extensions feature PRs were front-end related so maybe I was still able to learn as much as I could about the front-end.
Resources:
I learned how to use these to do front-end unit testing in Angular as previously mentioned, particularly things like `expect` to check that values are as expected, `spyOn` to mock services, `beforeEach` for common test setup code, and related attributes/functions (`toBeTruthy()`, etc.).
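As a rough plain-JavaScript sketch of what a `spyOn`-style helper does under the hood (this is not Jest's implementation; the service and method names are made up): replace a method with a recording fake, and keep the original around so it can be restored.

```javascript
// Replace a method with a recording stub, remembering the original.
function spyOn(obj, methodName) {
  const original = obj[methodName];
  const calls = [];
  obj[methodName] = (...args) => {
    calls.push(args);   // record every invocation and its arguments
    return undefined;   // default behaviour: do nothing, like a stub
  };
  return { calls, restore: () => { obj[methodName] = original; } };
}

const fakeService = { fetchData: (id) => `real data for ${id}` };
const spy = spyOn(fakeService, 'fetchData');
fakeService.fetchData(42);          // recorded, real method not called
console.log(spy.calls.length);      // 1
spy.restore();
console.log(fakeService.fetchData(42)); // 'real data for 42'
```

Jest's real `spyOn` additionally supports `mockReturnValue`, call-through behaviour, and automatic restoration, but the recording idea is the same.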
Also, I learned about snapshot testing. I had no idea this existed before (sorry if this is basic), and yet it seems to be pretty widely used, so learning of its existence seemed important.
Resources:
I learned how to use D3 to display charts. I used this to create the feedback responses statistics chart.
Resources:
I was looking into the issue Instructor: Edit rubric question: reorder options using drag and drop #8933. I initially wanted to do a PR before my exams started, but I unfortunately had no time to do so. Regardless, I was able to look into how I could possibly do it after my exams, when I have time.
I looked through the code base to see how drag and drop is implemented in other question types, such as in multiple choice questions, and I found out that we use the CDK Drag and Drop module from Angular Material. Angular Material allows Material Design components to be added to Angular. From what I understand, Material Design provides a sort of library or system of customizable front-end components with pre-made UI functionality. I have actually used it previously in my own Android side projects, though this is my first time using the drag and drop component (or similar), because it is currently not available on Android. I had also never used Material Design within Angular before.
The nice thing about Angular Material is that it hides all the underlying code away; the minimum needed is to add the `cdkDrag` Angular directive. Unfortunately, from what I see, the drag and drop functionality provided by Angular Material does not work very well for table columns, which is the main focus of the issue. In general, tables seem to not be well supported by Angular Material drag and drop, based on how tables are missing from the official documentation. Fortunately, there are workarounds, like this post from Stack Overflow and its linked StackBlitz project, or this blog post. However, these solutions do not produce satisfactory results, at least to me. When columns are dragged along rows, the animations and "previews" do not show up for the rest of the rows, only for the row that was clicked on (such as the header). On the other hand, it does work well for dragging rows along columns. I suspect this has to do with how tables work in HTML: a column is essentially not a single element but is split into multiple table cell elements, unlike table rows, which are single row elements. This means Angular Material drag and drop works pretty well with rows, adding animations/previews, but not with columns. I believe that to enable this for table columns, it may be necessary to implement it from scratch after all, manually checking the location of the mouse and changing the columns appropriately to provide the animations/"previews" while dragging, or some similar implementation.
Still, this was interesting and I did learn things. I also believe that with this, adding drag and drop for the table rows would be pretty simple, if necessary. I could also look through how drag and drop is currently done in Angular for inspiration on how to do it for the columns, or maybe it actually is possible without implementing the functionality myself.
Resources:
I have previously used Firebase Cloud Firestore, a NoSQL database. I remember noticing Datastore back when I used Firestore, but I told myself to look at it another time, and it seems that time was now. Overall, I found out more about Datastore and how it works, like how it is also a NoSQL database, and I found similarities between entities and documents, and between kinds and collections, which helped me understand it quickly.
For the deadline extensions feature, we had to maintain maps from email addresses to deadlines within the feedback session entities. I learned that this was not a standard Datastore value type, so a possible way of storing it was as a Blob. I also learned that within Objectify, this can be done through the Serialize annotation.
In order to validate requests to update the deadline maps, we needed to check whether the emails in the requests actually existed for the corresponding course. One way would be to load every single `CourseStudent` entity and every `Instructor` entity. However, I learned that each entity read costs a certain amount, so the cost scales with the number of entities read. I found out about projection queries, whose cost scales with the number of queries rather than the number of entities read in each query. This was more economical, so I chose to do this instead. Strangely, I do not think projection queries are documented in Objectify, so I had to refer to Stack Overflow to find out how to do projection queries within Objectify.
I also learned that projection queries need indices. I initially wrongly thought that this applied only to the properties that were projected, not to other properties in the same query that were, say, filtered on. I had also previously read that every property of each entity kind already has a built-in index of its own, so I initially wrongly assumed that I did not need to write any more indices for my projection queries. However, Fergus (I believe?) pointed out that this was wrong, and looking at it again, it does make more sense for all properties used in a query, both projections and filters, to require a composite index altogether. This came with a downside, though, as I also found out that indices cost money to maintain due to their storage costs.
Resources:
I have also only previously used Google Cloud Functions or Firebase Cloud Functions. I remember noticing App Engine back when I used either of them and telling myself to look at it another time, so getting to learn it by joining TEAMMATES, like with Datastore, was great.
I think the main thing I learned was task queues, though unfortunately, they have already been phased out. I am at least hoping that this knowledge transfers to what I believe is the equivalent newer Google Cloud service, Cloud Tasks. Regardless, I had to use task queues to run the workers created by Samuel, which handle deadline extension entities for the deadline extensions feature.
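The core idea behind a task queue can be sketched in a few lines of plain JavaScript (this is only the pattern, not the App Engine API; the task names and payloads are made up): producers enqueue small task payloads, and a worker drains the queue separately, so slow work happens outside the request that created it.

```javascript
// A toy in-memory task queue illustrating the enqueue/worker split.
const queue = [];

function enqueue(taskName, payload) {
  queue.push({ taskName, payload });
}

// The "worker": processes whatever tasks have been queued so far.
function runWorker(handlers) {
  const results = [];
  while (queue.length > 0) {
    const task = queue.shift();
    results.push(handlers[task.taskName](task.payload));
  }
  return results;
}

// Hypothetical usage: queue deadline-extension work, process it later.
enqueue('sendReminder', { email: 'student@example.com' });
enqueue('sendReminder', { email: 'instructor@example.com' });
const processed = runWorker({
  sendReminder: (p) => `reminded ${p.email}`,
});
console.log(processed.length); // 2
```

In App Engine (and in Cloud Tasks), the queue is durable and the worker is an HTTP endpoint with retries, but the separation of enqueueing from processing is the same.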
Resources:
With the migration to SQL, we started to use Hibernate as our object-relational mapping (ORM) tool. While I was familiar with SQL, especially after taking the course CS2102, I was not very familiar with ORMs; I had only previously used Django's built-in ORM. I felt Hibernate was extremely verbose compared to Django's ORM. Regardless, I had to use my new Hibernate knowledge to review PRs which used it, while tolerating its verbose syntax. Apparently, this is a popular ORM for Java, so I may encounter it elsewhere, and I guess I need to get used to its syntax.
Resources:
Wilson mentioned Liquibase in our group chat. We also needed to change the database for the account request form (ARF) feature. Thus, I decided to look into it. I never got to dive deep into it because I was not the one who ended up integrating it fully into TEAMMATES (I think it was Nicolas?). Also, the way it was integrated meant that only the release lead generates the Liquibase changelog, so I did not really get to touch it. Still, I did get to learn a bit about Liquibase. From what I saw, its changelogs are quite similar to how Django manages its database migrations.
Resources:
Back when I was doing CS3281, running tests through Gradle involved only one task, which ran our integration tests (which were intended to be unit tests). Now that I have come back to TEAMMATES for CS3282, running tests through Gradle involves more than one task: one for the unit tests and one for the integration tests. When either task fails, the other no longer runs. I learned how to prevent that by looking at the Gradle documentation, and I also tried to integrate this into the GitHub Actions in TEAMMATES by submitting an issue for it (#12900). I also learned a lot more about how Gradle works, like how to create tasks, projects, even multi-project builds, settings, build scripts, initialization scripts, etc. It was pretty interesting. I must admit, I previously only used Gradle when it was all set up for me, so all I needed to do was run commands. Now, I think I can set up a Gradle project, and even adjust the settings and build scripts.
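One standard way to keep the remaining test tasks running after a failure (a sketch; the task names here are hypothetical, and the mechanism the PR actually used may differ) is Gradle's `--continue` command-line flag, which tells Gradle to execute as many tasks as possible instead of aborting at the first failing task:

```
# Run both test tasks; a failure in one does not skip the other.
./gradlew unitTests integrationTests --continue
```

Gradle still reports the build as failed at the end, but both tasks get to run and report their results.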
Resources:
Python is a high-level, general-purpose programming language. CPython is the reference implementation of the Python programming language. Written in C and Python, CPython is the default and most widely used implementation of the Python language.
I added a more meaningful error message for when `bytearray.extend` is incorrectly used with a `str` object input, to tackle the bug highlighted in the GitHub issue, "bytearray.extend: Misleading error message".
`str` is a built-in type in Python. `str` objects are strings of text; strings are immutable sequences of Unicode code points. `bytearray` is another built-in type in Python; `bytearray` objects are mutable sequences of single bytes. `bytearray.extend` can be used to add all the bytes of another sequence of bytes to the end of the `bytearray` object. This means that `bytearray.extend` can only be used with inputs that are sequences of individual bytes. In other words, `str` objects cannot be used as input to `bytearray.extend` because they are not sequences of single bytes.
When a `str` object is passed as input into `bytearray.extend`, Python correctly raises an error due to the type of the input. However, the error message is misleading, as it states `TypeError: 'str' object cannot be interpreted as an integer`. The `str` object mentioned can be read as referring to the input passed, which seems to suggest that integers can be passed as input; this is incorrect, because integers are not sequences, much less sequences of bytes. In reality, the `str` object mentioned refers to the elements of the sequence represented by the input `str` object, which are themselves also `str` objects.
The error message is not wrong; it is just misleading. The PR I contributed fixed this by checking, when an error is raised, whether the input is a `str` object, and changing the error message to a more meaningful one: `TypeError: expected iterable of integers; got: 'str'`.
Python uses reStructuredText (RST) to document the project. RST is a lightweight markup language. It is not difficult to use, but it has its own syntax, which is different from more popular markup languages like Markdown. I had to write a `NEWS` entry[1] using RST. I used the Python Developer's Guide page on RST to help me figure out how to write it.
My first attempt at fixing the misleading error message was checking the type of the input very early on, even before any error was raised. I believe that in any other project, including in TEAMMATES, my first attempt might be seen as reasonable, and I think it might even be accepted, maybe after only a few minor changes, if any.
However, this was not the most performant way to fix the bug. Checking the type of the input before an error is raised means that the input would be checked even if the input was valid. The first review wanted me to change this, and so I did.
When I made my PR to fix the misleading error message, I was also required to write a `NEWS` entry, just like almost every other PR made to the project. In the Python project, `NEWS` entries document contributions so that they can be added to the changelog. They are necessary for any contribution, except for those that do not affect users of the Python programming language itself.
From what I understand, changes that are more significant can be highlighted in "What's New in Python" entries.
In comparison, I do not think this is done in TEAMMATES. I think all changes are mentioned equally in the releases.
If somebody wants to fix a typo in the Python project, they do not need to post a new issue before making a pull request. They can simply make the pull request immediately. From what I know, this is not the case in TEAMMATES. At the very least, it is not explicitly mentioned in the TEAMMATES developer guide.
When a Python version is released, people will use that version of Python. They may continue to use that version for their projects even when much newer Python versions are released. Thus, the Python team needs to continue to support older versions (up to a limit) by making sure that bug fixes and security patches are also applied to the supported older versions.
Each version is maintained on its own respective Git branch, but all changes are initially made by submitting a PR to the main branch. PRs are given labels like `needs backport to 3.12`, which indicate that the PR needs to be backported to the Git branch for a specific Python version. When a PR is merged into the main branch, a bot (miss-islington-app) backports it to older Python branches according to the labels. It does this by submitting the same PR to the Git branches of the relevant Python versions. A member of the Python project team can then merge the PR into those branches.
In TEAMMATES, we may often have multiple feature branches in addition to the main branch. Fixes may be made to the main branch that are also required on the feature branches. In TEAMMATES, we often integrate these fixes into the feature branches by manually rebasing the feature branch onto the last commit on the main branch or merging the main branch into the feature branch. In other words, unlike in the Python project where changes in the main branch are almost automatically integrated into the other branches, in TEAMMATES, these changes to the main branch are manually integrated into other branches.
Instead of displaying all the changes equally, it may be better to highlight some of them, as they may be more significant to more users. Users may not notice such changes if they are displayed equally with the rest, even if they would be of interest to them.
For minor contributions, it seems like it would be overkill to need to post an issue before a pull request can be made. If it is not already the case, then maybe we should allow minor contributions without their own GitHub issues. We should also make it clear in the developer guide that this is allowed.
Instead of manually rebasing onto the main branch or manually merging the main branch into a feature branch, maybe it would be better to do it automatically. Maybe a bot can do this for us. A problem I can foresee with this is if there are merge conflicts. However, it is possible to make a PR for merging branches. The merge conflicts may be resolved manually in the branch created for the PR. While this reintroduces some manual work, the merge conflicts should not occur all the time. If this automation is possible, with some of the changes in the main branch being integrated into the feature branches automatically, this may reduce some of the load on the developers.
In the Python project, `NEWS` entries document contributions so that they can be added to the changelog.
While working with Vue components this semester, I've learned more about `props` and `script` in Vue while working on the template for panels, through adding a new prop `isSeamless` and writing a new script for the panel component.
MarkBind uses Jest together with Vue Test Utils for its snapshot tests, which test Vue components against their expected snapshots. While updating the component, I wrote new tests to ensure that the Vue components are working as expected.
An interesting issue I've encountered this semester while researching how to integrate full search functionality is the issue of importing ES modules (ESM) like `pagefind` into CommonJS (CJS) modules. CommonJS uses the `require('something')` syntax for importing other modules, while ESM uses the `import {stuff} from './somewhere'` syntax.
Another crucial difference is that CJS imports are synchronous while ESM imports are asynchronous. As such, when importing ES modules into CJS, the normal `require('pagefind')` syntax would result in an error. Instead, you'll need to use `await import('pagefind')` to asynchronously import the module. This difference is worth taking note of, since we use both the ESM `import` syntax and the CJS `require` syntax in various files in MarkBind.
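A small sketch of the pattern in a CommonJS file, using the built-in `node:path` module as a stand-in since `pagefind` itself may not be installed: dynamic `import()` works inside CJS because it is a function-like expression returning a Promise, not a static declaration.

```javascript
// In a CommonJS module, require() of an ES module throws ERR_REQUIRE_ESM,
// but dynamic import() works: it asynchronously resolves to the module's
// namespace object.
async function loadModule() {
  const mod = await import('node:path'); // stand-in for import('pagefind')
  return typeof mod.join; // exports are available on the namespace object
}

// import() returns a Promise even before it is awaited.
const pending = import('node:path');
console.log(pending instanceof Promise); // true

loadModule().then((kind) => console.log(kind)); // 'function'
```

This is also why code calling an ES module from CJS often has to become `async` itself, which can ripple through the call chain.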
Nunjucks is a rich and powerful templating language for JavaScript. MarkBind supports Nunjucks for templating and I’ve used Nunjucks specifically to create a set of mappings of topics to their pages, and to write macros.
Nunjucks `macro` allows one to define reusable chunks of content. A great benefit of `macro` is the reduction of code duplication, due to its ability to encapsulate chunks of code into templates and to accept parameters so that the output can be customised based on the inputs provided.
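As a small illustration (the macro name and arguments here are hypothetical, not from the MarkBind codebase), a Nunjucks `macro` encapsulates a reusable chunk and parameterises it:

```
{% macro topicCard(title, link, badge='') %}
<div class="card">
  <a href="{{ link }}">{{ title }}</a>
  {% if badge %}<span class="badge">{{ badge }}</span>{% endif %}
</div>
{% endmacro %}

{{ topicCard('Nunjucks', 'syntax/nunjucks.html', badge='templating') }}
```

Each call produces the same structure with different content, which is exactly how the duplication is avoided.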
While combining the syntax pages in this commit, I worked on a `set` that keeps track of the various syntax topics and their information. This was a good exercise in creating a variable using `set` and accessing its values from other files using `import`.
MarkBind has Vue.js components built on the popular Bootstrap framework. Much of Bootstrap's features are supported in and out of these components as well. While creating the portfolio template, I got to learn more about the various components and layouts of Bootstrap.
The Bootstrap `grid` is built with flexbox and is fully responsive. Some more specific aspects I've learned:
Explored various components offered by Bootstrap, such as accordions, cards, and carousels.
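For instance (with hypothetical column sizes), a responsive Bootstrap grid row looks like this: the columns stack on small screens and sit side by side from the `md` breakpoint up, with the twelve-column widths split 8/4.

```html
<div class="container">
  <div class="row">
    <div class="col-md-8">Main content</div>
    <div class="col-md-4">Sidebar</div>
  </div>
</div>
```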
I mostly worked on the frontend of MarkBind, improving components and the UI, as well as contributing to documentation. Some notable issues I've worked on are:
Vue is a frontend JavaScript framework to build web user interfaces, while Pug is a preprocessor that speeds up writing HTML. In RepoSense, Vue brings the interactivity in the generated reports and Pug makes it easier to write more concise and readable HTML.
Having some experience in frontend libraries/frameworks like React, I did not have too steep a learning curve when learning Vue; however, I still took some time to get used to the concepts and syntax of Vue, including state management, the idea of single-file components, conditional rendering, reactive UI elements, and much more. One particular aspect I enjoyed learning and using in Vue was the ease of declaring state in a component just within the `data()` function. To me, this contrasted with React, where `useState` and `useEffect` are more complicated and tricky to use.
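One detail worth noting about `data()`, sketched below in plain JavaScript with a made-up component (independent of Vue itself): it is a function that returns a fresh object, so each component instance gets its own state rather than all instances sharing one object.

```javascript
// A Vue-like component options object: data() must return a NEW object
// each time so that component instances do not share state.
const counterComponent = {
  data() {
    return { count: 0 };
  },
};

// Simulate creating two component instances.
const instanceA = counterComponent.data();
const instanceB = counterComponent.data();

instanceA.count += 1;
console.log(instanceA.count); // 1
console.log(instanceB.count); // 0 — untouched, because the objects are distinct
```

If `data` were a plain shared object instead of a function, incrementing the count in one instance would change it everywhere.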
Cypress is a testing framework that allows for end-to-end testing of web applications. I used it in RepoSense to write tests for the UI.
Cypress was a new tool to me, and I had to learn how to write tests using it as well as how to set up the test environment. Many Cypress commands are based on natural words like `.then`, `.get`, and `.to.deep`, just to name a few, but concepts of Cypress like asynchronicity, closures, and its inclusion of jQuery were unfamiliar to me.
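The asynchronicity can be surprising: Cypress commands are queued and run later, rather than executing immediately when called. A rough plain-JavaScript sketch of that queueing idea (not Cypress's actual implementation; all names are made up):

```javascript
// Commands are not run when called; they are pushed onto a queue and
// executed in order afterwards, which is why values must be accessed via
// .then-style callbacks rather than plain return values.
const commandQueue = [];

function cyGet(selector) {
  commandQueue.push(() => `element<${selector}>`);
  return {
    then(cb) {
      commandQueue.push((prev) => cb(prev));
      return this;
    },
  };
}

function runQueue() {
  let prev;
  for (const cmd of commandQueue) {
    prev = cmd(prev) ?? prev;
  }
}

const seen = [];
cyGet('.summary-chart').then((el) => seen.push(el));
console.log(seen.length); // 0 — nothing has run yet at "call" time
runQueue();
console.log(seen); // ['element<.summary-chart>']
```

This is why assigning the return value of a Cypress command to a variable does not give you the element, and why the docs push you toward chaining instead.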
...
HSR Optimizer is a tool built to help Honkai:Star Rail players figure out how to build their characters by helping to abstract some of the math away in a user friendly interface.
They are very light on
feat(#278): add button to ScoringModal to reset all characters - PR merged
[Bug] Recalculate score for saved builds #170 - investigate & PR
This semester, I was involved in the database migration team, both in migrating the application code and in creating the scripts to transport the data from Datastore to Cloud SQL. This involved migrating actions in the backend application code, fixing previously undetected bugs, setting up a development Google Cloud environment, writing base script files for moving and verifying data and mapping non-SQL entities to SQL entities, debugging errors such as OutOfMemory exceptions during migration, and exploring potential speedups.
During this journey, I also played the role of mentor for one of the CS3281 mentees, providing guidance on migrating action code. This included sharing software engineering best practices, such as avoiding mutable instances in constant files and using inheritance, to ensure TEAMMATES code is more maintainable and less bug-prone.
Secondly, I was involved in the SQL injection testing team, where I contributed knowledge on common SQL injection attack inputs and helped formulate the test cases.
Thirdly, I was involved in the design of the Multiple Course Structures feature. I participated in the discussion and helped implement the Hibernate entities to be used for the future implementation of this feature. This included updating existing ER designs to support the existing schema and the new schema simultaneously.
Finally, I was involved in raising and fixing minor documentation errors such as outdated commands on the developer guide to improve the experience of future developers.
As for external projects, I worked on the Scribe-iOS project, a Google Summer of Code project under the Wikimedia Foundation, to improve the code quality of the software. My learning points are further described in "observations.md".
Date | Role | Description | Key Achievement |
---|---|---|---|
24/01/2024 | Issue Reporter | Found and reported issue #12699 with developer documentation ng command | |
24/01/2024 | PR Author | Fixed documentation bug #12699 in TEAMMATES developer documentation with ng command | Fixed documentation bug on key page (TEAMMATES new developer guide) |
07/02/2024 | PR Reviewer | Review of PR #12706 Migrate CreateInstructorAction | |
15/02/2024 | PR Author | PR #12702 Migrated CreateAccountAction | 1. Over 20k LoC 2. Found and fixed previously undetected bugs with HibernateContext and circular toString() errors which caused StackOverflow crashes 3. Migrated 12k LoC of the previous JSON bundle to the new SQL bundle format |
20/02/2024 | PR Reviewer | Review of PR #12741 Migrate FeedbackSessionPublishedRemindersAction | |
20/02/2024 | PR Reviewer | Review of PR #12759 Add tests for FeedbackQuestionsDb | |
20/02/2024 | PR Reviewer and mentor | Review of PR #12719 Migrate GetResultsSessionAction | Provided guidance on best practices (avoid shared mutable instances in Const file, only immutable String literals, `is...` naming convention for booleans), provided mentorship on using inheritance for NonExistentFeedbackResponse.java instead of instantiating duplicate 'fake' feedback sessions multiple times, aided in explaining code. |
20/02/2024 | PR Contributor | Create Database migration base scripts | |
24/02/2024 | PR Author | Wrote migration script for UsageStatistics | |
25/02/2024 | PR Author | Wrote verification script for UsageStatistics | |
25/02/204 | PR Contributor | Contributed SQL injection ideas to aid in SQL injection testing | Provided SQL injection test cases to be used during SQLi testing |
26/02/2024 | PR Author | Implementing pagination for SQL migration base script | Prevent OutOfMemory errors due to large amount of data loaded and migrated by migrating page by page which can fit into memory |
26/02/2024 | PR Co-Author | Wrote base script for DB migration verification | Debugged issue regarding failure to verify equality of migrated entities due to incorrecly implemented isEqual() method, where instances should use .equals() instead of == to check equality of value |
14/03/2024 | Mentor | Discussion on Multiple course structure (formerly multiple team structures) | Ensure everyone on team understands project requirement, rename to Multiple Course Structure for clarity, since 'Teams' in TEAMMATES means something else, discussed UX flow and UI elements |
28/03/2024 | PR Co-Author / Contributor | Implement Hibernate entities for Multiple Course Structure | Database ERD schema discussion and validation, guidance on many-to-many relationship representation in Hibernate. |
During this journey, I also played the role of a mentor for one of the CS3281 mentees, providing guidance on migrating action code. This included sharing software engineering best practices, such as avoiding mutable instances in constant files and using inheritance, to ensure TEAMMATES code is more maintainable and less bug-prone.
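The "avoid mutable instances in constant files" advice can be shown in a few lines. This is a made-up illustration (not actual TEAMMATES code), assuming a hypothetical constants class:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical constants class illustrating why shared mutable instances
// in a Const file are risky: a final reference does not make the object immutable.
public class ConstPitfall {
    // BAD: the list object itself is mutable even though the reference is final,
    // so any caller can silently change state for every other user of ROLES.
    public static final List<String> ROLES =
            new java.util.ArrayList<>(Arrays.asList("student", "instructor"));

    // GOOD: an unmodifiable list cannot be mutated by any caller.
    public static final List<String> SAFE_ROLES = List.of("student", "instructor");

    public static void main(String[] args) {
        ROLES.add("admin"); // compiles and runs: the "constant" has now changed
        try {
            SAFE_ROLES.add("admin");
        } catch (UnsupportedOperationException e) {
            System.out.println("mutation rejected"); // the immutable list refuses
        }
    }
}
```

This is why the reviews pushed for only immutable String literals (or unmodifiable collections) in constant files.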
Secondly, I was part of the SQL injection testing team, where I contributed knowledge of common SQL injection attack inputs and helped formulate the test cases.
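As a sketch of the kind of attack input we tested with, the classic tautology payload below shows how naive string concatenation breaks. The table and query here are hypothetical, not actual TEAMMATES queries:

```java
// Illustration of a classic SQL injection payload against naive query building.
public class SqliDemo {
    // Vulnerable: user input is concatenated straight into the SQL string.
    static String naiveQuery(String email) {
        return "SELECT * FROM students WHERE email = '" + email + "'";
    }

    public static void main(String[] args) {
        String payload = "' OR '1'='1"; // classic tautology payload
        // The payload closes the string literal and makes the WHERE clause
        // always true, so the query would return every row:
        System.out.println(naiveQuery(payload));
        // A parameterized query (e.g. a PreparedStatement with a ? placeholder)
        // treats the whole payload as data rather than SQL, which is the standard fix.
    }
}
```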
Thirdly, I was involved in the design of the Multiple Course Structures feature. I participated in the discussion and helped implement the Hibernate entities to be used in the future implementation of this feature. This included updating the existing ER designs to support the old and new schemas simultaneously.
Finally, I was involved in raising and fixing minor documentation errors such as outdated commands on the developer guide to improve the experience of future developers.
As for external projects, I worked on the Scribe-iOS project, a Google Summer of Code project under the Wikimedia Foundation, to improve the code quality of the software. My learning points are further described in "observations.md".
Date | Role | Description | Key Achievement |
---|---|---|---|
24/01/2024 | Issue Reporter | Found and reported issue #12699 with the ng command in the developer documentation | |
24/01/2024 | PR Author | Fixed documentation bug #12699 (ng command) in the TEAMMATES developer documentation | Fixed a documentation bug on a key page (the TEAMMATES new developer guide) |
07/02/2024 | PR Reviewer | Review of PR #12706 Migrate CreateInstructorAction | |
15/02/2024 | PR Author | Authored PR #12702 Migrate CreateAccountAction | 1. Over 20k LoC 2. Found and fixed previously undetected bugs with HibernateContext and circular toString() errors which caused StackOverflow crashes 3. Migrated 12k LoC of the previous JSON bundle to the new SQL bundle format |
20/02/2024 | PR Reviewer | Review of PR #12741 Migrate feedbackSessionPublishedRemindersAction | |
20/02/2024 | PR Reviewer | Review of PR #12759 Add tests for FeedbackQuestionsDb | |
20/02/2024 | PR Reviewer and mentor | Review of PR #12719 Migrate GetResultsSessionAction | Provided guidance on best practices (avoid shared mutable instances in the Const file, only immutable String literals; is... naming convention for booleans), provided mentorship on using inheritance for NonExistentFeedbackResponse.java instead of instantiating duplicate 'fake' feedback sessions multiple times, and helped explain code. |
20/02/2024 | PR Contributor | Created database migration base scripts | |
24/02/2024 | PR Author | Wrote migration script for UsageStatistics | |
25/02/2024 | PR Author | Wrote verification script for UsageStatistics | |
25/02/2024 | PR Contributor | Contributed SQL injection ideas to aid in SQL injection testing | Provided SQL injection test cases to be used during SQLi testing |
26/02/2024 | PR Author | Implemented pagination for the SQL migration base script | Prevents OutOfMemory errors from loading and migrating large amounts of data at once, by migrating page by page so that each page fits into memory |
26/02/2024 | PR Co-Author | Wrote base script for DB migration verification | Debugged a failure to verify equality of migrated entities caused by an incorrectly implemented isEqual() method: instances should use .equals() instead of == to check value equality |
14/03/2024 | Mentor | Discussion on Multiple Course Structure (formerly multiple team structures) | Ensured everyone on the team understood the project requirements; renamed to Multiple Course Structure for clarity, since 'Teams' in TEAMMATES means something else; discussed UX flow and UI elements |
28/03/2024 | PR Co-Author / Contributor | Implement Hibernate entities for Multiple Course Structure | Database ERD schema discussion and validation, guidance on many-to-many relationship representation in Hibernate. |
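The `==` vs `.equals()` pitfall from the verification-script debugging above boils down to reference versus value equality. A minimal sketch, with illustrative variable names:

```java
// Why == failed in the migration verification: entities loaded from the old
// datastore and from SQL are distinct objects, even when they hold equal values.
public class EqualityDemo {
    public static void main(String[] args) {
        String oldEntityId = new String("course-1234");
        String newEntityId = new String("course-1234");

        System.out.println(oldEntityId == newEntityId);      // false: compares object references
        System.out.println(oldEntityId.equals(newEntityId)); // true: compares values
    }
}
```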
Week | Achievements |
---|---|
1 | Reviewed PR: Add whitespace validation #1237 |
2 | Reviewed PR: Fix broken duplicate link #1233 |
2 | Reviewed PR: Redirect invalid routes to 404 not found page #1238 |
4 | Reviewed PR: Preserve line breaks in markdown #1241 |
10 | Reviewed PR: Raising warnings when submitting team response with assignees who aren't in organization #1264 |
13 | Reviewed PR: Add login redirect #1256 |
Week | Achievements |
---|---|
3 | Authored PR: Add documentation for CATcher's parser #1240 |
Week | Achievements |
---|---|
5 | Authored PR (merged): Simplify fix for abrupt panel transition #2421 |
8 | Authored PR (merged): Use intended test input for NodeProcessor test #2462 |
8 | Authored PR (merged): Test logger calls in tests for NodeProcessor #2463 |
10 | Authored PR (merged): Standardise NodeProcessor.data.ts constant names #2483 |
12 | Authored PR (merged): Implement method to process attributes that can be overridden by slots #2511 |
12 | Authored PR (merged): Remove Overridden Question Attributes in Documentation #2513 |
Reading | Authored PR: Add tests for logger output when component attributes are overridden by slots #2525 |
Reading | Authored PR: Add missing documentation for attributes overridden by slots #2526 |
Week | Achievements |
---|---|
10 | Raised Issue: Add logger warnings when slots override attributes in NodeProcessor.ts #2476 |
12 | Raised Issue: Overridden Attribute for Question in Documentation causes Logger Warnings and Build Failure #2512 |
Reading | Raised Issue: Tab-Group Header not displayed #2524 |
TEAMMATES uses Hibernate, an Object-Relational Mapping framework which allows us to interact with the database without writing SQL commands. It abstracts these low-level database interactions, enabling developers to work with high-level objects and queries instead. I read up on some Hibernate basics:
References:
Mockito facilitates unit testing by reducing the setup needed to create and define the behaviour of mocked objects. The provided `mock`, `when/then`, and `verify` methods not only simplify the test-writing process, but also enhance readability and clarity for future developers.
References:
I was introduced to Docker during the onboarding process. I learnt about containers and the benefits of containerization, such as portability and isolation, and how they enable developers on different infrastructure to work in a consistent environment.
References:
- `UpdateStudentAction`
- `FeedbackQuestionsDb`
Week | Achievements |
---|---|
4 | Merged PR: [#12048] Migrate UpdateStudentAction #12727 |
5 | Merged PR: [#12048] Add tests for FeedbackQuestionsDb #12759 |
6 | Created issue: Data Migration: Setter changes are persisted before data verification checks #12779 |
Data migration is a critical aspect of software development and system maintenance: it involves moving data efficiently while maintaining data integrity, security, and consistency. Having the chance to be involved in data migration really opened my eyes to its general procedure. We were tasked with migrating entities from the NoSQL Datastore to PostgreSQL.
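The page-by-page approach used to avoid OutOfMemory errors can be sketched as a simple loop that only ever holds one page in memory. Class and method names below are hypothetical, not the actual TEAMMATES migration scripts:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a paginated migration loop: fetch one page of old entities,
// convert and persist it, then move on, so memory usage stays bounded.
public class PagedMigration {
    static final int PAGE_SIZE = 100;

    // Stand-in for fetching one page of entities from the old datastore.
    static List<Integer> fetchPage(List<Integer> source, int offset) {
        int end = Math.min(offset + PAGE_SIZE, source.size());
        return source.subList(offset, end);
    }

    // Migrates everything without ever holding the full dataset in memory.
    static int migrateAll(List<Integer> source, List<Integer> sink) {
        int migrated = 0;
        for (int offset = 0; offset < source.size(); offset += PAGE_SIZE) {
            List<Integer> page = fetchPage(source, offset); // only one page at a time
            sink.addAll(page);                              // convert + persist the page
            migrated += page.size();
        }
        return migrated;
    }

    public static void main(String[] args) {
        List<Integer> source = new ArrayList<>();
        for (int i = 0; i < 250; i++) source.add(i);
        List<Integer> sink = new ArrayList<>();
        System.out.println(migrateAll(source, sink)); // 250 entities in 3 pages
    }
}
```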
E2E tests are a type of software testing that evaluates the entire workflow of an application from start to finish, simulating real user interactions. The purpose of E2E testing is to ensure that all components of an application, including the user interface, backend services, databases, and external integrations, work together correctly to achieve the desired functionality. As E2E tests are very expensive to run, it is crucial to identify the important workflows and simulate the actions involved by interacting with the UI; you then assert that the expected conditions hold after the interaction. TEAMMATES uses Selenium to locate and interact with the elements in the UI. I have to admit, this was my first time writing tests for the frontend, much less the whole application. It was cool to see the browser jump around and simulate the required actions. I also saw the value in this, as I managed to uncover many bugs that were not caught in earlier tests.
References:
Mockito facilitates unit testing by mocking dependencies. Mock objects are simulated objects that mimic the behaviour of real objects in a controlled way, allowing developers to isolate and test specific components of their code without relying on actual dependencies or external systems. While I had written stubs in CS2103T, this was my first time using a dedicated mocking library, and it has changed my life. I have also used what I learnt in many job interviews.
- `mock` method to initialise the mock object
- `when/then` for you to inject the controlled outcome
- `verify` mainly to check the number of invocations

References:
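Since Mockito's fluent API is hard to demonstrate without the library on the classpath, here is a hand-rolled test double in plain Java sketching the same mock / when-then / verify flow. All class and method names are made up for illustration:

```java
// Interface that the code under test depends on, instead of a real database.
interface CoursesDb {
    String getCourseName(String id);
}

// Hand-rolled stand-in for what Mockito generates automatically.
class FakeCoursesDb implements CoursesDb {
    String cannedName;  // when(...).thenReturn(...) analogue: the programmed result
    int calls = 0;      // recorded so the test can verify(...) invocation counts

    public String getCourseName(String id) {
        calls++;
        return cannedName;
    }
}

public class MockDemo {
    public static void main(String[] args) {
        FakeCoursesDb db = new FakeCoursesDb();
        db.cannedName = "Software Engineering"; // "when getCourseName, then return ..."

        // Code under test talks to the interface, never the real database.
        String greeting = "Welcome to " + db.getCourseName("CS3281");

        System.out.println(greeting);
        System.out.println(db.calls); // verify: called exactly once
    }
}
```

Mockito does all of this bookkeeping for you, which is what makes `mock`/`when`/`verify` so much less setup than writing stubs by hand.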
TEAMMATES uses Hibernate, an Object-Relational Mapper (ORM). ORMs are widely used in software development today as they provide several benefits to developers. While I have used ORMs before, such as Prisma, this was my first time using Hibernate. ORMs simplify database interactions by allowing developers to work with Java objects directly, abstracting away the complexities of SQL queries. Also, as the name suggests, an ORM lets us map Java objects to database tables and their relationships, allowing for easier and more seamless operations on those tables. I read up on some Hibernate basics:
References:
I was required to deploy a staging environment for the course entity migration. It was my first time using GCP, so I managed to gain familiarity with the vast set of tools that GCP offers. The guides provided by the seniors were very descriptive and encouraged me to explore tweaking settings to better fit my use case.
References:
- `CreateInstructorAction` and `InstructorSearchIndexingWorkerAction`
- `FeedbackResponseCommentDbTest`
- `FeedbackResultsPageE2ETest` and `FeedbackRankOptionE2ETest`
- `Course` and `Section` entity as part of `v9-course-migration`
- `Course`, `Section`, `Team`, `Student` entity
- `getSessionResultAction` and `feedbackResponsesLogic`
In CATcher, I mainly fixed bugs and made small functional enhancements. In WATcher, I contributed to enhancing the issue viewer, the main functionality of WATcher.
PRs opened
Week | PR |
---|---|
<1 | #1233 Fix broken duplicate links |
<1 | #1234 Default branch to main |
4 | #1241 Preserve line breaks in markdown |
6 | #1245 Fix markdown blockquote preview difference |
8 | #1256 Add login redirect |
PRs reviewed
Week | PR |
---|---|
5 | #1243 Faulty list view when back navigating |
PRs opened
Week | PR |
---|---|
2 | #230 Fix label filter not working |
3 | #235 Show list of hidden users |
4 | #254 Refactor Label model |
5 | #255 Add shareable repo-specific URL |
7 | #282 Three-state labels |
9 | #309 Hide redundant column pagination |
9 | #310 Status filter checkboxes |
10 | #320 Add preset views |
10 | #326 Fix for no milestone case |
10 | #327 Create release 1.2.0 |
11 | #338 Fix preset view selection appearance |
11 | #346 Hide column issue count |
12 | #360 Optimise Github API calls |
Issues created
Week | Issue |
---|---|
2 | #229 App filters do not work with some label names |
3 | #236 Bypass logging in if viewing public repos only |
3 | #240 Hiding labels do not work as expected |
4 | #251 Add shareable repo-specific URL |
5 | #256 Refactor test cases for Label model |
7 | #277 Add current filters to URL |
7 | #278 Make URL redirect to work with activity dashboard |
8 | #287 Reading property of undefined when switching repo in activity dashboard |
11 | #343 Hide column issue count if only one page |
11 | #344 Change the repo change form from popup to dropdown |
13 | #371 Horizontal scrollbar on an issue card |
PRs reviewed
Week | PR |
---|---|
6 | #261 Refactor sorting |
7 | #281 Keep filters when switching repos |
9 | #307 Add tooltip for hidden users |
9 | #308 Setup grouping strategy and service |
9 | #311 Keep milestones when switching repo |
9 | #313 Integrate Grouping Service |
10 | #314 Add filters to url |
10 | #315 Split 'Without a milestone' option |
10 | #316 Implement group by milestone |
10 | #318 Add sorting by Status |
10 | #322 Update repo on back and forth navigation |
10 | #323 Refactor MilestoneGroupingStrategy to match changes in #315 |
10 | #325 Enable npm run test in Github Action |
10 | #331 Deploy V1.2.0 |
11 | #337 Add icon for PRs without milestones |
11 | #337 Implement dropdown menu for repo change |
12 | #359 Consider open milestone without deadline as currently active |
13 | #374 Fix reset of filters on label fetch |
13 | #375 fix: word-break issue in issue-pr-card-header.component.css #371 |
Pug, formerly known as Jade, is a templating language for Node.js and browsers. It simplifies HTML markup by using indentation-based syntax and offers features like variables, includes, mixins, and conditionals, making web development more efficient and readable.
I learnt how to create a Pug template and integrate it into a Vue component.
StackOverflow, ChatGPT, existing codebase, Pug Website
Vue.js is a progressive JavaScript framework used for building user interfaces and single-page applications. It offers a flexible and approachable structure for front-end development, with features like data binding, component-based architecture, and a simple yet powerful syntax.
I learnt the rationale behind the Single File Component structure, as well as how to implement it to refactor code. It was very similar to React in that the framework is structured around components, and props are still used for data flow. I also learnt how to access local files from within the framework to dynamically load data.
StackOverflow, ChatGPT, existing codebase, Vue Website
Cypress is an end-to-end testing framework used primarily for testing web applications. It provides a comprehensive set of tools and features to automate testing workflows, including real-time testing, automatic waiting, and built-in support for modern JavaScript frameworks.
I learnt how to write simple tests with the framework, as well as how to use the E2E live view to debug and design tests.
StackOverflow, ChatGPT, existing codebase, Cypress Documentation
Markdown-it is a popular JavaScript library used for parsing Markdown syntax and converting it into HTML. It provides a simple and flexible way to format text with lightweight markup syntax.
I learnt how to integrate markdown-it into a Vue component to allow for dynamic parsing of Markdown code into HTML for display.
StackOverflow, ChatGPT, markdown-it Documentation, this guide
Vite is a build tool that offers fast development and optimized production builds for web projects, particularly for Vue.js applications.
I learnt how to configure a Vite project.
StackOverflow, ChatGPT, Vite Documentation, Vue CLI to Vite migration guide
ESLint is a pluggable linting utility for JavaScript and TypeScript that helps identify and fix common coding errors and enforce consistent code style. Stylelint is a linter for CSS and Sass that helps enforce consistent coding conventions and identifies potential errors in stylesheets.
I learnt how to implement ESLint and Stylelint rules and modify the config files.
StackOverflow, ChatGPT, Documentation
Week | Merged PRs |
---|---|
3 | [#1980] Standardise Array Style for Frontend Files #2084 |
3 | [#2003] Suppress Console Warning #2088 |
3 | [#1224] Update .stylelintrc.json to check for spacing #2094 |
4 | [#2001] Extract c-authorship-file component from views/c-authorship #2096 |
6 | [#2112] Move Segment CSS into segment.vue #2113 |
6 | [#467] Add Title Component #2102 |
7 | [#2128] Fix Blurry Favicon #2129 |
10 | [#2001] Extract c-zoom-commit-message component from views/c-zoom |
10 | [#2142] Fix Vulnerabilities |
11 | [#2151] Update LoadingOverlay and Minor Versions of Node Dependencies |
13 | [#2136] Add Tests for Segment CSS |
13 | [#2151] Update Stylelint |
13 | [#2151] Update CSS-related Major Dependencies |
13 | [#2158] Add More Documentation for Title Component |
Reading | [#2184] Fix Inconsistent Line Number Colours |
Reading | [#2151] Update Typescript-related Major Dependencies |
Reading | [#2001] Extract c-file-type-checkboxes from Summary, Authorship and Zoom |
Week | PRs Reviewed |
---|---|
2 | [#2004] Remove redundant Segment class #2085 |
2 | [#1973] Remove redundant User class #2093 |
2 | [#2082] Fix typo in command in Setting Up page #2083 |
2 | [#2091] Improve memory usage by refactoring Regex compilation #2092 |
3 | [#2016] Remove hash symbol from URL when decoding hash #2086 |
3 | [#2103] Refactor parser package for greater organisation of classes #2104 |
3 | [#2098] Add show more button for error messages #2105 |
3 | [#1933] Fix broken DevOps Guide link in Learning Basics #2107 |
4 | [#1878] Updating SystemTestUtil::assertJson to compare Json objects instead of line-by-line analysis #2087 |
5 | [#2091] Minor Enhancements to Existing Regex Code #2115 |
5 | [#2117] Refactor CliArguments to conform to RepoConfiguration's Builder Pattern #2118 |
5 | [#2091] Minor Enhancements to Existing Regex Code #2115 |
6 | [#2109] Add search by tag functionality #2116 |
6 | [#2123] Fix zoom bug if zUser is undefined #2126 |
9 | [#2134] Fix broken code highlighting in Code Panel |
10 | [#944] Implement authorship analysis |
10 | [#2148] Show tags on the ramp chart |
10 | [#2179] Add bin/ to .gitignore |
11 | [#2109] Add search by tag functionality |
13 | Add optimise timeline feature |
13 | Allow CI to pass if Codecov fails |
13 | Fix lint warnings |
13 | Add bin/ to .gitignore |
Week | Issues Submitted |
---|---|
2 | Update Style Checker for Pug Templates and Files #2097 |
5 | Support author-config.csv advanced syntax on CLI #2110 |
5 | Move CSS for Segment component into c-segment.vue #2112 |
7 | Fix Vulnerabilities in Code Base #2142 |
8 | Frontend DevOps: Update Node.js dependencies |
9 | Add More Documentation for Title Component |
9 | One-Stop Config File for Code Portfolio |
10 | Add Blurbs for Repos |
10 | Enforce No Spacing Between Methods in Vue Files |
10 | Replace Vue CLI with Vite |
Week | Merged PRs |
---|---|
3 | [#1980] Standardise Array Style for Frontend Files #2084 |
3 | [#2003] Suppress Console Warning #2088 |
3 | [#1224] Update .stylelintrc.json to check for spacing #2094 |
4 | [#2001] Extract c-authorship-file component from views/c-authorship #2096 |
6 | [#2112] Move Segment CSS into segment.vue #2113 |
6 | [#467] Add Title Component #2102 |
7 | [#2128] Fix Blurry Favicon #2129 |
10 | [#2001] Extract c-zoom-commit-message component from views/c-zoom |
10 | [#2142] Fix Vulnerabilities |
11 | [#2151] Update LoadingOverlay and Minor Versions of Node Dependencies |
13 | [#2136] Add Tests for Segment CSS |
13 | [#2151] Update Stylelint |
13 | [#2151] Update CSS-related Major Dependencies |
13 | [#2158] Add More Documentation for Title Component |
Reading | [#2184] Fix Inconsistent Line Number Colours |
Reading | [#2151] Update Typescript-related Major Dependencies |
Reading | [#2001] Extract c-file-type-checkboxes from Summary, Authorship and Zoom |
Week | PRs Reviewed |
---|---|
2 | [#2004] Remove redundant Segment class #2085 |
2 | [#1973] Remove redundant User class #2093 |
2 | [#2082] Fix typo in command in Setting Up page #2083 |
2 | [#2091] Improve memory usage by refactoring Regex compilation #2092 |
3 | [#2016] Remove hash symbol from URL when decoding hash #2086 |
3 | [#2103] Refactor parser package for greater organisation of classes #2104 |
3 | [#2098] Add show more button for error messages #2105 |
3 | [#1933] Fix broken DevOps Guide link in Learning Basics #2107 |
4 | [#1878] Updating SystemTestUtil::assertJson to compare Json objects instead of line-by-line analysis #2087 |
5 | [#2091] Minor Enhancements to Existing Regex Code #2115 |
5 | [#2117] Refactor CliArguments to conform to RepoConfiguration's Builder Pattern #2118 |
6 | [#2109] Add search by tag functionality #2116 |
6 | [#2123] Fix zoom bug if zUser is undefined #2126 |
9 | [#2134] Fix broken code highlighting in Code Panel |
10 | [#944] Implement authorship analysis |
10 | [#2148] Show tags on the ramp chart |
10 | [#2179] Add bin/ to .gitignore |
11 | [#2109] Add search by tag functionality |
13 | Add optimise timeline feature |
13 | Allow CI to pass if Codecov fails |
13 | Fix lint warnings |
13 | Add bin/ to .gitignore |
Week | Issues Submitted |
---|---|
2 | Update Style Checker for Pug Templates and Files #2097 |
5 | Support author-config.csv advanced syntax on CLI #2110 |
5 | Move CSS for Segment component into c-segment.vue #2112 |
7 | Fix Vulnerabilities in Code Base #2142 |
8 | Frontend DevOps: Update Node.js dependencies |
9 | Add More Documentation for Title Component |
9 | One-Stop Config File for Code Portfolio |
10 | Add Blurbs for Repos |
10 | Enforce No Spacing Between Methods in Vue Files |
10 | Replace Vue CLI with Vite |
Having had experience mainly in React and NodeJS projects, I was more used to building with Functional Components rather than Class Components as in Angular. However, I realised that one of the key aspects of frontend frameworks, namely reactivity, was in fact the main driver behind the development of such frameworks in the first place!
In fact, even React originally championed the idea of Class Components in order to isolate web components into areas of responsibility, following rule number one of Software Engineering: Single Responsibility. However, while React is largely unopinionated about how you structure your code with regard to coupling business logic and HTML, Angular differs by dictating where and how you structure your components.
Angular separates components into modules, each comprising three to four files, with component metadata declared via the @Component decorator. On the other hand, React only dictates that class components should produce some sort of HTML using the render function. Even this requirement is relaxed with the introduction of Functional Components, which are simply functions that render and produce some HTML. React also introduces hooks, which developers often use to manage state at the component level using functions with side effects.
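To make the contrast concrete, here is a toy TypeScript sketch (not real React or Angular code; the names are made up for illustration) of the same component expressed class-style and function-style:

```typescript
// Toy illustration: the same "component" as a class with a render()
// method versus a plain function from props to markup.

// Class-style component: state lives on the instance.
class CounterComponent {
  constructor(private count: number) {}
  render(): string {
    return `<button>Clicked ${this.count} times</button>`;
  }
}

// Function-style component: just a function returning markup.
function Counter(props: { count: number }): string {
  return `<button>Clicked ${props.count} times</button>`;
}

const viaClass = new CounterComponent(3).render();
const viaFunction = Counter({ count: 3 });
// Both produce identical markup; the frameworks differ in where state
// and lifecycle concerns live around this core idea.
```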
Each approach has its positives and negatives. Because of its opinionated nature, Angular makes it easy to standardise frontend coding standards and patterns across an entire enterprise, making it an apt choice as a tool for OSS development. React, on the other hand, allows you to develop code more quickly, though more attention must be paid to rendering lifecycles in order to let the Virtual DOM know when a particular component needs to be rendered again. On top of this, Angular wholly separates business logic from rendered HTML, whereas React does not make this distinction.
Another key difference is how React and Angular provide context (sharing or passing state between different branches of the DOM tree). React has its own Context API for sharing state between components, whereas Angular does this via the providers declaration in the module file, which results in a set of singletons shared by all components below it in the tree.
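As an illustration, the provider-as-singleton behaviour can be sketched with a toy injector in plain TypeScript (the Injector and UserService names here are invented for the example, not Angular's actual implementation):

```typescript
// Toy sketch of Angular-style providers: each injector owns one singleton
// per token; child injectors fall back to their parent, so components
// below a provider in the tree share the same instance.
type Token = string;

class Injector {
  private instances = new Map<Token, unknown>();
  constructor(
    private factories: Map<Token, () => unknown>,
    private parent?: Injector,
  ) {}

  get<T>(token: Token): T {
    if (this.instances.has(token)) return this.instances.get(token) as T;
    const factory = this.factories.get(token);
    if (factory) {
      const instance = factory();          // created once, then cached
      this.instances.set(token, instance);
      return instance as T;
    }
    if (this.parent) return this.parent.get<T>(token); // walk up the tree
    throw new Error(`No provider for ${token}`);
  }
}

class UserService { users: string[] = []; }

const root = new Injector(new Map([["UserService", () => new UserService()]]));
const childA = new Injector(new Map(), root);
const childB = new Injector(new Map(), root);

// Both children resolve to the same singleton provided above them:
childA.get<UserService>("UserService").users.push("alice");
```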
I also picked up RxJS along the way, which is Angular's answer to creating reactive components. RxJS essentially provides asynchronous pipe/filter and publisher/subscriber behaviour, allowing values to change over time and other components or functions to subscribe to those changes. This ties in with Angular's Change Detection strategy, which I explain below.
In comparison, React introduced and adopted hooks to encapsulate re-rendering behaviour. React does this by operating on a Virtual DOM, appropriately re-rendering components and their children in patches when a change is detected. Angular, on the other hand, has no such abstraction for re-rendering components whose state has changed. Instead, Angular uses a Change Detection strategy that can be configured by the user (either OnPush or Default). Angular's Change Detection works by using Zone.js and runs after every async action performed. CD traversal starts at the root component (usually App) and works its way down the component tree, updating the DOM as needed. Under the hood, browser events are registered with Zone.js, Angular's mechanism for orchestrating async events, which emits changes after the initial template bindings are created.
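The top-down traversal described above can be sketched as a toy model (a deliberate simplification; real Angular change detection involves views, bindings and Zone.js):

```typescript
// Toy model of top-down change detection: traversal starts at the root
// and visits children; an OnPush component is skipped unless it has been
// marked dirty (e.g. an @Input changed), while Default is always checked.
interface CdNode {
  name: string;
  strategy: "Default" | "OnPush";
  dirty: boolean;
  children: CdNode[];
}

function detectChanges(node: CdNode, checked: string[]): void {
  if (node.strategy === "OnPush" && !node.dirty) return; // prune subtree
  checked.push(node.name);   // stand-in for "update the DOM" here
  node.dirty = false;
  for (const child of node.children) detectChanges(child, checked);
}

const tree: CdNode = {
  name: "App", strategy: "Default", dirty: false,
  children: [
    { name: "Sidebar", strategy: "OnPush", dirty: false, children: [] },
    { name: "Feed", strategy: "OnPush", dirty: true,
      children: [{ name: "Post", strategy: "Default", dirty: false, children: [] }] },
  ],
};

const checked: string[] = [];
detectChanges(tree, checked);
// checked is ["App", "Feed", "Post"]: Sidebar's subtree was pruned by OnPush.
```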
Zitadel is an open-source user management tool that aims to provide easy identity infrastructure, with a range of out-of-the-box features. It provides easy integration with OAuth providers such as GitHub, Facebook and O365, and serves as an easy way for enterprises to set up multi-tenant identity providers with clear separation of identities. Zitadel is written in Go and consists of an interesting mix of server-side rendered authentication (using Go and HTML templates), a client-side application written in Angular, and a modularised core library that uses an event-driven architecture to ensure that all events are not only captured but also traceable.
The team favours transaction safety with high availability, and has implemented its own message queue system: events are placed into an in-memory queue for subscribers, under the pub-sub model.
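A minimal sketch of such an in-memory pub-sub queue (illustrative only, not Zitadel's actual Go code) might look like:

```typescript
// Minimal in-memory pub-sub: events are appended to a log (so every
// event is captured and traceable) and fanned out to every subscriber
// of the matching topic.
type Handler = (payload: unknown) => void;

class EventQueue {
  private subscribers = new Map<string, Handler[]>();
  private log: { topic: string; payload: unknown }[] = [];

  subscribe(topic: string, handler: Handler): void {
    const handlers = this.subscribers.get(topic) ?? [];
    handlers.push(handler);
    this.subscribers.set(topic, handlers);
  }

  publish(topic: string, payload: unknown): void {
    this.log.push({ topic, payload });            // every event is captured...
    for (const handler of this.subscribers.get(topic) ?? []) {
      handler(payload);                           // ...then fanned out
    }
  }

  history(): readonly { topic: string; payload: unknown }[] {
    return this.log;                              // traceability: full event log
  }
}

const queue = new EventQueue();
const seen: unknown[] = [];
queue.subscribe("user.created", (p) => seen.push(p));
queue.publish("user.created", { id: 1 });
queue.publish("user.deleted", { id: 1 }); // no subscriber, but still logged
```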
Zitadel has 7.1k stars and is used by many organisations as an alternative to other identity infrastructure platforms, due to its heavy customisability in terms of branding and deployment options.
Working on both the Angular console application (a management interface) and the server-side pages for authentication (the login, register and password-reset pages) was important. In particular, Zitadel uses HTML templates heavily, along with a flexible component system that enables easy internationalisation, which matters for a tool like Zitadel that everyone can use.
Learning about gRPC through interactions with the backend was also enlightening, as I was more familiar with GraphQL and traditional HTTP endpoints from my experience with CATcher/WATcher, personal projects and internships.
gRPC uses Protocol Buffers (protobufs) by default, a lightweight, highly efficient serialisation format, which serves its purpose well when building a distributed application like Zitadel. gRPC also allows for server-side and client-side streaming, both of which Zitadel uses (particularly for event logging).
Templ is an HTML templating language for Go with great developer tooling, including a Language Server Protocol (LSP) implementation for Vim users and an extension for VSCode users. With Templ, we can create components that render fragments of HTML and then compose them to create screens, pages, documents or apps.
Templ borrows heavily from the component model in React and Angular, and as such models its own components as markup and code that is compiled into functions returning a templ.Component interface.
This allows Templ to be used in tandem with htmx to selectively replace content within a webpage instead of replacing the whole page in the browser.
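Translated into TypeScript terms for illustration (Templ itself compiles to Go), the component-as-function model might be sketched like this, with invented header/page names:

```typescript
// Hedged sketch of Templ's model: a component is a function that returns
// something render-able, and components compose by calling each other.
interface Component {
  render(): string;
}

function header(title: string): Component {
  return { render: () => `<h1>${title}</h1>` };
}

function page(title: string, body: Component): Component {
  // Composition: a page is built out of smaller components.
  return {
    render: () => `<main>${header(title).render()}${body.render()}</main>`,
  };
}

const doc = page("Hello", { render: () => "<p>World</p>" });
// doc.render() yields "<main><h1>Hello</h1><p>World</p></main>"
```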
I learnt how to build an SSR application using Go, HTMX and Templ by building an example application to provide documentation for i18n support. I also used Server-Sent Events, which let the minified HTMX runtime add elements based on the components received on the stream endpoint. Along the way, I came to understand how i18n is generally implemented in products that need to support a variety of languages, and how to build generalised components that decouple the components themselves from the textual UI.
FerretDB was founded to become the de-facto open-source substitute for MongoDB. FerretDB is an open-source proxy that converts MongoDB 5.0+ wire protocol queries to SQL, using PostgreSQL or SQLite as the database engine.
MongoDB was originally an eye-opening technology that allowed developers to build applications faster than with relational databases. However, with MongoDB abandoning its open-source roots, there was a need for an easy-to-use open-source document database, a gap FerretDB aims to fill.
While I did not learn much about the database design itself, I learnt about Conventional Commits: a standardised format that dictates how developers should write their commit messages. Conventional Commits give projects with a large developer base visibility and transparency over who did what, and when. Furthermore, the standardisation allows for easy automatic changelog generation, which is important when products are shipped to actual users, and makes it easier for people to contribute to projects.
FerretDB suffered from not implementing Conventional Commits: without them, it depended on the platform the repository was hosted on (GitHub) to generate meaningful changelogs. This tied the project to GitHub unnecessarily, instead of keeping it independent of the hosting platform (GitLab and Bitbucket are suitable alternatives).
As such, changelog generation was originally done via a GitHub workflow directly, which overly complicated the release process and necessitated another way to generate the changelog locally.
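To illustrate why the format enables automatic changelogs, here is a small sketch that parses Conventional Commit messages and groups them by type (a simplification of what real changelog tools do; the regex covers only the common "type(scope): subject" shape):

```typescript
// Parse "type(scope): subject" and group subjects by type, as a
// changelog generator would before rendering sections.
const COMMIT_RE = /^(\w+)(?:\(([^)]+)\))?(!)?: (.+)$/;

function groupByType(messages: string[]): Map<string, string[]> {
  const groups = new Map<string, string[]>();
  for (const msg of messages) {
    const match = COMMIT_RE.exec(msg);
    if (!match) continue;                 // skip non-conforming commits
    const [, type, , , subject] = match;  // indices 1 and 4 of the match
    const entries = groups.get(type) ?? [];
    entries.push(subject);
    groups.set(type, entries);
  }
  return groups;
}

const groups = groupByType([
  "feat(auth): add OAuth login",
  "fix: correct changelog ordering",
  "docs: update README",
  "not a conventional commit",           // silently dropped
]);
// groups now maps "feat" -> ["add OAuth login"], etc.
```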
Date | Role | Description | Key Achievements |
---|---|---|---|
4 July 2023 | PR Reviewer | Specify node version in package.json | |
12 Oct 2023 | PR Reviewer | Fetch settings json directly without api | |
14 Oct 2023 | Version Release | v3.5.1 | |
4 Mar 2024 | Version Release | v3.5.3 | |
4 Mar 2024 | PR Reviewer | Upgrade to Angular 13 |
Date | Role | Description | Key Achievements |
---|---|---|---|
19 Sep 2023 | PR Reviewer | Improve efficiency of saving and deleting issue models | |
28 Oct 2023 | PR Reviewer | Add Issues Dashboard access by URL | |
20 Feb 2024 | Issue Contributor | Refactor filters into own service | |
20 Feb 2024 | PR Reviewer | Refactor certain filters into its own service #259 | |
22 Feb 2024 | PR Reviewer | Refactor sorting | |
28 Feb 2024 | PR Reviewer | Refactor milestone filters | |
4 March 2024 | PR Reviewer | Refactor title filter | |
14 March 2024 | Version Release | Create version for v1.1.1 | |
18 March 2024 | PR Reviewer | Release changelog automation |
Date | Role | Description | Key Achievements |
---|---|---|---|
15 Mar 2024 | PR Author | fix: Incorrect button positioning on email verification | Fixed HTML template issue |
3 Apr 2024 | PR Author | fix: Update url to redirect to name change url |
Date | Role | Description | Key Achievements |
---|---|---|---|
4 Apr 2024 | PR Author | docs: Create documentation and examples for Internationalization | Created SSR page with simple Go server using Templ, Echo, HTMX |
Date | Role | Description | Key Achievements |
---|---|---|---|
3 Apr 2024 | PR Author | Add local Changelog generation based on PR titles and labels | Created customised Changelog that queries GitHub API, loads a Changelog template based on GitHub's own Changelog template, groups PRs by labels and renders to stdout |
Date | Role | Description | Key Achievements |
---|---|---|---|
13 Mar 2024 | Issue Contributor | Enhancement: Ensure image runs without dropping into a shell to run the demucs command |
Date | Role | Description | Key Achievements |
---|---|---|---|
13 Mar 2024 | Issue Contributor | [Enhancement] Migrate to aws-sdk-go-v2 |
Date | Role | Description | Key Achievements |
---|---|---|---|
4 July 2023 | PR Reviewer | Specify node version in package.json | |
12 Oct 2023 | PR Reviewer | Fetch settings json directly without api | |
14 Oct 2023 | Version Release | v3.5.1 | |
4 Mar 2024 | Version Release | v3.5.3 | |
4 Mar 2024 | PR Reviewer | Upgrade to Angular 13 |
Date | Role | Description | Key Achievements |
---|---|---|---|
19 Sep 2023 | PR Reviewer | Improve efficiency of saving and deleting issue models | |
28 Oct 2023 | PR Reviewer | Add Issues Dashboard access by URL | |
20 Feb 2024 | Issue Contributor | Refactor filters into own service | |
20 Feb 2024 | PR Reviewer | Refactor certain filters into its own service #259 | |
22 Feb 2024 | PR Reviewer | Refactor sorting | |
28 Feb 2024 | PR Reviewer | Refactor milestone filters | |
4 March 2024 | PR Reviewer | Refactor title filter | |
14 March 2024 | Version Release | Create version for v1.1.1 | |
18 March 2024 | PR Reviewer | Release changelog automation |
Date | Role | Description | Key Achievements |
---|---|---|---|
15 Mar 2024 | PR Author | fix: Incorrect button positioning on email verification | Fixed template HTML template issue |
3 Apr 2024 | PR Author | fix: Update url to redirect to name change url |
Date | Role | Description | Key Achievements |
---|---|---|---|
4 Apr 2024 | PR Author | docs: Create documentation and examples for Internationalization | Created SSR page with simple Go server using Templ, Echo, HTMX |
Date | Role | Description | Key Achievements |
---|---|---|---|
3 Apr 2024 | PR Author | Add local Changelog generation based on PR titles and labels | Created customised Changelog that queries GitHub API, loads a Changelog template based on GitHub's own Changelog template, groups PRs by labels and renders to stdout |
Date | Role | Description | Key Achievements |
---|---|---|---|
13 Mar 2024 | Issue Contributor | Enhancement: Ensure image runs without dropping into a shell to run the demucs command |
Date | Role | Description | Key Achievements |
---|---|---|---|
13 Mar 2024 | Issue Contributor | [Enhancement] Migrate to aws-sdk-go-v2 |
Recharts is a React library that provides an easy way to write & render charts in React applications.
My first contribution was updating the Storybook page of the project. Storybook is a frontend workshop that allows users to render UI components and/or pages in isolation, and is often used for interactive documentation of each component in UI libraries. Recharts, in addition to its standard markdown-based component documentation, maintains a Storybook page that documents each component interactively and provides examples of how to achieve common use cases with the components it provides.
In docs: add storybook example for line trailing icon in LineChart, I added an example of how to add a custom trailing icon to a line within a line chart, which was a common use case that required a workaround.
The observed workflow/process of this external project has a couple of extremely important differences from our internal project (RepoSense in particular), which I feel we can learn from to improve developer experience, reduce the likelihood of regressions, and speed up turnaround time.
The project has set up automatic hooks (using Husky) that run before every commit and push. These hooks run the linter and automated tests, and prevent pushing if either fails. This guarantees that by the time a pull request is opened, there are no lint or test errors. It is a very common occurrence in RepoSense that a contributor opens a pull request and only then learns that their code has a bunch of lint errors (this is even more common in frontend PRs). The most likely reason is that the linter script in the frontend folder (npm run lint) is never run during self-testing, so newer contributors are almost never aware of the lint checks until the first time they open a PR and the CI runs. We could save a lot of headache by implementing automatic git hooks in RepoSense, at least for linting the frontend codebase. This would also probably speed up turnaround/development time, since a lot of time is usually wasted when reviewers have to ask contributors to fix their lint errors.
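As a sketch, hooks of the kind described above might look like the following (the folder layout and script names here are assumptions; the actual commands would depend on the project's package.json):

```shell
# .husky/pre-commit — hypothetical hook: lint before every commit
cd frontend && npm run lint

# .husky/pre-push — hypothetical hook: run tests before every push
cd frontend && npm run test
```

Husky wires these files into git's hook mechanism, so the commit or push is aborted if either script exits non-zero.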
The project also utilizes snapshot testing in their automated tests. They use Vitest to run snapshot tests, which is a library that we're considering in RepoSense as well. Snapshot testing involves automatically creating snapshot files that store the output at the time the tests were run, and comparing future outputs against these reference values. The snapshot files are usually checked into version control, and can be examined alongside code changes. Here's an example of a snapshot file and snapshot check in the external project's codebase. This is definitely something that we can consider implementing within RepoSense, with the intention of preventing regressions.
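As a sketch of the mechanism (not Vitest's actual API), a snapshot check boils down to record-then-compare:

```typescript
// Toy snapshot check: the first run records the output; later runs
// compare against the stored reference and fail on any drift.
const snapshots = new Map<string, string>(); // stands in for checked-in files

function matchesSnapshot(name: string, value: unknown): boolean {
  const serialized = JSON.stringify(value, null, 2);
  const stored = snapshots.get(name);
  if (stored === undefined) {
    snapshots.set(name, serialized);  // first run: write the snapshot
    return true;
  }
  return stored === serialized;       // later runs: compare
}

// Hypothetical render function standing in for a chart component.
const render = (width: number) => ({ tag: "svg", width });

const first = matchesSnapshot("chart", render(100));     // recorded
const second = matchesSnapshot("chart", render(100));    // matches
const regressed = matchesSnapshot("chart", render(90));  // drift caught
```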
The project makes a great effort to properly tag and maintain a list of good first issues for newer contributors, which usually consist of smaller and easier issues that don't require deep knowledge of large portions of the codebase to tackle. This was crucial in enabling my experience of contributing, and we should look into paying more attention to this in RepoSense, especially for each new batch.
AncientBeast is a turn-based strategy game that has been around for 13 years, with a small but active developer base and player community. It is played directly on the browser and supports various game modes including online multiplayer. The current version being worked on is v0.5.
I have mainly worked on the improvement of visuals, adding some information on the hexagon grid upon some user action.
My first issue was to show a 'Skip turn' icon when the user hovers over an active unit. To solve this issue (PR here), I added some assets to the assets loaded as well as an additional hint type. Then, I added the 'Skip turn' icon if the new hint type was called.
My second issue was to show the selected ability above the hovered hexagon when targeting the ability. This issue presented a different challenge, since the hint types above are only used for units. To solve this issue (PR here and still ongoing), I had to add the unit abilities to the assets, and then add an 'ability' class to the hovered hex's overlay visual state, removing the class as necessary. Then, I checked if the 'ability' class was present, got the appropriate ability asset, and set it to be shown above the hexagon.
I plan to take on this third issue after the PR above is merged, as it makes use of the changes made in the above two PRs.
AncientBeast makes use of Phaser as its game engine (which is also open source). Having had little experience with game development, I found using Phaser quite challenging and had to rely a lot on community help. I mainly learnt how to manipulate GameObjects, such as Sprites, in Phaser, as well as how they are animated and rendered.
Another thing I learnt about was testing a game's UI using Jest. The approach AncientBeast took relied mostly on getting game objects at certain x and y positions, and on checking what existed at certain coordinates to ensure that actions were correctly handled. One difficulty was keeping tests from depending too much on implementation details (e.g. of Creatures in the game), so the tests had to be a bit more general.
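A toy version of this coordinate-based testing style might look like the following (the Board class and its methods are hypothetical, not AncientBeast's actual code):

```typescript
// Coordinate-based UI assertions: the test inspects what sits at (x, y)
// after an action, rather than poking at implementation internals.
type Cell = { x: number; y: number; occupant?: string };

class Board {
  private cells = new Map<string, Cell>();

  place(x: number, y: number, occupant: string): void {
    this.cells.set(`${x},${y}`, { x, y, occupant });
  }

  at(x: number, y: number): Cell | undefined {
    return this.cells.get(`${x},${y}`);
  }

  move(fromX: number, fromY: number, toX: number, toY: number): void {
    const cell = this.cells.get(`${fromX},${fromY}`);
    if (!cell?.occupant) return;          // nothing to move
    this.place(toX, toY, cell.occupant);
    this.cells.delete(`${fromX},${fromY}`);
  }
}

// Test-style usage: assert on coordinates, not on Creature internals.
const board = new Board();
board.place(2, 3, "creature");
board.move(2, 3, 4, 3);
// board.at(4, 3) now holds the creature, and (2, 3) is empty.
```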
More broadly, I learnt about game development in general. There is a lot more attention paid to anything that the user might try to do, including just hovering over something. I think this level of detail to user actions (and how it can be handled neatly in the codebase) is quite unique to game development.
Unfortunately, I think AncientBeast's code is quite messy. The files are long, variables are sometimes inappropriately named (e.g. some are just named o), and a lot of functionality is packed together in one place, reducing readability. There are deprecated calls as well, which I think should be fixed together with the deprecation. Furthermore, there is a lack of developer documentation, which poses a further challenge to new contributors. For example, I would have greatly benefitted from some flows of standard actions, or a description of how classes generally interact with each other. Better documentation might also lead to better structuring of code, which AncientBeast needs. Another point of improvement would be type safety, since I found that quite a few variables are simply typed as any.
That being said, I think AncientBeast does well in contributor management. Once someone expresses interest in an issue, they are assigned it and given a soft deadline, usually of two weeks. If they cannot complete it and/or are unresponsive, the issue is usually reassigned to someone else, or left without an assignee to indicate that it is available. I think we could benefit from a similar approach; otherwise our issues get inundated with "is this issue still available?" comments while we wait for people to say whether they are still working on them. The soft deadline also helps to push progress along.
Another thing we could adopt is having a standardised code format. I've noticed many discrepancies in coding style throughout our codebase, and I think having a standardised coding style (with non-controversial rules) would make our code neater and in some cases more readable.
AncientBeast is a turn-based strategy game that has been around for 13 years, with a small but active developer base and player community. It is played directly on the browser and supports various game modes including online multiplayer. The current version being worked on is v0.5.
I have mainly worked on the improvement of visuals, adding some information on the hexagon grid upon some user action.
My first issue was to show a 'Skip turn' icon when the user hovers over an active unit. To solve this issue (PR here), I added some assets to the assets loaded as well as an additional hint type. Then, I added the 'Skip turn' icon if the new hint type was called.
My second issue was to show the selected ability above the hovered hexagon when targeting the ability. This issue presented a different challenge, since the hint types above are only used for units. To solve this issue (PR here and still ongoing), I had to add the unit abilities to the assets, and then add an 'ability' class to the hovered hex's overlay visual state, removing the class as necessary. Then, I checked if the 'ability' class was present, got the appropriate ability asset, and set it to be shown above the hexagon.
I plan to take on this third issue after the PR above is merged, as it is makes use of the changes made in the above 2 PRs.
AncientBeast makes use of Phaser as its game engine (which is also open source). Having had not much experience with game development, I found using Phaser quite challenging, and had to rely a lot on community help. I mainly learnt about how to manipulate GameObjects, such as Sprites, in Phaser as well as how they are animated and rendered.
Another thing I learnt about was testing a game's UI using Jest. The approach AncientBeast took relied mostly on getting game objects at certain x
and y
positions, as well as checking what existed at certain coordinates to ensure that actions were correctly handled. Some of the difficulties faced were in trying not to have tests depend too much on implementation (e.g. of Creatures in the game), and so tests had to be a bit more general.
More broadly, I learnt about game development in general. There is a lot more attention paid to anything that the user might try to do, including just hovering over something. I think this level of detail to user actions (and how it can be handled neatly in the codebase) is quite unique to game development.
Unfortunately, I think AncientBeast's code is quite messy. The files are long, variables are sometimes inappropriately named (e.g. some are just named `o`), and a lot of functionality is packed together in one place, reducing readability. There are deprecated calls as well, which I think should be fixed together with the deprecation. Furthermore, there is a lack of developer documentation, which further poses a challenge to new contributors. For example, I think I would have greatly benefitted from some flows of standard actions, or generally how classes interact with each other. Better documentation might also lead to better structuring of code, which AncientBeast needs. Another point of improvement would be type safety, since I found that quite a few variables are just typed as `any`.
That being said, I think AncientBeast does well in contributor management. Once someone expresses interest in an issue, they will be assigned it and given a soft deadline, usually of 2 weeks. If they cannot complete it and/or are unresponsive, the issue will usually be assigned to someone else, or left without an assignee, indicating that it is available. I think we could benefit from a similar approach; otherwise our issues get inundated with "is this issue still available?" comments while we wait for people to say whether they are still working on it. The soft deadline also helps to push progress along.
Another thing we could adopt is having a standardised code format. I've noticed many discrepancies in coding style throughout our codebase, and I think having a standardised coding style (with non-controversial rules) would make our code neater and in some cases more readable.
As part of the V9 migration, I had to rewrite the logic to query the SQL database using the Hibernate ORM API instead of querying Datastore.
TEAMMATES' back-end code follows the Object-Oriented (OO) paradigm. The code works with objects. This allows easy mapping of objects in the problem domain (e.g. app user) to objects in code (e.g. `User`).
For the data to persist beyond a single session, it must be stored in a database. V9 uses PostgreSQL, a relational database management system (RDBMS), to store data.
It is difficult to translate data between a relational model and an object model, resulting in the object-relational impedance mismatch.
An Object/Relational Mapping (ORM) framework helps bridge the object-relational impedance mismatch, allowing us to work with data from an RDBMS in a familiar OO fashion.
Jakarta Persistence, formerly known as the Java Persistence API (JPA), is a specification for persistence and ORM; Hibernate implements this specification.
The Criteria API allows us to make database queries using objects in code rather than using query strings. The queries can be built based on certain criteria (e.g. matching field).
Using `Join<X, Y>`, we can navigate to related entities in a query, allowing us to access fields of a related class. For example, when querying a `User`, we can access its associated `Account`.
Hibernate maintains a persistence context, which serves as a cache of objects. This context allows for in-code objects to be synced with the data in the database.
Using `persist()`, `merge()`, and `remove()`, we can create, update, and remove an object's data from the database. These methods schedule SQL statements according to the current state of the Java object.
`clear()` clears the cached state and stops syncing existing Java objects with their corresponding database data. `flush()` synchronises the cached state with the database state. When writing integration tests, I found it helpful to `clear()` and `flush()` the persistence contexts before every test, to ensure that operations done in one test do not affect the others in unexpected ways.
To isolate units in unit testing, it is useful to create mocks or stubs of other components that are used by the unit.
We can create a mock of a class using `mock()`. We can then use this mocked object as we would a normal object (e.g. calling methods). Afterwards, we can verify several things, such as whether a particular method was called with particular parameters, and how many times a particular method call was performed.
If a method needs to return a value when called, the return value can be stubbed before the method of the mocked object is called. The same method can be stubbed with different outputs for different parameters. Exceptions can also be stubbed in a similar way.
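Mockito's `mock()`, stubbing, and verification provide all of this out of the box; as a self-contained illustration of the underlying idea, a hand-rolled test double (with invented names, not TEAMMATES' actual classes) might look like:

```java
// A unit under test depends only on this interface.
interface EmailSender {
    boolean send(String recipient, String body);
}

// Hand-rolled test double: returns a stubbed value and records how it
// was called, so the test can verify interactions afterwards.
class RecordingEmailSenderStub implements EmailSender {
    int sendCount = 0;
    String lastRecipient;
    private final boolean stubbedResult;

    RecordingEmailSenderStub(boolean stubbedResult) {
        this.stubbedResult = stubbedResult;
    }

    @Override
    public boolean send(String recipient, String body) {
        sendCount++;                // record the interaction for later verification
        lastRecipient = recipient;
        return stubbedResult;       // stubbed output; no real email is sent
    }
}

// The unit being tested, isolated from any real email infrastructure.
class ReminderService {
    private final EmailSender sender;

    ReminderService(EmailSender sender) {
        this.sender = sender;
    }

    boolean remind(String recipient) {
        return sender.send(recipient, "Reminder: feedback session closing soon");
    }
}
```

A test can then construct `ReminderService` with the stub, call `remind()`, and assert on `sendCount` and `lastRecipient`, exactly the kind of verification Mockito automates.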
As part of the instructor account request form (ARF) project, I had to create an Angular form.
Angular has 2 form types: template-driven, and reactive.
Template-driven forms have implicit data models which are determined by the form view itself. Template-driven forms are simpler to add, but are more complicated to test and scale.
Reactive forms require an explicitly-defined data model that is then bound to the view. The explicit definition of the model in reactive forms makes it easier to scale and test, particularly for more complex forms.
Standard HTML attributes may still need to be set on Angular form inputs to ensure accessibility. For instance, Angular's `Required` validator does not set the `required` attribute on the element, which is used by screen readers, so we need to set it also. Another example would be setting the `aria-invalid` attribute when validation fails.
To make inline validation messages accessible, use `aria-describedby` to make it clear which input the error is associated with.
Angular has some built-in validator functions that can be used to validate form inputs, and allows for custom validators to be created. Validators can be synchronous or asynchronous.
By default, all validators run when the input values change. When there are many validators, the form may lag if validation is done this frequently. To improve performance, the form or input's `updateOn` option can be set to `submit` or `blur` to only run the validators on submit or blur.
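A custom validator is just a function. The sketch below uses simplified stand-ins for Angular's `AbstractControl` and `ValidationErrors` types (the real ones come from `@angular/forms`) so it stays self-contained; the validator name and error key are invented for the example:

```typescript
// Simplified stand-ins for Angular's form types, to keep this sketch self-contained.
type ValidationErrors = { [key: string]: unknown };
type AbstractControl = { value: string };
type ValidatorFn = (control: AbstractControl) => ValidationErrors | null;

// Factory that returns a validator, mirroring how parameterised
// validators like Validators.pattern are constructed.
function forbiddenNameValidator(forbidden: RegExp): ValidatorFn {
  return (control) =>
    forbidden.test(control.value)
      ? { forbiddenName: { value: control.value } } // invalid: return an error object
      : null;                                       // valid: return null
}
```

In a real reactive form, such a function would be passed into the control's validator list, e.g. `new FormControl('', [forbiddenNameValidator(/admin/i)])`.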
`git rebase` can be used to keep branch commit history readable and remove clutter from frequent merge commits.
In particular, the `--onto` option allows the root to be changed, which is useful when rebasing onto a branch that has itself been modified or rebased.
Each Git commit has a committer date and an author date. When rebasing, the committer date is altered. To prevent this, use `--committer-date-is-author-date`.
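The `--onto` behaviour can be seen in a throwaway repository (branch and file names below are invented for the demo): only `topic`'s own commits, i.e. those after `old-base`, are replayed onto `main`.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email demo@example.com
git config user.name demo

echo base > base.txt && git add base.txt && git commit -qm "base"
git branch old-base                    # the branch topic was originally built on
git checkout -qb topic
echo feature > feature.txt && git add feature.txt && git commit -qm "feature work"

git checkout -q main
echo mainline > mainline.txt && git add mainline.txt && git commit -qm "mainline work"

# Replay only topic's own commits (everything after old-base) onto main:
git rebase -q --onto main old-base topic

# topic now sits directly on top of main, exactly one commit ahead.
git rev-list --count main..topic
```

Adding `--committer-date-is-author-date` to the rebase would additionally keep each replayed commit's committer date equal to its author date.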
| Week | Achievements |
|---|---|
| 3 | Merged PR: Migrate join course action #12722 |
| 4 | Merged PR: Migrate search students action #12735 |
| 5 | Submitted PR: Add test cases for Feedback response Db |
| 6 | Merged PR: Add locale for java datetime formatter |
| 6 | Merged PR: Migrate Notification Banner E2E |
| 7 | Multiple section/team structure tech design (User flow and Requirements) |
| 8 | Multiple section/team structure tech design (Created UI wireframes) |
Refine is a React framework for building internal tools, admin panels, dashboards, and B2B apps with unmatched flexibility. It uses TypeScript. It simplifies the development process and eliminates repetitive tasks by providing industry-standard solutions for crucial aspects of a project, including authentication, access control, routing, networking, state management, and i18n.
PR 1: docs(core): add DataProvider interface definition #5653
Initially I thought contributing to documentation would be easy, but I realised that it requires a good understanding of the codebase structure and the workflow.
The PR review process in Refine is surprisingly fast: my PR was reviewed within one week.
Observations of the contributing process:
Issues labelled good-first-issue often have comments asking to be assigned the task, but the maintainers tend to take a long time to reply. By the time they get back to potential contributors, the contributors may no longer be interested.
Refine has a `Changeset` system where contributors need to label the impact on packages, such as whether the change requires a major version bump in any package, as Refine uses a monorepo structure.
Refine is well-documented and its core team is quite active in issues, which is a huge advantage for first-time contributors because their questions get answered quickly. However, their good-first-issues still have high barriers to entry because of the complicated code structure. That being said, the maintainers make a good effort to explain in each issue what might need to be done to submit a PR, making it easier to understand.