From d12b4d522e7b6c42a5e126b8030480d9eedf82b8 Mon Sep 17 00:00:00 2001
From: cassandraGoose
Date: Fri, 21 Apr 2023 15:28:33 -0600
Subject: [PATCH 1/2] update fit-lit 1 whats-cookin 1 overlook and travel
 tracker to utilize the same testing verbiage; update wc1 o and tt to update
 testing rubric section; update wc1 to have dropdowns in rubric

---
 projects/module-2/fitlit-part-one-agile.md | 2 +-
 projects/module-2/whats-cookin-part-one.md | 34 ++++++++++++++--------
 projects/overlook.md | 23 ++++++++-------
 projects/travel-tracker.md | 23 ++++++++-------
 4 files changed, 47 insertions(+), 35 deletions(-)

diff --git a/projects/module-2/fitlit-part-one-agile.md b/projects/module-2/fitlit-part-one-agile.md
index 3d9291e89..de3eff86b 100644
--- a/projects/module-2/fitlit-part-one-agile.md
+++ b/projects/module-2/fitlit-part-one-agile.md
@@ -314,7 +314,7 @@ For the rubric sections below, you will be scored as **Wow**, **Yes** or **Not Y
### Test-Driven Development
💫ON TRACK💫 can look like:
-- Application has a robust and thorough test suite that covers all functions that do not update the dom.
+- Application has a robust and thorough test suite
- Test suite is organized.
- Each function is tested in its own it block.
- All scenarios/outcomes/paths are tested for your functions, including happy and sad paths.
diff --git a/projects/module-2/whats-cookin-part-one.md b/projects/module-2/whats-cookin-part-one.md
index f7700b086..85576f2e9 100644
--- a/projects/module-2/whats-cookin-part-one.md
+++ b/projects/module-2/whats-cookin-part-one.md
@@ -184,22 +184,22 @@ The expectation for Mod 2 is that you will avoid using `async/await`. We know `
### Testing
-You should *NOT* use the original data files in the `data` directory for testing. These are big files, to begin with, and a real-world dataset would have millions of records. That's far too big to use every time you want to run a test.
+You should *NOT* use the original data files in the `data` directory for testing.
These are big files to begin with, and a real-world dataset would have millions of records. That’s far too big to use every time you want to run a test. -Instead, you should create small, sample datasets that match the structure of the application data and use these for your test data. By creating this sample dataset, you will also know if your functions are working correctly because you can do the calculations by hand with a much smaller dataset. +Instead, for your tests, you should create small, sample datasets that match the structure of the application data. By creating this sample dataset, you will also know if your functions are working correctly because you can do the calculations by hand with a much smaller dataset. -You are expected to research and implement `beforeEach` in your test files. +**You are *expected* to:** +- Build a robust testing suite. This might include testing pure functions in your `scripts.js`. -**You are *expected* to test:** -* All functions that do not update the DOM. This means everything in your `scripts.js` file should be tested. +**Remember to test all possible outcomes (happy/sad/etc). Ask yourself:** -Remember to test all possible outcomes (happy/sad/etc). Ask yourself: - - Does the function return anything? - - Are there different possible outcomes to test for based on different arguments being passed in? +- Does the function return anything? +- Are there different possible outcomes to test based on different arguments being passed in? -**You are *not expected* to test:** -* DOM manipulation / DOM manipulating function (like `document.querySelector(...)`) -* Fetch calls +**You are *not expected* to test:** + +- DOM manipulation / DOM manipulating functions (like `document.querySelector(...)`) +- Fetch calls --- @@ -253,11 +253,14 @@ For this project, an average of 0.5 is considered a yes - a passing project that While M2 rubrics do not have a separate section for WOWs like in M1, there are a few WOW examples noted throughout. 
In addition to these WOW bullets, you can strive for a WOW by demonstrating not just competency, but excellence and thoroughness across the rubric sections. +
### Functional Expectations * Wow: Application fulfills all requirements *as well as* an extension. * Yes: Application fulfills all requirements of iterations 1-3 without bugs. * Not Yet: Application crashes or has missing functionality or bugs. +
+
### JavaScript & Style / Functional Programming / Fetch
- Code is divided into logical components each with a clean, single responsibility
- Array prototype methods are used to iterate instead of for loops
@@ -266,9 +269,11 @@ While M2 rubrics do not have a separate section for WOWs like in M1, there are a
- Code leverages JavaScript's truthy/falsey principles
- Demonstrates efforts towards making functions pure when possible. *Note: Purity is not possible for every function in a FE application. Strive for it only when it makes sense.*
- WOW option: Effectively implements one or more closures throughout the project. *Note: See Closures lesson on M2 lesson page as a resource.*
+
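The closures WOW option can look like the minimal sketch below. The `createSearchHistory` helper is a hypothetical example, not something the spec requires: the returned functions "close over" the `searches` array, so it stays private without being a global variable.

```javascript
// Hypothetical example of a closure: the `searches` array is private and
// can only be reached through the functions returned below.
function createSearchHistory() {
  const searches = [];

  return {
    record(term) {
      searches.push(term);
    },
    latest() {
      return searches[searches.length - 1];
    },
    count() {
      return searches.length;
    },
  };
}

const history = createSearchHistory();
history.record('chicken');
history.record('vegan');
console.log(history.count());  // 2
console.log(history.latest()); // 'vegan'
```

Because `searches` never leaks, no other code can mutate the history except through `record`, which is the encapsulation benefit the Closures lesson describes.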
+
### Test-Driven Development -- Application has a robust and thorough test suite that covers all functions that do not update the dom. +- Application has a robust and thorough test suite - Test suite is organized. - Each function is tested in its own it block. - All scenarios/outcomes/paths are tested for your functions, including happy and sad paths. @@ -278,13 +283,18 @@ While M2 rubrics do not have a separate section for WOWs like in M1, there are a - For example: If you need to test a sad path of searching for recipes with a tag that no recipes match, you need to create test data that simulates that scenario so you can test it. - `beforeEach` hook is used to DRY up test files - There are no failing/pending tests upon submission +
+
### User Interface
- The application can stand on its own to be used by an instructor without guidance from a developer on the team.
- UI/UX is intuitive and easy to read/use
- Helpful messaging is displayed to prevent user confusion
  - For example: If a user searches for a recipe and finds no matching results, a message is displayed to indicate that the search worked and nothing is broken; there just aren't any matching recipes found.
- WOW option: Design is responsive across small, medium and large breakpoints.
+
+ +--- ### Collaboration and Professionalism - See "Minimum Collaboration and Professionalism Expectations" above. diff --git a/projects/overlook.md b/projects/overlook.md index fdd3ec910..95eabd237 100644 --- a/projects/overlook.md +++ b/projects/overlook.md @@ -166,21 +166,22 @@ password: overlook2021 ## Testing -You should NOT use the original data files in the data directory for testing. These are big files, to begin with, and a real-world dataset would have millions of records. That's far too big to use every time you want to run a test. +You should *NOT* use the original data files in the `data` directory for testing. These are big files to begin with, and a real-world dataset would have millions of records. That’s far too big to use every time you want to run a test. -Instead, you should create small, sample datasets that match the structure of the application data and use these for your test data. By creating this sample dataset, you will also know if your functions are working correctly because you can do the calculations by hand with a much smaller dataset. +Instead, for your tests, you should create small, sample datasets that match the structure of the application data. By creating this sample dataset, you will also know if your functions are working correctly because you can do the calculations by hand with a much smaller dataset. -You are expected to test: +**You are *expected* to:** +- Build a robust testing suite. This might include testing pure functions in your `scripts.js`. -All functions that do not update the DOM. This means everything in your scripts.js file should be tested. -Remember to test all possible outcomes (happy/sad/etc). Ask yourself: +**Remember to test all possible outcomes (happy/sad/etc). Ask yourself:** -Does the function return anything? -Are there different possible outcomes to test for based on different arguments being passed in? -You are not expected to test: +- Does the function return anything? 
+- Are there different possible outcomes to test based on different arguments being passed in? -* DOM manipulation / DOM manipulating function (like document.querySelector(...)) -* Fetch calls +**You are *not expected* to test:** + +- DOM manipulation / DOM manipulating functions (like `document.querySelector(...)`) +- Fetch calls ## Accessibility @@ -259,7 +260,7 @@ While M2 rubrics do not have a separate section for WOWs like in M1, there are a
### Testing -* Application has a robust and thorough test suite that covers functions that do not update the dom. +* Application has a robust and thorough test suite * Testing includes happy and sad paths * Test suite is organized - a new developer could easily identify what function is causing a test to fail * Rather than using the production data, small sample data is stored in its own file and used for testing. diff --git a/projects/travel-tracker.md b/projects/travel-tracker.md index 838b80824..53cb8a51f 100644 --- a/projects/travel-tracker.md +++ b/projects/travel-tracker.md @@ -158,21 +158,22 @@ password: travel ## Testing -You should NOT use the original data files in the data directory for testing. These are big files, to begin with, and a real-world dataset would have millions of records. That's far too big to use every time you want to run a test. +You should *NOT* use the original data files in the `data` directory for testing. These are big files to begin with, and a real-world dataset would have millions of records. That’s far too big to use every time you want to run a test. -Instead, you should create small, sample datasets that match the structure of the application data and use these for your test data. By creating this sample dataset, you will also know if your functions are working correctly because you can do the calculations by hand with a much smaller dataset. +Instead, for your tests, you should create small, sample datasets that match the structure of the application data. By creating this sample dataset, you will also know if your functions are working correctly because you can do the calculations by hand with a much smaller dataset. -You are expected to test: +**You are *expected* to:** +- Build a robust testing suite. This might include testing pure functions in your `scripts.js`. -All functions that do not update the DOM. This means everything in your scripts.js file should be tested. 
-Remember to test all possible outcomes (happy/sad/etc). Ask yourself: +**Remember to test all possible outcomes (happy/sad/etc). Ask yourself:** -Does the function return anything? -Are there different possible outcomes to test for based on different arguments being passed in? -You are not expected to test: +- Does the function return anything? +- Are there different possible outcomes to test based on different arguments being passed in? -DOM manipulation / DOM manipulating function (like document.querySelector(...)) -Fetch calls +**You are *not expected* to test:** + +- DOM manipulation / DOM manipulating functions (like `document.querySelector(...)`) +- Fetch calls ## Accessibility @@ -261,7 +262,7 @@ While M2 rubrics do not have a separate section for WOWs like in M1, there are a
### Testing
-* Application has a robust and thorough test suite that covers functions that do not update the dom.
+* Application has a robust and thorough test suite
* Testing includes happy and sad paths
* Test suite is organized - a new developer could easily identify what function is causing a test to fail
* Rather than using the production data, small sample data is stored in its own file and used for testing.

From e72853ea06a7c7c0e90d60ccdb8e8ce0c74df13e Mon Sep 17 00:00:00 2001
From: cassandraGoose
Date: Fri, 21 Apr 2023 15:50:02 -0600
Subject: [PATCH 2/2] add drop downs for flashcards

---
 projects/module-2/flash-cards.md | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/projects/module-2/flash-cards.md b/projects/module-2/flash-cards.md
index 9c779968f..a0b2a3ab0 100644
--- a/projects/module-2/flash-cards.md
+++ b/projects/module-2/flash-cards.md
@@ -222,11 +222,14 @@ Your README should include the following, in this order:
For the rubric sections below, you will be scored as **Wow**, **Yes** or **Not Yet** depending on whether you have demonstrated competency in that area. Each section lists examples of what types of things we may be looking for as demonstrations of competency. Just as there are many ways to approach code, there are many many ways to demonstrate competency. These are just some examples.
+
### Functional Expectations * Wow: Application fulfills all requirements *as well as* an extension from iteration 4. * Yes: Application fulfills all requirements of iterations 1-3 without bugs. * Not Yet: Application crashes (game is not playable) or has missing functionality or bugs. +
+
### JavaScript & Style / Functional Programming On track looks like: @@ -237,7 +240,9 @@ On track looks like: - Demonstrates efforts towards making functions pure when possible. *Note: Purity is not possible for every function in a FE application. Strive for it only when it makes sense.* WOW Option: Effectively implements one or more closures throughout project. *Note: See Closures lesson on the Module 2 lessons page as a resource.* +
+
### Testing On track looks like: @@ -249,5 +254,6 @@ On track looks like: - There are no failing/pending tests upon submission WOW Option: mocha's `beforeEach` hook is used to DRY up test files +
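Happy- and sad-path coverage can look like this minimal sketch. The function name, arguments, and messages here are hypothetical, not the project's required API; the point is that every possible outcome of the function gets exercised.

```javascript
// Hypothetical sketch: one small function with three possible outcomes,
// so a thorough test suite needs three scenarios.
function evaluateGuess(guess, correctAnswer) {
  if (guess === undefined) {
    return 'You need to make a guess!';
  }
  return guess === correctAnswer ? 'correct!' : 'incorrect!';
}

console.log(evaluateGuess('array', 'array'));   // happy path: 'correct!'
console.log(evaluateGuess('object', 'array'));  // sad path: 'incorrect!'
console.log(evaluateGuess(undefined, 'array')); // sad path: no guess given
```

In a mocha suite each of those three calls would live in its own `it` block, with shared setup pulled into a `beforeEach` hook as the WOW option above suggests.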
Project is due at **9PM on Thursday of Week 1**.