diff --git a/docs/book/README.md b/docs/book/README.md index c5ff495..9e2f473 100644 --- a/docs/book/README.md +++ b/docs/book/README.md @@ -12,15 +12,19 @@ Welcome to the Open-source ML observability course! The course starts on **October 16, 2023**. \ [Sign up](https://www.evidentlyai.com/ml-observability-course) to save your seat and receive weekly course updates. +# How to participate? +* **Join the course**. [Sign up](https://www.evidentlyai.com/ml-observability-course) to receive weekly updates with course materials and information about office hours. +* **Course platform [OPTIONAL]**. If you want to receive a course certificate, you should **also** [register](https://evidentlyai.thinkific.com/courses/ml-observability-course) on the platform and complete all the assignments before **December 1, 2023**. + +The course starts on **October 16, 2023**. The videos and course notes for the new modules will be released during the course cohort. + # Links -* **Newsletter**. [Sign up](https://www.evidentlyai.com/ml-observability-course) to receive weekly updates with the course materials. + * **Discord community**. Join the [community](https://discord.gg/PyAJuUD5mB) to ask questions and chat with others. -* **Course platform**. [Register](https://evidentlyai.thinkific.com/courses/ml-observability-course) if you want to submit assignments and receive the certificate. This is optional. * **Code examples**. Will be published in this GitHub [repository](https://github.com/evidentlyai/ml_observability_course) throughout the course. -* **Enjoying the course?** [Star](https://github.com/evidentlyai/evidently) Evidently on GitHub to contribute back! This helps us create free, open-source tools and content for the community. - +* **YouTube playlist**. [Subscribe](https://www.youtube.com/playlist?list=PL9omX6impEuOpTezeRF-M04BW3VfnPBRF) to the course YouTube playlist to keep tabs on video updates. -The course starts on **October 16, 2023**. The videos and course notes for the new modules will be released during the course cohort. +**Enjoying the course?** [Star](https://github.com/evidentlyai/evidently) Evidently on GitHub to contribute back! This helps us create free, open-source tools and content for the community. # What the course is about This course is a deep dive into ML model observability and monitoring. diff --git a/docs/book/SUMMARY.md b/docs/book/SUMMARY.md index a96ba4a..902f7e1 100644 --- a/docs/book/SUMMARY.md +++ b/docs/book/SUMMARY.md @@ -17,6 +17,7 @@ * [2.4. Data quality in machine learning](ml-observability-course/module-2-ml-monitoring-metrics/data-quality-in-ml.md) * [2.5. Data quality in ML [CODE PRACTICE]](ml-observability-course/module-2-ml-monitoring-metrics/data-quality-code-practice.md) * [2.6. Data and prediction drift in ML](ml-observability-course/module-2-ml-monitoring-metrics/data-prediction-drift-in-ml.md) + * [2.7. Deep dive into data drift detection [OPTIONAL]](ml-observability-course/module-2-ml-monitoring-metrics/data-drift-deep-dive.md) * [2.8. 
Data and prediction drift in ML [CODE PRACTICE]](ml-observability-course/module-2-ml-monitoring-metrics/data-prediction-drift-code-practice.md) * [Module 3: ML monitoring for unstructured data](ml-observability-course/module-3-ml-monitoring-for-unstructured-data.md) * [Module 4: Designing effective ML monitoring](ml-observability-course/module-4-designing-effective-ml-monitoring.md) diff --git a/docs/book/ml-observability-course/module-1-introduction/ml-lifecycle.md b/docs/book/ml-observability-course/module-1-introduction/ml-lifecycle.md index 3845dd7..984653e 100644 --- a/docs/book/ml-observability-course/module-1-introduction/ml-lifecycle.md +++ b/docs/book/ml-observability-course/module-1-introduction/ml-lifecycle.md @@ -17,11 +17,11 @@ You can perform different types of evaluations at each of these stages. For exam * During data preparation, exploratory data analysis (EDA) helps to understand the dataset and validate the problem statement. * At the experiment stage, performing cross-validation and holdout testing helps validate and test if ML models are useful. -![](<../../../images/2023109\_course\_module1\_fin\_images.005.png>) +![](<../../../images/2023109\_course\_module1\_fin\_images.005-min.png>) However, the work does not stop here! Once the best model is deployed to production and starts bringing business value, every erroneous prediction has its costs. It is crucial to ensure that this model functions stably and reliably. To do that, one must continuously monitor the production ML model and data. -![](<../../../images/2023109\_course\_module1\_fin\_images.008.png>) +![](<../../../images/2023109\_course\_module1\_fin\_images.008-min.png>) ## What can go wrong in production? @@ -34,21 +34,21 @@ Many things can go wrong once you deploy an ML model to the real world. Here are * Data schema changes in the upstream system, third-party APIs, or catalogs. * Data loss at source when dealing with broken sensors, logging errors, database outages, etc. -![](<../../../images/2023109\_course\_module1\_fin\_images.011.png>) +![](<../../../images/2023109\_course\_module1\_fin\_images.011-min.png>) **Broken upstream model**. Often, not one model but a chain of ML models operates in production. If one model gives wrong outputs, it can affect downstream models. -![](<../../../images/2023109\_course\_module1\_fin\_images.012.png>) +![](<../../../images/2023109\_course\_module1\_fin\_images.012-min.png>) **Concept drift**. Gradual concept drift occurs when the target function continuously changes over time, leading to model degradation. If the change is sudden – like the recent pandemic – you’re dealing with sudden concept drift. **Data drift**. Distribution changes in the input features may signal data drift and potentially cause ML model performance degradation. For example, a significant number of users coming from a new acquisition channel can negatively affect the model trained on user data. Chances are that users from different channels behave differently. To get back on track, the model needs to learn new patterns. -![](<../../../images/2023109\_course\_module1\_fin\_images.015.png>) +![](<../../../images/2023109\_course\_module1\_fin\_images.015-min.png>) **Underperforming segments**. A model might perform differently on diverse data segments. It is crucial to monitor performance across all segments. -![](<../../../images/2023109\_course\_module1\_fin\_images.016.png>) +![](<../../../images/2023109\_course\_module1\_fin\_images.016-min.png>) **Adversarial adaptation**. 
In the era of neural networks, models might face adversarial attacks. Monitoring helps detect these issues on time. diff --git a/docs/book/ml-observability-course/module-1-introduction/ml-monitoring-architectures.md b/docs/book/ml-observability-course/module-1-introduction/ml-monitoring-architectures.md index 6592e8e..8b454a5 100644 --- a/docs/book/ml-observability-course/module-1-introduction/ml-monitoring-architectures.md +++ b/docs/book/ml-observability-course/module-1-introduction/ml-monitoring-architectures.md @@ -14,7 +14,7 @@ It is essential to start monitoring ML models as soon as you deploy them to prod **Ad hoc reporting** is a great alternative when your resources are limited. You can use Python scripts to calculate and analyze metrics in your notebook. This is a good first step in logging model performance and data quality. -![](<../../../images/2023109\_course\_module1\_fin\_images.061.png>) +![](<../../../images/2023109\_course\_module1\_fin\_images.061-min.png>) ## Monitoring frontend @@ -24,13 +24,13 @@ When it comes to visualizing the results of monitoring, you also have options. **One-off reports**. You can also generate reports as needed and create visualizations or specific one-off analyses based on the model logs. You can create your own reports in Python/R or use different BI visualization tools. -![](<../../../images/2023109\_course\_module1\_fin\_images.065.png>) +![](<../../../images/2023109\_course\_module1\_fin\_images.065-min.png>) **BI Systems**. If you want to create a dashboard to track ML monitoring metrics over time, you can also reuse existing business intelligence or software monitoring systems. In this scenario, you must connect existing tools to the ML metric database and add panels or plots to the dashboard. **Dedicated ML monitoring**. As a more sophisticated approach, you can set up a separate visualization system that gives you an overview of all your ML models and datasets and provides an ongoing, updated view of metrics. -![](<../../../images/2023109\_course\_module1\_fin\_images.066.png>) +![](<../../../images/2023109\_course\_module1\_fin\_images.066-min.png>) ## Summing up diff --git a/docs/book/ml-observability-course/module-1-introduction/ml-monitoring-metrics.md b/docs/book/ml-observability-course/module-1-introduction/ml-monitoring-metrics.md index 39d31fb..3fd0a43 100644 --- a/docs/book/ml-observability-course/module-1-introduction/ml-monitoring-metrics.md +++ b/docs/book/ml-observability-course/module-1-introduction/ml-monitoring-metrics.md @@ -25,6 +25,6 @@ ML model performance metrics help to ensure that ML models work as expected: The ultimate measure of the model quality is its impact on the business. Depending on business needs, you may want to monitor clicks, purchases, loan approval rates, cost savings, etc. This is typically custom to the use case and might involve collaborating with product managers or business teams to determine the right business KPIs. -![](<../../../images/2023109\_course\_module1\_fin\_images.034.png>) +![](<../../../images/2023109\_course\_module1\_fin\_images.034-min.png>) For a deeper dive into **ML model quality and relevance** and **data quality and integrity** metrics, head to [Module 2](../module-2-ml-monitoring-metrics/readme.md). 
diff --git a/docs/book/ml-observability-course/module-1-introduction/ml-monitoring-observability.md b/docs/book/ml-observability-course/module-1-introduction/ml-monitoring-observability.md index 1441c6c..adcc69d 100644 --- a/docs/book/ml-observability-course/module-1-introduction/ml-monitoring-observability.md +++ b/docs/book/ml-observability-course/module-1-introduction/ml-monitoring-observability.md @@ -31,7 +31,7 @@ Accordingly, you also need to track data quality and model performance metrics. **Ground truth is not available immediately** to calculate ML model performance metrics. In this case, you can use proxy metrics like data quality to monitor for early warning signs. -![](<../../../images/2023109\_course\_module1\_fin\_images.024.png>) +![](<../../../images/2023109\_course\_module1\_fin\_images.024-min.png>) ## ML monitoring vs ML observability @@ -59,7 +59,7 @@ ML monitoring and observability help: * **Trigger actions**. Based on the calculated data and model health metrics, you can trigger fallback, model switching, or automatic retraining. * **Document ML model performance** to provide information to the stakeholders. -![](<../../../images/2023109\_course\_module1\_fin\_images.030.png>) +![](<../../../images/2023109\_course\_module1\_fin\_images.030-min.png>) ## Who should care about ML monitoring and observability? @@ -72,7 +72,7 @@ The short answer: everyone who cares about the model's impact on business. At th Other stakeholders include model users, business stakeholders, support, and compliance teams. -![](<../../../images/2023109\_course\_module1\_fin\_images.031.png>) +![](<../../../images/2023109\_course\_module1\_fin\_images.031-min.png>) ## Summing up diff --git a/docs/book/ml-observability-course/module-1-introduction/ml-monitoring-setup.md b/docs/book/ml-observability-course/module-1-introduction/ml-monitoring-setup.md index 609a4ba..57544aa 100644 --- a/docs/book/ml-observability-course/module-1-introduction/ml-monitoring-setup.md +++ b/docs/book/ml-observability-course/module-1-introduction/ml-monitoring-setup.md @@ -17,7 +17,7 @@ While setting up an ML monitoring system, it makes sense to align the complexity * **Feedback loop and environmental stability**. Both influence the cadence of metrics calculations and the choice of specific metrics. * **Service criticality**. What is the business cost of model quality drops? What risks should we monitor for? More critical models might require a more complex monitoring setup. -![](<../../../images/2023109\_course\_module1\_fin\_images.050.png>) +![](<../../../images/2023109\_course\_module1\_fin\_images.050-min.png>) ## Model retraining cadence @@ -26,7 +26,7 @@ ML monitoring and retraining are closely connected. Some retraining factors to k * How you implement the retraining: whether you want to monitor the metrics and retrain on a trigger or set up a predefined retraining schedule (for example, weekly). * Issues that prevent updating the model too often, e.g., complex approval processes, regulations, need for manual testing. 
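As a rough sketch of these two options (retraining on a monitoring trigger vs. on a fixed schedule), the helper below combines them into a single decision. The threshold, cadence, and `drift_share` input are made-up placeholders, not recommendations from the course.

```python
import datetime

# Illustrative only: the threshold, cadence, and drift_share input are placeholders
# for your own monitoring metric and retraining policy.
DRIFT_SHARE_THRESHOLD = 0.3   # e.g., share of drifted features that triggers retraining
RETRAIN_EVERY_DAYS = 7        # e.g., a fixed weekly schedule

def should_retrain(drift_share: float, last_trained: datetime.date) -> bool:
    trigger_based = drift_share > DRIFT_SHARE_THRESHOLD
    schedule_based = (datetime.date.today() - last_trained).days >= RETRAIN_EVERY_DAYS
    return trigger_based or schedule_based

if should_retrain(drift_share=0.4, last_trained=datetime.date(2023, 10, 16)):
    print("Kick off the retraining pipeline (after validation and any required approvals).")
```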
-![](<../../../images/2023109\_course\_module1\_fin\_images.052.png>) +![](<../../../images/2023109\_course\_module1\_fin\_images.052-min.png>) ## Reference dataset diff --git a/docs/book/ml-observability-course/module-2-ml-monitoring-metrics/data-drift-deep-dive.md b/docs/book/ml-observability-course/module-2-ml-monitoring-metrics/data-drift-deep-dive.md new file mode 100644 index 0000000..6e16a84 --- /dev/null +++ b/docs/book/ml-observability-course/module-2-ml-monitoring-metrics/data-drift-deep-dive.md @@ -0,0 +1,204 @@ +# 2.7. Deep dive into data drift detection [OPTIONAL] + +{% embed url="https://www.youtube.com/watch?v=N47SHSP6RuY&list=PL9omX6impEuOpTezeRF-M04BW3VfnPBRF&index=13" %} + +**Video 7**. [Data and prediction drift in ML](https://www.youtube.com/watch?v=N47SHSP6RuY&list=PL9omX6impEuOpTezeRF-M04BW3VfnPBRF&index=13), by Emeli Dral + +Welcome to the deep dive into data drift detection! We will cover the following topics: + +**Data drift detection methods:** +* More on methods to detect data drift +* Strategies for choosing a drift detection approach + +**Special cases:** +* Detecting drift for large datasets +* Detecting drift for real-time models +* Using drift as a retraining trigger + +**Useful tips:** +* How to interpret prediction and data drift together? +* What to do after drift is detected + +## Drift detection methods + +Let’s have a closer look at the commonly used approaches to drift detection. + +**Parametric statistical tests** +There are both **one-sample** and **two-sample** parametric tests: +* **If you only have current data** and no reference data is available, you can use **Z-test and T-test for mean** (m = m0) or **one-proportion Z-test** (p = p0) to detect data drift. These methods can work if you have interpretable features – e.g., salary or age – as you need to formulate hypotheses about the distribution values. +* **If reference data is available**, you can use two-sample parametric tests to compare distributions: for example, **two-proportions Z-test** or **two-sample Z-test and T-test for means** (for normally distributed samples). + +Some considerations to keep in mind when using parametric tests: +* **They require different tests for different features.** For example, some tests assume that your data are normally distributed. +* **They are more sensitive to drift than non-parametric tests.** If you work on a problem where you have a small dataset and want to react to even minor deviations, it makes sense to use parametric tests. +* **They are hard to fine-tune if you have many features.** If you have many features with different feature types, you need to invest a lot of time in choosing the right test for each feature. + +It makes sense to use parametric tests if you have a small number of interpretable features and work on critical use cases (e.g., in healthcare). + +![](<../../../images/2023109\_course\_module2.081-min.png>) + +**Non-parametric statistical tests** +Non-parametric tests make fewer assumptions about the properties of data samples and thus are widely used. Examples include the **Kolmogorov-Smirnov** test, K-sample **Anderson-Darling** test, **Pearson’s chi-squared** test, **Fisher’s/Barnard’s** exact test for small samples, etc. + +When using non-parametric tests, consider the following: +* **Feature type.** You can use heuristics to choose suitable tests based on the feature type, e.g., numerical, categorical, or binary. +* **Sensitivity.** Non-parametric tests are less sensitive to drift than parametric tests. +* **Data volumes.** It makes sense to use non-parametric tests for low-volume datasets or samples (e.g., less than 1000 objects). + +![](<../../../images/2023109\_course\_module2.082-min.png>)
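To make the two-sample tests above more concrete, here is a minimal sketch using `scipy`. It runs a Kolmogorov-Smirnov test on a numerical feature and a Pearson's chi-squared test on a categorical one; the file names, column names, and the 0.05 significance level are illustrative assumptions rather than course defaults.

```python
import pandas as pd
from scipy import stats

# Illustrative only: the file names and column names are assumptions.
reference = pd.read_csv("reference.csv")
current = pd.read_csv("current.csv")

# Numerical feature: two-sample Kolmogorov-Smirnov test.
ks_stat, ks_p_value = stats.ks_2samp(reference["salary"], current["salary"])

# Categorical feature: Pearson's chi-squared test on a contingency table
# built from the value counts of both samples.
contingency = pd.concat(
    [
        reference["channel"].value_counts().rename("reference"),
        current["channel"].value_counts().rename("current"),
    ],
    axis=1,
).fillna(0)
chi2_stat, chi2_p_value, _, _ = stats.chi2_contingency(contingency.T)

alpha = 0.05  # assumed significance level
print(f"KS p-value: {ks_p_value:.4f} -> drift detected: {ks_p_value < alpha}")
print(f"Chi-squared p-value: {chi2_p_value:.4f} -> drift detected: {chi2_p_value < alpha}")
```

The same pattern applies to the parametric tests above (e.g., `scipy.stats.ttest_ind` for a two-sample T-test) and to the K-sample Anderson-Darling test (`scipy.stats.anderson_ksamp`).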
+ +**Distance-based approaches** +Distance-based methods measure how far two distributions are from each other and thus are easy to interpret. For example, you can calculate **Wasserstein distance**, **Jensen-Shannon divergence**, or **Population Stability Index** (PSI). + +Some considerations to keep in mind when using distance-based methods to detect data drift: +* **A variety of metrics is available.** Almost any metric that quantifies the difference or similarity between distributions can be used as a drift detection method. +* **High interpretability compared to statistical tests.** Often, it makes more sense to pick an interpretable metric rather than a statistical test. +* **Data volume.** It makes sense to use distance-based methods for larger datasets (e.g., > 1000 objects). + +![](<../../../images/2023109\_course\_module2.083-min.png>) + +**Domain classification** +This approach uses binary classifiers to distinguish between reference and current data. It can be used to detect data drift in different data types, including embeddings, unstructured data (such as texts), and multimodal data. + +{% hint style="info" %} +**Further reading:** [Which test is the best? We compared 5 methods to detect data drift on large datasets](https://www.evidentlyai.com/blog/data-drift-detection-large-datasets). +{% endhint %} + +## How to choose a drift detection approach? + +**Data drift detection is a heuristic.** There is no strict rule. This is why it is important to consider why you want to detect drift and which method makes sense for you. + +Consider your problem statement and dataset properties. For example: +* If your use case is sensitive, you might want to use parametric tests. +* If interpretability is important, consider distance-based methods. +* Domain classification can be a good choice if you work with various data types – text, videos, tabular data. + +To choose the right drift detection approach for your particular problem statement, you can consider two options: + +**1. Go with defaults.** + +In this scenario, you pick some reasonable defaults to start and adjust the sensitivity as you proceed with monitoring. +* Start with basic assumptions. Do you want to detect drift for the whole dataset or only consider drift in important features? +* Pick reasonable metrics and thresholds. For example, for numerical features, you can pick Wasserstein Distance at a 0.1 threshold. +* Start monitoring. +* Visualize results. +* Adjust based on false alarms, sensitivity, and drift interpretations. + +**2. Experiment.** + +In this scenario, you use historical data to tweak detection parameters using past known drifts. Here is an example of an experiment: +* Take data for a stable period. +* Take data with known drift or simulate drift using synthetic data. +* Apply different drift detection approaches. Experiment with tests, thresholds, window size and/or bucketing parameters. +* Choose the optimal approach that detects known drifts and minimizes false alarms. + +## Special cases +There are some special cases to keep in mind when detecting data drift: + +**Large datasets.** +Statistical tests were designed to work with samples. Having many objects and/or features in a dataset can lead to some tests being “too sensitive” or taking too long to compute. If this is the case, you can use **sampling** to pick representative observations and apply tests on top of them. Alternatively, you can try **bucketing** to aggregate observations and reduce the amount of data. For example, you can detect drift on top of hourly data instead of minute-by-minute data. + +![](<../../../images/2023109\_course\_module2.089-min.png>)
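As a rough illustration of both workarounds, the sketch below computes the Wasserstein distance once on random samples and once on hourly aggregates. The file names, the `timestamp`/`value` columns, the sample size, and the hourly granularity are assumptions made for the example.

```python
import pandas as pd
from scipy.stats import wasserstein_distance

# Illustrative only: file names, columns, and sizes are assumptions.
reference = pd.read_csv("reference.csv", parse_dates=["timestamp"])
current = pd.read_csv("current.csv", parse_dates=["timestamp"])

ref_values = reference["value"].dropna()
cur_values = current["value"].dropna()

# Option 1: sampling - compare random subsets instead of the full datasets.
sample_size = min(10_000, len(ref_values), len(cur_values))
ref_sample = ref_values.sample(n=sample_size, random_state=42)
cur_sample = cur_values.sample(n=sample_size, random_state=42)
print("Wasserstein distance (sampled):", wasserstein_distance(ref_sample, cur_sample))

# Option 2: bucketing - aggregate to hourly means and compare the aggregates.
ref_hourly = reference.set_index("timestamp")["value"].resample("1h").mean().dropna()
cur_hourly = current.set_index("timestamp")["value"].resample("1h").mean().dropna()
print("Wasserstein distance (hourly buckets):", wasserstein_distance(ref_hourly, cur_hourly))
```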
+ +**Non-batch models.** +While some metrics can be calculated in real time, we need to generate a batch of data to detect data drift. + +The solution is to use **window functions** to perform tests on continuous data streams. You can pick a window function (i.e., moving windows with/without moving reference), choose the window and step size to create batches for comparison, and “compare” the windows. + +![](<../../../images/2023109\_course\_module2.090-min.png>) + +**Feature drift as a retraining trigger.** +There are both pros and cons to using drift detection as a retraining trigger. + +Generally, we do not recommend retraining a model every time drift is detected because: +* **Data might be low-quality.** Retraining the model on corrupted data will be useless if data drift occurs due to data processing issues. +* **Data might be insufficient.** Sometimes, we just don’t have enough data for new model training. +* **Data might be non-representative.** Look out for unstable periods, e.g., a pandemic, seasonal spikes, etc. + +Instead, try to understand data drift first: +* **Data drift as an investigation trigger.** Try to figure out the root cause of the detected drift. +* **Data drift as a labeling trigger.** You can use a data drift signal to start the labeling process to be able to compute the actual model quality metrics. + +If you use data drift as a retraining trigger, it is critical to implement a solid evaluation process before roll-out to make sure the new model performs well. + +![](<../../../images/2023109\_course\_module2.091-min.png>) + +## How to interpret data and prediction drift together? + +It often makes sense to monitor both prediction drift (change in the model outputs) and data drift (change in the model features). + +However, data and prediction drift do not necessarily mean that something is wrong. Let’s look at two examples of data and prediction drift detected together or independently. + +**Scenario 1. Data drift: detected. Prediction drift: not detected.** +There are both positive and negative ways to interpret it. + +**Positive interpretation:** +* Important features did not change. +* Model is robust enough to survive drift. +* No need to intervene. + +**Negative interpretation:** +* Important features changed. +* Model should have reacted but did not. It does not extrapolate well. +* We need to intervene. + +**Scenario 2. Data drift: detected. Prediction drift: detected.** +Again, there are positive and negative ways of interpreting it. + +**Positive interpretation:** +* Important features changed. +* Model reacts and extrapolates well (e.g., lower prices -> higher sales). +* No need to intervene. + +**Negative interpretation:** +* Important features changed. +* Model behavior is unreasonable. +* We need to intervene. + +## What to do if drift is detected? + +Here are some possible steps to take if drift is detected: + +**1. Check the data quality.** +Make sure the drift is “real” and try to interpret where the drift is coming from. Data entry errors, stale features, and lost data are data quality issues disguised as data drift. If this is the case, fix the data first.
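Before reacting to a drift alert, a few simple data quality checks can help confirm whether the change is "real". The sketch below shows one possible set of checks; the file names and columns are assumptions for the example.

```python
import pandas as pd

# Illustrative only: file names and columns are assumptions.
reference = pd.read_csv("reference.csv")
current = pd.read_csv("current.csv")

for column in ["salary", "age", "channel"]:
    checks = {
        # A jump in missing values often points to a broken pipeline, not real drift.
        "missing_share_ref": round(reference[column].isna().mean(), 3),
        "missing_share_cur": round(current[column].isna().mean(), 3),
        # A single distinct value may indicate a frozen or stale feature.
        "nunique_cur": current[column].nunique(),
    }
    if pd.api.types.is_numeric_dtype(reference[column]):
        # Share of current values outside the range observed in the reference data.
        low, high = reference[column].min(), reference[column].max()
        checks["out_of_range_share_cur"] = round((~current[column].between(low, high)).mean(), 3)
    print(column, checks)
```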
+ +![](<../../../images/2023109\_course\_module2.098-min.png>) + +**2. Investigate the drift.** +Analyze which features have changed and how much. To understand the shift, you can: +* Visualize distributions +* Analyze correlation changes +* Check descriptive stats +* Evaluate segments +* Seek real-world explanations (e.g., a new marketing campaign) +* Team up with domain experts + +![](<../../../images/2023109\_course\_module2.100-min.png>) + +**3. Doing nothing is also an option.** +You might treat the drift as a false alarm, be satisfied with how the model handles drift, or simply decide to wait. + +![](<../../../images/2023109\_course\_module2.101-min.png>) + +**4. Actively reacting to drift.** +However, often, we need to react when the drift is detected: +* **Retrain the model.** Get new labels and actual values and re-fit the same model on the latest data. +* **Rebuild the model.** If the change is significant, you might need to rebuild the training pipeline and test new model architectures. +* **Tune the model.** For example, you can change a threshold for drift detection. +* **Use a fallback strategy.** Decide without ML: switch to manual processing, heuristics, or non-ML models. + +![](<../../../images/2023109\_course\_module2.102-min.png>) + +{% hint style="info" %} +**Further reading:** ["My data drifted. What's next?" How to handle ML model drift in production.](https://www.evidentlyai.com/blog/ml-monitoring-data-drift-how-to-handle). +{% endhint %} + +## Summing up + +We discussed different drift detection methods and how to choose the optimal approach for your dataset and problem statement. We covered special cases like handling large datasets, calculating drift for real-time models, and using drift as a retraining trigger. We also learned how to interpret data and prediction drift and what to do if drift is detected. + +Further reading: +* [Which test is the best? We compared 5 methods to detect data drift on large datasets](https://www.evidentlyai.com/blog/data-drift-detection-large-datasets) +* ["My data drifted. What's next?" How to handle ML model drift in production.](https://www.evidentlyai.com/blog/ml-monitoring-data-drift-how-to-handle) + +Up next: code practice on how to detect data drift using the open-source [Evidently](https://github.com/evidentlyai/evidently) Python library. diff --git a/docs/book/ml-observability-course/module-2-ml-monitoring-metrics/data-prediction-drift-in-ml.md b/docs/book/ml-observability-course/module-2-ml-monitoring-metrics/data-prediction-drift-in-ml.md index ab996d6..aea501f 100644 --- a/docs/book/ml-observability-course/module-2-ml-monitoring-metrics/data-prediction-drift-in-ml.md +++ b/docs/book/ml-observability-course/module-2-ml-monitoring-metrics/data-prediction-drift-in-ml.md @@ -10,15 +10,15 @@ When ground truth is unavailable or delayed, we cannot calculate ML model qualit **Prediction drift** shows changes in the distribution of **model outputs** over time. Without target values, this is the best proxy of the model behavior. Detected changes in the model outputs may be an early signal of changes in the model environment, data quality bugs, pipeline errors, etc. -![](<../../../images/2023109\_course\_module2.058.png>) +![](<../../../images/2023109\_course\_module2.058-min.png>) **Feature drift** demonstrates changes in the distribution of **input features** over time. When we train the model, we assume that if the input data remains reasonably similar, we can expect similar model quality. 
Thus, data distribution drift can be an early warning about model quality decay, important changes in the model environment or user behavior, unannounced changes to the modeled process, etc. -![](<../../../images/2023109\_course\_module2.060.png>) +![](<../../../images/2023109\_course\_module2.060-min.png>) Prediction and feature drift can serve as early warning signs for model quality issues. They can also help pinpoint a root cause when the model decay is already observed. -![](<../../../images/2023109\_course\_module2.065.png>) +![](<../../../images/2023109\_course\_module2.065-min.png>) Some key considerations about data drift to keep in mind: * **Prediction drift is usually more important than feature drift**. If you monitor one thing, look at the outputs. @@ -51,11 +51,11 @@ Here is how the defaults are implemented in the Evidently open-source library. **For small datasets (<=1000)**, you can use Kolmogorov-Smirnov test for numerical features, Chi-squared test for categorical features, and proportion difference test for independent samples based on Z-score for binary categorical features. -![](<../../../images/2023109\_course\_module2.070.png>) +![](<../../../images/2023109\_course\_module2.070-min.png>) **For large datasets (>1000)**, you might use Wasserstein Distance for numerical features and Jensen-Shannon divergence for categorical features. -![](<../../../images/2023109\_course\_module2.071.png>) +![](<../../../images/2023109\_course\_module2.071-min.png>) ## Univariate vs. multivariate drift diff --git a/docs/book/ml-observability-course/module-2-ml-monitoring-metrics/data-quality-in-ml.md b/docs/book/ml-observability-course/module-2-ml-monitoring-metrics/data-quality-in-ml.md index 1fdd9df..d0c3bf7 100644 --- a/docs/book/ml-observability-course/module-2-ml-monitoring-metrics/data-quality-in-ml.md +++ b/docs/book/ml-observability-course/module-2-ml-monitoring-metrics/data-quality-in-ml.md @@ -17,7 +17,7 @@ Some common data processing issues are: Issues can also arise if the data schema changes or data is lost at the source (e.g., broken in-app logging or frozen sensor values). If you have several models interacting with each other, broken upstream models can affect downstream models. -![](<../../../images/2023109\_course\_module2.041.png>) +![](<../../../images/2023109\_course\_module2.041-min.png>) ## Data quality metrics and analysis @@ -30,7 +30,7 @@ Issues can also arise if the data schema changes or data is lost at the source ( Then, you can visualize and compare statistics and data distributions of the current data batch and reference data to ensure data stability. -![](<../../../images/2023109\_course\_module2.047.png>) +![](<../../../images/2023109\_course\_module2.047-min.png>) When it comes to monitoring data quality, you must define the conditions for alerting. diff --git a/docs/book/ml-observability-course/module-2-ml-monitoring-metrics/evaluate-ml-model-quality.md b/docs/book/ml-observability-course/module-2-ml-monitoring-metrics/evaluate-ml-model-quality.md index 207a40b..f5f6d1a 100644 --- a/docs/book/ml-observability-course/module-2-ml-monitoring-metrics/evaluate-ml-model-quality.md +++ b/docs/book/ml-observability-course/module-2-ml-monitoring-metrics/evaluate-ml-model-quality.md @@ -16,7 +16,7 @@ When it comes to standard ML monitoring, we usually start by measuring ML model * **Many segments with different quality**. Aggregated metrics might not provide insights for diverse user/object groups. 
In this case, we need to monitor quality metrics for each segment separately. * **The target function is volatile**. A volatile target function can lead to fluctuating performance metrics, making it difficult to differentiate between local quality drops and major performance issues. -![](<../../../images/2023109\_course\_module2.005.png>) +![](<../../../images/2023109\_course\_module2.005-min.png>) ## Early monitoring metrics Early monitoring focuses on metrics derived from consistently available data: in * **Data drift** to monitor changes in the input feature distributions. * **Output drift** to observe shifts in model predictions. -![](<../../../images/2023109\_course\_module2.006.png>) +![](<../../../images/2023109\_course\_module2.006-min.png>) ## Module 2 structure This module includes both theoretical parts and code practice for each of the ev * [OPTIONAL] Theory: a deeper dive into data drift detection methods and strategies. * Practice: building a sample report in Python to detect data and prediction drift for various data types. -![](<../../../images/2023109\_course\_module2.007.png>) +![](<../../../images/2023109\_course\_module2.007-min.png>) ## Summing up diff --git a/docs/book/ml-observability-course/module-2-ml-monitoring-metrics/ml-quality-metrics-classification-regression-ranking.md b/docs/book/ml-observability-course/module-2-ml-monitoring-metrics/ml-quality-metrics-classification-regression-ranking.md index 1956e71..6b6d22d 100644 --- a/docs/book/ml-observability-course/module-2-ml-monitoring-metrics/ml-quality-metrics-classification-regression-ranking.md +++ b/docs/book/ml-observability-course/module-2-ml-monitoring-metrics/ml-quality-metrics-classification-regression-ranking.md @@ -12,7 +12,7 @@ You need **monitoring** to be able to maintain the ML model's relevance by detec But there is a caveat: to calculate classification, regression, and ranking quality metrics, **you need labels**. If you can, consider labeling at least part of the data to be able to compute them. -![](<../../../images/2023109\_course\_module2.009.png>) +![](<../../../images/2023109\_course\_module2.009-min.png>) ## Classification quality metrics A classification problem in ML is a task of assigning predefined categories or c * [**ROC-AUC**](https://www.evidentlyai.com/classification-metrics/explain-roc-curve) works for probabilistic classification and evaluates the model's ability to rank correctly. * **Logarithmic loss** demonstrates how close the prediction probability is to the actual value. It is a good metric for probabilistic problem statements. -![](<../../../images/2023109\_course\_module2.012.png>) +![](<../../../images/2023109\_course\_module2.012-min.png>) Methods to help visualize and understand classification quality metrics include: * [**Confusion matrix**](https://www.evidentlyai.com/classification-metrics/confusion-matrix) shows the number of correct predictions – true positives (TP) and true negatives (TN) – and the number of errors – false positives (FP) and false negatives (FN). You can calculate precision, recall, and F1-score based on these values. * **Class separation quality** helps visualize correct and incorrect predictions for each class. * **Error analysis**. You can also map predicted probabilities or model errors alongside feature values and explore if a specific type of misclassification is connected to the particular feature values. -![](<../../../images/2023109\_course\_module2.016.png>) +![](<../../../images/2023109\_course\_module2.016-min.png>) {% hint style="info" %} **Further reading:** [What is your model hiding? A tutorial on evaluating ML models](https://www.evidentlyai.com/blog/tutorial-2-model-evaluation-hr-attrition). {% endhint %}
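As a minimal sketch, the classification metrics above can be computed with scikit-learn; the toy `y_true`, `y_pred`, and `y_proba` values below are made-up placeholders for your model's labels and predictions.

```python
from sklearn.metrics import (
    accuracy_score, precision_score, recall_score, f1_score,
    roc_auc_score, log_loss, confusion_matrix,
)

# Illustrative only: replace these toy values with your labels and model outputs.
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]
y_proba = [0.2, 0.9, 0.4, 0.1, 0.8, 0.6, 0.7, 0.95]  # predicted probability of class 1

print("Accuracy:", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall:", recall_score(y_true, y_pred))
print("F1:", f1_score(y_true, y_pred))
print("ROC AUC:", roc_auc_score(y_true, y_proba))
print("Log loss:", log_loss(y_true, y_proba))

# Confusion matrix for a binary problem: returns [[TN, FP], [FN, TP]].
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TP={tp}, TN={tn}, FP={fp}, FN={fn}")
```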
## Regression quality metrics Regression models provide numerical output which is compared against actual valu * **Mean Absolute Percentage Error (MAPE)** averages all absolute errors in %. Works well for datasets with objects of different scales (i.e., tens, thousands, or millions). * **Symmetric MAPE** provides different penalties for over- and underestimation. -![](<../../../images/2023109\_course\_module2.020.png>) +![](<../../../images/2023109\_course\_module2.020-min.png>) Some of the methods to analyze and visualize regression model quality are: * **Predicted vs. Actual** value plots and Error over time plots help derive patterns in model predictions and behavior (e.g., Does the model tend to have bigger errors during weekends or hours of peak demand?). You can also map extreme errors alongside feature values and explore if a specific type of error is connected to the particular feature values. -![](<../../../images/2023109\_course\_module2.025.png>) +![](<../../../images/2023109\_course\_module2.025-min.png>) ## Ranking quality metrics We need to estimate the order of objects to measure quality in ranking tasks. So * **Recall @k** is the coverage of all relevant objects in the top-K results. * **Lift @k** reflects an improvement over random ranking. -![](<../../../images/2023109\_course\_module2.028.png>) +![](<../../../images/2023109\_course\_module2.028-min.png>) If you work on a recommender system, you might want to consider additional – “beyond accuracy” – metrics that reflect RecSys behavior.
Some examples are: * Serendipity diff --git a/docs/images/2023109_course_module1_fin_images.005-min.png b/docs/images/2023109_course_module1_fin_images.005-min.png new file mode 100644 index 0000000..a2e44d0 Binary files /dev/null and b/docs/images/2023109_course_module1_fin_images.005-min.png differ diff --git a/docs/images/2023109_course_module1_fin_images.005.png b/docs/images/2023109_course_module1_fin_images.005.png deleted file mode 100644 index 4d12db4..0000000 Binary files a/docs/images/2023109_course_module1_fin_images.005.png and /dev/null differ diff --git a/docs/images/2023109_course_module1_fin_images.008-min.png b/docs/images/2023109_course_module1_fin_images.008-min.png new file mode 100644 index 0000000..c198e9d Binary files /dev/null and b/docs/images/2023109_course_module1_fin_images.008-min.png differ diff --git a/docs/images/2023109_course_module1_fin_images.008.png b/docs/images/2023109_course_module1_fin_images.008.png deleted file mode 100644 index 615c721..0000000 Binary files a/docs/images/2023109_course_module1_fin_images.008.png and /dev/null differ diff --git a/docs/images/2023109_course_module1_fin_images.011-min.png b/docs/images/2023109_course_module1_fin_images.011-min.png new file mode 100644 index 0000000..a29dc42 Binary files /dev/null and b/docs/images/2023109_course_module1_fin_images.011-min.png differ diff --git a/docs/images/2023109_course_module1_fin_images.011.png b/docs/images/2023109_course_module1_fin_images.011.png deleted file mode 100644 index ddc7b52..0000000 Binary files a/docs/images/2023109_course_module1_fin_images.011.png and /dev/null differ diff --git a/docs/images/2023109_course_module1_fin_images.012-min.png b/docs/images/2023109_course_module1_fin_images.012-min.png new file mode 100644 index 0000000..37c04ae Binary files /dev/null and b/docs/images/2023109_course_module1_fin_images.012-min.png differ diff --git a/docs/images/2023109_course_module1_fin_images.012.png b/docs/images/2023109_course_module1_fin_images.012.png deleted file mode 100644 index e2fd6d3..0000000 Binary files a/docs/images/2023109_course_module1_fin_images.012.png and /dev/null differ diff --git a/docs/images/2023109_course_module1_fin_images.014-min.png b/docs/images/2023109_course_module1_fin_images.014-min.png new file mode 100644 index 0000000..16299b7 Binary files /dev/null and b/docs/images/2023109_course_module1_fin_images.014-min.png differ diff --git a/docs/images/2023109_course_module1_fin_images.015-min.png b/docs/images/2023109_course_module1_fin_images.015-min.png new file mode 100644 index 0000000..72df0b9 Binary files /dev/null and b/docs/images/2023109_course_module1_fin_images.015-min.png differ diff --git a/docs/images/2023109_course_module1_fin_images.015.png b/docs/images/2023109_course_module1_fin_images.015.png deleted file mode 100644 index 56e9507..0000000 Binary files a/docs/images/2023109_course_module1_fin_images.015.png and /dev/null differ diff --git a/docs/images/2023109_course_module1_fin_images.016-min.png b/docs/images/2023109_course_module1_fin_images.016-min.png new file mode 100644 index 0000000..bdf3d9a Binary files /dev/null and b/docs/images/2023109_course_module1_fin_images.016-min.png differ diff --git a/docs/images/2023109_course_module1_fin_images.016.png b/docs/images/2023109_course_module1_fin_images.016.png deleted file mode 100644 index 7da4bb8..0000000 Binary files a/docs/images/2023109_course_module1_fin_images.016.png and /dev/null differ diff --git a/docs/images/2023109_course_module1_fin_images.024-min.png 
b/docs/images/2023109_course_module1_fin_images.024-min.png new file mode 100644 index 0000000..05c77b6 Binary files /dev/null and b/docs/images/2023109_course_module1_fin_images.024-min.png differ diff --git a/docs/images/2023109_course_module1_fin_images.024.png b/docs/images/2023109_course_module1_fin_images.024.png deleted file mode 100644 index 27a13db..0000000 Binary files a/docs/images/2023109_course_module1_fin_images.024.png and /dev/null differ diff --git a/docs/images/2023109_course_module1_fin_images.028-min.png b/docs/images/2023109_course_module1_fin_images.028-min.png new file mode 100644 index 0000000..bff8fcb Binary files /dev/null and b/docs/images/2023109_course_module1_fin_images.028-min.png differ diff --git a/docs/images/2023109_course_module1_fin_images.030-min.png b/docs/images/2023109_course_module1_fin_images.030-min.png new file mode 100644 index 0000000..382b72e Binary files /dev/null and b/docs/images/2023109_course_module1_fin_images.030-min.png differ diff --git a/docs/images/2023109_course_module1_fin_images.030.png b/docs/images/2023109_course_module1_fin_images.030.png deleted file mode 100644 index fc0b228..0000000 Binary files a/docs/images/2023109_course_module1_fin_images.030.png and /dev/null differ diff --git a/docs/images/2023109_course_module1_fin_images.031-min.png b/docs/images/2023109_course_module1_fin_images.031-min.png new file mode 100644 index 0000000..1825a5b Binary files /dev/null and b/docs/images/2023109_course_module1_fin_images.031-min.png differ diff --git a/docs/images/2023109_course_module1_fin_images.031.png b/docs/images/2023109_course_module1_fin_images.031.png deleted file mode 100644 index 1266d83..0000000 Binary files a/docs/images/2023109_course_module1_fin_images.031.png and /dev/null differ diff --git a/docs/images/2023109_course_module1_fin_images.034-min.png b/docs/images/2023109_course_module1_fin_images.034-min.png new file mode 100644 index 0000000..a7546a1 Binary files /dev/null and b/docs/images/2023109_course_module1_fin_images.034-min.png differ diff --git a/docs/images/2023109_course_module1_fin_images.034.png b/docs/images/2023109_course_module1_fin_images.034.png deleted file mode 100644 index ec69872..0000000 Binary files a/docs/images/2023109_course_module1_fin_images.034.png and /dev/null differ diff --git a/docs/images/2023109_course_module1_fin_images.050-min.png b/docs/images/2023109_course_module1_fin_images.050-min.png new file mode 100644 index 0000000..f9eeba2 Binary files /dev/null and b/docs/images/2023109_course_module1_fin_images.050-min.png differ diff --git a/docs/images/2023109_course_module1_fin_images.050.png b/docs/images/2023109_course_module1_fin_images.050.png deleted file mode 100644 index 336407e..0000000 Binary files a/docs/images/2023109_course_module1_fin_images.050.png and /dev/null differ diff --git a/docs/images/2023109_course_module1_fin_images.052-min.png b/docs/images/2023109_course_module1_fin_images.052-min.png new file mode 100644 index 0000000..c78631b Binary files /dev/null and b/docs/images/2023109_course_module1_fin_images.052-min.png differ diff --git a/docs/images/2023109_course_module1_fin_images.052.png b/docs/images/2023109_course_module1_fin_images.052.png deleted file mode 100644 index f747fdf..0000000 Binary files a/docs/images/2023109_course_module1_fin_images.052.png and /dev/null differ diff --git a/docs/images/2023109_course_module1_fin_images.061-min.png b/docs/images/2023109_course_module1_fin_images.061-min.png new file mode 100644 index 0000000..c8ec10a 
Binary files /dev/null and b/docs/images/2023109_course_module1_fin_images.061-min.png differ diff --git a/docs/images/2023109_course_module1_fin_images.061.png b/docs/images/2023109_course_module1_fin_images.061.png deleted file mode 100644 index 2878351..0000000 Binary files a/docs/images/2023109_course_module1_fin_images.061.png and /dev/null differ diff --git a/docs/images/2023109_course_module1_fin_images.065-min.png b/docs/images/2023109_course_module1_fin_images.065-min.png new file mode 100644 index 0000000..ef8292f Binary files /dev/null and b/docs/images/2023109_course_module1_fin_images.065-min.png differ diff --git a/docs/images/2023109_course_module1_fin_images.065.png b/docs/images/2023109_course_module1_fin_images.065.png deleted file mode 100644 index 1cd59b7..0000000 Binary files a/docs/images/2023109_course_module1_fin_images.065.png and /dev/null differ diff --git a/docs/images/2023109_course_module1_fin_images.066-min.png b/docs/images/2023109_course_module1_fin_images.066-min.png new file mode 100644 index 0000000..c251cda Binary files /dev/null and b/docs/images/2023109_course_module1_fin_images.066-min.png differ diff --git a/docs/images/2023109_course_module1_fin_images.066.png b/docs/images/2023109_course_module1_fin_images.066.png deleted file mode 100644 index 7f6bde4..0000000 Binary files a/docs/images/2023109_course_module1_fin_images.066.png and /dev/null differ diff --git a/docs/images/2023109_course_module2.005-min.png b/docs/images/2023109_course_module2.005-min.png new file mode 100644 index 0000000..535ca4b Binary files /dev/null and b/docs/images/2023109_course_module2.005-min.png differ diff --git a/docs/images/2023109_course_module2.005.png b/docs/images/2023109_course_module2.005.png deleted file mode 100644 index d193652..0000000 Binary files a/docs/images/2023109_course_module2.005.png and /dev/null differ diff --git a/docs/images/2023109_course_module2.006-min.png b/docs/images/2023109_course_module2.006-min.png new file mode 100644 index 0000000..a209f78 Binary files /dev/null and b/docs/images/2023109_course_module2.006-min.png differ diff --git a/docs/images/2023109_course_module2.006.png b/docs/images/2023109_course_module2.006.png deleted file mode 100644 index b6ac49b..0000000 Binary files a/docs/images/2023109_course_module2.006.png and /dev/null differ diff --git a/docs/images/2023109_course_module2.007-min.png b/docs/images/2023109_course_module2.007-min.png new file mode 100644 index 0000000..bfe31ba Binary files /dev/null and b/docs/images/2023109_course_module2.007-min.png differ diff --git a/docs/images/2023109_course_module2.007.png b/docs/images/2023109_course_module2.007.png deleted file mode 100644 index 090d036..0000000 Binary files a/docs/images/2023109_course_module2.007.png and /dev/null differ diff --git a/docs/images/2023109_course_module2.009-min.png b/docs/images/2023109_course_module2.009-min.png new file mode 100644 index 0000000..691ddc7 Binary files /dev/null and b/docs/images/2023109_course_module2.009-min.png differ diff --git a/docs/images/2023109_course_module2.009.png b/docs/images/2023109_course_module2.009.png deleted file mode 100644 index 0c771a1..0000000 Binary files a/docs/images/2023109_course_module2.009.png and /dev/null differ diff --git a/docs/images/2023109_course_module2.012-min.png b/docs/images/2023109_course_module2.012-min.png new file mode 100644 index 0000000..e4b8c20 Binary files /dev/null and b/docs/images/2023109_course_module2.012-min.png differ diff --git 
a/docs/images/2023109_course_module2.012.png b/docs/images/2023109_course_module2.012.png deleted file mode 100644 index fdddf66..0000000 Binary files a/docs/images/2023109_course_module2.012.png and /dev/null differ diff --git a/docs/images/2023109_course_module2.016-min.png b/docs/images/2023109_course_module2.016-min.png new file mode 100644 index 0000000..b3757cd Binary files /dev/null and b/docs/images/2023109_course_module2.016-min.png differ diff --git a/docs/images/2023109_course_module2.016.png b/docs/images/2023109_course_module2.016.png deleted file mode 100644 index 05cadba..0000000 Binary files a/docs/images/2023109_course_module2.016.png and /dev/null differ diff --git a/docs/images/2023109_course_module2.020-min.png b/docs/images/2023109_course_module2.020-min.png new file mode 100644 index 0000000..ccfb475 Binary files /dev/null and b/docs/images/2023109_course_module2.020-min.png differ diff --git a/docs/images/2023109_course_module2.020.png b/docs/images/2023109_course_module2.020.png deleted file mode 100644 index a6a57ad..0000000 Binary files a/docs/images/2023109_course_module2.020.png and /dev/null differ diff --git a/docs/images/2023109_course_module2.025-min.png b/docs/images/2023109_course_module2.025-min.png new file mode 100644 index 0000000..abe416f Binary files /dev/null and b/docs/images/2023109_course_module2.025-min.png differ diff --git a/docs/images/2023109_course_module2.025.png b/docs/images/2023109_course_module2.025.png deleted file mode 100644 index 48ba4da..0000000 Binary files a/docs/images/2023109_course_module2.025.png and /dev/null differ diff --git a/docs/images/2023109_course_module2.028-min.png b/docs/images/2023109_course_module2.028-min.png new file mode 100644 index 0000000..a1a18b7 Binary files /dev/null and b/docs/images/2023109_course_module2.028-min.png differ diff --git a/docs/images/2023109_course_module2.028.png b/docs/images/2023109_course_module2.028.png deleted file mode 100644 index 6986f8f..0000000 Binary files a/docs/images/2023109_course_module2.028.png and /dev/null differ diff --git a/docs/images/2023109_course_module2.041-min.png b/docs/images/2023109_course_module2.041-min.png new file mode 100644 index 0000000..4032751 Binary files /dev/null and b/docs/images/2023109_course_module2.041-min.png differ diff --git a/docs/images/2023109_course_module2.041.png b/docs/images/2023109_course_module2.041.png deleted file mode 100644 index 0afed0a..0000000 Binary files a/docs/images/2023109_course_module2.041.png and /dev/null differ diff --git a/docs/images/2023109_course_module2.047-min.png b/docs/images/2023109_course_module2.047-min.png new file mode 100644 index 0000000..d2953dd Binary files /dev/null and b/docs/images/2023109_course_module2.047-min.png differ diff --git a/docs/images/2023109_course_module2.047.png b/docs/images/2023109_course_module2.047.png deleted file mode 100644 index 55033cd..0000000 Binary files a/docs/images/2023109_course_module2.047.png and /dev/null differ diff --git a/docs/images/2023109_course_module2.058-min.png b/docs/images/2023109_course_module2.058-min.png new file mode 100644 index 0000000..e097574 Binary files /dev/null and b/docs/images/2023109_course_module2.058-min.png differ diff --git a/docs/images/2023109_course_module2.058.png b/docs/images/2023109_course_module2.058.png deleted file mode 100644 index ab9d337..0000000 Binary files a/docs/images/2023109_course_module2.058.png and /dev/null differ diff --git a/docs/images/2023109_course_module2.060-min.png 
b/docs/images/2023109_course_module2.060-min.png new file mode 100644 index 0000000..11b7269 Binary files /dev/null and b/docs/images/2023109_course_module2.060-min.png differ diff --git a/docs/images/2023109_course_module2.060.png b/docs/images/2023109_course_module2.060.png deleted file mode 100644 index 1800100..0000000 Binary files a/docs/images/2023109_course_module2.060.png and /dev/null differ diff --git a/docs/images/2023109_course_module2.065-min.png b/docs/images/2023109_course_module2.065-min.png new file mode 100644 index 0000000..adf6edb Binary files /dev/null and b/docs/images/2023109_course_module2.065-min.png differ diff --git a/docs/images/2023109_course_module2.065.png b/docs/images/2023109_course_module2.065.png deleted file mode 100644 index 4c0eefb..0000000 Binary files a/docs/images/2023109_course_module2.065.png and /dev/null differ diff --git a/docs/images/2023109_course_module2.070-min.png b/docs/images/2023109_course_module2.070-min.png new file mode 100644 index 0000000..526ee28 Binary files /dev/null and b/docs/images/2023109_course_module2.070-min.png differ diff --git a/docs/images/2023109_course_module2.070.png b/docs/images/2023109_course_module2.070.png deleted file mode 100644 index 9fc8fc0..0000000 Binary files a/docs/images/2023109_course_module2.070.png and /dev/null differ diff --git a/docs/images/2023109_course_module2.071-min.png b/docs/images/2023109_course_module2.071-min.png new file mode 100644 index 0000000..866f4f6 Binary files /dev/null and b/docs/images/2023109_course_module2.071-min.png differ diff --git a/docs/images/2023109_course_module2.071.png b/docs/images/2023109_course_module2.071.png deleted file mode 100644 index c2e3ed5..0000000 Binary files a/docs/images/2023109_course_module2.071.png and /dev/null differ diff --git a/docs/images/2023109_course_module2.081-min.png b/docs/images/2023109_course_module2.081-min.png new file mode 100644 index 0000000..129ef10 Binary files /dev/null and b/docs/images/2023109_course_module2.081-min.png differ diff --git a/docs/images/2023109_course_module2.082-min.png b/docs/images/2023109_course_module2.082-min.png new file mode 100644 index 0000000..13c86cb Binary files /dev/null and b/docs/images/2023109_course_module2.082-min.png differ diff --git a/docs/images/2023109_course_module2.083-min.png b/docs/images/2023109_course_module2.083-min.png new file mode 100644 index 0000000..eddce83 Binary files /dev/null and b/docs/images/2023109_course_module2.083-min.png differ diff --git a/docs/images/2023109_course_module2.089-min.png b/docs/images/2023109_course_module2.089-min.png new file mode 100644 index 0000000..5dbbc3d Binary files /dev/null and b/docs/images/2023109_course_module2.089-min.png differ diff --git a/docs/images/2023109_course_module2.090-min.png b/docs/images/2023109_course_module2.090-min.png new file mode 100644 index 0000000..00bf5be Binary files /dev/null and b/docs/images/2023109_course_module2.090-min.png differ diff --git a/docs/images/2023109_course_module2.091-min.png b/docs/images/2023109_course_module2.091-min.png new file mode 100644 index 0000000..0bc8a93 Binary files /dev/null and b/docs/images/2023109_course_module2.091-min.png differ diff --git a/docs/images/2023109_course_module2.098-min.png b/docs/images/2023109_course_module2.098-min.png new file mode 100644 index 0000000..80ec18e Binary files /dev/null and b/docs/images/2023109_course_module2.098-min.png differ diff --git a/docs/images/2023109_course_module2.100-min.png 
b/docs/images/2023109_course_module2.100-min.png new file mode 100644 index 0000000..a39fe07 Binary files /dev/null and b/docs/images/2023109_course_module2.100-min.png differ diff --git a/docs/images/2023109_course_module2.101-min.png b/docs/images/2023109_course_module2.101-min.png new file mode 100644 index 0000000..af8c50c Binary files /dev/null and b/docs/images/2023109_course_module2.101-min.png differ diff --git a/docs/images/2023109_course_module2.102-min.png b/docs/images/2023109_course_module2.102-min.png new file mode 100644 index 0000000..065d6d8 Binary files /dev/null and b/docs/images/2023109_course_module2.102-min.png differ