diff --git a/fine-tuning.md b/fine-tuning.md
index a720ebf..e16348c 100644
--- a/fine-tuning.md
+++ b/fine-tuning.md
@@ -94,7 +94,7 @@ Data preparation plays a big role in the fine-tuning process for vision based mo
[Dreambooth Image Generation Fine-Tuning](https://dreambooth.github.io)
```
-Models such as [Stable Diffusion](https://stability.ai/stable-diffusion) can also be tailored through fine-tuning to generate specific images. For instance, by supplying Stable Diffusion with a dataset of pet pictures and fine-tuning it, the model becomes capable of generating images of that particular pet in diverse styles.
+Models such as [Stable Diffusion](https://stability.ai/stable-image) can also be tailored through fine-tuning to generate specific images. For instance, by supplying Stable Diffusion with a dataset of pet pictures and fine-tuning it, the model becomes capable of generating images of that particular pet in diverse styles.
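+
+As a rough sketch of what using such a fine-tuned checkpoint might look like with the Hugging Face `diffusers` library (the local path `./pet-dreambooth` and the rare identifier token `sks` are hypothetical, standing in for whatever a DreamBooth-style training run produced):
+
+```python
+import torch
+from diffusers import StableDiffusionPipeline
+
+# Load the fine-tuned checkpoint (hypothetical local path) in half precision
+pipe = StableDiffusionPipeline.from_pretrained(
+    "./pet-dreambooth", torch_dtype=torch.float16
+).to("cuda")
+
+# "sks" is the identifier token bound to the pet during fine-tuning
+image = pipe("a watercolor painting of sks dog").images[0]
+image.save("pet-watercolor.png")
+```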
The dataset for fine-tuning an image generation model needs to contain two things:
diff --git a/index.md b/index.md
index c151a8c..285c05f 100644
--- a/index.md
+++ b/index.md
@@ -57,7 +57,7 @@ Spot something outdated or missing? Want to start a discussion? We welcome any o
- let us know in the comments at the end of each chapter
- [ create issues](https://docs.github.com/en/issues/tracking-your-work-with-issues/creating-an-issue)
-- [ open pull requests](https://docs.github.com/en/get-started/quickstart/contributing-to-projects)
+- [ open pull requests](https://docs.github.com/en/get-started/exploring-projects-on-github/contributing-to-a-project)
```
### Editing the Book
diff --git a/model-formats.md b/model-formats.md
index 6d15189..9bee119 100644
--- a/model-formats.md
+++ b/model-formats.md
@@ -11,7 +11,7 @@ Integration with Deep Learning Frameworks | 🟢 [most](onnx-support) | 🟡 [gr
Deployment Tools | 🟢 [yes](onnx-runtime) | 🔴 no | 🟢 [yes](triton-inference)
Interoperability | 🟢 [yes](onnx-interoperability) | 🔴 no | 🔴 [no](tensorrt-interoperability)
Inference Boost | 🟡 moderate | 🟢 good | 🟢 good
-Quantisation Support | 🟡 [good](onnx-quantisation) | 🟢 [good](ggml-quantisation) | 🟡 [moderate](tensorrt-quantisation)
+Quantisation Support | 🟢 [good](onnx-quantisation) | 🟢 [good](ggml-quantisation) | 🟡 [moderate](tensorrt-quantisation)
Custom Layer Support| 🟢 [yes](onnx-custom-layer) | 🔴 limited | 🟢 [yes](tensorrt-custom-layer)
Maintainer | [LF AI & Data Foundation](https://wiki.lfaidata.foundation) | https://github.com/ggerganov | https://github.com/NVIDIA
```
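+
+As a rough illustration of what "good" quantisation support means in practice on the ONNX side, here is a minimal sketch using ONNX Runtime's dynamic quantisation API (`model.onnx` is a hypothetical model exported beforehand):
+
+```python
+from onnxruntime.quantization import QuantType, quantize_dynamic
+
+# Rewrite the model with its weights dynamically quantised to int8
+quantize_dynamic(
+    model_input="model.onnx",
+    model_output="model.int8.onnx",
+    weight_type=QuantType.QInt8,
+)
+```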
diff --git a/references.bib b/references.bib
index 99a926d..797c55c 100644
--- a/references.bib
+++ b/references.bib
@@ -451,7 +451,7 @@ @online{octoml-fine-tuning
title={The beginner's guide to fine-tuning Stable Diffusion},
author={Justin Gage},
year=2023,
-url={https://octoml.ai/blog/the-beginners-guide-to-fine-tuning-stable-diffusion}
+url={https://octo.ai/blog/the-beginners-guide-to-fine-tuning-stable-diffusion}
}
@article{small-data-tds,
title={Is "Small Data" The Next Big Thing In Data Science?},
diff --git a/references.md b/references.md
index a07ce77..47148e5 100644
--- a/references.md
+++ b/references.md
@@ -7,7 +7,6 @@
- "Catching up on the weird world of LLMs" (summary of the last few years) https://simonwillison.net/2023/Aug/3/weird-world-of-llms
- "Open challenges in LLM research" (exciting post title but mediocre content) https://huyenchip.com/2023/08/16/llm-research-open-challenges.html
-- https://github.com/zeno-ml/zeno-build/tree/main/examples/analysis_gpt_mt/report
- "Patterns for Building LLM-based Systems & Products" (Evals, RAG, fine-tuning, caching, guardrails, defensive UX, and collecting user feedback) https://eugeneyan.com/writing/llm-patterns
```{figure-md} llm-patterns
diff --git a/sdk.md b/sdk.md
index 4ccdc17..b13dbc3 100644
--- a/sdk.md
+++ b/sdk.md
@@ -178,7 +178,7 @@ LLaMAIndex seems more tailor made for deploying LLM apps in production. However,
![banner](https://litellm.vercel.app/img/docusaurus-social-card.png)
-As the name suggests a light package that simplifies the task of getting the responses form multiple APIs at the same time without having to worry about the imports is known as the [LiteLLM](https://litellm.ai). It is available as a python package which can be accessed using `pip` Besides we can test the working of the library using the [playground](https://litellm.ai/playground) that is readily available.
+As the name suggests, [LiteLLM](https://docs.litellm.ai) is a lightweight package that simplifies getting responses from multiple LLM APIs through a single interface, without having to worry about provider-specific imports. It is available as a Python package that can be installed using `pip`.
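+
+A minimal sketch of that unified interface (assuming `pip install litellm` and the relevant provider key, e.g. `OPENAI_API_KEY`, set in the environment; the model name is just an example):
+
+```python
+from litellm import completion
+
+messages = [{"role": "user", "content": "Hello, world"}]
+
+# The call shape stays the same across providers; only the model string changes
+response = completion(model="gpt-3.5-turbo", messages=messages)
+print(response["choices"][0]["message"]["content"])
+```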
### Completions
diff --git a/unaligned-models.md b/unaligned-models.md
index 2dd3c71..ff678be 100644
--- a/unaligned-models.md
+++ b/unaligned-models.md
@@ -19,7 +19,7 @@ Model | Reference Model | Training Data | Features
[](#fraudgpt) | 🔴 unknown | 🔴 unknown | Phishing email, {term}`BEC`, Malicious Code, Undetectable Malware, Find vulnerabilities, Identify Targets
[](#wormgpt) | 🟢 [](models.md#gpt-j-6b) | 🟡 malware-related data | Phishing email, {term}`BEC`
[](#poisongpt) | 🟢 [](models.md#gpt-j-6b) | 🟡 false statements | Misinformation, Fake news
-[](#wizardlm-uncensored) | 🟢 [](models.md#wizardlm) | 🟢 [available](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered) | Uncensored
+[](#wizardlm-uncensored) | 🟢 [](models.md#wizardlm) | 🟢 [available](https://huggingface.co/datasets/cognitivecomputations/wizard_vicuna_70k_unfiltered) | Uncensored
[](#falcon-180b) | 🟢 N/A | 🟡 partially [available](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | Unaligned
```
@@ -109,10 +109,10 @@ Model Censoring {cite}`erichartford-uncensored`
Uncensoring {cite}`erichartford-uncensored`, however, takes a different route, aiming to identify and
eliminate these alignment-driven restrictions while retaining valuable knowledge. In the case of
-[WizardLM Uncensored](https://huggingface.co/ehartford/WizardLM-7B-Uncensored), it closely follows the uncensoring
+[WizardLM Uncensored](https://huggingface.co/cognitivecomputations/WizardLM-7B-Uncensored), it closely follows the uncensoring
methods initially devised for models like [](models.md#vicuna), adapting the script
used for [Vicuna](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) to work seamlessly with
-[WizardLM's dataset](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered).
+[WizardLM's dataset](https://huggingface.co/datasets/cognitivecomputations/WizardLM_alpaca_evol_instruct_70k_unfiltered).
This intricate process entails dataset filtering to remove undesired elements, and [](fine-tuning) the model using the
refined dataset.
@@ -125,9 +125,9 @@ For a comprehensive, step-by-step explanation with working code see this blog: {
Similar models have been made available:
-- [WizardLM 30B-Uncensored](https://huggingface.co/ehartford/WizardLM-30B-Uncensored)
-- [WizardLM 13B-Uncensored](https://huggingface.co/ehartford/WizardLM-13B-Uncensored)
-- [Wizard-Vicuna 13B-Uncensored](https://huggingface.co/ehartford/Wizard-Vicuna-13B-Uncensored)
+- [WizardLM 30B-Uncensored](https://huggingface.co/cognitivecomputations/WizardLM-30B-Uncensored)
+- [WizardLM 13B-Uncensored](https://huggingface.co/cognitivecomputations/WizardLM-13B-Uncensored)
+- [Wizard-Vicuna 13B-Uncensored](https://huggingface.co/cognitivecomputations/Wizard-Vicuna-13B-Uncensored)
### Falcon 180B