minor fixes #235

Merged: 5 commits, Jun 21, 2024
15 changes: 7 additions & 8 deletions ch05/01_main-chapter-code/exercise-solutions.ipynb
@@ -660,7 +660,7 @@
"metadata": {},
"outputs": [],
"source": [
"from gpt_generate import assign, load_weights_into_gpt\n",
"from gpt_generate import load_weights_into_gpt\n",
"\n",
"\n",
"device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
@@ -788,10 +788,10 @@
"NEW_CONFIG.update({\"context_length\": 1024, \"qkv_bias\": True})\n",
"\n",
"gpt = GPTModel(NEW_CONFIG)\n",
"gpt.eval();\n",
"gpt.eval()\n",
"\n",
"load_weights_into_gpt(gpt, params)\n",
"gpt.to(device);\n",
"gpt.to(device)\n",
"\n",
"torch.manual_seed(123)\n",
"train_loss = calc_loss_loader(train_loader, gpt, device)\n",
@@ -816,15 +816,15 @@
"source": [
"In the main chapter, we experimented with the smallest GPT-2 model, which has only 124M parameters. The reason was to keep the resource requirements as low as possible. However, you can easily experiment with larger models with minimal code changes. For example, instead of loading the 1558M instead of 124M model in chapter 5, the only 2 lines of code that we have to change are\n",
"\n",
"```\n",
"```python\n",
"settings, params = download_and_load_gpt2(model_size=\"124M\", models_dir=\"gpt2\")\n",
"model_name = \"gpt2-small (124M)\"\n",
"```\n",
"\n",
"The updated code becomes\n",
"\n",
"\n",
"```\n",
"```python\n",
"settings, params = download_and_load_gpt2(model_size=\"1558M\", models_dir=\"gpt2\")\n",
"model_name = \"gpt2-xl (1558M)\"\n",
"```"
@@ -907,8 +907,7 @@
"metadata": {},
"outputs": [],
"source": [
"from gpt_generate import generate, text_to_token_ids, token_ids_to_text\n",
"from previous_chapters import generate_text_simple"
"from gpt_generate import generate, text_to_token_ids, token_ids_to_text"
]
},
{
@@ -958,7 +957,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
"version": "3.10.11"
}
},
"nbformat": 4,
2 changes: 1 addition & 1 deletion ch07/02_dataset-utilities/README.md
@@ -18,7 +18,7 @@ The `find-near-duplicates.py` function can be used to identify duplicates and ne



-```python
+```bash
python find-near-duplicates.py --json_file instruction-examples.json
```
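The core idea behind a near-duplicate check like this can be sketched with TF-IDF vectors and pairwise cosine similarity. The JSON key, threshold, and overall structure below are illustrative assumptions, not the repository's actual `find-near-duplicates.py`:

```python
import json

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def find_near_duplicates(json_file, key="instruction", threshold=0.9):
    # Compare every pair of entries by TF-IDF cosine similarity
    with open(json_file, "r", encoding="utf-8") as f:
        entries = [item[key] for item in json.load(f)]
    tfidf = TfidfVectorizer().fit_transform(entries)
    sim = cosine_similarity(tfidf)
    pairs = []
    for i in range(len(entries)):
        for j in range(i + 1, len(entries)):
            if sim[i, j] >= threshold:
                pairs.append((entries[i], entries[j], float(sim[i, j])))
    return pairs


if __name__ == "__main__":
    for a, b, score in find_near_duplicates("instruction-examples.json"):
        print(f"{score:.2f}\n  {a}\n  {b}")
```

A higher threshold (e.g., 0.95) flags only almost-identical entries, while a lower one also surfaces paraphrases; exact duplicates score 1.0.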

4 changes: 2 additions & 2 deletions setup/02_installing-python-libraries/README.md
@@ -6,14 +6,14 @@ I used the following libraries listed [here](https://github.com/rasbt/LLMs-from-

To install these requirements most conveniently, you can use the `requirements.txt` file in the root directory for this code repository and execute the following command:

-```
+```bash
pip install -r requirements.txt
```


Then, after completing the installation, please check if all the packages are installed and are up to date using

-```
+```bash
python python_environment_check.py
```
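As a rough idea of what such a check might involve (the repository's `python_environment_check.py` may work differently), one could compare the installed package versions against the pins in `requirements.txt`:

```python
from importlib.metadata import PackageNotFoundError, version

from packaging.requirements import Requirement

# Read requirements.txt; assumes one requirement specifier per line,
# with blank lines and '#' comments allowed
with open("requirements.txt") as f:
    specs = [ln.split("#", 1)[0].strip() for ln in f]
    reqs = [Requirement(spec) for spec in specs if spec]

for req in reqs:
    try:
        installed = version(req.name)
    except PackageNotFoundError:
        print(f"[MISSING]  {req.name}")
        continue
    ok = req.specifier.contains(installed, prereleases=True)
    status = "OK" if ok else "OUTDATED"
    print(f"[{status}]  {req.name} {installed} (required: {req.specifier or 'any'})")
```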
