Reshape introduction and welcome episodes #26
🆗 Pre-flight checks passed 😃 This pull request has been checked and contains no modified workflow files or spoofing. Results of any additional workflows will appear here when they are done.
Nice changes! I left some small comments here and there.
You could consider moving the welcome to index.md, similar to: https://github.com/carpentries-incubator/deep-learning-intro/blob/2edad370a4e1639f57804324e398b8762cb9a11a/index.md?plain=1#L2 which is rendered like this.
You could also consider adding a prerequisites box like in that lesson, but maybe that already exists somewhere in your lesson or is planned in an issue.
episodes/00-welcome.md
- Using natural language to produce a desired response from a large language model (LLM), i.e. prompt engineering
- Other?

## Datasets
I think this should be part of the setup instructions. Or do you have a good reason for not putting it in there?
I think my idea was (similar to the intro to deep learning lesson) to provide people with some level of context on the datasets and links to download them, so I'll make this explicit.
@svenvanderburg @laurasootes and I have reshaped the first episode. We have merged preprocessing and word embeddings and reframed both around the task of tracing the semantic shift of three Dutch words from 1950 to 1989: ijzeren, televisie and mobiel. I think the task is quite nice and it works well, but accomplishing it requires a lot of background info, which Laura and I have now condensed into one episode. It could however be streamlined even more, with your help! Note that the task is not trivial at all, and we have decided to follow the structure of the deep learning lesson and squeeze it into one episode only. The reason for this is that the next episodes (transformers BERT, transformers genAI) will build upon the preprocessing and word embeddings concepts we have introduced here, so I don't think we can unpack this information further than that. Let me know if this episode is clear or whether it requires further refinement.
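For context, the kind of comparison the episode builds toward could be sketched roughly like this. This is a toy illustration only: the vectors and vocabulary below are made-up values, not embeddings trained on the actual decade corpora, and the helper names are hypothetical.

```python
# Toy sketch of tracing semantic shift with per-decade word embeddings.
# All vectors here are invented 3-d values for illustration; in the lesson,
# embeddings would be trained on real corpora per time period.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings of 'mobiel' and two anchor words in two decades
emb_1950 = {"mobiel": np.array([0.9, 0.1, 0.0]),
            "beweegbaar": np.array([0.8, 0.2, 0.1]),   # 'movable'
            "telefoon": np.array([0.1, 0.9, 0.2])}     # 'telephone'
emb_1980 = {"mobiel": np.array([0.2, 0.9, 0.1]),       # drifted meaning
            "beweegbaar": np.array([0.8, 0.2, 0.1]),
            "telefoon": np.array([0.1, 0.9, 0.2])}

# Semantic shift shows up as a change in the word's nearest neighbour
for decade, emb in (("1950s", emb_1950), ("1980s", emb_1980)):
    sims = {w: cosine(emb["mobiel"], v) for w, v in emb.items() if w != "mobiel"}
    closest = max(sims, key=sims.get)
    print(decade, "closest neighbour of 'mobiel':", closest)
# 1950s closest neighbour of 'mobiel': beweegbaar
# 1980s closest neighbour of 'mobiel': telefoon
```

With real corpora one would train one embedding model per period and compare a target word's nearest neighbours across periods, which is the intuition the merged episode introduces.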
Forgot to add that the file you should look at is called
@n400peanuts I am in quite a busy period, so I will postpone this until the week of 9 December, I hope that is OK? You can also go ahead and merge this if it blocks your progress. I can always do a general review of episode 1 later.
This PR solves issue #24. The introduction and welcome parts have been simplified and redundant info has been cut. This has resulted in an effective merge of the intro and welcome into a single episode. Moreover, this episode now immediately introduces an exercise to kick off the course on a good note.
Note that the dataset part is still not complete, as the transformers and LLM episodes are still in development and their datasets are not finalised yet.