Suggested Learning Resources

Contents

How to Leverage LLMs for Building Various Copilot Applications

To integrate Large Language Models (LLMs) into app development effectively, it's crucial to have a solid understanding of how they work. This section therefore presents a list of learning resources; for the best results, work through the courses in the sequence given below.

  • Base Model Knowledge: LLM base models are trained on vast amounts of publicly available data and can therefore be used as-is to build copilots for many use cases. This is achieved through prompt engineering, either with simple prompts or with few-shot learning (providing examples in the prompt); a minimal prompting sketch follows this list. A few of the possibilities (by no means an exhaustive list) for copilots you can build:
    • Code generation:
      • Generate code snippets or complete functions based on natural language queries or specifications.
      • Provide general programming syntax and logic, while the user data can provide domain-specific details and preferences.
      • Analyze the time complexity of code.
      • Generate documentation for the code.
    • Content creation:
      • Create engaging and informative content for blogs, websites, social media, or newsletters based on keywords, topics, or prompts.
      • Generate content with linguistic fluency and diversity, while the user can provide style, tone, and audience awareness.
    • Data analysis:
      • Execute data analysis tasks such as data cleaning, visualization, exploration, or modeling based on natural language commands or questions.
      • Provide general statistical analysis and report generation from the structured data provided by the user along with suitable prompting.
    • Text mining:
      • Extract relevant entities, concepts, relations, or events from a given text or document, such as names, dates, locations, topics, sentiments, or opinions.
    • Text Categorization & Summarization:
      • Summarize text data such as product and service reviews, and categorize it into segments based on the associated descriptions or text.
  • LLM On/With your data
    1. RAG (retrieval-augmented generation) pattern (a minimal sketch follows this list):
    2. Fine-tuning (To do)
      • Full fine-tuning
      • Parameter-efficient fine-tuning (PEFT)
    3. Hybrid approach (RAG + fine-tuning) (To do)
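
As a concrete illustration of the prompt-engineering approach in the Base Model Knowledge bullet above, here is a minimal few-shot prompting sketch for text categorization. It assumes the `openai` Python package (v1.x) and an `OPENAI_API_KEY` environment variable; the model name is a placeholder, and any chat-completion API would work the same way.

```python
# Minimal few-shot prompting sketch: categorize a product review.
# Assumes the openai Python package (v1.x) and OPENAI_API_KEY are set;
# the model name below is a placeholder; substitute whichever model you use.
from openai import OpenAI

client = OpenAI()

FEW_SHOT_EXAMPLES = [
    ("The battery died after two days.", "negative"),
    ("Setup took five minutes and it just works.", "positive"),
]

def categorize_review(review: str) -> str:
    # Build the prompt: an instruction, a few labeled examples, then the new input.
    messages = [{"role": "system",
                 "content": "Classify each product review as positive or negative. Reply with one word."}]
    for text, label in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": text})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": review})

    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content.strip()

print(categorize_review("Great value, but shipping was slow."))
```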
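
The RAG pattern above is only listed as a pointer, so here is a deliberately tiny sketch of the idea under the same assumptions (the `openai` package, placeholder model names): embed a few documents, retrieve the one closest to the question by cosine similarity, and pass it to the model as context. A real application would index the documents once in a vector database rather than re-embedding them per query.

```python
# Toy RAG sketch: embed documents, retrieve the most similar one, answer with it as context.
# Assumes the openai Python package (v1.x); model and embedding names are placeholders.
import math
from openai import OpenAI

client = OpenAI()

DOCS = [
    "Our return policy allows refunds within 30 days of purchase.",
    "Support is available by chat from 9am to 5pm on weekdays.",
]

def embed(text: str) -> list[float]:
    return client.embeddings.create(model="text-embedding-3-small", input=text).data[0].embedding

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def answer(question: str) -> str:
    doc_vectors = [(doc, embed(doc)) for doc in DOCS]  # index step (normally done once, offline)
    q_vec = embed(question)
    context = max(doc_vectors, key=lambda dv: cosine(q_vec, dv[1]))[0]  # retrieve the closest document
    messages = [
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context: {context}\n\nQuestion: {question}"},
    ]
    return client.chat.completions.create(model="gpt-4o-mini", messages=messages).choices[0].message.content

print(answer("How long do I have to return an item?"))
```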

LLM

  • For an amazing introductory course on LLMs, check out the DeepLearning.AI course Generative AI with Large Language Models (20+ h). This course provides a strong foundation in understanding LLMs, including the transformer architecture from the "Attention Is All You Need" paper, how language models work, embeddings, and the challenges of working with large language models. It also covers fine-tuning strategies and how smaller fine-tuned models can achieve comparable or even better results than larger models.

    [Figure: the transformer architecture from "Attention Is All You Need"]

    The course also walks through the generative AI project lifecycle.

    [Figure: the generative AI project lifecycle]

  • If you prefer a shorter introduction, you can watch the Intro to Large Language Models (1h) video.

App Development

DeepLearning.AI courses are recommended often in this section. DeepLearning.AI, led by a team of instructors from Stanford University and deeplearning.ai, offers a comprehensive platform for learning AI development, with a wide range of courses, specializations, and professional certificates.

  • Prompt engineering refers to the practice of crafting and optimizing input prompts by selecting appropriate words, phrases, sentences, punctuation, and separator characters to effectively use LLMs for a wide variety of applications. In other words, prompt engineering is the art of communicating with an LLM. Refer to these resources to learn more about prompt engineering:

  • There are two main approaches to sending prompts to an LLM:

    • Leveraging API libraries
    • Using SDKs/Frameworks that utilize API libraries

    We will review API libraries and SDKs/frameworks later in this document, but for now, complete some of the following courses to get a better understanding of how to send prompts to LLMs (a short comparison sketch follows):
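
To make the distinction concrete, the sketch below sends the same prompt both ways: once through the vendor's API library (the `openai` package) and once through a framework that wraps such libraries (LangChain's `langchain-openai` integration). Package and model names are placeholders, and exact APIs vary by version.

```python
# Same prompt, two routes. Assumes the openai and langchain-openai packages are installed;
# model names are placeholders and APIs may differ across versions.

# 1) Directly through the vendor's API library.
from openai import OpenAI

client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Explain retrieval-augmented generation in one sentence."}],
)
print(reply.choices[0].message.content)

# 2) Through an SDK/framework that wraps the API library.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")
print(llm.invoke("Explain retrieval-augmented generation in one sentence.").content)
```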

Quality

Verifying the quality of responses generated by LLMs in applications is an emerging area in software development. The courses listed below offer valuable insights into this process:
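
One common pattern taught in such courses is to pair cheap rule-based checks with an LLM-as-judge scoring step. The sketch below shows only the rule-based half so it stays dependency-free; the judge prompt is left as a string you would send through whichever client you already use. Function names and thresholds are illustrative, not taken from any particular evaluation library.

```python
# Minimal, dependency-free response checks for an LLM-powered app.
# Names and thresholds here are illustrative, not from a specific eval library.

def check_response(question: str, answer: str, banned_phrases=("as an AI language model",)) -> dict:
    """Run cheap deterministic checks before (or alongside) an LLM-as-judge call."""
    return {
        "non_empty": bool(answer.strip()),
        "not_too_long": len(answer) <= 2000,
        "no_banned_phrases": not any(p.lower() in answer.lower() for p in banned_phrases),
        "mentions_question_terms": any(w.lower() in answer.lower()
                                       for w in question.split() if len(w) > 4),
    }

# An LLM-as-judge prompt you could send through any chat client for a 1-5 score.
JUDGE_PROMPT = (
    "Rate the ANSWER for factual consistency with the QUESTION on a 1-5 scale. "
    "Reply with the number only.\n\nQUESTION: {question}\n\nANSWER: {answer}"
)

print(check_response("How long is the refund window?", "Refunds are accepted within 30 days."))
```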

Introductory courses on vector databases
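
For orientation before taking these courses: at their core, vector databases store embeddings and answer nearest-neighbour queries over them. The sketch below does the same thing in plain Python with NumPy and cosine similarity over a tiny in-memory store, just to show the operation a vector DB accelerates; the vectors are made up for illustration.

```python
# What a vector DB does, in miniature: store vectors, return the top-k nearest to a query.
# The vectors are tiny made-up examples; real embeddings have hundreds or thousands of dimensions.
import numpy as np

store = {
    "refund policy": np.array([0.9, 0.1, 0.0]),
    "support hours": np.array([0.1, 0.8, 0.2]),
    "shipping times": np.array([0.2, 0.2, 0.9]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def top_k(query_vec: np.ndarray, k: int = 2):
    # Score every stored vector against the query and keep the k most similar.
    scored = sorted(store.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [(name, round(cosine(query_vec, vec), 3)) for name, vec in scored[:k]]

print(top_k(np.array([0.85, 0.15, 0.05])))  # closest entries to the query embedding
```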

For more great courses on LLMs, visit DeepLearning.AI Short Courses. DeepLearning.AI constantly adds new courses, so it is recommended to check back often. As of early 2024, it remains one of the foundational sources for learning about LLMs and building apps that leverage them.

Additional resources