Lintergit add docs/ai-chat.md !
I-I-IT committed Nov 11, 2024
1 parent f0becda commit ad3f20c
Showing 1 changed file with 1 addition and 1 deletion.
docs/ai-chat.md: 2 changes (1 addition & 1 deletion)
@@ -10,7 +10,7 @@ cover: ai-chatbots.webp
- [:material-account-cash: Surveillance Capitalism](basics/common-threats.md#surveillance-as-a-business-model){ .pg-brown }
- [:material-close-outline: Censorship](basics/common-threats.md#avoiding-censorship){ .pg-blue-gray }

Since the release of **ChatGPT** in 2022, interacting with **Large Language Models** (*LLMs*) has become common. **LLMs can help us** write better, understand unfamiliar subjects, or answer a wide range of questions.
Trained on vast amounts of data scraped from the web, they statistically predict the next word. However, to improve the quality of LLMs, developers of AI software often use [Reinforcement Learning from Human Feedback](https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback). This means AI **companies** might **read your private AI chats** to evaluate and **improve their models**. It also means those private conversations must be stored, which introduces a risk of **data breaches**. Furthermore, there is a real possibility that the LLM will leak your private chat information in future conversations with other users. To mitigate these problems, you can use trusted, privacy-focused providers or run AI models locally so that your data never leaves your device.
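
The idea that an LLM "statistically predicts the next word", and that this can happen entirely on your own hardware, can be illustrated with a short sketch. The example below is illustrative only: it assumes the Hugging Face `transformers` library and the small `gpt2` model, which are not recommendations from this guide. After the one-time model download, inference runs entirely on your device, so the prompt is never sent to a remote provider.

```python
# Minimal sketch of local next-word prediction (assumes `transformers` and `torch`).
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "gpt2"  # illustrative choice; any locally stored causal LM works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Large Language Models statistically predict the next"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every token in the vocabulary

# Pick the most likely next token according to the model's learned statistics.
next_token_id = int(logits[0, -1].argmax())
print(tokenizer.decode([next_token_id]))
```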

<details class="admonition info" markdown>
