
Commit

upload
momozzing committed Dec 4, 2023
1 parent dcf3c10 commit ebe3f8b
Showing 1 changed file with 2 additions and 3 deletions.
5 changes: 2 additions & 3 deletions _posts/2023-12-03-효율적인 LLM 학습전략.md
@@ -52,10 +52,9 @@ What is Quantization?

Conventionally, language model parameters have been represented in Float32; these are compressed to low-bit formats such as FP16, BF16, FP8, and NF4.

- ![Alt text](image.png)
- ![Alt text](image-1.png)
+ ![image](https://github.com/momozzing/KLUE-TOD/assets/60643542/04672591-7ef7-48d4-94c9-6edfef2a35a1)
+ ![image](https://github.com/momozzing/KLUE-TOD/assets/60643542/105db370-8b01-481e-8551-2ce1cdeb9df3)
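
A minimal sketch (not part of the original post or commit) of the low-bit compression described above, using the Hugging Face `transformers` + `bitsandbytes` stack to load weights in 4-bit NF4 with bfloat16 compute; the model name and config values are placeholder assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Quantize weights to 4-bit NF4 at load time; keep compute in bfloat16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NF4: 4-bit NormalFloat
    bnb_4bit_compute_dtype=torch.bfloat16,  # BF16 for matmuls/activations
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
)

# "facebook/opt-1.3b" is only an illustrative placeholder model.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-1.3b",
    quantization_config=bnb_config,
    device_map="auto",
)
```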

## **Parameter-Efficient Fine-Tuning (PEFT)**

