
Adding LoKrModel Class to paddle.peft library #9269

Open · wants to merge 11 commits into base: develop

Conversation

@WhuanY commented Oct 15, 2024

PR types

New features

PR changes

Others

Description

Adds LoKrModel, LoKrLinear, and LoKrConfig to support a new LoRA-like adapter. The current implementation only supports Linear modules. Motivation and discussion for this PR are in issue #9226.

Please provide suggestions on the current implementation!
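For context, LoKr parameterizes the weight update of a Linear layer as a Kronecker product of two much smaller matrices. A minimal numpy sketch of the idea (illustrative only, not the PR's actual API):

```python
import numpy as np

def lokr_delta(w1, w2):
    """LoKr weight update: delta_W = w1 ⊗ w2 (Kronecker product).

    A (p*r) x (q*s) update is stored with only p*q + r*s parameters
    instead of (p*r) * (q*s) dense parameters.
    """
    return np.kron(w1, w2)

# Example: a 768 x 768 update built from two small factors.
w1 = 0.01 * np.random.randn(16, 16)
w2 = 0.01 * np.random.randn(48, 48)
delta = lokr_delta(w1, w2)

print(delta.shape)        # (768, 768)
print(w1.size + w2.size)  # 2560 trainable parameters vs 589824 dense
```

This is the same parameter-efficiency argument as LoRA, but with a Kronecker factorization instead of a low-rank product.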

paddle-bot bot commented Oct 15, 2024

Thanks for your contribution!

codecov bot commented Oct 15, 2024

Codecov Report

Attention: Patch coverage is 76.66667% with 84 lines in your changes missing coverage. Please review.

Project coverage is 53.04%. Comparing base (1fc9429) to head (4723293).
Report is 86 commits behind head on develop.

Files with missing lines              Patch %   Lines
paddlenlp/peft/lokr/lokr_model.py     78.53%    41 Missing ⚠️
paddlenlp/peft/lokr/lokr_layers.py    65.04%    36 Missing ⚠️
paddlenlp/trainer/trainer.py          20.00%     4 Missing ⚠️
paddlenlp/peft/lokr/lokr_config.py    94.44%     3 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #9269      +/-   ##
===========================================
+ Coverage    53.02%   53.04%   +0.02%     
===========================================
  Files          657      680      +23     
  Lines       106311   108195    +1884     
===========================================
+ Hits         56367    57388    +1021     
- Misses       49944    50807     +863     


WhuanY commented Oct 16, 2024

@DesmonDay

As suggested, I have submitted a LoKr implementation containing only Linear layers for your overall review. When you have time, could you please review it and point out what needs to be changed?

@greycooker (Contributor)
Understood. We will review it as soon as possible.


# This module is aligned with the code design paradigm of ...utils.env

LOKR_WEIGHTS_NAME = "lokr_model_state.pdparams"
@WhuanY (Author): done this

@@ -0,0 +1,19 @@
# Copyright 2023-present the HuggingFace Inc. team.

Note that the copyright notice needs to be updated.

@WhuanY (Author): done this

paddlenlp/peft/lokr/__init__.py
def add_lora_split_mapping(self, module_name, is_column=False):
self.lora_split_mapping[module_name] = is_column

def _get_tensor_parallel_mappings(self, config, is_split=True):

Since tensor parallel and pipeline parallel are not used, please remove the related unused logic for now.

@WhuanY (Author): done this

self.quantized = True
if lokr_module is None:
raise ValueError("LoKr strategy only supports paddle.nn.Linear right now")
if getattr(lokr_module, "quant_weight", None) is not None:

Please also remove the unused quantization-related logic for now.

@WhuanY (Author): done this

if any(isinstance(layer, lokr_layer) for lokr_layer in AVAILABLE_LAYERS):
layer.disable_lokr = True

def enable_lokr(self):

You can refer to the LoRA implementation and add a disable_lokr parameter in lora_layer.
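The suggested disable_lokr switch (mirroring LoRA's disable mechanism) could gate the adapter delta in the forward pass roughly as follows; the class and attribute layout below is a toy illustration, not the PR's actual code:

```python
import numpy as np

class ToyLoKrLinear:
    """Toy stand-in for a LoKr-wrapped Linear layer (names hypothetical):
    a frozen base weight plus a Kronecker-product delta gated by disable_lokr."""

    def __init__(self, weight, w1, w2):
        self.weight = weight        # frozen base weight, shape (in, out)
        self.w1, self.w2 = w1, w2   # small LoKr factors
        self.disable_lokr = False   # the flag the reviewer suggests

    def forward(self, x):
        out = x @ self.weight
        if not self.disable_lokr:   # skip the adapter when disabled
            out = out + x @ np.kron(self.w1, self.w2)
        return out

layer = ToyLoKrLinear(np.eye(4), np.ones((2, 2)), 0.1 * np.ones((2, 2)))
x = np.ones((1, 4))
y_on = layer.forward(x)             # base output plus adapter delta
layer.disable_lokr = True
y_off = layer.forward(x)            # pure base-layer output
```

With the flag set, the layer behaves exactly like the original Linear, which is useful for comparing outputs with and without the adapter.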

WhuanY commented Oct 17, 2024

Thanks for your hard work! This is my first time contributing to open source, so there may be quite a few mistakes. I will fix the issues and resubmit as soon as possible for your review. Have a good day!

lugimzzz closed this Oct 22, 2024
lugimzzz reopened this Oct 22, 2024
lugimzzz commented Oct 22, 2024

> Thanks for your hard work! This is my first time contributing to open source, so there may be quite a few mistakes. I will fix the issues and resubmit as soon as possible for your review. Have a good day!

Thank you for contributing to PaddleNLP. We warmly welcome community developers to join the development of PaddleNLP. I will review as soon as the code is resubmitted, and I look forward to seeing this submission merged into the project soon!

@lugimzzz
When it is ready for another review, please let me know and I will start reviewing as soon as possible.

WhuanY commented Oct 30, 2024

> When it is ready for another review, please let me know and I will start reviewing as soon as possible.

As requested, I have:

  1. Removed the parallelism logic that is not involved for now;
  2. Added the disable_lokr parameter and the corresponding methods;
  3. Added test/peft/lokr_model.py and passed the basic tests;
  4. Updated parts of LoKrLinear based on bugs found during testing, including resetting the initialization scheme and fixing a forward-pass bug.

What I can think of to do next:

  1. Support LoKrModel in unified_checkpoint;
  2. Add a parameter-merging script for this adapter.

Please let me know of any problems or directions for improvement. Thanks for your hard work!
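A parameter-merging script (item 2 above) would essentially fold the adapter delta back into the base weight, after which the extra adapter layers can be dropped. A hedged numpy sketch; merge_lokr and scale are illustrative names, not the PR's API:

```python
import numpy as np

def merge_lokr(base_weight, w1, w2, scale=1.0):
    """Fold the LoKr update into the base weight:
    W_merged = W + scale * (w1 ⊗ w2).
    After merging, inference can use a plain Linear layer."""
    return base_weight + scale * np.kron(w1, w2)

base = np.zeros((4, 4))
merged = merge_lokr(base, np.eye(2), np.eye(2), scale=0.5)
# kron(I2, I2) = I4, so merged == 0.5 * I4 here
```

This mirrors how LoRA merging works (W + scale * B @ A), with the low-rank product replaced by the Kronecker product.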

3 participants