Feature request: support more padding method #813

Open

Soptq opened this issue Jul 28, 2024 · 1 comment

Comments

Soptq commented Jul 28, 2024

Feature request

Currently, only zero padding is supported in Concrete ML. Is it possible to also support other padding methods like reflective padding (torch.nn.ReflectionPad2d)?

Motivation

In many applications (e.g. image-to-image translation) we want the padding to alter the local feature structure and global statistics as little as possible. Using zero padding in these applications produces visible artifacts near the borders [1][2].

Reflective padding is already supported in the ONNX spec, and I think the implementation in Concrete ML should not be hard (I'm not very familiar with the Concrete ML codebase, so correct me if I'm wrong). Specifically, I think we can implement a 'reflect' branch in https://github.com/zama-ai/concrete-ml/blob/main/src/concrete/ml/onnx/onnx_impl_utils.py#L61-L66: instead of copying the original x only to the center of x_pad, we copy the appropriate parts of x (9 regions: left, upper left, upper, upper right, right, bottom right, bottom, bottom left, center) into the corresponding regions of x_pad.

[1] https://arxiv.org/abs/1703.10593
[2] https://arxiv.org/abs/1811.11718
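
For concreteness, here is what the requested mode does compared to the current zero padding, illustrated with plain NumPy (this snippet is independent of Concrete ML and only shows the target semantics; the border element is not repeated, matching torch.nn.ReflectionPad2d and ONNX Pad with mode="reflect"):

    import numpy

    x = numpy.arange(9).reshape(3, 3)

    # Current behaviour in Concrete ML: zero ("constant") padding.
    print(numpy.pad(x, pad_width=1, mode="constant", constant_values=0))
    # [[0 0 0 0 0]
    #  [0 0 1 2 0]
    #  [0 3 4 5 0]
    #  [0 6 7 8 0]
    #  [0 0 0 0 0]]

    # Requested behaviour: reflective padding (border element not repeated).
    print(numpy.pad(x, pad_width=1, mode="reflect"))
    # [[4 3 4 5 4]
    #  [1 0 1 2 1]
    #  [4 3 4 5 4]
    #  [7 6 7 8 7]
    #  [4 3 4 5 4]]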

@andrei-stoian-zama (Collaborator) commented:

It seems doable:

Currently in QuantizedPad there is:

        assert_true(
            self.mode == "constant",
            "Padding operator only supports padding with a constant",
        )

This assert needs to be relaxed to also allow the ONNX "reflect" mode. Then, in numpy_onnx_pad, a new parameter for the mode should be added. The enlarged padded tensor is created here:


            x_pad = fhe_ones(tuple(padded_shape)) * numpy.int64(pad_value)

To implement "reflect", one only needs to do the 9 appropriate copies from x into x_pad. Something like x_pad[pads[top]:height - pads[bottom], 0:pads[left]] = x[:, pads[left]:0:-1] (note the reversed slice, so the copied columns are mirrored) should be doable for the left band, and similarly for the other regions; a fuller sketch follows below.
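
A minimal sketch of those copies in plain NumPy, assuming an NCHW input and a (top, bottom, left, right) pad ordering (the helper name, the pad layout, and the standalone x_pad allocation are illustrative only, not the actual numpy_onnx_pad signature; in Concrete ML the same slice assignments would target the x_pad built with fhe_ones). Reading the left/right bands back from the partially filled x_pad makes the four corner regions come out correctly, so the 9 regions are covered with 5 assignments:

    import numpy

    def reflect_pad_2d(x, pads):
        """Reflect-pad the last two axes of an NCHW tensor with explicit copies.

        pads = (pad_top, pad_bottom, pad_left, pad_right); the border element is
        not repeated, matching ONNX Pad mode="reflect" / torch.nn.ReflectionPad2d.
        Each pad must be smaller than the corresponding spatial dimension.
        """
        pt, pb, pl, pr = pads
        n, c, h, w = x.shape
        # In Concrete ML this would be the x_pad created with fhe_ones(...) * pad_value.
        x_pad = numpy.zeros((n, c, h + pt + pb, w + pl + pr), dtype=x.dtype)

        # Center: the original tensor.
        x_pad[:, :, pt:pt + h, pl:pl + w] = x

        # Top / bottom bands: rows 1..pt (resp. h-1-pb..h-2) of x, flipped vertically.
        if pt:
            x_pad[:, :, :pt, pl:pl + w] = x[:, :, 1:pt + 1][:, :, ::-1]
        if pb:
            x_pad[:, :, pt + h:, pl:pl + w] = x[:, :, h - pb - 1:h - 1][:, :, ::-1]

        # Left / right bands (including corners): columns of the partially filled
        # x_pad, flipped horizontally, so the corners end up reflected in both axes.
        if pl:
            x_pad[:, :, :, :pl] = x_pad[:, :, :, pl + 1:2 * pl + 1][..., ::-1]
        if pr:
            x_pad[:, :, :, pl + w:] = x_pad[:, :, :, pl + w - pr - 1:pl + w - 1][..., ::-1]

        return x_pad

    # Sanity check against numpy.pad's reference implementation of "reflect".
    x = numpy.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)
    expected = numpy.pad(x, ((0, 0), (0, 0), (2, 1), (1, 2)), mode="reflect")
    assert numpy.array_equal(reflect_pad_2d(x, (2, 1, 1, 2)), expected)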
