
Remove Naive server and use padding auto-negotiation #76

Closed
klzgrad opened this issue May 25, 2020 · 13 comments
klzgrad commented May 25, 2020

#72 revealed that Caddy does not pass the backend server's headers in its 200 OK response, rendering header padding ineffectual. To actually fix this I have to create a Naive fork of Caddy's forwardproxy. Since I'm going to fork and maintain a Naive forwardproxy anyway, it's possible to simply subsume all of the Naive server into this forwardproxy, i.e. the payload padding encapsulation layer.

This means a major architectural change: most users should no longer need to run the naive binary on their server, but will have to build the Naive fork of Caddy forwardproxy, which is easy to do. The new setup is:

[Browser → Naïve (client)] ⟶ Censor ⟶ [Caddy with forwardproxy Naïve fork] ⟶ Internet

The HAProxy use case still has to be supported, as HAProxy doesn't do forward proxying itself, so the old functionality will not be removed:

[Browser → Naïve (client)] ⟶ Censor ⟶ [HAProxy → Naïve (server)] ⟶ Internet


Another medium-sized change I'm testing is turning on payload padding by default through auto-negotiation.

  • The Naive client will turn on payload padding automatically if the server response contains header padding. This makes the first connection not Fast Open as before, but subsequent connections are unaffected. The client sends header padding regardless.
  • New client vs. old Caddy server: payload padding won't turn on, because the server's header padding doesn't pass through Caddy.
  • The Naive forwardproxy will turn on payload padding automatically if the client request contains header padding. The server sends header padding regardless.
  • New Caddy server vs. old client: an old client may also send header padding without actually using payload padding, so a header like Padding: ... does not by itself turn on payload padding server-side.

With these changes I'm removing the --padding option, which has proven error-prone for some users.


klzgrad commented May 26, 2020

Compatibility test cases during the transition:

  • Old ./naive -> Old Caddy: No change
    • New ./naive -> Old Caddy: Naive client detects no padding
    • Old ./naive -> New Caddy: Caddy detects padding (header, no payload)
    • New ./naive -> New Caddy: Naive client and Caddy both detect padding (header, payload)
  • Old ./naive --padding -> Old Caddy: Never worked
    • Old ./naive --padding -> New Caddy: Caddy detects padding (header, no payload), not working
  • Chrome.exe -> Old Caddy: No change
    • Chrome.exe -> New Caddy: Caddy detects no padding
  • Old ./naive --padding -> Old Caddy + Old ./naive --padding: No change
    • New ./naive -> Old Caddy + Old ./naive --padding: Naive client detects no padding, broken
  • Old ./naive -> Old Caddy + Old ./naive --padding: Never worked
  • Old ./naive -> HAProxy + Old ./naive: No change
    • New ./naive -> HAProxy + Old ./naive: Naive client detects padding (header, no payload)
    • Old ./naive -> HAProxy + New ./naive: Naive server detects padding (header, no payload)
    • New ./naive -> HAProxy + New ./naive: Naive client and server both detect padding (header, payload)
  • Old ./naive -> HAProxy + Old ./naive --padding: Never worked
    • New ./naive -> HAProxy + Old ./naive --padding: Naive client detects padding (header, no payload), not working
  • Old ./naive --padding -> HAProxy + Old ./naive: Never worked
    • Old ./naive --padding -> HAProxy + New ./naive: Naive server detects padding (header, no payload), not working
  • Old ./naive --padding -> HAProxy + Old ./naive --padding: No change

Verify:

  • Purpose of Fast Open in Caddy forwardproxy

Padding negotiation:

  • Protocols capable of padding negotiation (using HTTP/1/2/3 headers): http://, https://, quic://
  • Protocols incapable of padding negotiation: direction://, socks://, redir://
  • incapable --listen + incapable --proxy: no padding operations
  • incapable --listen + capable --proxy: C->S add padding, C<-S remove padding
  • capable --listen + incapable --proxy: C->S remove padding, C<-S add padding
  • capable --listen + capable --proxy: no padding operations (padding passes through)

The padding byte format:

  • For CONNECT requests, a Padding header with [16, 32] uncompressed bytes is added. My intuition was that making this too large would turn it into an entropy feature, which turns out to be not far from practice: https://ieeexplore.ieee.org/document/8855280 exploits obfs4's excessively large random handshake padding sizes.
  • For 200 responses, a Padding header with [30, 62] uncompressed bytes is added.
  • For payload: plen := rand(256), data -> [datalen / 256, datalen % 256, plen, data[:], padding[0:plen]]
  • Add/remove payload padding: only the first 4 reads/writes are padded

The purpose of this padding is to protect the payload, i.e. TLS handshakes of known sizes; therefore only the first 4 reads/writes are padded. The choice of padding length is based on #1 (comment) and on not making it so large that it produces detectable entropy. But the padding is currently limited to sizes in [0, 255], away from the ideal distribution of [200, 800]. Changing this would be a breaking change to the wire format, though.

Security issues:

  • In the new approach there is an initial handshake that is not Fast Open. The handshake pattern changes from ↑↑↓↓ (CONNECT, ClientHello, 200, ServerHello) to ↑↓↑↓ (CONNECT, 200, ClientHello, ServerHello). I should probably look into direction n-grams as well. The easiest mitigation is to start each connection with a few real non-tunneled HTTP requests (e.g. downloading the proxy server's fronting home page), killing off the possibility of exploiting information leaks during tunnel handshakes once and for all.
  • Fast Open itself is probably a little suspect, because two short packets get sent side by side. The mitigation is hard to implement, though, as it sits too deep in the network stack.
  • In summary, the current potential attack vectors are:
    • packet length histogram (h2 control frames, stuff-in-TLS-in-h2 DATA overhead, uniform-distribution random generator)
    • direction n-grams (a poor man's LSTM)
    • packet timing (CONNECT and ClientHello back-to-back)


darhwa commented May 29, 2020

In fact there is a different approach to padding. I found it after more reading of the h2 specs, and would like to propose it here.

H2's HEADERS and DATA frame types have a built-in padding feature. Existing h2 implementations can handle received padded frames; however, no one uses it (or exposes the interface) when sending frames. Since you are planning to maintain both client and server, why not take advantage of this native padding feature?

Using h2 native padding has at least these pros:

  • No compatibility issues at all. Any combination of padded and non-padded clients and servers works together out of the box. The server treats padded and non-padded clients (for example, Enhance http outbound to work with https and http/2 upstreams v2ray/v2ray-core#2488) as one.

  • Negotiation is also not needed. Fast Open and padding are enabled from the first stream. The first few packets are what adversaries care about the most.

  • Less code.

and cons:

  • We must maintain the patched server too. This is the same as in your original proposal, so it is not a concern.

  • Unknown difficulty in patching naiveproxy and the servers.

I've tried it and found it not difficult in Chrome's h2 stack. I implemented it in a new branch https://github.com/darhwa/naiveproxy/tree/native_padding and tested it for more than a day without problems. I can see in Wireshark that it works as expected. The patch can even be applied to Chrome itself. Please have a look when you have time.

But things are not so simple for Caddy (Go). I found that the Go maintainers explicitly rejected a proposal from sergeyfrolov to support padding at a higher layer (golang/go#26900). However, it's still doable, considering that you are going to maintain your own Caddy and users don't need to build it themselves. I'm also going to dive into the HAProxy source as soon as I have time.

What's your opinion?


emacsenli commented May 31, 2020

Does v83.0.4103.61-2 support this RFC? I found that padding is not working.

#78


darhwa commented May 31, 2020

@emacsenli Did you download the released package or compile from master yourself? The released version changed nothing about padding.

emacsenli replied:

> @emacsenli Did you download the released pkg or compiled from master yourself? The released version changed nothing about padding.

I'm using the released binary. I'll try compiling later.


darhwa commented May 31, 2020

@emacsenli

> Does v83.0.4103.61-2 support this RFC?

No. What's in this RFC is not implemented yet.

> I will try compile later.

You should double-check your config and your server status. v83.0.4103.61-2 is not broken; compiling your own binary is unlikely to help.


klzgrad commented Jun 1, 2020

https://www.mail-archive.com/[email protected]/msg28781.html I tried this a while ago. The attitude of the Go devs reveals a deeper truth: H2 padding is used by no one and not well tested, and it's an uphill battle to use it against server bugs (even with pure client-side padding). This project is structured to minimize maintenance work so it's more likely to survive. Forking the forwardproxy plugin is already the limit; forking Go's H2 stack is not sustainable.

As for the RFC, there's quite some ad hoc guesswork accumulated in the paddings all around. It's time to collect some new histograms, update the literature review, and do some big self-purging before committing to any particular padding method. This may take a few months.


klzgrad commented Jun 9, 2020

Resolutions of the above security issues:

Even though #1 listed many characteristics, most of them would also appear in regular H2 connections. The real issues of H2 tunnels boil down to two:

  • Occurrences of H2 control frames that do not appear in normal H2 connections. There are not many of this kind (-13, the H2 client sending too many WINDOW_UPDATE/RST_STREAM; 200 HEADERS too short; CONNECT HEADERS too short), and they can be patched manually in the H2 stack.
  • Stuff-in-TLS-in-h2 DATA tunneling overhead, which results in distinct features when combined with handshakes of known protocols or known control frames of H2 (H2-in-TLS-in-H2 DATA). This can be solved with a padding scheme. Non-handshake "known control frames of H2" are less serious, though, because they are already somewhat masked by legitimate lengths.

Direction n-gram analysis (a poor man's LSTM): there is no practical countermeasure to this analysis beyond simple padding, so it remains a purely theoretical exercise, useful for evaluating obfuscation strength.

Packet timing (in particular, CONNECT and ClientHello back-to-back). This is expensive to remove. Hopefully H2 multiplexing combined with length padding on both of the two packets can mitigate it to some extent.


New padding scheme:

  • CONNECT HEADERS: pad towards [48, 72] with a non-indexed header.
  • 200 HEADERS: pad towards [48, 96] with a non-indexed header.
  • RST_STREAM: replace with END_STREAM+RST_STREAM (two h2 frames in one TLS record) and pad towards [48, 72] with H2 padding (RST_STREAM itself does not support padding).
  • First 8 payloads with length <= 768 inside H2 DATA: pad with [0, 256]. (Extended from 4 to 8 to cover potential H2 setup in the payload.)
    • Not very fancy, but we'll see how it goes.
    • Fancier variants include: suppress the top-k common lengths; pad towards [2x, 3x].

[x, y] denotes a non-uniform, entropy-limited random generator that produces random numbers in [x, y]. (A self-similar distribution cannot be generated fast enough.) TBD: it's not very clear how this is better than a simple uniform distribution without accidentally introducing some deducible artificiality, and "limited entropy" remains poorly defined.


darhwa commented Jun 10, 2020

Replacing RST_STREAM frames with END_STREAM-flagged DATA frames is not a good idea. If you begin downloading a large file and then cancel it, naiveproxy will still receive the entire file from the server. It's also not feasible to make the server treat END_STREAM as RST_STREAM, because normal requests also mark their last frame with END_STREAM.

Chrome's h2 implementation emits exactly one TLS record per h2 frame. That's not mandatory; I think we can modify it to merge multiple h2 frames into one TLS record in some cases. For each RST_STREAM we can precede it with a pure-padding DATA frame. (Update: I've implemented this in my native_padding branch, and it works!) The CONNECT HEADERS frame can be merged with the next frame.


klzgrad commented Jun 11, 2020

It's not inherently a bad idea, as it merely enters the half-closed (local) state, which is the usual life cycle of a request. Though half-closed states are less clean to manage than resets.

It seems you're putting two frames' content into one SpdySerializedFrame. If this can somehow get past the existing spdy traffic accounting, then it'll be cleaner than END_STREAM.

I accept the general direction of this; you can send a PR. Though I don't like the extra steps to get there: you don't need all that padding infrastructure or buffer bouncing. Just patch CreateRstStream and put in an even manually crafted byte sequence, and it will work.


darhwa commented Jun 12, 2020

I've created a PR (#85). Besides the changes mentioned above, I also padded WINDOW_UPDATE in the same way as RST_STREAM.

@klzgrad klzgrad changed the title RFC: Remove Naive server and use padding auto-negotiation WIP: Remove Naive server and use padding auto-negotiation Jun 13, 2020

klzgrad commented Jun 13, 2020

Except for the camouflage preamble, the padding protocol and related UI are updated:

  • CONNECT HEADERS padded correctly
  • 200 HEADERS padded correctly
  • RST_STREAM padded correctly
  • First 8 payloads are padded
  • The --padding option is deprecated, as it is now almost useless.
  • Backward compatibility in the payload padding protocol is broken.

To test:

I created a build here https://github.com/klzgrad/naiveproxy/releases/tag/v83.0.4103.61-3.

Run with bare options:

./naive --listen=socks://:1081 --proxy=https://user:[email protected] --log

Build and run Caddy v2 with naivety:

git clone -b naive https://github.com/klzgrad/forwardproxy
go get -u github.com/caddyserver/xcaddy/cmd/xcaddy
~/go/bin/xcaddy build --with github.com/caddyserver/forwardproxy=./forwardproxy
sudo setcap cap_net_bind_service=+ep ./caddy
./caddy run --config caddy.json

caddy.json (static certificate):

{
  "apps": {
    "http": {
      "servers": {
        "srv0": {
          "listen": [":443"],
          "routes": [{
            "handle": [{
              "handler": "forward_proxy",
              "hide_ip": true,
              "hide_via": true,
              "auth_user": "user",
              "auth_pass": "pass",
              "probe_resistance": {"domain": "secret.localhost"}
            }]
          }, {
            "match": [{"host": ["example.com"]}],
            "handle": [{
              "handler": "file_server",
              "root": "/var/www/html"
            }],
            "terminal": true
          }],
          "tls_connection_policies": [{
            "match": {"sni": ["example.com"]},
            "certificate_selection": {"any_tag": ["cert0"]}
          }]
        }
      }
    },
    "tls": {
      "certificates": {
        "load_files": [{
          "certificate": "example.com.crt",
          "key": "example.com.key",
          "tags": ["cert0"]
        }]
      }
    }
  }
}

caddy.json (Let's Encrypt):

{
  "apps": {
    "http": {
      "servers": {
        "srv0": {
          "listen": [":443"],
          "routes": [{
            "handle": [{
              "handler": "forward_proxy",
              "hide_ip": true,
              "hide_via": true,
              "auth_user": "user",
              "auth_pass": "pass",
              "probe_resistance": {"domain": "secret.localhost"}
            }]
          }, {
            "match": [{"host": ["example.com", "www.example.com"]}],
            "handle": [{
              "handler": "file_server",
              "root": "/var/www/html"
            }],
            "terminal": true
          }],
          "tls_connection_policies": [{
            "match": {"sni": ["example.com", "www.example.com"]}
          }]
        }
      }
    },
    "tls": {
      "automation": {
        "policies": [{
          "subjects": ["example.com", "www.example.com"],
          "issuer": {
            "email": "[email protected]",
            "module": "acme"
          }
        }]
      }
    }
  }
}

@klzgrad klzgrad changed the title WIP: Remove Naive server and use padding auto-negotiation Remove Naive server and use padding auto-negotiation Jun 16, 2020

klzgrad commented Jun 16, 2020

Haven't seen any issues since the update. Considering this fixed.
