
match monolake http performance with nginx #116

Closed
xiaosongyang-sv opened this issue Aug 19, 2024 · 9 comments
Labels: C-bug (This is a bug-report. Bug-fix PRs use `C-enhancement` instead.)

xiaosongyang-sv (Collaborator) commented Aug 19, 2024

We need to match monolake's HTTP performance with nginx's. In the current results, monolake matches nginx on HTTPS, but on HTTP there is a 20%-25% gap:

| Case | Tiny Requests/sec | Small Requests/sec | Medium Requests/sec | Large Requests/sec | Tiny Transfer/sec | Small Transfer/sec | Medium Transfer/sec | Large Transfer/sec |
|---|---|---|---|---|---|---|---|---|
| http-result-4c-monolake | 160969.16 | 154858.08 | 90215.55 | 9190.98 | 54892953.60 | 539523809.28 | 1030792151.04 | 1524713390.08 |
| http-result-4c-nginx | 190923.94 | 179160.28 | 114646.49 | 9306.86 | 69688360.96 | 628495482.88 | 1320702443.52 | 1546188226.56 |
| http-result-4c-traefik | 9985.18 | 5133.71 | 11988.72 | 9078.24 | 3407872.00 | 17720934.40 | 137688514.56 | 1503238553.60 |
| https-result-4c-monolake | 143351.22 | 121503.00 | 67396.11 | 8023.95 | 48884613.12 | 423320616.96 | 774048317.44 | 1331439861.76 |
| https-result-4c-nginx | 130013.41 | 117888.48 | 66158.46 | 8040.46 | 47458549.76 | 413547888.64 | 761412976.64 | 1331439861.76 |
| https-result-4c-traefik | 9927.86 | 11931.26 | 13899.86 | 7787.11 | 3386900.48 | 41565552.64 | 159645696.00 | 1288490188.80 |
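As a rough check on the gap, the HTTP requests/sec ratios (nginx over monolake) from the table above work out to:

tiny:   190923.94 / 160969.16 ≈ 1.19  (nginx ~19% higher)
small:  179160.28 / 154858.08 ≈ 1.16  (~16% higher)
medium: 114646.49 / 90215.55  ≈ 1.27  (~27% higher)
large:  9306.86 / 9190.98     ≈ 1.01  (~1%, effectively equal)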

monolake (proxy) config:

[runtime]
# runtime_type = "legacy"
worker_threads = 4
entries = 8192

[servers.server_basic2]
    name = "monolake.cloudwego.io"
    listener = { type = "socket", value = "0.0.0.0:8402" }
    [[servers.server_basic2.routes]]
        path = '/'
        upstreams = [{ endpoint = { type = "uri", value = "http://127.0.0.1:10082" } }]

[servers.server_basic3]
    name = "monolake.cloudwego.io"
    listener = { type = "socket", value = "0.0.0.0:8403" }
    [[servers.server_basic3.routes]]
        path = '/'
        upstreams = [{ endpoint = { type = "uri", value = "http://127.0.0.1:10083" } }]

[servers.server_basic4]
    name = "monolake.cloudwego.io"
    listener = { type = "socket", value = "0.0.0.0:8404" }
    [[servers.server_basic4.routes]]
        path = '/'
        upstreams = [{ endpoint = { type = "uri", value = "http://127.0.0.1:10084" } }]

[servers.server_basic5]
    name = "monolake.cloudwego.io"
    listener = { type = "socket", value = "0.0.0.0:8405" }
    [[servers.server_basic5.routes]]
        path = '/'
        upstreams = [{ endpoint = { type = "uri", value = "http://127.0.0.1:10085" } }]

[servers.server_tls2]
    tls = { chain = "examples/certs/cert.pem", key = "examples/certs/key.pem" }
    name = "monolake.cloudwego.io"
    listener = { type = "socket", value = "0.0.0.0:6442" }
    [[servers.server_tls2.routes]]
        path = '/'
        upstreams = [{ endpoint = { type = "uri", value = "http://127.0.0.1:10082" } }]

[servers.server_tls3]
    tls = { chain = "examples/certs/cert.pem", key = "examples/certs/key.pem" }
    name = "monolake.cloudwego.io"
    listener = { type = "socket", value = "0.0.0.0:6443" }
    [[servers.server_tls3.routes]]
        path = '/'
        upstreams = [{ endpoint = { type = "uri", value = "http://127.0.0.1:10083" } }]

[servers.server_tls4]
    tls = { chain = "examples/certs/cert.pem", key = "examples/certs/key.pem" }
    name = "monolake.cloudwego.io"
    listener = { type = "socket", value = "0.0.0.0:6444" }
    [[servers.server_tls4.routes]]
        path = '/'
        upstreams = [{ endpoint = { type = "uri", value = "http://127.0.0.1:10084" } }]

[servers.server_tls5]
    tls = { chain = "examples/certs/cert.pem", key = "examples/certs/key.pem" }
    name = "monolake.cloudwego.io"
    listener = { type = "socket", value = "0.0.0.0:6445" }
    [[servers.server_tls5.routes]]
        path = '/'
        upstreams = [{ endpoint = { type = "uri", value = "http://127.0.0.1:10085" } }]

nginx server config:

[ec2-user@ip-172-31-22-170 ~]$ sudo cat /etc/nginx/nginx.conf


#user  nobody;
worker_processes  4;
worker_rlimit_nofile 100000;

error_log /var/log/nginx/error.log crit;

#pid        logs/nginx.pid;

events {
    # determines how many clients will be served per worker
    # max clients = worker_connections * worker_processes
    # max clients is also limited by the number of socket connections available on the system (~64k)
    worker_connections 4000;

    # optimized to serve many clients with each thread, essential for linux -- for testing environment
    use epoll;

    # accept as many connections as possible, may flood worker connections if set too low -- for testing environment
    multi_accept on;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    # cache information about FDs and frequently accessed files
    # can boost performance, but you need to test those values
    #open_file_cache max=200000 inactive=20s;
    #open_file_cache_valid 30s;
    #open_file_cache_min_uses 2;
    #open_file_cache_errors on;

    # to boost I/O on HDD we can disable access logs
    access_log off;

    # copies data between one FD and another from within the kernel
    # faster than read() + write()
    sendfile on;

    # send headers in one piece; better than sending them one by one
    tcp_nopush on;

    # don't buffer data sent, good for small data bursts in real time
    # https://brooker.co.za/blog/2024/05/09/nagle.html
    # https://news.ycombinator.com/item?id=10608356
    #tcp_nodelay on;

    # allow the server to close connections on non-responding clients; this frees up memory
    #reset_timedout_connection on;

    # request timed out -- default 60
    #client_body_timeout 10;

    # if the client stops responding, free up memory -- default 60
    #send_timeout 2;

    # server will close connection after this time -- default 75
    keepalive_timeout 75;

    # number of requests client can make over keep-alive -- for testing environment
    keepalive_requests 1000000000;

    #gzip  on;

    server {
        listen       10082;
        server_name  localhost;

        #charset koi8-r;

        access_log  off;
        open_file_cache max=1000;

        location / {
            root   /usr/share/nginx/html/server2;
            index  index.html index.htm;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

    }

    server {
        listen       10083;
        server_name  localhost;

        #charset koi8-r;

        access_log  off;
        open_file_cache max=1000;

        location / {
            root   /usr/share/nginx/html/server3;
            index  index.html index.htm;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

    }

    server {
        listen       10084;
        server_name  localhost;

        #charset koi8-r;

        access_log  off;
        open_file_cache max=1000;

        location / {
            root   /usr/share/nginx/html/server4;
            index  index.html index.htm;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

    }

    server {
        listen       10085;
        server_name  localhost;

        #charset koi8-r;

        access_log  off;
        open_file_cache max=1000;

        location / {
            root   /usr/share/nginx/html/server5;
            index  index.html index.htm;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

    }


    # HTTPS server
    #
    # server {
    #     listen       8443 ssl;
    #     server_name  localhost;

    #     ssl_certificate      cert.pem;
    #     ssl_certificate_key  cert.key;

    #     ssl_session_cache    shared:SSL:1m;
    #     ssl_session_timeout  5m;

    #     ssl_ciphers  HIGH:!aNULL:!MD5;
    #     ssl_prefer_server_ciphers  on;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }

    # }
    include servers/*;
}

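Not in the original thread, but when adjusting these files it may help to validate and hot-reload the config with the stock nginx CLI:

sudo nginx -t          # parse and validate the configuration
sudo nginx -s reload   # apply it without dropping connections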

nginx proxy config:

[ec2-user@ip-172-31-2-253 ~]$ cat monolake/benchmark/proxy/nginx/nginx.conf

#user  nobody;
worker_processes  auto;

worker_rlimit_nofile 1000000;

error_log /var/log/nginx/error.log crit;

#pid        logs/nginx.pid;

events {
    # determines how many clients will be served per worker
    # max clients = worker_connections * worker_processes
    # max clients is also limited by the number of socket connections available on the system (~64k)
    worker_connections 4000;

    # optimized to serve many clients with each thread, essential for linux -- for testing environment
    #use epoll;

    # accept as many connections as possible, may flood worker connections if set too low -- for testing environment
    #multi_accept on;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
    # types_hash_bucket_size 128;
    types_hash_max_size 32768;

    # cache information about FDs and frequently accessed files
    # can boost performance, but you need to test those values
    #open_file_cache max=200000 inactive=20s;
    #open_file_cache_valid 30s;
    #open_file_cache_min_uses 2;
    #open_file_cache_errors on;

    # to boost I/O on HDD we can disable access logs
    access_log off;

    error_log off;
    #client_body_buffer_size 10K;
    #client_header_buffer_size 1k;
    #client_max_body_size 8m;
    #large_client_header_buffers 2 1k;

    # copies data between one FD and another from within the kernel
    # faster than read() + write()
    sendfile on;

    # send headers in one piece; better than sending them one by one
    tcp_nopush on;

    # don't buffer data sent, good for small data bursts in real time
    # https://brooker.co.za/blog/2024/05/09/nagle.html
    # https://news.ycombinator.com/item?id=10608356
    #tcp_nodelay on;

    # allow the server to close connections on non-responding clients; this frees up memory
    #reset_timedout_connection on;

    # request timed out -- default 60
    #client_body_timeout 10;

    # if the client stops responding, free up memory -- default 60
    #send_timeout 2;

    # server will close connection after this time -- default 75
    keepalive_timeout 75;

    # number of requests client can make over keep-alive -- for testing environment
    #keepalive_requests 100000;
    keepalive_requests 1000000000;

    #gzip  on;

    # Upstreams for https
    upstream ssl_file_server_com2 {
        server 172.31.22.170:10082;
        keepalive 1024;
    }

    upstream ssl_file_server_com3 {
        server 172.31.22.170:10083;
        keepalive 1024;
    }

    upstream ssl_file_server_com4 {
        server 172.31.22.170:10084;
        keepalive 1024;
    }

    upstream ssl_file_server_com5 {
        server 172.31.22.170:10085;
        keepalive 1024;
    }

    server {
        listen       8100;
        server_name  172.31.22.170;

        #charset koi8-r;

        access_log      off;

        location / {
            root   html;
            index  index.html index.htm;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
        location /server2 {
            proxy_pass http://ssl_file_server_com2/;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
        location /server3 {
            proxy_pass http://ssl_file_server_com3/;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
        location /server4 {
            proxy_pass http://ssl_file_server_com4/;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
        location /server5 {
            proxy_pass http://ssl_file_server_com5/;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }

    # HTTPS server
    #
    server {
        listen       8443 ssl;
        server_name  172.31.22.170;
        access_log      off;

        ssl_certificate      /etc/nginx/cert.pem;
        ssl_certificate_key  /etc/nginx/cert.key;

        ssl_session_cache    shared:SSL:1m;
        ssl_session_timeout  5m;

        ssl_ciphers  HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers  on;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }

        location /server2 {
            proxy_buffering off;
            proxy_pass http://ssl_file_server_com2/;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }

        location /server3 {
            proxy_buffering off;
            proxy_pass http://ssl_file_server_com3/;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }

        location /server4 {
            proxy_buffering off;
            proxy_pass http://ssl_file_server_com4/;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }

        location /server5 {
            proxy_buffering off;
            proxy_pass http://ssl_file_server_com5/;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }
    include servers/*;
}


client (wrk2) for monolake:

./wrk -d 1m -c 640 -t 64 -R 200000 --latency http://$MONOLAKE_BENCHMARK_PROXY_IP:8402 > http-result-4c-monolake-tiny.txt
./wrk -d 1m -c 640 -t 64 -R 180000 --latency http://$MONOLAKE_BENCHMARK_PROXY_IP:8403 > http-result-4c-monolake-small.txt
./wrk -d 1m -c 640 -t 64 -R 100000 --latency http://$MONOLAKE_BENCHMARK_PROXY_IP:8404 > http-result-4c-monolake-medium.txt
./wrk -d 1m -c 640 -t 64 -R 100000 --latency http://$MONOLAKE_BENCHMARK_PROXY_IP:8405 > http-result-4c-monolake-large.txt

client (wrk2) for nginx:

./wrk -d 1m -c 640 -t 64 -R 210000 --latency http://$MONOLAKE_BENCHMARK_PROXY_IP:8100/server2 > http-result-4c-nginx-tiny.txt
./wrk -d 1m -c 640 -t 64 -R 200000 --latency http://$MONOLAKE_BENCHMARK_PROXY_IP:8100/server3 > http-result-4c-nginx-small.txt
./wrk -d 1m -c 640 -t 64 -R 120000 --latency http://$MONOLAKE_BENCHMARK_PROXY_IP:8100/server4 > http-result-4c-nginx-medium.txt
./wrk -d 1m -c 640 -t 64 -R 10000 --latency http://$MONOLAKE_BENCHMARK_PROXY_IP:8100/server5 > http-result-4c-nginx-large.txt
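A quick way to pull the table numbers out of these result files (standard wrk/wrk2 summary output; note the Transfer/sec values print in human-readable units and may need converting to bytes):

grep -H -E 'Requests/sec|Transfer/sec' http-result-4c-*.txt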

We need to find out whether we can improve monolake's performance, either through configuration or by hardcoding better default values in the code.
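One way to look for hot spots before changing defaults is a standard perf sampling run on the proxy host while wrk is driving load (a sketch, assuming the process is named monolake and the binary has symbols):

# sample on-CPU stacks for 30 seconds during the benchmark
sudo perf record -F 99 -g -p "$(pgrep -x monolake)" -- sleep 30
sudo perf report --stdio | head -50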

xiaosongyang-sv (Collaborator, Author) commented

MONOLAKE_BENCHMARK_PROXY_IP=172.31.2.253 (proxy service IP)
server IP: 172.31.22.170

goldenrye added the C-bug label Aug 20, 2024
goldenrye added this to the v1.0 milestone Aug 20, 2024
goldenrye assigned ihciah and unassigned har23k Aug 20, 2024
goldenrye (Contributor) commented

@ihciah can you take a look at this issue? For small-size HTTP payloads, Monolake performs about 20% worse than Nginx.

goldenrye assigned har23k and unassigned ihciah Aug 26, 2024
har23k (Contributor) commented Aug 28, 2024

@xiaosongyang-sv: the nginx proxy has worker_processes auto;, while the server has it set to 4. The results will not be valid in that case.

xiaosongyang-sv (Collaborator, Author) commented

The nginx proxy actually uses 4 workers and its CPU usage is around 400%; the server's CPU is below 120%.
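For reference, these CPU figures can be reproduced with the standard sysstat tools on each host (hypothetical invocations, assuming default process names):

pidstat -C nginx 1    # per-process CPU% of the nginx workers, sampled every second
mpstat -P ALL 1       # per-core utilization on the 4-CPU proxy host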

har23k (Contributor) commented Aug 29, 2024

With worker_processes auto, nginx can theoretically use more than 4 cores, correct? For a fair comparison with 4c-monolake, we need to restrict nginx to 4 worker processes as well.

xiaosongyang-sv (Collaborator, Author) commented

The proxy runs on a machine with 4 CPUs, so it can use at most 4. I can change the nginx config, but the result is the same.

ihciah (Collaborator) commented Aug 29, 2024

An unrelated note: it is recommended to bind all threads that can be bound to cores, to obtain more stable test results.
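For example (not part of the original setup): nginx workers can be pinned with the stock worker_cpu_affinity directive, and the wrk client can be pinned with taskset:

# nginx.conf: pin the 4 workers to cores 0-3
worker_processes  4;
worker_cpu_affinity 0001 0010 0100 1000;

# client side: keep wrk on a fixed set of cores
taskset -c 0-3 ./wrk -d 1m -c 640 -t 64 -R 200000 --latency http://$MONOLAKE_BENCHMARK_PROXY_IP:8402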

har23k (Contributor) commented Aug 29, 2024

> The proxy runs on a machine with 4 CPUs, so it can use at most 4. I can change the nginx config, but the result is the same.

Makes sense. If the machine has 4 CPUs, then it shouldn't be an issue for these results. But not all users will have 4-core machines (even though the readme suggests using one), so it is better to change it to 4 here, as shown below.
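That is, in monolake/benchmark/proxy/nginx/nginx.conf:

#worker_processes  auto;
worker_processes  4;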

har23k closed this as completed Nov 8, 2024