memory allocation attack? #627
Yes, that's true. BungeeCord/Waterfall had many exploits, but md5 claims they don't even work xd. Waterfall does something, but their "antidos" is still trash.
Imo BungeeCord has a messed-up networking system, even vanilla's is better xd. So it's probably the encryption response or another packet that can carry very big data; Waterfall and BungeeCord don't have a good limiter for it |
Do you have the beginning of the attack? (the logs before the java.lang.OutOfMemoryError: Direct buffer memory)? |
Here is the log from 5 minutes before the attack: In short, this: |
Is that amount of server list pings normal for your server? |
Yeah, telling the truth is offensive :( |
There is a difference between telling the truth and just being an ass about it.
Each event pipeline thread iirc gets its own native buffer; this would imply that too many event threads fired up or something. I don't think there is an actual leak here, but I have no means to reproduce this to investigate. Many of these issues can be mitigated with basic configuration of a firewall to throttle connections in the event of an attack.
…On Sat, 10 Apr 2021, 18:28 なるみ wrote:
@narumii This offensive language will not get you anywhere. You did not provide any useful information. I suggest that the maintainers of Waterfall delete your comment.
I suggest that you try out #609; there's also a link to a test Waterfall jar in there. That might help with your problem.
Yeah, telling the truth is offensive :( Big "DoS mitigations" that don't work; the "DoS mitigations" from Velocity also don't work properly, idk why /shrug
|
Yeah, pretty much. 2-3 pings per second is what I would consider normal for the server.
As already mentioned, I am rate limiting connections, filtering bad packets and limiting total connections per IP. |
This specific case doesn't look like something a basic firewall setup will help with. I think I know what they're doing, and it's shamefully an artifact of a service exposed to the internet doing its job. I think I have a way to limit the damage, but it will impact performance for some people relying on certain aspects of how netty already works |
I'm now running the test version from @Janmm14. |
@electronicboy Due to the recent log4j exploits I had to switch to the latest official build. Since then, our servers were attacked again with the same result. Your fix in #628 doesn't seem to be working, or it broke at some point in between. The attacker(s) were able to OOM BungeeCord within seconds using only about 40 requests.
Running Waterfall build 473, Java 17, Debian |
@Sneakometer That fix in #628 was intentionally reverted here: f17de74 |
That was intentional indeed; there are now some packets that can easily exceed that size (up to 16 MB), which is already a whole ton too much |
Is this problem happening because the client sends too-large packets, or because every packet, even a very small one, gets 16 MiB of RAM reserved? Can a legit client send such large packets (or only the server), or could we have a different memory pool with different settings for the packets sent by the client? |
The client claims that the incoming packet size is very high to force netty to allocate the maximum amount of RAM (16 MiB), but then sends the packet very, very slowly. This way, with a very small number of clients, you can create an OOM. |
But we do not allocate a buffer of that size before the full packet has arrived in Varint21FrameDecoder. So until then, the buffer we have is handled entirely by netty, and it should only grow as more data arrives? |
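For reference, the general shape of a length-capped frame decoder in Netty is sketched below. This is not BungeeCord's actual Varint21FrameDecoder; the class name and the cap are made up. It only illustrates that the declared length can be validated before any frame bytes are buffered.

```java
import java.util.List;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;
import io.netty.handler.codec.CorruptedFrameException;

/**
 * Sketch only: read the VarInt length prefix and kill the connection as soon
 * as the declared size is implausible, before buffering any frame bytes.
 */
public class CappedFrameDecoder extends ByteToMessageDecoder {

    private final int maxFrameLength; // hypothetical cap, e.g. 2097151 (21-bit max)

    public CappedFrameDecoder(int maxFrameLength) {
        this.maxFrameLength = maxFrameLength;
    }

    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        in.markReaderIndex();

        int length = 0;
        for (int i = 0; i < 3; i++) { // a 21-bit VarInt fits in 3 bytes
            if (!in.isReadable()) {
                in.resetReaderIndex();
                return; // length prefix not complete yet, wait for more data
            }
            byte b = in.readByte();
            length |= (b & 0x7F) << (i * 7);
            if ((b & 0x80) == 0) { // last VarInt byte
                if (length > maxFrameLength) {
                    throw new CorruptedFrameException("frame of " + length + " bytes declared");
                }
                if (in.readableBytes() < length) {
                    in.resetReaderIndex();
                    return; // body not complete yet; netty keeps the partial bytes
                }
                out.add(in.readRetainedSlice(length));
                return;
            }
        }
        throw new CorruptedFrameException("length prefix longer than 3 bytes");
    }
}
```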
This seems to me akin to the slowloris attacks done against Apache. Reducing the native buffer size was NOT a fix in any form, shape or capacity; it just did as much of a mitigation here as possible against junk being allocated in the trunk, which, pre-1.18, was a trivial way to at least mitigate this to some degree.
Netty has a buffer pool which allows these native, direct buffers to be pooled rather than paying the expensive allocations across the board. You can increase your direct memory limit, or use whatever system property it was, to avoid allocating these directly into direct memory, but these buffers are slowly filled up and are shared across stuff; here there are just enough connections using those buffers that it tries to allocate a new one and fails.
The client isn't telling the thing to allocate a huge buffer; the buffer size is generally fixed (resizing these is expensive, so you wanna avoid that) |
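For anyone who wants to experiment with where those knobs live: in a plain Netty server (not Waterfall's actual bootstrap), the allocator and the direct-memory ceilings are controlled roughly as below. The values are examples only; `-XX:MaxDirectMemorySize` and `io.netty.maxDirectMemory` are the standard JVM/Netty settings being referred to.

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.channel.ChannelOption;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class AllocatorExample {
    public static void main(String[] args) {
        // JVM-level ceiling for direct buffers:   -XX:MaxDirectMemorySize=512m
        // Netty's own ceiling for its allocator:  -Dio.netty.maxDirectMemory=...
        NioEventLoopGroup boss = new NioEventLoopGroup(1);
        NioEventLoopGroup workers = new NioEventLoopGroup();

        ServerBootstrap bootstrap = new ServerBootstrap()
                .group(boss, workers)
                .channel(NioServerSocketChannel.class)
                // preferDirect = false keeps pooled buffers on the heap instead of
                // in direct memory; slower I/O, but exhaustion then shows up as a
                // normal heap OutOfMemoryError bounded by -Xmx rather than
                // "Direct buffer memory".
                .childOption(ChannelOption.ALLOCATOR, new PooledByteBufAllocator(false));

        // childHandler(...) and bind(...) omitted in this sketch
        boss.shutdownGracefully();
        workers.shutdownGracefully();
    }
}
```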
@electronicboy |
the buffers are fixed size, so all you've gotta do is cause enough of them to be created |
That really does not sound like the right thing to do from netty. |
Resizing the buffers is stupidly expensive, so it is the right thing to do; the big issue here is that you need to drain them at a decent pace. This is basically, IMHO, a massive architecture issue across the board |
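One generic Netty-level way to enforce a minimum drain pace is a read timeout, so a connection that stops delivering bytes mid-frame gets dropped instead of pinning a pooled buffer indefinitely. A sketch with an arbitrary timeout; this is not a claim about what BungeeCord or Waterfall already do in their pipelines.

```java
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.timeout.ReadTimeoutHandler;

/**
 * Sketch: close any connection that goes quiet, so a half-sent "16 MB"
 * packet cannot hold resources forever.
 */
public class TimeoutInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        // 30 seconds with no readable bytes -> ReadTimeoutException -> channel closed
        ch.pipeline().addFirst("read-timeout", new ReadTimeoutHandler(30));
        // ... the rest of the pipeline (frame decoder, packet handlers) goes here
    }
}
```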
@Sneakometer Did you make changes to the connection throttle configuration? |
This really does not sound like it could be true. |
it's not supposed to be a buffer per connection, basically; This all gets nuanced on the technicalities of netty |
Not sure what the default is, but it's set to |
Netty changed the default to what we had, so apparently this change does not affect the maximum size of the buffers. |
I do not recall seeing any resizing logic for the buffers; afaik the idea is that they're fixed size to prevent constant rescaling. Most apps using netty are designed to process the queue effectively so that backpressure doesn't occur, etc |
Ah, so we set the capacity of the buffers; looking at it, the thing has logic to allocate less by default, but maybe it always tries to reserve the full capacity? Thus, too many buffers = hitting the limit |
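For anyone following along, per-read buffer capacities in Netty come from the channel's receive-buffer allocator, which can be fixed or adaptive. A sketch with example sizes, not Waterfall's real values:

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.AdaptiveRecvByteBufAllocator;
import io.netty.channel.ChannelOption;
import io.netty.channel.FixedRecvByteBufAllocator;

public class RecvBufferExample {
    // Pick one of the two options below; the second call would override the first.
    static void configure(ServerBootstrap bootstrap) {
        // Fixed: every read uses the same capacity; predictable, but slow
        // connections reserve just as much as busy ones.
        bootstrap.childOption(ChannelOption.RCVBUF_ALLOCATOR,
                new FixedRecvByteBufAllocator(64 * 1024));

        // Adaptive: starts small and grows/shrinks with observed read sizes
        // (minimum, initial, maximum), so idle or slow connections stay cheap.
        bootstrap.childOption(ChannelOption.RCVBUF_ALLOCATOR,
                new AdaptiveRecvByteBufAllocator(512, 8 * 1024, 64 * 1024));
    }
}
```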
Hello Waterfall community, I run a smaller Minecraft server with about 50 max concurrent players.
I have recently been facing bot attacks where multiple IPs (proxies?) connect to the Waterfall proxy, each allocating 16 MB of direct memory and thus rendering the server unusable in seconds.
I've allocated 512 MB of memory to Waterfall, which was plenty for the last 3 years. I have doubled it to 1 GB for now, but the DoS "attack" still manages to fill the RAM in seconds.
This exception is spammed to the console during an attack:
You could argue that this is a DDoS attack and can't be fixed by BungeeCord/Waterfall.
However, the host machine was using about 30% CPU and 10% of its network resources; it's really only BungeeCord that is struggling to keep up with that many requests.
I do use iptables to rate limit new connections per IP to the proxy, but this does not really help, as the connections come from too many different IPs (a proxy list?). I have now added a global rate limit for SYN packets to BungeeCord, which somewhat mitigates the attack by not crashing the server. However, no new players can join while an attack is running, so this is not a permanent option :/
I also don't make a profit from my server, so I can't afford professional layer 7 DDoS mitigation. Hoping to get help here is my only option.
Any help is appreciated.
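One application-side idea that can complement the iptables rules is capping concurrent connections per IP inside the Netty pipeline itself. The sketch below is a generic handler with a made-up class name and limit, not something Waterfall ships:

```java
import java.net.InetSocketAddress;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

/**
 * Sketch: per-IP connection cap at the Netty level. One instance per child
 * channel (added in the ChannelInitializer), all sharing the static map.
 */
public class PerIpConnectionLimiter extends ChannelInboundHandlerAdapter {

    private static final ConcurrentHashMap<String, AtomicInteger> COUNTS = new ConcurrentHashMap<>();
    private static final int LIMIT = 3; // arbitrary example value

    private String ip; // remembered so channelInactive can decrement

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        InetSocketAddress remote = (InetSocketAddress) ctx.channel().remoteAddress();
        String candidate = remote.getAddress().getHostAddress();
        AtomicInteger count = COUNTS.computeIfAbsent(candidate, k -> new AtomicInteger());
        if (count.incrementAndGet() > LIMIT) {
            count.decrementAndGet();
            ctx.close();              // over the cap: drop immediately
            return;
        }
        ip = candidate;
        super.channelActive(ctx);     // under the cap: let the pipeline continue
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        if (ip != null) {
            COUNTS.get(ip).decrementAndGet();
        }
        super.channelInactive(ctx);
    }
}
```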