
High CPU usage after 60days and 30min #2137

Open
Playit3110 opened this issue Dec 11, 2024 · 11 comments

Comments

@Playit3110

Hi,
I run i2pd 2.54.0 and had it running for 60 days. It worked fine until my CPU usage spiked to 80% on a RPi 4 and stayed there. I then restarted the i2pd service, but still get around 60%. Why is the CPU usage so high?

I looked into the logs and found that many RouterInfos were "not found". I also got some errors:

10:44:16@515/info - NTCP2: Connect error Operation canceled
10:44:17@515/info - NTCP2: Connect error Operation canceled
10:44:17@515/warn - NTCP2: SessionConfirmed read error: End of file
10:44:18@605/error - Streaming: No packets have been received yet
10:44:19@515/info - NTCP2: Connect error Operation canceled
10:44:20@515/info - NTCP2: Connect error Operation canceled
10:44:22@605/error - Streaming: No packets have been received yet
10:44:24@605/error - I2PTunnel: Read error: End of file
10:44:25@605/error - I2PTunnel: Read error: End of file
10:44:26@605/error - Streaming: No packets have been received yet
10:44:26@605/error - Streaming: No packets have been received yet
10:44:26@515/warn - NTCP2: SessionCreated read error: Connection reset by peer
10:44:27@605/error - Streaming: No packets have been received yet
10:44:28@515/warn - NTCP2: SessionCreated read error: End of file
10:44:29@515/info - NTCP2: Connect error Operation canceled
10:44:29@515/info - NTCP2: Connect error Operation canceled
10:44:30@605/error - Streaming: No packets have been received yet
10:44:31@515/info - NTCP2: Connect error Operation canceled
10:44:33@605/error - Streaming: No packets have been received yet
10:44:34@515/info - NTCP2: Connect error Operation canceled

I also got a lot of SSU2 warnings.
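A quick way to see which messages dominate such a log is a short shell pipeline (a sketch: the sample lines below stand in for a real logfile, whose path depends on your logfile setting):

```shell
# Tally which warn/error messages dominate an i2pd-style log.
# The printf lines are sample input; in practice pipe your actual
# logfile through the same awk | sort | uniq -c pipeline.
printf '%s\n' \
  '10:44:16@515/info - NTCP2: Connect error Operation canceled' \
  '10:44:32@515/warn - NTCP2: SessionConfirmed read error: End of file' \
  '10:44:18@605/error - Streaming: No packets have been received yet' \
  '10:44:26@605/error - Streaming: No packets have been received yet' \
  | awk -F' - ' '$1 ~ /\/(warn|error)$/ {print $2}' \
  | sort | uniq -c | sort -rn
```

This keeps only warn/error lines (the `info` line is filtered out) and counts duplicates, so the loudest failure mode floats to the top.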

I hope we find a solution.

@Vort
Contributor

Vort commented Dec 11, 2024

  1. It may be the result of a DoS attack. Updating to a newer, manually built version may help. I'm using 2.54.0-124-g48b62340 right now and it seems to work fine.
  2. Do you have enough file descriptors available on your system? I suspect the default value may be too small.

@Playit3110
Author

How many file descriptors do I need?

@Vort
Contributor

Vort commented Dec 11, 2024

8192 should be enough.
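For reference, the limits currently in effect can be inspected from a shell (a sketch: for a systemd service like i2pd the effective value comes from the unit's LimitNOFILE setting, not from the login shell, and the running process can be checked via /proc/&lt;pid&gt;/limits):

```shell
# Show the soft and hard open-file limits of the current shell.
# For the i2pd service itself, the unit's LimitNOFILE applies;
# inspect the live process with: grep 'open files' /proc/<pid>/limits
echo "soft nofile: $(ulimit -Sn)"
echo "hard nofile: $(ulimit -Hn)"
```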

@Playit3110
Author

Thank you, I will try it.

@LLE8

LLE8 commented Dec 11, 2024

In contrib/i2pd.service, LimitNOFILE=8192 has been the default since Jan 1, 2023.

In my experience on a VDS, normal CPU usage is 70-100% with 25-40 Mbit/s of traffic on 2 cores.
/proc/cpuinfo:

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 13
model name      : QEMU Virtual CPU version 2.5+
stepping        : 3
microcode       : 0x1
cpu MHz         : 1999.999
cache size      : 16384 KB
physical id     : 0
siblings        : 1
core id         : 0
cpu cores       : 1
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm rep_good nopl xtopology cpuid tsc_known_freq pni cx16 x2apic hypervisor lahf_lm pti
bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown mmio_unknown
bogomips        : 3999.99
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:
  - cut -

@orignal
Contributor

orignal commented Dec 11, 2024

Type top -H and see which thread consumes the most CPU.
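A non-interactive equivalent is a one-shot ps snapshot (sketched here against the current shell's PID as a stand-in; substitute i2pd's PID, e.g. from pidof i2pd):

```shell
# One-shot snapshot of per-thread CPU usage for a process, sorted
# by CPU. $$ (this shell's PID) is only a placeholder; in practice:
#   ps -L -o tid,pcpu,comm -p "$(pidof i2pd)" --sort=-pcpu
ps -L -o tid,pcpu,comm -p $$ --sort=-pcpu
```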

@Playit3110
Author

Playit3110 commented Dec 12, 2024

In contrib/i2pd.service
LimitNOFILE=8192
by default since Jan 1, 2023

I already have this config, but still saw the high CPU usage.

Type top -H and see which thread consumes the most CPU.

I tried it with htop and also top -H -u i2pd; the main process had the highest CPU usage.

Edit:
Now the usage has gone down, but why does it get so high after a while? It has happened to me at least 10 times, mostly with the newest release of i2pd. I even added repo.i2pd.xyz to my apt sources so it uses more up-to-date versions.

@LLE8

LLE8 commented Dec 12, 2024

Normal operation
(screenshot: normal CPU usage)

Operation under a DoS attack
#1509 (comment)
#1509 (comment)

@orignal
Contributor

orignal commented Dec 12, 2024

Now the usage has gone down, but why does it get so high after a while? It has happened to me at least 10 times, mostly with the newest release of i2pd.

Do you run any I2P server tunnels there? Maybe your services were the target of attacks.

@Playit3110
Author

Maybe it was both of your ideas: a DoS attack on a service that I run. I looked in my logs and there were many requests with very little time in between. I just don't get why my web server, for example, didn't have the same high CPU usage. Is it because of the extra encryption?

@orignal
Contributor

orignal commented Dec 12, 2024

There was another kind of I2P-specific attack. The mitigation was implemented in trunk. Either build it yourself or wait for the next release.
