
Querying any resource kills the game #17

Open
nahia95 opened this issue Jun 19, 2019 · 5 comments
Labels: bug (Something isn't working)

@nahia95

nahia95 commented Jun 19, 2019

So I've found a weird issue. If you query "r.resourceCurrent" or "r.resourceCurrentMax", the game starts to lag a lot. It doesn't matter whether you do it via the WebSocket or the HTTP API.

I installed MemGraph to watch memory usage, and there is a notable spike in RAM usage along with more frequent GC calls. After a few minutes, memory usage and GC activity keep climbing until the game is unplayable because of the stuttering.

```js
let parametros = {
	"+": [
		"v.atmosphericPressure",
		"v.dynamicPressure",
		"v.verticalSpeed",
		"v.surfaceSpeed",
		"v.geeForce",
		"v.altitude",
		"v.sasValue",
		"o.ApA",
		"o.timeToAp",
		"o.PeA",
		"o.timeToPe",
//		"r.resourceCurrent[LiquidFuel]",
//		"r.resourceCurrent[SolidFuel]",
//		"r.resourceCurrentMax[LiquidFuel]",
//		"r.resourceCurrentMax[SolidFuel]",
		"f.throttle",
		"n.heading2",
		"n.pitch2",
		"n.roll2",
		"p.paused"
	],
	"rate": 500
};
```

That's my list of subscriptions for Telemachus; notice how the resources are commented out.
If I open a WebSocket with those parameters, everything works nicely. But the moment I add any of those four commented-out parameters, the game starts to lag.
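
For reference, a minimal sketch of how that subscription object can be sent over the Telemachus WebSocket; the host, port, and `/datalink` path here are just placeholders for a typical setup, so adjust as needed:

```js
// Minimal sketch (host/port/path assumed): open the Telemachus WebSocket
// and send the subscription object shown above as JSON.
const socket = new WebSocket("ws://127.0.0.1:8085/datalink");

socket.onopen = () => {
    // "+" lists the API strings to subscribe to, "rate" is the update interval in ms.
    socket.send(JSON.stringify(parametros));
};

socket.onmessage = (event) => {
    // Telemetry arrives as a JSON object keyed by the subscribed API strings.
    console.log(JSON.parse(event.data));
};
```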

Here I'm in orbit with a Probodobodyne HECS2 core. The vessel only has a couple of solar panels and the antenna; it doesn't have any fuel or engines.
You can see that memory usage is fairly normal, with a few GC calls that don't lag the game. It has been running for 15 minutes or so.
(I'm connected via Telemachus and querying all the data except for the resources.)

(screenshot: MemGraph memory graph showing normal usage and occasional GC calls)

And this is the moment when I add a resource to the query list. You can clearly see the spikes and the GC calls going crazy.

(screenshot: MemGraph memory graph showing spikes and frequent GC calls)

Closing my web browser has no effect. The lag spikes keep happening, and the only way to get back to normal is to restart the game.

dsolmann pinned this issue on Jul 14, 2019
dsolmann added the bug label on Jul 14, 2019
@dsolmann
Member

Thanks for the report, I'll try to fix that. I think this is happening because the mod queries KSP data too frequently on the main thread. I'll try to move it to a different thread.

@Aman-Anas

Just curious, but is there a fixed release yet? The stuttering makes the game unplayable, which is a slight issue when doing IVA landings.

dsolmann unpinned this issue on Jan 26, 2020
@Row-Bear
Contributor

Row-Bear commented Apr 11, 2020

With KSP 1.9.1 and Telemachus 1.7.64:
I've been trying to replicate this issue but have had mixed results.
On one occasion, KSP's RAM usage went from 3.3 to 5.8 GB overnight and KSP crashed after an unknown amount of time.
On three other occasions, memory usage remained stable at or below the initial 3.3 GB for hours: initially running 2+ hours while only polling sensors, then 2+ hours polling resources as well, with no increased memory use or slowdowns.

Edit: It seems that polling Oxidizer Current Max, at least, still results in a memory leak.
Memory usage increased by about 1.75 GB per hour while I kept polling it.
Next I'm going to test whether there is a difference between resources.

Edit 2: When polling only r.resourceMax[Oxidizer], memory usage does not grow (after a run of 6 hours). When I added r.resourceCurrentMax[Oxidizer], it seemed to increase again.
But as I write this and check, it's actually down 2 GB from what it is normally... ??

@Row-Bear
Contributor

Row-Bear commented Apr 14, 2020

Okay, some findings from more extensive testing: KSP 1.9.1 + all DLC + Telemachus 1.7.64.
Test setup: a command pod with an RTG power source, a battery, and a Telemachus Blade, in orbit around the Sun at 2,384 Gm (a 2,323-year orbit.. sorry Valentina..).
With that setup, I opened the Telemachus UI and started polling readings, in small groups or individually, then left each run going for more than an hour, noting KSP's RAM usage at the start and the end. Not all runs were the same length, so I've also listed the memory usage increase per hour.
Each run also included polls to sensor values, because of the graphs in the Telemachus UI.

In summary: I only got a memory leak when r.resource.currentMax[] is polled. (A sketch of how such a reading can be polled is shown after the table.)

| Sensors polled during run | Duration of run | RAM used by KSP at start (MB) | RAM used by KSP at end (MB) | RAM increase per hour |
| --- | --- | --- | --- | --- |
| r.resource.currentMax[LiquidFuel] | 3h10m | 3030 | 11550 | 2700 MB/hour |
| Science sensors only | 4h42m | 3015 | 3020 | n/a |
| r.resource[ElectricCharge] | 7h10m | 3168 | 2887 | n/a |
| r.resource[ElectricCharge], r.resource[Oxidizer] | 1h00m | 2887 | 2887 | n/a |
| r.resource[ElectricCharge], r.resource[Oxidizer], r.resource[LiquidFuel] | 1h20m | 2887 | 2892 | n/a |
| r.resource[ElectricCharge], r.resource[Oxidizer], r.resource[LiquidFuel], r.resource[MonoPropellant] | 1h00m | 2892 | 2891 | n/a |
| r.resource[ElectricCharge], r.resource[Oxidizer], r.resource[LiquidFuel], r.resource[MonoPropellant], r.resource.CurrentMax[ElectricCharge] | 1h20m | 2891 | 4363 | 1150 MB/hour |
| r.resource[ElectricCharge], r.resource[Oxidizer], r.resource[LiquidFuel], r.resource[MonoPropellant], r.resource.CurrentMax[ElectricCharge] | 2h20m | 3143 | 5109 | 850 MB/hour |
| r.resource.currentMax[LiquidFuel] | 2h02m | 3136 | 4656 | 760 MB/hour |
| r.resource.currentMax[LiquidFuel], r.resource.currentMax[Oxidizer], r.resource.currentMax[ElectricCharge], r.resource.currentMax[MonoPropellant] | 1h40m | 3281 | 4479 | 720 MB/hour |
| r.resourceMax[LiquidFuel], r.resourceMax[Oxidizer], r.resourceMax[ElectricCharge] | 1h20m | 3145 | 2937 | n/a |
| r.resourceMax[LiquidFuel], r.resourceMax[Oxidizer], r.resourceMax[ElectricCharge], r.resourceNameList[LiquidFuel], r.resourceNameList[Oxidizer], r.resourceNameList[ElectricCharge] | 1h00m | 2937 | 2913 | n/a |
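
A rough sketch of polling a single reading over the HTTP API, in case anyone wants to reproduce these runs outside the Telemachus UI; the `/telemachus/datalink` path, port, and query format are assumed from a typical install and may need adjusting:

```js
// Rough sketch (endpoint and query format assumed): poll one reading every 500 ms
// and log it with a timestamp, so memory growth can be correlated with polling.
const host = "127.0.0.1:8085";                     // assumed default host:port
const query = "r.resourceCurrentMax[LiquidFuel]";  // the reading under test

setInterval(async () => {
    // The datalink endpoint maps arbitrary key names to Telemachus API strings.
    const url = `http://${host}/telemachus/datalink?fuelMax=${encodeURIComponent(query)}`;
    const response = await fetch(url);
    const data = await response.json();
    console.log(new Date().toISOString(), data.fuelMax);
}, 500);
```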

sidrus self-assigned this on Apr 17, 2020
@Aman-Anas

Has this bug been resolved in the latest release? Sorry for asking again, but I just want to check whether it has been, or is being, looked at.
