
Mappings use more memory than expected #965

Open
Martinsdevs opened this issue Aug 24, 2024 · 18 comments
Labels
bug Something isn't working

Comments

@Martinsdevs

Describe the bug
Memory leak: 100% of the RAM gets used very quickly.

Software brand
1.21.1 Paper. The server host is good; the server otherwise runs fast with no problems.

Plugins
I ran tests with https://heaphero.io/.
It showed clearly where the memory leak was. (Tested this twice.)

I also tested without the plugin, and no leak showed up.

28,046 instances of "java.lang.Class", loaded by "" occupy 241,775,368 (24.87%) bytes.
Biggest instances:

class com.github.retrooper.packetevents.protocol.world.states.WrappedBlockState @ 0x6269887a0 - 81,319,576 (8.36%) bytes.
class com.mojang.datafixers.types.Type @ 0x618d59a90 - 62,428,800 (6.42%) bytes.
class com.mojang.datafixers.functions.Fold @ 0x61e9139d8 - 53,648,016 (5.52%) bytes.

Expected behavior
No memory leak, but one occurred.

Screenshots

Additional context

Martinsdevs added the bug label on Aug 24, 2024
@Tofaa2
Contributor

Tofaa2 commented Aug 24, 2024

There's no memory leak; the data for all versions of WrappedBlockState is 80 MB. Why are you running a 1.21 server on 1 GB of RAM?

@Martinsdevs
Author

It is a leak. I have 8 GB of RAM.

@semenishchev

semenishchev commented Aug 24, 2024

There's no memory leak; the data for all versions of WrappedBlockState is 80 MB. Why are you running a 1.21 server on 1 GB of RAM?

Assuming you made a typo and it's 80 bytes (not 80 megabytes) per instance, that's still going to be 6 gigabytes' worth of RAM: 81,319,576 x 80 / 1,000,000 ≈ 6505 megabytes.

However, I doubt this is a packetevents issue; it's more likely an issue in a plugin that uses packetevents.

Modern Paper builds have spark built in, so run /spark profiler start --alloc --thread * and upload the results here.
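For reference, an allocation-profiling session with spark typically looks like the two commands below; let the server run until the memory growth is visible, then stop the profiler, which prints a link to the uploaded report (stop behaviour as in current spark versions):

```
/spark profiler start --alloc --thread *
/spark profiler stop
```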

@casperwtf

There's no memory leak; the data for all versions of WrappedBlockState is 80 MB. Why are you running a 1.21 server on 1 GB of RAM?

Assuming you made a typo and it's 80 bytes (not 80 megabytes) per instance, that's still going to be 6 gigabytes' worth of RAM: 81,319,576 x 80 / 1,000,000 ≈ 6505 megabytes.

However, I doubt this is a packetevents issue; it's more likely an issue in a plugin that uses packetevents.

Modern Paper builds have spark built in, so run /spark profiler start --alloc --thread * and upload the results here.

No; under that logic, why is com.mojang.datafixers.types.Type taking almost 5 GB and com.mojang.datafixers.functions.Fold just over 4 GB?

@Tofaa2
Contributor

Tofaa2 commented Aug 25, 2024

There's no memory leak; the data for all versions of WrappedBlockState is 80 MB. Why are you running a 1.21 server on 1 GB of RAM?

Assuming you made a typo and it's 80 bytes (not 80 megabytes) per instance, that's still going to be 6 gigabytes' worth of RAM: 81,319,576 x 80 / 1,000,000 ≈ 6505 megabytes.

However, I doubt this is a packetevents issue; it's more likely an issue in a plugin that uses packetevents.

Modern Paper builds have spark built in, so run /spark profiler start --alloc --thread * and upload the results here.

I am not sure how you did that math, but 81,319,576 bytes of memory is roughly 77 to 81 MB of RAM, depending on whether you count in binary (MiB) or decimal (MB) units. My point still stands.
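For clarity, the conversion from the figure in the heap report works out as:

```
81,319,576 bytes / 1,048,576 ≈ 77.6 MiB (binary megabytes)
81,319,576 bytes / 1,000,000 ≈ 81.3 MB  (decimal megabytes)
```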

@Tofaa2
Contributor

Tofaa2 commented Aug 25, 2024

It is a leak. I have 8 GB of RAM.

Your JVM is running on 1 GB; notice how the PE objects are using 8% of your memory while being 80 MB.

@Martinsdevs
Author

Here it is: https://spark.lucko.me/rORVhYTRMh. It could be that I have something misconfigured, of course; I don't doubt that either.

I use LibsDisguises together with the packetevents plugin.

@Martinsdevs
Author

This is without those two plugins: https://spark.lucko.me/N6SkNKch8M

@semenishchev

There's no memory leak; the data for all versions of WrappedBlockState is 80 MB. Why are you running a 1.21 server on 1 GB of RAM?

Assuming you made a typo and it's 80 bytes (not 80 megabytes) per instance, that's still going to be 6 gigabytes' worth of RAM: 81,319,576 x 80 / 1,000,000 ≈ 6505 megabytes.
However, I doubt this is a packetevents issue; it's more likely an issue in a plugin that uses packetevents.
Modern Paper builds have spark built in, so run /spark profiler start --alloc --thread * and upload the results here.

I am not sure how you did that math, but 81,319,576 bytes of memory is roughly 77 to 81 MB of RAM, depending on whether you count in binary (MiB) or decimal (MB) units. My point still stands.

Isn't this 81,319,576 instances? The only way to calculate the total is to know how much one instance takes. I'm not very familiar with that profiler.

@semenishchev

Here it is: https://spark.lucko.me/rORVhYTRMh. It could be that I have something misconfigured, of course; I don't doubt that either.

I use LibsDisguises together with the packetevents plugin.

This profile shows "no data".

@Martinsdevs
Author

Here is a new report, with the plugins: https://spark.lucko.me/CqJLMmwb81 :)

@semenishchev

Here is a new report, with the plugins: https://spark.lucko.me/CqJLMmwb81 :)

It looks like a third-party problem. I see nothing wrong here.

@Tofaa2
Contributor

Tofaa2 commented Aug 25, 2024

There's no memory leak; the data for all versions of WrappedBlockState is 80 MB. Why are you running a 1.21 server on 1 GB of RAM?

Assuming you made a typo and it's 80 bytes (not 80 megabytes) per instance, that's still going to be 6 gigabytes' worth of RAM: 81,319,576 x 80 / 1,000,000 ≈ 6505 megabytes.
However, I doubt this is a packetevents issue; it's more likely an issue in a plugin that uses packetevents.
Modern Paper builds have spark built in, so run /spark profiler start --alloc --thread * and upload the results here.

I am not sure how you did that math, but 81,319,576 bytes of memory is roughly 77 to 81 MB of RAM, depending on whether you count in binary (MiB) or decimal (MB) units. My point still stands.

Isn't this 81,319,576 instances? The only way to calculate the total is to know how much one instance takes. I'm not very familiar with that profiler.

No, it is that many bytes, not instances; the unit is given after the percentage. It is also a known figure in PE that the wrapped block states take up about 81 MB of RAM.

@DevNatan

DevNatan commented Sep 23, 2024

Same issue here: a lot of WrappedBlockState, 700 MB .hprof. Just create some worlds or load some chunks, put packetevents alone on the server, and you can reproduce it.

My servers were suddenly being killed by the OOM killer; memory just keeps going up until it reaches the limit and the server dies. I tested it on a dedicated server, alone, only one server, only with packetevents, 2 GB of memory. Start the server, create some worlds or wait for some players to join, wait 20-30 minutes, and the server dies.

@booky10
Collaborator

booky10 commented Sep 23, 2024

Are you sure this is caused by packetevents? Yes, packetevents will consume memory, like every other plugin.
An actual "memory leak" means something consuming a potentially unbounded amount of memory, accumulating over time.
It's not a memory leak if packetevents uses e.g. 80 MiB to load some registries on startup. If you actually find a memory leak, please send a heap dump here or through Discord.


My servers were suddenly being killed by the OOM killer

Aside from that, your OOM-killer issue is probably caused by your container memory limit being too low and not accounting for JVM/OS overhead.
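As an aside, a minimal sketch of how one might sanity-check the heap ceiling against a container limit (a hypothetical standalone snippet, not part of packetevents): whatever the container allows beyond -Xmx still has to cover metaspace, thread stacks, GC structures and other non-heap overhead.

```java
// Hypothetical helper: print the configured heap ceiling so it can be
// compared against the container's memory limit.
public class HeapVsContainer {
    public static void main(String[] args) {
        long maxHeapBytes = Runtime.getRuntime().maxMemory();
        System.out.printf("Max heap (-Xmx): %.0f MiB%n", maxHeapBytes / (1024.0 * 1024.0));
        // Leave a few hundred MiB of the container limit free for
        // non-heap JVM and OS overhead, or the OOM killer will step in.
    }
}
```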

@DemonDxv

Same issue here: a lot of WrappedBlockState, 700 MB .hprof. Just create some worlds or load some chunks, put packetevents alone on the server, and you can reproduce it.

My servers were suddenly being killed by the OOM killer; memory just keeps going up until it reaches the limit and the server dies. I tested it on a dedicated server, alone, only one server, only with packetevents, 2 GB of memory. Start the server, create some worlds or wait for some players to join, wait 20-30 minutes, and the server dies.

My .hprof

.hprof header (I can send the entire .hprof if you need it, after removing internal code references): [image]

  1. WrappedBlockState
  2. My plugin that shades packetevents
  3. Class loader

Tests (same result for all): tested with the latest Paper build, legacy Paper 1.8, PandaSpigot, and ImanitySpigot too. Machine specs: dedicated - Ryzen 7950X 32T / 128GB / 1TB.

1st test server specs: On Host - Linux ARM - JDK 17, G1GC, equal Xms and Xmx.

2nd test server specs: On Pterodactyl w/ Linux x64 - JDK 21, Parallel GC, equal Xms and Xmx limited to 85%.

3rd test server specs: On Host - Linux x64 - JDK 21, Generational ZGC (note: impossible to use ZGC, the server dies even earlier), equal Xms and Xmx

4th test server specs: On Pterodactyl - Linux x64 - JDK 21, with Grim Anticheat that shades Packetevents, equal Xms and Xmx

If you need more data, I can record a video of what happens over 30 minutes of the server running with and without packetevents.

I have this exact same problem; hopefully it gets resolved.

@neziw

neziw commented Dec 9, 2024

Having the same problem on the latest PE.

booky10 changed the title from "Memory Leak" to "Mappings use more memory than expected" on Dec 12, 2024
@booky10
Collaborator

booky10 commented Dec 12, 2024

A solution to this "high" memory usage would be lazy loading of the mapping files, although this should be made optional through some sort of setting or system property.
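For illustration only, a minimal sketch of what property-gated lazy loading could look like; the class name and the packetevents.lazy-mappings property are made up for this example and are not existing packetevents API:

```java
import java.util.function.Supplier;

// Hypothetical helper: defers parsing of a mapping file until first use,
// gated behind a (made-up) flag such as -Dpacketevents.lazy-mappings=true.
final class LazyMappings<T> {
    private final Supplier<T> loader;
    private volatile T value;

    LazyMappings(Supplier<T> loader) {
        this.loader = loader;
    }

    T get() {
        T local = value;
        if (local == null) {
            synchronized (this) {
                local = value;
                if (local == null) {
                    value = local = loader.get(); // parse the mapping file on first access
                }
            }
        }
        return local;
    }

    static boolean lazyLoadingEnabled() {
        // Hypothetical property name controlling the behaviour.
        return Boolean.getBoolean("packetevents.lazy-mappings");
    }
}
```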
