Memory-related crashes with parse-server starting with 3.5.0

Hi, we are having issues with our server application using Parse Server, which is preventing us from shipping more to our production servers (the issue is on our development servers).

Environment:

  • AWS Elastic Beanstalk (Node.js running on 64bit Amazon Linux/4.9.2 / Node.js 10.16.0) running on an auto-scaling fleet of 1-4 c5.xlarge instances for this application (this is development, never really scales)
  • MongoDB database hosted by mLab on a private cluster
  • latest functional version was using parse-server v3.4.4

Issue description:

  • memory usage goes up until the server crashes
  • 100% reproducible in our setup
  • it takes around 15 to 30 minutes from deploy to crash

What we tried:

  • terminated the machine to obtain a new one: same issue
  • rolled back to our production version using parse-server 3.4.4: this works
  • added back only parse-server 3.5.0: the issue happens
  • added back only parse-server 3.6.0: the issue happens
  • explored the logs, but they are not helpful: there is nothing in them before the crash, and it is impossible to access the machine once it has crashed.
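Since the logs are empty before the crash, one thing that could help (a minimal sketch, not something from the original setup; the label and interval are arbitrary) is having the process itself log its memory usage periodically, so the growth curve survives in the Beanstalk logs even after the instance becomes unreachable:

```javascript
// Sketch of a memory watchdog: format the current resident set size
// and heap usage as a single log line, and print it once a minute.
function formatMemoryLine() {
  const { rss, heapUsed } = process.memoryUsage();
  const mb = (bytes) => (bytes / 1024 / 1024).toFixed(1);
  return `[mem] rss=${mb(rss)}MB heapUsed=${mb(heapUsed)}MB`;
}

// unref() so this timer does not keep the process alive on its own.
setInterval(() => console.log(formatMemoryLine()), 60 * 1000).unref();
```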

I identified the culprit when updating parse-server from 3.4.4 to 3.5.0 or 3.6.0 (tried both). On the following graph, you can see on the left the memory curve when the server is using parse-server 3.4.4: it grows and then stabilizes around 500 MB. On the right, when using 3.5.0 or 3.6.0, the memory usage grows until it reaches the limit of our servers (100% memory used), then the server crashes.

Is anyone else experiencing similar issues? Any idea how to debug this issue?

Happy to provide more information if needed.

Thanks.

@cyrilchandelier thanks for reporting. I will try to reproduce. Since it is a development server, what kind of requests, and how many per minute, are you running during this 30-minute test? Also, do you have anything special in your schema? How many classes? Any class with a large number of fields? Any field storing a huge amount of data as a byte array or similar?

Thanks for helping @davimacedo.

We are running jobs against an old copy of real data, so even the development server has a throughput of around 1,000 requests per minute. These requests mostly fetch entries from different tables and update fields (sometimes based on other count queries); some of them do individual saves, some use batches.

In terms of schema, we are talking about 20 classes ranging from 5-10 entries up to 4 million rows. There is one collection storing large data (in-app purchase receipts), but it is not used at all on development (no scripts query it).

I don’t think it will be easily reproducible; 3.5.0 has been out there for a while now.

Would it be possible for you to increase the server memory (maybe 2x) and run the test again, so we can figure out whether the memory will grow forever or stabilize at some point? What I’m trying to figure out: is it a memory leak, or is the new version simply using more memory than the older one? Could you please also compare the memory usage right after initializing the process with the two different versions?
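For the baseline comparison, something like the following sketch, printed right after the server initializes, would give comparable numbers for the two versions (the helper name is made up; values are rounded to MB):

```javascript
// Snapshot the process memory in whole megabytes so the startup
// footprint of parse-server 3.4.4 vs 3.5.0 can be compared directly.
function memorySnapshotMB() {
  const { rss, heapUsed, heapTotal } = process.memoryUsage();
  const toMB = (bytes) => Math.round(bytes / (1024 * 1024));
  return {
    rssMB: toMB(rss),           // total resident set size
    heapUsedMB: toMB(heapUsed), // V8 heap actually in use
    heapTotalMB: toMB(heapTotal),
  };
}

console.log('memory after init:', memorySnapshotMB());
```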

I ran several tests today:

  • upgrading parse-server to latest (3.7.2): same outcome, crash at around 800 MB memory usage
  • upgrading my server machines to double the capacity (c5.2xlarge instead of c5.xlarge): I observed the memory stabilizing around the 900 MB mark

Any idea what would cause this?

Thanks.

Do you have enableSingleSchemaCache or directAccess enabled? These should reduce your memory usage.

You could also use the RedisCacheAdapter.
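A rough sketch of what a configuration with those options could look like (the appId, keys, and URIs are placeholders, not from this setup):

```javascript
// Sketch: parse-server configuration enabling the options suggested
// above. enableSingleSchemaCache shares one schema cache across
// requests instead of caching a copy per request; directAccess lets
// cloud code call the server in-process instead of over HTTP; and a
// RedisCacheAdapter moves the cache out of the Node process entirely.
const { ParseServer, RedisCacheAdapter } = require('parse-server');

const api = new ParseServer({
  databaseURI: 'mongodb://localhost:27017/dev', // placeholder
  appId: 'myAppId',                             // placeholder
  masterKey: 'myMasterKey',                     // placeholder
  serverURL: 'http://localhost:1337/parse',     // placeholder
  enableSingleSchemaCache: true,
  directAccess: true,
  // Optional: keep the cache in Redis rather than in process memory.
  cacheAdapter: new RedisCacheAdapter({ url: 'redis://localhost:6379' }),
});
```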

What version of Node are you running?

There are a few threads on performance and debugging.