Error R14 (Memory quota exceeded) #8
Comments
Yikes - yes, please lemme know if the memory leaks go away when you're not using the cluster. I've used this for a long time in production without problems, but there could still be memory leaks I suppose! I'll stay tuned...
Hi, Brian.
This is not a forky issue. Unless you're using the latest io.js, you're most likely using an unstable version of cluster.
I'm seeing forky max out memory and swap as well; it just creates more and more workers that each take a chunk of memory until the server crashes.
Yes -- same thing happening to me. Any thoughts? Would it help to reduce the # of workers forky creates?
@marclar I just pulled out the Forky layer and spread the application across multiple nodes with the database decoupled. Sorry it isn't a true solution, but it solves the problem in some form.
As in, you implemented your own solution with the cluster module? Not sure I follow.
I just have the same application running on several servers, with a load balancer distributing load across them. If I were writing a Node.js application from scratch I might look into forky more seriously, but what I have is an inherited project with a limited budget.
Hello folks. I (@fiftythree) have also been running Forky in production for years now with no issues. I just wanted to chime in with one simple possible explanation.

Forky launches multiple Node processes on the same dyno, which means a single dyno will use more memory than it would without Forky. Specifically, if your Node process uses ~100 MB on its own, Forky running 10 workers will use ~1 GB. So rather than a Forky memory leak, it's probably simply that you're running more workers than your dyno's memory can accommodate. Reducing the number of workers, or bumping up your dyno size, will probably fix it, and you can monitor your memory usage via Heroku Metrics.

If it helps, Heroku provides some helper configs that let you dynamically figure out the optimal number of workers to spawn based on your dyno size: https://devcenter.heroku.com/articles/node-concurrency

Hope this helps!
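For anyone who wants a concrete starting point, here's a minimal sketch of sizing the worker count from the dyno. It uses Node's built-in cluster module rather than forky's own API (so as not to guess forky's options); WEB_CONCURRENCY is the environment variable the linked article describes, and `./server` plus the CPU-count fallback are placeholder assumptions, not something from this thread.

```js
// index.js -- a sketch using Node's built-in cluster module, not forky's API.
// WEB_CONCURRENCY is set by the Heroku Node buildpack based on dyno size;
// './server' is a placeholder for your app's entry point.
var cluster = require('cluster');
var os = require('os');

var workers = parseInt(process.env.WEB_CONCURRENCY, 10) || os.cpus().length;

if (cluster.isMaster) {
  for (var i = 0; i < workers; i++) {
    cluster.fork();
  }
  cluster.on('exit', function (worker) {
    console.log('worker %d exited, forking a replacement', worker.process.pid);
    cluster.fork();
  });
} else {
  // each worker runs its own copy of the HTTP server
  require('./server');
}
```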
Here are some more specific things we do, if that helps:
Hope this helps also! |
A gentleman and a scholar, @aseemk ;)
Hi!
Thanks for this great library.
Unfortunately, when I try to use it on Heroku, the dyno gets killed because it exceeds the memory quota. This is the exact log:
It works fine for a while (no error logs), and then it starts exceeding the quota in a loop, until I restart Heroku and everything is back to normal for a while.
(the HEAD requests are newrelic's)
I seem to be following all the steps indicated in the README. Perhaps the memory leaks are my fault, but I'd never seen these messages before I started using forky. I'll try not using the cluster for a while and see if the errors still occur.
Thoughts, anyone?
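One way to check whether this is an application leak or simply per-worker footprint multiplied by worker count is to log each worker's resident set size and compare the sum against the dyno's quota. A minimal sketch (the 30-second interval and log format are arbitrary choices, not something from this thread):

```js
// Drop this into each worker (e.g. near the top of server.js) to watch
// per-process memory. If every worker's RSS stays flat but the dyno still
// hits R14, the total is probably just (workers x per-worker RSS).
setInterval(function () {
  var rssMb = Math.round(process.memoryUsage().rss / (1024 * 1024));
  console.log('worker %d rss: %d MB', process.pid, rssMb);
}, 30 * 1000);
```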