Fix memory leaks. #2703
Conversation
To facilitate testing this I have created two patches, one for the develop branch and one for the branch for this pull request. They are in the attached file. Edit: Here is an updated patch for this pull request, since the one in the zip file has an offset after another change to this pull request: destroy-test-pr.patch.txt. Apply the appropriate patch, then run the webwork2 app either via morbo or hypnotoad. Note that to see the output with hypnotoad you will not be able to use the systemd service. Instead, run it in the foreground from the webwork2 root directory.
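The exact commands were not captured above, but assuming a standard layout where webwork2 lives in `/opt/webwork/webwork2` and the app script is `bin/webwork2`, they would look something like this:

```bash
cd /opt/webwork/webwork2

# Apply whichever patch matches your branch (file name from the attachment above).
git apply destroy-test-pr.patch.txt

# Development server (runs in the foreground):
morbo bin/webwork2

# Or hypnotoad, kept in the foreground so its output stays in the terminal:
hypnotoad -f bin/webwork2
```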
Then open webwork in the browser and navigate to several different pages. If you are on the develop branch, you will see that most of the modules in question are not destroyed. Note that if you have multiple authentication modules (like LTI 1.1 and LTI 1.3), then you will see the unused authentication modules destroyed when they are skipped to move on to the next authentication module, because the controller releases the reference to those. However, the used authen module will not be destroyed. After visiting numerous pages, stop the webwork2 app (using Ctrl-C with morbo or with hypnotoad run in the foreground). If you are using this pull request, then you will see all of these objects destroyed after each request completes as you move from page to page in the browser.
Note that this will have a minor merge conflict with #2702.
Force-pushed from be832e1 to 2ffeaf6.
I checked out develop and applied the patch. I stopped the webwork2 service. The log file is owned by the server user. Then I ran the commands. However, using a web browser, I could not reach WeBWorK, neither at its usual address nor at http://127.0.0.1:80. I used Ctrl-C to exit. To start regular business back up, I git stashed the patch changes and restarted the service. I added myself to the server user's group and then checked to see that I was indeed in that group.
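The specific commands and names were lost from the comment above; the group steps amount to something like the following, with `www-data` standing in for whatever user and group actually own the webwork2 log file on the server:

```bash
# Add yourself to the server's group (group name assumed).
sudo usermod -a -G www-data "$USER"

# Group changes only apply to new login sessions, so log out and back in
# (or reboot), then confirm the membership took effect:
id -nG
```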
I guess my instructions for running hypnotoad without the service are not quite correct for most deployments. I should have instead told you to run it a different way (see the updated instructions above). As to your issue in general, you might need to reboot for the group change to take effect.
I updated the instructions above. If you deploy webwork2 to serve directly, say with the server user that owns the log file, then run the commands as that user. Note that it is also important that the last command be run from the webwork2 root directory.
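A sketch of what that looks like, assuming the server user is `www-data`, the service is named `webwork2`, and webwork2 is installed in `/opt/webwork/webwork2` (adjust all three for your deployment):

```bash
cd /opt/webwork/webwork2

# Stop the service so the port and log file are free.
sudo systemctl stop webwork2

# Run hypnotoad in the foreground as the server user so the log file
# permissions match and the output stays visible in this terminal.
sudo -u www-data hypnotoad -f bin/webwork2
```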
By the way, it is odd that you got errors running that command.
OK, I'm not sure I'm completely up and back yet, but I did get this far, and I can enter a course from a web browser. However, I am unsure where I should be looking to see modules destroyed. As you can see above, I was taken back to the usual terminal prompt. There is no feed. I navigated to many pages and there was no change in the terminal. Then I ran another command, and I'm still not seeing everything get destroyed at any stage. Am I doing this wrong?
Wait, I tried again and now I'm seeing a feed in the terminal.
OK, I ran the commands. But now I can't get to a course from a web browser, either at its usual address or at http://127.0.0.1:80. Firefox just gives me the "Unable to connect" page.
This is happening on a remote server and I'm not sure what to do to access its localhost (127.0.0.1) from a web browser running on my laptop.
If you see that, it means that you already had webwork2 running. I guess I need to add another step: make sure you stop the instance that is already running first.
You will see all of the output in the terminal once we get it running in the foreground properly.
You will also need to use sudo for these commands. I apologize for not giving the best instructions for testing this with hypnotoad. On my local machine I don't use root for running the webwork2 app, so things are simpler.
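The missing step is presumably something along these lines (unit and script names assumed; run from the webwork2 root directory):

```bash
# Make sure nothing is already serving webwork2 before starting it in the
# foreground, otherwise the new instance cannot bind to the port.
sudo systemctl stop webwork2

# If a hypnotoad instance was started by hand earlier, stop it too.
sudo hypnotoad -s bin/webwork2
```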
I think my issue now is the one in my most recent post: if localhost on a remote server is where the web application is running, how do I actually access the application from a web browser on my local computer?
You should be able to access the application from the web browser the same way that you usually do when you are using the systemd service. The commands that I have given are just doing what the service does to run hypnotoad. The rest will work the same. Of course you will need to view the terminal on the server to see the output. That won't be in the browser.
Ahh, wait. Remove that part.
Although, I tested this on a server that is serving directly in a production-like environment.
No luck. I'm doing this on the development server my school set up for me. To get there, I must turn on my school VPN. Normally, I then go to its usual URL. Just now I ran the commands, then refreshed the browser, still at the same URL, and I get the "Unable to connect" page in Firefox. I quit with Ctrl-C and then restarted the service.
Can you still access the server in the browser when you are using the systemd service?
Yes. I just noticed something, though.
OK, each time I start the systemd service back up, I see that it is clobbering the ownership of the log file. Then running hypnotoad, it seems like I'm good to continue testing.
If you can access the server with the systemd service, then you will also be able to access it doing it this way. We just have to do everything the way the systemd service does. One thing to note is what running it this way does to the log file, and an alternative approach to this is to change how the log file is handled instead.
Alex-Jordan left a comment
I was able to test using the posted patches and see the objects destroyed as described.
pstaabp left a comment
Running on my Mac with morbo, I'm seeing everything destroyed on each page request.
If needed, I will try to find another Dev server to test.
Do you mean that you are seeing everything destroyed on each request with this pull request, or with develop? With the develop branch that won't happen. You might see some things destroyed, but certainly not everything.
To clarify, I was seeing everything destroyed on the PR, but not on develop. I think that is how I understood it should be happening.
Yes, that is correct. I was just seeking clarification. Thanks.
It turns out that none of the ContentGenerator controller objects are
being destroyed when a request finishes. So each hypnotoad process (or
morbo in development) has every ContentGenerator controller object for
every request it renders saved in memory until the process ends. That
means that everything it had a reference to is also saved in memory.
That includes a `WeBWorK::CourseEnvironment` instance, a
`WeBWorK::Authen` instance, a `WeBWorK::Authz` instance, and a
`WeBWorK::DB` instance (and everything it has a reference to).
Furthermore, even if the controller objects are destroyed and the
`WeBWorK::DB` instance with them, none of the `WeBWorK::DB::Schema`
instances (one for each table) are ever destroyed.
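The attached test patches make this visible. Roughly, they add reporting along the following lines (a hypothetical illustration, not the actual patch) so that each object announces when Perl finally garbage collects it:

```perl
package WeBWorK::Authz;    # and the same few lines for Authen, DB, the schemas, ...

# Print a line when an instance is released. On develop, most of these lines
# never appear while the app is running because the objects are never freed.
sub DESTROY {
    my $self = shift;
    warn ref($self) . " destroyed\n";
}

1;
```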
There are two things that cause these references to be kept when they
shouldn't be.
The first is the more obvious one: circular references.
A `WeBWorK::Controller` object (from which the `WeBWorK::ContentGenerator`
modules derive) keeps a reference to a `WeBWorK::Authz` instance, and
that instance keeps a reference back to the controller. However, the
`WeBWorK::Authz` object doesn't weaken the reference back to the controller.
That was my fault: in the conversion to Mojolicious I commented out the
`weaken` statement that prevented this circular reference. That was
because in the initial conversion the controller didn't have a reference
to the `WeBWorK::Authz` instance, and so it was going out of scope and
causing problems. However, when the reference to that instance was
added, that statement should have been uncommented.
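The fix is the standard weak back-reference pattern. In simplified form (a sketch, not the actual `WeBWorK::Authz` code) it looks roughly like this:

```perl
package My::Authz;    # hypothetical stand-in for WeBWorK::Authz

use strict;
use warnings;

use Scalar::Util qw(weaken);

sub new {
    my ($invocant, $c) = @_;
    my $self = bless { c => $c }, ref($invocant) || $invocant;

    # The controller references $self and $self references the controller.
    # Weakening the back-reference breaks the cycle so both objects can be
    # destroyed once the request finishes.
    weaken($self->{c});

    return $self;
}

1;
```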
Another case of this is that the `WeBWorK::Authen::LTIAdvanced` and
`WeBWorK::Authen::LTIAdvantage` packages were keeping a circular
reference to the controller as well. The `new` methods in those packages
were just deleted so that they use the `WeBWorK::Authen` `new` method,
which already does the right thing.
A third case occurs with the `WeBWorK::DB` instance and the
`WeBWorK::DB::Schema` instances, both of which hold references to each
other.
The other thing that causes an extra reference to be kept is an
anonymous subroutine (or closure) using an instance. In this case Perl
forces the instance to be kept in scope for usage in the closure.
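A self-contained sketch of the mechanism (nothing webwork2-specific here):

```perl
#!/usr/bin/env perl
use strict;
use warnings;

package Demo;
sub new     { return bless {}, shift }
sub DESTROY { warn "Demo object destroyed\n" }

package main;

my $closure;
{
    my $obj = Demo->new;
    # The anonymous subroutine captures $obj, so Perl keeps $obj alive for as
    # long as the closure itself exists.
    $closure = sub { return ref $obj };
}
# $obj is out of scope here, but "Demo object destroyed" has not printed yet.
warn 'closure still sees a ' . $closure->() . "\n";

# Only when the closure is released does the object get destroyed.
$closure = undef;
```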
The global `$SIG{__WARN__}` handler defined in `Mojolicious::WeBWorK`
uses the `$c` controller instance, and that is what prevents the
`WeBWorK::ContentGenerator` modules from going out of scope. So that
instance in the `around_action` hook needs to be weakened.
For the `WeBWorK::DB::Schema::NewSQL::Std` and `WeBWorK::DB::Schema::NewSQL::Merge`
objects the issue is the `transform_table` and `transform_all` closures
for the sql abstract instances. Those prevent the schema objects from
going out of scope and so the `$self` in the `sql_init` methods where
those closures are defined needs to be weakened as well.
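In simplified form, that change amounts to weakening the captured variable before the closures are created. This is only a sketch; the constructor and method names below are placeholders, not the actual schema code:

```perl
use Scalar::Util qw(weaken);

sub sql_init {
    my $self = shift;

    # Weaken the reference that the closures below capture. The schema object
    # is still referenced by its parent WeBWorK::DB object, so $self keeps
    # working, but the closures no longer add a strong reference of their own.
    weaken $self;

    # Placeholder constructor and method names; the point is only that these
    # two callbacks close over the (now weakened) $self.
    $self->{sql_abstract} = My::SQLBuilder->new(
        transform_table => sub { $self->qualify_table_name($_[0]) },
        transform_all   => sub { $self->qualify_all_fields($_[0]) },
    );

    return;
}
```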
Edit: Instead of weakening the controller in the `around_action` hook,
just make sure that the `$SIG{__WARN__}` handler is reset in the
`after_dispatch` hook, which removes the code reference to the handler
defined in the `around_action` hook and thus releases its reference on
the controller.
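Put together, the hook interaction looks roughly like this minimal Mojolicious::Lite sketch (simplified, not the actual `Mojolicious::WeBWorK` code):

```perl
use Mojolicious::Lite -signatures;

hook around_action => sub ($next, $c, $action, $last) {
    # This handler closes over $c, so while it remains installed in
    # $SIG{__WARN__} the controller (and everything it references) stays alive.
    $SIG{__WARN__} = sub { $c->app->log->warn($_[0]) };
    return $next->();
};

hook after_dispatch => sub ($c) {
    # Restoring the default handler drops the closure, and with it the captured
    # reference to the controller, so the controller can now be destroyed.
    $SIG{__WARN__} = 'DEFAULT';
};

get '/' => sub ($c) {
    warn 'this warning goes through the handler above';
    $c->render(text => 'ok');
};

app->start;
```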