The easiest way to achieve that is:
CherryClass Root:
view:
    def index(self):
        while 1: pass
It's easy to tell that your server is spinning: you can no longer connect to it, and the CPU time used by the process (displayed, for instance, by ps on Unix) keeps growing, which means the process is not stalled but is actually busy doing something.
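For instance, on a Unix system you might run something like the following every few seconds and watch the TIME column (the cumulated CPU time) keep growing; 7457 is just an example PID, the same one used in the gdb session below:

    ps -o pid,time,args -p 7457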
You might be thinking "I'm not that stupid! I will never write such an obvious loop!".
But here is another less trivial case that actually happened to me (and that's the reason why I wrote this HowTo):
CherryClass Root:
function:
    def extractText(self, code):
        # Remove text between < and >
        while 1:
            i=code.find('<')
            if i==-1: break
            j=code.find('>', i)
            if j!=-1: code=code[:i]+' '+code[j+1:]
        return code
mask:
    def index(self, code=''):
        <html><body>
            Extracted text: <py-eval="self.extractText(code)">
            <form action="index">
                New text: <textarea name="code"></textarea>
                <input type=submit>
            </form>
        </body></html>
If you compile this code, run it and test it with some HTML code, it might run fine for a while, but in some cases it will start "spinning"! The reason is that the extractText function is buggy: if the code is not proper HTML (for instance, a tag is opened with a "<" but never closed with a ">"), the function enters an infinite loop. Correcting that is very easy once you know what's happening: just add "else: break" after the second "if" of the function.
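For reference, here is a sketch of what the corrected function could look like, with that "else: break" added and nothing else changed:

    def extractText(self, code):
        # Remove text between < and >
        while 1:
            i=code.find('<')
            if i==-1: break
            j=code.find('>', i)
            if j!=-1:
                code=code[:i]+' '+code[j+1:]
            else:
                break  # a '<' with no matching '>': stop instead of looping forever
        return code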
But when this happens on your production server, all you see is that your CherryPy server sometimes starts spinning. Besides, the server might run fine for days and then start spinning all of a sudden, making it very hard to reproduce the problem in your development environment, and thus almost impossible to find out where it comes from!
The next section explains how to easily debug this and track down the culprit ...
All you have to do is wait for your CherryPy server to start spinning. Once it happens, fire up gdb with the name of the Python interpreter that is running your CherryPy server (making sure you use the right version). For instance:
    gdb python2.1

or:

    gdb python2.2
Then, inside gdb, attach to the spinning process, using its PID (which you can find with ps):

    attach 7457
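As a side note, gdb can also do both steps at once if you pass the PID on the command line; with the example PID above, that would be something like:

    gdb python2.2 7457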
Now comes the clever trick: run the following command in gdb:
    call PyRun_SimpleString("import sys, traceback; sys.stderr=open('/tmp/tb','w',0); traceback.print_stack()")
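To make the one-liner easier to read, here is the same snippet split into separate statements with comments. This is just the Python 2 code that the gdb command injects into the stuck interpreter, not something you need to type separately:

    import sys, traceback
    # Reopen stderr on /tmp/tb; the 0 means unbuffered, so the output
    # reaches the file immediately even though we never close it.
    sys.stderr = open('/tmp/tb', 'w', 0)
    # print_stack() writes the stack of the currently executing frame
    # to sys.stderr, i.e. to /tmp/tb - which is exactly where the
    # server is stuck.
    traceback.print_stack()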
This will save the traceback in the /tmp/tb file. Just exit gdb and look at this file ... It should be obvious where the server was stuck. In our example, the file contained the following lines:
File "TestServer.py", line 454, in ? try: _serveForever(_masterSocket) File "TestServer.py", line 215, in _serveForever _handleRequest(_wfile) File "TestServer.py", line 363, in _handleRequest response.body=eval("%s.%s(%s)"%(_myClass,_function, _paramStr)) File "<string>", line 0, in ? File "TestServer.py", line 61, in index _page.append(str(self.extractText(code))) File "TestServer.py", line 55, in extractText if j!=-1: code=code[:i]+' '+code[j+1:] File "<string>", line 1, in ?
PS: Thanks to Barry Warsaw for this trick (which was originally posted to a Zope mailing list).