Remove multiprocess finalizer and improve subprocess docs #265

Merged Mar 9, 2019 · 34 commits
3160976
Stop using multiprocess finalizers. Better document workarounds for u…
ionelmc Feb 16, 2019
1210cdc
Rename file.
ionelmc Feb 16, 2019
0533855
Update docs/subprocess-support.rst
blueyed Feb 16, 2019
892f4e1
Update docs/subprocess-support.rst
blueyed Feb 16, 2019
b494c37
Update docs/subprocess-support.rst
blueyed Feb 16, 2019
5d49467
Remove the note about windows completely.
ionelmc Feb 16, 2019
1cb8ce6
Well ... bring back the finalizer, but make the cleanup reentrant.
ionelmc Feb 17, 2019
8014767
Remove this attribute and rely on pytest_cov.embed's internal storage…
ionelmc Feb 17, 2019
38c89a8
Ignore SIG_DFL (lil regression).
ionelmc Feb 17, 2019
5374581
Avoid doubly registering the signal handler (so the previous handler …
ionelmc Feb 17, 2019
710e4cc
Add missing global.
ionelmc Feb 17, 2019
8a2bfa8
Add a test for #250.
ionelmc Feb 17, 2019
4046fc2
Add a windows specific test.
ionelmc Feb 17, 2019
65d50cf
Some renaming to better reflect what is actually tested.
ionelmc Feb 17, 2019
49f55d6
Add few more multiprocessing tests and change some details for skips.
ionelmc Feb 17, 2019
b449d92
Skip a bunch of stuff on windows+pypy - it's broken, see https://gith…
ionelmc Feb 18, 2019
c32c158
Correct some assertions. Revert bogus change.
ionelmc Feb 18, 2019
ab9d7bc
Run this fewer times. Maybe travis too slow for such heavy test.
ionelmc Feb 18, 2019
cd0c4a4
Rework a bit the mp pool integration tests to generate a line of code…
ionelmc Feb 21, 2019
3d3b488
Some cleanup.
ionelmc Feb 21, 2019
2291d76
Skip this on windows/pypy (xdist broken).
ionelmc Feb 22, 2019
8f1b9e6
Extend assertion a bit.
ionelmc Feb 22, 2019
478152e
Change the docs again to reflect the current implementation.
ionelmc Feb 22, 2019
7aa50d9
Fix escaping.
ionelmc Feb 23, 2019
3709127
Use travis_wait (mainly for pypy which often times out).
ionelmc Feb 23, 2019
35f38f4
Use travispls instead - travis_wait is so broken ...
ionelmc Feb 23, 2019
bcdce59
Don't run pypy3 on Windows.
ionelmc Feb 23, 2019
c11fe04
Skip the terminate tests on PyPy and remove travispls (doesn't work o…
ionelmc Feb 23, 2019
103d1ef
Remove the automatic SIGTERM handler install from the afterfork
ionelmc Feb 25, 2019
66d8ade
Avoid having stray tracers around. This fixes an "AssertionError: Exp…
ionelmc Feb 25, 2019
65959fc
Avoid writing bogus data files from dead coverage tracers.
ionelmc Feb 25, 2019
fdc43ec
Allow COV_CORE_SOURCE to be empty (it'd be converted to None). Also u…
ionelmc Feb 25, 2019
7557f67
Fix cleanup leaving unusable state.
ionelmc Feb 25, 2019
42f0307
Always skip this on PyPy as it sometimes fail with `error: release un…
ionelmc Mar 9, 2019
2 changes: 1 addition & 1 deletion appveyor.yml
@@ -8,7 +8,7 @@ environment:
   - TOXENV: 'py27-t310-c45,py27-t40-c45,py27-t41-c45'
   - TOXENV: 'py34-t310-c45,py34-t40-c45,py34-t41-c45'
   - TOXENV: 'py35-t310-c45,py35-t40-c45,py35-t41-c45'
-  - TOXENV: 'pypy-t310-c45,pypy-t40-c45,pypy-t41-c45,pypy3-t310-c45,pypy3-t40-c45,pypy3-t41-c45'
+  - TOXENV: 'pypy-t310-c45,pypy-t40-c45,pypy-t41-c45'
Contributor:

I think it could be kept, but the problem appears to be that only pypy gets installed (unconditionally)?!

Contributor:

See pytest-dev/pytest@ec69514 for installing it only if needed.

Member Author:

We don't install pypy there at all - it's preinstalled. Either way, I don't want to bother with pypy3 yet on Windows (it's beta anyway, and xdist is broken on pypy windows, so why bother with extensive testing on such a platform).


init:
- ps: echo $env:TOXENV
2 changes: 1 addition & 1 deletion ci/bootstrap.py
@@ -52,7 +52,7 @@

 template_vars = {'tox_environments': tox_environments}
 for py_ver in '27 34 35 py'.split():
-    template_vars['py%s_environments' % py_ver] = [x for x in tox_environments if x.startswith('py' + py_ver)]
+    template_vars['py%s_environments' % py_ver] = [x for x in tox_environments if x.startswith('py' + py_ver + '-')]
 
 for name in os.listdir(join("ci", "templates")):
     with open(join(base_path, name), "w") as fh:
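To see why the trailing ``'-'`` matters, here is a small illustration with sample environment names (illustrative only, not the project's full tox list): without the separator, the ``pypy`` prefix also captures ``pypy3`` environments.

```python
# Sample environment names (illustrative, not the project's full tox list).
tox_environments = ['py27-t310-c45', 'py34-t40-c45', 'pypy-t310-c45', 'pypy3-t310-c45']

# Without the trailing '-', the 'pypy' prefix also captures 'pypy3' envs.
loose = [x for x in tox_environments if x.startswith('py' + 'py')]
strict = [x for x in tox_environments if x.startswith('py' + 'py' + '-')]

print(loose)   # ['pypy-t310-c45', 'pypy3-t310-c45']
print(strict)  # ['pypy-t310-c45']
```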
2 changes: 1 addition & 1 deletion docs/index.rst
@@ -11,7 +11,7 @@ Contents:
    reporting
    debuggers
    xdist
-   mp
+   subprocess-support
    plugins
    markers-fixtures
    changelog
70 changes: 0 additions & 70 deletions docs/mp.rst

This file was deleted.

4 changes: 2 additions & 2 deletions docs/plugins.rst
@@ -14,9 +14,9 @@ Alternatively you can have this in ``tox.ini`` (if you're using `Tox <https://to

 [testenv]
 setenv =
-    COV_CORE_SOURCE={toxinidir}/src
+    COV_CORE_SOURCE=
     COV_CORE_CONFIG={toxinidir}/.coveragerc
-    COV_CORE_DATAFILE={toxinidir}/.coverage.eager
+    COV_CORE_DATAFILE={toxinidir}/.coverage
 
 And in ``pytest.ini`` / ``tox.ini`` / ``setup.cfg``::

169 changes: 169 additions & 0 deletions docs/subprocess-support.rst
@@ -0,0 +1,169 @@
==================
Subprocess support
==================

Normally coverage writes its data via a pretty standard atexit handler. However, if the subprocess doesn't exit on its
own then the atexit handler might not run. Why that happens is best left to the adventurous to discover by wading
through the Python bug tracker.
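A quick stdlib-only sketch of the problem (a hypothetical demo, not pytest-cov code): a child process that exits normally runs its atexit handlers, while one torn down with ``os._exit`` (or killed by a signal) skips them, so any data scheduled for saving at exit is silently lost.

```python
# Hypothetical demo (not pytest-cov code): atexit handlers run on a normal
# exit but are skipped on a hard exit, which is why coverage data can be lost.
import subprocess
import sys
import textwrap

CHILD = textwrap.dedent("""
    import atexit, os, sys
    atexit.register(lambda: print("atexit ran"))
    if sys.argv[1] == "clean":
        sys.exit(0)   # normal exit: atexit handlers run
    else:
        os._exit(0)   # hard exit: atexit handlers are skipped
""")

def run_child(mode):
    result = subprocess.run(
        [sys.executable, "-c", CHILD, mode],
        capture_output=True, text=True,
    )
    return result.stdout.strip()

print("clean exit:", run_child("clean") or "(nothing)")  # prints "atexit ran"
print("hard exit:", run_child("hard") or "(nothing)")    # prints "(nothing)"
```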

pytest-cov supports subprocesses and multiprocessing, and works around these atexit limitations. However, there are a
few pitfalls that need to be explained.

If you use ``multiprocessing.Pool``
===================================

**pytest-cov** automatically registers a multiprocessing finalizer. The finalizer will only run reliably if the pool is
closed. Closing the pool signals the workers that there will be no more work, so they eventually exit. You also need
to call ``join`` on the pool to wait for the workers to finish.

If you use ``multiprocessing.Pool.terminate`` or the context manager API (``__exit__``
just calls ``terminate``) then the workers can receive SIGTERM and the finalizers won't run or complete in time.
Thus you need to make sure your ``multiprocessing.Pool`` gets a nice and clean exit:

.. code-block:: python

    from multiprocessing import Pool

    def f(x):
        return x*x

    if __name__ == '__main__':
        p = Pool(5)
        try:
            print(p.map(f, [1, 2, 3]))
        finally:
            p.close()  # Marks the pool as closed.
            p.join()   # Waits for workers to exit.


If you must use the context manager API (e.g., the pool is managed by third-party code you can't change) then you can
register a cleanup SIGTERM handler like so:

.. code-block:: python

    from multiprocessing import Pool

    def f(x):
        return x*x

    if __name__ == '__main__':
        try:
            from pytest_cov.embed import cleanup_on_sigterm
        except ImportError:
            pass
        else:
            cleanup_on_sigterm()

        with Pool(5) as p:
            print(p.map(f, [1, 2, 3]))

If you use ``multiprocessing.Process``
======================================

There's a similar issue when using ``Process`` objects. Don't forget to use ``.join()``:

.. code-block:: python

    from multiprocessing import Process

    def f(name):
        print('hello', name)

    if __name__ == '__main__':
        try:
            from pytest_cov.embed import cleanup_on_sigterm
        except ImportError:
            pass
        else:
            cleanup_on_sigterm()

        p = Process(target=f, args=('bob',))
        try:
            p.start()
        finally:
            p.join()  # necessary so that the Process exits before the test suite exits (thus coverage is collected)

.. _cleanup_on_sigterm:

If you have custom signal handling
==================================

**pytest-cov 2.6** has a rudimentary ``pytest_cov.embed.cleanup_on_sigterm`` you can use to register a SIGTERM handler
that flushes the coverage data.

**pytest-cov 2.7** adds a ``pytest_cov.embed.cleanup_on_signal`` function and changes the implementation to be more
robust: the handler will call the previous handler (if you had previously registered any), and is re-entrant (will
defer extra signals if delivered while the handler runs).

For example, if you reload on SIGHUP you should have something like this:

.. code-block:: python

    import os
    import signal

    def restart_service(signum, frame):
        os.exec( ... )  # or whatever your custom signal would do

    signal.signal(signal.SIGHUP, restart_service)

    try:
        from pytest_cov.embed import cleanup_on_signal
    except ImportError:
        pass
    else:
        cleanup_on_signal(signal.SIGHUP)

Note that both ``cleanup_on_signal`` and ``cleanup_on_sigterm`` will run the previous signal handler.

Alternatively you can do this:

.. code-block:: python

    import os
    import signal

    try:
        from pytest_cov.embed import cleanup
    except ImportError:
        cleanup = None

    def restart_service(signum, frame):
        if cleanup is not None:
            cleanup()

        os.exec( ... )  # or whatever your custom signal would do

    signal.signal(signal.SIGHUP, restart_service)

If you use Windows
==================

On Windows you can register a handler for SIGTERM, but it doesn't actually work. It will work if you run
``os.kill(os.getpid(), signal.SIGTERM)`` (send SIGTERM to the current process), but for most intents and purposes that's
completely useless.

Consequently, if you use multiprocessing you have no choice but to use the close/join pattern described
above. Using the context manager API or ``terminate`` won't work, as it relies on SIGTERM.

However, you can have a working handler for SIGBREAK (with some caveats):

.. code-block:: python

    import signal

    def shutdown(signum, frame):
        # your app's shutdown or whatever
        ...

    signal.signal(signal.SIGBREAK, shutdown)

    try:
        from pytest_cov.embed import cleanup_on_signal
    except ImportError:
        pass
    else:
        cleanup_on_signal(signal.SIGBREAK)

The `caveats <https://stefan.sofa-rockers.org/2013/08/15/handling-sub-process-hierarchies-python-linux-os-x/>`_ being
roughly:

* you need to deliver ``signal.CTRL_BREAK_EVENT``
* it gets delivered to the whole process group, and that can have unforeseen consequences
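Putting the caveats together, here is a hedged sketch of driving a child process with ``CTRL_BREAK_EVENT`` (stdlib names only; the child must be started in its own process group, and the snippet only does real work on Windows):

```python
import signal
import subprocess
import sys

# Child installs a SIGBREAK handler and waits; all names are stdlib.
CHILD = ("import signal, sys, time;"
         "signal.signal(signal.SIGBREAK, lambda *a: sys.exit(0));"
         "time.sleep(30)")

def stop_child_with_ctrl_break():
    if sys.platform != 'win32':
        return 'skipped: CTRL_BREAK_EVENT requires Windows'
    # CREATE_NEW_PROCESS_GROUP so the event hits only the child's group.
    child = subprocess.Popen(
        [sys.executable, '-c', CHILD],
        creationflags=subprocess.CREATE_NEW_PROCESS_GROUP,
    )
    child.send_signal(signal.CTRL_BREAK_EVENT)  # delivered to the whole group
    return 'child exited with %s' % child.wait()

print(stop_child_with_ctrl_break())
```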
51 changes: 37 additions & 14 deletions src/pytest_cov/embed.py
@@ -13,16 +13,19 @@
 that code coverage is being collected we activate coverage based on
 info passed via env vars.
 """
+import atexit
 import os
 import signal
 
-active_cov = None
+_active_cov = None
 
 
 def multiprocessing_start(_):
+    global _active_cov
     cov = init()
     if cov:
-        multiprocessing.util.Finalize(None, cleanup, args=(cov,), exitpriority=1000)
+        _active_cov = cov
+        multiprocessing.util.Finalize(None, cleanup, exitpriority=1000)
 
 
 try:
@@ -36,27 +39,29 @@ def multiprocessing_start(_):
 def init():
     # Only continue if ancestor process has set everything needed in
     # the env.
-    global active_cov
+    global _active_cov
 
     cov_source = os.environ.get('COV_CORE_SOURCE')
     cov_config = os.environ.get('COV_CORE_CONFIG')
     cov_datafile = os.environ.get('COV_CORE_DATAFILE')
     cov_branch = True if os.environ.get('COV_CORE_BRANCH') == 'enabled' else None
 
     if cov_datafile:
+        if _active_cov:
+            cleanup()
         # Import what we need to activate coverage.
         import coverage
 
         # Determine all source roots.
-        if cov_source == os.pathsep:
+        if cov_source in os.pathsep:
             cov_source = None
         else:
             cov_source = cov_source.split(os.pathsep)
         if cov_config == os.pathsep:
             cov_config = True
 
         # Activate coverage for this process.
-        cov = active_cov = coverage.Coverage(
+        cov = _active_cov = coverage.Coverage(
             source=cov_source,
             branch=cov_branch,
             data_suffix=True,
Expand All @@ -75,28 +80,44 @@ def _cleanup(cov):
if cov is not None:
cov.stop()
cov.save()
cov._auto_save = False # prevent autosaving from cov._atexit in case the interpreter lacks atexit.unregister
try:
atexit.unregister(cov._atexit)
except Exception:
pass


def cleanup(cov=None):
global active_cov
def cleanup():
global _active_cov
global _cleanup_in_progress
global _pending_signal

_cleanup(cov)
if active_cov is not cov:
_cleanup(active_cov)
active_cov = None
_cleanup_in_progress = True
_cleanup(_active_cov)
_active_cov = None
_cleanup_in_progress = False
if _pending_signal:
_signal_cleanup_handler(*_pending_signal)
_pending_signal = None


multiprocessing_finish = cleanup # in case someone dared to use this internal

_previous_handlers = {}
_pending_signal = None
_cleanup_in_progress = False


def _signal_cleanup_handler(signum, frame):
global _pending_signal
if _cleanup_in_progress:
_pending_signal = signum, frame
return
cleanup()
_previous_handler = _previous_handlers.get(signum)
if _previous_handler == signal.SIG_IGN:
return
elif _previous_handler:
elif _previous_handler and _previous_handler is not _signal_cleanup_handler:
_previous_handler(signum, frame)
elif signum == signal.SIGTERM:
os._exit(128 + signum)
@@ -105,8 +126,10 @@ def _signal_cleanup_handler(signum, frame):
 
 
 def cleanup_on_signal(signum):
-    _previous_handlers[signum] = signal.getsignal(signum)
-    signal.signal(signum, _signal_cleanup_handler)
+    previous = signal.getsignal(signum)
+    if previous is not _signal_cleanup_handler:
+        _previous_handlers[signum] = previous
+        signal.signal(signum, _signal_cleanup_handler)
 
 
 def cleanup_on_sigterm():