Python debugging, profiling and testing

It only looks simple when it’s finished



Why did it break? How is it slow?

Understanding python’s execution model

To understand how python code execution can go slow, or fail, it helps to understand the execution model. Philip Guo’s pythontutor.com deserves a shout-out here for demonstrating what is going on in basic python execution. However, Philip is the kind of person who gruffly deletes his articles from the internet with extreme prejudice, so take what he says with a grain of salt.

Reloading edited code

Changing code? Sometimes it’s complicated to work out how to load some big dependency tree of stuff. There is an autoreload extension which in principle reloads everything that has changed.

%load_ext autoreload
%autoreload 2

If I don’t trust it I can do it manually using deepreload. I can even monkey patch traditional reload to be deep.

import builtins
from IPython.lib import deepreload
builtins.reload = deepreload.reload

That didn’t work reliably for me. If you load them both at the same time, stuff gets weird. Don’t do that.

Also, this is incompatible with snakeviz. Errors ensue.

Debugging

Built-in debugger

Let’s say there is a line in my code that fails:

1/0

In vanilla python if I want to debug the last exception (the post-mortem debugger) I do:

import pdb; pdb.pm()

and if I want to drop into a debugger from some bit of code, I write:

import pdb; pdb.set_trace()

or in python 3.7+:

breakpoint()

This is a pretty good solution that works well and is available AFAICT everywhere. The main problem is that the recommended way of invoking the debugger keeps changing. Get ready for a LONG LIST OF ALTERNATIVES.
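One nicety of breakpoint() worth knowing: the PYTHONBREAKPOINT environment variable controls what it does, and it is consulted on every call. A quick sketch (the fragile function here is made up for illustration):

```python
import os

# PYTHONBREAKPOINT controls what breakpoint() does, checked per call:
#   unset        -> pdb.set_trace()
#   "0"          -> no-op; all breakpoints are silently skipped
#   "pkg.func"   -> that callable is used as the hook instead
os.environ["PYTHONBREAKPOINT"] = "0"

def fragile(x):
    breakpoint()  # skipped, because PYTHONBREAKPOINT=0
    return 1 / x

print(fragile(2))
```

Handy for leaving breakpoints in the code and switching them off wholesale in production runs.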

If I want a debugger with rich autocomplete, there is a nice one in ipython. Here’s a manual way to drop into the ipython debugger from code, according to Christoph Martin and David Hamann:

from IPython.core.debugger import Tracer; Tracer()()  # < 5.1
from IPython.core.debugger import set_trace; set_trace()  # >= v5.1

However, that’s not how you are supposed to do it in polite society. Persons of quality are rumoured to invoke their debuggers via so-called magics, e.g. the %debug magic to set a breakpoint.

%debug [--breakpoint filename:line_number_for_breakpoint]

Pish posh, who thinks in line numbers? set_trace wastes less human time by default.

An actual use I would make of this is to drop into post-mortem debugging; without an argument, %debug activates post-mortem mode. And if I want to drop automatically into the post-mortem debugger for every error:

%pdb on

1/0

Props to Josh Devlin for explaining this and some other handy tips, and also Gaël Varoquaux.

If that seems abstruse or verbose, ipdb exposes the enhanced debugger from ipython simply and explicitly:

pip install ipdb
import ipdb; ipdb.set_trace()

or:

import ipdb; ipdb.pm()

ipdb doesn’t work in jupyter, which has some incompatible interaction loop going on.

%debug does, but it's fairly horrible, because jupyter frontends are a mess.

Memory leaks

Python 3 has tracemalloc built in. This is a powerful, if bare-bones, python memory analyser. Mike Lin walks you through it. Benoit Bernard explains various options that run on older pythons, including, most usefully IMO, objgraph, which draws you an actual diagram of where the leaking things are. More fully featured, Pympler provides GUI-backed memory profiling, including the magically handy trick of tracking referrers using its refbrowser.
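A minimal tracemalloc sketch, to show the shape of the workflow (the hoarding loop stands in for your real leak):

```python
import tracemalloc

tracemalloc.start()

# simulate a "leak": keep growing a list that nothing ever releases
hoard = []
for _ in range(10_000):
    hoard.append(bytearray(100))

snapshot = tracemalloc.take_snapshot()
# group allocations by the source line that made them; the hoarding
# line above should dominate the top of this list
for stat in snapshot.statistics("lineno")[:3]:
    print(stat)
```

Comparing two snapshots with snapshot.compare_to is the usual way to find what grew between two points in time.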

Alternative debugging

Of course, this is python, so the built-in stuff is surrounded by a fizzing haze of short-lived re-implementations of everything, that exist for an instant then annihilate one another, like quantum virtual particles in the void.

Viztracer

VizTracer

… is a low-overhead logging/debugging/profiling tool that can trace and visualize your python code to help you intuitively understand your code and figure out the time consuming part of your code.

VizTracer can display every function executed and the corresponding entry/exit time from the beginning of the program to the end, which is helpful for programmers to catch sporatic (sic) performance issues.

Sure, sounds fine.

pysnooper

PySnooper claims:

instead of carefully crafting the right print lines, you just add one decorator line to the function you’re interested in. You’ll get a play-by-play log of your function, including which lines ran and when, and exactly when local variables were changed.

I always think I’d like to use this, but in practice I don’t.
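PySnooper is a third-party install, but the underlying trick is essentially sys.settrace. A toy hand-rolled version, nothing like production quality, looks like this:

```python
import sys

def snoop(func):
    """Decorator: log each line executed inside func, PySnooper-style."""
    def tracer(frame, event, arg):
        # only report line events inside the decorated function
        if event == "line" and frame.f_code is func.__code__:
            print(f"line {frame.f_lineno}: locals={frame.f_locals}")
        return tracer  # keep tracing nested frames

    def wrapper(*args, **kwargs):
        sys.settrace(tracer)
        try:
            return func(*args, **kwargs)
        finally:
            sys.settrace(None)  # always switch tracing off again
    return wrapper

@snoop
def demo(x):
    y = x + 1
    return y * 2

print(demo(3))
```

The real thing handles recursion, threads, output redirection and variable diffing, which is why one installs it rather than writing this.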

Pyrasite

pyrasite injects code into running python processes, which enables more exotic debuggery: realtime object mutation and suchlike, and of course memory and performance profiling.

Yet more

Gaël recommended some extra debuggers. Jeez, OK. But wait, there are more.

  • My brother Andy likes the PyCharm/IntelliJ IDE’s built-in python debugger.
  • There are many other debuggers.
  • That’s too many debuggers
  • Realistically I won’t use any of them, because the built-in one is OK, and already hard enough to keep in my head without putting more points of failure in the mix.
  • Stop making debuggers

Profiling

Maybe it’s not crashing, but taking too long? You want a profiler. There are, of course, lots of profilers, and they dwell in a city built upon the remains of a previous city, inhabited by other profilers lost to time.

First, we pause to note that the debugging tools PySnooper and VizTracer both have profiling features. Next, profiling-specific alternatives:

Built-in profiler

Profile functions using cProfile:

import cProfile as profile
# run a statement under the profiler, in the current namespace
# (`predded` stands in for whatever object you are inspecting)
profile.runctx('print(predded.shape)', globals(), locals())
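A slightly fuller sketch of the usual cProfile-plus-pstats workflow, collecting stats and sorting by cumulative time (slow_sum is a made-up stand-in for your hot code):

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # deliberately naive accumulation, to give the profiler a target
    total = 0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_sum(100_000)
profiler.disable()

# render the five most expensive entries, sorted by cumulative time
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
print(buf.getvalue())
```

Dumping the stats to a file with profiler.dump_stats("out.prof") is what lets visualisers like snakeviz pick them up later.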

cProfile is not so hip any longer. There are some other, more fashionable ones.

py-spy

py-spy

[…] lets you visualize what your Python program is spending time on without restarting the program or modifying the code in any way. Py-Spy is extremely low overhead: it is written in Rust for speed and doesn’t run in the same process as the profiled Python program, nor does it interrupt the running program in any way. This means Py-Spy is safe to use against production Python code. […]

This project aims to let you profile and debug any running Python program, even if the program is serving production traffic. […]

Py-spy works by directly reading the memory of the python program using the process_vm_readv system call on Linux, the vm_read call on macOS or the ReadProcessMemory call on Windows.

Figuring out the call stack of the Python program is done by looking at the global PyInterpreterState variable to get all the Python threads running in the interpreter, and then iterating over each PyFrameObject in each thread to get the call stack.
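You can get a crude in-process taste of that stack-walking idea from within python itself via sys._current_frames(), no Rust required (the sleeping worker thread here is made up for illustration):

```python
import sys
import threading
import time
import traceback

def worker():
    time.sleep(0.5)

t = threading.Thread(target=worker, name="sleeper")
t.start()
time.sleep(0.1)  # give the worker a moment to reach its sleep

# walk every thread's current frame and format its call stack,
# roughly what py-spy reconstructs from outside the process
stacks = {
    thread_id: "".join(traceback.format_stack(frame))
    for thread_id, frame in sys._current_frames().items()
}
for thread_id, stack in stacks.items():
    print(f"--- thread {thread_id} ---\n{stack}")

t.join()
```

The crucial difference, of course, is that py-spy does this from a separate process without pausing yours, which is the whole point.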

Native ipython can run the profiler magically:

%%prun -D somefile.prof
import glob, hashlib
files = glob.glob('*.txt')
for file in files:
  with open(file) as f:
    print(hashlib.md5(f.read().encode('utf-8')).hexdigest())

snakeviz includes a handy magic to automatically save stats and launch the profiler. (Gotcha: you have to have the snakeviz CLI already on the path when you launch ipython.)

%load_ext snakeviz
%%snakeviz
import glob, hashlib
files = glob.glob('*.txt')
for file in files:
  with open(file) as f:
    print(hashlib.md5(f.read().encode('utf-8')).hexdigest())

This is incompatible with autoreload and gives weird errors if you run them both in the same session.

Visualising profiles

Testing

You will be amazed to learn that there are a lot of frameworks. No, really. I know, right? The most common seem to be unittest, pytest (née py.test) and nose.

FWIW I’m no fan of nose; my experience of it was that I spent a lot of time debugging weird failures getting lost in its attempts to automagically help me. This might be because I didn’t deeply understand what I was doing, but the other frameworks didn’t require me to understand so deeply the complexities of their attempts to simplify my life.
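For reference, the built-in unittest flavour of hello-world, run programmatically so it also works inside a notebook:

```python
import unittest

class TestArithmetic(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

    def test_division_by_zero_raises(self):
        with self.assertRaises(ZeroDivisionError):
            1 / 0

# run the suite explicitly rather than via unittest.main(),
# which would call sys.exit on us
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestArithmetic)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("ok" if result.wasSuccessful() else "failed")
```

From the command line, `python -m unittest discover` finds and runs tests matching `test*.py`; pytest will happily run unittest-style tests too.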

Reference: Useful debugger commands

! statement
Execute the (one-line) statement in the context of the current stack frame, even if it mirrors the name of a debugger command. This is the most useful command, because the debugger parser is greedy and will interpret anything it conceivably can as a debugger command instead of a python command, which is confusing and misleading. So preface everything with ! to be safe.
h(elp) [command]
Guess
w(here)
Print your location in current stack
d(own) [count]/up [count]
Move the current frame count (default one) levels down (to a newer frame) or up (to an older frame) in the stack trace.
b(reak) [([filename:]lineno | function) [, condition]]
The one that is tedious to do manually. Without argument, list all breaks and their metadata.
tbreak [([filename:]lineno | function) [, condition]]
Temporary breakpoint, which is removed automatically when it is first hit.
cl(ear) [filename:lineno | bpnumber [bpnumber …]]
Clear specific or all breakpoints
disable [bpnumber [bpnumber …]]/enable [bpnumber [bpnumber …]]
disable is mostly the same as clear, but you can re-enable
ignore bpnumber [count]
ignore a breakpoint a specified number of times
condition bpnumber [condition]
Set a new condition for the breakpoint
commands [bpnumber]
Specify a list of commands for breakpoint number bpnumber. The commands themselves appear on the following lines. Type end to terminate the command list.
s(tep)
Execute the next line, even if that is inside an invoked function.
n(ext)
Execute the next line in this function.
unt(il) [lineno]
Continue to line lineno, or the next line with a higher number than the current one.
r(eturn)
Continue execution until the current function returns.
c(ont(inue))
Continue execution, only stop when a breakpoint is encountered.
j(ump) lineno
Set the next line that will be executed. Only available in the bottom-most frame. It is not possible to jump into weird places like the middle of a for loop.
l(ist) [first[, last]]
List source code for the current file.
ll | longlist
List all source code for the current function or frame.
a(rgs)
Print the argument list of the current function.
p expression
Evaluate the expression in the current context and print its value.
pp expression
Like the p command, except the value of the expression is pretty-printed using the pprint module.
whatis expression
Print the type of the expression.
source expression
Try to get source code for the given object and display it.
display [expression]/undisplay [expression]
Display the value of the expression if it changed, each time execution stops in the current frame.
interact
Start an interactive interpreter (using the code module) whose global namespace contains all the (global and local) names found in the current scope.
alias [name [command]]/unalias name
Create an alias called name that executes command.
q(uit)
Pack up and go home

The alias one needs another look, right? How even does it…

As an example, here are two useful aliases from the manual, for the .pdbrc file:

# Print instance variables (usage `pi classInst`)
alias pi for k in %1.__dict__.keys(): print("%1.",k,"=",%1.__dict__[k])

# Print instance variables in self
alias ps pi self
