Python debugging, profiling and testing
It only looks simple when it’s finished
April 28, 2019 — March 20, 2024
Why did it break? How is it slow?
1 Understanding python’s execution model
To understand how python code execution can go slow, or fail, it helps to understand the execution model. Philip Guo’s pythontutor.com deserves a shout-out here for demonstrating what is going on in basic python execution. However, Philip is the kind of person who gruffly deletes his articles from the internet with extreme prejudice, which is behaviour indistinguishable from that of a crank, so take what he says with a grain of salt.
2 Reloading edited code
Changing code? Sometimes it’s complicated to work out how to load some big dependency tree of stuff. There is an autoreload extension which in principle reloads everything that has changed recently:
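The usual incantation, at the top of an IPython or Jupyter session:

```
%load_ext autoreload
%autoreload 2
```

`%autoreload 2` reloads all modules (except any excluded via `%aimport`) before executing each line.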
It usually works, but I have managed to break it with some edge cases. If I don’t trust the reload, I can force a reload manually using deepreload. I can even monkey-patch the traditional reload to be deep, I read somewhere:
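A sketch of that monkey-patch, guarded so it degrades to the stdlib shallow reload when IPython is absent:

```python
import builtins
import importlib

try:
    from IPython.lib import deepreload
    builtins.reload = deepreload.reload  # reload() now recurses into imported submodules
except ImportError:
    builtins.reload = importlib.reload   # shallow stdlib reload as a fallback
```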
That didn’t work reliably for me. If I load them both at the same time, stuff gets weird. Don’t do that.
Also, this is incompatible with snakeviz profiling. Errors ensue.
3 Debugging
3.1 Built-in debugger
Let’s say there is a line in my code that fails:
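For concreteness, a hypothetical offender:

```python
def average(xs):
    # Fails when xs is empty: len(xs) == 0, so this divides by zero
    return sum(xs) / len(xs)
```

Calling `average([])` raises `ZeroDivisionError`.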
In vanilla python if I want to debug the last exception (the post-mortem debugger) I do:
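A sketch; the actual `pdb.pm()` call is left commented because it starts an interactive prompt:

```python
import pdb
import sys

try:
    1 / 0
except ZeroDivisionError:
    tb = sys.exc_info()[2]  # the traceback pdb would inspect

# At the interactive prompt, after an uncaught exception:
# pdb.pm()              # post-mortem on the last traceback
# pdb.post_mortem(tb)   # the programmatic equivalent for a saved traceback
```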
and if I want to drop into a debugger from some bit of code, I write:
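For instance (function name hypothetical):

```python
import pdb

def shady_function():
    x = 41
    pdb.set_trace()  # execution pauses here; inspect x, then type `c` to continue
    return x + 1
```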
or in python 3.7+:
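The same hypothetical function using the newer built-in:

```python
def shady_function():
    x = 41
    breakpoint()  # PEP 553: dispatches to pdb.set_trace() via sys.breakpointhook
    return x + 1
```

Setting the environment variable `PYTHONBREAKPOINT=0` turns every `breakpoint()` into a no-op, which is handy in production.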
This is a good solution and is available AFAICT everywhere. The main problem is that they constantly change the recommended way of invoking the debugger. Get ready for a LONG LIST OF ALTERNATIVES.
If I want a debugger with rich autocomplete, there is a nice one in ipython. Here’s a manual way to drop into the ipython debugger from code, according to Christoph Martin and David Hamann:
from IPython.core.debugger import Tracer; Tracer()()      # IPython < 5.1
from IPython.core.debugger import set_trace; set_trace()  # IPython >= 5.1
However, that’s not how we are supposed to do it in polite society. Persons of quality are rumoured to invoke their debuggers via so-called magics, e.g. the %debug magic to set a breakpoint at a certain line number:
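Per the IPython docs, `%debug` can take a breakpoint location and a statement to run under the debugger; the filename and function here are hypothetical:

```
%debug --breakpoint myscript.py:42 run_stuff()
```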
Pish posh, who thinks in line numbers? `set_trace` wastes less human time by default.
An actual use I would make of `%debug` is to drop into post-mortem debugging: without an argument, `%debug` activates post-mortem mode. And if I want to drop automatically into the post-mortem debugger for every error:
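That is what the `%pdb` magic does:

```
%pdb on
```

`%pdb off` turns it back off, and a bare `%pdb` toggles.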
Props to Josh Devlin for explaining this and some other handy tips, and also Gaël Varoquaux.
If that seems abstruse or verbose, ipdb exposes the enhanced debugger from ipython simply and explicitly:
or:
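Both spellings, sketched with a fallback in case ipdb is not installed:

```python
# Assumes `pip install ipdb`; falls back to the stdlib debugger otherwise.
try:
    import ipdb as debugger
except ImportError:
    import pdb as debugger

# debugger.set_trace() drops into the best available debugger at this line.
```

Or run a whole script under it with `python -m ipdb myscript.py`.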
ipdb doesn’t work in jupyter, whose interaction loop is incompatible. %debug does, but it’s fairly horrible, because jupyter frontends are a mess and various things break; e.g. if I try to execute non-debugger code while in the debugger, the entire notebook sometimes freezes unrecoverably. This is very easy to do, because the debug console is small and easy to miss when trying to click on it. Any time I find myself needing to debug debugging in jupyter I am briefly filled with despair; then I remember that there is no overwhelming moral imperative for me to use jupyter for anything, and I can switch to ipython or VS Code.
4 Alternative debugging UIs
Of course, this is python, so the built-in stuff is wreathed in a fizzing haze of short-lived re-implementations that exist probabilistically for an instant then annihilate, like virtual particles in the void. Trillions of debuggers were potentially invented then abandoned on github in the time it took you to read this sentence; some radiate outwards like Hawking radiation, only to recede from you in the expanding space of version dependency.
4.1 VS Code debugger
4.2 pudb
Pudb seems to be very close to the native debugger, but with console enhancements:
- Syntax-highlighted source, the stack, breakpoints and variables are all visible at once and continuously updated. This helps you be more aware of what’s going on in your program. Variable displays can be expanded, collapsed and have various customization options.
- Simple, keyboard-based navigation using single keystrokes makes debugging quick and easy. PuDB understands cursor-keys and Vi shortcuts for navigation. Other keys are inspired by the corresponding pdb commands.
- Drop to a Python shell in the current environment by pressing “!”. Or open a command prompt alongside the source-code via “Ctrl-X”.
- Ability to control the debugger from a separate terminal.
4.3 PyCharm
My brother Andy likes the PyCharm/IntelliJ IDE’s built-in python debugger. I have not used it.
4.4 Viztracer
… is a low-overhead logging/debugging/profiling tool that can trace and visualize your python code to help you intuitively understand your code and figure out the time consuming part of your code.
VizTracer can display every function executed and the corresponding entry/exit time from the beginning of the program to the end, which is helpful for programmers to catch sporatic (sic) performance issues.
Sure, sounds fine.
4.5 pysnooper
PySnooper claims:
instead of carefully crafting the right print lines, you just add one decorator line to the function you’re interested in. You’ll get a play-by-play log of your function, including which lines ran and when, and exactly when local variables were changed.
I always think I’d like to use this, but in practice I don’t.
4.6 Pyrasite
pyrasite injects code into running python processes, which enables more exotic debuggery: realtime object mutation and suchlike, and of course memory and performance profiling.
4.7 Yet more
Gaël recommended some extra debuggers:
- aiomonitor is REPL-injection for async python
- pudb, a curses-style debugger, is popular.
- The trepan family of debuggers: trepan3k (python 3), trepan (python 2), ipython-trepan (theoretically ipython, but it looks unmaintained). Docs live here.
Jeez, OK. But wait there are more.
- There are many other debuggers.
- That’s too many debuggers
- Realistically I won’t use any of them, because the inbuilt one is OK, and already hard enough to keep in my head without putting more points of failure in the mix
- Stop making debuggers
5 Memory leaks
Python 3 has tracemalloc built in. This is a powerful python memory analyser, although bare-bones. Mike Lin walks us through it. Benoit Bernard explains various options that run on older pythons, including, most usefully IMO, objgraph, which draws us an actual diagram of where the leaking things are. More full-featured, Pympler provides GUI-backed memory profiling, including the magically handy trick of tracking referrers using its refbrowser.
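A minimal tracemalloc sketch: take a snapshot and ask which source lines allocated the most:

```python
import tracemalloc

tracemalloc.start()

hoard = [bytes(10_000) for _ in range(100)]  # deliberately allocate roughly 1 MB

snapshot = tracemalloc.take_snapshot()
top_stats = snapshot.statistics("lineno")    # aggregate allocations per source line
for stat in top_stats[:3]:
    print(stat)                              # biggest allocation sites first
```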
5.1 Memray
A memory specialist, bloomberg/memray:
Memray is a memory profiler for Python. It can track memory allocations in Python code, in native extension modules, and in the Python interpreter itself. It can generate several different types of reports to help you analyze the captured memory usage data. While commonly used as a CLI tool, it can also be used as a library to perform more fine-grained profiling tasks.
Notable features:
- 🕵️‍♀️ Traces every function call so it can accurately represent the call stack, unlike sampling profilers.
- ℭ Also handles native calls in C/C++ libraries so the entire call stack is present in the results.
- 🏎 Blazing fast! Profiling slows the application only slightly. Tracking native code is somewhat slower, but this can be enabled or disabled on demand.
- 📈 It can generate various reports about the collected memory usage data, like flame graphs.
- 🧵 Works with Python threads.
- 👽🧵 Works with native-threads (e.g. C++ threads in C extensions).
Memray can help with the following problems:
- Analyze allocations in applications to help discover the cause of high memory usage.
- Find memory leaks.
- Find hotspots in code that cause a lot of allocations.
Note that Memray only works on Linux and macOS, and cannot be installed on other platforms.
5.2 Scalene
See below.
6 Profiling
Maybe it’s not crashing, but simply taking too long? Then I want a profiler. There are, of course, lots of profilers, and they each dwell in a city built upon the remains of a previous city, inhabited by other profilers lost to time. Searching for a good profiler is not so simple, for we encounter profilers from various archaeological strata as we excavate the internet, and each was acclaimed in its day.
First, we pause to note that debugging tools pysnooper and viztracer both have profiling features. Also we might want to profile various things, such as code speed, code memory use and the trade-off between speed and memory. All the below options have different micro-specialties across this area. Next, profiling-specific alternatives:
6.1 Built-in profiler
Profile functions using cProfile:
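A self-contained sketch: profile a (hypothetical) function, then print the five most expensive calls:

```python
import cProfile
import io
import pstats

def work():
    # Something worth measuring
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
work()
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```

For a whole script, `python -m cProfile -o somefile.prof myscript.py` does the same job from the command line.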
cProfile is not so hip any longer. There are other, more fashionable options.
6.2 Scalene
… is a high-performance CPU, GPU and memory profiler for Python that does a number of things that other Python profilers do not and cannot do. It runs orders of magnitude faster than many other profilers while delivering far more detailed information. It is also the first profiler ever to incorporate AI-powered proposed optimizations.
Includes web-gui and VS code integration.
Maybe the freshest thing here? Colleagues of mine love it but I have not used it.
6.3 py-spy
[…] lets you visualize what your Python program is spending time on without restarting the program or modifying the code in any way. Py-Spy is extremely low overhead: it is written in Rust for speed and doesn’t run in the same process as the profiled Python program, nor does it interrupt the running program in any way. This means Py-Spy is safe to use against production Python code. […]
This project aims to let you profile and debug any running Python program, even if the program is serving production traffic. […]
Py-spy works by directly reading the memory of the python program using the `process_vm_readv` system call on Linux, the `vm_read` call on macOS or the `ReadProcessMemory` call on Windows. Figuring out the call stack of the Python program is done by looking at the global PyInterpreterState variable to get all the Python threads running in the interpreter, and then iterating over each PyFrameObject in each thread to get the call stack.
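Typical invocations, with a hypothetical PID and script name (not testable here since it needs a live target process):

```shell
pip install py-spy
py-spy top --pid 12345                               # live, top-like view of a running process
py-spy dump --pid 12345                              # print the current call stacks once
py-spy record -o profile.svg -- python myscript.py   # sample a script into a flame graph
```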
Native ipython can run profiler magically:
%%prun -D somefile.prof
files = glob.glob('*.txt')
for file in files:
    with open(file) as f:
        print(hashlib.md5(f.read().encode('utf-8')).hexdigest())
Great worked example — Making Python 100x faster with less than 100 lines of Rust:
Python has a built-in Profiler (`cProfile`), but in this case it’s not really the right tool for the job:

- It’ll introduce a lot of overhead to all the Python code, and none for native code, so our results might be biased.
- We won’t be able to see into native frames, meaning we aren’t going to be able to see into our Rust code.

We are going to use `py-spy` (GitHub). `py-spy` is a sampling profiler which can see into native frames. They also mercifully publish pre-built wheels to pypi, so we can just `pip install py-spy` and get to work.
6.4 Score-P
HPC-friendly profiling can be provided by scorep, a python binding for the popular Score-P performance measurement infrastructure. Gocht, Schöne, and Frenzel (2021):
In this paper, we present the Python bindings for Score-P, which make it easy for users to trace and profile their Python applications, including the usage of (multi-threaded) libraries, MPI parallelism and accelerator usage.
6.5 Austin
I do not know much about this.
6.6 Visualising profiles
- snakeviz is a browser-based system that might be OK for viewing the output of cProfile profiles.
- For ftrace profiles:
  - Chrome’s catapult system can view traces: `chrome://tracing/` or `brave://tracing/` in the browser.
  - They have a new UI called perfetto.
- Alternatively, convert the output to cachegrind format for visualisation in the many cachegrind tools.
- py-spy includes built-in flame graphs.
- runsnakerun, the original python profiling visualizer, is now expired.
SnakeViz includes a handy magic to automatically save stats and launch the profiler. (Gotcha: I have to have the snakeviz CLI already on the path when I launch ipython.)
%load_ext snakeviz

%%snakeviz
files = glob.glob('*.txt')
for file in files:
    with open(file) as f:
        print(hashlib.md5(f.read().encode('utf-8')).hexdigest())
This is incompatible with autoreload, and gives weird errors if I run them both in the same session.
7 Testing
You may not be amazed to learn that there are many frameworks. The most common seem to be unittest, py.test and nose.
- More robust tests.
- Jacob Kaplan-Moss likes pytest, and he’s good; let’s copy him.
FWIW I’m no fan of nose; my experience of it was that I spent a lot of time debugging weird failures, getting lost in its attempts to automagically help me. This might be because I didn’t deeply understand what I was doing, but the other frameworks didn’t require me to understand the complexities of their attempts to simplify my life so deeply.
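A minimal pytest-style module for flavour (file name and functions hypothetical); pytest discovers any function named `test_*`, and plain `assert`s are all you need:

```python
# test_average.py
def average(xs):
    return sum(xs) / len(xs)

def test_average():
    assert average([1, 2, 3]) == 2

def test_average_of_floats():
    assert average([1.0, 2.0]) == 1.5
```

Run with `pytest test_average.py`; no boilerplate test classes required, and failing asserts are rewritten into informative messages.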
8 Typing
9 Reference: Useful step debugger commands
For the in-built step debugger the following commands are especially useful:
`! statement`
- Execute the (one-line) statement in the context of the current stack frame, even if it mirrors the name of a debugger command. This is the most useful command, because the debugger parser is horrible and will always interpret anything it conceivably can as a debugger command instead of a python command, which is confusing and misleading. So preface everything with `!` to be safe.

`h(elp) [command]`
- Guess.

`w(here)`
- Print your location in the current stack.

`d(own) [count]` / `u(p) [count]`
- Move the current frame count (default one) levels down/up in the stack trace (to a newer/older frame).

`b(reak) [([filename:]lineno | function) [, condition]]`
- The one that is tedious to do manually. Without an argument, list all breakpoints and their metadata.

`tbreak [([filename:]lineno | function) [, condition]]`
- Temporary breakpoint, which is removed automatically when it is first hit.

`cl(ear) [filename:lineno | bpnumber [bpnumber …]]`
- Clear specific or all breakpoints.

`disable [bpnumber [bpnumber …]]` / `enable [bpnumber [bpnumber …]]`
- `disable` is mostly the same as `clear`, but you can re-`enable`.

`ignore bpnumber [count]`
- Ignore a breakpoint a specified number of times.

`condition bpnumber [condition]`
- Set a new condition for the breakpoint.

`commands [bpnumber]`
- Specify a list of commands for breakpoint number `bpnumber`. The commands themselves appear on the following lines. Type `end` to terminate the command list.

`s(tep)`
- Execute the next line, even if that is inside an invoked function.

`n(ext)`
- Execute the next line in this function.

`unt(il) [lineno]`
- Continue to line `lineno`, or the next line with a higher number than the current one.

`r(eturn)`
- Continue execution until the current function returns.

`c(ont(inue))`
- Continue execution; only stop when a breakpoint is encountered.

`j(ump) lineno`
- Set the next line that will be executed. Only available in the bottom-most frame. It is not possible to jump into weird places like the middle of a for loop.

`l(ist) [first[, last]]`
- List source code for the current file.

`ll | longlist`
- List all source code for the current function or frame.

`a(rgs)`
- Print the argument list of the current function.

`p expression`
- Evaluate the expression in the current context and print its value.

`pp expression`
- Like the `p` command, except the value of the expression is pretty-printed using the pprint module.

`whatis expression`
- Print the type of the expression.

`source expression`
- Try to get source code for the given object and display it.

`display [expression]` / `undisplay [expression]`
- Display the value of the expression if it changed, each time execution stops in the current frame.

`interact`
- Start an interactive interpreter (using the code module) whose global namespace contains all the (global and local) names found in the current scope.

`alias [name [command]]` / `unalias name`
- Create an alias called name that executes command.

`q(uit)`
- Pack up and go home.
The `alias` one needs another look, right? How even does it… As an example, here are two useful aliases from the manual, for the `.pdbrc` file:
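These are the two aliases given in the pdb documentation:

```
# Print instance variables (usage "pi classInst")
alias pi for k in %1.__dict__.keys(): print(f"%1.{k} = {%1.__dict__[k]}")

# Print instance variables in self
alias ps pi self
```

`%1` is the first argument to the alias, so `pi some_object` prints every attribute of `some_object`, and `ps` does the same for `self`.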