Open Source Challenge: Open Your Source, More

Posted January 28, 2011 by rhettinger
Categories: Documentation, Open Source, Python

We all agree, open source is a good thing. It is posited that many eyeballs make all bugs shallow. With crowd sourcing, your users become contributors. And even seasoned programmers benefit from time spent reading code. Further, code can serve as an adjunct to documentation. If you really want to know what software is doing, read the source.

Once upon a time in Python land, it seemed everyone had the standard library source at their fingertips. Newbies would read the source to learn from programming luminaries like Tim Peters and Fredrik Lundh. Respondents on the famously friendly Python newsgroup would answer questions by referring to the source, and the phrase “Use the Source Luke” was a familiar refrain.

Times have changed. While core-developers have the source at their fingertips, other developers seem to have lost touch with the source. People get to know code in their own projects, but rarely peer into other projects.

I’ve worked in several Python shops, have done consulting work with first-rate programmers, and discussed programming at meet-ups and conventions. I’m disappointed to report that it is becoming increasingly rare to look at another project’s source code, even among top-notch programmers.

At first, I wanted to blame the practice of tool mashups. If you’re furiously grafting a dozen tools together, who has time to study them in depth? But then, that’s not true; when the docs don’t answer your questions, it’s not a waste of time to examine the relevant source code, provided you know exactly where to look.

Why don’t people look for answers in the code? I place the blame on huge code repositories and on installers that move the code out-of-sight and out-of-mind.

A fresh checkout of Python 3.2 has 216 sub-directories with almost 478,000 lines of Python and 398,000 lines of C.¹ I’ve worked with that code extensively for almost a decade, yet every day I find something new. Unless you know where to look, searching the source for an answer can be a time consuming intellectual investment.

Repository labyrinths are only half the problem. The rest of the blame lies with installers. They all treat human-readable scripts as if they were binaries and tuck the code away in a dark corner. On my Mac, the source for the heapq module is stored at /Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/heapq.py. Likewise, the pip installer puts the source for Mercurial in /Library/Python/2.6/site-packages/hgext. News flash: no one is going to view this code.
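
If you do want to locate a buried module, the interpreter can tell you directly. A minimal sketch using the standard inspect module:

```python
import inspect
import heapq

# Ask the interpreter where a module's source lives, rather than
# spelunking through site-packages or framework directories by hand.
print(inspect.getsourcefile(heapq))   # full path to heapq.py
print(inspect.getsource(heapq)[:60])  # the first few lines of the source
```

That only helps people who already know to ask, of course, which is the point of the challenge below.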

Large code bases and obscure installation paths are a fact of life, but there is something we can do to reconnect our users with the source code. Starting with Python 3.2, I’ve gone through the module documentation and added direct links to the relevant source code.

For example, in the heapq documentation, the first line under the heading now provides a source link: “Source code: Lib/heapq.py”.

This idea is not new. The heap package documentation for the Go Programming Language also starts with a source link.

There you have it — direct, convenient and relevant navigation into a large code base. Perhaps users will start reading code again. Maybe you’ll receive better bug reports, perhaps accompanied by code patches and documentation improvements. Maybe someone will read your code and become a better programmer. Maybe you’ll be inspired to write better code.

My challenge to you open-sourcers is to open your source, more. Provide direct links from the docs. It is not sufficient to expose your repository as one big ball of mud. Make it easy to go right to the heart of an issue.


“Publishing programs is a healthy habit. Every program I’ve written knowing it was to be published was improved by that knowledge. I think more clearly when I’m writing for an audience, and find it helps to pretend there always is one.” — Rob Pike (quoted from Programming Pearls)

“I had known for a long time that the programs I construct for publication in a book, or the programs that I construct in front of a class, have tended to be comparatively free of errors, because I am forced to clarify my thoughts as I do the programming.” — Donald Knuth (quoted from Literate Programming)


My challenge to you and hope for the future:

  • A universal practice of docs linking back to source code
  • Users getting reacquainted with the sources of their tools
  • Programmers writing code as if they expected it to be read

Footnote:

[1] Sizing the Python 3.2 repository:

$ svn co http://svn.python.org/projects/python/branches/py3k tmp
$ cd tmp
$ find . -type 'd' | grep -v 'svn' | wc -l
     216
$ wc -l $(find . -name "*.py" | egrep -v 'svn|build') | tail -1
  477868 total
$ wc -l $(find . -name "*.[ch]" | egrep -v 'svn|build') | tail -1
  398407 total

Python’s super() considered super!

Posted May 26, 2011 by rhettinger
Categories: Algorithms, Documentation, Inheritance, Open Source, Python

If you aren’t wowed by Python’s super() builtin, chances are you don’t really know what it is capable of doing or how to use it effectively.

Much has been written about super() and much of that writing has been a failure. This article seeks to improve on the situation by:

  • providing practical use cases
  • giving a clear mental model of how it works
  • showing the tradecraft for getting it to work every time
  • giving concrete advice for building classes that use super()
  • favoring real examples over abstract ABCD diamond diagrams

The examples for this post are available in both Python 2 syntax and Python 3 syntax.

Using Python 3 syntax, let’s start with a basic use case, a subclass for extending a method from one of the builtin classes:

import logging

class LoggingDict(dict):
    def __setitem__(self, key, value):
        logging.info('Setting %r to %r' % (key, value))
        super().__setitem__(key, value)

This class has all the same capabilities as its parent, dict, but it extends the __setitem__ method to make log entries whenever a key is updated. After making a log entry, the method uses super() to delegate the work for actually updating the dictionary with the key/value pair.

Before super() was introduced, we would have hardwired the call with dict.__setitem__(self, key, value). However, super() is better because it is a computed indirect reference.

One benefit of indirection is that we don’t have to specify the delegate class by name. If you edit the source code to switch the base class to some other mapping, the super() reference will automatically follow. You have a single source of truth:

class LoggingDict(SomeOtherMapping):            # new base class
    def __setitem__(self, key, value):
        logging.info('Setting %r to %r' % (key, value))
        super().__setitem__(key, value)         # no change needed

In addition to isolating changes, there is another major benefit to computed indirection, one that may not be familiar to people coming from static languages. Since the indirection is computed at runtime, we have the freedom to influence the calculation so that the indirection will point to some other class.

The calculation depends on both the class where super is called and on the instance’s tree of ancestors. The first component, the class where super is called, is determined by the source code for that class. In our example, super() is called in the LoggingDict.__setitem__ method. That component is fixed. The second and more interesting component is variable (we can create new subclasses with a rich tree of ancestors).

Let’s use this to our advantage to construct a logging ordered dictionary without modifying our existing classes:

import collections

class LoggingOD(LoggingDict, collections.OrderedDict):
    pass

The ancestor tree for our new class is: LoggingOD, LoggingDict, OrderedDict, dict, object. For our purposes, the important result is that OrderedDict was inserted after LoggingDict and before dict! This means that the super() call in LoggingDict.__setitem__ now dispatches the key/value update to OrderedDict instead of dict.

Think about that for a moment. We did not alter the source code for LoggingDict. Instead we built a subclass whose only logic is to compose two existing classes and control their search order.
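
You can verify the rerouting directly. In this sketch (a standalone copy of the classes above), move_to_end succeeds only because OrderedDict’s __setitem__ handled the stores; a hardwired dict.__setitem__ call would have bypassed OrderedDict’s bookkeeping and left its internal linked list empty:

```python
import collections
import logging

class LoggingDict(dict):
    def __setitem__(self, key, value):
        logging.info('Setting %r to %r' % (key, value))
        super().__setitem__(key, value)

class LoggingOD(LoggingDict, collections.OrderedDict):
    pass

d = LoggingOD()
d['b'] = 1
d['a'] = 2
d.move_to_end('b')   # works only because OrderedDict did the stores
print(list(d))       # ['a', 'b']
```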

__________________________________________________________________________________________________________________

Search Order

What I’ve been calling the search order or ancestor tree is officially known as the Method Resolution Order or MRO. It’s easy to view the MRO by printing the __mro__ attribute:

>>> from pprint import pprint
>>> pprint(LoggingOD.__mro__)
(<class '__main__.LoggingOD'>,
 <class '__main__.LoggingDict'>,
 <class 'collections.OrderedDict'>,
 <class 'dict'>,
 <class 'object'>)

If our goal is to create a subclass with an MRO to our liking, we need to know how it is calculated. The basics are simple. The sequence includes the class, its base classes, and the base classes of those bases, and so on until reaching object, which is the root class of all classes. The sequence is ordered so that a class always appears before its parents, and if there are multiple parents, they keep the same order as the tuple of base classes.

The MRO shown above is the one order that follows from those constraints:

  • LoggingOD precedes its parents, LoggingDict and OrderedDict
  • LoggingDict precedes OrderedDict because LoggingOD.__bases__ is (LoggingDict, OrderedDict)
  • LoggingDict precedes its parent which is dict
  • OrderedDict precedes its parent which is dict
  • dict precedes its parent which is object

The process of solving those constraints is known as linearization. There are a number of good papers on the subject, but to create subclasses with an MRO to our liking, we only need to know the two constraints: children precede their parents and the order of appearance in __bases__ is respected.
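
When those two constraints conflict, there is no valid linearization and Python refuses to build the class. A minimal sketch:

```python
class A: pass
class B(A): pass

try:
    # Children must precede their parents, but the bases (A, B) ask for
    # the parent A to come before its child B -- unsatisfiable constraints.
    class C(A, B):
        pass
except TypeError as e:
    print('Cannot create a consistent MRO:', e)
```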

__________________________________________________________________________________________________________________

Practical Advice

super() is in the business of delegating method calls to some class in the instance’s ancestor tree. For reorderable method calls to work, the classes need to be designed cooperatively. This presents three easily solved practical issues:

  • the method being called by super() needs to exist
  • the caller and callee need to have a matching argument signature
  • and every occurrence of the method needs to use super()

1) Let’s first look at strategies for getting the caller’s arguments to match the signature of the called method. This is a little more challenging than traditional method calls where the callee is known in advance. With super(), the callee is not known at the time a class is written (because a subclass written later may introduce new classes into the MRO).

One approach is to stick with a fixed signature using positional arguments. This works well with methods like __setitem__ which have a fixed signature of two arguments, a key and a value. This technique is shown in the LoggingDict example where __setitem__ has the same signature in both LoggingDict and dict.

A more flexible approach is to have every method in the ancestor tree cooperatively designed to accept keyword arguments and a keyword-arguments dictionary, to remove any arguments that it needs, and to forward the remaining arguments using **kwds, eventually leaving the dictionary empty for the final call in the chain.

Each level strips off the keyword arguments that it needs so that the final empty dict can be sent to a method that expects no arguments at all (for example, object.__init__ expects zero arguments):

class Shape:
    def __init__(self, shapename, **kwds):
        self.shapename = shapename
        super().__init__(**kwds)        

class ColoredShape(Shape):
    def __init__(self, color, **kwds):
        self.color = color
        super().__init__(**kwds)

cs = ColoredShape(color='red', shapename='circle')
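
A nice side-effect of this convention is built-in error detection: any keyword that no class in the chain strips off eventually reaches object.__init__, which rejects it with a TypeError. A sketch (the classes are repeated so the snippet stands alone):

```python
class Shape:
    def __init__(self, shapename, **kwds):
        self.shapename = shapename
        super().__init__(**kwds)

class ColoredShape(Shape):
    def __init__(self, color, **kwds):
        self.color = color
        super().__init__(**kwds)

cs = ColoredShape(color='red', shapename='circle')
print(cs.color, cs.shapename)   # red circle

try:
    ColoredShape(color='red', shapename='circle', weight=5)
except TypeError as e:
    # no class stripped off 'weight', so object.__init__ rejected it
    print('Unexpected keyword rejected:', e)
```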

2) Having looked at strategies for getting the caller/callee argument patterns to match, let’s now look at how to make sure the target method exists.

The above example shows the simplest case. We know that object has an __init__ method and that object is always the last class in the MRO chain, so any sequence of calls to super().__init__ is guaranteed to end with a call to object.__init__. In other words, the target of the super() call is guaranteed to exist, and the call won’t fail with an AttributeError.

For cases where object doesn’t have the method of interest (a draw() method for example), we need to write a root class that is guaranteed to be called before object. The responsibility of the root class is simply to eat the method call without making a forwarding call using super().

Root.draw can also employ defensive programming, using an assertion to ensure it isn’t masking some other draw() method later in the chain. This could happen if a subclass erroneously incorporates a class that has a draw() method but doesn’t inherit from Root:

class Root:
    def draw(self):
        # the delegation chain stops here
        assert not hasattr(super(), 'draw')

class Shape(Root):
    def __init__(self, shapename, **kwds):
        self.shapename = shapename
        super().__init__(**kwds)
    def draw(self):
        print('Drawing.  Setting shape to:', self.shapename)
        super().draw()

class ColoredShape(Shape):
    def __init__(self, color, **kwds):
        self.color = color
        super().__init__(**kwds)
    def draw(self):
        print('Drawing.  Setting color to:', self.color)
        super().draw()

cs = ColoredShape(color='blue', shapename='square')
cs.draw()

If subclasses want to inject other classes into the MRO, those other classes also need to inherit from Root so that no path for calling draw() can reach object without having been stopped by Root.draw. This should be clearly documented so that someone writing new cooperating classes will know to subclass from Root. This restriction is not much different than Python’s own requirement that all new exceptions must inherit from BaseException.

3) The techniques shown above assure that super() calls a method that is known to exist and that the signature will be correct; however, we’re still relying on super() being called at each step so that the chain of delegation continues unbroken. This is easy to achieve if we’re designing the classes cooperatively – just add a super() call to every method in the chain.
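
To see why every occurrence matters, here is a sketch (with hypothetical classes) of what happens when one method in the middle forgets its super() call: the chain silently stops and the remaining classes never run:

```python
class Root:
    def draw(self):
        pass                    # end of the chain; no forwarding call

class A(Root):
    def draw(self):
        print('A.draw')
        # BUG: forgot super().draw() -- the chain stops here

class B(Root):
    def draw(self):
        print('B.draw')
        super().draw()

class C(A, B):
    pass

C().draw()    # prints only 'A.draw'; B.draw is silently skipped
```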

The three techniques listed above provide the means to design cooperative classes that can be composed or reordered by subclasses.

__________________________________________________________________________________________________________________

How to Incorporate a Non-cooperative Class

Occasionally, a subclass may want to use cooperative multiple inheritance techniques with a third-party class that wasn’t designed for it (perhaps its method of interest doesn’t use super() or perhaps the class doesn’t inherit from the root class). This situation is easily remedied by creating an adapter class that plays by the rules.

For example, the following Moveable class does not make super() calls, has an __init__() signature that is incompatible with object.__init__, and does not inherit from Root:

class Moveable:
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def draw(self):
        print('Drawing at position:', self.x, self.y)

If we want to use this class with our cooperatively designed ColoredShape hierarchy, we need to make an adapter with the requisite super() calls:

class MoveableAdapter(Root):
    def __init__(self, x, y, **kwds):
        self.movable = Moveable(x, y)
        super().__init__(**kwds)
    def draw(self):
        self.movable.draw()
        super().draw()

class MovableColoredShape(ColoredShape, MoveableAdapter):
    pass

MovableColoredShape(color='red', shapename='triangle',
                    x=10, y=20).draw()

__________________________________________________________________________________________________________________

Complete Example – Just for Fun

In Python 2.7 and 3.2, the collections module has both a Counter class and an OrderedDict class. Those classes are easily composed to make an OrderedCounter:

from collections import Counter, OrderedDict

class OrderedCounter(Counter, OrderedDict):
    'Counter that remembers the order elements are first seen'
    def __repr__(self):
        return '%s(%r)' % (self.__class__.__name__,
                           OrderedDict(self))
    def __reduce__(self):
        return self.__class__, (OrderedDict(self),)

oc = OrderedCounter('abracadabra')
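
Running it shows both behaviors at once: Counter supplies the counts while OrderedDict keeps the keys in first-seen order. A condensed, standalone copy (repr and reduce omitted):

```python
from collections import Counter, OrderedDict

class OrderedCounter(Counter, OrderedDict):
    'Counter that remembers the order elements are first seen'

oc = OrderedCounter('abracadabra')
print(list(oc))    # ['a', 'b', 'r', 'c', 'd'] -- first-seen order
print(oc['a'])     # 5
```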

__________________________________________________________________________________________________________________

Notes and References

* When subclassing a builtin such as dict(), it is often necessary to override or extend multiple methods at a time. In the above examples, the __setitem__ extension isn’t used by other methods such as dict.update, so it may be necessary to extend those also. This requirement isn’t unique to super(); rather, it arises whenever builtins are subclassed.
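
For example, in CPython, dict.update does not route through an overridden __setitem__, so items added via update() slip past the extension unnoticed:

```python
calls = []

class CountingDict(dict):
    def __setitem__(self, key, value):
        calls.append(key)
        super().__setitem__(key, value)

d = CountingDict()
d['a'] = 1          # routed through the override
d.update(b=2)       # CPython's dict.update bypasses the override
print(calls)        # ['a'] -- the update() call left no trace
```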

* If a class relies on one parent class preceding another (for example, LoggingOD depends on LoggingDict coming before OrderedDict which comes before dict), it is easy to add assertions to validate and document the intended method resolution order:

position = LoggingOD.__mro__.index
assert position(LoggingDict) < position(OrderedDict)
assert position(OrderedDict) < position(dict)

* Good write-ups for linearization algorithms can be found at Python MRO documentation and at Wikipedia entry for C3 Linearization.

* The Dylan programming language has a next-method construct that works like Python’s super(). See Dylan’s class docs for a brief write-up of how it behaves.

* The Python 3 version of super() is used in this post. The full working source code can be found at:  Recipe 577720. The Python 2 syntax differs in that the type and object arguments to super() are explicit rather than implicit. Also, the Python 2 version of super() only works with new-style classes (those that explicitly inherit from object or other builtin type). The full working source code using Python 2 syntax is at Recipe 577721.
__________________________________________________________________________________________________________________

Acknowledgements

Several Pythonistas did a pre-publication review of this article.  Their comments helped improve it quite a bit.

They are:  Laura Creighton, Alex Gaynor, Philip Jenvey, Brian Curtin, David Beazley, Chris Angelico, Jim Baker, Ethan Furman, and Michael Foord.  Thanks one and all.

Regaining Lost Knowledge

Posted February 6, 2010 by rhettinger
Categories: Algorithms

Tags: , , ,

A recent Python newsgroup query asked for an efficient solution to the problem of computing a running median as a large sliding window advances over a stream of data.

One category of replies can be classified as clever.  The respondents used their innate intelligence and knowledge of Python for a fresh look at the problem.  Their solutions focused on the fact that the position of the median doesn’t move much between successive updates.  Unfortunately, these solutions were catastrophically slow for large data windows.

Another category of reply relied on education.  A respondent remembered that QuickSelect is a fast O(n) way of finding a median in unsorted data. I responded with an ASPN recipe implementing QuickSelect (written by yours truly). These posts represented progress, a triumph of education over cleverness, but even that improved solution was unusably slow for large window sizes.

A more promising type of reply relied on research. Surely, this problem had been solved before. Indeed, there is a published paper: Efficient Algorithm for Computing a Running Median by Soumya D. Mohanty with an O(sqrt(n)) solution. Score one for science!

However, that solution was trumped by respondents who characterized the solution mathematically, “the obvious way to compute a running median involves a tree structure so you can quickly insert and delete elements, and find the median. That would be asymptotically O(log n) but messy to implement.” Fortunately, such an implementation exists using the blist Python extension. Alas, we had a good solution but not a portable one. Without the extension module, the B+ tree structure is non-trivial to implement.

When I thought about the problem, the mathematical characterization suggested data structures that maintained sorted data with O(log n) updates, and previous education indicated a skiplist would fit the bill, but it took cleverness to discover that indexing the skiplist to find the median could be reduced to O(log n) time by adding link widths to the structure.  This thinking led to my solution which is easily portable across languages and scales well to very large window sizes.
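
The indexable skiplist itself is too long to reproduce here, but a sorted-list baseline makes the complexity argument concrete. In this sketch (the function name and data are illustrative), bisect locates positions in O(log n), yet the list insert and delete each cost O(n); those two operations are exactly what the indexable skiplist reduces to O(log n):

```python
from bisect import insort, bisect_left

def running_median(stream, k):
    """Yield the median of each full window of size k (k odd).

    Sorted-list baseline: bisect finds positions in O(log n), but the
    list insert/delete still cost O(n) -- the steps an indexable
    skiplist performs in O(log n).
    """
    window = []                  # current window contents, kept sorted
    history = []                 # arrival order, for evicting the oldest
    for x in stream:
        history.append(x)
        insort(window, x)
        if len(window) > k:
            oldest = history.pop(0)
            del window[bisect_left(window, oldest)]
        if len(window) == k:
            yield window[k // 2]

print(list(running_median([5, 2, 8, 3, 9, 1, 4], 3)))   # [5, 3, 8, 3, 4]
```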

Had I discovered something new under the sun?  Yes and no.

Yes, as far as I can tell the idea of using an indexable skiplist to solve the running median problem in O(log n) time had never been presented before anywhere else.  The best published solution was Mohanty’s O(sqrt n) solution. Score one for combining mathematical characterization with education and cleverness.

And no, the big inspiration of figuring out how to make a skiplist indexable was not a new result.  Score a big failure for research.  Everywhere I had looked for skiplist resources, only the basics were presented (insertion and deletion in O(log n) time).  No resource mentioned indexable skiplists.  The previous work on the problem had effectively been lost.  An entire generation of programmers was learning about skiplists but not being taught that they could be made efficiently indexable.

To help the world regain this lost knowledge, I updated the Wikipedia entry for skiplists to show how to make them indexable with my Python recipe, and I added a link to Pugh’s earlier research on the problem.

Will that Wikipedia entry really solve the problem of lost knowledge?  The page view statistics suggest that it will.  Only time will tell.