Tuesday, February 12, 2013

2-day Twisted class in San Francisco: Early Bird ends Friday

If you want to build reliable, well-tested network applications in Python, Twisted may be the tool you need. In this two-day class, taking place on March 11th and 12th (right before PyCon), we will cover the basic principles and core APIs of Twisted. Early bird pricing will save you $100, and ends in just three days.

Covered material will include:
  • Understanding Event Loops: we'll re-implement Twisted's core APIs step-by-step (reactor, transport, protocol), explaining the why and how of event-driven networking.
  • TCP Clients and Servers.
  • Scheduling Timed Events.
  • Deferreds: the motivation and uses of Twisted's result callback abstraction. 
  • Producers and Consumers: dealing with large amounts of data.
  • Unit Testing: how and why to test your networking code.
  • A large, self-paced exercise, implementing an HTTP server and client from scratch using pre-written unit tests as guidance, and our help as needed. (These last two points will also be presented at PyCon, at the Twisted testing tutorial.)
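As a taste of the Deferred material, here is a deliberately minimal sketch of the idea: a placeholder for a result that hasn't arrived yet, carrying a chain of callbacks to run when it does. This is not Twisted's actual implementation — the real twisted.internet.defer.Deferred also handles errbacks, chaining, and cancellation — just the core concept:

```python
class MiniDeferred:
    """A toy version of Twisted's Deferred: holds callbacks until a
    result arrives, then runs them in order, each feeding the next."""

    def __init__(self):
        self._callbacks = []
        self._fired = False
        self._result = None

    def addCallback(self, fn):
        if self._fired:
            # The result already arrived: run the callback immediately.
            self._result = fn(self._result)
        else:
            self._callbacks.append(fn)
        return self

    def callback(self, result):
        # Deliver the result, running queued callbacks as a pipeline.
        self._fired = True
        self._result = result
        while self._callbacks:
            self._result = self._callbacks.pop(0)(self._result)


results = []
d = MiniDeferred()
d.addCallback(lambda r: r * 2)
d.addCallback(results.append)
d.callback(21)      # fires the chain: 21 -> 42 -> appended to results
print(results)      # [42]
```

The point of the abstraction: code that starts a network operation can return the Deferred immediately, and callers attach callbacks without blocking — the class covers how this composes with the event loop.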

To learn more and sign up for the class visit our Eventbrite page.


About us:

Jean-Paul Calderone has consulted for Fortune 500 companies, startups and research institutions. He has taught Twisted tutorials at PyCon, Fluendo SA in Barcelona, and Rackspace Inc. Jean-Paul has been one of the core Twisted maintainers since 2002, and is the maintainer of pyOpenSSL.

Itamar Turner-Trauring spent many years working on distributed applications as part of ITA Software and then Google's airline reservation system, coding in Python (often using Twisted), C++ and a little bit of Common Lisp. Itamar has also worked on projects ranging from a reliable multicast messaging system with congestion control, to a prototype-based configuration language, to a multimedia kiosk for a museum. Itamar has been one of the core Twisted maintainers since 2001.

Wednesday, February 6, 2013

Mock Assurances

I recently tried the mock library; it's quite useful, and in general using it was a pleasant experience... until things turned scary. While refactoring some code and corresponding tests I hit a point where a test should have been failing, and yet was nonetheless passing. A little investigation led me to the problem, and that's when I got really nervous.

Here's a rather silly example of two unit tests using mock.
import unittest
import mock

class C:
    def function(self, x):
        pass

class Tests(unittest.TestCase):
    def test_positive(self):
        C2 = mock.Mock(spec=C)
        obj = C2()
        obj.function(1)
        obj.function.assert_called_once_with(1)

    def test_negative(self):
        C2 = mock.Mock(spec=C)
        obj = C2()
        obj.function(1)
        obj.function.assert_not_called()


if __name__ == '__main__':
    unittest.main()
One would expect test_negative to fail, but in fact both tests pass:
$ python mocktests.py 
..
----------------------------------------------------------------------
Ran 2 tests in 0.001s

OK
Oops.

If you've used mock before, you probably know what's going on. The mock library creates new attributes on demand if they don't exist. Thus:
>>> import mock
>>> obj = mock.Mock()
>>> obj.assert_called_once_with
<bound method Mock.assert_called_once_with of <mock.Mock object at 0x7f6e1c18a990>>
>>> obj.assert_not_called
<Mock name='mock.assert_not_called' id='140110894509008'>
Once again, oops. I had invented a new assertion method, assert_not_called, which doesn't actually exist, and mock had happily created a new object for me. My test was completely broken. My mistake, and therefore my fault. And in fact the mock documentation does mention the possibility of this happening, deep within the bowels of the API documentation: "Because mocks auto-create attributes on demand, and allow you to call them with arbitrary arguments, if you misspell one of these assert methods then your assertion is gone." An appropriate fix is provided: the mock.create_autospec API. It would have been better to include this suggestion in the intro documentation; better yet would be preventing the problem in the first place.
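Here's a sketch of how create_autospec catches this class of mistake. (I'm using the stdlib unittest.mock, which the mock library was later merged into; note that recent versions also ship a real assert_not_called, so the misspelling below is deliberately one that has never existed.)

```python
from unittest import mock

class C:
    def function(self, x):
        pass

C2 = mock.create_autospec(C)
obj = C2()
obj.function(1)

# A real assertion method: checks the recorded call, and passes.
obj.function.assert_called_once_with(1)

# A misspelled assertion: instead of silently passing, the
# autospec'd mock raises AttributeError for attributes that
# don't exist on the spec.
try:
    obj.function.assert_called_once_wiht(1)
except AttributeError as e:
    print("Typo caught:", e)
```

With autospec, the typo fails loudly at the point of the mistake, which is exactly what you want from a testing tool.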

By their very nature, test assertions do nothing silently in the expected case. It's thus quite dangerous to have a library specifically intended for testing where typos create calls that are supposed to be assertions but actually assert nothing, silently. In general, I prefer tools that don't assume I'm perfect. If I never made mistakes I wouldn't need to write tests in the first place, at least for code where I was the only maintainer.

What's more, this will also be a problem if new assertion methods are ever added to future versions of mock. Imagine developer A writes tests using a new assertion only available in mock v1.1; when she runs the tests, they work correctly. Developer B is working on the same code base, but forgot to upgrade and is using mock v1.0, which lacks the new assertion. When he runs the tests they are not actually testing what they seem to be testing. Oops.

The basic design flaw here is having a public API on objects that also create arbitrary attributes on demand. The whole public API of mock.Mock (assertions, attributes, etc.) should be exposed as module-level functions, so that typos or misremembering the API will result in a nice informative AttributeError. Until that happens, I will stick to mock.create_autospec, and avoid mock.Mock or mock.MagicMock.
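To make the proposal concrete, here's a sketch of what such module-level assertions could look like. These helpers are hypothetical — they are not part of mock's API — but the point is that a misspelled function name here fails loudly with a NameError instead of silently passing:

```python
from unittest import mock  # the stdlib port of the mock library

def assert_called_once_with(m, *args, **kwargs):
    """Module-level assertion: inspects the mock's recorded calls
    rather than relying on an auto-created method attribute."""
    expected = [mock.call(*args, **kwargs)]
    if m.call_args_list != expected:
        raise AssertionError(
            "expected exactly %r, got %r" % (expected, m.call_args_list))

def assert_not_called(m):
    """Module-level assertion: fails if the mock was called at all."""
    if m.call_args_list:
        raise AssertionError(
            "unexpectedly called: %r" % (m.call_args_list,))

m = mock.Mock()
m(1)
assert_called_once_with(m, 1)   # passes
try:
    assert_not_called(m)        # fails loudly, as it should
except AssertionError as e:
    print("Assertion failed:", e)
```

Because these are ordinary module functions, a typo like `assert_not_callled(m)` is a NameError at test time, not a silent no-op on a Mock instance.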