Decorators are used quite often in Python code. They make a lot of things easier. Consider this handy decorator from Django.

```python
@login_required
def some_view(request):
    # do stuff here
```

`login_required` is said to be the decorator, and `some_view` is said to be the decorated function. This decorator ensures that the user is logged in before the view is processed; otherwise they are redirected to another page (in fact, the page to redirect to can be specified through the decorator itself).

# How does this work?

Before seeing how decorators work, there are a few things you need to know.

## Closures

Python supports the concept of closures. Consider the following.

```python
def outer_function(args):
    def inner_function(more_args):
        # some stuff here
        # notice that both args and more_args
        # are accessible here
        print(args, more_args)
    return inner_function

fun = outer_function(1)
fun(2)  # prints 1 2
```

Every time I call the function `outer_function`, a new definition of `inner_function` is created and returned. The fun thing is, even though the scope of `args` is only within the definition of `outer_function`, it is still available when `inner_function` is executed.
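To see that each call really does produce a fresh closure with its own captured `args`, you can poke at the function's `__closure__` attribute. Here is a variant of the example above that returns instead of prints, so the results are easy to inspect:

```python
# a variant of outer_function that returns instead of prints
def outer_function(args):
    def inner_function(more_args):
        return args, more_args
    return inner_function

f1 = outer_function(1)
f2 = outer_function(2)

# each call creates a distinct inner_function object...
assert f1 is not f2
# ...and each one remembers its own captured value of args
assert f1(0) == (1, 0)
assert f2(0) == (2, 0)
# the captured value lives in the function's __closure__ cells
assert f1.__closure__[0].cell_contents == 1
```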

That was confusing. Let me give a better, practical example.

Functional programming languages have the concept of `partial` functions. Let's make a partial function.

```python
def partial(function, *args, **kwargs):
    def ret(*iargs, **ikwargs):
        # keyword arguments given at call time override the stored ones
        return function(*(args + iargs), **dict(kwargs, **ikwargs))
    return ret
```

Now I’m going to create a partial function out of `add`, which takes two parameters and returns their sum.

```python
def add(x, y):
    return x + y
```

```pycon
>>> add(5, 6)
11
```

The partial function I create will be called `add5`; it will take a single parameter, add it to 5, and return the sum. To create the partial function, we call `partial` as follows:

```pycon
>>> add5 = partial(add, 5)  # the first parameter supplied to add is 5
>>> add5(6)
11
>>> add5(10)
15
```

Notice how the value we passed to `partial` gets “stored” somehow? This works because the returned function remembers all the variables in its own scope and the enclosing scopes. This can be confirmed by printing the value of `locals()` within the returned function.

```python
...
    def ret(*iargs, **ikwargs):
        print(locals())
        return function(*(args + iargs), **dict(kwargs, **ikwargs))
    return ret
...
```

```pycon
>>> add5 = partial(add, 5)
>>> add5(6)
{'args': (5,), 'iargs': (6,), 'kwargs': {}, 'function': <function add at 0x92e1844>, 'ikwargs': {}}
11
```

So clearly, the value passed to `partial` is accessible to the inner function. Keep this in mind; we’re going to use this concept in decorators.
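Incidentally, the standard library ships this exact idea as `functools.partial`. For our example it behaves the same way as the hand-rolled version:

```python
from functools import partial  # the standard-library version

def add(x, y):
    return x + y

add5 = partial(add, 5)
assert add5(6) == 11
assert add5(10) == 15
```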

## Why decorate?

Decorating a function is often done to modify the result of the function itself, or to separate out logic that is not related to the action performed by the function.

Let us see examples for both.

A very trivial example: suppose you have a series of functions, each returning a string, and each returned string has to be converted to upper case. A decorator, `uppercase`, can be written as follows.

```python
def uppercase(func):
    def ret(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return ret

def string_function(string__):
    # do some stuff with the string, and return it
    return string__

string_function = uppercase(string_function)  # apply the decorator
```
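To check the behavior, here is a quick self-contained run. The function name and input string are made up for illustration:

```python
def uppercase(func):
    def ret(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return ret

@uppercase
def greet(name):          # a hypothetical string-returning function
    return "hello, " + name

assert greet("world") == "HELLO, WORLD"
```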

What the heck? Where’s the `@thing` I’ve seen for decorators?

```python
@uppercase
def string_function(...):
    ...
```

is equivalent to

```python
def string_function(...):
    ...

string_function = uppercase(string_function)
```

So you see, `@uppercase` is just syntactic sugar for calling the function `uppercase` with the function to be decorated as the argument. Personally, whenever I see a decorator in the `@decorator` pattern, I mentally parse it as `func = decorator(func)`.
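With that mental model, the `login_required` decorator from the introduction can be sketched roughly like this. This is a simplified stand-in, not Django’s actual implementation: the tiny `User`/`Request` classes and the redirect string are made up so the sketch runs on its own.

```python
def login_required(view):
    def wrapper(request, *args, **kwargs):
        if not request.user.is_authenticated:
            # the real Django decorator returns an HTTP redirect here,
            # to a configurable login URL
            return "redirect to /login/"
        return view(request, *args, **kwargs)
    return wrapper

# tiny stand-ins for Django's request/user objects
class User:
    def __init__(self, is_authenticated):
        self.is_authenticated = is_authenticated

class Request:
    def __init__(self, user):
        self.user = user

@login_required
def some_view(request):
    return "secret page"

assert some_view(Request(User(True))) == "secret page"
assert some_view(Request(User(False))) == "redirect to /login/"
```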

Let’s see another example. This is a very widely used application of decorators: we’re going to write a `memoize` decorator.

## Memoize? What the heck is that?

Memoization is a really cool technique which can be applied to pure functions. Pure functions, as you might know, always give the same output for the same input values, so it’s often useful to store the output for already-computed inputs. When we get an input that has already been computed, we can simply retrieve the output from storage instead of computing it all over again. Think of it as a cache.

Let’s consider an example. The Fibonacci series is a well-known series. If you don’t know what it is, you should check it out; it’s fun!

A naive recursive implementation to find the nth item of the Fibonacci series [with n starting from 0] could be written as follows.

```python
def fib(n):
    if n == 0 or n == 1:
        return n
    else:
        return fib(n - 1) + fib(n - 2)
```

Let’s run this for bigger numbers.

```
In [28]: %timeit fib(35)
1 loops, best of 3: 8.56 s per loop
```

Of course, you may get a different result, but that’s incredibly long for me. Let’s think of some ways to reduce the time.

`fib(5)` can be expressed as `fib(3) + fib(4)`, which in turn can be expressed as `fib(2) + fib(1) + fib(3) + fib(2)`. See how the pattern repeats? We need to calculate `fib` for lower values again and again.
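One way to see this repeated work concretely is to count the calls. The counter below is an instrumented variant added for illustration, not part of the original implementation:

```python
calls = 0

def fib(n):
    global calls
    calls += 1  # count every invocation, including the recursive ones
    if n == 0 or n == 1:
        return n
    return fib(n - 1) + fib(n - 2)

fib(20)
assert calls == 21891  # over twenty thousand calls just to compute fib(20)
```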

Given that `fib` itself is a pure function, why not store the values as we compute them? Let’s apply memoization in this case.

## A memoize decorator

Our strategy to attack this problem is to keep a dictionary, where each key maps to `function(key)`. As we compute new `function(key)` values, we store an entry `key: function(key)` in the dictionary.

```python
def memoize(function):
    cache = {}
    def compute(key):
        if key not in cache:
            cache[key] = function(key)
        return cache[key]
    return compute
```

Now, let’s apply this `memoize` decorator to our `fib` function:

```python
@memoize
def fib(n):
    ...
```

Time to run the tests again.

```
In [37]: %timeit fib(35)
1000000 loops, best of 3: 345 ns per loop
```

Yaaay! Look at the improvement: from 8-ish seconds, it has come down to 345 nanoseconds. (To be fair, `%timeit` calls the function repeatedly, so after the first call every timed run is just a dictionary lookup; still, even the first memoized `fib(35)` is dramatically faster than the naive version.) So there you go, that’s the kind of performance boost you can get by using memoization.
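As an aside, modern Python ships a ready-made memoize decorator, `functools.lru_cache`, which also handles multiple arguments and cache-size limits:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # maxsize=None means the cache is unbounded
def fib(n):
    if n == 0 or n == 1:
        return n
    return fib(n - 1) + fib(n - 2)

assert fib(35) == 9227465
```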

That’s it for now.