Best Practices for Using Functional Programming in Python

Introduction

Python is a very versatile, high-level programming language. It has a generous standard library, support for multiple programming paradigms, and a lot of internal transparency. If you choose, you can peek into lower layers of Python and modify them – and even modify the runtime on the fly as the program executes.

I’ve recently noticed an evolution in the way Python programmers use the language as they gain more experience. Like many new Python programmers, I appreciated the simplicity and user-friendliness of the basic looping, function, and class definition syntax when I was first learning. As I mastered the basics, I became curious about intermediate and advanced features like inheritance, generators, and metaprogramming. However, I wasn’t quite sure when to use them, and would often jump at opportunities to practice that weren’t a great fit. For a while, my code became more complex and harder to read. Then, as I kept iterating – especially if I kept working on the same codebase – I gradually reverted to mostly using functions, loops, and singleton classes.

With that being said, the other features exist for a reason, and they’re important tools to understand. “How to write good code” is obviously an expansive topic – and there’s no single right answer! Instead, my goal with this blog post is to zero in on a specific aspect: functional programming as applied to Python. I’ll dig into what it is, how it can be used in Python, and how – according to my experience – it’s used best.

What is functional programming?

Functional programming, or FP, is a coding paradigm in which the building blocks are immutable values and “pure functions” that share no state with other functions. Given the same input, a pure function always returns the same output – without mutating data or causing side effects. In this sense, pure functions are often compared to mathematical operations. For example, 3 plus 4 will always equal 7, regardless of what other mathematical operations are being done, or how many times you’ve added things together before.
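
To make that concrete in Python, here is a minimal sketch contrasting a pure function with an impure one (the function names are mine, purely for illustration):

def add(a, b):
    # Pure: the result depends only on the arguments, and nothing else changes.
    return a + b

total = 0
def add_to_total(a):
    # Impure: it reads and mutates module-level state, so calling it twice
    # with the same input gives different results.
    global total
    total += a
    return total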

With the building blocks of pure functions and immutable values, programmers can create logical structures. Iteration can be replaced with recursion, because it is the functional way to cause the same action to occur multiple times. The function calls itself, with new inputs, until the parameters meet a termination condition. In addition, there are higher-order functions, which take in other functions as input and/or return them as output. I’ll describe some of these later on.
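
As a quick sketch of iteration expressed as recursion (the builtin sum would be the idiomatic Python choice; this just shows the shape):

def sum_recursive(numbers):
    if not numbers:  # termination condition: an empty list sums to zero
        return 0
    # The function calls itself with a new, smaller input until the condition is met.
    return numbers[0] + sum_recursive(numbers[1:])

sum_recursive([1, 2, 3, 4])  # returns 10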

Although functional programming has existed since the 1950s, and is implemented by a long lineage of languages, it doesn’t fully describe a programming language. Clojure, Common Lisp, Haskell, and OCaml are all functional-first languages with different stances on other programming language concepts, like the type system and strict or lazy evaluation. Most of them also support side effects such as writing to and reading from files in some way or another – usually all very carefully marked as impure.

Functional programming can have a reputation for being abstruse, and for favoring elegance or concision over practicality. Large companies rarely rely on functional-first languages at scale, or at least use them far less widely than languages like C++, Java, or Python. FP, however, is really just a framework for thinking about logical flows, with its upsides and downsides, and it is composable with other paradigms.

What does Python support?

Though Python is not primarily a functional language, it is able to support functional programming relatively easily because everything in Python is an object. That means that function definitions can be assigned to variables and passed around.

def add(a, b):
    return a + b

plus = add

plus(3, 4)  # returns 7

Lambda

The “lambda” syntax allows you to create function definitions in a declarative way. The keyword lambda comes from the Greek letter used in lambda calculus, the formal system of mathematical logic for describing functions and variable bindings abstractly, which has existed for even longer than functional programming. The other term for this concept is “anonymous function”, since lambda functions can be used inline without ever needing a name. If you do choose to assign an anonymous function to a variable, it performs exactly the same as any other function.

(lambda a, b: a + b)(3, 4)  # returns 7

addition = lambda a, b: a + b
addition(3, 4)  # returns 7

The most common place I see lambda functions “in the wild” is as arguments to functions that accept a callable. A “callable” is anything that can be invoked with parentheses – practically speaking, classes, functions, and methods. Among those, the most common use is to declare a sort order via the key argument when sorting data structures.

authors = ['Octavia Butler', 'Isaac Asimov', 'Neal Stephenson', 'Margaret Atwood', 'Ursula K Le Guin', 'Ray Bradbury']
sorted(authors, key=len)  # Returns list ordered by length of author name
sorted(authors, key=lambda name: name.split()[-1])  # Returns list ordered alphabetically by last name.

The downside to inline lambda functions is that they show up with no name in stack traces, which can make debugging more difficult.
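
For instance, if a key function raises an exception, the traceback only identifies the frame as <lambda> (a contrived sketch):

names = ['Octavia Butler', 'Isaac Asimov']
sorted(names, key=lambda name: name.split()[5])
# IndexError: list index out of range
# The failing frame appears as "in <lambda>" in the traceback, with no
# descriptive function name to search the codebase for.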

Functools

The higher-order functions that are the meat-and-potatoes of functional programming are available in Python either as builtins or via the functools library. map and reduce may ring a bell as a way to run distributed data analysis at scale, but they are also two of the most important higher-order functions. map applies a function to every item in a sequence and returns the resulting sequence, and reduce uses a function to collapse a sequence into a single value. (In Python 3, map is a builtin, while reduce must be imported from functools.)

from functools import reduce

val = [1, 2, 3, 4, 5, 6]

# Multiply every item by two
list(map(lambda x: x * 2, val))  # [2, 4, 6, 8, 10, 12]
# Multiply the running result by each item to compute the factorial of 6
reduce(lambda x, y: x * y, val, 1)  # 1 * 1 * 2 * 3 * 4 * 5 * 6 = 720

There is a pile of other higher-order functions that manipulate functions in other ways, notably partial, which locks in some of the parameters to a function. Partial application is closely related to “currying”, a term named after FP pioneer Haskell Curry.

from functools import partial

def power(base, exp):
    return base ** exp

cube = partial(power, exp=3)
cube(5)  # returns 125

For a detailed tour of introductory FP concepts in Python, written in the way a functional-first language would use them, I recommend Mary Rose Cook’s article on the subject.

These functions can turn many-line loops into incredibly concise one-liners. However, they are often harder for the average programmer to grapple with, especially when compared to the almost-English flow of imperative Python. Personally, I can never remember the argument order, or which function does exactly what, even though I’ve looked them up many times. I do encourage playing with them to get to know FP concepts, and I describe some cases in which they may be the right choice in a shared codebase in the next section.
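
For comparison, here is the same small transformation written as a loop, as a comprehension, and with map and filter; which reads best is largely a matter of taste and familiarity:

val = [1, 2, 3, 4, 5, 6]

# Imperative loop
doubled_evens = []
for x in val:
    if x % 2 == 0:
        doubled_evens.append(x * 2)

# List comprehension
doubled_evens = [x * 2 for x in val if x % 2 == 0]

# map and filter
doubled_evens = list(map(lambda x: x * 2, filter(lambda x: x % 2 == 0, val)))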


Decorators

Higher-order functions are also baked into everyday Python via decorators. One way of declaring decorators reflects that directly, and the @ symbol is basically syntactic sugar for passing the decorated function as an argument to the decorator. Here is a simple decorator that sets up retries around a piece of code and returns the first successful value, or gives up and raises the most recent exception after 3 attempts.

def retry(func):
    def retried_function(*args, **kwargs):
        exc = None
        for _ in range(3):
            try:
                return func(*args, **kwargs)
            except Exception as e:
                exc = e
                print("Exception raised while calling %s with args: %s, kwargs: %s. Retrying." % (func, args, kwargs))
        raise exc
    return retried_function

@retry
def do_something_risky():
    ...

retried_function = retry(do_something_risky)  # No need to use `@`

This decorator leaves the input and output types and values exactly the same, but that’s not a requirement. Decorators can add or remove arguments or change their types, and they can also be configured via parameters themselves. I want to stress that decorators themselves are not necessarily “purely functional”; they can (and often do, as in the example above) have side effects – they just happen to use higher-order functions.
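
As a sketch of the configurable flavor, the retry decorator above could take the number of attempts as a parameter by adding one more layer of nesting (the attempts parameter is my invention for illustration; logging is omitted for brevity):

def retry(attempts):
    def decorator(func):
        def retried_function(*args, **kwargs):
            exc = None
            for _ in range(attempts):
                try:
                    return func(*args, **kwargs)
                except Exception as e:
                    exc = e
            raise exc
        return retried_function
    return decorator

@retry(attempts=5)
def do_something_risky():
    ...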

Like many intermediate or advanced Python techniques, this is very powerful and often confusing. The name of the function you called will be different from the name in the stack traces, unless you use the functools.wraps decorator to copy the original function’s metadata onto the wrapper. I have seen decorators do very complicated or important things, like parse values out of JSON blobs or handle authentication. I’ve also seen multiple layers of decorators on the same function or method definition, which requires knowing the decorator application order to understand. I think it can be helpful to use builtin decorators like `staticmethod` or to write simple, clearly named decorators that save a lot of boilerplate, but especially if you want to make your code compatible with type checking, anything that changes the input or output types can easily edge into “too clever”.
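
Here is a minimal sketch of what functools.wraps buys you, with the retry body stripped down to just the wrapper:

import functools

def retry(func):
    @functools.wraps(func)  # copies __name__, __doc__, etc. onto the wrapper
    def retried_function(*args, **kwargs):
        return func(*args, **kwargs)
    return retried_function

@retry
def do_something_risky():
    ...

do_something_risky.__name__  # 'do_something_risky' rather than 'retried_function'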

My recommendations

Functional programming is interesting, and learning paradigms that are outside your current comfort zone is always good for building flexibility and allowing you to look at problems in different ways. However, I wouldn’t recommend writing a lot of functional-first Python, especially in a shared or long-lived codebase. Aside from the pitfalls of each feature I mentioned above, here’s why:

  • Understanding FP is not required to begin using Python, so heavily functional code is likely to confuse other readers, or your future self.
  • You have no guarantee that any of the code you rely on (pip modules or your collaborators’ code) is functional and pure. You also don’t know whether your own code is as pure as you hope it to be – unlike in functional-first languages, neither the syntax nor the interpreter helps enforce purity or eliminate the types of bugs that purity prevents. Mashing up side effects and higher-order functions can be extremely confusing, because you end up with two kinds of complexity to reason through, and then the multiplicative effect of the two together.
  • Using higher-order functions with type annotations is an advanced skill. Type signatures often become long and unwieldy nests of Callable. For example, the correct way to type a simple higher-order decorator that returns its input function is to declare F = TypeVar('F', bound=Callable[..., Any]) and then annotate it as def transparent(func: F) -> F: return func (see the sketch after this list). Or, you may be tempted to bail and use Any instead of trying to figure out the correct signature.
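
Here is that last pattern spelled out as a runnable sketch:

from typing import Any, Callable, TypeVar

F = TypeVar('F', bound=Callable[..., Any])

def transparent(func: F) -> F:
    # The TypeVar tells the type checker that the decorated function
    # keeps exactly the signature it came in with.
    return func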

So what parts of functional programming should be used?

Pure functions

When possible and reasonably convenient, try to keep functions “pure”, and keep state that changes in well-thought-out, well marked places. This makes unit testing a lot easier – you avoid having to do as much set-up, tear-down, and mocking, and the tests are more likely to be predictable regardless of the order they run in.

Here is a non-functional example.

dictionary = ['fox', 'boss', 'orange', 'toes', 'fairy', 'cup']

def pluralize(words):
    for i in range(len(words)):
        word = words[i]
        if word.endswith('s') or word.endswith('x'):
            word += 'es'
        elif word.endswith('y'):
            word = word[:-1] + 'ies'
        else:
            word += 's'
        words[i] = word

def test_pluralize():
    pluralize(dictionary)
    assert dictionary == ['foxes', 'bosses', 'oranges', 'toeses', 'fairies', 'cups']

The first time you run test_pluralize, it will pass, but every time after, it’s going to fail as the ‘s’ and ‘es’ endings get appended ad infinitum. To make it a pure function, we could rewrite it as:

dictionary = ['fox', 'boss', 'orange', 'toes', 'fairy', 'cup']

def pluralize(words):
    result = []
    for word in words:
        if word.endswith('s') or word.endswith('x'):
            plural = word + 'es'
        elif word.endswith('y'):
            plural = word[:-1] + 'ies'
        else:
            plural = word + 's'
        result.append(plural)
    return result

def test_pluralize():
    result = pluralize(dictionary)
    assert result == ['foxes', 'bosses', 'oranges', 'toeses', 'fairies', 'cups']

Note that I’m not actually using FP-specific concepts, but rather just making and returning a new object instead of mutating and reusing the old one. This way, if anyone has a reference remaining to the input list they won’t be surprised.

This is a bit of a toy example, but imagine instead that you’re passing in and mutating some complex object, or even doing operations via a connection to a database. You’ll probably want to write many types of test cases, but you’d have to be very careful about their order or deal with the cost of wiping and recreating state. That kind of effort is best saved for end-to-end integration tests, not smaller unit tests.

Understanding (and avoiding) mutability

Pop quiz, which of the following data structures are mutable?

  1. list
  2. tuple
  3. set
  4. dict
  5. string

(Answer: lists, sets, and dicts are mutable; tuples and strings are not.)

Why is this important? Sometimes lists and tuples feel interchangeable, and it’s tempting to write code that uses a random combination of the two. Then tuples error as soon as you try to do a mutation operation such as assigning to an element. Or, you try to use a list as a dictionary key, and see a TypeError, which occurs precisely because lists are mutable. Tuples and strings can be used as dictionary keys because they’re immutable and can be deterministically hashed, and all the other data structures can’t because they might change in value even when the object identity is the same.
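
A quick illustration of the dictionary-key distinction:

counts = {}
counts[('fox', 'den')] = 1   # fine: tuples are immutable and hashable
counts[['fox', 'den']] = 1   # TypeError: unhashable type: 'list'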

Most importantly, when you pass around dicts/lists/sets, they can be mutated unexpectedly in some other context. This is a mess to debug. The mutable default parameter is a classic case of this:

def add_bar(items=[]):
    items.append('bar')
    return items

l = add_bar()  # l is ['bar']
l.append('foo')
add_bar() # returns ['bar', 'foo', 'bar']

Dictionaries, sets, and lists are powerful, performant, Pythonic, and extremely useful; writing code without them would be inadvisable. That being said, I always use a tuple or None (swapping it out for an empty dict or list later) as a default parameter, and I try to avoid passing mutable data structures from context to context without staying alert to the fact that they might be modified.
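
A minimal sketch of the None-default idiom, applied to the example above:

def add_bar(items=None):
    if items is None:
        items = []  # a fresh list per call, rather than one shared default object
    items.append('bar')
    return items

add_bar()  # ['bar']
add_bar()  # still ['bar']; calls no longer share state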

Limiting use of classes

Often, classes (and their instances) carry the double-edged sword of mutability. The more I program in Python, the more I put off making classes until they’re clearly necessary, and I almost never use mutable class attributes. This can be hard for those coming from highly object-oriented languages like Java, but many things that are usually or always done via a class in another language are fine to keep at the module level in Python. For example, if you need to group functions or constants under a namespace, they can be put together in a separate .py file.

Frequently, I see classes used to hold a small collection of variable names with values, when a namedtuple (or typing.NamedTuple for type specificity) would work just as well, and be immutable.

from collections import namedtuple
VerbTenses = namedtuple('VerbTenses', ['past', 'present', 'future'])
# versus
class VerbTenses(object):
    def __init__(self, past, present, future):
        self.past = past
        self.present = present
        self.future = future

If you do need to provide a source of state, and multiple views into that state and ways to change it, then classes are an excellent choice. In addition, I tend to prefer standalone pure functions over static methods, so they can be used composably in other contexts.

Mutable class attributes are highly dangerous, because they belong to the class definition rather than the instance, so you can end up accidentally mutating state across multiple instances of the same class!

class Bus(object):
    passengers = set()

    def add_passenger(self, person):
        self.passengers.add(person)

bus1 = Bus()
bus2 = Bus()
bus1.add_passenger('abe')
bus2.add_passenger('bertha')
bus1.passengers  # returns {'abe', 'bertha'}
bus2.passengers  # also {'abe', 'bertha'}

Idempotency

Any realistic, large, and complex system has occasions when it will have to fail and retry. The concept of “idempotency” exists in API design and matrix algebra as well, but within functional programming, an idempotent function returns the same result when you pass in its previous output, so redoing something always converges to the same value. A more useful version of the ‘pluralize’ function above would check whether a word was already in plural form before trying to pluralize it, for example.
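
A sketch of that idea, assuming a hypothetical is_plural helper:

def pluralize_word(word, is_plural):
    # is_plural is a hypothetical predicate; because already-plural words pass
    # through unchanged, applying the function to its own output is a no-op.
    if is_plural(word):
        return word
    return word + 's'  # simplified pluralization, just for illustration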

Sparing use of lambdas and higher order functions

I often find it quicker and clearer to use a lambda for a short operation like an ordering key for sort. If a lambda gets longer than one line, however, a regular function definition is probably better. And while passing functions around in general can be useful for avoiding repetition, I try to keep in mind whether the extra structure obscures the clarity too much. Often, breaking the logic out into smaller composable helpers is clearer.

Generators and higher level functions, when necessary

Occasionally you will encounter an abstract generator or iterator, maybe one that returns a large or even infinite sequence of values. A good example of this is range. In Python 3, range is lazy by default (the equivalent of xrange in Python 2), in part to save you from out-of-memory errors when you try to iterate over a huge range, like range(10**10). If you want to do some operation on every item in a potentially large generator, then tools like map and filter may be the best option.
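
For example, a pipeline like this stays lazy the whole way through and never builds the full sequence in memory (islice comes from the standard library’s itertools):

from itertools import islice

big = range(10**10)                        # lazy: no giant list is created
evens = filter(lambda x: x % 2 == 0, big)  # still lazy
doubled = map(lambda x: x * 2, evens)      # still lazy
list(islice(doubled, 5))                   # [0, 4, 8, 12, 16]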

Similarly, if you don’t know how many values your newly written iterator might return — and it’s likely large — defining a generator could be the way to go. However, not everyone will be savvy about consuming it, and may decide to collect the result in a list comprehension, resulting in the OOM error you were trying to avoid in the first place. Generators, Python’s implementation of stream programming, are also not necessarily purely functional – so all the same caveats around safety apply as any other style of Python programming.
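
A minimal sketch of a generator definition, and the hazard of a consumer forcing it:

def squares():
    # Yields an unbounded stream of square numbers, one value at a time.
    n = 0
    while True:
        yield n * n
        n += 1

gen = squares()
next(gen)  # 0
next(gen)  # 1
# list(squares()) would never terminate; consumers need to know this is lazy.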

Concluding thoughts

Getting to know your programming language of choice well by exploring its features, libraries, and internals will undoubtedly help you debug and read code faster. Knowing about and using ideas from other languages or programming language theory can also be fun and interesting, and it can make you a stronger and more versatile programmer. However, being a Python power user ultimately means not just knowing what you *could* do, but understanding when each skill is the most effective choice. Functional programming can be incorporated into Python easily. To keep its incorporation elegant, especially in shared codebases, I find it best to apply a purely functional mindset where it makes code more predictable and easier to test, while keeping the code simple and idiomatic.

This post is a part of Kite’s new series on Python. You can check out the code from this and other posts on our GitHub repository.

This article was originally published at: Kite.com