In this post, we will talk about how Python deals with “list-like objects”. We will dive into some quirks of Python that might seem a bit weird and, by the end, hopefully show you how to build something genuinely useful while avoiding common mistakes.
This tells us the following: if your class subclasses Sequence and defines the __getitem__ and __len__ methods, then:
- calling isinstance(obj, Sequence) will return True and
- instances will also have the other 5 mixin methods: __contains__, __iter__, __reversed__, index and count
(You can verify the second statement by checking out the source code of Sequence; it’s neither big nor complicated)
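One quick way to do that without leaving the interpreter is inspect.getsource; collections.abc is implemented in pure Python, so its source is available:
import inspect
from collections.abc import Sequence

# Prints the full class body of Sequence, mixin methods included
print(inspect.getsource(Sequence))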
The first statement is not really surprising, but it is important because it turns out that isinstance(obj, Sequence) == True is the “official” way of saying that obj is a readable list-like object in Python.
What is interesting here is that, even without subclassing from Sequence, Python already gave __contains__, __iter__ and __reversed__ to our FakeList class from Part 1. Let’s put the last two mixin methods to the test:
f.index('two')
# AttributeError: 'FakeList' object has no attribute 'index'
f.count('two')
# AttributeError: 'FakeList' object has no attribute 'count'
We can fix this by subclassing FakeList from Sequence:
+from collections.abc import Sequence
-class FakeList:
+class FakeList(Sequence):
def __getitem__(self, index):
...
f.index('two')
# <<< 2
f.count('two')
# <<< 1
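For reference, the full class might now look something like this. Part 1’s body of FakeList is not repeated here, so treat the backing data as a hypothetical stand-in that is consistent with the outputs above:
from collections.abc import Sequence

class FakeList(Sequence):
    # Hypothetical backing data, chosen to match the outputs above
    _words = ['zero', 'one', 'two', 'three']

    def __getitem__(self, index):
        return self._words[index]  # raises IndexError past the end

    def __len__(self):
        return len(self._words)

f = FakeList()
f.index('two')
# <<< 2
f.count('two')
# <<< 1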
So the bottom line of all this is:
If you want to make something that can be “officially” considered a readable list-like object in Python, make it subclass Sequence and implement at least the __getitem__ and __len__ methods.
The same conclusion holds true for all the ABCs listed in the documentation. For example, if you want to make a fully legitimate read-write list-like object, you would simply have to subclass from MutableSequence and implement the __getitem__, __len__, __setitem__, __delitem__ and insert methods (the ones in the ‘Abstract methods’ column).
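As a quick sketch (FakeMutableList is a made-up name), a minimal read-write wrapper around a plain list could look like this:
from collections.abc import MutableSequence

class FakeMutableList(MutableSequence):
    def __init__(self, *items):
        self._items = list(items)

    def __getitem__(self, index):
        return self._items[index]

    def __setitem__(self, index, value):
        self._items[index] = value

    def __delitem__(self, index):
        del self._items[index]

    def __len__(self):
        return len(self._items)

    def insert(self, index, value):
        self._items.insert(index, value)

m = FakeMutableList('zero', 'one', 'two')
m.append('three')  # a mixin method we got for free
m.reverse()        # another free mixin
list(m)
# <<< ['three', 'two', 'one', 'zero']
Just like with Sequence, the mixin methods (append, reverse, extend, pop, remove and __iadd__) come for free.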
There is a note in the documentation which is interesting, so we are going to include it here verbatim:
Implementation note: Some of the mixin methods, such as __iter__(), __reversed__() and index(), make repeated calls to the underlying __getitem__() method. Consequently, if __getitem__() is implemented with constant access speed, the mixin methods will have linear performance; however, if the underlying method is linear (as it would be with a linked list), the mixins will have quadratic performance and will likely need to be overridden.
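To make the note concrete, here is a sketch of a hypothetical linked list whose __getitem__ has to walk the nodes from the head; overriding the __iter__ mixin keeps a full iteration linear instead of quadratic:
from collections.abc import Sequence

class LinkedList(Sequence):
    """A minimal singly-linked list; nodes are (value, next) tuples."""

    def __init__(self, *values):
        self._head, self._length = None, 0
        for value in reversed(values):
            self._head = (value, self._head)
            self._length += 1

    def __getitem__(self, index):
        # O(n): has to walk from the head every time
        # (negative indexes omitted for brevity)
        if not 0 <= index < self._length:
            raise IndexError(index)
        node = self._head
        for _ in range(index):
            node = node[1]
        return node[0]

    def __len__(self):
        return self._length

    def __iter__(self):
        # Override the mixin: walk the nodes once, O(n) in total,
        # instead of calling __getitem__ once per index, which is O(n**2)
        node = self._head
        while node is not None:
            yield node[0]
            node = node[1]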
Part 3: Chainable Methods
We are going to shift topics away from list-like objects now. Don’t worry, everything will come together in the end. Let’s make another useless class.
class Counter:
def __init__(self):
self._count = 0
def increment(self):
self._count += 1
def __repr__(self):
return f"<Counter: {self._count}>"
c = Counter()
c.increment()
c.increment()
c.increment()
c
# <<< <Counter: 3>
Nothing surprising here.
It would be nice if we could make the .increment() calls chainable, i.e., if we could do:
c = Counter().increment().increment().increment()
c
# <<< <Counter: 3>
The easiest way to accomplish this is to have .increment() return the Counter object itself:
class Counter:
def __init__(self):
self._count = 0
def increment(self):
self._count += 1
+ return self
def __repr__(self):
return f"<Counter: {self._count}>"
However, this is not advisable. Here is an email from Guido van Rossum (the creator of Python) from 2003:
I'd like to explain once more why I'm so adamant that sort() shouldn't return
'self'.
This comes from a coding style (popular in various other languages, I believe
especially Lisp revels in it) where a series of side effects on a single object
can be chained like this:
x.compress().chop(y).sort(z)
which would be the same as
x.compress()
x.chop(y)
x.sort(z)
I find the chaining form a threat to readability; it requires that the reader
must be intimately familiar with each of the methods. The second form makes it
clear that each of these calls acts on the same object, and so even if you
don't know the class and its methods very well, you can understand that the
second and third call are applied to x (and that all calls are made for their
side-effects), and not to something else.
I'd like to reserve chaining for operations that return new values, like string
processing operations:
y = x.rstrip("\n").split(":").lower()
There are a few standard library modules that encourage chaining of side-effect
calls (pstat comes to mind). There shouldn't be any new ones; pstat slipped
through my filter when it was weak.
--Guido van Rossum
Here is how I interpret this. If someone reads this snippet:
obj.do_something()
they will assume that .do_something():
- mutates obj in some way, and/or
- has an interesting side-effect, and
- probably returns None
When they read this snippet:
obj2 = obj1.do_something()
they will assume that:
- .do_something() does not change obj1 in any way, and
- obj2 will have a new value: either a different type (e.g. a result status) or a slightly mutated copy of obj1
These assumptions break down when methods return self:
c1 = Counter().increment()
c2 = c1.increment()
c1
# <<< <Counter: 2>
c2
# <<< <Counter: 2>
c1 == c2
# <<< True
Someone not familiar with the implementation of Counter would assume that c1 would hold the value 1.
How do we fix this? My suggestion is: make the class’s initializer accept any optional arguments required to fully describe the instance’s state. Then, chainable methods will return a new instance with the appropriate, slightly changed, state.
class Counter:
- def __init__(self):
- self._count = 0
+ def __init__(self, count=0):
+ self._count = count
def increment(self):
- self._count += 1
- return self
+ return Counter(self._count + 1)
def __repr__(self):
return f"<Counter: {self._count}>"
Let’s try it out:
c1 = Counter().increment()
c2 = c1.increment()
c1
# <<< <Counter: 1>
c2
# <<< <Counter: 2>
c1 == c2
# <<< False
It might be a little better if we also do this:
class Counter:
def __init__(self, count=0):
self._count = count
def increment(self):
- return Counter(self._count + 1)
+ return self.__class__(self._count + 1)
def __repr__(self):
return f"<Counter: {self._count}>"
so that .increment() works for subclasses of Counter.
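A hypothetical subclass makes the difference visible: with self.__class__, incrementing a DoubleCounter gives back a DoubleCounter, whereas the hardcoded Counter(...) version would silently downgrade it to a plain Counter:
class DoubleCounter(Counter):
    def __repr__(self):
        return f"<DoubleCounter: {self._count * 2}>"

DoubleCounter().increment()
# <<< <DoubleCounter: 2>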
We essentially made the Counter objects immutable, unless someone changes the “private” _count attribute by hand.
Part 4: Bringing Everything Together
It’s now time to build something actually useful. Let’s consume an API and access the responses like lists. We are going to use the Transifex API (v3). Let’s start with a snippet:
import os
import requests
class TxCollection:
HOST = "https://rest.api.transifex.com"
def __init__(self, url):
response = requests.get(
self.HOST + url,
headers={'Content-Type': "application/vnd.api+json",
'Authorization': f"Bearer {os.environ['API_TOKEN']}"},
)
response.raise_for_status()
self.data = response.json()['data']
organizations = TxCollection("/organizations")
organizations.data[0]['attributes']['name']
# <<< 'diegobz'
Now let’s make this behave like a list:
-import os
+import os, reprlib, collections
import requests
-class TxCollection:
+class TxCollection(collections.abc.Sequence):
HOST = "https://rest.api.transifex.com"
def __init__(self, url):
response = requests.get(
self.HOST + url,
headers={'Content-Type': "application/vnd.api+json",
'Authorization': f"Bearer {os.environ['API_TOKEN']}"},
)
response.raise_for_status()
- self.data = response.json()['data']
+ self._data = response.json()['data']
+ def __getitem__(self, index):
+ return self._data[index]
+
+ def __len__(self):
+ return len(self._data)
+
+ def __repr__(self):
+ result = ", ".join((reprlib.repr(item['id']) for item in self))
+ result = f"<TxCollection ({len(self)}): {result}>"
+ return result
organizations = TxCollection("/organizations")
organizations
# <<< <TxCollection (3): 'o:diegobz', 'o:kb_org', 'o:transifex'>
organizations[2]
# <<< {'id': 'o:transifex',
# ... 'type': 'organizations',
# ... 'attributes': {
# ... 'name': 'Transifex',
# ... 'slug': 'transifex',
# ... 'logo_url': 'https://txc-assets-775662142440-prod.s3.amazonaws.com/mugshots/435381b2e0.jpg',
# ... 'private': False},
# ... 'links': {'self': 'https://rest.api.transifex.com/organizations/o:transifex'}}
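A side note on the reprlib.repr call in __repr__: unlike the built-in repr, it truncates long strings (and large containers) with an ellipsis, which keeps the output readable. With a made-up ID of the same shape as the ones we will meet later:
import reprlib

reprlib.repr('o:kb_org:p:kb1:r:fileless:s:0123456789abcdef:l:el')
# <<< "'o:kb_org:p:k...89abcdef:l:el'"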
What is interesting here is that we know that our class is a legitimate readable list-like object because we fulfilled the requirements we set in Part 2: we subclassed from collections.abc.Sequence and implemented the __getitem__ and __len__ methods.
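And since the Part 2 requirements are met, the mixin methods come along for free (outputs assume the same three organizations as above):
from collections.abc import Sequence

isinstance(organizations, Sequence)
# <<< True
[item['id'] for item in reversed(organizations)]
# <<< ['o:transifex', 'o:kb_org', 'o:diegobz']
organizations.index(organizations[2])
# <<< 2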
Now, if you are familiar with Django querysets, you will know that you can apply filters to them and that they are evaluated lazily, i.e. on demand, after the filters have been set. Let’s try to apply this logic here, first by making our collections lazy:
import os, reprlib, collections
import requests
class TxCollection(collections.abc.Sequence):
HOST = "https://rest.api.transifex.com"
def __init__(self, url):
+ self._url = url
+ self._data = None
+ def _evaluate(self):
+ if self._data is not None:
+ return
response = requests.get(
- self.HOST + url,
+ self.HOST + self._url,
headers={'Content-Type': "application/vnd.api+json",
'Authorization': f"Bearer {os.environ['API_TOKEN']}"},
)
response.raise_for_status()
self._data = response.json()['data']
def __getitem__(self, index):
+ self._evaluate()
return self._data[index]
def __len__(self):
+ self._evaluate()
return len(self._data)
def __repr__(self):
result = ", ".join((reprlib.repr(item['id']) for item in self))
result = f"<TxCollection ({len(self)}): {result}>"
return result
organizations = TxCollection("/organizations")
organizations
# <<< <TxCollection (3): 'o:diegobz', 'o:kb_org', 'o:transifex'>
Our lazy evaluation:
- Will only be triggered when we try to access the collection like a list
- Will abort early if the collection has already been evaluated
To drive point 1 home, I will point out that our __repr__ method (the one that was called when we typed organizations <ENTER> into our Python terminal) does not explicitly trigger an evaluation, but triggers it nevertheless. The for item in self part in its first line will start an iteration, which will call __getitem__ (as we saw in Part 1), which will trigger the evaluation. Even if it didn’t, the len(self) part in the second line would also trigger the evaluation.
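If you want to see the laziness for yourself, peeking at the private _data attribute (purely for demonstration) shows that constructing the collection performs no request at all:
organizations = TxCollection("/organizations")  # no HTTP request yet
organizations._data is None
# <<< True
organizations[0]['attributes']['name']  # first access triggers _evaluate()
# <<< 'diegobz'
organizations._data is None
# <<< False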
Playing with metaprogramming, which in this context means making things behave like things that they are not, can be tricky and dangerous and can cause bugs, as anyone who has played with __setattr__ and run into RecursionErrors can attest. This is the beauty of the conclusion from Part 2: we want to make TxCollection behave like a list and we know exactly which parts of the code trigger that behavior: __getitem__ and __len__. Those are the only parts we need to add our lazy evaluation to in order to be 100% confident that TxCollection will properly behave like a readable list.
Now let’s apply filtering. We will intentionally do it the wrong way, by returning self, so that we can see the flaws outlined in Part 3 in the context of this example. Then we will fix it.
class TxCollection(collections.abc.Sequence):
HOST = "https://rest.api.transifex.com"
def __init__(self, url):
self._url = url
+ self._params = {}
self._data = None
def _evaluate(self):
if self._data is not None:
return
response = requests.get(
self.HOST + self._url,
+ params=self._params,
headers={'Content-Type': "application/vnd.api+json",
'Authorization': f"Bearer {os.environ['API_TOKEN']}"},
)
response.raise_for_status()
self._data = response.json()['data']
+ def filter(self, **filters):
+ self._params.update({f'filter[{key}]': value
+ for key, value in filters.items()})
+ return self
# def __getitem__, __len__, __repr__
Let’s take this out for a spin:
TxCollection("/resource_translations").\
filter(resource="o:kb_org:p:kb1:r:fileless", language="l:el")
# <<< <TxCollection (3): 'o:kb_org:p:k...72e4fdb0:l:el',
# ... 'o:kb_org:p:k...e877d7ee:l:el',
# ... 'o:kb_org:p:k...ed953f8f:l:el'>
(Note: There are some Transifex-API-v3-specific things here, like how filtering is applied and what the IDs of the objects look like, that you don’t have to worry about. If you are interested, you can check out the documentation)
And now let’s demonstrate the flaw we outlined in Part 3:
c1 = TxCollection("/resource_translations").\
filter(resource="o:kb_org:p:kb1:r:fileless", language="l:el")
c2 = c1.filter(translated="true")
c1
# <<< <TxCollection (1): 'o:kb_org:p:k...72e4fdb0:l:el'>
c2
# <<< <TxCollection (1): 'o:kb_org:p:k...72e4fdb0:l:el'>
c1 == c2
# <<< True
We know from our previous run that c1 should have a size of 3, but it got overwritten when we applied .filter() to it.
Also,
c1 = TxCollection("/resource_translations").\
filter(resource="o:kb_org:p:kb1:r:fileless", language="l:el")
_ = list(c1)
c2 = c1.filter(translated="true")
c1
# <<< <TxCollection (3): 'o:kb_org:p:k...72e4fdb0:l:el',
# ... 'o:kb_org:p:k...e877d7ee:l:el',
# ... 'o:kb_org:p:k...ed953f8f:l:el'>
c2
# <<< <TxCollection (3): 'o:kb_org:p:k...72e4fdb0:l:el',
# ... 'o:kb_org:p:k...e877d7ee:l:el',
# ... 'o:kb_org:p:k...ed953f8f:l:el'>
c1 == c2
# <<< True
We forced an evaluation before we applied the second filter (with _ = list(c1)), so the second filter was ignored in both c1 and c2.
To fix this, we will do the same thing we did in Part 3: we will add optional arguments to the initializer that describe the whole state of a TxCollection object and have .filter() return a slightly mutated copy of self.
class TxCollection(collections.abc.Sequence):
HOST = "https://rest.api.transifex.com"
- def __init__(self, url):
+ def __init__(self, url, params=None):
+ if params is None:
+ params = {}
self._url = url
- self._params = {}
+ self._params = params
self._data = None
# def _evaluate
- def filter(self, **filters):
- self._params.update({f'filter[{key}]': value
- for key, value in filters.items()})
- return self
+ def filter(self, **filters):
+ params = dict(self._params) # Make a copy
+ params.update({f'filter[{key}]': value
+ for key, value in filters.items()})
+ return self.__class__(self._url, params)
# def __getitem__, __len__, __repr__
(Note: we didn’t set params={} as the default value in the initializer because you shouldn’t use mutable default arguments.)
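In case that pitfall is unfamiliar: default values are evaluated once, at function definition time, so a mutable default is shared across calls. A tiny made-up example:
def sketchy(params={}):  # the anti-pattern
    params['hits'] = params.get('hits', 0) + 1
    return params

sketchy()
# <<< {'hits': 1}
sketchy()  # same dict object as the first call!
# <<< {'hits': 2}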
c1 = TxCollection("/resource_translations").\
filter(resource="o:kb_org:p:kb1:r:fileless", language="l:el")
c2 = c1.filter(translated="true")
c1
# <<< <TxCollection (3): 'o:kb_org:p:k...72e4fdb0:l:el',
# ... 'o:kb_org:p:k...e877d7ee:l:el',
# ... 'o:kb_org:p:k...ed953f8f:l:el'>
c2
# <<< <TxCollection (1): 'o:kb_org:p:k...72e4fdb0:l:el'>
c1 == c2
# <<< False
Works like a charm!
We concluded Part 3 by saying that the class we made creates immutable objects, which is why it is safe to use chainable methods on them. What is interesting here is that TxCollection objects are not immutable. So, how do we ensure that implementing chainable methods is safe? The answer is that the state of a TxCollection consists of two parts:
- The _url and _params attributes, which are immutable.
- The _data attribute, which is dynamic. But: it will only be evaluated once and it has a deterministic relationship with the immutable parts. The only way for _data to be evaluated differently is to change _url and _params, which can only happen if we make a mutated copy of the original object via .filter()
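You can see this in action (again peeking at the private attribute purely for demonstration): .filter() on an already-evaluated collection hands back a fresh, unevaluated copy, leaving the original’s cache untouched:
c1 = TxCollection("/resource_translations").\
    filter(resource="o:kb_org:p:kb1:r:fileless", language="l:el")
_ = list(c1)      # force an evaluation of c1
c2 = c1.filter(translated="true")
c1._data is None  # c1 keeps its cached data
# <<< False
c2._data is None  # c2 has not been evaluated yet
# <<< True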
Conclusion
I hope this has been interesting. You can write powerful and expressive code with what is explained here, hopefully without introducing bugs.