- Syntactical simplicity
- Can be both compiled and interpreted, which makes it very flexible
- Large open source ecosystem built around it
- List comprehension
- First class support on all platforms
The biggest of the preceding is #1, and it’s the reason I think Python should be the 1st language anyone learns. The simple syntax elucidates a lot of complicated concepts (e.g. objects, classes, functional programming, etc.) that other language syntaxes obfuscate.
Bear in mind that most of my programming experience is in Mathematica, Fortran, and Matlab, all of which are very consistent languages. Also, I am by no means a Python expert, so if anything I say here is wrong, be sure to correct me.
That said, there are a few things about Python that make me sad:
Incompatibilities among different versions
Python 3.* is not backwards compatible with 2.*. Both version numbers are maintained simultaneously so that there are 2 current Python versions (really?). Oh yeah, and code written in version x.n might not work in version x.n+y – where x, n, and y are integers.
~~Fascinating~~ Horrifying stuff, really.
% operator semantics
When used between integers, `%` is the modulus operator: `a % b` returns the remainder of dividing `a` by `b`. When used with strings, however, `%` does something entirely different: `"Hi %s name %s" % ("my", "is")` produces `"Hi my name is"`. There’s nothing wrong per se with that, but it’s completely different from the integer functionality. An operator shouldn’t mean wildly different things depending on the data type it’s used with.
Function notation varies based on the data type of the argument
Most Python functions use prefix notation, e.g. `f(x)`. Some functions, such as `upper()`, use postfix notation, e.g. `x.f()`. While this is consistent with Python’s class-methods approach – `upper()` is a string class method – I find it unusual compared to the other languages I’m used to, in which all functions have the same notations available to them.*
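A quick illustration of the two notations (`len()` here is just a stand-in example of a prefix-style built-in; `upper()` is the postfix case from above):

```python
print(len("hello"))     # 5 -- prefix: the function wraps its argument
print("hello".upper())  # HELLO -- postfix: the function hangs off the object
```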
range() and list/string indices have inconsistent/confusing semantics
`range(x, y)` – where `x` and `y` are integers – gives you a list that begins with `x` and ends with `y-1`. `range(y)` gives you a list from `0` to `y-1`. I’m guessing the semantics of `range(x, y)` is a list starting at `x` of length `y - x`, with `x = 0` if it’s not an argument. This is incredibly convoluted compared to simply having `range(x, y)` return a list of integers from `x` to `y` inclusive and `range(y)` return a list from `1` to `y`.
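The current behavior, for the record (wrapped in `list()` because Python 3’s `range()` returns a lazy object rather than a list):

```python
print(list(range(2, 6)))  # [2, 3, 4, 5] -- stops at 5, not 6
print(list(range(4)))     # [0, 1, 2, 3] -- starts at 0 when x is omitted
```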
This ridiculousness carries over into list/string slicing (generating contiguous sublists from an input list by specifying the indices of the desired elements). Python indexes both lists and strings starting from 0, ergo the indices of `listA = [a, b, c, d, e, f]` are `[0, 1, 2, 3, 4, 5]` respectively.
`listA[p]` returns the item at index `p`. This makes total sense by itself, but starting indices at 0 causes an issue I’ll address later. `listA[p:q]` returns everything from index `p` to the index before `q`. This makes no sense. `listA[p:q]` should return everything from index `p` to index `q`.
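The half-open behavior is easy to see with the `listA` example (written here with string elements):

```python
listA = ["a", "b", "c", "d", "e", "f"]
print(listA[2])    # c
print(listA[1:4])  # ['b', 'c', 'd'] -- index 4 is excluded, not included
```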
Starting indices at 0, while mathematically correct, leads to oddities when attempting to retrieve an item at the end of the list using negative indices. For example, for `stringB = "abcdef"`, `stringB[0]` picks up `a`, but to pick up the last item – `f` – counting from the end of the list requires `stringB[-1]`. If list indexing started at 1 as it does in the real world – as opposed to computer science theory – then indices of `1` and `-1` would pick up the first and last items in lists, respectively.
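The asymmetry in action, using `stringB` from above:

```python
stringB = "abcdef"
print(stringB[0])   # a -- the first item sits at index 0...
print(stringB[-1])  # f -- ...but the last item is at -1, not -0
```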
The obvious solution to all of this is to make Python count like people in the real world do: indices should start at 1, and `range()` should take inclusive endpoints as arguments.
.sort() works in place
I subscribe to the belief that functions should NOT change their arguments directly, since doing so risks destroying the original data held in memory. The argument that in-place mutation keeps machines from crashing by running out of memory is silly.
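For contrast, the built-in `sorted()` returns a new list and leaves its argument alone; a quick demonstration of the two behaviors:

```python
nums = [3, 1, 2]
result = nums.sort()     # mutates nums in place...
print(nums)              # [1, 2, 3]
print(result)            # None -- .sort() returns nothing

original = [3, 1, 2]
print(sorted(original))  # [1, 2, 3] -- a brand-new list
print(original)          # [3, 1, 2] -- untouched
```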
Operating on dictionaries can produce unordered output
This means that any time you loop through a dictionary, you will go through every key, but you are not guaranteed to get the output in any particular order. I can’t imagine how difficult this makes troubleshooting operations on large dictionaries.
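A defensive habit, if you need deterministic iteration, is to impose an order yourself rather than rely on the dictionary’s internals; a minimal sketch:

```python
d = {"b": 2, "a": 1, "c": 3}

# Iterating the dict directly yields whatever order the implementation
# chooses (modern CPython happens to preserve insertion order, but older
# versions made no such guarantee). Sorting the keys makes it explicit.
for key in sorted(d):
    print(key, d[key])
```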
Despite all of the above, I maintain that if you learn only 1 programming language, Python should be it. Unless you’re writing numerical solvers, in which case Fortran should be your only choice 😛
*Mathematica is an extreme example of this: `f[x]`, `f@x`, and `x//f` all have the same effect.