Scientific Python: Transitioning from MATLAB to Python

Part of the introductory series to using Python for Vision Research brought to you by the GestaltReVision group (KU Leuven, Belgium).

This notebook is meant as an introduction to Python's essential scientific packages: Numpy, PIL, Matplotlib, and SciPy.

There is more Python learning material available on our lab's wiki.

Author: Maarten Demeyer
Year: 2014
Copyright: Public Domain as in CC0

Contents

  • A Quick Recap
  • Numpy
  • PIL: the Python Imaging Library
  • Matplotlib
  • Scipy

A Quick Recap

Data types

Depending on what kind of values you want to store, Python variables can be of different data types. For instance:

In [ ]:
my_int = 5
print my_int, type(my_int)

my_float = 5.0
print my_float, type(my_float)

my_boolean = False
print my_boolean, type(my_boolean)

my_string = 'hello'
print my_string, type(my_string)

Lists

One useful data type is the list, which stores an ordered, mutable sequence of elements of any data type, even mixed:

In [ ]:
my_list = [my_int, my_float, my_boolean, my_string]
print type(my_list)

for element in my_list:
    print type(element)

To retrieve or change specific elements in a list, indices and slicing can be used.

Indexing starts at zero. Slices run up to, but do not include, the end index.

In [ ]:
print my_list[1]

my_list[1] = 3.0
my_sublist = my_list[1:3]

print my_sublist
print type(my_sublist)
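
Negative indices count backwards from the end of the sequence, and can also serve as slice boundaries. A quick illustration:

In [ ]:
# -1 refers to the last element
print my_list[-1]

# As a slice boundary: everything except the last element
print my_list[:-1]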

Functions

Re-usable pieces of code can be put into functions. Many pre-defined functions are available in Python packages.

Functions can have both required and optional input arguments. When the function has no output argument, it returns None.

In [ ]:
# Function with a required and an optional argument
def regress(x, c=0, b=1):
    return (x*b)+c

print regress(5)         # Only required argument
print regress(5, 10, 3)  # Use argument order
print regress(5, b=3)    # Specify the name to skip an optional argument
In [ ]:
# Function without return argument
def divisible(a,b):
    if a%b:
        print str(a) + " is not divisible by " + str(b)
    else:
        print str(a) + " is divisible by " + str(b)

divisible(9,3)
res = divisible(9,2)
print res
In [ ]:
# Function with multiple return arguments
def add_diff(a,b):
    return a+b, a-b

# Assigned as a tuple
res = add_diff(5,3)
print res

# Directly unpacked to two variables
a,d = add_diff(5,3)
print a
print d

Objects

Every variable in Python is actually an object. Objects bundle member variables with tightly connected member functions that (typically) use these member variables. Lists are a good example of this.

In [ ]:
my_list = [1, False, 'boo']
my_list.append('extra element')
my_list.remove(False)
print my_list

The member variables in this case just contain the information on the elements in the list. They are 'hidden' and not intended to be used directly - you manipulate the list through its member functions.

The functions above are in-place methods, changing the original list directly, and returning None. This is not always the case. Some member functions, for instance in strings, do not modify the original object, but return a second, modified object instead.

In [ ]:
return_arg = my_list.append('another one')
print return_arg
print my_list
In [ ]:
my_string = 'kumbaya, milord'
return_arg = my_string.replace('lord', 'lard')
print return_arg
print my_string

Do you remember why list functions are in-place, while string functions are not?
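
If not: the key is that strings are immutable, so they cannot be modified in-place at all. A quick demonstration:

In [ ]:
# Lists support in-place item assignment...
my_list[0] = 2
print my_list

# ...but strings do not; they can only be replaced by new strings
try:
    my_string[0] = 'K'
except TypeError as e:
    print e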

Numpy

Why we need Numpy

While lists are great, they are not very suitable for scientific computing. Consider this example:

In [ ]:
subj_length = [180.0,165.0,190.0,172.0,156.0]
subj_weight = [75.0,60.0,83.0,85.0,62.0]
subj_bmi = []

# EXERCISE 1: Try to compute the BMI of each subject, as well as the average BMI across subjects
# BMI = weight/(length/100)**2

Clearly, this is clumsy. MATLAB users would expect something like this to work:

In [ ]:
subj_bmi = subj_weight/(subj_length/100)**2 
mean_bmi = mean(subj_bmi)

But it doesn't. / and ** are not defined for lists, nor does a built-in mean() function exist. + and * are defined, but they mean something else. Do you remember what they do?
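
If not, here is a quick reminder:

In [ ]:
# For lists, + concatenates and * replicates
print subj_length + subj_weight
print subj_length * 2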

The ndarray data type

Enter Numpy, and its ndarray data type, allowing these elementwise computations on ordered sequences, and implementing a host of mathematical functions operating on them.

Lists are converted to Numpy arrays by calling the np.array() constructor function, which takes a list and creates a new array object filled with the list's values.

In [ ]:
import numpy as np

# Create a numpy array from a list
subj_length = np.array([180.0,165.0,190.0,172.0,156.0])
subj_weight = np.array([75.0,60.0,83.0,85.0,62.0])

print type(subj_length), type(subj_weight)

# EXERCISE 2: Try to complete the program now!
# Hint: np.mean() computes the mean of a numpy array
# Note that unlike MATLAB, Python does not need the '.' before elementwise operators

Numpy is a very large package that we can't possibly cover completely, but we will cover enough to get you started.

shape and dtype

The most basic characteristics of a Numpy array are its shape and the data type of its elements, or dtype. For those of you who have worked in MATLAB before, this should be familiar.

In [ ]:
# Multi-dimensional lists are just nested lists
# This is clumsy to work with
my_nested_list = [[1,2,3],[4,5,6]]

print my_nested_list

print len(my_nested_list)
print my_nested_list[0]
print len(my_nested_list[0])
In [ ]:
# Numpy arrays handle multidimensionality better
arr = np.array(my_nested_list)

print arr         # nicer printing
print arr.shape   # direct access to all dimension sizes
print arr.size    # direct access to the total number of elements
print arr.ndim    # direct access to the number of dimensions

The member variables shape and size contain the dimension lengths and the total number of elements, respectively, while ndim contains the number of dimensions.

The shape is represented by a tuple, where the last dimension is the inner dimension, representing the columns of a 2-D matrix. The first dimension is the top-level, outer dimension and represents the rows here. We could also make 3-D (or even higher-dimensional) arrays:

In [ ]:
arr3d = np.array([ [[1,2,3],[4,5,6]] , [[7,8,9],[10,11,12]] ])

print arr3d

print arr3d.shape
print arr3d.size
print arr3d.ndim

Here the last or inner dimension becomes the layer dimension: each innermost list of the constructor holds the values that a given (row,column) coordinate takes across the layers. Rows and columns remain the first two dimensions. In other words, what we have here is three layers of two-by-two matrices, not two layers of two-by-three matrices.

This also shows that the shape tuple lists the dimensions from the outermost to the innermost.

The second basic property of an array is its dtype. Unlike list elements, numpy array elements are (typically) all of the same type.

In [ ]:
# The type of a numpy array is always... numpy.ndarray
arr = np.array([[1,2,3],[4,5,6]])
print type(arr)

# So, let's do a computation
print arr/2

# Apparently we're doing our computations on integer elements!
# How do we find out?
print arr.dtype
In [ ]:
# And how do we fix this?
arr = arr.astype('float')    # Note: this is not an in-place function!
print arr.dtype
print arr/2
In [ ]:
# Alternatively, we could have defined our dtype better from the start
arr = np.array([[1,2,3],[4,5,6]], dtype='float')
print arr.dtype

arr = np.array([[1.,2.,3.],[4.,5.,6.]])
print arr.dtype

To summarize, any numpy array is of the data type numpy.ndarray, but the data type of its elements can be set separately as its dtype member variable. It's a good idea to explicitly define the dtype when you create the array.

Indexing and slicing

The same indexing and slicing operations used on lists can also be used on Numpy arrays. It is possible to perform computations on slices directly.

But pay attention: Numpy arrays must have identical shapes if you want to combine them elementwise. There are some exceptions though (known as broadcasting), the most common being scalar operands.

In [ ]:
arr = np.array([[1,2,3],[4,5,6],[7,8,9]], dtype='float')

# Indexing and slicing
print arr[0,0]      # or: arr[0][0]
print arr[:-1,0]
In [ ]:
# Elementwise computations on slices
# Remember, the LAST dimension is the INNER dimension
print arr[:,0] * arr[:,1]
print arr[0,:] * arr[1,:]

# Note that you could never slice across rows like this in a nested list!
In [ ]:
# This doesn't work
# print arr[1:,0] * arr[:,1]

# And here's why:
print arr[1:,0].shape, arr[:,1].shape
In [ ]:
# This however does work. You can always use scalars as the other operand.
print arr[:,0] * arr[2,2]

# Or, similarly:
print arr[:,0] * 9.
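
Scalars are in fact the simplest case of Numpy's broadcasting rules: when one operand has a dimension of length 1 (or is missing that dimension altogether), it is automatically 'stretched' to match the shape of the other operand. A minimal illustration:

In [ ]:
# A (3,) vector is combined with every row of the (3,3) array
print arr * np.array([1.,10.,100.])

# A (3,1) column vector is combined with every column instead
print arr * np.array([[1.],[10.],[100.]])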

As an exercise, can you create a 2x3 array containing the column-wise and the row-wise means of the original matrix, respectively? Do not use a for-loop.

In [ ]:
# EXERCISE 3: Create a 2x3 array containing the column-wise and the row-wise means of the original matrix
# Do not use a for-loop, and also do not use the np.mean() function for now.
arr = np.array([[1,2,3],[4,5,6],[7,8,9]], dtype='float')

This works, but it is still a bit clumsy. We will learn more efficient methods below.

Filling and manipulating arrays

Arrays need not always be created by hand. The following functions are particularly common. Again, they are analogous to what you would do in MATLAB.

In [ ]:
# 1-D array, filled with zeros
arr = np.zeros(3)
print arr

# Multidimensional array of a given shape, filled with ones
# This automatically allows you to fill arrays with /any/ value
arr = np.ones((3,2))*5
print arr

# Sequence from 1 to AND NOT including 16, in steps of 3
# Note that using a float input makes the dtype a float as well
# This is similar to np.array(range(1,16,3)), except that range() cannot handle floats
arr = np.arange(1.,16.,3)
print arr

# 3 equally spaced values, from 1 to AND including 16
# This always returns an array with dtype float
arr = np.linspace(1,16,3)
print arr
In [ ]:
# Array of random numbers between 0 and 1, of a given shape
# Note that the inputs here are separate integers, not a tuple
arr = np.random.rand(5,2)
print arr

# Array of random integers from 0 to AND NOT including 10, of a given shape
# Here the shape is defined as a tuple again
arr = np.random.randint(0,10,(5,2))
print arr

Once we have an array, we may wish to replicate it to create a larger array.

Here the concept of an axis becomes important, i.e., along which of the dimensions of the array are you working? axis=0 corresponds to the first dimension of the shape tuple, axis=-1 always corresponds to the last dimension (inner dimension; columns in case of 2D, layers in case of 3D).

In [ ]:
arr0 = np.array([[1,2],[3,4]])
print arr0

# 'repeat' replicates elements along a given axis
# Each element is replicated directly after itself
arr = np.repeat(arr0, 3, axis=-1)
print arr

# We may even specify the number of times each element should be repeated
# The length of the tuple should correspond to the dimension length
arr = np.repeat(arr0, (2,4), axis=0)
print arr
In [ ]:
print arr0

# 'tile' replicates the array as a whole
# Use a tuple to specify the number of tilings along each dimension
arr = np.tile(arr0, (2,4))
print arr
In [ ]:
# 'meshgrid' is commonly used to create X and Y coordinate arrays from two vectors
# where each array contains the X or Y coordinates corresponding to a given pixel in an image
x = np.arange(10)
y = np.arange(5)
print x,y

arrx, arry = np.meshgrid(x,y)
print arrx
print arry

Concatenating an array allows you to make several arrays into one.

In [ ]:
arr0 = np.array([[1,2],[3,4]])
arr1 = np.array([[5,6],[7,8]])

# 'concatenate' requires an axis to perform its operation on
# The original arrays should be put in a tuple
arr = np.concatenate((arr0,arr1), axis=0)
print arr # as new rows

arr = np.concatenate((arr0,arr1), axis=1)
print arr # as new columns
In [ ]:
# Suppose we want to create a 3-D matrix from them,
# we have to create them as being three-dimensional
# (what happens if you don't?)
arr0 = np.array([[[1],[2]],[[3],[4]]])
arr1 = np.array([[[5],[6]],[[7],[8]]])
print arr0.shape, arr1.shape
arr = np.concatenate((arr0,arr1),axis=2)
print arr
In [ ]:
# hstack, vstack, and dstack are short-hand functions
# which will automatically create these 'missing' dimensions
arr0 = np.array([[1,2],[3,4]])
arr1 = np.array([[5,6],[7,8]])

# vstack() concatenates rows
arr = np.vstack((arr0,arr1))
print arr

# hstack() concatenates columns
arr = np.hstack((arr0,arr1))
print arr

# dstack() concatenates 2D arrays into 3D arrays
arr = np.dstack((arr0,arr1))
print arr
In [ ]:
# Their counterparts are the hsplit, vsplit, dsplit functions
# They take a second argument: how do you want to split
arr = np.random.rand(4,4)
print arr
print '--'

# Splitting into equal parts
arr0,arr1 = np.hsplit(arr,2)
print arr0
print arr1
print '--'

# Or, specify exact split points
arr0,arr1,arr2 = np.hsplit(arr,(1,2))
print arr0
print arr1
print arr2

Finally, we can easily reshape and transpose arrays.

In [ ]:
arr0 = np.arange(10)
print arr0
print '--'

# 'reshape' does exactly what you would expect
# Make sure though that the total number of elements remains the same
arr = np.reshape(arr0,(5,2))
print arr

# You can also leave one dimension blank by using -1 as a value
# Numpy will then compute for you how long this dimension should be
arr = np.reshape(arr0,(-1,5))
print arr
print '--'

# 'transpose' allows you to switch around dimensions
# A tuple specifies the new order of dimensions
arr = np.transpose(arr,(1,0))
print arr

# For simply transposing rows and columns, there is the short-hand form .T
arr = arr.T
print arr
print '--'

# 'flatten' creates a 1D array out of everything
arr = arr.flatten()
print arr

Time for an exercise! Can you write your own 'meshgrid3d' function, which returns the resulting 2D arrays as two layers of a 3D matrix, instead of two separate 2D arrays?

In [ ]:
# EXERCISE 4: Create your own meshgrid3d function
# Like np.meshgrid(), it should take two vectors and replicate them; one into columns, the other into rows
# Unlike np.meshgrid(), it should return them as a single 3D array rather than 2D arrays
# ...do not use the np.meshgrid() function

def meshgrid3d(xvec, yvec):
    pass  # fill in!

xvec = np.arange(10)
yvec = np.arange(5)
xy = meshgrid3d(xvec, yvec)
print xy
print xy[:,:,0] # = first output of np.meshgrid()
print xy[:,:,1] # = second output of np.meshgrid()

A few useful functions

We can now handle arrays in any way we like, but we still don't know any operations to perform on them, other than the basic arithmetic operations. Luckily numpy implements a large collection of common computations. This is a very short review of some useful functions.

In [ ]:
arr = np.random.rand(5)
print arr

# Sorting and shuffling
res = arr.sort() 
print arr # in-place!!!
print res

res = np.random.shuffle(arr) 
print arr # in-place!!!
print res
In [ ]:
# Min, max, mean, standard deviation
arr = np.random.rand(5)
print arr

mn = np.min(arr)
mx = np.max(arr)
print mn, mx

mu = np.mean(arr)
sigma = np.std(arr)
print mu, sigma
In [ ]:
# Some functions allow you to specify an axis to work along, in case of multidimensional arrays
arr2d = np.random.rand(3,5)
print arr2d

print np.mean(arr2d, axis=0)
print np.mean(arr2d, axis=1)
In [ ]:
# Trigonometric functions
# Note: Numpy works in radians, not degrees
arr = np.random.rand(5)
print arr

sn = np.sin(arr*2*np.pi)
cs = np.cos(arr*2*np.pi)
print sn
print cs
In [ ]:
# Exponents and logarithms
arr = np.random.rand(5)
print arr

xp = np.exp(arr)
print xp
print np.log(xp)
In [ ]:
# Rounding
arr = np.random.rand(5)
print arr

print arr*5
print np.round(arr*5)
print np.floor(arr*5)
print np.ceil(arr*5)

A complete list of all numpy functions can be found at the Numpy website. Or, a Google search for 'numpy tangent', 'numpy median' or similar will usually get you there as well.

A small exercise

Remember how you were asked to create a 2x3 array containing the column-wise and the row-wise means of a matrix above? We now have the knowledge to do this far more concisely. Use a concatenation function and a statistical function to obtain the same result!

In [ ]:
# EXERCISE 5: Make a better version of Exercise 3 with what you've just learned
arr = np.array([[1,2,3],[4,5,6],[7,8,9]], dtype='float')

# What we had:
print np.array([(arr[:,0]+arr[:,1]+arr[:,2])/3,(arr[0,:]+arr[1,:]+arr[2,:])/3])

# Now the new version:

A bit harder: The Gabor

A Gabor patch is the product of a sinusoidal grating and a Gaussian. If we ignore orientation and just create a vertically oriented Gabor, the grating luminance (bounded between -1 and 1) is created by:

$grating = \sin(xf)$

where $x$ is the $x$ coordinate of a pixel, and $f$ is the frequency of the sine wave (how many peaks per $2 \pi$ coordinate units). A simple 2D Gaussian luminance profile (bounded between 0 and 1) with its peak at coordinate $(0,0)$ and a variance of $1$ is given by:

$gaussian = e^{-(x^2+y^2)/2}$

where $x$ and $y$ are again the $x$ and $y$ coordinates of a pixel. The Gabor luminance (bounded between -1 and 1) for any pixel then equals:

$gabor = grating \times gaussian$

To visualize this, these are the grating, the Gaussian, and the Gabor, respectively (at maximal contrast):

Now you try to create a 100x100 pixel image of a Gabor. Use $x$ and $y$ coordinate values ranging from $-\pi$ to $\pi$, and a frequency of 10 for a good-looking result.

In [ ]:
# EXERCISE 6: Create a Gabor patch of 100 by 100 pixels
import numpy as np
import matplotlib.pyplot as plt

# Step 1: Define the 1D coordinate values
# Tip: use 100 equally spaced values between -np.pi and np.pi

# Step 2: Create the 2D x and y coordinate arrays
# Tip: use np.meshgrid()

# Step 3: Create the grating
# Tip: Use a frequency of 10

# Step 4: Create the Gaussian
# Tip: use np.exp() to compute a power of e

# Step 5: Create the Gabor

# Visualize your result
# (we will discuss how this works later)
plt.figure(figsize=(15,5))
plt.subplot(131)
plt.imshow(grating, cmap='gray')
plt.subplot(132)
plt.imshow(gaussian, cmap='gray')
plt.subplot(133)
plt.imshow(gabor, cmap='gray')
plt.show()

Boolean indexing

The dtype of a Numpy array can also be boolean, that is, True or False. It is then particularly convenient that a boolean array can be used to index another array of the same shape.

In [ ]:
# Check whether each element of a 2x2 array is greater than 0.5
arr = np.random.rand(2,2)
print arr
res = arr>0.5
print res
print '--'
 
# Analogously, check it against each element of a second 2x2 array
arr2 = np.random.rand(2,2)
print arr2
res = arr>arr2
print res
In [ ]:
# We can use these boolean arrays as indices into other arrays!

# Add 0.5 to any element smaller than 0.5
arr = np.random.rand(2,2)
print arr
res = arr<0.5
print res
arr[res] = arr[res]+0.5
print arr

# Or, shorter:
arr[arr<0.5] = arr[arr<0.5] + 0.5

# Or, even shorter:
arr[arr<0.5] += 0.5

While it is possible to do multiplication and addition on boolean values (this will convert them to ones and zeros), the proper way of doing elementwise boolean logic on arrays is to use the elementwise operators & (and), | (or), ^ (xor) and ~ (not). Note that the Python keywords and, or and not do not work elementwise on arrays.

In [ ]:
arr = np.array([[1,2,3],[4,5,6]])

# The elementwise boolean operators are: & | ~ ^
# Use parentheses around such expressions
res = (arr<4) & (arr>1)
print res
print '--'

res = (arr<2) | (arr==5)
print res
print '--'

res = (arr>3) & ~(arr==6)
print res
print '--'

res = (arr>3) ^ (arr<5)
print res
In [ ]:
# To convert boolean indices to normal integer indices, use the 'nonzero' function
print res
print np.nonzero(res)
print '--'

# Separate row and column indices
print np.nonzero(res)[0]
print np.nonzero(res)[1]
print '--'

# Or stack and transpose them to get index pairs
pairs = np.vstack(np.nonzero(res)).T
print pairs

Vectorizing a simulation

Numpy is excellent at making programs that involve iterative operations more efficient. This requires you to re-imagine the problem as an array of values, rather than as values that change with each loop iteration.

For instance, imagine the following situation: You throw a die continuously until you either encounter the sequence ‘123’ or ‘111’. Which one can be expected to occur sooner? This could be proven mathematically, but in practice it is often faster to do a simulation instead of working out an analytical solution.

We could just use two nested for-loops:

In [ ]:
import numpy as np

# We will keep track of the sum of first occurrence positions,
# as well as the number of positions entered into this sum.
# This way we can compute the mean.
sum111 = 0.
n111 = 0.
sum123 = 0.
n123 = 0.

for sim in range(5000):
    
    # Keep track of how far along we are in finding a given pattern
    d111 = 0
    d123 = 0
    
    for throw in range(2000):
        
        # Throw a die
        die = np.random.randint(1,7)
        
        # 111 case
        if d111==3:
            pass
        elif die==1 and d111==0:
            d111 = 1
        elif die==1 and d111==1:
            d111 = 2
        elif die==1 and d111==2:
            d111 = 3
            sum111 = sum111 + throw
            n111 = n111 + 1            
        else:
            d111 = 0
      
        # 123 case
        if d123==3:
            pass
        elif die==1:
            d123 = 1
        elif die==2 and d123==1:
            d123 = 2
        elif die==3 and d123==2:
            d123 = 3
            sum123 = sum123 + throw
            n123 = n123 + 1
        else:
            d123 = 0
        
        # Don't continue if both have been found
        if d111==3 and d123==3:
            break

# Compute the averages
avg111 = sum111/n111
avg123 = sum123/n123
print avg111, avg123   

# ...can you spot the crucial difference between both patterns?

However, this is inefficient and makes the code unwieldy. Vectorized solutions are usually preferred.

Try to run these 5000 simulations using Numpy, without any loops, and see whether the result is the same. Use a maximal die-roll sequence length of 2000, and just assume that both '123' and '111' will occur before the end of any sequence.

You will have to make use of 2D arrays and boolean logic. A quick solution to find the first occurrence in a boolean array is to use argmax - use the online Numpy documentation to find out how to use it.

Vectorizing problems is a crucial skill in scientific computing!
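
As a hint, this is how np.argmax() behaves on boolean arrays; since True counts as 1, it returns the index of the first True value:

In [ ]:
# argmax returns the index of the /first/ maximal value
print np.argmax(np.array([False, False, True, True]))

# An axis argument makes this work across rows or columns of a 2D array
print np.argmax(np.array([[False, True],[True, False]]), axis=1)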

In [ ]:
# EXERCISE 7: Vectorize the above program

# You get these lines for free...
import numpy as np
throws = np.random.randint(1,7,(5000,2000))
one = (throws==1)
two = (throws==2)
three = (throws==3)

# Find out where all the 111 and 123 sequences occur
find111 = 
find123 = 

# Then at what index they /first/ occur in each sequence
first111 = 
first123 = 

# Compute the average first occurrence location for both situations
avg111 = 
avg123 = 

# Print the result
print avg111, avg123

In this particular example, the nested for-loop solution does have the advantage that it can 'break' out of the die throwing sequence when the first occurrences of both patterns have been found, whereas Numpy will always generate complete sequences of 2000 rolls.

Remove the break statement in the first solution to see what the speed difference would have been if both programs were truly doing the same thing!
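
To quantify the difference, here is a minimal timing sketch using the standard time module; just wrap it around whichever version you want to measure:

In [ ]:
import time

start = time.time()
# ... the code you want to time goes here ...
print 'Elapsed:', time.time() - start, 'seconds'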

PIL: the Python Imaging Library

As vision scientists, we naturally work with images as stimuli. The Python Imaging Library will help us handle images, similar to the Image Processing Toolbox in MATLAB. Note that PIL itself has nowadays been superseded by Pillow, for which excellent documentation can be found here. The module to import is however still called 'PIL'. In practice, we will mostly use its Image module.

In [ ]:
from PIL import Image

Loading and showing images

The example image should be in the same directory as this notebook. Any color image will do, though; just change the filename string in the code to match the actual image filename.

In [ ]:
# Opening an image is simple enough:
# Construct an Image object with the filename as an argument
im = Image.open('python.jpg')

# It is now represented as an object of the 'JpegImageFile' type
print im

# There are some useful member variables we can inspect here
print im.format  # format in which the file was saved
print im.size    # pixel dimensions
print im.mode    # luminance/color model used

# We can even display it
# NOTE this is not perfect; meant for debugging
im.show()

If the im.show() call does not work well on your system, use the function below instead to show images in a separate window. Note that you must always close the window before you can continue using the notebook.

(Tkinter is a package for writing graphical user interfaces in Python; we will not discuss it here.)

In [ ]:
# Alternative quick-show method
from Tkinter import Tk, Button
from PIL import ImageTk

def alt_show(im):
    win = Tk()
    tkimg = ImageTk.PhotoImage(im)
    Button(image=tkimg).pack()
    win.mainloop()

alt_show(im)

Once we have opened the image in PIL, we can convert it to a Numpy object.

In [ ]:
# We can convert PIL images to an ndarray!
arr = np.array(im)
print arr.dtype   # uint8 = unsigned 8-bit integer (values 0-255 only)
print arr.shape   # Why do we have three layers?

# Let's make it a float-type for doing computations
arr = arr.astype('float')
print arr.dtype

# This opens up unlimited possibilities for image processing!
# For instance, let's make this a grayscale image, and add white noise
max_noise = 50
arr = np.mean(arr,-1)
noise = (np.random.rand(arr.shape[0],arr.shape[1])-0.5)*2
arr = arr + noise*max_noise

# Make sure we don't exceed the 0-255 limits of a uint8
arr[arr<0] = 0
arr[arr>255] = 255

The conversion back to PIL is easy as well

In [ ]:
# When going back to PIL, it's a good idea to explicitly 
# specify the right dtype and the mode.
# Because automatic conversions might mess things up
arr = arr.astype('uint8')
imn = Image.fromarray(arr, mode='L')
print imn.format
print imn.size
print imn.mode  # L = greyscale
imn.show() # or use alt_show() from above if show() doesn't work well for you

# Note that /any/ 2D (grayscale) or MxNx3 (RGB) numpy array filled with values between 0 and 255
# can be converted to an image object in this way

Resizing, rotating, cropping and converting

The main operations of the PIL Image module you will probably use are its resizing and conversion capabilities.

In [ ]:
im = Image.open('python.jpg')

# Make the image smaller
ims = im.resize((800,600))
ims.show()

# Or you could even make it larger
# The resample argument allows you to specify the method used
iml = im.resize((1280,1024), resample=Image.BILINEAR)
iml.show()
In [ ]:
# Rotation is similar (unit=degrees)
imr = im.rotate(10, resample=Image.BILINEAR, expand=False)
imr.show()

# If we want to lose the black corners, we can crop (unit=pixels)
imr = imr.crop((100,100,924,668))
imr.show()
In [ ]:
# 'convert' allows conversion between different color models
# The most important here is between 'L' (luminance) and 'RGB' (color)
imbw = im.convert('L')
imbw.show()
print imbw.mode

imrgb = imbw.convert('RGB')
imrgb.show()
print imrgb.mode

# Note that the grayscale conversion of PIL is more sophisticated
# than simply averaging the three layers in Numpy (it is a weighted average)

# Also note that the color information is effectively lost after converting to L
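
For reference, the Pillow documentation specifies that the 'L' conversion applies the ITU-R 601-2 luma transform, L = 0.299R + 0.587G + 0.114B. We can reproduce it in Numpy ourselves:

In [ ]:
# Reproduce PIL's weighted grayscale conversion in Numpy
arr = np.array(im, dtype='float')
lum = 0.299*arr[:,:,0] + 0.587*arr[:,:,1] + 0.114*arr[:,:,2]

# Compare to PIL's own result (only small rounding differences should remain)
print np.abs(lum - np.array(imbw, dtype='float')).max()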

Advanced

The ImageFilter module implements several types of filters to execute on any image. You can also define your own.

In [ ]:
from PIL import Image, ImageFilter

im = Image.open('python.jpg')
imbw = im.convert('L')

# Contour detection filter
imf = imbw.filter(ImageFilter.CONTOUR)
imf.show()

# Blurring filter
imf = imbw.filter(ImageFilter.GaussianBlur(radius=3))
imf.show()

Similarly, you can import the ImageDraw module to draw shapes and text onto an image.

In [ ]:
from PIL import Image, ImageDraw

im = Image.open('python.jpg')

# You need to attach a drawing object to the image first
imd = ImageDraw.Draw(im)

# Then you work on this object
imd.rectangle([10,10,100,100], fill=(255,0,0))
imd.line([(200,200),(200,600)], width=10, fill=(0,0,255))
imd.text([500,500], 'Python', fill=(0,255,0))

# The results are automatically applied to the Image object
im.show()

Saving

Finally, you can of course save these image objects back to a file on the disk.

In [ ]:
# PIL will figure out the file type by the extension
im.save('python.bmp')

# There are also further options, like compression quality (0-100)
im.save('python_bad.jpg', quality=5)

Exercise

We mentioned that the conversion to grayscale in PIL is not just a simple averaging of the RGB layers. Can you visualize as an image what the difference in result looks like, when comparing a simple averaging to a PIL grayscale conversion?

Pixels that are less luminant in the plain averaging method should be displayed in red, with a luminance depending on the size of the difference. Pixels that are more luminant when averaging in Numpy should similarly be displayed in green.

Hint: you will have to make use of Boolean indexing.

As an extra, try to maximize the contrast in your image, so that all values from 0-255 are used.

As a second extra, save the result as PNG files of three different sizes (large, medium, small), at respectively the full image resolution, half of the image size, and a quarter of the image size.

In [ ]:
# EXERCISE 8: Visualize the difference between the PIL conversion to grayscale, and a simple averaging of RGB
# Display pixels where the average is LESS luminant in red, and where it is MORE luminant in shades of green
# The luminance of these colors should correspond to the size of the difference
#
# Extra 1: Maximize the overall contrast in your image
#
# Extra 2: Save as three PNG files, of different sizes (large, medium, small)

Matplotlib

While PIL is useful for processing photographic images, it falls short for creating data plots and other kinds of schematic figures. Matplotlib offers a far more advanced solution for this, specifically through its pyplot module.

Quick plots

Common figures such as scatter plots, histograms and bar charts can be generated and manipulated very simply.

In [ ]:
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

# As data for our plots, we will use the pixel values of the image
# Open image, convert to an array
im = Image.open('python.jpg')
im = im.resize((400,300))
arr = np.array(im, dtype='float')

# Split the RGB layers and flatten them
R,G,B = np.dsplit(arr,3)
R = R.flatten()
G = G.flatten()
B = B.flatten()
In [ ]:
# QUICKPLOT 1: Correlation of luminances in the image

# This works if you want to be very quick:
# (xb means blue crosses, .g are green dots) 
plt.plot(R, B, 'xb')
plt.plot(R, G, '.g')
In [ ]:
# However we will take a slightly more disciplined approach here
# Note that Matplotlib wants colors expressed as 0-1 values instead of 0-255

# Create a square figure
plt.figure(figsize=(5,5))

# Plot both scatter clouds
# marker: self-explanatory
# linestyle: 'None' because we want no line
# color: RGB triplet with values 0-1
plt.plot(R, B, marker='x', linestyle='None', color=(0,0,0.6))
plt.plot(R, G, marker='.', linestyle='None', color=(0,0.35,0))

# Make the axis scales equal, and name them
plt.axis([0,255,0,255])
plt.xlabel('Red value')
plt.ylabel('Green/Blue value')

# Show the result
plt.show()
In [ ]:
# QUICKPLOT 2: Histogram of 'red' values in the image
plt.hist(R)
In [ ]:
# ...and now a nicer version

# Make a non-square figure
plt.figure(figsize=(7,5))

# Make a histogram with 25 red bins
# Here we simply use the abbreviation 'r' for red
plt.hist(R, bins=25, color='r')

# Set the X axis limits and label
plt.xlim([0,255])
plt.xlabel('Red value', size=16)

# Remove the Y ticks and labels by setting them to an empty list
plt.yticks([])

# Remove the top ticks by specifying the 'top' argument
plt.tick_params(top=False)

# Add two vertical lines for the mean and the median
plt.axvline(np.mean(R), color='g', linewidth=3, label='mean')
plt.axvline(np.median(R), color='b', linewidth=1, linestyle=':', label='median')

# Generate a legend based on the label= arguments
plt.legend(loc=2)

# Show the plot
plt.show()
In [ ]:
# QUICKPLOT 3: Bar chart of mean+std of RGB values
plt.bar([0,1,2],[np.mean(R), np.mean(G), np.mean(B)], yerr=[np.std(R), np.std(G), np.std(B)])
In [ ]:
# ...and now a nicer version

# Make a non-square-figure
plt.figure(figsize=(7,5))

# Plot the bars with various options
# x location where bars start, y height of bars
# yerr: data for error bars
# width: width of the bars
# color: surface color of bars
# ecolor: color of error bars ('k' means black)
plt.bar([0,1,2], [np.mean(R), np.mean(G), np.mean(B)], 
         yerr=[np.std(R), np.std(G), np.std(B)],
         width=0.75,
         color=['r','g','b'], 
         ecolor='k')

# Set the X-axis limits and tick labels
plt.xlim((-0.25,3.))
plt.xticks(np.array([0,1,2])+0.75/2, ['Red','Green','Blue'], size=16)

# Remove all X-axis ticks by setting their length to 0
plt.tick_params(length=0)

# Set a figure title
plt.title('RGB Color Channels', size=16)

# Show the figure
plt.show()

A full documentation of all these pyplot commands and options can be found here. If you use Matplotlib, you will be consulting this page a lot!

Saving to a file

Saving to a file is easy enough, using the savefig() function. However, there are some caveats, depending on the exact environment you are using. You have to use it BEFORE calling plt.show() and, in case of this notebook, within the same codebox. The reason for this is that Matplotlib is automatically deciding for you which plot commands belong to the same figure based on these criteria.

In [ ]:
# So, copy-paste this line into the box above, before the plt.show() command
plt.savefig('bar.png') 

# There are some further formatting options possible, e.g.
plt.savefig('bar.svg', dpi=300, bbox_inches='tight', pad_inches=1, facecolor=(0.8,0.8,0.8))

Visualizing arrays

Like PIL, Matplotlib is capable of displaying the contents of 2D Numpy arrays. The primary method is imshow()

In [ ]:
# A simple grayscale luminance map
# cmap: colormap used to display the values
plt.figure(figsize=(5,5))
plt.imshow(np.mean(arr,2), cmap='gray')
plt.show()

# Importantly, and contrary to PIL, imshow luminances are by default relative
# That is, the values are rescaled so that minimum and maximum span the full colormap (maximum contrast)
# Moreover, colormaps other than grayscale can be used
plt.figure(figsize=(5,5))
plt.imshow(np.mean(arr,2)+100, cmap='jet') # or hot, hsv, cool,...
plt.show() # as you can see, adding 100 didn't make a difference here
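
If you want absolute rather than relative luminances, you can fix the mapping yourself through the vmin and vmax arguments of imshow():

In [ ]:
# Fix the color scaling to absolute 0-255 values
plt.figure(figsize=(5,5))
plt.imshow(np.mean(arr,2)+100, cmap='gray', vmin=0, vmax=255)
plt.show() # now adding 100 does brighten the image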

Multi-panel figures

As we noted, Matplotlib is behind the scenes keeping track of what your current figure is. This is often convenient, but in some cases you want to keep explicit control of what figure you're working on. For this, we will have to make a distinction between Figure and Axes objects.

In [ ]:
# 'Figure' objects are returned by the plt.figure() command
fig = plt.figure(figsize=(7,5))
print type(fig)

# Axes objects are the /actual/ plots within the figure
# Create them using the add_axes() method of the figure object
# The input coordinates are relative (left, bottom, width, height)
ax0 = fig.add_axes([0.1,0.1,0.4,0.7], xlabel='The X Axis')
ax1 = fig.add_axes([0.2,0.2,0.5,0.2], axisbg='gray')
ax2 = fig.add_axes([0.4,0.5,0.4,0.4], projection='polar')
print type(ax0), type(ax1), type(ax2)

# This allows you to execute functions like savefig() directly on the figure object
# This resolves Matplotlib's confusion of what the current figure is, when using plt.savefig()
fig.savefig('fig.png')

# It also allows you to add text to the figure as a whole, across the different axes objects
fig.text(0.5, 0.5, 'splatter', color='r')

# The overall figure title can be set separate from the individual plot titles
fig.suptitle('What a mess', size=18)

# show() is actually a figure method as well
# It just gets 'forwarded' to what is thought to be the current figure if you use plt.show()
fig.show()

For a full list of the Figure methods and options, go here.

In [ ]:
# Create a new figure
fig = plt.figure(figsize=(15,10))

# As we saw, many of the axes properties can already be set at their creation
ax0 = fig.add_axes([0.,0.,0.25,0.25], xticks=(0.1,0.5,0.9), xticklabels=('one','thro','twee'))
ax1 = fig.add_axes([0.3,0.,0.25,0.25], xscale='log', ylim=(0,0.5))
ax2 = fig.add_axes([0.6,0.,0.25,0.25])

# Once you have the axes object though, there are further methods available
# This includes many of the top-level pyplot functions
# If you use for instance plt.plot(), Matplotlib is actually 'forwarding' this
# to an Axes.plot() call on the current Axes object
R.sort()
G.sort()
B.sort()
ax2.plot(R, color='r', linestyle='-', marker='None')  # plot directly to an Axes object of choice
plt.plot(G, color='g', linestyle='-', marker='None')  # plt.plot() plots to the current (here: last created) Axes object
ax2.plot(B, color='b', linestyle='-', marker='None')

# Other top-level pyplot functions are simply renamed to 'set_' functions here
ax1.set_xticks([])
plt.yticks([])

# Show the figure
fig.show()

The full methods and options of Axes can be found here.

Clearly, when making a multi-panel figure, we are actually creating a single Figure object with multiple Axes objects attached to it. Having to set the Axes sizes manually is annoying though. Luckily, the subplot() method can handle much of this automatically.

In [ ]:
# Create a new figure
fig = plt.figure(figsize=(15,5))

# Specify the LAYOUT of the subplots (rows,columns)
# as well as the CURRENT Axes you want to work on
ax0 = fig.add_subplot(231)

# Equivalent top-level call on the current figure
# It is also possible to create several subplots at once using plt.subplots()
ax1 = plt.subplot(232)      

# Optional arguments are similar to those of add_axes()
ax2 = fig.add_subplot(233, title='three') 

# We can use these Axes object as before
ax3 = fig.add_subplot(234)
ax3.plot(R, 'r-') 
ax3.set_xticks([])
ax3.set_yticks([])

# We skipped the fifth subplot, and create only the 6th
ax5 = fig.add_subplot(236, projection='polar')

# We can adjust the spacings afterwards
fig.subplots_adjust(hspace=0.4)

# And even make room in the figure for a plot that doesn't fit the grid
fig.subplots_adjust(right=0.5)
ax6 = fig.add_axes([0.55,0.1,0.3,0.8])

# Show the figure
fig.show()

Exercise: Function plots

Create a figure with a 2:1 aspect ratio, containing two subplots, one above the other. The TOP figure should plot one full cycle of a sine wave, that is $y=\sin(x)$. Use $0$ to $2\pi$ as values on the X axis. On the same scale, the BOTTOM figure should plot $y=\sin(x^2)$ instead. Tweak your figure until you think it looks good.

In [ ]:
# EXERCISE 9: Plot y=sin(x) and y=sin(x^2) in two separate subplots, one above the other
# Let x range from 0 to 2*pi

Finer figure control

If you are not satisfied with the output of these general plotting functions, despite all the options they offer, you can start fiddling with the details manually.

First, many figure elements can be manually added through top-level or Axes functions:

In [ ]:
# This uses the result of the exercise above
# You have to copy-paste it into the same code-box, before the fig.show()

# Add horizontal lines
ax0.axhline(0, color='g')
ax0.axhline(0.5, color='gray', linestyle=':')
ax0.axhline(-0.5, color='gray', linestyle=':')
ax1.axhline(0, color='g')
ax1.axhline(0.5, color='gray', linestyle=':')
ax1.axhline(-0.5, color='gray', linestyle=':')

# Add text to the plots
ax0.text(0.1,-0.9,'$y = sin(x)$', size=16)   # math mode for proper formula formatting!
ax1.text(0.1,-0.9,'$y = sin(x^2)$', size=16)

# Annotate certain points with a value
for x_an in np.linspace(0,2*np.pi,9):
    ax0.annotate(str(round(np.sin(x_an),2)),(x_an,np.sin(x_an)))
    
# Add an arrow (x,y,xlength,ylength)
ax0.arrow(np.pi-0.5,-0.5,0.5,0.5, head_width=0.1, length_includes_head=True)

Second, all basic elements like lines, polygons and the individual axis lines are customizable objects in their own right, attached to a specific Axes object. They can be retrieved, manipulated, created from scratch, and added to existing Axes objects.

In [ ]:
# This uses the result of the exercise above
# You have to copy-paste it into the same code-box, before the fig.show()

# For instance, fetch the X axis
# XAxis objects have their own methods
xax = ax1.get_xaxis()
print type(xax)

# These methods allow you to fetch the even smaller building blocks
# For instance, tick-lines are Line2D objects attached to the XAxis
xaxt = xax.get_majorticklines()
print len(xaxt)

# Of which you can fetch AND change the properties
# Here we change just one tickline into a cross
print xaxt[6].get_color()
xaxt[6].set_color('g')
xaxt[6].set_marker('x')
xaxt[6].set_markersize(10)
In [ ]:
# This uses the result of the exercise above
# You have to copy-paste it into the same code-box, before the fig.show()

# Another example: fetch the lines in the plot
# Change the color, change the marker, and mark only every 100 points for one specific line
ln = ax0.get_lines()
print ln
ln[0].set_color('g')
ln[0].set_marker('o')
ln[0].set_markerfacecolor('b')
ln[0].set_markevery(100)

# Finally, let's create a graphic element from scratch, that is not available as a top-level pyplot function
# And then attach it to existing Axes
# NOTE: we need to import something before we can create the ellipse like this. What should we import?
ell = matplotlib.patches.Ellipse((np.pi,0), 1., 1., color='r')
ax0.add_artist(ell)
ell.set_hatch('//')
ell.set_edgecolor('black')
ell.set_facecolor((0.9,0.9,0.9))

Exercise: Add regression lines

Take the scatterplot from the first example, and manually add a regression line to both the R-G and the R-B comparisons. Try not to use the plot() function for the regression line, but manually create a Line2D object instead, and attach it to the Axes.

Useful functions:

  • np.polyfit(x,y,1) performs a linear regression, returning the slope and the intercept
  • plt.gca() retrieves the current Axes object
  • matplotlib.lines.Line2D(x,y) can create a new Line2D object from x and y coordinate vectors
In [ ]:
# EXERCISE 10: Add regression lines
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
import matplotlib.lines as lines

# Open image, convert to an array
im = Image.open('python.jpg')
im = im.resize((400,300))
arr = np.array(im, dtype='float')

# Split the RGB layers and flatten them
R,G,B = np.dsplit(arr,3)
R = R.flatten()
G = G.flatten()
B = B.flatten()

# Do the plotting
plt.figure(figsize=(5,5))
plt.plot(R, B, marker='x', linestyle='None', color=(0,0,0.6))
plt.plot(R, G, marker='.', linestyle='None', color=(0,0.35,0))

# Tweak the plot
plt.axis([0,255,0,255])
plt.xlabel('Red value')
plt.ylabel('Green/Blue value')

# Fill in your code...

# Show the result
plt.show()

Scipy

Scipy is a large library of scientific functions, covering for instance numerical integration, linear algebra, Fourier transforms, and interpolation algorithms. If you can't find the equivalent of your favorite MATLAB function in any of the previous three packages, Scipy is a good place to look. A full list of all submodules can be found here.

We will pick two useful modules from SciPy: stats and fftpack

I will not give a lot of explanation here. I'll leave it up to you to navigate through the documentation, and find out how these functions work.

Statistics

In [ ]:
import numpy as np
import scipy.stats as stats

# Generate random numbers between 0 and 1
data = np.random.rand(30)

# Do a t-test against the H0 that the mean is 0.4
t,p = stats.ttest_1samp(data,0.4)
print p

# Generate another sample of random numbers, with mean 0.4
data2 = np.random.rand(30)-0.1

# Do a t-test that these have the same mean
t,p = stats.ttest_ind(data, data2)
print p
In [ ]:
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats

# Simulate the size of the F statistic when comparing three conditions
# Given a constant n, and an increasing true effect size. 
true_effect = np.linspace(0,0.5,500)
n = 100
Fres = []

# Draw random normally distributed samples for each condition, and do a one-way ANOVA
for eff in true_effect:
    c1 = stats.norm.rvs(0,1,size=n)
    c2 = stats.norm.rvs(eff,1,size=n)
    c3 = stats.norm.rvs(2*eff,1,size=n)
    F,p = stats.f_oneway(c1,c2,c3)
    Fres.append(F)

# Create the plot
plt.figure()
plt.plot(true_effect,Fres,'r*-')
plt.xlabel('True Effect')
plt.ylabel('F')
plt.show()
In [ ]:
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats

# Compute the pdf and cdf of normal distributions, with increasing sd's
# Then plot them in different colors
# (of course, many other distributions are also available)
x = np.linspace(-5,5,1000)
sds = np.linspace(0.25,2.5,10)
cols = np.linspace(0.15,0.85,10)

# Create the figure
fig = plt.figure(figsize=(10,5))
ax0 = fig.add_subplot(121)
ax1 = fig.add_subplot(122)

# Compute the densities, and plot them
for i,sd in enumerate(sds):
    y1 = stats.norm.pdf(x,0,sd)
    y2 = stats.norm.cdf(x,0,sd)
    ax0.plot(x,y1,color=cols[i]*np.array([1,0,0]))
    ax1.plot(x,y2,color=cols[i]*np.array([0,1,0]))

# Show the figure
plt.show()

The stats module of SciPy contains more statistical distributions and further tests such as a Kruskal-Wallis test, Wilcoxon test, a Chi-Square test, a test for normality, and so forth. A full listing of functions is found here.

For serious statistical models however, you should be looking at the statsmodels package, or the rpy interfacing package, allowing R to be called from within Python.

Fast Fourier Transform

FFT is commonly used to process or analyze images (as well as sound). Numpy has an FFT package, numpy.fft, but SciPy has its own set of functions as well in scipy.fftpack. Both are very similar; you can use whichever package you like.

I will assume that you are familiar with the basic underlying theory, that is, that any periodic function can be described as a sum of sine waves of different frequencies, amplitudes and phases. A Fast Fourier Transform allows you to do this very quickly for equally spaced samples from the function, returning a finite set of n sinusoidal components, with n equal to the number of samples, ordered by frequency.

Let's do this for a simple 1D function.

In [ ]:
import numpy as np
import scipy.fftpack as fft
import matplotlib.pyplot as plt

# The original data: a step function
data = np.zeros(200, dtype='float')
data[25:100] = 1

# Decompose into sinusoidal components
# The result is a series of complex numbers as long as the data itself
res = fft.fft(data)

# FREQUENCY is implied by the ordering, but can be retrieved as well
# It increases from 0 to the Nyquist frequency (0.5), followed by its reversed negative counterpart
# Note: in case of real input data, the FFT results will be symmetrical
freq = fft.fftfreq(data.size)

# AMPLITUDE is given by np.abs() of the complex numbers
amp = np.abs(res)

# PHASE is given by np.angle() of the complex numbers
phase = np.angle(res)

# We can plot each component separately
plt.figure(figsize=(15,5))
plt.plot(data, 'k-', lw=3)
xs = np.linspace(0,data.size-1,data.size)*2*np.pi
for i in range(len(res)):   
    ys = np.cos(xs*freq[i]+phase[i]) * (amp[i]/data.size)
    plt.plot(ys.real, 'r:', lw=1)
plt.show()

# Can you then plot what the SUM of all these components looks like?

Of course, there is a short-hand function available for reconstructing the original input from the FFT result:

In [ ]:
# ifft = inverse fft
reconstructed = fft.ifft(res)

plt.figure(figsize=(15,5))
plt.plot(data,'k-', lw=3)
plt.plot(reconstructed.real, 'r:', lw=3) # take the real part for plotting
plt.show()

# Note that /some/ information has been lost, but very little
print 'Total deviation:', np.sum(np.abs(data-reconstructed))

This allows us to perform operations in the frequency domain, like applying a high-pass filter:

In [ ]:
# Set components with low frequencies equal to 0
resf = res.copy()
mask = np.abs(fft.fftfreq(data.size)) < 0.25
resf[mask] = 0

# ifft the modified components
filtered = fft.ifft(resf)

# And the result is high-pass filtered
plt.figure(figsize=(15,5))
plt.plot(data,'k-', lw=3)
plt.plot(filtered.real, 'r:', lw=3) # take the real part for plotting
plt.show()

The exact same logic can be applied to 2D images, using the fft2 and ifft2 functions. For instance, let us equalize the amplitude spectrum of an image, so that all frequencies are equally strong.

In [ ]:
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
import scipy.fftpack as fft

im = Image.open('python.jpg').convert('L')
arr = np.array(im, dtype='float')
res = fft.fft2(arr)

# Just set all amplitudes to one
new_asp = np.ones(res.shape, dtype='float')

# Then recombine the complex numbers
real_part = new_asp * np.cos(np.angle(res))
imag_part = new_asp * np.sin(np.angle(res))
eq = real_part+(imag_part*1j)

# And do the ifft
arr_eq = fft.ifft2(eq).real

# Show the result
# Clearly, the high frequencies dominate now
plt.figure()
plt.imshow(arr_eq, cmap='gray')
plt.show()

Note that in practice, it is often recommended to use image dimensions that are a power of 2, for efficiency. The fft functions allow you to automatically pad images to a given size; at the end you can just crop the result to obtain the original image size again.
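
For instance, the fft2() function of scipy.fftpack takes an optional shape argument that zero-pads (or truncates) the input for you. Here is a minimal sketch, re-using the grayscale array from above and assuming the image is no larger than 1024x1024 pixels:

In [ ]:
# Pad the image to 1024x1024 (a power of 2) during the FFT itself
res = fft.fft2(arr, shape=(1024,1024))
print res.shape

# After the inverse transform, crop back to the original image size
arr_back = fft.ifft2(res).real[:arr.shape[0], :arr.shape[1]]
print arr_back.shape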