Why isn't multi-threading making my Python code faster? How can I fix it?
Python restricts access to the interpreter with a Global Interpreter Lock, referred to as the GIL, to keep the interpreter's internals thread-safe. Unfortunately, this design decision allows only one thread to interpret Python code at a time, even on a machine capable of running multiple threads in parallel. 
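To see the effect for yourself, here is a quick sketch (the function name count_down and the workload size are just illustrative) that runs the same CPU-bound work sequentially and then on two threads. Because of the GIL, the threads take turns on the interpreter, so the threaded run is typically no faster:

```python
import threading
import time

def count_down(n):
    # Pure-Python, CPU-bound loop: it holds the GIL while it runs
    while n > 0:
        n -= 1

N = 5_000_000

# Sequential baseline: two calls back to back
start = time.perf_counter()
count_down(N)
count_down(N)
sequential = time.perf_counter() - start

# Two threads doing the same total amount of work
start = time.perf_counter()
threads = [threading.Thread(target=count_down, args=(N,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - start

print(f"sequential: {sequential:.2f}s, threaded: {threaded:.2f}s")
```

On a typical CPython build the threaded timing is no better than the sequential one, and is often slightly worse due to lock contention.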

If you wish to run Python code over a set of data, and the data can be broken up into independent segments (a technique known as data partitioning), look into Python's multiprocessing library, which sidesteps the GIL by using separate processes, each with its own interpreter, instead of threads. 

As an example, consider a function foo that takes one argument, arg1, and suppose you want to run foo on several values: val1, val2, and val3.

def foo(arg1):
  pass  # do something

if __name__ == '__main__':
  foo(val1)
  foo(val2)
  foo(val3)

To run foo across multiple worker processes in parallel, you can use the multiprocessing Pool:

from multiprocessing import Pool

def foo(arg1):
  pass  # do something

if __name__ == '__main__':
  with Pool() as p:  # by default, one worker process per CPU core
    # map distributes the values across the worker processes
    p.map(foo, [val1, val2, val3])
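As a concrete, runnable version of the same pattern (the square function and the input list are made up for illustration):

```python
from multiprocessing import Pool

def square(x):
    # Stand-in for real per-item work
    return x * x

if __name__ == '__main__':
    # Pool() defaults to one worker process per CPU core
    with Pool() as p:
        results = p.map(square, [1, 2, 3, 4])
    print(results)  # [1, 4, 9, 16]
```

Pool.map splits the input list among the workers and returns the results in the original order, just like the built-in map.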

The multiprocessing library also provides other APIs for parallel programming, such as Process for managing individual worker processes and Queue for passing data between them. Would you like to know more?
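For instance, Pool.apply_async submits tasks one at a time and returns AsyncResult handles you can collect later. A minimal sketch, using a made-up cube function:

```python
from multiprocessing import Pool

def cube(x):
    # Stand-in for real per-item work
    return x ** 3

if __name__ == '__main__':
    with Pool() as p:
        # Submit each task individually; apply_async returns immediately
        pending = [p.apply_async(cube, (n,)) for n in range(4)]
        # .get() blocks until that task's result is ready
        results = [r.get() for r in pending]
    print(results)  # [0, 1, 8, 27]
```

Unlike map, apply_async lets you submit tasks with different arguments (or even different functions) and retrieve each result as soon as it finishes.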