
Parallelizing A Python Code For Different Instances Of A Class

My question is about parallelizing Python code: I want to know how to run a function for different instances of a class in parallel to decrease the runtime. What I have: I have

Solution 1:

The pattern that your toy code follows suggests mapping a wrapper function over the list of instances using a thread pool or process pool. However, the small number of instances and the basic arithmetic operation you want to apply to each one suggest that the overhead of parallelizing this would outweigh any potential benefit.

Whether it makes sense to do this depends on the number of instances and the time each of those member functions takes to run. So do at least some basic profiling of your code before you try to parallelize it, and find out whether the task you are attempting to parallelize is CPU-bound or I/O-bound.
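As a minimal profiling sketch, you can time a serial run first and use that as a baseline before reaching for a pool. The `Worker` class and its `compute` method below are hypothetical stand-ins for your real class:

```python
import time

class Worker:
    """Hypothetical stand-in for the class from the question."""
    def compute(self):
        # a toy CPU-bound task standing in for the real member function
        return sum(i * i for i in range(1000))

instances = [Worker() for _ in range(5)]

start = time.perf_counter()
results = [w.compute() for w in instances]
elapsed = time.perf_counter() - start
print(f"serial run: {elapsed:.6f}s for {len(instances)} instances")
```

If the total serial time is in the microsecond-to-millisecond range, pool startup and task dispatch will likely dominate and parallelizing is not worth it.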

Here's an example that should demonstrate the basic pattern:

# use multiprocessing.Pool for a process-based worker pool
# use multiprocessing.dummy.Pool for a thread-based worker pool
from multiprocessing.dummy import Pool

# make up a list of instances
l = [list() for i in range(5)]

# function that calls the method on each instance
def foo(x):
    x.append(20)
    return x

# actually call the function and retrieve the list of results
p = Pool(3)
results = p.map(foo, l)
print(results)

Obviously you need to fill in the blanks to adapt this to your real code.

For further reading, see the `multiprocessing` documentation.

Also maybe have a look at `concurrent.futures`.
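A minimal sketch of the same pattern using `concurrent.futures`, with the same toy list-of-lists setup as above (swap in `ProcessPoolExecutor` for CPU-bound work):

```python
from concurrent.futures import ThreadPoolExecutor

def foo(x):
    x.append(20)
    return x

instances = [list() for _ in range(5)]

# executor.map mirrors Pool.map: it applies foo to each
# instance and yields results in the original order
with ThreadPoolExecutor(max_workers=3) as executor:
    results = list(executor.map(foo, instances))

print(results)
```

The `with` block shuts the pool down cleanly once all tasks finish, which saves the explicit `close`/`join` bookkeeping that `multiprocessing.Pool` needs.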

If you really want this to run in parallel, also consider porting your calculations to a GPU (though you might need to move away from pure Python for that).
