Optimization Techniques in Python
Measuring Function Performance
While measuring string-based code snippets may sometimes be enough, this approach lacks flexibility. Using timeit with functions allows you to measure performance more effectively, and decorators make it easy to benchmark multiple functions in a clean, modular way.
Using timeit with Functions
timeit can measure the performance of functions directly by passing a callable (i.e., a function) instead of a string. This is more flexible and readable, especially when you want to benchmark complex functions.
Let's take a look at an example:
import timeit
import numpy as np

# Function to test
def generate_squares():
    return [x**2 for x in range(1000000)]

# Measure time using a callable (function)
iterations = 30
execution_time = timeit.timeit(generate_squares, number=iterations)

# Calculate average time per run
average_time = execution_time / iterations
print(f'Average execution time: {average_time:.6f} seconds')
We pass generate_squares as the callable to be timed by timeit.timeit(). As before, the number parameter specifies how many times to run the function (30 times), and the average execution time is calculated by dividing the total time by the number of runs.
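Note that timeit.timeit() calls the function with no arguments. If the function you want to time takes parameters, a common pattern is to wrap the call in a zero-argument lambda. Here is a small sketch, assuming a parameterized variant of generate_squares purely for illustration:
import timeit

# Hypothetical variant that takes the range size as a parameter
def generate_squares(n):
    return [x**2 for x in range(n)]

iterations = 30

# Wrap the call in a zero-argument lambda so timeit can invoke it repeatedly
execution_time = timeit.timeit(lambda: generate_squares(1000000), number=iterations)

average_time = execution_time / iterations
print(f'Average execution time: {average_time:.6f} seconds')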
This method is cleaner and more efficient for benchmarking real functions, and it avoids the overhead of evaluating code from a string.
Let's now benchmark this code snippet using a string-based approach:
import timeit
import numpy as np

code_snippet = 'np.array([x ** 2 for x in range(1000000)])'

iterations = 30
execution_time = timeit.timeit(code_snippet, number=iterations)

average_time = execution_time / iterations
print(f'Average execution time: {average_time:.6f} seconds')
Oops, we got the following error: NameError: name 'np' is not defined. The error occurs because timeit.timeit() runs the code in isolation, so it doesn’t have access to numpy unless you explicitly import it in the setup argument.
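A minimal sketch of that fix, passing the import through the setup parameter, might look like this:
import timeit

code_snippet = 'np.array([x ** 2 for x in range(1000000)])'

iterations = 30

# The setup code runs once before the timed iterations,
# making numpy available inside timeit's isolated namespace
execution_time = timeit.timeit(code_snippet, setup='import numpy as np', number=iterations)

average_time = execution_time / iterations
print(f'Average execution time: {average_time:.6f} seconds')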
Using functions is cleaner, reduces errors, and doesn’t require managing external imports through a setup string.
Enhancing Performance Measurement with Decorators
It's common to apply the same timing logic to multiple functions, and using a decorator is an excellent way to achieve this without duplicating the code.
Each time a function is called, it executes as usual, but with seamless benchmarking added. Decorators offer several advantages: they enhance reusability by applying the same logic across multiple functions, improve clarity by separating timing logic from core functionality, and allow for customization, such as adjusting the number of iterations or adding additional metrics for performance analysis.
Here’s how you can create a reusable timeit decorator:
import timeit

# Decorator to time the execution of a function
def timeit_decorator(number=1):
    def decorator(func):
        def wrapper(*args, **kwargs):
            # Measure time with timeit
            total_time = timeit.timeit(lambda: func(*args, **kwargs), number=number)
            average_time = total_time / number
            print(f'{func.__name__} executed in {average_time:.6f} seconds (average over {number} runs)')
            return func(*args, **kwargs)
        return wrapper
    return decorator

# Function to measure
@timeit_decorator(number=30)
def generate_squares():
    return [x**2 for x in range(1000000)]

# Calling the decorated function
squares_array = generate_squares()
Now, whenever you call a function decorated with @timeit_decorator, its performance will be automatically measured, and the results will be displayed.
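To illustrate the reusability this enables, the sketch below applies the same decorator to a second function; generate_cubes is just a hypothetical example, and the iteration count can differ per function:
# Reusing the same decorator on another function, with a different iteration count
@timeit_decorator(number=10)
def generate_cubes():
    return [x**3 for x in range(1000000)]

cubes = generate_cubes()      # prints the timing report for generate_cubes
squares = generate_squares()  # prints the timing report for generate_squares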