Optimization Techniques in Python
Timing and Benchmarking Basics
Since we're not emphasizing time complexity analysis in this course, we'll focus on empirical (hands-on) methods for measuring actual code performance. One of the simplest ways to measure the performance of a Python code snippet is the built-in time.time() function. It returns the current time in seconds since the epoch (the system's reference point for time). By calling time.time() before and after a piece of code, you can take the difference to see how long it takes to execute.
Here's an example:
```python
import time

# Record the start time
start_time = time.time()

# Code you want to measure
result = [x**2 for x in range(1000000)]

# Record the end time
end_time = time.time()

# Calculate the difference to get the execution time
execution_time = end_time - start_time
print(f'Execution time: {execution_time} seconds')
```
While using time.time() is simple and effective for a rough estimate, it has certain limitations:
- Low resolution: the precision of time.time() varies depending on the operating system, which can lead to inaccurate results for short operations;
- Background interference: it measures wall-clock time, so other system processes running in the background can distort the measurement;
- No built-in repetition: for more accurate measurements we often need to run the same code multiple times and average the results, which time.time() doesn't handle by itself.
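The last two limitations can be worked around by hand. The sketch below (the helper name and repetition count are illustrative, not part of the course material) uses time.perf_counter(), a higher-resolution timer than time.time(), and averages over several manual runs to reduce noise:

```python
import time

def average_time(func, repeats=5):
    """Run func() several times and return the average wall-clock duration."""
    total = 0.0
    for _ in range(repeats):
        start = time.perf_counter()  # high-resolution timer
        func()
        total += time.perf_counter() - start
    return total / repeats

avg = average_time(lambda: [x**2 for x in range(100_000)])
print(f'Average over 5 runs: {avg:.6f} seconds')
```

This is essentially what timeit, introduced next, does for you, with extra care taken to keep the measurement clean.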
Advantages of Using timeit
The timeit module is a more advanced tool designed to overcome the limitations of time.time() and provide a reliable way to measure the execution time of small code snippets, often referred to as micro-benchmarking. The main advantages of timeit are:
- High precision: timeit uses time.perf_counter() under the hood, a high-resolution timer that includes time spent during sleep and waiting for I/O, making it more accurate for short intervals than time.time();
- Automatic repetition: timeit runs the code many times in a loop, which mitigates the effects of background processes and provides a more reliable representation of code performance;
- Minimal overhead: timeit runs in a clean environment, temporarily disabling garbage collection to ensure that measurements focus on the code being benchmarked without interference from memory management operations.
Here's an example of using timeit for micro-benchmarking:

```python
import timeit

# Code snippet to test
code_snippet = 'result = [x**2 for x in range(1000000)]'

# Running timeit to measure execution time
iterations = 30
execution_time = timeit.timeit(code_snippet, number=iterations)

print(f'Average Execution Time: {execution_time / iterations} seconds')
```
In this example, timeit.timeit() runs the code specified as a string (the code_snippet variable) 30 times (set by the number parameter) and returns the total execution time across all 30 runs. Dividing that total by the number of iterations gives the average execution time of a single run.
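When run-to-run noise matters, the same module also offers timeit.repeat(), which performs several independent timing runs and returns a list of totals. A common practice is to report the minimum, since it is least affected by background activity. A minimal sketch (the smaller range and the specific repeat/number values are illustrative choices to keep the benchmark quick):

```python
import timeit

code_snippet = 'result = [x**2 for x in range(10_000)]'

# Five independent runs, each executing the snippet 20 times
times = timeit.repeat(code_snippet, repeat=5, number=20)

# The minimum total is least distorted by background processes
best = min(times) / 20
print(f'Best average per run: {best:.6f} seconds')
```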
Choosing the Number of Iterations
Choosing the number of iterations depends on the complexity of the code you're benchmarking and the precision you require in the timing results. Running your code with varying iteration counts allows you to assess stability in the results; if execution times are consistent, you've likely found an optimal iteration count.
For very fast code snippets (milliseconds or less), aim for 1000+ iterations to get reliable averages. For moderately timed code (a few milliseconds to seconds), 100 to 500 iterations should be sufficient. For longer-running code (several seconds or more), 10 to 50 iterations will usually provide a good balance between accuracy and time spent benchmarking.
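Rather than guessing, you can also let timeit choose the count for you: Timer.autorange() keeps increasing the loop count until the total measured time reaches at least 0.2 seconds, then returns the count it settled on along with the elapsed time. A minimal sketch (the snippet being timed is an illustrative choice):

```python
import timeit

timer = timeit.Timer('result = [x**2 for x in range(10_000)]')

# autorange() picks a loop count so the total time is at least ~0.2 s
number, total_time = timer.autorange()

print(f'{number} loops, {total_time / number:.6f} seconds per loop')
```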