Neural Networks with PyTorch
Mathematical Operations with Tensors
We'll now explore how to perform mathematical operations with PyTorch tensors. These operations form the basis for building and training neural networks, so understanding them is essential.
Element-wise Operations
Element-wise operations are applied to each pair of corresponding elements individually. Operations such as addition, subtraction, multiplication, and division work similarly to how they do in NumPy:
```python
import torch

a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5, 6])

# Element-wise addition
addition_result = a + b
print(f"Addition: {addition_result}")

# Element-wise subtraction
subtraction_result = a - b
print(f"Subtraction: {subtraction_result}")

# Element-wise multiplication
multiplication_result = a * b
print(f"Multiplication: {multiplication_result}")

# Element-wise division
division_result = a / b
print(f"Division: {division_result}")
```
Matrix Operations
PyTorch also supports matrix multiplication and the dot product, both of which are performed using the torch.matmul() function:
```python
import torch

x = torch.tensor([[1, 2], [3, 4]])
y = torch.tensor([[5, 6], [7, 8]])

# Matrix multiplication
z = torch.matmul(x, y)
print(f"Matrix multiplication: {z}")
```
You can also use the @ operator for matrix multiplication:
```python
import torch

x = torch.tensor([[1, 2], [3, 4]])
y = torch.tensor([[5, 6], [7, 8]])

z = x @ y
print(f"Matrix multiplication with @: {z}")
```
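When both arguments are 1-D tensors, torch.matmul() computes the dot product mentioned above. A minimal sketch (the tensor values are illustrative, not from the course):

```python
import torch

# For 1-D tensors, torch.matmul computes the dot product
v1 = torch.tensor([1, 2, 3])
v2 = torch.tensor([4, 5, 6])

dot_result = torch.matmul(v1, v2)  # 1*4 + 2*5 + 3*6 = 32
print(f"Dot product: {dot_result}")

# torch.dot gives the same result for 1-D tensors
print(f"torch.dot: {torch.dot(v1, v2)}")
```

Note that the result is a zero-dimensional tensor rather than a Python number; call .item() if you need a plain scalar.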
Aggregation Operations
Aggregation operations compute summary statistics from tensors, such as the sum, mean, maximum, and minimum, each of which has a corresponding method:
```python
import torch

tensor = torch.tensor([[1, 2, 3], [4, 5, 6]]).float()

# Sum of all elements
print(f"Sum: {tensor.sum()}")
# Mean of all elements
print(f"Mean: {tensor.mean()}")
# Maximum value
print(f"Max: {tensor.max()}")
# Minimum value
print(f"Min: {tensor.min()}")
```
Aggregation methods also have two optional parameters:

- dim: specifies the dimension (similar to axis in NumPy) along which the operation is applied. By default, if dim is not provided, the operation is applied to all elements of the tensor;
- keepdim: a boolean flag (False by default). If set to True, the reduced dimension is retained as a size-1 dimension in the output, preserving the original number of dimensions.
```python
import torch

tensor = torch.tensor([[1, 2, 3], [4, 5, 6]])

# Aggregation operations along specific dimensions
print(f"Sum along rows (dim=1): {tensor.sum(dim=1)}")
print(f"Sum along columns (dim=0): {tensor.sum(dim=0)}")

# Aggregation with keepdim=True
print(f"Sum along rows with keepdim (dim=1): {tensor.sum(dim=1, keepdim=True)}")
print(f"Sum along columns with keepdim (dim=0): {tensor.sum(dim=0, keepdim=True)}")
```
Broadcasting
Broadcasting allows operations between tensors of different shapes by automatically expanding dimensions. If you need a refresher on broadcasting, you can find more details here.
```python
import torch

a = torch.tensor([[1, 2, 3]])  # Shape (1, 3)
b = torch.tensor([[4], [5]])   # Shape (2, 1)

# Broadcasting addition: the result has shape (2, 3)
c = a + b
print(f"Broadcasted addition: {c}")
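You can check whether two shapes are broadcast-compatible without performing the operation. A small sketch using torch.broadcast_shapes (the shapes chosen here are illustrative):

```python
import torch

# torch.broadcast_shapes applies the same rules as the addition above:
# dimensions are compared right to left, and each pair must be equal
# or one of them must be 1
result_shape = torch.broadcast_shapes((1, 3), (2, 1))
print(result_shape)

# Incompatible shapes raise a RuntimeError
try:
    torch.broadcast_shapes((2, 3), (4, 3))
except RuntimeError as err:
    print(f"Cannot broadcast: {err}")
```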
Useful Mathematical Functions
PyTorch also provides various mathematical functions, such as exponentials, logarithms, and trigonometric functions:
```python
import torch

tensor = torch.tensor([1.0, 2.0, 3.0])

# Exponential (e^x)
print(f"Exponent: {tensor.exp()}")
# Natural logarithm
print(f"Logarithm: {tensor.log()}")
# Sine
print(f"Sine: {tensor.sin()}")
```