Neural Networks with PyTorch
Mathematical Operations with Tensors

We'll now explore how to perform mathematical operations with PyTorch tensors. These operations form the basis for building and training neural networks, so understanding them is essential.

Element-wise Operations

Element-wise operations are applied to each element in the tensor individually. These operations, such as addition, subtraction, and division, work similarly to how they do in NumPy:

```python
import torch

a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5, 6])

# Element-wise addition
addition_result = a + b
print(f"Addition: {addition_result}")

# Element-wise subtraction
subtraction_result = a - b
print(f"Subtraction: {subtraction_result}")

# Element-wise multiplication
multiplication_result = a * b
print(f"Multiplication: {multiplication_result}")

# Element-wise division
division_result = a / b
print(f"Division: {division_result}")
```
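PyTorch also provides in-place variants of these element-wise operations, named with a trailing underscore (for example, `add_`). This detail isn't covered on this page, but a minimal sketch looks like this:

```python
import torch

a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5, 6])

# In-place addition: modifies a directly instead of allocating a new tensor
a.add_(b)
print(f"After a.add_(b): {a}")  # tensor([5, 7, 9])
```

In-place operations save memory, but they should be used with care during training, since they can interfere with gradient computation.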

Matrix Operations

PyTorch also supports matrix multiplication and the dot product, both of which are performed with the torch.matmul() function:

```python
import torch

x = torch.tensor([[1, 2], [3, 4]])
y = torch.tensor([[5, 6], [7, 8]])

# Matrix multiplication
z = torch.matmul(x, y)
print(f"Matrix multiplication: {z}")
```
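When both arguments to torch.matmul() are 1-D tensors, it computes the dot product mentioned above. A minimal sketch:

```python
import torch

v1 = torch.tensor([1, 2, 3])
v2 = torch.tensor([4, 5, 6])

# With two 1-D tensors, torch.matmul() returns their dot product
dot = torch.matmul(v1, v2)
print(f"Dot product: {dot}")  # 1*4 + 2*5 + 3*6 = 32
```

The dedicated torch.dot() function produces the same result for 1-D tensors.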

You can also use the @ operator for matrix multiplication:

```python
import torch

x = torch.tensor([[1, 2], [3, 4]])
y = torch.tensor([[5, 6], [7, 8]])

z = x @ y
print(f"Matrix multiplication with @: {z}")
```

Aggregation Operations

Aggregation operations compute summary statistics from tensors, such as sum, mean, maximum, and minimum values, which can be calculated using their respective methods.

```python
import torch

tensor = torch.tensor([[1, 2, 3], [4, 5, 6]]).float()

# Sum of all elements
print(f"Sum: {tensor.sum()}")
# Mean of all elements
print(f"Mean: {tensor.mean()}")
# Maximum value
print(f"Max: {tensor.max()}")
# Minimum value
print(f"Min: {tensor.min()}")
```
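Note that the snippet above converts the tensor with .float() before aggregating: mean() is only defined for floating-point (and complex) dtypes, so calling it on an integer tensor raises an error. A short sketch of this behavior:

```python
import torch

int_tensor = torch.tensor([[1, 2, 3], [4, 5, 6]])

# mean() requires a floating-point dtype; an integer tensor raises RuntimeError
try:
    int_tensor.mean()
except RuntimeError as e:
    print(f"mean() on an integer tensor fails: {e}")

# Casting first makes the call valid
print(f"mean() after casting: {int_tensor.float().mean()}")
```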

Aggregation methods also have two optional parameters:

  • dim: specifies the dimension (similarly to axis in NumPy) along which the operation is applied. By default, if dim is not provided, the operation is applied to all elements of the tensor;
  • keepdim: a boolean flag (False by default). If set to True, the reduced dimension is retained as a size 1 dimension in the output, preserving the original number of dimensions.
```python
import torch

tensor = torch.tensor([[1, 2, 3], [4, 5, 6]])

# Aggregation operations along specific dimensions
print(f"Sum along rows (dim=1): {tensor.sum(dim=1)}")
print(f"Sum along columns (dim=0): {tensor.sum(dim=0)}")

# Aggregation with keepdim=True
print(f"Sum along rows with keepdim (dim=1): {tensor.sum(dim=1, keepdim=True)}")
print(f"Sum along columns with keepdim (dim=0): {tensor.sum(dim=0, keepdim=True)}")
```

Broadcasting

Broadcasting allows operations between tensors of different shapes by automatically expanding dimensions. If you need a refresher on broadcasting, you can find more details here.

```python
import torch

a = torch.tensor([[1, 2, 3]])  # Shape (1, 3)
b = torch.tensor([[4], [5]])   # Shape (2, 1)

# Broadcasting addition
c = a + b
print(f"Broadcasted addition: {c}")
```
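To make the expansion concrete: the (1, 3) and (2, 1) tensors above are both stretched to the common shape (2, 3) before the addition. Shapes that cannot be aligned this way raise an error, as this sketch shows:

```python
import torch

a = torch.tensor([[1, 2, 3]])  # Shape (1, 3)
b = torch.tensor([[4], [5]])   # Shape (2, 1)

# Both operands are expanded to the common shape (2, 3)
c = a + b
print(f"Result shape: {c.shape}")

# Dimensions that are neither equal nor 1 cannot be broadcast
try:
    torch.tensor([1, 2, 3]) + torch.tensor([1, 2])
except RuntimeError as e:
    print(f"Incompatible shapes: {e}")
```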

Useful Mathematical Functions

PyTorch also provides various mathematical functions such as exponentials, logarithms, and trigonometric functions.

```python
import torch

tensor = torch.tensor([1.0, 2.0, 3.0])

# Exponentiation
print(f"Exponent: {tensor.exp()}")
# Logarithm
print(f"Logarithm: {tensor.log()}")
# Sine
print(f"Sine: {tensor.sin()}")
```
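Two other commonly used element-wise functions, not shown above, are the square root and the power function. A minimal sketch:

```python
import torch

tensor = torch.tensor([1.0, 4.0, 9.0])

# Square root, applied element-wise like exp/log/sin above
print(f"Square root: {tensor.sqrt()}")
# Raise every element to the power of 2
print(f"Power of 2: {tensor.pow(2)}")
```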

Section 1. Chapter 7