Pytorch Beginner - Tensor Manipulation I


Created: 2025/11/17

Category: Computer Science

Number of Questions: 15

Questions:

You have a tensor t of shape (3, 4). Which operation correctly reshapes it into a tensor of shape (4, 3) without changing the data?. Using t.resize_(4, 3). Using t.reshape(4, 3). Using t.view(4, 3). Using t.squeeze().
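A minimal sketch of the reshape options above; the tensor t is illustrative and assumed contiguous:

```python
import torch

# Illustrative tensor; any contiguous (3, 4) tensor behaves the same.
t = torch.arange(12).reshape(3, 4)

# Both calls rearrange the same 12 elements into a (4, 3) tensor.
# view requires contiguous memory; reshape falls back to a copy if needed.
v = t.view(4, 3)
r = t.reshape(4, 3)
print(v.shape, r.shape)
```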

Given two tensors a of shape (5, 1) and b of shape (1, 4), what will be the shape of a + b due to broadcasting?. (5, 4). (5, 1). (1, 4). (6, 5).
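A quick check of the broadcast shapes, using zero tensors for brevity:

```python
import torch

# Size-1 dimensions are stretched to match: (5, 1) + (1, 4) -> (5, 4).
a = torch.zeros(5, 1)
b = torch.zeros(1, 4)
c = a + b
print(c.shape)
```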

You want to extract all elements from a tensor x of shape (10, 10) where the values are greater than 5. Which indexing method is appropriate?. x[x > 5]. x[x > 5, :]. x[:, x > 5]. x.index_select(x > 5).
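A small sketch of boolean-mask indexing with a tensor of known values (the values are illustrative, not from the question):

```python
import torch

# Boolean-mask indexing always returns a flattened 1-D tensor
# containing only the selected elements.
x = torch.arange(12).reshape(3, 4)
picked = x[x > 5]
print(picked)  # tensor([ 6,  7,  8,  9, 10, 11])
```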

Which of the following is a common pitfall when using PyTorch broadcasting in arithmetic operations?. Always reshape tensors explicitly before arithmetic operations. Assuming broadcasting always aligns dimensions as intended without verifying shapes. Broadcasting requires tensors to have the same number of dimensions. Broadcasting only works on CPU tensors.
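One way this pitfall bites in practice (a hypothetical loss computation; the variable names are illustrative): a 1-D tensor silently broadcasts against a column vector instead of raising a shape error.

```python
import torch

preds = torch.ones(4, 1)   # column vector of predictions
targets = torch.ones(4)    # intended as (4, 1), actually 1-D
diff = preds - targets     # silently broadcasts to (4, 4), not (4, 1)
print(diff.shape)
```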

You have a tensor y created as torch.zeros((2, 3)). You execute y[0, 0] = 5. What happens to the tensor y?. The element at position (0, 0) is updated to 5 in the original tensor. A new tensor is created with the updated value, original remains unchanged. The tensor remains all zeros because torch.zeros() tensors are immutable. An error occurs because you cannot assign a scalar to a tensor element.
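The scenario from the question, runnable as-is; indexed assignment mutates the tensor in place:

```python
import torch

y = torch.zeros(2, 3)
y[0, 0] = 5        # mutates y directly; no copy is made
print(y)
```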

Given two tensors A of shape (3, 4) and B of shape (4, 5), which operation(s) will successfully compute their matrix product without error?. Both torch.mm(A, B) and torch.matmul(A, B). Using torch.cat(A, B, dim=1). Using torch.stack((A, B), dim=0). Only torch.mm(A, B) works, torch.matmul will error.
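A quick confirmation that both calls agree for 2-D inputs of the shapes above:

```python
import torch

A = torch.randn(3, 4)
B = torch.randn(4, 5)
out_mm = torch.mm(A, B)          # strictly 2-D matrix multiply
out_matmul = torch.matmul(A, B)  # general form; identical result here
print(out_mm.shape)
```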

Which code produces the following results?
>>> x = torch.randn(2, 3)
>>> x
tensor([[ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497]])
>>> torch.<?>((x, x, x), 0)  # shape (6, 3)
tensor([[ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497],
        [ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497],
        [ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497]])
>>> torch.<?>((x, x, x), 1)  # shape (2, 9)
tensor([[ 0.6580, -1.0969, -0.4614,  0.6580, -1.0969, -0.4614,  0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497, -0.1034, -0.5790,  0.1497, -0.1034, -0.5790,  0.1497]])
torch.cat((x, x, x), 0) and torch.cat((x, x, x), 1). torch.stack((x, x, x), 0) and torch.stack((x, x, x), 1).
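The shapes in the transcript above can be verified directly:

```python
import torch

x = torch.randn(2, 3)
rows = torch.cat((x, x, x), 0)  # grows existing dim 0: (6, 3)
cols = torch.cat((x, x, x), 1)  # grows existing dim 1: (2, 9)
print(rows.shape, cols.shape)
```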

Which code produces the following results?
>>> x = torch.randn(2, 3)
>>> x
tensor([[ 0.3367,  0.1288,  0.2345],
        [ 0.2303, -1.1229, -0.1863]])
>>> torch.<?>((x, x))  # same as torch.<?>((x, x), dim=0)
tensor([[[ 0.3367,  0.1288,  0.2345],
         [ 0.2303, -1.1229, -0.1863]],

        [[ 0.3367,  0.1288,  0.2345],
         [ 0.2303, -1.1229, -0.1863]]])
>>> torch.<?>((x, x)).size()
torch.Size([2, 2, 3])
torch.cat((x, x), 0). torch.stack((x, x), 0).

Which dimension produces this stack result?
>>> x = torch.randn(2, 3)
>>> x
tensor([[ 0.3367,  0.1288,  0.2345],
        [ 0.2303, -1.1229, -0.1863]])
>>> torch.stack((x, x), dim=????)
tensor([[[ 0.3367,  0.3367],
         [ 0.1288,  0.1288],
         [ 0.2345,  0.2345]],

        [[ 0.2303,  0.2303],
         [-1.1229, -1.1229],
         [-0.1863, -0.1863]]])
torch.stack((x, x), dim=-1). # shape (2, 3, 2). torch.stack((x, x), dim=1). # shape (2, 2, 3). torch.stack((x, x), dim=0). # shape (2, 2, 3).

Which dimension produces this stack result?
>>> x = torch.randn(2, 3)
>>> x
tensor([[ 0.3689,  0.3130,  0.4637],
        [-0.1375,  2.5800,  0.2643]])
>>> torch.stack((x, x), dim=????)
tensor([[[ 0.3689,  0.3130,  0.4637],
         [ 0.3689,  0.3130,  0.4637]],

        [[-0.1375,  2.5800,  0.2643],
         [-0.1375,  2.5800,  0.2643]]])
torch.stack((x, x), dim=-1). # shape (2, 3, 2). torch.stack((x, x), dim=1). # shape (2, 2, 3). torch.stack((x, x), dim=0). # shape (2, 2, 3). torch.stack((x, x), dim=-2). # shape (2, 2, 3).
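The three stack placements from the questions above, side by side as a quick sanity check:

```python
import torch

x = torch.randn(2, 3)
s0 = torch.stack((x, x), dim=0)   # new leading dim:  (2, 2, 3)
s1 = torch.stack((x, x), dim=1)   # new middle dim:   (2, 2, 3)
sm = torch.stack((x, x), dim=-1)  # new trailing dim: (2, 3, 2)
print(s0.shape, s1.shape, sm.shape)
```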

You have two tensors X of shape (2, 3, 4) and Y of shape (2, 4, 5). Which operation correctly performs batch matrix multiplication to produce a tensor of shape (2, 3, 5)?. torch.mm(X, Y). torch.matmul(X, Y). torch.cat((X, Y), dim=1). torch.stack((X, Y), dim=2).
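The batch matrix product can be checked with the shapes from the question:

```python
import torch

X = torch.randn(2, 3, 4)
Y = torch.randn(2, 4, 5)
Z = torch.matmul(X, Y)  # batched matrix product; torch.bmm(X, Y) also works here
print(Z.shape)
```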

Consider two tensors P and Q both of shape (3, 4). What is the primary difference between torch.cat((P, Q), dim=0) and torch.stack((P, Q), dim=0) in terms of the resulting tensor shape?. torch.cat((P, Q), dim=0) results in shape (3, 8) and torch.stack((P, Q), dim=0) results in (2, 3, 4). torch.cat((P, Q), dim=0) results in (6, 4) while torch.stack((P, Q), dim=0) results in (2, 3, 4). torch.cat((P, Q), dim=0) results in (2, 3, 4) and torch.stack((P, Q), dim=0) results in (6, 4). Both produce the same tensor shape (6, 4).
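The shape difference is easy to see directly: cat grows an existing dimension, stack adds a new one.

```python
import torch

P = torch.randn(3, 4)
Q = torch.randn(3, 4)
catted = torch.cat((P, Q), dim=0)     # existing dim 0 grows: (6, 4)
stacked = torch.stack((P, Q), dim=0)  # new dim 0 is added:   (2, 3, 4)
print(catted.shape, stacked.shape)
```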

Which of the following scenarios would cause an error when using torch.mm?. A has shape (2, 3, 4) and B has shape (4, 5), using torch.mm(A, B). A has shape (2, 3, 4) and B has shape (4, 5), using torch.matmul(A, B). torch.cat((A, B), dim=1) where A and B have compatible shapes. torch.stack((A, B), dim=0) where A and B have the same shape.
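The error scenario from the question, demonstrated: torch.mm accepts only 2-D inputs, while torch.matmul broadcasts over batch dimensions.

```python
import torch

A = torch.randn(2, 3, 4)
B = torch.randn(4, 5)
mm_failed = False
try:
    torch.mm(A, B)  # mm is strictly 2-D, so a 3-D input raises
except RuntimeError:
    mm_failed = True
out = torch.matmul(A, B)  # matmul broadcasts B over the batch dim
print(mm_failed, out.shape)
```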

You want to combine two tensors M and N of shape (3, 4) each into a single tensor with a new dimension representing the pair. Which operation and dimension should you use?. torch.cat((M, N), dim=0). torch.stack((M, N), dim=0). torch.cat((M, N), dim=2). torch.stack((M, N), dim=3).

Which operation should you use if you want to multiply two matrices but also handle broadcasting over batch dimensions automatically?. torch.matmul. torch.mm. torch.stack. torch.cat.
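A common use of that broadcasting behavior (a hypothetical shared weight matrix applied to a batch; the names are illustrative):

```python
import torch

batch = torch.randn(10, 3, 4)  # a batch of ten (3, 4) matrices
W = torch.randn(4, 5)          # one shared weight matrix
out = torch.matmul(batch, W)   # W is broadcast across the batch dim
print(out.shape)
```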
