Understanding multi-dimensional arrays and numpy/torch multi-dimensional matrix operations

Beginners find 1-D and 2-D arrays easy to grasp, but 3-D and 4-D arrays are harder. In fact, arrays of up to 4 dimensions can be understood by mapping them onto images.

1-D: W, dim = 0

2-D: (H, W), dims (0, 1)

3-D: (C, H, W), dims (0, 1, 2)

4-D: (B, C, H, W), dims (0, 1, 2, 3)

This is also why deep learning frameworks default to the N, C, H, W layout.
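For example (a minimal sketch; the batch of 8 random 32x32 RGB images below is made up just to show the layout), each dimension of a 4-D tensor can be read off directly as B, C, H, W:

import torch

# a hypothetical batch of 8 RGB images, 32x32 pixels: (B, C, H, W)
batch = torch.rand(8, 3, 32, 32)
print(batch.shape)           # torch.Size([8, 3, 32, 32])
print(batch[0].shape)        # one image: (C, H, W) = (3, 32, 32)
print(batch[0, 0].shape)     # one channel of that image: (H, W) = (32, 32)
print(batch[0, 0, 0].shape)  # one row of that channel: (W,) = (32,)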

With this mapping in mind, many multi-dimensional matrix operations become much easier to understand.

import torch

Transformation operations

torch.Tensor.repeat

a = torch.tensor([1, 2, 3, 1.1, 2.1, 3.1]).view(2, 3)
print(a)
print(a.repeat(1, 3))
print(a.repeat(3, 1))
tensor([[1.0000, 2.0000, 3.0000],
        [1.1000, 2.1000, 3.1000]])
tensor([[1.0000, 2.0000, 3.0000, 1.0000, 2.0000, 3.0000, 1.0000, 2.0000, 3.0000],
        [1.1000, 2.1000, 3.1000, 1.1000, 2.1000, 3.1000, 1.1000, 2.1000, 3.1000]])
tensor([[1.0000, 2.0000, 3.0000],
        [1.1000, 2.1000, 3.1000],
        [1.0000, 2.0000, 3.0000],
        [1.1000, 2.1000, 3.1000],
        [1.0000, 2.0000, 3.0000],
        [1.1000, 2.1000, 3.1000]])
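A couple of extra repeat behaviours worth knowing (a small sketch using the same tensor; the shapes in the comments are what repeat produces): each count multiplies its dimension, and giving more counts than the tensor has dimensions adds new leading dimensions.

a = torch.tensor([1, 2, 3, 1.1, 2.1, 3.1]).view(2, 3)
print(a.repeat(2, 2).shape)     # torch.Size([4, 6])   -- each dim multiplied by its count
print(a.repeat(4, 1, 1).shape)  # torch.Size([4, 2, 3]) -- a new leading "batch" dim of 4 copies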

torch.Tensor.view

a = torch.tensor([1, 2, 3, 1.1, 2.1, 3.1]).view(2, 3)
print(a)
print(a.view(1, 6))
print(a.view(3, 2))
# view first flattens the original matrix to 1-D, then reshapes it to the new height and width
tensor([[1.0000, 2.0000, 3.0000],
        [1.1000, 2.1000, 3.1000]])
tensor([[1.0000, 2.0000, 3.0000, 1.1000, 2.1000, 3.1000]])
tensor([[1.0000, 2.0000],
        [3.0000, 1.1000],
        [2.1000, 3.1000]])
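Two more view details (a minimal sketch; the (8, 3, 32, 32) batch is made up for illustration): -1 lets view infer one dimension, and the total number of elements must stay unchanged.

a = torch.tensor([1, 2, 3, 1.1, 2.1, 3.1]).view(2, 3)
print(a.view(-1).shape)     # torch.Size([6])    -- flattened to 1-D
print(a.view(3, -1).shape)  # torch.Size([3, 2]) -- second dim inferred as 2
x = torch.rand(8, 3, 32, 32)   # a hypothetical (B, C, H, W) batch
print(x.view(8, -1).shape)  # torch.Size([8, 3072]) -- flatten each image, e.g. for a fully connected layer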

torch.Tensor.permute

a = torch.rand(2, 3, 3)
print(a)
print(a.permute(2, 0, 1))  # c, h, w --> w, c, h
print(a.permute(0, 2, 1).permute(1, 0, 2))
# The two calls above are equivalent: first keep the channel dim and swap H with W,
# then keep W and swap the channel dim with H; you can also picture it as rotating a cuboid.
tensor([[[0.9450, 0.7652, 0.4687],
         [0.1275, 0.3541, 0.1216],
         [0.5265, 0.2152, 0.9015]],

        [[0.0413, 0.0178, 0.5594],
         [0.5511, 0.9967, 0.9560],
         [0.2367, 0.6877, 0.4822]]])
tensor([[[0.9450, 0.1275, 0.5265],
         [0.0413, 0.5511, 0.2367]],

        [[0.7652, 0.3541, 0.2152],
         [0.0178, 0.9967, 0.6877]],

        [[0.4687, 0.1216, 0.9015],
         [0.5594, 0.9560, 0.4822]]])
tensor([[[0.9450, 0.1275, 0.5265],
         [0.0413, 0.5511, 0.2367]],

        [[0.7652, 0.3541, 0.2152],
         [0.0178, 0.9967, 0.6877]],

        [[0.4687, 0.1216, 0.9015],
         [0.5594, 0.9560, 0.4822]]])
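A common concrete use (a sketch with a made-up 32x32 RGB tensor): converting an (H, W, C) image into the (C, H, W) layout expected by torch. Note that permute only rearranges strides, so the result is usually non-contiguous and may need .contiguous() before a later view:

img_hwc = torch.rand(32, 32, 3)     # hypothetical image stored as (H, W, C)
img_chw = img_hwc.permute(2, 0, 1)  # (C, H, W)
print(img_chw.shape)                # torch.Size([3, 32, 32])
print(img_chw.is_contiguous())      # False -- permute only changes strides
flat = img_chw.contiguous().view(3, -1)
print(flat.shape)                   # torch.Size([3, 1024])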

Implementing a for loop with tensor operations

import torch
totalClass = 3
feature_dim = 2
centers = torch.tensor([1, 2, 3, 1.1, 2.1, 3.1]).view(feature_dim, totalClass).float()

centers_inter = centers.repeat(1, totalClass).view(feature_dim, totalClass, totalClass)  # (feature_dim, totalClass, totalClass) = (2, 3, 3); centers_inter[f, i, j] = centers[f, j]
centers_self_rep = centers.repeat(totalClass, 1).permute(1, 0).view(totalClass, totalClass, feature_dim).permute(2, 0, 1)  # (2, 3, 3); centers_self_rep[f, i, j] = centers[f, i]
center_diff = torch.add(centers_self_rep, centers_inter, alpha=-1)  # centers_self_rep - centers_inter (the old torch.add(input, value, other) form is deprecated)
print(centers)
print(centers_inter)
print(centers_self_rep)
print(center_diff)
tensor([[1.0000, 2.0000, 3.0000],
        [1.1000, 2.1000, 3.1000]])
tensor([[[1.0000, 2.0000, 3.0000],
         [1.0000, 2.0000, 3.0000],
         [1.0000, 2.0000, 3.0000]],

        [[1.1000, 2.1000, 3.1000],
         [1.1000, 2.1000, 3.1000],
         [1.1000, 2.1000, 3.1000]]])
tensor([[[1.0000, 1.0000, 1.0000],
         [2.0000, 2.0000, 2.0000],
         [3.0000, 3.0000, 3.0000]],

        [[1.1000, 1.1000, 1.1000],
         [2.1000, 2.1000, 2.1000],
         [3.1000, 3.1000, 3.1000]]])
tensor([[[ 0.0000, -1.0000, -2.0000],
         [ 1.0000,  0.0000, -1.0000],
         [ 2.0000,  1.0000,  0.0000]],

        [[ 0.0000, -1.0000, -2.0000],
         [ 1.0000,  0.0000, -1.0000],
         [ 2.0000,  1.0000,  0.0000]]])
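For comparison, a hedged sketch of the explicit nested loop this replaces (reusing centers, feature_dim, totalClass and center_diff from above); the repeat/permute version computes all pairwise differences between class centers without any Python-level loop:

# explicit-loop version of the same computation:
# center_diff[f, i, j] = centers[f, i] - centers[f, j]
center_diff_loop = torch.zeros(feature_dim, totalClass, totalClass)
for i in range(totalClass):
    for j in range(totalClass):
        center_diff_loop[:, i, j] = centers[:, i] - centers[:, j]
print(torch.allclose(center_diff_loop, center_diff))  # True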

