tf.transpose (API r1.3)

https://github.com/tensorflow/docs/tree/r1.3/site/en/api_docs/api_docs/python/tf
site/en/api_docs/api_docs/python/tf/transpose.md

transpose(
    a,
    perm=None,
    name='transpose'
)

Defined in tensorflow/python/ops/array_ops.py.
See the guides: Math > Matrix Math Functions, Tensor Transformations > Slicing and Joining

Transposes a. Permutes the dimensions according to perm.

The returned tensor’s dimension i will correspond to the input dimension perm[i]. If perm is not given, it is set to (n-1…0), where n is the rank of the input tensor. Hence by default, this operation performs a regular matrix transpose on 2-D input Tensors.

x = tf.constant([[1, 2, 3], [4, 5, 6]])
tf.transpose(x)  # [[1, 4]
                 #  [2, 5]
                 #  [3, 6]]

# Equivalently
tf.transpose(x, perm=[1, 0])  # [[1, 4]
                              #  [2, 5]
                              #  [3, 6]]

# 'perm' is more useful for n-dimensional tensors, for n > 2
x = tf.constant([[[ 1,  2,  3],
                  [ 4,  5,  6]],
                 [[ 7,  8,  9],
                  [10, 11, 12]]])

# Take the transpose of the matrices in dimension-0
tf.transpose(x, perm=[0, 2, 1])  # [[[1,  4],
                                 #   [2,  5],
                                 #   [3,  6]],
                                 #  [[7, 10],
                                 #   [8, 11],
                                 #   [9, 12]]]
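
The default perm also applies to tensors of rank greater than 2: with perm omitted, every dimension is reversed. A minimal sketch (TF 1.x, matching the session-style scripts later in this post) that checks the omitted-perm case against an explicit perm=[2, 1, 0]:

import numpy as np
import tensorflow as tf

x = tf.constant(np.arange(12).reshape(2, 2, 3))

# With perm omitted, tf.transpose uses (n-1, ..., 0), here [2, 1, 0].
y_default = tf.transpose(x)
y_explicit = tf.transpose(x, perm=[2, 1, 0])

with tf.Session() as sess:
    a, b = sess.run([y_default, y_explicit])
    print(a.shape)                # (3, 2, 2)
    print(np.array_equal(a, b))   # True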

1. Args

a: A Tensor.
perm: A permutation of the dimensions of a.
name: A name for the operation (optional).


2. Returns

A transposed Tensor.

3. transpose: all permutations of a 3-D tensor

For a 2-D array, transpose is simply the ordinary matrix transpose. In the 2-D case perm=[0, 1] is the identity: 0 stands for the rows and 1 for the columns.

A 3-D array of shape [i, j, k] can be viewed as i two-dimensional arrays of shape [j, k]: i is the height of the stack, j the number of rows and k the number of columns of each 2-D array.

x2 below has shape [2, 2, 3]. After transpose(perm=[0, 2, 1]) the second and third dimensions are swapped and the shape becomes [2, 3, 2]; after transpose(perm=[1, 0, 2]) the first and second dimensions are swapped and the shape is [2, 2, 3] (unchanged, because both of those dimensions have size 2).
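
In general the shape follows output.shape[i] == input.shape[perm[i]], the rule quoted from the API docs above. A minimal sketch checking the two shapes just described; for a constant the static shape is already known, so no session is needed:

import numpy as np
import tensorflow as tf

x2 = tf.constant(np.arange(1, 13).reshape(2, 2, 3))   # shape (2, 2, 3)

# output.shape[i] == input.shape[perm[i]]
y021 = tf.transpose(x2, perm=[0, 2, 1])
y102 = tf.transpose(x2, perm=[1, 0, 2])

print(y021.get_shape().as_list())   # [2, 3, 2]
print(y102.get_shape().as_list())   # [2, 2, 3]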

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from __future__ import absolute_import
from __future__ import print_function
from __future__ import division

import os
import sys
import numpy as np
import tensorflow as tf

sys.path.append(os.path.dirname(os.path.abspath(__file__)))
current_directory = os.path.dirname(os.path.abspath(__file__))

print(16 * "++--")
print("current_directory:", current_directory)
print(16 * "++--")

x1 = tf.constant([[1, 2, 3], [4, 5, 6]])

y11 = tf.transpose(x1)
# [[1, 4]
#  [2, 5]
#  [3, 6]]

# Equivalently
y12 = tf.transpose(x1, perm=[1, 0])
# [[1, 4]
#  [2, 5]
#  [3, 6]]

# 'perm' is more useful for n-dimensional tensors, for n > 2
x2 = tf.constant([[[1, 2, 3],
                   [4, 5, 6]],
                  [[7, 8, 9],
                   [10, 11, 12]]])

# Take the transpose of the matrices in dimension-0
y021 = tf.transpose(x2, perm=[0, 2, 1])
# [[[1,  4],
#   [2,  5],
#   [3,  6]],
#  [[7, 10],
#   [8, 11],
#   [9, 12]]]

y012 = tf.transpose(x2, perm=[0, 1, 2])  # identity permutation, shape (2, 2, 3)
y102 = tf.transpose(x2, perm=[1, 0, 2])  # shape (2, 2, 3)
y120 = tf.transpose(x2, perm=[1, 2, 0])  # shape (2, 3, 2)
y201 = tf.transpose(x2, perm=[2, 0, 1])  # shape (3, 2, 2)
y210 = tf.transpose(x2, perm=[2, 1, 0])  # full reversal (the default perm), shape (3, 2, 2)

with tf.Session() as sess:
    outputy11 = sess.run(y11)
    print("outputy11:\n", outputy11)
    print('\n')

    outputy12 = sess.run(y12)
    print("outputy12:\n", outputy12)
    print('\n')

    outputy021 = sess.run(y021)
    print("outputy021:\n", outputy021)
    print('\n')

    outputy012 = sess.run(y012)
    print("outputy012:\n")
    print(outputy012)
    print('\n')

    outputy102 = sess.run(y102)
    print("outputy102:\n", outputy102)
    print('\n')

    outputy120 = sess.run(y120)
    print("outputy120:\n", outputy120)
    print('\n')

    outputy201 = sess.run(y201)
    print("outputy201:\n", outputy201)
    print('\n')

    outputy210 = sess.run(y210)
    print("outputy210:\n", outputy210)

/usr/bin/python2.7 /home/strong/tensorflow_work/R2CNN_Faster-RCNN_Tensorflow/yongqiang.py
++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--
current_directory: /home/strong/tensorflow_work/R2CNN_Faster-RCNN_Tensorflow
++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--
2019-08-18 08:19:21.654522: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-08-18 08:19:21.734975: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-08-18 08:19:21.735214: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties: 
name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.7335
pciBusID: 0000:01:00.0
totalMemory: 7.92GiB freeMemory: 7.41GiB
2019-08-18 08:19:21.735225: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
outputy11:
 [[1 4]
 [2 5]
 [3 6]]


outputy12:
 [[1 4]
 [2 5]
 [3 6]]


outputy021:
 [[[ 1  4]
  [ 2  5]
  [ 3  6]]

 [[ 7 10]
  [ 8 11]
  [ 9 12]]]


outputy012:

[[[ 1  2  3]
  [ 4  5  6]]

 [[ 7  8  9]
  [10 11 12]]]


outputy102:
 [[[ 1  2  3]
  [ 7  8  9]]

 [[ 4  5  6]
  [10 11 12]]]


outputy120:
 [[[ 1  7]
  [ 2  8]
  [ 3  9]]

 [[ 4 10]
  [ 5 11]
  [ 6 12]]]


outputy201:
 [[[ 1  4]
  [ 7 10]]

 [[ 2  5]
  [ 8 11]]

 [[ 3  6]
  [ 9 12]]]


outputy210:
 [[[ 1  7]
  [ 4 10]]

 [[ 2  8]
  [ 5 11]]

 [[ 3  9]
  [ 6 12]]]

Process finished with exit code 0

4. transpose: perm=[1, 0, 2] on a tensor of shape (1, 6, 2)

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from __future__ import absolute_import
from __future__ import print_function
from __future__ import division

import os
import sys
import numpy as np
import tensorflow as tf

sys.path.append(os.path.dirname(os.path.abspath(__file__)))
current_directory = os.path.dirname(os.path.abspath(__file__))

print(16 * "++--")
print("current_directory:", current_directory)
print(16 * "++--")

x_anchor = tf.constant([[[0, 1],
                         [2, 3],
                         [4, 5],
                         [6, 7],
                         [8, 9],
                         [10, 11]]], dtype=np.float32)

y_anchor102 = tf.transpose(x_anchor, perm=[1, 0, 2])  # shape (1, 6, 2) -> (6, 1, 2)

with tf.Session() as sess:
    input_anchor = sess.run(x_anchor)
    print("input_anchor.shape:\n", input_anchor.shape)
    print('\n')

    output_anchor102 = sess.run(y_anchor102)
    print("output_anchor102:\n", output_anchor102)
    print('\n')
    print("output_anchor102.shape:\n", output_anchor102.shape)

/usr/bin/python2.7 /home/strong/tensorflow_work/R2CNN_Faster-RCNN_Tensorflow/yongqiang.py
++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--
current_directory: /home/strong/tensorflow_work/R2CNN_Faster-RCNN_Tensorflow
++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--
2019-08-18 11:20:05.038400: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-08-18 11:20:05.121427: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-08-18 11:20:05.121665: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties: 
name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.7335
pciBusID: 0000:01:00.0
totalMemory: 7.92GiB freeMemory: 7.41GiB
2019-08-18 11:20:05.121677: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
input_anchor.shape:
 (1, 6, 2)


output_anchor102:
 [[[ 0.  1.]]

 [[ 2.  3.]]

 [[ 4.  5.]]

 [[ 6.  7.]]

 [[ 8.  9.]]

 [[10. 11.]]]


output_anchor102.shape:
 (6, 1, 2)

Process finished with exit code 0
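
Because the leading axis of x_anchor has size 1, this transpose only moves a singleton dimension: the underlying element order does not change, so the result is the same as a plain reshape to (6, 1, 2). A minimal NumPy check of that observation, using the same values as x_anchor:

import numpy as np

x = np.arange(12, dtype=np.float32).reshape(1, 6, 2)

# Swapping a size-1 axis with its neighbour does not reorder elements,
# so this particular transpose equals a reshape.
transposed = np.transpose(x, axes=(1, 0, 2))   # shape (6, 1, 2)
reshaped = x.reshape(6, 1, 2)

print(transposed.shape)                        # (6, 1, 2)
print(np.array_equal(transposed, reshaped))    # True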

5. transpose: tracking element indices under perm=[1, 0, 2]

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from __future__ import absolute_import
from __future__ import print_function
from __future__ import division

import os
import sys
import numpy as np
import tensorflow as tf

sys.path.append(os.path.dirname(os.path.abspath(__file__)))
current_directory = os.path.dirname(os.path.abspath(__file__))

print(16 * "++--")
print("current_directory:", current_directory)
print(16 * "++--")

# 'perm' is more useful for n-dimensional tensors, for n > 2
x = tf.constant([[[1, 2, 3],
                  [4, 5, 6]],
                 [[7, 8, 9],
                  [10, 11, 12]]])

# Swap the first two dimensions
y102 = tf.transpose(x, perm=[1, 0, 2])  # shape (2, 2, 3) -> (2, 2, 3)

with tf.Session() as sess:
    inputx = sess.run(x)
    print("inputx.shape:", inputx.shape)
    print('\n')
    print("inputx:\n", inputx)
    print('\n')

    outputy102 = sess.run(y102)
    print("outputy102.shape:", outputy102.shape)
    print('\n')
    print("outputy102:\n", outputy102)
    print('\n')

    print("inputx[1, 0, 0] - 7:", inputx[1, 0, 0])
    print("outputy102[0, 1, 0] - 7:", outputy102[0, 1, 0])

/usr/bin/python2.7 /home/strong/tensorflow_work/R2CNN_Faster-RCNN_Tensorflow/yongqiang.py
++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--
current_directory: /home/strong/tensorflow_work/R2CNN_Faster-RCNN_Tensorflow
++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--
2019-08-18 11:24:35.056265: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-08-18 11:24:35.135658: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-08-18 11:24:35.135939: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties: 
name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.7335
pciBusID: 0000:01:00.0
totalMemory: 7.92GiB freeMemory: 7.41GiB
2019-08-18 11:24:35.135952: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
inputx.shape: (2, 2, 3)


inputx:
 [[[ 1  2  3]
  [ 4  5  6]]

 [[ 7  8  9]
  [10 11 12]]]


outputy102.shape: (2, 2, 3)


outputy102:
 [[[ 1  2  3]
  [ 7  8  9]]

 [[ 4  5  6]
  [10 11 12]]]


inputx[1, 0, 0] - 7: 7
outputy102[0, 1, 0] - 7: 7

Process finished with exit code 0
The element 7 has index [1, 0, 0] in x. After tf.transpose(x, perm=[1, 0, 2]), the element 7 has index [0, 1, 0] in outputy102. perm=[1, 0, 2] can be read as swapping the dimension-0 and dimension-1 coordinates in every element's index, as the element-by-element listing and the sketch after it show.

Index [0, 0, 0] of outputy102 holds 1, which was at index [0, 0, 0] of x before the transpose.
Index [0, 0, 1] of outputy102 holds 2, which was at index [0, 0, 1] of x before the transpose.
Index [0, 0, 2] of outputy102 holds 3, which was at index [0, 0, 2] of x before the transpose.

Index [1, 0, 0] of outputy102 holds 4, which was at index [0, 1, 0] of x before the transpose.
Index [1, 0, 1] of outputy102 holds 5, which was at index [0, 1, 1] of x before the transpose.
Index [1, 0, 2] of outputy102 holds 6, which was at index [0, 1, 2] of x before the transpose.

Index [0, 1, 0] of outputy102 holds 7, which was at index [1, 0, 0] of x before the transpose.
Index [0, 1, 1] of outputy102 holds 8, which was at index [1, 0, 1] of x before the transpose.
Index [0, 1, 2] of outputy102 holds 9, which was at index [1, 0, 2] of x before the transpose.

Index [1, 1, 0] of outputy102 holds 10, which was at index [1, 1, 0] of x before the transpose.
Index [1, 1, 1] of outputy102 holds 11, which was at index [1, 1, 1] of x before the transpose.
Index [1, 1, 2] of outputy102 holds 12, which was at index [1, 1, 2] of x before the transpose.
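
The mapping above can be stated once and checked programmatically: if j is an index into x and i is the index of the same element in the output, then i[k] = j[perm[k]] for every axis k. A minimal NumPy sketch that verifies this for every element of the example tensor (the rule holds for any perm):

import itertools
import numpy as np

x = np.arange(1, 13).reshape(2, 2, 3)
perm = (1, 0, 2)
y = np.transpose(x, axes=perm)

# For every input index j, the same element sits at output index i
# with i[k] = j[perm[k]].
for j in itertools.product(*(range(s) for s in x.shape)):
    i = tuple(j[p] for p in perm)
    assert y[i] == x[j]

print("index rule i[k] = j[perm[k]] holds for every element")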