ssl-bad-gan python2.7+pytorch0.1.12+torchvision0.1.8

I first tried running it under PyTorch 0.4.1 and hit all kinds of errors:

They were mostly tensor-shape problems.

svhn_train.py, line 119: add .reshape([-1,1]):

gen_feat_norm = gen_feat / gen_feat.norm(p=2, dim=1).reshape([-1,1]).expand_as(gen_feat)
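The reason for this fix, as far as I can tell: in PyTorch 0.1.12 reductions kept the reduced dimension by default, but from 0.2 onward norm(p=2, dim=1) returns a 1-D tensor of shape [N], so expand_as fails without the reshape. A minimal sketch (tensor sizes are illustrative):

```python
import torch

# Since PyTorch 0.2, reductions such as norm(p=2, dim=1) drop the reduced
# dimension and return shape [N]; expand_as needs [N, 1], hence the reshape.
gen_feat = torch.randn(4, 8)
norms = gen_feat.norm(p=2, dim=1)        # shape [4]
norms = norms.reshape([-1, 1])           # shape [4, 1]
gen_feat_norm = gen_feat / norms.expand_as(gen_feat)
# each row of gen_feat_norm now has unit L2 norm
```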

In utils.py, lines 21-25, add .reshape([-1,1]) in the same way:

def log_sum_exp(logits, mask=None, inf=1e7):
    if mask is not None:
        logits = logits * mask - inf * (1.0 - mask)
        max_logits = logits.max(1)[0]
        return ((logits - max_logits.reshape([-1,1]).expand_as(logits)).exp() * mask).sum(1,keepdim=True).log().squeeze() + max_logits.squeeze()
    else:
        max_logits = logits.max(1)[0]
        return ((logits - max_logits.reshape([-1,1]).expand_as(logits)).exp()).sum(1,keepdim=True).log().squeeze() + max_logits.squeeze()
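A quick sanity check of the max-subtraction trick used above (my own test, not from the repo): on well-conditioned inputs the stable form must agree with the naive formula, and unlike the naive version it stays finite for large logits:

```python
import torch

logits = torch.tensor([[1.0, 2.0, 3.0], [0.5, 0.5, 0.5]])
naive = logits.exp().sum(1).log()

# stable: subtract the per-row max before exponentiating, add it back after
max_logits = logits.max(1)[0]
stable = (logits - max_logits.reshape([-1, 1]).expand_as(logits)).exp().sum(1).log() + max_logits
assert torch.allclose(naive, stable)

# with huge logits the naive exp() overflows to inf; the stable form does not
big = torch.tensor([[1000.0, 1000.0]])
big_max = big.max(1)[0]
stable_big = (big - big_max.reshape([-1, 1]).expand_as(big)).exp().sum(1).log() + big_max
assert torch.isfinite(stable_big).all()
```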

In model.py, add keepdim=True to sum() calls in quite a few places:

Line 59:

norm_weight = self.weight * (weight_scale.unsqueeze(1) / torch.sqrt((self.weight ** 2).sum(1, keepdim=True) + 1e-6)).expand_as(self.weight)

Line 111:

norm_weight = self.weight * (weight_scale[:,None,None,None] / torch.sqrt((self.weight ** 2).sum(3, keepdim=True).sum(2, keepdim=True).sum(1, keepdim=True) + 1e-6)).expand_as(self.weight)

Line 166:

norm_weight = self.weight * (weight_scale[None, :, None, None] / torch.sqrt((self.weight ** 2).sum(3, keepdim=True).sum(2, keepdim=True).sum(0, keepdim=True) + 1e-6)).expand_as(self.weight)
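Why keepdim=True matters here: before PyTorch 0.2, sum(dim) kept the reduced dimension; afterwards it drops it, so chained reductions over a conv weight no longer broadcast against the per-filter scale. A minimal illustration (the shapes are examples):

```python
import torch

w = torch.randn(16, 3, 5, 5)   # conv weight: [out, in, kH, kW]

# new default: each sum() drops its dimension
assert (w ** 2).sum(3).sum(2).sum(1).shape == (16,)

# with keepdim=True the old 0.1.12 shape survives, so the subsequent
# division by weight_scale[:, None, None, None] and expand_as still line up
per_filter = (w ** 2).sum(3, keepdim=True).sum(2, keepdim=True).sum(1, keepdim=True)
assert per_filter.shape == (16, 1, 1, 1)
```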

After much tinkering it finally ran, but the key entries in the output (the error rates) were stuck at 0.0000:

#0    train: 2.2160, 0.0000 | dev: 2.2179, 0.0000 | best: 0.0000 | unl acc: 0.0014 | gen acc: 0.0014 | max unl acc: 0.0014 | max gen acc: 0.0014 | lab loss: 0.0031 | unl loss: 0.0017 | ent loss: 0.0003 | fm loss: 0.0003 | pt loss: 0.0004 | [Eval] unl acc: 1.0000, gen acc: 1.0000, max unl acc: 0.9390, max gen acc: 1.0000 | lr: 0.00100
#730    train: 0.9591, 0.0000 | dev: 1.0478, 0.0000 | best: 0.0000 | unl acc: 0.9292 | gen acc: 0.0954 | max unl acc: 0.7626 | max gen acc: 0.0228 | lab loss: 1.6207 | unl loss: 0.2017 | ent loss: 0.1491 | fm loss: 4.0159 | pt loss: 0.7041 | [Eval] unl acc: 1.0000, gen acc: 0.0000, max unl acc: 1.0000, max gen acc: 0.0000 | lr: 0.00100
#1460    train: 0.5666, 0.0000 | dev: 0.9125, 0.0000 | best: 0.0000 | unl acc: 0.9958 | gen acc: 0.0044 | max unl acc: 0.9904 | max gen acc: 0.0038 | lab loss: 0.9502 | unl loss: 0.0210 | ent loss: 0.0867 | fm loss: 6.6586 | pt loss: 0.6874 | [Eval] unl acc: 1.0000, gen acc: 0.0000, max unl acc: 1.0000, max gen acc: 0.0000 | lr: 0.00100
#2190    train: 1.9869, 0.0000 | dev: 2.0032, 0.0000 | best: 0.0000 | unl acc: 0.7814 | gen acc: 0.4585 | max unl acc: 0.3949 | max gen acc: 0.1118 | lab loss: 17.1366 | unl loss: 4.0821 | ent loss: 0.1480 | fm loss: 15.9876 | pt loss: 0.6443 | [Eval] unl acc: 0.9985, gen acc: 0.9970, max unl acc: 0.0130, max gen acc: 0.0000 | lr: 0.00100
 

Eventually I realized the versions must match exactly what the author specified, so I went through considerable trouble to rebuild the environment.

In summary: conda create --name torch python=2.7

Then run: source activate torch

Then: conda install tensorflow-gpu=1.4.1 (note that this version needs CUDA 8 support, so if you are not on CUDA 8, consider downgrading further). For example, my machine has CUDA 9.0 and cuDNN 7.3.1, but to satisfy the environment I installed TF 1.4.1 anyway.

Then start python and check that tf can be imported; if that succeeds, continue to the next step.

To install PyTorch, I first tried the simplest route:

conda install pytorch=0.1.12 torchvision cudatoolkit=8.0 -c pytorch

After installing this way, torch imported fine, but running the program reported that CUDA was unavailable.

So I installed with pip instead, downloading the whl file in advance; be careful to match the versions.

https://pytorch.org/get-started/previous-versions/ 

https://download.pytorch.org/whl/cu80/torch-0.1.12.post1-cp27-none-linux_x86_64.whl is the build I downloaded.

Then simply run pip install on the downloaded file.

Then start python and confirm torch imports successfully.

Then install torchvision: pip install torchvision==0.1.8 (the version that matches this torch).

Rerunning the original program, there seemed to be only one remaining dimension error, at lines 98-99. Remove the second index [0] there; the original was:

if data_set.labels[i][0] == 10:
    data_set.labels[i][0] = 0

def preprocess(data_set):
    for i in range(len(data_set.data)):
        if data_set.labels[i] == 10:
            data_set.labels[i] = 0

preprocess(training_set)
preprocess(dev_set)
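The relabeling itself is simple: SVHN stores the digit "0" as class 10, and with this torchvision version each label is a scalar rather than a 1-element array, hence dropping the [0] index. As a plain-Python sketch of the mapping:

```python
# SVHN encodes digit "0" as class 10; map it back to label 0.
labels = [3, 10, 7, 10, 1]
labels = [0 if y == 10 else y for y in labels]
assert labels == [3, 0, 7, 0, 1]
```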

The final, correct output should look like this:

/home/cy/anaconda3/envs/torch/bin/python /home/cy/PycharmProjects/ssl_bad_gan-master-ori/svhn_trainer.py
dis_lr : 0.001
enc_lr : 0.001
gen_lr : 0.001
min_lr : 0.0001
suffix : run0
dataset : svhn
save_dir : svhn_log
data_root : ../datasets/sslbadgan/data
num_label : 10
pt_weight : 0.8
ent_weight : 0.1
image_size : 3072
max_epochs : 900
noise_size : 100
vis_period : 730
eval_period : 730
p_loss_prob : 0.1
gen_emb_size : 20
p_loss_weight : 0.0001
dev_batch_size : 200
train_batch_size : 64
size_labeled_data : 1000
train_batch_size_2 : 64
Using downloaded and verified file: ../datasets/sslbadgan/data/train_32x32.mat
Using downloaded and verified file: ../datasets/sslbadgan/data/test_32x32.mat
labeled size 1000 unlabeled size 73257 dev size 26032
===> Init small-conv for svhn
#0    train: 2.3207, 0.8910 | dev: 2.3044, 0.8597 | best: 0.8597 | unl acc: 0.0014 | gen acc: 0.0014 | max unl acc: 0.0014 | max gen acc: 0.0014 | lab loss: 0.0032 | unl loss: 0.0017 | ent loss: 0.0003 | fm loss: 0.0004 | pt loss: 0.0005 | [Eval] unl acc: 1.0000, gen acc: 1.0000, max unl acc: 1.0000, max gen acc: 1.0000 | lr: 0.00100
#730    train: 1.0229, 0.3520 | dev: 1.5486, 0.5003 | best: 0.5003 | unl acc: 0.6620 | gen acc: 0.4480 | max unl acc: 0.1520 | max gen acc: 0.0386 | lab loss: 2.0656 | unl loss: 0.6543 | ent loss: 0.2007 | fm loss: 0.6556 | pt loss: 0.6742 | [Eval] unl acc: 0.8185, gen acc: 0.3695, max unl acc: 0.6200, max gen acc: 0.1420 | lr: 0.00100
#1460    train: 0.0854, 0.0270 | dev: 0.8986, 0.2197 | best: 0.2197 | unl acc: 0.7806 | gen acc: 0.1736 | max unl acc: 0.6994 | max gen acc: 0.1097 | lab loss: 0.5513 | unl loss: 0.4269 | ent loss: 0.0691 | fm loss: 1.6328 | pt loss: 0.4101 | [Eval] unl acc: 0.9280, gen acc: 0.1190, max unl acc: 0.8995, max gen acc: 0.1015 | lr: 0.00100
#2190    train: 0.0113, 0.0010 | dev: 0.7827, 0.1806 | best: 0.1806 | unl acc: 0.8286 | gen acc: 0.1257 | max unl acc: 0.7986 | max gen acc: 0.1005 | lab loss: 0.1467 | unl loss: 0.3468 | ent loss: 0.0367 | fm loss: 1.8261 | pt loss: 0.3722 | [Eval] unl acc: 0.9170, gen acc: 0.0760, max unl acc: 0.8955, max gen acc: 0.0660 | lr: 0.00100
#2920    train: 0.0052, 0.0000 | dev: 0.7834, 0.1714 | best: 0.1714 | unl acc: 0.8415 | gen acc: 0.1130 | max unl acc: 0.8176 | max gen acc: 0.0934 | lab loss: 0.0870 | unl loss: 0.3217 | ent loss: 0.0306 | fm loss: 1.7142 | pt loss: 0.3503 | [Eval] unl acc: 0.9460, gen acc: 0.1215, max unl acc: 0.9300, max gen acc: 0.1025 | lr: 0.00100
#3650    train: 0.0035, 0.0000 | dev: 0.6306, 0.1459 | best: 0.1459 | unl acc: 0.8498 | gen acc: 0.1064 | max unl acc: 0.8286 | max gen acc: 0.0891 | lab loss: 0.0686 | unl loss: 0.3072 | ent loss: 0.0283 | fm loss: 1.6356 | pt loss: 0.3302 | [Eval] unl acc: 0.9110, gen acc: 0.0600, max unl acc: 0.9005, max gen acc: 0.0525 | lr: 0.00100
#4380    train: 0.0033, 0.0000 | dev: 0.6601, 0.1451 | best: 0.1451 | unl acc: 0.8502 | gen acc: 0.1018 | max unl acc: 0.8301 | max gen acc: 0.0853 | lab loss: 0.0552 | unl loss: 0.2999 | ent loss: 0.0270 | fm loss: 1.6221 | pt loss: 0.3097 | [Eval] unl acc: 0.8800, gen acc: 0.0450, max unl acc: 0.8705, max gen acc: 0.0420 | lr: 0.00100
#5110    train: 0.0014, 0.0000 | dev: 0.5418, 0.1272 | best: 0.1272 | unl acc: 0.8512 | gen acc: 0.1009 | max unl acc: 0.8316 | max gen acc: 0.0859 | lab loss: 0.0514 | unl loss: 0.2994 | ent loss: 0.0260 | fm loss: 1.5717 | pt loss: 0.3043 | [Eval] unl acc: 0.9410, gen acc: 0.0640, max unl acc: 0.9295, max gen acc: 0.0545 | lr: 0.00100
#5840    train: 0.0012, 0.0000 | dev: 0.5767, 0.1273 | best: 0.1272 | unl acc: 0.8559 | gen acc: 0.0954 | max unl acc: 0.8368 | max gen acc: 0.0816 | lab loss: 0.0456 | unl loss: 0.2889 | ent loss: 0.0246 | fm loss: 1.5362 | pt loss: 0.2884 | [Eval] unl acc: 0.8925, gen acc: 0.0465, max unl acc: 0.8860, max gen acc: 0.0440 | lr: 0.00100
#6570    train: 0.0020, 0.0000 | dev: 0.6306, 0.1364 | best: 0.1272 | unl acc: 0.8622 | gen acc: 0.0906 | max unl acc: 0.8458 | max gen acc: 0.0772 | lab loss: 0.0422 | unl loss: 0.2754 | ent loss: 0.0234 | fm loss: 1.5668 | pt loss: 0.2680 | [Eval] unl acc: 0.9550, gen acc: 0.1225, max unl acc: 0.9405, max gen acc: 0.1025 | lr: 0.00100
#7300    train: 0.0012, 0.0000 | dev: 0.5106, 0.1150 | best: 0.1150 | unl acc: 0.8624 | gen acc: 0.0907 | max unl acc: 0.8443 | max gen acc: 0.0775 | lab loss: 0.0401 | unl loss: 0.2744 | ent loss: 0.0233 | fm loss: 1.5125 | pt loss: 0.2579 | [Eval] unl acc: 0.9075, gen acc: 0.0540, max unl acc: 0.8940, max gen acc: 0.0520 | lr: 0.00100
#8030    train: 0.0006, 0.0000 | dev: 0.5227, 0.1088 | best: 0.1088 | unl acc: 0.8660 | gen acc: 0.0843 | max unl acc: 0.8502 | max gen acc: 0.0722 | lab loss: 0.0359 | unl loss: 0.2635 | ent loss: 0.0216 | fm loss: 1.4707 | pt loss: 0.2486 | [Eval] unl acc: 0.9555, gen acc: 0.0970, max unl acc: 0.9445, max gen acc: 0.0820 | lr: 0.00100
#8760    train: 0.0031, 0.0000 | dev: 0.6472, 0.1435 | best: 0.1088 | unl acc: 0.8671 | gen acc: 0.0840 | max unl acc: 0.8513 | max gen acc: 0.0719 | lab loss: 0.0349 | unl loss: 0.2614 | ent loss: 0.0212 | fm loss: 1.4561 | pt loss: 0.2472 | [Eval] unl acc: 0.9290, gen acc: 0.1140, max unl acc: 0.9165, max gen acc: 0.1030 | lr: 0.00100
#9490    train: 0.0015, 0.0000 | dev: 0.5386, 0.1210 | best: 0.1088 | unl acc: 0.8665 | gen acc: 0.0835 | max unl acc: 0.8513 | max gen acc: 0.0726 | lab loss: 0.0322 | unl loss: 0.2638 | ent loss: 0.0213 | fm loss: 1.4334 | pt loss: 0.2428 | [Eval] unl acc: 0.9580, gen acc: 0.1110, max unl acc: 0.9455, max gen acc: 0.0960 | lr: 0.00100
#10220    train: 0.0009, 0.0000 | dev: 0.5074, 0.1083 | best: 0.1083 | unl acc: 0.8691 | gen acc: 0.0816 | max unl acc: 0.8540 | max gen acc: 0.0707 | lab loss: 0.0334 | unl loss: 0.2563 | ent loss: 0.0207 | fm loss: 1.4117 | pt loss: 0.2356 | [Eval] unl acc: 0.9415, gen acc: 0.0750, max unl acc: 0.9340, max gen acc: 0.0665 | lr: 0.00100
#10950    train: 0.0006, 0.0000 | dev: 0.5290, 0.1122 | best: 0.1083 | unl acc: 0.8716 | gen acc: 0.0772 | max unl acc: 0.8583 | max gen acc: 0.0673 | lab loss: 0.0282 | unl loss: 0.2493 | ent loss: 0.0201 | fm loss: 1.4237 | pt loss: 0.2479 | [Eval] unl acc: 0.9225, gen acc: 0.0770, max unl acc: 0.9130, max gen acc: 0.0700 | lr: 0.00100
#11680    train: 0.0006, 0.0000 | dev: 0.4974, 0.1026 | best: 0.1026 | unl acc: 0.8723 | gen acc: 0.0767 | max unl acc: 0.8599 | max gen acc: 0.0671 | lab loss: 0.0293 | unl loss: 0.2504 | ent loss: 0.0196 | fm loss: 1.3825 | pt loss: 0.2434 | [Eval] unl acc: 0.9670, gen acc: 0.1035, max unl acc: 0.9610, max gen acc: 0.0880 | lr: 0.00100
#12410    train: 0.0005, 0.0000 | dev: 0.4799, 0.0987 | best: 0.0987 | unl acc: 0.8713 | gen acc: 0.0762 | max unl acc: 0.8583 | max gen acc: 0.0670 | lab loss: 0.0266 | unl loss: 0.2524 | ent loss: 0.0193 | fm loss: 1.3632 | pt loss: 0.2415 | [Eval] unl acc: 0.9450, gen acc: 0.0985, max unl acc: 0.9390, max gen acc: 0.0865 | lr: 0.00100
#13140    train: 0.0006, 0.0000 | dev: 0.4963, 0.0997 | best: 0.0987 | unl acc: 0.8708 | gen acc: 0.0793 | max unl acc: 0.8575 | max gen acc: 0.0686 | lab loss: 0.0274 | unl loss: 0.2519 | ent loss: 0.0194 | fm loss: 1.3464 | pt loss: 0.2293 | [Eval] unl acc: 0.9320, gen acc: 0.0600, max unl acc: 0.9265, max gen acc: 0.0560 | lr: 0.00100
#13870    train: 0.0004, 0.0000 | dev: 0.5083, 0.1001 | best: 0.0987 | unl acc: 0.8714 | gen acc: 0.0738 | max unl acc: 0.8585 | max gen acc: 0.0650 | lab loss: 0.0252 | unl loss: 0.2483 | ent loss: 0.0190 | fm loss: 1.3284 | pt loss: 0.2253 | [Eval] unl acc: 0.9560, gen acc: 0.0855, max unl acc: 0.9510, max gen acc: 0.0800 | lr: 0.00100
#14600    train: 0.0008, 0.0000 | dev: 0.4493, 0.0956 | best: 0.0956 | unl acc: 0.8667 | gen acc: 0.0790 | max unl acc: 0.8528 | max gen acc: 0.0688 | lab loss: 0.0248 | unl loss: 0.2559 | ent loss: 0.0195 | fm loss: 1.3146 | pt loss: 0.2260 | [Eval] unl acc: 0.9660, gen acc: 0.1145, max unl acc: 0.9590, max gen acc: 0.1010 | lr: 0.00100
#15330    train: 0.0003, 0.0000 | dev: 0.5447, 0.1013 | best: 0.0956 | unl acc: 0.8709 | gen acc: 0.0748 | max unl acc: 0.8590 | max gen acc: 0.0650 | lab loss: 0.0240 | unl loss: 0.2485 | ent loss: 0.0190 | fm loss: 1.2970 | pt loss: 0.2274 | [Eval] unl acc: 0.9120, gen acc: 0.0655, max unl acc: 0.9070, max gen acc: 0.0610 | lr: 0.00100
#16060    train: 0.0007, 0.0000 | dev: 0.4266, 0.0952 | best: 0.0952 | unl acc: 0.8733 | gen acc: 0.0737 | max unl acc: 0.8604 | max gen acc: 0.0646 | lab loss: 0.0237 | unl loss: 0.2470 | ent loss: 0.0187 | fm loss: 1.2611 | pt loss: 0.2191 | [Eval] unl acc: 0.9520, gen acc: 0.0890, max unl acc: 0.9440, max gen acc: 0.0785 | lr: 0.00100
#16790    train: 0.0005, 0.0000 | dev: 0.4055, 0.0906 | best: 0.0906 | unl acc: 0.8711 | gen acc: 0.0743 | max unl acc: 0.8583 | max gen acc: 0.0655 | lab loss: 0.0224 | unl loss: 0.2466 | ent loss: 0.0188 | fm loss: 1.2443 | pt loss: 0.2220 | [Eval] unl acc: 0.9505, gen acc: 0.1170, max unl acc: 0.9420, max gen acc: 0.1005 | lr: 0.00100
#17520    train: 0.0003, 0.0000 | dev: 0.4624, 0.0924 | best: 0.0906 | unl acc: 0.8732 | gen acc: 0.0719 | max unl acc: 0.8614 | max gen acc: 0.0634 | lab loss: 0.0234 | unl loss: 0.2428 | ent loss: 0.0183 | fm loss: 1.2537 | pt loss: 0.2149 | [Eval] unl acc: 0.9270, gen acc: 0.0570, max unl acc: 0.9225, max gen acc: 0.0545 | lr: 0.00100
#18250    train: 0.0004, 0.0000 | dev: 0.4872, 0.0983 | best: 0.0906 | unl acc: 0.8710 | gen acc: 0.0732 | max unl acc: 0.8587 | max gen acc: 0.0648 | lab loss: 0.0229 | unl loss: 0.2458 | ent loss: 0.0188 | fm loss: 1.1931 | pt loss: 0.2140 | [Eval] unl acc: 0.9640, gen acc: 0.1895, max unl acc: 0.9575, max gen acc: 0.1595 | lr: 0.00100
 

ssl-bad-gan: code changes when porting from python2 to python3

After the port, many parts no longer reproduced the original results. I finally concluded it was a version problem, and there was no way around painstakingly setting up the required versions.

How to fix Python's "TypeError: a bytes-like object is required, not 'str'"

I ran into this error today while studying Python; for the record, I am using Python 3.7.

The full error:

F:\Python3.7.0\python.exe F:/python/21.py
<class 'str'>
Traceback (most recent call last):
  File "F:/python/21.py", line 10, in <module>
    os.write(fd, str)
TypeError: a bytes-like object is required, not 'str'

The last line means a bytes-like object is required, not a str: in Python 3, os.write() expects bytes, so a str must be encoded first.
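A minimal fix (the file name here is illustrative): encode the string to bytes before passing it to os.write():

```python
import os

fd = os.open("demo.txt", os.O_RDWR | os.O_CREAT)
s = "hello"
os.write(fd, s.encode("utf-8"))   # str -> bytes; os.write(fd, s) raises TypeError
os.close(fd)
```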


 

TypeError: 'float' object cannot be interpreted as an integer

Solution: in Python 2, / between integers performs floor division, discarding the fractional part and returning an int. In Python 3, / is true division and the result is a float. Hence the error message: 'float' object cannot be interpreted as an integer.

The offending code:

for i in range(r / M):

This produces a float as the argument; to fix it, use the integer division operator:

for i in range(r // M):
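To make the difference concrete:

```python
r, M = 10, 4
assert r / M == 2.5      # Python 3: "/" is always true division (float)
assert r // M == 2       # "//" is floor division (int for int operands)

# range() rejects a float argument with exactly this TypeError
raised = False
try:
    range(r / M)
except TypeError:
    raised = True
assert raised
```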
 
