Data Science Packages: pandas Basics (Missing Data, Statistics, Merging, Grouping)

I. Handling Missing Data

Pandas uses numpy.NaN to represent missing data; NaN values do not take part in computations.

import pandas as pd
import numpy as np
dates = pd.date_range('20160301', periods=6)
df = pd.DataFrame(data=np.random.randn(6, 4), index=dates, columns=list('ABCD'))
df
>>>
                   A         B         C         D
2016-03-01 -0.768789  1.347539  0.715367  0.581275
2016-03-02  0.163001 -0.509262 -0.615675 -0.220110
2016-03-03  0.815005 -0.343642  2.097537  1.706366
2016-03-04  0.755148  0.022449 -0.033679 -0.411535
2016-03-05  1.361040 -0.813593 -1.230564 -0.957386
2016-03-06  0.277162 -0.650544 -0.152688  1.070919
  • Reindex to add a new column E; the new column is filled with NaN
df1 = df.reindex(index=dates[0:4], columns=list(df.columns) + ['E'])
df1
>>>
                   A         B         C         D   E
2016-03-01 -0.768789  1.347539  0.715367  0.581275 NaN
2016-03-02  0.163001 -0.509262 -0.615675 -0.220110 NaN
2016-03-03  0.815005 -0.343642  2.097537  1.706366 NaN
2016-03-04  0.755148  0.022449 -0.033679 -0.411535 NaN

1. Assigning values to missing entries

df1.loc[dates[1:3], 'E'] = 1
df1
>>>
                   A         B         C         D    E
2016-03-01 -0.768789  1.347539  0.715367  0.581275  NaN
2016-03-02  0.163001 -0.509262 -0.615675 -0.220110  1.0
2016-03-03  0.815005 -0.343642  2.097537  1.706366  1.0
2016-03-04  0.755148  0.022449 -0.033679 -0.411535  NaN

The same assignment can also be written with a date-string slice:

df1.loc['20160302':'20160303', 'E'] = 5

2. Dropping rows and columns with missing data

This returns a new object and does not change df1's structure or data:

df1.dropna(how='any')
>>>
                   A         B         C         D    E
2016-03-02  0.163001 -0.509262 -0.615675 -0.220110  1.0
2016-03-03  0.815005 -0.343642  2.097537  1.706366  1.0

The axis parameter can be used to drop every column that contains missing data:

df1.dropna(axis=1)

The subset parameter can be used to drop every row whose E value is missing:

df1.dropna(subset=["E"])
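
Both calls are independent examples on the df1 shown above (neither is applied on top of the other). Reusing the same values, they would be expected to return roughly the following (a sketch):

>>> df1.dropna(axis=1)
                   A         B         C         D
2016-03-01 -0.768789  1.347539  0.715367  0.581275
2016-03-02  0.163001 -0.509262 -0.615675 -0.220110
2016-03-03  0.815005 -0.343642  2.097537  1.706366
2016-03-04  0.755148  0.022449 -0.033679 -0.411535
>>> df1.dropna(subset=["E"])
                   A         B         C         D    E
2016-03-02  0.163001 -0.509262 -0.615675 -0.220110  1.0
2016-03-03  0.815005 -0.343642  2.097537  1.706366  1.0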

3. Filling missing data

Again, this does not change df1's data or structure:

df1.fillna(value=5)
>>>
                   A         B         C         D    E
2016-03-01 -0.768789  1.347539  0.715367  0.581275  5.0
2016-03-02  0.163001 -0.509262 -0.615675 -0.220110  1.0
2016-03-03  0.815005 -0.343642  2.097537  1.706366  1.0
2016-03-04  0.755148  0.022449 -0.033679 -0.411535  5.0

4. Checking for NaN values

NaN (Not a Number) is a floating-point value that represents "not a number"; it is neither positive infinity nor negative infinity.
Wherever a NaN is present, True is returned:

pd.isnull(df1)
>>>
                A      B      C      D      E
2016-03-01  False  False  False  False   True
2016-03-02  False  False  False  False  False
2016-03-03  False  False  False  False  False
2016-03-04  False  False  False  False   True
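
isnull (or its alias isna) combines well with sum to count the missing values in each column; for the df1 above this would give:

pd.isnull(df1).sum()
>>>
A    0
B    0
C    0
D    0
E    2
dtype: int64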

II. Statistics

numpy.NaN does not take part in computations.

1. Mean, sum, and cumulative sum

df1.mean()              # column means (the default)
df1.mean(axis=1)        # row means
df.sum()                # column sums
df.sum(axis='columns')  # row sums
df.cumsum()             # cumulative sum along the index, returning all intermediate results
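
A quick way to see that NaN is skipped rather than propagated (a minimal sketch, independent of df):

pd.Series([1, np.nan, 3]).mean()  # (1 + 3) / 2, the NaN is ignored
>>>
2.0
pd.Series([1, np.nan, 3]).sum()   # 1 + 3
>>>
4.0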

2. The shift function

The main job of shift is to move the data in a DataFrame or Series. With freq=None (the default), the index stays in place and, depending on axis, the data is shifted up or down along the rows, or left or right along the columns. If the index is a time series, you can pass freq instead: the index is then shifted by periods * freq and the data itself does not move; a sketch of the freq form follows the shift(1) example below.

# shift(0)
s = pd.Series([1,3,5,np.nan,6,8], index=dates).shift(0)
s
>>>
2016-03-01    1.0
2016-03-02    3.0
2016-03-03    5.0
2016-03-04    NaN
2016-03-05    6.0
2016-03-06    8.0
Freq: D, dtype: float64

# shift(1)
s = pd.Series([1,3,5,np.nan,6,8], index=dates).shift(1)
s
>>>
2016-03-01    NaN
2016-03-02    1.0
2016-03-03    3.0
2016-03-04    5.0
2016-03-05    NaN
2016-03-06    6.0
Freq: D, dtype: float64
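
As described above, when the index is a DatetimeIndex, passing freq shifts the index itself instead of the data; a sketch using the same dates and values:

# shift(1, freq='D'): the index moves forward one day, the values stay attached to their labels
s = pd.Series([1, 3, 5, np.nan, 6, 8], index=dates).shift(1, freq='D')
s
>>>
2016-03-02    1.0
2016-03-03    3.0
2016-03-04    5.0
2016-03-05    NaN
2016-03-06    6.0
2016-03-07    8.0
Freq: D, dtype: float64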

3. The sub function

>>> df
                   A         B         C         D
2016-03-01 -0.768789  1.347539  0.715367  0.581275
2016-03-02  0.163001 -0.509262 -0.615675 -0.220110
2016-03-03  0.815005 -0.343642  2.097537  1.706366
2016-03-04  0.755148  0.022449 -0.033679 -0.411535
2016-03-05  1.361040 -0.813593 -1.230564 -0.957386
2016-03-06  0.277162 -0.650544 -0.152688  1.070919
>>> s
2016-03-01    NaN
2016-03-02    1.0
2016-03-03    3.0
2016-03-04    5.0
2016-03-05    NaN
2016-03-06    6.0
Freq: D, dtype: float64
  • Subtraction with sub: s is aligned on the index and its value for each date is subtracted from every column of df, row by row
>>> df.sub(s, axis='index')
                   A         B         C         D
2016-03-01       NaN       NaN       NaN       NaN
2016-03-02 -0.836999 -1.509262 -1.615675 -1.220110
2016-03-03 -2.184995 -3.343642 -0.902463 -1.293634
2016-03-04 -4.244852 -4.977551 -5.033679 -5.411535
2016-03-05       NaN       NaN       NaN       NaN
2016-03-06 -5.722838 -6.650544 -6.152688 -4.929081

4. Applying functions with apply

  • Use apply to run np.cumsum over each column
df.apply(np.cumsum)
>>>
                   A         B         C         D
2016-03-01 -0.768789  1.347539  0.715367  0.581275
2016-03-02 -0.605788  0.838277  0.099692  0.361166
2016-03-03  0.209217  0.494635  2.197229  2.067531
2016-03-04  0.964365  0.517084  2.163549  1.655996
2016-03-05  2.325405 -0.296509  0.932986  0.698610
2016-03-06  2.602567 -0.947053  0.780298  1.769529
  • Use apply with a custom function to return the difference between each column's maximum and minimum
df.apply(lambda x: x.max() - x.min())
>>>
A    2.129829
B    2.161131
C    3.328100
D    2.663752
dtype: float64
  • Return the difference between each row's maximum and minimum (expected output sketched below)
df.apply(lambda x: x.max() - x.min(), axis=1)
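
Computed from the df values shown above, the expected output would be roughly:

>>>
2016-03-01    2.116328
2016-03-02    0.778676
2016-03-03    2.441179
2016-03-04    1.166683
2016-03-05    2.591604
2016-03-06    1.721463
Freq: D, dtype: float64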

5. Value counts and mode

s = pd.Series(np.random.randint(0, 7, size=10))
s
>>>
0    4
1    6
2    6
3    0
4    5
5    4
6    6
7    4
8    1
9    1

Counting: how many times each value occurs

>>> s.value_counts()
6    3
4    3
1    2
5    1
0    1
dtype: int64

Mode: 4 and 6 occur most often

>>> s.mode()
0    4
1    6

III. Merging Data

1. The concat function

>>> df = pd.DataFrame(np.random.randn(10, 4), columns=list('ABCD'))
>>> df
          A         B         C         D
0 -0.650238  1.293171  1.091541  0.178952
1 -0.164027 -0.053264  0.490023 -0.992589
2  0.782788  0.254571 -1.867243  0.529259
3 -0.429579  0.562526 -0.436258  2.364229
4  2.234823 -1.420623 -0.809523  0.165442
5 -0.277173 -0.151315 -0.132357 -0.992252
6  0.261993  0.018131  1.355897  0.151521
7 -0.696561 -2.785051 -0.771156  0.278054
8  0.995268  0.083372 -0.498132  0.946659
9  0.675779 -0.853811 -0.674370 -1.470860

>>> df.iloc[:3]
          A         B         C         D
0 -0.650238  1.293171  1.091541  0.178952
1 -0.164027 -0.053264  0.490023 -0.992589
2  0.782788  0.254571 -1.867243  0.529259

>>> df.iloc[3:7]
          A         B         C         D
3 -0.429579  0.562526 -0.436258  2.364229
4  2.234823 -1.420623 -0.809523  0.165442
5 -0.277173 -0.151315 -0.132357 -0.992252
6  0.261993  0.018131  1.355897  0.151521

>>> df.iloc[7:]
          A         B         C         D
7 -0.696561 -2.785051 -0.771156  0.278054
8  0.995268  0.083372 -0.498132  0.946659
9  0.675779 -0.853811 -0.674370 -1.470860
  • Stitch the three pieces back together with concat
df1 = pd.concat([df.iloc[:3], df.iloc[3:7], df.iloc[7:]])
df1
>>>
          A         B         C         D
0 -0.650238  1.293171  1.091541  0.178952
1 -0.164027 -0.053264  0.490023 -0.992589
2  0.782788  0.254571 -1.867243  0.529259
3 -0.429579  0.562526 -0.436258  2.364229
4  2.234823 -1.420623 -0.809523  0.165442
5 -0.277173 -0.151315 -0.132357 -0.992252
6  0.261993  0.018131  1.355897  0.151521
7 -0.696561 -2.785051 -0.771156  0.278054
8  0.995268  0.083372 -0.498132  0.946659
9  0.675779 -0.853811 -0.674370 -1.470860

2. SQL-style joins (the merge function)

>>> left = pd.DataFrame({'key': ['foo', 'foo'], 'lval': [1, 2]})
>>> right = pd.DataFrame({'key': ['foo', 'foo'], 'rval': [4, 5]})
>>> left
   key  lval
0  foo     1
1  foo     2
>>> right
   key  rval
0  foo     4
1  foo     5

Merge on the shared key:

# SELECT * FROM left INNER JOIN right ON left.key = right.key;
pd.merge(left, right, on='key')
>>>
   key  lval  rval
0  foo     1     4
1  foo     1     5
2  foo     2     4
3  foo     2     5
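
merge also accepts a how parameter ('inner', 'left', 'right', 'outer') matching the SQL join types. A minimal sketch with hypothetical frames left2/right2 whose keys only partly overlap:

left2 = pd.DataFrame({'key': ['foo', 'bar'], 'lval': [1, 2]})
right2 = pd.DataFrame({'key': ['foo', 'baz'], 'rval': [4, 5]})
# SELECT * FROM left2 LEFT JOIN right2 ON left2.key = right2.key;
pd.merge(left2, right2, on='key', how='left')
>>>
   key  lval  rval
0  foo     1   4.0
1  bar     2   NaN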

3. The append function

append is a method on Series and DataFrame; by default it concatenates along axis=0, i.e. it appends rows (see the note on newer pandas versions after the example below).

>>> s = pd.Series(np.random.randint(1, 5, size=4), index=list('ABCD'))
>>> df.append(s, ignore_index=True)
           A         B         C         D
0  -0.650238  1.293171  1.091541  0.178952
1  -0.164027 -0.053264  0.490023 -0.992589
2   0.782788  0.254571 -1.867243  0.529259
3  -0.429579  0.562526 -0.436258  2.364229
4   2.234823 -1.420623 -0.809523  0.165442
5  -0.277173 -0.151315 -0.132357 -0.992252
6   0.261993  0.018131  1.355897  0.151521
7  -0.696561 -2.785051 -0.771156  0.278054
8   0.995268  0.083372 -0.498132  0.946659
9   0.675779 -0.853811 -0.674370 -1.470860
10  2.000000  2.000000  4.000000  2.000000
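
Note that append was deprecated in pandas 1.4 and removed in 2.0; the same row-append can be written with concat, a sketch that should produce the result above:

# turn s into a one-row DataFrame with columns A-D, then concatenate it as row 10
pd.concat([df, s.to_frame().T], ignore_index=True)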

IV. Grouping and Aggregation

  • Split the data into groups
  • Apply a function to each group, producing one result per group
  • Combine the per-group results into the final output
df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
                          'foo', 'bar', 'foo', 'foo'],
                   'B' : ['one', 'one', 'two', 'three',
                           'two', 'two', 'one', 'three'],
                   'C' : np.random.randn(8),
                   'D' : np.random.randn(8)})
df
>>>
     A      B         C         D
0  foo    one -0.907531  0.245216
1  bar    one  0.575741  0.917884
2  foo    two -0.725860  1.146291
3  bar  three -0.421367 -0.885779
4  foo    two -0.363707 -1.905409
5  bar    two -1.307323  1.316524
6  foo    one  0.436804  1.164812
7  foo  three -1.146570  2.534074
  • Group by 'A' and sum the remaining columns within each group
>>> df.groupby('A').sum()
            C         D
A
bar -1.152949  1.348629
foo -2.706864  3.184985
  • Group by both 'A' and 'B' and sum the remaining columns within each group
# df.groupby(['B', 'A']).sum()
df.groupby(['A', 'B']).sum()
>>>
                  C         D
A   B
bar one    0.575741  0.917884
    three -0.421367 -0.885779
    two   -1.307323  1.316524
foo one   -0.470727  1.410028
    three -1.146570  2.534074
    two   -1.089567 -0.759117
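
groupby is not limited to sum: size counts the rows in each group, and agg lets you apply a different reduction to each column. A minimal sketch for the df above (size output shown; agg output omitted):

df.groupby('A').size()
>>>
A
bar    3
foo    5
dtype: int64
# a different aggregation per column: mean of C and max of D within each group
df.groupby('A').agg({'C': 'mean', 'D': 'max'})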