大師兄's Python Machine Learning Notes: Data Preprocessing


I. Obtaining the Data

  • Machine learning needs large amounts of real-world data, which can be obtained from the internet.
1. About Kaggle
  • Kaggle (https://www.kaggle.com/), founded in 2010, is an online platform for data mining and predictive modeling competitions.
  • Through Kaggle we can obtain real-world datasets.
2. Downloading the data
  • For example, we can download US county-level COVID-19 data here.
3. Reading the data
  • Since the data is mostly in .csv or .json format, it can be read directly with Python.
import os
from pprint import pprint

def read_csv(file):
    # read the file and return its lines as a list
    with open(file, 'r') as f:
        return f.readlines()

if __name__ == '__main__':
    file_path = os.path.join('D:\\', 'dataset', 'us-counties.csv')
    data = read_csv(file_path)
    pprint(data)  # pprint makes the output easier to read
['date,county,state,fips,cases,deaths\n',
 '2020-01-21,Snohomish,Washington,53061,1,0\n',
 '2020-01-22,Snohomish,Washington,53061,1,0\n',
 '2020-01-23,Snohomish,Washington,53061,1,0\n',
 '2020-01-24,Cook,Illinois,17031,1,0\n',
 '2020-01-24,Snohomish,Washington,53061,1,0\n',
 '2020-01-25,Orange,California,06059,1,0\n',
 '2020-01-25,Cook,Illinois,17031,1,0\n',
 '2020-01-25,Snohomish,Washington,53061,1,0\n',
... ... (omitted)
'2020-04-13,Teton,Wyoming,56039,56,0\n',
 '2020-04-13,Uinta,Wyoming,56041,4,0\n',
 '2020-04-13,Washakie,Wyoming,56043,4,0']
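  • Incidentally, the sections below switch to the pandas library; as a minimal sketch (assuming pandas is installed and the same file path as above), the same file can be loaded in one call:
import pandas as pd

# load the same CSV into a DataFrame instead of a list of raw lines
data = pd.read_csv('D:\\dataset\\us-counties.csv')
print(data.head())  # show the first five rows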

II. Detecting Missing Data

  • To ensure accurate results, missing data needs to be handled.
1. Filtering complete rows
  • Skip incomplete rows with a simple check.
import os
from csv import reader
from pprint import pprint

def read_csv(file):
    # read the data, skipping empty rows and rows with missing fields
    dataset = []

    with open(file, 'r') as f:
        lines = list(reader(f))
        data_len = len(lines[0])  # number of fields in the header row

        for line in lines:
            if line and len(line) == data_len:  # keep only non-empty, complete rows
                dataset.append(line)
        return dataset

if __name__ == '__main__':
    file_path = os.path.join('D:\\', 'dataset', 'us-counties.csv')
    data = read_csv(file_path)
    pprint(data)
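  • If the data is loaded with pandas instead (as in the subsections below), the same filtering can be done in one call; a minimal sketch, assuming the same file path:
import pandas as pd

data = pd.read_csv('D:\\dataset\\us-counties.csv')
complete = data.dropna()  # keep only the rows with no missing fields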
2. Checking elements for missing values
  • Check whether each individual element is missing.
import os
import pandas as pd
from pprint import pprint

def read_csv(file):
    # load the file into a DataFrame
    return pd.read_csv(file)

def find_null(data):
    # True wherever an element is missing
    return data.isnull()

if __name__ == '__main__':
    file_path = os.path.join('D:\\', 'dataset', 'us-counties.csv')
    data = read_csv(file_path)
    pprint(find_null(data))
        date  county  state   fips  cases  deaths
0      False   False  False  False  False   False
1      False   False  False  False  False   False
2      False   False  False  False  False   False
3      False   False  False  False  False   False
4      False   False  False  False  False   False
...      ...     ...    ...    ...    ...     ...
56536  False   False  False  False  False   False
56537  False   False  False  False  False   False
56538  False   False  False  False  False   False
56539  False   False  False  False  False   False
56540  False   False  False  False  False   False

[56541 rows x 6 columns]
3. Finding columns with missing values
  • Check whether each column contains any missing elements.
import os
import pandas as pd
from pprint import pprint

def read_csv(file):
    # load the file into a DataFrame
    return pd.read_csv(file)

def find_null_column(data):
    # boolean Series: True for each column that contains missing values
    return data.isnull().any()

if __name__ == '__main__':
    file_path = os.path.join('D:\\', 'dataset', 'us-counties.csv')
    data = read_csv(file_path)
    pprint(find_null_column(data))
date      False
county    False
state     False
fips       True
cases     False
deaths    False
dtype: bool
4. Counting missing elements
  • Count the number of missing elements in each column.
import os
import pandas as pd
from pprint import pprint

def read_csv(file):
    # load the file into a DataFrame
    return pd.read_csv(file)

def find_null_column(data):
    # boolean Series: True for each column that contains missing values
    return data.isnull().any()

def count_null(data, null_column):
    missing = data.columns[null_column].tolist()  # names of the columns with missing values
    return data[missing].isnull().sum()

if __name__ == '__main__':
    file_path = os.path.join('D:\\', 'dataset', 'us-counties.csv')
    data = read_csv(file_path)
    null_column = find_null_column(data)
    print(count_null(data, null_column))
fips    746
dtype: int64
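  • Note that pandas can also produce the counts for every column, zeros included, in a single call; a minimal sketch:
import pandas as pd

data = pd.read_csv('D:\\dataset\\us-counties.csv')
print(data.isnull().sum())  # per-column count of missing elements, including zero counts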
5. Replacing missing values
  • Replace missing values with a default value.
import os
import pandas as pd
from pprint import pprint

def read_csv(file):
    # load the file into a DataFrame
    return pd.read_csv(file)

def find_null_column(data):
    # boolean Series: True for each column that contains missing values
    return data.isnull().any()

def get_null_column_name(data, null_column):
    # names of the columns that contain missing values
    return data.columns[null_column].tolist()

def replace_null(data, columns, value):
    # replace missing values in the given columns
    for column in columns:
        data.loc[data[column].isnull(), column] = value
    return data

if __name__ == '__main__':
    file_path = os.path.join('D:\\', 'dataset', 'us-counties.csv')
    data = read_csv(file_path)
    null_column_list = get_null_column_name(data, find_null_column(data))  # columns with missing values
    new_data = replace_null(data, null_column_list, 0)  # replace missing values with 0
    pprint(find_null_column(new_data))
date      False
county    False
state     False
fips      False
cases     False
deaths    False
dtype: bool
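  • pandas also provides fillna for exactly this purpose; a minimal sketch that fills every missing value with 0 in one call:
import pandas as pd

data = pd.read_csv('D:\\dataset\\us-counties.csv')
new_data = data.fillna(0)       # replace every missing value with 0
print(new_data.isnull().any())  # every column should now report False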
6. Comparing missing values across columns
  • Check whether two columns are missing in the same rows and get summary statistics for the comparison.
import os
import pandas as pd
from pprint import pprint

def read_csv(file):
    # load the file into a DataFrame
    return pd.read_csv(file)

def compare_columns(data, col1, col2):
    # rows of col1 and col2 where col2 is missing
    res = data[[col1, col2]][data[col2].isnull()]
    # summary statistics for the comparison
    return res.describe()

if __name__ == '__main__':
    file_path = os.path.join('D:\\', 'dataset', 'us-counties.csv')
    data = read_csv(file_path)
    pprint(compare_columns(data, 'state', 'fips'))
       fips
count   0.0
mean    NaN
std     NaN
min     NaN
25%     NaN
50%     NaN
75%     NaN
max     NaN
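  • To test directly whether the two columns are ever missing in the same row, a boolean sketch (using the column names from the example above):
import pandas as pd

data = pd.read_csv('D:\\dataset\\us-counties.csv')
both_null = data['state'].isnull() & data['fips'].isnull()
print(both_null.sum())  # number of rows where both columns are missing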

III. Data Type Conversion

  • To keep results consistent, numeric data should be converted to floating point (float) wherever possible.
import os
import pandas as pd
from pprint import pprint

def read_csv(file):
    # load the file into a DataFrame
    return pd.read_csv(file)

def to_float(data):
    for column in data:
        if column == 'date': continue  # skip the date column
        if str(data[column][1]).isdigit():  # if the value is an integer string
            data[column] = data[column].astype('float')  # convert the column to float
    return data

if __name__ == '__main__':
    file_path = os.path.join('D:\\', 'dataset', 'us-counties.csv')
    data = read_csv(file_path)
    print(to_float(data))
             date      county       state     fips  cases  deaths
0      2020-01-21   Snohomish  Washington  53061.0    1.0     0.0
1      2020-01-22   Snohomish  Washington  53061.0    1.0     0.0
2      2020-01-23   Snohomish  Washington  53061.0    1.0     0.0
3      2020-01-24        Cook    Illinois  17031.0    1.0     0.0
4      2020-01-24   Snohomish  Washington  53061.0    1.0     0.0
...           ...         ...         ...      ...    ...     ...
56536  2020-04-13    Sublette     Wyoming  56035.0    1.0     0.0
56537  2020-04-13  Sweetwater     Wyoming  56037.0    9.0     0.0
56538  2020-04-13       Teton     Wyoming  56039.0   56.0     0.0
56539  2020-04-13       Uinta     Wyoming  56041.0    4.0     0.0
56540  2020-04-13    Washakie     Wyoming  56043.0    4.0     0.0

[56541 rows x 6 columns]
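  • A more robust alternative is pandas' to_numeric, since the .isdigit() check above only recognizes integer strings; a minimal sketch, assuming the numeric column names of this dataset:
import pandas as pd

data = pd.read_csv('D:\\dataset\\us-counties.csv')
numeric_cols = ['fips', 'cases', 'deaths']  # the numeric columns in this dataset
data[numeric_cols] = data[numeric_cols].apply(pd.to_numeric).astype('float')
print(data.dtypes)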

IV. Feature Scaling

  • To keep features on comparable scales, the data sometimes needs feature scaling.
1. Rescaling (min-max normalization)
  • Scale all features into the range 0 to 1, which helps gradient descent converge faster.
  • Formula: x' = \frac{x - \min(x)}{\max(x) - \min(x)}
import os
import pandas as pd
import numpy as np
from pprint import pprint

def read_csv(file):
    # load the file into a DataFrame
    return pd.read_csv(file)

def to_float(data):
    # convert numeric columns to float
    for column in data:
        if column == 'date': continue  # skip the date column
        if str(data[column][1]).isdigit():  # if the value is an integer string
            data[column] = data[column].astype('float')  # convert the column to float
    return data

def min_max_normalization(data):
    # min-max feature scaling
    for column in data:
        if column == 'date': continue  # skip the date column
        if isinstance(data[column][1], float):  # if the column holds floats
            x = data[column]
            x = (x - np.min(x)) / (np.max(x) - np.min(x))
            data[column] = x
    return data

if __name__ == '__main__':
    file_path = os.path.join('D:\\', 'dataset', 'us-counties.csv')
    data = read_csv(file_path)
    pprint(min_max_normalization(to_float(data)))
             date      county       state      fips     cases  deaths
0      2020-01-21   Snohomish  Washington  0.945823  0.000008     0.0
1      2020-01-22   Snohomish  Washington  0.945823  0.000008     0.0
2      2020-01-23   Snohomish  Washington  0.945823  0.000008     0.0
3      2020-01-24        Cook    Illinois  0.291232  0.000008     0.0
4      2020-01-24   Snohomish  Washington  0.945823  0.000008     0.0
...           ...         ...         ...       ...       ...     ...
61966  2020-04-15    Sublette     Wyoming  0.999855  0.000008     0.0
61967  2020-04-15  Sweetwater     Wyoming  0.999891  0.000085     0.0
61968  2020-04-15       Teton     Wyoming  0.999927  0.000499     0.0
61969  2020-04-15       Uinta     Wyoming  0.999964  0.000034     0.0
61970  2020-04-15    Washakie     Wyoming  1.000000  0.000034     0.0

[61971 rows x 6 columns]
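  • scikit-learn offers the same transformation as MinMaxScaler; a minimal sketch, assuming scikit-learn is installed and scaling only the numeric columns:
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

data = pd.read_csv('D:\\dataset\\us-counties.csv').fillna(0)  # fill missing fips values first (see section II)
numeric_cols = ['fips', 'cases', 'deaths']  # numeric columns in this dataset
scaler = MinMaxScaler()  # default feature_range is (0, 1)
data[numeric_cols] = scaler.fit_transform(data[numeric_cols])
print(data[numeric_cols].head())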
2. Mean Normalization
  • Another form of normalization, measuring each value's distance from the mean.
  • Formula: x' = \frac{x - \text{mean}(x)}{\max(x) - \min(x)}
import os
import pandas as pd
import numpy as np
from pprint import pprint

def read_csv(file):
    # load the file into a DataFrame
    return pd.read_csv(file)

def to_float(data):
    # convert numeric columns to float
    for column in data:
        if column == 'date': continue  # skip the date column
        if str(data[column][1]).isdigit():  # if the value is an integer string
            data[column] = data[column].astype('float')  # convert the column to float
    return data

def mean_normalization(data):
    # mean normalization feature scaling
    for column in data:
        if column == 'date': continue  # skip the date column
        if isinstance(data[column][1], float):  # if the column holds floats
            x = data[column]
            x = (x - np.mean(x)) / (np.max(x) - np.min(x))
            data[column] = x
    return data

if __name__ == '__main__':
    file_path = os.path.join('D:\\', 'dataset', 'us-counties.csv')
    data = read_csv(file_path)
    pprint(mean_normalization(to_float(data)))
             date      county       state      fips     cases   deaths
0      2020-01-21   Snohomish  Washington  0.426211 -0.001020 -0.00049
1      2020-01-22   Snohomish  Washington  0.426211 -0.001020 -0.00049
2      2020-01-23   Snohomish  Washington  0.426211 -0.001020 -0.00049
3      2020-01-24        Cook    Illinois -0.228380 -0.001020 -0.00049
4      2020-01-24   Snohomish  Washington  0.426211 -0.001020 -0.00049
...           ...         ...         ...       ...       ...      ...
61966  2020-04-15    Sublette     Wyoming  0.480243 -0.001020 -0.00049
61967  2020-04-15  Sweetwater     Wyoming  0.480279 -0.000944 -0.00049
61968  2020-04-15       Teton     Wyoming  0.480315 -0.000530 -0.00049
61969  2020-04-15       Uinta     Wyoming  0.480352 -0.000995 -0.00049
61970  2020-04-15    Washakie     Wyoming  0.480388 -0.000995 -0.00049

[61971 rows x 6 columns]
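  • The same mean normalization can be written in vectorized pandas form without the explicit loop; a sketch for a single column:
import pandas as pd

data = pd.read_csv('D:\\dataset\\us-counties.csv')
x = data['cases'].astype('float')
data['cases'] = (x - x.mean()) / (x.max() - x.min())  # mean normalization of one column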
3. Standardization
  • Feature standardization gives each feature in the data zero mean and unit variance.
  • Formula: x' = \frac{x - \bar{x}}{\sigma}
import os
import pandas as pd
import numpy as np
from pprint import pprint

def read_csv(file):
    # load the file into a DataFrame
    return pd.read_csv(file)

def to_float(data):
    # convert numeric columns to float
    for column in data:
        if column == 'date': continue  # skip the date column
        if str(data[column][1]).isdigit():  # if the value is an integer string
            data[column] = data[column].astype('float')  # convert the column to float
    return data

def standardization(data):
    # standardization: zero mean, unit variance
    for column in data:
        if column == 'date': continue  # skip the date column
        if isinstance(data[column][1], float):  # if the column holds floats
            x = data[column]
            x = (x - np.mean(x)) / np.std(x)  # divide by the standard deviation, not the variance
            data[column] = x
    return data

if __name__ == '__main__':
    file_path = os.path.join('D:\\', 'dataset', 'us-counties.csv')
    data = read_csv(file_path)
    pprint(standardization(to_float(data)))
             date      county       state      fips     cases    deaths
0      2020-01-21   Snohomish  Washington  0.000097 -0.000052 -0.000585
1      2020-01-22   Snohomish  Washington  0.000097 -0.000052 -0.000585
2      2020-01-23   Snohomish  Washington  0.000097 -0.000052 -0.000585
3      2020-01-24        Cook    Illinois -0.000052 -0.000052 -0.000585
4      2020-01-24   Snohomish  Washington  0.000097 -0.000052 -0.000585
...           ...         ...         ...       ...       ...       ...
61966  2020-04-15    Sublette     Wyoming  0.000110 -0.000052 -0.000585
61967  2020-04-15  Sweetwater     Wyoming  0.000110 -0.000048 -0.000585
61968  2020-04-15       Teton     Wyoming  0.000110 -0.000027 -0.000585
61969  2020-04-15       Uinta     Wyoming  0.000110 -0.000051 -0.000585
61970  2020-04-15    Washakie     Wyoming  0.000110 -0.000051 -0.000585

[61971 rows x 6 columns]
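  • As with rescaling, scikit-learn covers this case with StandardScaler; a minimal sketch under the same assumptions as before:
import pandas as pd
from sklearn.preprocessing import StandardScaler

data = pd.read_csv('D:\\dataset\\us-counties.csv').fillna(0)
numeric_cols = ['fips', 'cases', 'deaths']
scaler = StandardScaler()  # zero mean, unit variance per column
data[numeric_cols] = scaler.fit_transform(data[numeric_cols])
print(data[numeric_cols].head())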

4. Scaling to Unit Length
  • This method is also common in machine learning. Each component of the feature vector is divided by the vector's Euclidean norm, so that the whole vector has length 1.
  • Formula: x' = \frac{x}{\|x\|}
import os
import pandas as pd
import numpy as np
from pprint import pprint

def read_csv(file):
    # load the file into a DataFrame
    return pd.read_csv(file)

def to_float(data):
    # convert numeric columns to float
    for column in data:
        if column == 'date': continue  # skip the date column
        if str(data[column][1]).isdigit():  # if the value is an integer string
            data[column] = data[column].astype('float')  # convert the column to float
    return data

def scaling_to_unit_length(data):
    # scale each column to unit Euclidean length
    for column in data:
        if column == 'date': continue  # skip the date column
        if isinstance(data[column][1], float):  # if the column holds floats
            x = data[column]
            x = x / np.linalg.norm(x)
            data[column] = x
    return data

if __name__ == '__main__':
    file_path = os.path.join('D:\\', 'dataset', 'us-counties.csv')
    data = read_csv(file_path)
    pprint(scaling_to_unit_length(to_float(data)))
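  • scikit-learn's normalize function performs unit-length scaling as well; a minimal sketch, noting that axis=0 is needed to normalize columns, since by default it normalizes each row:
import pandas as pd
from sklearn.preprocessing import normalize

data = pd.read_csv('D:\\dataset\\us-counties.csv').fillna(0)
numeric_cols = ['fips', 'cases', 'deaths']
# axis=0 scales each column to unit Euclidean length, matching the loop above
data[numeric_cols] = normalize(data[numeric_cols], norm='l2', axis=0)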
