1. Code that preprocessed datasets by hand was often messy and hard to maintain.
2. Dataset-processing code should be decoupled from training code for better modularity and readability.
For these reasons, PyTorch provides two data-handling classes: DataLoader and Dataset.
from torch.utils.data import Dataset, DataLoader

class SimpleDataset(Dataset):
    def __init__(self, size):
        self.x = [i for i in range(size)]

    def __getitem__(self, index):
        return self.x[index]

    def __len__(self):
        return len(self.x)

dataset = SimpleDataset(322)
# Pass the dataset object to the DataLoader class.
# Nothing is loaded at this point; like a generator, the DataLoader yields batches lazily during iteration.
dataloader = DataLoader(dataset, batch_size=32, shuffle=True, drop_last=False)

for idx, data in enumerate(dataloader):
    print(idx)
    print(len(data))
To work with your own data, subclass Dataset and override __init__, __len__, and __getitem__:
__init__: read (or locate) the dataset
__len__: return the number of samples in the whole dataset
__getitem__: return the index-th sample of the dataset
To load the data, pass the dataset object to the DataLoader class:
batch_size: split the dataset into batches of this size
shuffle: whether to shuffle the data; if enabled, samples are not returned in order, and iterating the dataloader several times yields the data in a different order each time
drop_last: when the dataset is split into batches of batch_size, whether to drop the final incomplete batch (a quick check of this option follows below)
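As a quick sanity check of batch_size and drop_last (a minimal sketch; the RangeDataset helper below is made up purely for this illustration): with 322 samples and batch_size=32 there are 10 full batches plus 2 leftover samples.

from torch.utils.data import Dataset, DataLoader

class RangeDataset(Dataset):
    """A minimal map-style dataset over the integers 0..size-1."""
    def __init__(self, size):
        self.x = list(range(size))
    def __getitem__(self, index):
        return self.x[index]
    def __len__(self):
        return len(self.x)

dataset = RangeDataset(322)
# 322 = 10 * 32 + 2
print(len(DataLoader(dataset, batch_size=32, drop_last=False)))  # 11: the last batch holds only 2 samples
print(len(DataLoader(dataset, batch_size=32, drop_last=True)))   # 10: the 2 leftover samples are dropped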
In the example above each sample consists of a single value. When one sample carries several fields, a dict can be used:
from torch.utils.data import Dataset, DataLoader

class SimpleDataset(Dataset):
    def __init__(self, size):
        self.x = [i for i in range(size)]
        self.y = [i + 1 for i in range(size)]

    def __getitem__(self, index):
        sample = {"x": self.x[index], "y": self.y[index]}
        return sample

    def __len__(self):
        return len(self.x)

dataset = SimpleDataset(322)
dataloader = DataLoader(dataset, batch_size=32, shuffle=True, drop_last=False)

for idx, data in enumerate(dataloader):
    print(idx)
    print(len(data))
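With dict samples, the default collate function returns one dict per batch whose values are the batched fields: data["x"] and data["y"] are each tensors of length batch_size, so len(data) here prints 2 (the number of keys), not the batch size.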
Non-vectorized data, such as raw text, can be handled the same way:
from torch.utils.data import Dataset, DataLoader

class SimpleDataset(Dataset):
    def __init__(self, size):
        self.x = ["I love you" for i in range(size // 2)] + \
                 ["I love you very much" for i in range(size // 2, size)]
        self.y = [i + 1 for i in range(size)]

    def __getitem__(self, index):
        sample = {"x": self.x[index], "y": self.y[index]}
        return sample

    def __len__(self):
        return len(self.x)

dataset = SimpleDataset(322)
dataloader = DataLoader(dataset, batch_size=32, shuffle=True, drop_last=False)

for idx, data in enumerate(dataloader):
    print(idx)
    print(len(data))
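Strings pass through the default collate function untouched, so in each batch data["x"] is a plain Python list of strings while data["y"] is still batched into a tensor.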
Reading the raw text first and vectorizing it later has an advantage: when padding, you only need to pad up to the longest sequence in the current batch, not the longest sequence in the whole dataset.
Moreover, if the dataset is too large to vectorize entirely in SimpleDataset's __init__, you can read only the raw data there and vectorize after the DataLoader has formed the batches.
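One way to do this batch-level vectorization is to pass a custom collate_fn to the DataLoader. The sketch below assumes a toy vocabulary (vocab and the TextDataset name are made up for illustration) and pads each batch only up to its own maximum length:

import torch
from torch.utils.data import Dataset, DataLoader

# Toy vocabulary; index 0 is reserved for padding (illustrative only).
vocab = {"<pad>": 0, "I": 1, "love": 2, "you": 3, "very": 4, "much": 5}

class TextDataset(Dataset):
    def __init__(self, size):
        self.x = ["I love you"] * (size // 2) + ["I love you very much"] * (size - size // 2)
        self.y = [i + 1 for i in range(size)]

    def __getitem__(self, index):
        return {"x": self.x[index], "y": self.y[index]}

    def __len__(self):
        return len(self.x)

def collate_fn(batch):
    # Tokenize and vectorize, then pad only to the longest sequence in *this* batch.
    token_ids = [[vocab[w] for w in sample["x"].split()] for sample in batch]
    max_len = max(len(ids) for ids in token_ids)
    padded = [ids + [vocab["<pad>"]] * (max_len - len(ids)) for ids in token_ids]
    x = torch.tensor(padded)                              # shape: (batch_size, max_len of this batch)
    y = torch.tensor([sample["y"] for sample in batch])   # shape: (batch_size,)
    return {"x": x, "y": y}

dataloader = DataLoader(TextDataset(322), batch_size=32, shuffle=True, collate_fn=collate_fn)
for data in dataloader:
    print(data["x"].shape, data["y"].shape)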
However, the following approach is wrong:
from torch.utils.data import Dataset, DataLoader

class SimpleDataset(Dataset):
    def __init__(self, size):
        self.x = ["I love you".split() for i in range(size // 2)] + \
                 ["I love you very much".split() for i in range(size // 2, size)]
        # Each element of self.x is a list of tokens. The default collate function
        # zips the lists in a batch, treating each position as a separate field;
        # because the lists have different lengths, the trailing tokens "very" and
        # "much" are lost (or, in newer PyTorch versions, a RuntimeError is raised),
        # since some samples in the batch do not have those positions.
        self.y = [i + 1 for i in range(size)]

    def __getitem__(self, index):
        sample = {"x": self.x[index], "y": self.y[index]}
        return sample

    def __len__(self):
        return len(self.x)

dataset = SimpleDataset(322)
dataloader = DataLoader(dataset, batch_size=32, shuffle=True, drop_last=False)

for idx, data in enumerate(dataloader):
    print(idx)
    print(len(data))
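The fix is to keep each sample as a raw string (as in the earlier example) and tokenize after batching, or to pass a custom collate_fn that pads every sequence in the batch to the same length before stacking, as sketched above.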