A Hands-On Python Mini-Project: MNIST Handwritten Digit Recognition
Program flow diagram:
Propagation process:
Code walkthrough:
Setting up the environment
Use pip install <package name> to install the torch and torchvision packages (for example, pip install torch torchvision).
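The snippets below also assume the usual PyTorch imports are in place. Based on the names used later in this article, a minimal set would be:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms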
Preparing the dataset
Set the number of samples used per training batch, BATCH_SIZE, to 512, and the number of training epochs, EPOCHS, to 8.
BATCH_SIZE = 512   # number of samples per training batch
EPOCHS = 8         # number of training epochs
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
Downloading the training set
Normalize() normalizes the pixel values; the constants 0.1307 and 0.3081 used in the transform are the global mean and standard deviation of the MNIST dataset, which we treat as given values here.
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('data', train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=BATCH_SIZE, shuffle=True)
Downloading the test set
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('data', train=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=BATCH_SIZE, shuffle=True)
Plotting some images
We can use matplotlib to plot a few of these images.
import matplotlib.pyplot as plt

examples = enumerate(test_loader)
batch_idx, (example_data, example_targets) = next(examples)
print(example_targets)
print(example_data.shape)
print(example_data)

fig = plt.figure()
for i in range(6):
    plt.subplot(2, 3, i + 1)
    plt.tight_layout()
    plt.imshow(example_data[i][0], cmap='gray', interpolation='none')
    plt.title("Ground Truth: {}".format(example_targets[i]))
    plt.xticks([])
    plt.yticks([])
plt.show()
Building the neural network
Here we build a fully connected neural network, using three fully connected (linear) layers in the forward pass.
class linearNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 128)
        self.fc2 = nn.Linear(128, 64)
        self.fc3 = nn.Linear(64, 10)

    def forward(self, x):
        x = x.view(-1, 784)    # flatten each 1x28x28 image into a 784-dim vector
        x = self.fc1(x)
        x = F.relu(x)
        x = self.fc2(x)
        x = F.relu(x)
        x = self.fc3(x)
        x = F.log_softmax(x, dim=1)
        return x
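As a quick sanity check (an addition, not part of the original code), you can push a dummy batch through the untrained network and confirm the shapes: each 1x28x28 image is flattened to 784 features and mapped to 10 log-probabilities, one per digit class.

# Hypothetical shape check with random data
net = linearNet()
dummy = torch.randn(2, 1, 28, 28)   # a fake batch of two MNIST-sized images
out = net(dummy)
print(out.shape)                    # torch.Size([2, 10]): one log-probability per digit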
Training the model
First, we manually zero the gradients with optimizer.zero_grad(), because PyTorch accumulates gradients by default. Then we run the forward pass to produce the network's output and compute the negative log-likelihood loss between the output and the ground-truth labels. Calling loss.backward() collects a new set of gradients, and optimizer.step() applies them to update every network parameter.
def train(model, device, train_loader, optimizer, epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()                # clear accumulated gradients
        output = model(data)                 # forward pass
        loss = F.nll_loss(output, target)    # negative log-likelihood loss
        loss.backward()                      # backpropagate
        optimizer.step()                     # update parameters
        if batch_idx % 30 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))
Testing the model
def test(model, device, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            test_loss += F.nll_loss(output, target, reduction='sum').item()  # sum up the batch loss
            pred = output.max(1, keepdim=True)[1]  # index of the max log-probability
            correct += pred.eq(target.view_as(pred)).sum().item()
    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))
Looping over the training epochs
if __name__ == '__main__':
    model = linearNet().to(device)   # move the model to the same device (CPU/GPU) as the data
    optimizer = optim.Adam(model.parameters())
    for epoch in range(1, EPOCHS + 1):
        train(model, device, train_loader, optimizer, epoch)
        test(model, device, test_loader)
Saving the trained model
torch.save(model, 'MNIST.pth')
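Not shown in the original article, but once the model file exists it can be reloaded for inference. A minimal sketch, assuming the linearNet class definition is still available (importable) in the loading script:

# Hypothetical reload-and-predict sketch; the file name matches the save call above.
# Since torch.save(model, ...) stores the full module, newer PyTorch versions may
# require torch.load('MNIST.pth', weights_only=False).
loaded = torch.load('MNIST.pth').to(device)
loaded.eval()
with torch.no_grad():
    images, labels = next(iter(test_loader))
    preds = loaded(images.to(device)).argmax(dim=1)
    print('predicted:', preds[0].item(), 'actual:', labels[0].item())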
Run results:
Shared by: 蘇云云
This concludes this article on the MNIST handwritten digit recognition mini-project in Python.