How to Make a Network Model Train Faster

科技綠洲 · Source: Python數(shù)據(jù)科學(xué) · Author: Python數(shù)據(jù)科學(xué) · 2023-11-03 10:00

If the dataset is large and the network is deep, training becomes slow. In that case we can speed it up with PyTorch's AMP (autocast together with GradScaler). That is what this post covers: it compares autocast with GradScaler and shows how automatic mixed precision accelerates model training.

Note that PyTorch 1.6+ already ships with torch.cuda.amp built in, so NVIDIA's apex library (half-precision acceleration) is no longer required. For convenience we do not use apex (its installation is a hassle) and use torch.cuda.amp instead.
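
A quick sanity check that the built-in AMP module is importable (a minimal sketch; it does nothing beyond the import):

import torch
# torch.cuda.amp ships with PyTorch 1.6+; if this import fails, the installed
# PyTorch is too old for the code in this post
from torch.cuda.amp import autocast, GradScaler
print(torch.__version__)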

AMP (Automatic Mixed Precision): so what exactly is automatic mixed precision?

A bit of history first: NVIDIA's apex came first, and NVIDIA's developers later contributed it to PyTorch 1.6+, where it became torch.cuda.amp [this is the author's own summary and may be inaccurate; corrections welcome in the comments].

In more detail: by default, most deep learning frameworks train with 32-bit floating point. In 2017 NVIDIA developed a mixed-precision training approach (apex) that combines single precision (FP32) and half precision (FP16) during training; with the same hyperparameters it reaches essentially the same accuracy as FP32 while running considerably faster.

Then came the AMP era (here meaning torch.cuda.amp specifically), with two key ideas. "Automatic" means that tensor dtypes change automatically: the framework adjusts each tensor's dtype as needed, although a few places may still require manual intervention. "Mixed precision" means more than one precision is used, namely torch.FloatTensor and torch.HalfTensor. And as the name torch.cuda.amp suggests, this feature only works on CUDA.
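
A minimal sketch of the "automatic" part (assuming a CUDA-capable GPU is available): inside autocast(), FP16-eligible ops such as matmul produce half-precision outputs, while tensors created outside stay FP32.

import torch
from torch.cuda.amp import autocast

a = torch.randn(8, 8, device="cuda")  # created as torch.float32
b = torch.randn(8, 8, device="cuda")  # created as torch.float32
with autocast():
    c = a @ b                         # matmul runs in FP16 under autocast
print(a.dtype, c.dtype)               # torch.float32 torch.float16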

Why should we use AMP (automatic mixed precision)?

1. Lower GPU memory usage (an FP16 advantage).

2. Faster training and inference (an FP16 advantage).

3. Tensor Cores (NVIDIA Tensor Core) are now widespread and are built for low precision (an FP16 advantage).

4. Mixed-precision training mitigates the rounding-error problem (FP16 suffers from it, whereas FP32 avoids it).

5. Loss scaling: with mixed precision the model may even fail to converge, because small activation-gradient values underflow; torch.cuda.amp.GradScaler scales the loss up to prevent this gradient underflow (see the short sketch after this list).
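
A tiny illustration of the underflow that loss scaling prevents (a sketch added here, not from the original post): a gradient value that FP32 holds fine becomes zero in FP16, but survives once multiplied by a scale factor, which is what GradScaler does to the loss before backward().

import torch

g = torch.tensor(1e-8, dtype=torch.float32)  # small but non-zero gradient value
print(g.half())                              # tensor(0., dtype=torch.float16) -> underflow
print((g * 65536).half())                    # the scaled value is representable in FP16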

To be clear, the point of this post is how to speed up model training, not the underlying theory. The examples use AlexNet as the architecture (it expects 227x227x3 input images), CIFAR10 as the dataset, AdamW as the optimizer, and ReduceLROnPlateau as the learning-rate scheduler. The machine is a Lenovo Legion laptop with an RTX 2060; modest, but enough for these tests.

The post covers three setups: 1. training and evaluation without DDP or DP (then with AMP added), 2. training and evaluation with DP (then with AMP added), and 3. single-process multi-GPU DDP training and evaluation (then with AMP added).

The file layout when running these programs:

D:/PycharmProject/Simple-CV-Pytorch-master
|
|----AMP (train_without.py, train_DP.py, train_autocast.py, train_GradScaler.py, eval_XXX.py,
|        etc.; the alexnet.py added later also lives here)
|
|----tensorboard (folder for TensorBoard logs)
|
|----checkpoint (folder for saved models)
|
|----data (folder for the dataset)

1. Training and evaluation without DDP or DP

The experiment without DDP or DP serves as our baseline.

(1) Training and evaluation source for the original model:

Training source:

Note: this code is deliberately bare-bones, just a first-draft skeleton; it only needs to be roughly understandable.

train_without.py

import time
import torch
import torchvision
from torch import nn
from torch.utils.data import DataLoader
from torchvision.models import alexnet
from torchvision import transforms
from torch.utils.tensorboard import SummaryWriter
import numpy as np
import argparse


def parse_args():
    parser = argparse.ArgumentParser(description='CV Train')
    parser.add_mutually_exclusive_group()
    parser.add_argument('--dataset', type=str, default='CIFAR10', help='CIFAR10')
    parser.add_argument('--dataset_root', type=str, default='../data', help='Dataset root directory path')
    parser.add_argument('--img_size', type=int, default=227, help='image size')
    parser.add_argument('--tensorboard', type=str, default=True, help='Use tensorboard for loss visualization')
    parser.add_argument('--tensorboard_log', type=str, default='../tensorboard', help='tensorboard folder')
    parser.add_argument('--cuda', type=str, default=True, help='if is cuda available')
    parser.add_argument('--batch_size', type=int, default=64, help='batch size')
    parser.add_argument('--lr', type=float, default=1e-4, help='learning rate')
    parser.add_argument('--epochs', type=int, default=20, help='Number of epochs to train.')
    parser.add_argument('--checkpoint', type=str, default='../checkpoint', help='Save .pth fold')
    return parser.parse_args()


args = parse_args()

# 1.Create SummaryWriter
if args.tensorboard:
    writer = SummaryWriter(args.tensorboard_log)

# 2.Ready dataset
if args.dataset == 'CIFAR10':
    train_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=True, transform=transforms.Compose(
        [transforms.Resize(args.img_size), transforms.ToTensor()]), download=True)
else:
    raise ValueError("Dataset is not CIFAR10")
cuda = torch.cuda.is_available()
print('CUDA available: {}'.format(cuda))

# 3.Length
train_dataset_size = len(train_dataset)
print("the train dataset size is {}".format(train_dataset_size))

# 4.DataLoader
train_dataloader = DataLoader(dataset=train_dataset, batch_size=args.batch_size)

# 5.Create model
model = alexnet()

if args.cuda == cuda:
    model = model.cuda()

# 6.Create loss
cross_entropy_loss = nn.CrossEntropyLoss()

# 7.Optimizer
optim = torch.optim.AdamW(model.parameters(), lr=args.lr)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, patience=3, verbose=True)
# 8. Set some parameters to control loop
# epoch
iter = 0
t0 = time.time()
for epoch in range(args.epochs):
    t1 = time.time()
    print(" -----------------the {} number of training epoch --------------".format(epoch))
    model.train()
    for data in train_dataloader:
        loss = 0
        imgs, targets = data
        if args.cuda == cuda:
            cross_entropy_loss = cross_entropy_loss.cuda()
            imgs, targets = imgs.cuda(), targets.cuda()
        outputs = model(imgs)
        loss_train = cross_entropy_loss(outputs, targets)
        loss = loss_train.item() + loss
        if args.tensorboard:
            writer.add_scalar("train_loss", loss_train.item(), iter)

        optim.zero_grad()
        loss_train.backward()
        optim.step()
        iter = iter + 1
        if iter % 100 == 0:
            print(
                "Epoch: {} | Iteration: {} | lr: {} | loss: {} | np.mean(loss): {} "
                    .format(epoch, iter, optim.param_groups[0]['lr'], loss_train.item(),
                            np.mean(loss)))
    if args.tensorboard:
        writer.add_scalar("lr", optim.param_groups[0]['lr'], epoch)
    scheduler.step(np.mean(loss))
    t2 = time.time()
    h = (t2 - t1) // 3600
    m = ((t2 - t1) % 3600) // 60
    s = ((t2 - t1) % 3600) % 60
    print("epoch {} is finished, and time is {}h{}m{}s".format(epoch, int(h), int(m), int(s)))

    if epoch % 1 == 0:
        print("Save state, iter: {} ".format(epoch))
        torch.save(model.state_dict(), "{}/AlexNet_{}.pth".format(args.checkpoint, epoch))

torch.save(model.state_dict(), "{}/AlexNet.pth".format(args.checkpoint))
t3 = time.time()
h_t = (t3 - t0) // 3600
m_t = ((t3 - t0) % 3600) // 60
s_t = ((t3 - t0) % 3600) % 60
print("The finished time is {}h{}m{}s".format(int(h_t), int(m_t), int(s_t)))
if args.tensorboard:
    writer.close()

Run results:

(screenshots omitted)

TensorBoard view:

(screenshots omitted)

Evaluation source:

The code here is especially crude, in particular the device handling and the accuracy computation; it is for reference only, do not imitate it!

eval_without.py

import torch
import torchvision
from torch.utils.data import DataLoader
from torchvision.transforms import transforms
from alexnet import alexnet
import argparse


# eval
def parse_args():
    parser = argparse.ArgumentParser(description='CV Evaluation')
    parser.add_mutually_exclusive_group()
    parser.add_argument('--dataset', type=str, default='CIFAR10', help='CIFAR10')
    parser.add_argument('--dataset_root', type=str, default='../data', help='Dataset root directory path')
    parser.add_argument('--img_size', type=int, default=227, help='image size')
    parser.add_argument('--batch_size', type=int, default=64, help='batch size')
    parser.add_argument('--checkpoint', type=str, default='../checkpoint', help='Save .pth fold')
    return parser.parse_args()


args = parse_args()
# 1.Create model
model = alexnet()


# 2.Ready Dataset
if args.dataset == 'CIFAR10':
    test_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=False,
                                                transform=transforms.Compose(
                                                    [transforms.Resize(args.img_size),
                                                     transforms.ToTensor()]),
                                                download=True)
else:
    raise ValueError("Dataset is not CIFAR10")
# 3.Length
test_dataset_size = len(test_dataset)
print("the test dataset size is {}".format(test_dataset_size))

# 4.DataLoader
test_dataloader = DataLoader(dataset=test_dataset, batch_size=args.batch_size)

# 5. Set some parameters for testing the network
total_accuracy = 0

# test
model.eval()
with torch.no_grad():
    for data in test_dataloader:
        imgs, targets = data
        device = torch.device('cpu')
        imgs, targets = imgs.to(device), targets.to(device)
        model_load = torch.load("{}/AlexNet.pth".format(args.checkpoint), map_location=device)
        model.load_state_dict(model_load)
        outputs = model(imgs)
        outputs = outputs.to(device)
        accuracy = (outputs.argmax(1) == targets).sum()
        total_accuracy = total_accuracy + accuracy
        accuracy = total_accuracy / test_dataset_size
    print("the total accuracy is {}".format(accuracy))

Run results:

(screenshot omitted)

Analysis:

The original model took 22 min 22 s to train for 20 epochs and reached an accuracy of 0.8191.

(2) Training and evaluation source for the original model with autocast added:

Training source:

Rough outline of the training code:

from torch.cuda.amp import autocast as autocast

...

# Create model, default torch.FloatTensor
model = Net().cuda()

# SGD, Adam, AdamW, ...
optim = optim.XXX(model.parameters(),..)

...

for imgs,targets in dataloader:
    imgs,targets = imgs.cuda(),targets.cuda()

    ....
    with autocast():
        outputs = model(imgs)
        loss = loss_fn(outputs,targets)
   ...
    optim.zero_grad()
    loss.backward()
    optim.step()

...

train_autocast_without.py

import time
import torch
import torchvision
from torch import nn
from torch.cuda.amp import autocast
from torchvision import transforms
from torchvision.models import alexnet
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
import numpy as np
import argparse


def parse_args():
    parser = argparse.ArgumentParser(description='CV Train')
    parser.add_mutually_exclusive_group()
    parser.add_argument('--dataset', type=str, default='CIFAR10', help='CIFAR10')
    parser.add_argument('--dataset_root', type=str, default='../data', help='Dataset root directory path')
    parser.add_argument('--img_size', type=int, default=227, help='image size')
    parser.add_argument('--tensorboard', type=str, default=True, help='Use tensorboard for loss visualization')
    parser.add_argument('--tensorboard_log', type=str, default='../tensorboard', help='tensorboard folder')
    parser.add_argument('--cuda', type=str, default=True, help='if is cuda available')
    parser.add_argument('--batch_size', type=int, default=64, help='batch size')
    parser.add_argument('--lr', type=float, default=1e-4, help='learning rate')
    parser.add_argument('--epochs', type=int, default=20, help='Number of epochs to train.')
    parser.add_argument('--checkpoint', type=str, default='../checkpoint', help='Save .pth fold')
    return parser.parse_args()


args = parse_args()

# 1.Create SummaryWriter
if args.tensorboard:
    writer = SummaryWriter(args.tensorboard_log)

# 2.Ready dataset
if args.dataset == 'CIFAR10':
    train_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=True, transform=transforms.Compose(
        [transforms.Resize(args.img_size), transforms.ToTensor()]), download=True)
else:
    raise ValueError("Dataset is not CIFAR10")
cuda = torch.cuda.is_available()
print('CUDA available: {}'.format(cuda))

# 3.Length
train_dataset_size = len(train_dataset)
print("the train dataset size is {}".format(train_dataset_size))

# 4.DataLoader
train_dataloader = DataLoader(dataset=train_dataset, batch_size=args.batch_size)

# 5.Create model
model = alexnet()

if args.cuda == cuda:
    model = model.cuda()

# 6.Create loss
cross_entropy_loss = nn.CrossEntropyLoss()

# 7.Optimizer
optim = torch.optim.AdamW(model.parameters(), lr=args.lr)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, patience=3, verbose=True)
# 8. Set some parameters to control loop
# epoch
iter = 0
t0 = time.time()
for epoch in range(args.epochs):
    t1 = time.time()
    print(" -----------------the {} number of training epoch --------------".format(epoch))
    model.train()
    for data in train_dataloader:
        loss = 0
        imgs, targets = data
        if args.cuda == cuda:
            cross_entropy_loss = cross_entropy_loss.cuda()
            imgs, targets = imgs.cuda(), targets.cuda()
        with autocast():
            outputs = model(imgs)
            loss_train = cross_entropy_loss(outputs, targets)
        loss = loss_train.item() + loss
        if args.tensorboard:
            writer.add_scalar("train_loss", loss_train.item(), iter)

        optim.zero_grad()
        loss_train.backward()
        optim.step()
        iter = iter + 1
        if iter % 100 == 0:
            print(
                "Epoch: {} | Iteration: {} | lr: {} | loss: {} | np.mean(loss): {} "
                    .format(epoch, iter, optim.param_groups[0]['lr'], loss_train.item(),
                            np.mean(loss)))
    if args.tensorboard:
        writer.add_scalar("lr", optim.param_groups[0]['lr'], epoch)
    scheduler.step(np.mean(loss))
    t2 = time.time()
    h = (t2 - t1) // 3600
    m = ((t2 - t1) % 3600) // 60
    s = ((t2 - t1) % 3600) % 60
    print("epoch {} is finished, and time is {}h{}m{}s".format(epoch, int(h), int(m), int(s)))

    if epoch % 1 == 0:
        print("Save state, iter: {} ".format(epoch))
        torch.save(model.state_dict(), "{}/AlexNet_{}.pth".format(args.checkpoint, epoch))

torch.save(model.state_dict(), "{}/AlexNet.pth".format(args.checkpoint))
t3 = time.time()
h_t = (t3 - t0) // 3600
m_t = ((t3 - t0) % 3600) // 60
s_t = ((t3 - t0) % 3600) % 60
print("The finished time is {}h{}m{}s".format(int(h_t), int(m_t), int(s_t)))
if args.tensorboard:
    writer.close()

Run results:

(screenshots omitted)

TensorBoard view:

(screenshots omitted)

Evaluation source:

eval_without.py, the same as in 1.(1).

Run results:

(screenshot omitted)

Analysis:

The original model took 22 min 22 s for 20 epochs; with autocast added it takes 21 min 21 s, so training got faster, and the accuracy rose from the earlier 0.8191 to 0.8403.

(3) Training and evaluation source for the original model with autocast and GradScaler added:

torch.cuda.amp.GradScaler scales the loss value up to prevent gradient underflow.

Training source:

Rough outline of the training code:

from torch.cuda.amp import autocast as autocast
from torch.cuda.amp import GradScaler as GradScaler
...

# Create model, default torch.FloatTensor
model = Net().cuda()

# SGD, Adam, AdamW, ...
optim = optim.XXX(model.parameters(),..)
scaler = GradScaler()

...

for imgs,targets in dataloader:
    imgs,targets = imgs.cuda(),targets.cuda()
    ...
    optim.zero_grad()
    ....
    with autocast():
        outputs = model(imgs)
        loss = loss_fn(outputs,targets)

    scaler.scale(loss).backward()
    scaler.step(optim)
    scaler.update()
...

train_GradScaler_without.py

import time
import torch
import torchvision
from torch import nn
from torch.cuda.amp import autocast, GradScaler
from torchvision import transforms
from torchvision.models import alexnet
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
import numpy as np
import argparse


def parse_args():
    parser = argparse.ArgumentParser(description='CV Train')
    parser.add_mutually_exclusive_group()
    parser.add_argument('--dataset', type=str, default='CIFAR10', help='CIFAR10')
    parser.add_argument('--dataset_root', type=str, default='../data', help='Dataset root directory path')
    parser.add_argument('--img_size', type=int, default=227, help='image size')
    parser.add_argument('--tensorboard', type=str, default=True, help='Use tensorboard for loss visualization')
    parser.add_argument('--tensorboard_log', type=str, default='../tensorboard', help='tensorboard folder')
    parser.add_argument('--cuda', type=str, default=True, help='if is cuda available')
    parser.add_argument('--batch_size', type=int, default=64, help='batch size')
    parser.add_argument('--lr', type=float, default=1e-4, help='learning rate')
    parser.add_argument('--epochs', type=int, default=20, help='Number of epochs to train.')
    parser.add_argument('--checkpoint', type=str, default='../checkpoint', help='Save .pth fold')
    return parser.parse_args()


args = parse_args()

# 1.Create SummaryWriter
if args.tensorboard:
    writer = SummaryWriter(args.tensorboard_log)

# 2.Ready dataset
if args.dataset == 'CIFAR10':
    train_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=True, transform=transforms.Compose(
        [transforms.Resize(args.img_size), transforms.ToTensor()]), download=True)
else:
    raise ValueError("Dataset is not CIFAR10")
cuda = torch.cuda.is_available()
print('CUDA available: {}'.format(cuda))

# 3.Length
train_dataset_size = len(train_dataset)
print("the train dataset size is {}".format(train_dataset_size))

# 4.DataLoader
train_dataloader = DataLoader(dataset=train_dataset, batch_size=args.batch_size)

# 5.Create model
model = alexnet()

if args.cuda == cuda:
    model = model.cuda()

# 6.Create loss
cross_entropy_loss = nn.CrossEntropyLoss()

# 7.Optimizer
optim = torch.optim.AdamW(model.parameters(), lr=args.lr)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, patience=3, verbose=True)
scaler = GradScaler()
# 8. Set some parameters to control loop
# epoch
iter = 0
t0 = time.time()
for epoch in range(args.epochs):
    t1 = time.time()
    print(" -----------------the {} number of training epoch --------------".format(epoch))
    model.train()
    for data in train_dataloader:
        loss = 0
        imgs, targets = data
        optim.zero_grad()
        if args.cuda == cuda:
            cross_entropy_loss = cross_entropy_loss.cuda()
            imgs, targets = imgs.cuda(), targets.cuda()
        with autocast():
            outputs = model(imgs)
            loss_train = cross_entropy_loss(outputs, targets)
            loss = loss_train.item() + loss
        if args.tensorboard:
            writer.add_scalar("train_loss", loss_train.item(), iter)

        scaler.scale(loss_train).backward()
        scaler.step(optim)
        scaler.update()
        iter = iter + 1
        if iter % 100 == 0:
            print(
                "Epoch: {} | Iteration: {} | lr: {} | loss: {} | np.mean(loss): {} "
                    .format(epoch, iter, optim.param_groups[0]['lr'], loss_train.item(),
                            np.mean(loss)))
    if args.tensorboard:
        writer.add_scalar("lr", optim.param_groups[0]['lr'], epoch)
    scheduler.step(np.mean(loss))
    t2 = time.time()
    h = (t2 - t1) // 3600
    m = ((t2 - t1) % 3600) // 60
    s = ((t2 - t1) % 3600) % 60
    print("epoch {} is finished, and time is {}h{}m{}s".format(epoch, int(h), int(m), int(s)))

    if epoch % 1 == 0:
        print("Save state, iter: {} ".format(epoch))
        torch.save(model.state_dict(), "{}/AlexNet_{}.pth".format(args.checkpoint, epoch))

torch.save(model.state_dict(), "{}/AlexNet.pth".format(args.checkpoint))
t3 = time.time()
h_t = (t3 - t0) // 3600
m_t = ((t3 - t0) % 3600) // 60
s_t = ((t3 - t0) % 3600) % 60
print("The finished time is {}h{}m{}s".format(int(h_t), int(m_t), int(s_t)))
if args.tensorboard:
    writer.close()

Run results:

(screenshots omitted)

TensorBoard view:

(screenshots omitted)

Evaluation source:

eval_without.py, the same as in 1.(1).

Run results:

(screenshot omitted)

Analysis:

Why did training for 20 epochs now take 27 min 27 s, more than the original model without any AMP (22 min 22 s)?

This is because the loss scaling done by GradScaler adds overhead and slows training down; another possible reason is that the author's GPU is too small for the speedup to show.

2. Training and evaluation with DP (DataParallel)

(1) Training and evaluation source for the original model with DP:

Training source:

train_DP.py

import time
import torch
import torchvision
from torch import nn
from torch.utils.data import DataLoader
from torchvision.models import alexnet
from torchvision import transforms
from torch.utils.tensorboard import SummaryWriter
import numpy as np
import argparse


def parse_args():
    parser = argparse.ArgumentParser(description='CV Train')
    parser.add_mutually_exclusive_group()
    parser.add_argument('--dataset', type=str, default='CIFAR10', help='CIFAR10')
    parser.add_argument('--dataset_root', type=str, default='../data', help='Dataset root directory path')
    parser.add_argument('--img_size', type=int, default=227, help='image size')
    parser.add_argument('--tensorboard', type=str, default=True, help='Use tensorboard for loss visualization')
    parser.add_argument('--tensorboard_log', type=str, default='../tensorboard', help='tensorboard folder')
    parser.add_argument('--cuda', type=str, default=True, help='if is cuda available')
    parser.add_argument('--batch_size', type=int, default=64, help='batch size')
    parser.add_argument('--lr', type=float, default=1e-4, help='learning rate')
    parser.add_argument('--epochs', type=int, default=20, help='Number of epochs to train.')
    parser.add_argument('--checkpoint', type=str, default='../checkpoint', help='Save .pth fold')
    return parser.parse_args()


args = parse_args()

# 1.Create SummaryWriter
if args.tensorboard:
    writer = SummaryWriter(args.tensorboard_log)

# 2.Ready dataset
if args.dataset == 'CIFAR10':
    train_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=True, transform=transforms.Compose(
        [transforms.Resize(args.img_size), transforms.ToTensor()]), download=True)
else:
    raise ValueError("Dataset is not CIFAR10")
cuda = torch.cuda.is_available()
print('CUDA available: {}'.format(cuda))

# 3.Length
train_dataset_size = len(train_dataset)
print("the train dataset size is {}".format(train_dataset_size))

# 4.DataLoader
train_dataloader = DataLoader(dataset=train_dataset, batch_size=args.batch_size)

# 5.Create model
model = alexnet()

if args.cuda == cuda:
    model = model.cuda()
    model = torch.nn.DataParallel(model).cuda()
else:
    model = torch.nn.DataParallel(model)

# 6.Create loss
cross_entropy_loss = nn.CrossEntropyLoss()

# 7.Optimizer
optim = torch.optim.AdamW(model.parameters(), lr=args.lr)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, patience=3, verbose=True)
# 8. Set some parameters to control loop
# epoch
iter = 0
t0 = time.time()
for epoch in range(args.epochs):
    t1 = time.time()
    print(" -----------------the {} number of training epoch --------------".format(epoch))
    model.train()
    for data in train_dataloader:
        loss = 0
        imgs, targets = data
        if args.cuda == cuda:
            cross_entropy_loss = cross_entropy_loss.cuda()
            imgs, targets = imgs.cuda(), targets.cuda()
        outputs = model(imgs)
        loss_train = cross_entropy_loss(outputs, targets)
        loss = loss_train.item() + loss
        if args.tensorboard:
            writer.add_scalar("train_loss", loss_train.item(), iter)

        optim.zero_grad()
        loss_train.backward()
        optim.step()
        iter = iter + 1
        if iter % 100 == 0:
            print(
                "Epoch: {} | Iteration: {} | lr: {} | loss: {} | np.mean(loss): {} "
                    .format(epoch, iter, optim.param_groups[0]['lr'], loss_train.item(),
                            np.mean(loss)))
    if args.tensorboard:
        writer.add_scalar("lr", optim.param_groups[0]['lr'], epoch)
    scheduler.step(np.mean(loss))
    t2 = time.time()
    h = (t2 - t1) // 3600
    m = ((t2 - t1) % 3600) // 60
    s = ((t2 - t1) % 3600) % 60
    print("epoch {} is finished, and time is {}h{}m{}s".format(epoch, int(h), int(m), int(s)))

    if epoch % 1 == 0:
        print("Save state, iter: {} ".format(epoch))
        torch.save(model.state_dict(), "{}/AlexNet_{}.pth".format(args.checkpoint, epoch))

torch.save(model.state_dict(), "{}/AlexNet.pth".format(args.checkpoint))
t3 = time.time()
h_t = (t3 - t0) // 3600
m_t = ((t3 - t0) % 3600) // 60
s_t = ((t3 - t0) % 3600) % 60
print("The finished time is {}h{}m{}s".format(int(h_t), int(m_t), int(s_t)))
if args.tensorboard:
    writer.close()

Run results:

(screenshots omitted)

TensorBoard view:

(screenshots omitted)

Evaluation source:

eval_DP.py

import torch
import torchvision
from torch.utils.data import DataLoader
from torchvision.transforms import transforms
from alexnet import alexnet
import argparse


# eval
def parse_args():
    parser = argparse.ArgumentParser(description='CV Evaluation')
    parser.add_mutually_exclusive_group()
    parser.add_argument('--dataset', type=str, default='CIFAR10', help='CIFAR10')
    parser.add_argument('--dataset_root', type=str, default='../data', help='Dataset root directory path')
    parser.add_argument('--img_size', type=int, default=227, help='image size')
    parser.add_argument('--batch_size', type=int, default=64, help='batch size')
    parser.add_argument('--checkpoint', type=str, default='../checkpoint', help='Save .pth fold')
    return parser.parse_args()


args = parse_args()
# 1.Create model
model = alexnet()
model = torch.nn.DataParallel(model)

# 2.Ready Dataset
if args.dataset == 'CIFAR10':
    test_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=False,
                                                transform=transforms.Compose(
                                                    [transforms.Resize(args.img_size),
                                                     transforms.ToTensor()]),
                                                download=True)
else:
    raise ValueError("Dataset is not CIFAR10")
# 3.Length
test_dataset_size = len(test_dataset)
print("the test dataset size is {}".format(test_dataset_size))

# 4.DataLoader
test_dataloader = DataLoader(dataset=test_dataset, batch_size=args.batch_size)

# 5. Set some parameters for testing the network
total_accuracy = 0

# test
model.eval()
with torch.no_grad():
    for data in test_dataloader:
        imgs, targets = data
        device = torch.device('cpu')
        imgs, targets = imgs.to(device), targets.to(device)
        model_load = torch.load("{}/AlexNet.pth".format(args.checkpoint), map_location=device)
        model.load_state_dict(model_load)
        outputs = model(imgs)
        outputs = outputs.to(device)
        accuracy = (outputs.argmax(1) == targets).sum()
        total_accuracy = total_accuracy + accuracy
        accuracy = total_accuracy / test_dataset_size
    print("the total accuracy is {}".format(accuracy))

Run results:

(screenshot omitted)

(2) Training and evaluation source for DP with autocast:

Training source:

If you write the code like this, it will NOT take effect!

...
    model = Model()
    model = torch.nn.DataParallel(model)
    ...
    with autocast():
        output = model(imgs)
        loss = loss_fn(output)

The correct way; rough outline of the training flow:

# Option 1
class Model(nn.Module):
    @autocast()
    def forward(self, input):
        ...

# Option 2
class Model(nn.Module):
    def forward(self, input):
        with autocast():
            ...

Either option 1 or option 2 works; after that:

...
model = Model()
model = torch.nn.DataParallel(model)
with autocast():
    output = model(imgs)
    loss = loss_fn(output)
...

Model:

You must either decorate the forward function with @autocast() or put with autocast(): at the top of forward:

alexnet.py

import torch
import torch.nn as nn
from torchvision.models.utils import load_state_dict_from_url
from torch.cuda.amp import autocast
from typing import Any

__all__ = ['AlexNet', 'alexnet']

model_urls = {
    'alexnet': 'https://download.pytorch.org/models/alexnet-owt-4df8aa71.pth',
}


class AlexNet(nn.Module):

    def __init__(self, num_classes: int = 1000) -> None:
        super(AlexNet, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.avgpool = nn.AdaptiveAvgPool2d((6, 6))
        self.classifier = nn.Sequential(
            nn.Dropout(),
            nn.Linear(256 * 6 * 6, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    @autocast()
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.classifier(x)
        return x


def alexnet(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> AlexNet:
    r"""AlexNet model architecture from the
    `"One weird trick..." <https://arxiv.org/abs/1404.5997>`_ paper.
    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
        progress (bool): If True, displays a progress bar of the download to stderr
    """
    model = AlexNet(**kwargs)
    if pretrained:
        state_dict = load_state_dict_from_url(model_urls["alexnet"],
                                              progress=progress)
        model.load_state_dict(state_dict)
    return model

train_DP_autocast.py (imports the custom alexnet.py)

import time
import torch
from alexnet import alexnet
import torchvision
from torch import nn
from torch.utils.data import DataLoader
from torchvision import transforms
from torch.cuda.amp import autocast as autocast
from torch.utils.tensorboard import SummaryWriter
import numpy as np
import argparse


def parse_args():
    parser = argparse.ArgumentParser(description='CV Train')
    parser.add_mutually_exclusive_group()
    parser.add_argument('--dataset', type=str, default='CIFAR10', help='CIFAR10')
    parser.add_argument('--dataset_root', type=str, default='../data', help='Dataset root directory path')
    parser.add_argument('--img_size', type=int, default=227, help='image size')
    parser.add_argument('--tensorboard', type=str, default=True, help='Use tensorboard for loss visualization')
    parser.add_argument('--tensorboard_log', type=str, default='../tensorboard', help='tensorboard folder')
    parser.add_argument('--cuda', type=str, default=True, help='if is cuda available')
    parser.add_argument('--batch_size', type=int, default=64, help='batch size')
    parser.add_argument('--lr', type=float, default=1e-4, help='learning rate')
    parser.add_argument('--epochs', type=int, default=20, help='Number of epochs to train.')
    parser.add_argument('--checkpoint', type=str, default='../checkpoint', help='Save .pth fold')
    return parser.parse_args()


args = parse_args()

# 1.Create SummaryWriter
if args.tensorboard:
    writer = SummaryWriter(args.tensorboard_log)

# 2.Ready dataset
if args.dataset == 'CIFAR10':
    train_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=True, transform=transforms.Compose(
        [transforms.Resize(args.img_size), transforms.ToTensor()]), download=True)
else:
    raise ValueError("Dataset is not CIFAR10")
cuda = torch.cuda.is_available()
print('CUDA available: {}'.format(cuda))

# 3.Length
train_dataset_size = len(train_dataset)
print("the train dataset size is {}".format(train_dataset_size))

# 4.DataLoader
train_dataloader = DataLoader(dataset=train_dataset, batch_size=args.batch_size)

# 5.Create model
model = alexnet()

if args.cuda == cuda:
    model = model.cuda()
    model = torch.nn.DataParallel(model).cuda()
else:
    model = torch.nn.DataParallel(model)

# 6.Create loss
cross_entropy_loss = nn.CrossEntropyLoss()

# 7.Optimizer
optim = torch.optim.AdamW(model.parameters(), lr=args.lr)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, patience=3, verbose=True)
# 8. Set some parameters to control loop
# epoch
iter = 0
t0 = time.time()
for epoch in range(args.epochs):
    t1 = time.time()
    print(" -----------------the {} number of training epoch --------------".format(epoch))
    model.train()
    for data in train_dataloader:
        loss = 0
        imgs, targets = data
        if args.cuda == cuda:
            cross_entropy_loss = cross_entropy_loss.cuda()
            imgs, targets = imgs.cuda(), targets.cuda()
        with autocast():
            outputs = model(imgs)
            loss_train = cross_entropy_loss(outputs, targets)
        loss = loss_train.item() + loss
        if args.tensorboard:
            writer.add_scalar("train_loss", loss_train.item(), iter)

        optim.zero_grad()
        loss_train.backward()
        optim.step()
        iter = iter + 1
        if iter % 100 == 0:
            print(
                "Epoch: {} | Iteration: {} | lr: {} | loss: {} | np.mean(loss): {} "
                    .format(epoch, iter, optim.param_groups[0]['lr'], loss_train.item(),
                            np.mean(loss)))
    if args.tensorboard:
        writer.add_scalar("lr", optim.param_groups[0]['lr'], epoch)
    scheduler.step(np.mean(loss))
    t2 = time.time()
    h = (t2 - t1) // 3600
    m = ((t2 - t1) % 3600) // 60
    s = ((t2 - t1) % 3600) % 60
    print("epoch {} is finished, and time is {}h{}m{}s".format(epoch, int(h), int(m), int(s)))

    if epoch % 1 == 0:
        print("Save state, iter: {} ".format(epoch))
        torch.save(model.state_dict(), "{}/AlexNet_{}.pth".format(args.checkpoint, epoch))

torch.save(model.state_dict(), "{}/AlexNet.pth".format(args.checkpoint))
t3 = time.time()
h_t = (t3 - t0) // 3600
m_t = ((t3 - t0) % 3600) // 60
s_t = ((t3 - t0) % 3600) % 60
print("The finished time is {}h{}m{}s".format(int(h_t), int(m_t), int(s_t)))
if args.tensorboard:
    writer.close()

Run results:

(screenshots omitted)

TensorBoard view:

(screenshots omitted)

Evaluation source:

eval_DP.py, the same as in 2.(1) except that it imports the custom alexnet.py.

Run results:

(screenshot omitted)

Analysis:

DP with autocast took 21 min 21 s for 20 epochs, 1 min 1 s faster than DP without it (22 min 22 s).

DP without AMP reached an accuracy of 0.8216, which now drops to 0.8188, so automatic mixed precision does cost some accuracy; later this can be offset by increasing batch_size so that the runtime matches the original run while the accuracy recovers.

(3) Training and evaluation source for DP with autocast and GradScaler:

Training source:

train_DP_GradScaler.py (imports the custom alexnet.py)

import time
import torch
from alexnet import alexnet
import torchvision
from torch import nn
from torch.utils.data import DataLoader
from torchvision import transforms
from torch.cuda.amp import autocast as autocast
from torch.cuda.amp import GradScaler as GradScaler
from torch.utils.tensorboard import SummaryWriter
import numpy as np
import argparse


def parse_args():
    parser = argparse.ArgumentParser(description='CV Train')
    parser.add_mutually_exclusive_group()
    parser.add_argument('--dataset', type=str, default='CIFAR10', help='CIFAR10')
    parser.add_argument('--dataset_root', type=str, default='../data', help='Dataset root directory path')
    parser.add_argument('--img_size', type=int, default=227, help='image size')
    parser.add_argument('--tensorboard', type=str, default=True, help='Use tensorboard for loss visualization')
    parser.add_argument('--tensorboard_log', type=str, default='../tensorboard', help='tensorboard folder')
    parser.add_argument('--cuda', type=str, default=True, help='if is cuda available')
    parser.add_argument('--batch_size', type=int, default=64, help='batch size')
    parser.add_argument('--lr', type=float, default=1e-4, help='learning rate')
    parser.add_argument('--epochs', type=int, default=20, help='Number of epochs to train.')
    parser.add_argument('--checkpoint', type=str, default='../checkpoint', help='Save .pth fold')
    return parser.parse_args()


args = parse_args()

# 1.Create SummaryWriter
if args.tensorboard:
    writer = SummaryWriter(args.tensorboard_log)

# 2.Ready dataset
if args.dataset == 'CIFAR10':
    train_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=True, transform=transforms.Compose(
        [transforms.Resize(args.img_size), transforms.ToTensor()]), download=True)
else:
    raise ValueError("Dataset is not CIFAR10")
cuda = torch.cuda.is_available()
print('CUDA available: {}'.format(cuda))

# 3.Length
train_dataset_size = len(train_dataset)
print("the train dataset size is {}".format(train_dataset_size))

# 4.DataLoader
train_dataloader = DataLoader(dataset=train_dataset, batch_size=args.batch_size)

# 5.Create model
model = alexnet()

if args.cuda == cuda:
    model = model.cuda()
    model = torch.nn.DataParallel(model).cuda()
else:
    model = torch.nn.DataParallel(model)

# 6.Create loss
cross_entropy_loss = nn.CrossEntropyLoss()

# 7.Optimizer
optim = torch.optim.AdamW(model.parameters(), lr=args.lr)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, patience=3, verbose=True)
scaler = GradScaler()
# 8. Set some parameters to control loop
# epoch
iter = 0
t0 = time.time()
for epoch in range(args.epochs):
    t1 = time.time()
    print(" -----------------the {} number of training epoch --------------".format(epoch))
    model.train()
    for data in train_dataloader:
        loss = 0
        imgs, targets = data
        optim.zero_grad()
        if args.cuda == cuda:
            cross_entropy_loss = cross_entropy_loss.cuda()
            imgs, targets = imgs.cuda(), targets.cuda()
        with autocast():
            outputs = model(imgs)
            loss_train = cross_entropy_loss(outputs, targets)
            loss = loss_train.item() + loss
        if args.tensorboard:
            writer.add_scalar("train_loss", loss_train.item(), iter)

        scaler.scale(loss_train).backward()
        scaler.step(optim)
        scaler.update()

        iter = iter + 1
        if iter % 100 == 0:
            print(
                "Epoch: {} | Iteration: {} | lr: {} | loss: {} | np.mean(loss): {} "
                    .format(epoch, iter, optim.param_groups[0]['lr'], loss_train.item(),
                            np.mean(loss)))
    if args.tensorboard:
        writer.add_scalar("lr", optim.param_groups[0]['lr'], epoch)
    scheduler.step(np.mean(loss))
    t2 = time.time()
    h = (t2 - t1) // 3600
    m = ((t2 - t1) % 3600) // 60
    s = ((t2 - t1) % 3600) % 60
    print("epoch {} is finished, and time is {}h{}m{}s".format(epoch, int(h), int(m), int(s)))

    if epoch % 1 == 0:
        print("Save state, iter: {} ".format(epoch))
        torch.save(model.state_dict(), "{}/AlexNet_{}.pth".format(args.checkpoint, epoch))

torch.save(model.state_dict(), "{}/AlexNet.pth".format(args.checkpoint))
t3 = time.time()
h_t = (t3 - t0) // 3600
m_t = ((t3 - t0) % 3600) // 60
s_t = ((t3 - t0) % 3600) % 60
print("The finished time is {}h{}m{}s".format(int(h_t), int(m_t), int(s_t)))
if args.tensorboard:
    writer.close()

Run results:

(screenshots omitted)

TensorBoard view:

(screenshots omitted)

Evaluation source:

eval_DP.py, the same as in 2.(1) except that it imports the custom alexnet.py.

Run results:

(screenshot omitted)

Analysis:

As before, the loss scaling added by GradScaler slows DP training down.

With both autocast and GradScaler, DP reaches an accuracy of 0.8409, up from 0.8188 with autocast alone, and also clearly higher than DP without AMP (0.8216).

3. Single-process multi-GPU DDP training and evaluation
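
A note on launching (an assumption based on the defaults in the argument parser below, not spelled out in the original): rank defaults to 0 and world_size to 1, so each DDP script in this section is started as one ordinary process, and that single process owns the DistributedDataParallel wrapper in the single-process multi-GPU setup named in the heading, for example:

python train_DDP.py --batch_size 64 --epochs 20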

(1) Training and evaluation source for the original model with DDP:

Training source:

train_DDP.py

import time
import torch
from torchvision.models.alexnet import alexnet
import torchvision
from torch import nn
import torch.distributed as dist
from torchvision import transforms
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
import numpy as np
import argparse


def parse_args():
    parser = argparse.ArgumentParser(description='CV Train')
    parser.add_mutually_exclusive_group()
    parser.add_argument("--rank", type=int, default=0)
    parser.add_argument("--world_size", type=int, default=1)
    parser.add_argument("--master_addr", type=str, default="127.0.0.1")
    parser.add_argument("--master_port", type=str, default="12355")
    parser.add_argument('--dataset', type=str, default='CIFAR10', help='CIFAR10')
    parser.add_argument('--dataset_root', type=str, default='../data', help='Dataset root directory path')
    parser.add_argument('--img_size', type=int, default=227, help='image size')
    parser.add_argument('--tensorboard', type=str, default=True, help='Use tensorboard for loss visualization')
    parser.add_argument('--tensorboard_log', type=str, default='../tensorboard', help='tensorboard folder')
    parser.add_argument('--cuda', type=str, default=True, help='if is cuda available')
    parser.add_argument('--batch_size', type=int, default=64, help='batch size')
    parser.add_argument('--lr', type=float, default=1e-4, help='learning rate')
    parser.add_argument('--epochs', type=int, default=20, help='Number of epochs to train.')
    parser.add_argument('--checkpoint', type=str, default='../checkpoint', help='Save .pth fold')
    return parser.parse_args()


args = parse_args()


def train():
    dist.init_process_group("gloo", init_method="tcp://{}:{}".format(args.master_addr, args.master_port),
                            rank=args.rank,
                            world_size=args.world_size)
    # 1.Create SummaryWriter
    if args.tensorboard:
        writer = SummaryWriter(args.tensorboard_log)

    # 2.Ready dataset
    if args.dataset == 'CIFAR10':
        train_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=True, transform=transforms.Compose(
            [transforms.Resize(args.img_size), transforms.ToTensor()]), download=True)

    else:
        raise ValueError("Dataset is not CIFAR10")

    cuda = torch.cuda.is_available()
    print('CUDA available: {}'.format(cuda))

    # 3.Length
    train_dataset_size = len(train_dataset)
    print("the train dataset size is {}".format(train_dataset_size))

    train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)
    # 4.DataLoader
    train_dataloader = DataLoader(dataset=train_dataset, batch_size=args.batch_size, sampler=train_sampler,
                                  num_workers=2,
                                  pin_memory=True)

    # 5.Create model
    model = alexnet()

    if args.cuda == cuda:
        model = model.cuda()
        model = torch.nn.parallel.DistributedDataParallel(model).cuda()
    else:
        model = torch.nn.parallel.DistributedDataParallel(model)

    # 6.Create loss
    cross_entropy_loss = nn.CrossEntropyLoss()

    # 7.Optimizer
    optim = torch.optim.AdamW(model.parameters(), lr=args.lr)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, patience=3, verbose=True)

    # 8. Set some parameters to control loop
    # epoch
    iter = 0
    t0 = time.time()
    for epoch in range(args.epochs):
        t1 = time.time()
        print(" -----------------the {} number of training epoch --------------".format(epoch))
        model.train()
        for data in train_dataloader:
            loss = 0
            imgs, targets = data
            if args.cuda == cuda:
                cross_entropy_loss = cross_entropy_loss.cuda()
                imgs, targets = imgs.cuda(), targets.cuda()
            outputs = model(imgs)
            loss_train = cross_entropy_loss(outputs, targets)
            loss = loss_train.item() + loss
            if args.tensorboard:
                writer.add_scalar("train_loss", loss_train.item(), iter)

            optim.zero_grad()
            loss_train.backward()
            optim.step()
            iter = iter + 1
            if iter % 100 == 0:
                print(
                    "Epoch: {} | Iteration: {} | lr: {} | loss: {} | np.mean(loss): {} "
                        .format(epoch, iter, optim.param_groups[0]['lr'], loss_train.item(),
                                np.mean(loss)))
        if args.tensorboard:
            writer.add_scalar("lr", optim.param_groups[0]['lr'], epoch)
        scheduler.step(np.mean(loss))
        t2 = time.time()
        h = (t2 - t1) // 3600
        m = ((t2 - t1) % 3600) // 60
        s = ((t2 - t1) % 3600) % 60
        print("epoch {} is finished, and time is {}h{}m{}s".format(epoch, int(h), int(m), int(s)))

        if epoch % 1 == 0:
            print("Save state, iter: {} ".format(epoch))
            torch.save(model.state_dict(), "{}/AlexNet_{}.pth".format(args.checkpoint, epoch))

    torch.save(model.state_dict(), "{}/AlexNet.pth".format(args.checkpoint))
    t3 = time.time()
    h_t = (t3 - t0) // 3600
    m_t = ((t3 - t0) % 3600) // 60
    s_t = ((t3 - t0) % 3600) % 60
    print("The finished time is {}h{}m{}s".format(int(h_t), int(m_t), int(s_t)))
    if args.tensorboard:
        writer.close()


if __name__ == "__main__":
    local_size = torch.cuda.device_count()
    print("local_size: ".format(local_size))
    train()

Run results:

(screenshots omitted)

TensorBoard view:

(screenshots omitted)

Evaluation source:

eval_DDP.py

import torch
import torchvision
import torch.distributed as dist
from torch.utils.data import DataLoader
from torchvision.transforms import transforms
# from alexnet import alexnet
from torchvision.models.alexnet import alexnet
import argparse


# eval
def parse_args():
    parser = argparse.ArgumentParser(description='CV Evaluation')
    parser.add_mutually_exclusive_group()
    parser.add_argument("--rank", type=int, default=0)
    parser.add_argument("--world_size", type=int, default=1)
    parser.add_argument("--master_addr", type=str, default="127.0.0.1")
    parser.add_argument("--master_port", type=str, default="12355")
    parser.add_argument('--dataset', type=str, default='CIFAR10', help='CIFAR10')
    parser.add_argument('--dataset_root', type=str, default='../data', help='Dataset root directory path')
    parser.add_argument('--img_size', type=int, default=227, help='image size')
    parser.add_argument('--batch_size', type=int, default=64, help='batch size')
    parser.add_argument('--checkpoint', type=str, default='../checkpoint', help='Save .pth fold')
    return parser.parse_args()


args = parse_args()


def eval():
    dist.init_process_group("gloo", init_method="tcp://{}:{}".format(args.master_addr, args.master_port),
                            rank=args.rank,
                            world_size=args.world_size)
    # 1.Create model
    model = alexnet()
    model = torch.nn.parallel.DistributedDataParallel(model)

    # 2.Ready Dataset
    if args.dataset == 'CIFAR10':
        test_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=False,
                                                    transform=transforms.Compose(
                                                        [transforms.Resize(args.img_size),
                                                         transforms.ToTensor()]),
                                                    download=True)

    else:
        raise ValueError("Dataset is not CIFAR10")

    # 3.Length
    test_dataset_size = len(test_dataset)
    print("the test dataset size is {}".format(test_dataset_size))
    test_sampler = torch.utils.data.distributed.DistributedSampler(test_dataset)

    # 4.DataLoader
    test_dataloader = DataLoader(dataset=test_dataset, sampler=test_sampler, batch_size=args.batch_size,
                                 num_workers=2,
                                 pin_memory=True)

    # 5. Set some parameters for testing the network
    total_accuracy = 0

    # test
    model.eval()
    with torch.no_grad():
        for data in test_dataloader:
            imgs, targets = data
            device = torch.device('cpu')
            imgs, targets = imgs.to(device), targets.to(device)
            model_load = torch.load("{}/AlexNet.pth".format(args.checkpoint), map_location=device)
            model.load_state_dict(model_load)
            outputs = model(imgs)
            outputs = outputs.to(device)
            accuracy = (outputs.argmax(1) == targets).sum()
            total_accuracy = total_accuracy + accuracy
            accuracy = total_accuracy / test_dataset_size
        print("the total accuracy is {}".format(accuracy))


if __name__ == "__main__":
    local_size = torch.cuda.device_count()
    print("local_size: ".format(local_size))
    eval()

Run results:

(screenshot omitted)

(2) Training and evaluation source for DDP with autocast:

Training source:

train_DDP_autocast.py (imports the custom alexnet.py)

import time
import torch
from alexnet import alexnet
import torchvision
from torch import nn
import torch.distributed as dist
from torchvision import transforms
from torch.utils.data import DataLoader
from torch.cuda.amp import autocast as autocast
from torch.utils.tensorboard import SummaryWriter
import numpy as np
import argparse


def parse_args():
    parser = argparse.ArgumentParser(description='CV Train')
    parser.add_mutually_exclusive_group()
    parser.add_argument("--rank", type=int, default=0)
    parser.add_argument("--world_size", type=int, default=1)
    parser.add_argument("--master_addr", type=str, default="127.0.0.1")
    parser.add_argument("--master_port", type=str, default="12355")
    parser.add_argument('--dataset', type=str, default='CIFAR10', help='CIFAR10')
    parser.add_argument('--dataset_root', type=str, default='../data', help='Dataset root directory path')
    parser.add_argument('--img_size', type=int, default=227, help='image size')
    parser.add_argument('--tensorboard', type=str, default=True, help='Use tensorboard for loss visualization')
    parser.add_argument('--tensorboard_log', type=str, default='../tensorboard', help='tensorboard folder')
    parser.add_argument('--cuda', type=str, default=True, help='if is cuda available')
    parser.add_argument('--batch_size', type=int, default=64, help='batch size')
    parser.add_argument('--lr', type=float, default=1e-4, help='learning rate')
    parser.add_argument('--epochs', type=int, default=20, help='Number of epochs to train.')
    parser.add_argument('--checkpoint', type=str, default='../checkpoint', help='Save .pth fold')
    return parser.parse_args()


args = parse_args()


def train():
    dist.init_process_group("gloo", init_method="tcp://{}:{}".format(args.master_addr, args.master_port),
                            rank=args.rank,
                            world_size=args.world_size)
    # 1.Create SummaryWriter
    if args.tensorboard:
        writer = SummaryWriter(args.tensorboard_log)

    # 2.Ready dataset
    if args.dataset == 'CIFAR10':
        train_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=True, transform=transforms.Compose(
            [transforms.Resize(args.img_size), transforms.ToTensor()]), download=True)

    else:
        raise ValueError("Dataset is not CIFAR10")

    cuda = torch.cuda.is_available()
    print('CUDA available: {}'.format(cuda))

    # 3.Length
    train_dataset_size = len(train_dataset)
    print("the train dataset size is {}".format(train_dataset_size))

    train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)
    # 4.DataLoader
    train_dataloader = DataLoader(dataset=train_dataset, batch_size=args.batch_size, sampler=train_sampler,
                                  num_workers=2,
                                  pin_memory=True)

    # 5.Create model
    model = alexnet()

    if args.cuda == cuda:
        model = model.cuda()
        model = torch.nn.parallel.DistributedDataParallel(model).cuda()
    else:
        model = torch.nn.parallel.DistributedDataParallel(model)

    # 6.Create loss
    cross_entropy_loss = nn.CrossEntropyLoss()

    # 7.Optimizer
    optim = torch.optim.AdamW(model.parameters(), lr=args.lr)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, patience=3, verbose=True)

    # 8. Set some parameters to control loop
    # epoch
    iter = 0
    t0 = time.time()
    for epoch in range(args.epochs):
        t1 = time.time()
        print(" -----------------the {} number of training epoch --------------".format(epoch))
        model.train()
        for data in train_dataloader:
            loss = 0
            imgs, targets = data
            if args.cuda == cuda:
                cross_entropy_loss = cross_entropy_loss.cuda()
                imgs, targets = imgs.cuda(), targets.cuda()
            with autocast():
                outputs = model(imgs)
                loss_train = cross_entropy_loss(outputs, targets)
            loss = loss_train.item() + loss
            if args.tensorboard:
                writer.add_scalar("train_loss", loss_train.item(), iter)

            optim.zero_grad()
            loss_train.backward()
            optim.step()
            iter = iter + 1
            if iter % 100 == 0:
                print(
                    "Epoch: {} | Iteration: {} | lr: {} | loss: {} | np.mean(loss): {} "
                        .format(epoch, iter, optim.param_groups[0]['lr'], loss_train.item(),
                                np.mean(loss)))
        if args.tensorboard:
            writer.add_scalar("lr", optim.param_groups[0]['lr'], epoch)
        scheduler.step(np.mean(loss))
        t2 = time.time()
        h = (t2 - t1) // 3600
        m = ((t2 - t1) % 3600) // 60
        s = ((t2 - t1) % 3600) % 60
        print("epoch {} is finished, and time is {}h{}m{}s".format(epoch, int(h), int(m), int(s)))

        if epoch % 1 == 0:
            print("Save state, iter: {} ".format(epoch))
            torch.save(model.state_dict(), "{}/AlexNet_{}.pth".format(args.checkpoint, epoch))

    torch.save(model.state_dict(), "{}/AlexNet.pth".format(args.checkpoint))
    t3 = time.time()
    h_t = (t3 - t0) // 3600
    m_t = ((t3 - t0) % 3600) // 60
    s_t = ((t3 - t0) % 3600) % 60
    print("The finished time is {}h{}m{}s".format(int(h_t), int(m_t), int(s_t)))
    if args.tensorboard:
        writer.close()


if __name__ == "__main__":
    local_size = torch.cuda.device_count()
    print("local_size: ".format(local_size))
    train()

Run results:

(screenshots omitted)

TensorBoard view:

(screenshots omitted)

Evaluation source:

eval_DDP.py (imports the custom alexnet.py)

import torch
import torchvision
import torch.distributed as dist
from torch.utils.data import DataLoader
from torchvision.transforms import transforms
from alexnet import alexnet
# from torchvision.models.alexnet import alexnet
import argparse


# eval
def parse_args():
    parser = argparse.ArgumentParser(description='CV Evaluation')
    parser.add_mutually_exclusive_group()
    parser.add_argument("--rank", type=int, default=0)
    parser.add_argument("--world_size", type=int, default=1)
    parser.add_argument("--master_addr", type=str, default="127.0.0.1")
    parser.add_argument("--master_port", type=str, default="12355")
    parser.add_argument('--dataset', type=str, default='CIFAR10', help='CIFAR10')
    parser.add_argument('--dataset_root', type=str, default='../data', help='Dataset root directory path')
    parser.add_argument('--img_size', type=int, default=227, help='image size')
    parser.add_argument('--batch_size', type=int, default=64, help='batch size')
    parser.add_argument('--checkpoint', type=str, default='../checkpoint', help='Save .pth fold')
    return parser.parse_args()


args = parse_args()


def eval():
    dist.init_process_group("gloo", init_method="tcp://{}:{}".format(args.master_addr, args.master_port),
                            rank=args.rank,
                            world_size=args.world_size)
    # 1.Create model
    model = alexnet()
    model = torch.nn.parallel.DistributedDataParallel(model)

    # 2.Ready Dataset
    if args.dataset == 'CIFAR10':
        test_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=False,
                                                    transform=transforms.Compose(
                                                        [transforms.Resize(args.img_size),
                                                         transforms.ToTensor()]),
                                                    download=True)

    else:
        raise ValueError("Dataset is not CIFAR10")

    # 3.Length
    test_dataset_size = len(test_dataset)
    print("the test dataset size is {}".format(test_dataset_size))
    test_sampler = torch.utils.data.distributed.DistributedSampler(test_dataset)

    # 4.DataLoader
    test_dataloader = DataLoader(dataset=test_dataset, sampler=test_sampler, batch_size=args.batch_size,
                                 num_workers=2,
                                 pin_memory=True)

    # 5. Set some parameters for testing the network
    total_accuracy = 0

    # test
    # load the trained weights once, before the evaluation loop
    device = torch.device('cpu')
    model_load = torch.load("{}/AlexNet.pth".format(args.checkpoint), map_location=device)
    model.load_state_dict(model_load)
    model.eval()
    with torch.no_grad():
        for data in test_dataloader:
            imgs, targets = data
            imgs, targets = imgs.to(device), targets.to(device)
            outputs = model(imgs)
            accuracy = (outputs.argmax(1) == targets).sum()
            total_accuracy = total_accuracy + accuracy
        print("the total accuracy is {}".format(total_accuracy / test_dataset_size))


if __name__ == "__main__":
    local_size = torch.cuda.device_count()
    print("local_size: {}".format(local_size))
    eval()
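
A note on eval_DDP.py: the checkpoint was saved from a DDP-wrapped model, so its parameter keys carry a "module." prefix, which is why the script wraps the model in DistributedDataParallel before calling load_state_dict. If you wanted to evaluate the same checkpoint with a plain, unwrapped model instead, the prefix has to be stripped first. A minimal sketch (the load_plain_model helper below is hypothetical, not part of this article's scripts):

import torch
from collections import OrderedDict
from alexnet import alexnet  # the same local alexnet.py used above


def load_plain_model(checkpoint_path, device="cpu"):
    # DDP prefixes every parameter name with "module.";
    # strip it so the keys match an unwrapped model
    ddp_state = torch.load(checkpoint_path, map_location=device)
    plain_state = OrderedDict(
        (k[len("module."):] if k.startswith("module.") else k, v)
        for k, v in ddp_state.items()
    )
    model = alexnet()
    model.load_state_dict(plain_state)
    model.eval()
    return model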

Running results:

[figure: evaluation output]

Analysis:

DDP without AMP took 21m21s, while DDP with autocast took 20m20s, roughly a 5% reduction in wall-clock time, so training did get faster.

However, the accuracy dropped from 0.8224 (DDP without AMP) to 0.8162 with autocast alone.

(3) Training and evaluation code for DDP with autocast and GradScaler

Training source code:

train_DDP_GradScaler.py (imports our own alexnet.py)

import time
import torch
from alexnet import alexnet
import torchvision
from torch import nn
import torch.distributed as dist
from torchvision import transforms
from torch.utils.data import DataLoader
from torch.cuda.amp import autocast as autocast
from torch.cuda.amp import GradScaler as GradScaler
from torch.utils.tensorboard import SummaryWriter
import numpy as np
import argparse


def parse_args():
    parser = argparse.ArgumentParser(description='CV Train')
    parser.add_mutually_exclusive_group()
    parser.add_argument("--rank", type=int, default=0)
    parser.add_argument("--world_size", type=int, default=1)
    parser.add_argument("--master_addr", type=str, default="127.0.0.1")
    parser.add_argument("--master_port", type=str, default="12355")
    parser.add_argument('--dataset', type=str, default='CIFAR10', help='CIFAR10')
    parser.add_argument('--dataset_root', type=str, default='../data', help='Dataset root directory path')
    parser.add_argument('--img_size', type=int, default=227, help='image size')
    parser.add_argument('--tensorboard', type=str, default=True, help='Use tensorboard for loss visualization')
    parser.add_argument('--tensorboard_log', type=str, default='../tensorboard', help='tensorboard folder')
    parser.add_argument('--cuda', type=str, default=True, help='if is cuda available')
    parser.add_argument('--batch_size', type=int, default=64, help='batch size')
    parser.add_argument('--lr', type=float, default=1e-4, help='learning rate')
    parser.add_argument('--epochs', type=int, default=20, help='Number of epochs to train.')
    parser.add_argument('--checkpoint', type=str, default='../checkpoint', help='Save .pth fold')
    return parser.parse_args()


args = parse_args()


def train():
    dist.init_process_group("gloo", init_method="tcp://{}:{}".format(args.master_addr, args.master_port),
                            rank=args.rank,
                            world_size=args.world_size)
    # 1.Create SummaryWriter
    if args.tensorboard:
        writer = SummaryWriter(args.tensorboard_log)

    # 2.Ready dataset
    if args.dataset == 'CIFAR10':
        train_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=True, transform=transforms.Compose(
            [transforms.Resize(args.img_size), transforms.ToTensor()]), download=True)
    else:
        raise ValueError("Dataset is not CIFAR10")

    cuda = torch.cuda.is_available()
    print('CUDA available: {}'.format(cuda))

    # 3.Length
    train_dataset_size = len(train_dataset)
    print("the train dataset size is {}".format(train_dataset_size))

    train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)
    # 4.DataLoader
    train_dataloader = DataLoader(dataset=train_dataset, batch_size=args.batch_size, sampler=train_sampler,
                                  num_workers=2,
                                  pin_memory=True)

    # 5.Create model
    model = alexnet()

    if args.cuda == cuda:
        model = model.cuda()
        model = torch.nn.parallel.DistributedDataParallel(model).cuda()
    else:
        model = torch.nn.parallel.DistributedDataParallel(model)

    # 6.Create loss
    cross_entropy_loss = nn.CrossEntropyLoss()

    # 7.Optimizer
    optim = torch.optim.AdamW(model.parameters(), lr=args.lr)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, patience=3, verbose=True)
    scaler = GradScaler()
    # 8. Set some parameters to control loop
    # epoch
    iter = 0
    t0 = time.time()
    for epoch in range(args.epochs):
        t1 = time.time()
        print(" -----------------the {} number of training epoch --------------".format(epoch))
        model.train()
        for data in train_dataloader:
            loss = 0
            imgs, targets = data
            optim.zero_grad()
            if args.cuda == cuda:
                cross_entropy_loss = cross_entropy_loss.cuda()
                imgs, targets = imgs.cuda(), targets.cuda()
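            # autocast runs the forward pass and the loss computation in mixed
            # precision: FP16 where it is safe, FP32 where full precision is needed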
            with autocast():
                outputs = model(imgs)
                loss_train = cross_entropy_loss(outputs, targets)
                loss = loss_train.item() + loss
            if args.tensorboard:
                writer.add_scalar("train_loss", loss_train.item(), iter)

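            # scaler.scale() multiplies the loss by the current scale factor so that
            # small FP16 gradients do not underflow; scaler.step() unscales the
            # gradients and skips the optimizer step if inf/NaN is found;
            # scaler.update() then adjusts the scale factor for the next iteration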
            scaler.scale(loss_train).backward()
            scaler.step(optim)
            scaler.update()

            iter = iter + 1
            if iter % 100 == 0:
                print(
                    "Epoch: {} | Iteration: {} | lr: {} | loss: {} | np.mean(loss): {} "
                        .format(epoch, iter, optim.param_groups[0]['lr'], loss_train.item(),
                                np.mean(loss)))
        if args.tensorboard:
            writer.add_scalar("lr", optim.param_groups[0]['lr'], epoch)
        scheduler.step(np.mean(loss))
        t2 = time.time()
        h = (t2 - t1) // 3600
        m = ((t2 - t1) % 3600) // 60
        s = ((t2 - t1) % 3600) % 60
        print("epoch {} is finished, and time is {}h{}m{}s".format(epoch, int(h), int(m), int(s)))

        if epoch % 1 == 0:
            print("Save state, iter: {} ".format(epoch))
            torch.save(model.state_dict(), "{}/AlexNet_{}.pth".format(args.checkpoint, epoch))

    torch.save(model.state_dict(), "{}/AlexNet.pth".format(args.checkpoint))
    t3 = time.time()
    h_t = (t3 - t0) // 3600
    m_t = ((t3 - t0) % 3600) // 60
    s_t = ((t3 - t0) % 3600) % 60
    print("The finished time is {}h{}m{}s".format(int(h_t), int(m_t), int(s_t)))
    if args.tensorboard:
        writer.close()


if __name__ == "__main__":
    local_size = torch.cuda.device_count()
    print("local_size: {}".format(local_size))
    train()

Running results:

[figures: console screenshots of the training run]

Tensorboard observations:

[figures: Tensorboard train_loss and lr curves]

Evaluation source code:

eval_DDP.py is the same as in 3.(2) above (it imports our own alexnet.py).

Running results:

[figure: evaluation output]

Analysis:

It runs, and it is noticeably faster than DDP without AMP (20m20s versus 21m21s). More importantly, where DDP without AMP reached an accuracy of 0.8224, DDP with autocast plus GradScaler now reaches 0.8252, so accuracy improved as well.
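
For quick reference, the autocast + GradScaler recipe used throughout this article boils down to the minimal sketch below (single process, single GPU; model, optim, cross_entropy_loss and train_dataloader are assumed to be defined as in the scripts above):

import torch
from torch.cuda.amp import autocast, GradScaler

# model, optim, cross_entropy_loss and train_dataloader are assumed to exist
scaler = GradScaler()
model.train()
for imgs, targets in train_dataloader:
    imgs, targets = imgs.cuda(), targets.cuda()
    optim.zero_grad()
    with autocast():                     # forward + loss in mixed precision
        outputs = model(imgs)
        loss = cross_entropy_loss(outputs, targets)
    scaler.scale(loss).backward()        # backward on the scaled loss
    scaler.step(optim)                   # unscale the gradients, then step (skipped on inf/NaN)
    scaler.update()                      # adjust the scale factor for the next iteration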

    的頭像 發(fā)表于 12-09 11:30 ?1075次閱讀