
Monitoring the loss function with Visdom

2022-06-11 18:23:00 Yalin melon seeds

Problem

We need to monitor the loss (Loss) during training to see when training converges.

Solution

pip

pip3 install visdom

Visdom

Run the server:


python3 -m visdom.server

Then open http://localhost:8097/ in a browser.
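
Before instrumenting the training code, it can be worth a quick smoke test to confirm the client can reach the server. Below is a minimal sketch (the window name 'sanity' is arbitrary, chosen just for this check):

from visdom import Visdom

# Connect to the local Visdom server (default: http://localhost:8097)
viz = Visdom(port=8097)
assert viz.check_connection(), 'cannot reach the Visdom server'

# Draw a tiny line chart; it should appear immediately in the browser
viz.line(Y=[0., 1., 4., 9.], X=[0., 1., 2., 3.],
         win='sanity', opts=dict(title='sanity check'))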

Python

Finally, add the monitoring calls (instrumentation) to the Python program.

import time

import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from visdom import Visdom


def cifar10_go():
    # Connect to the Visdom server
    viz = Visdom(port=8097)
    # Initialize the window with a single point at (0, 0)
    viz.line([0.], [0.], win='train_loss', opts=dict(title='train loss'))

    transform = transforms.Compose([
        transforms.RandomResizedCrop((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    ])
    # train=False loads the test split, as in the original article
    cifar10_dataset = torchvision.datasets.CIFAR10(root='./data',
                                                   train=False,
                                                   transform=transform,
                                                   target_transform=None,
                                                   download=True)
    dataloader = DataLoader(dataset=cifar10_dataset,  # the dataset to load (required)
                            batch_size=32,            # samples per batch
                            shuffle=True,             # shuffle the data each epoch
                            num_workers=4)            # worker processes; 0 uses only the main process
    model = MyCNN()  # defined in the CIFAR-10 article referenced below
    # Cross-entropy loss
    criterion = nn.CrossEntropyLoss()
    # Define the optimizer
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, weight_decay=1e-2, momentum=0.9)
    # Start training
    start = time.time()  # start the timer
    for epoch in range(3):  # number of passes over the whole dataset
        for i, data in enumerate(dataloader):
            # data is one batch of batch_size samples
            inputs, labels = data  # the inputs and their corresponding labels
            # Clear the gradients with zero_grad() first; otherwise PyTorch accumulates
            # them, and the second backward pass would add to the first pass's gradients
            optimizer.zero_grad()
            # Forward pass: what the current model has learned
            outputs = model(inputs)
            # Loss between the model's outputs and the true labels
            loss = criterion(outputs, labels)
            print('Epoch {}, Loss {:.4f}'.format(epoch + 1, loss.item()))
            # Backpropagate; afterwards the gradients are stored on the parameters
            loss.backward()
            # Take one optimization step using the computed gradients
            optimizer.step()
            # Append the new point to the monitored curve; use a global step so the
            # x-axis keeps increasing across epochs instead of restarting at 0
            global_step = epoch * len(dataloader) + i
            viz.line([loss.item()], [global_step], win='train_loss', update='append')
    end = time.time()  # stop the timer
    print('Time used: {:.5f} s'.format(end - start))
    # Save the trained model
    torch.save(model, './MyCNN_model_23.pth')

The model and training setup come from the earlier article: 《PyTorch Uses CIFAR-10 Data for Training》.
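
The same viz.line pattern extends to monitoring several metrics at once. As a sketch, assuming you also compute a per-step test accuracy (the window name 'metrics' and trace names below are illustrative, not from the original), multiple named traces can share one window:

from visdom import Visdom

viz = Visdom(port=8097)
# One window holding two named traces
viz.line([0.], [0.], win='metrics', name='train_loss',
         opts=dict(title='metrics', legend=['train_loss', 'test_acc']))
viz.line([0.], [0.], win='metrics', name='test_acc', update='append')

# Then, inside the loop, append to whichever trace was just measured:
# viz.line([loss.item()], [global_step], win='metrics', name='train_loss', update='append')
# viz.line([test_acc],   [global_step], win='metrics', name='test_acc',   update='append')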

Effect

(Screenshot: the train loss curve rendered in the Visdom web UI.)

References:

Original site

Copyright notice
This article was written by [Yalin melon seeds]; please include a link to the original when reposting. Thank you.
https://yzsam.com/2022/162/202206111804321029.html