Automatic Differentiation and Gradients: Building, Testing, and Validating a Model
Uploaded February 11, 2024, 16:52:06. Author: 재형이
- This time the lecture covered loading an example dataset with PyTorch, building a model, and then testing and validating it
- My impression while following the hands-on portion: it moves by far too fast...?
- It reminded me of a problem I have felt with FastCampus lectures for a while now
- For picking up genuinely new skills, I would recommend reading a book or looking for other lectures rather than relying on FastCampus
- It is probably unavoidable, since they try to deliver a huge amount of material in a short time. That said, I am not saying FastCampus is bad. When you already have some background knowledge and want exposure to a variety of open-source projects and related technologies, there is nothing quite like it.
- Since AI is new to me, my head is spinning right now, but I think I am in the phase I always talk about where the unfamiliar becomes familiar, so the only option is to keep seeing it and working with it until we get along
1. Automatic Differentiation and Gradients
- PyTorch can automatically differentiate through its operations
```python
import torch

# Gradients are tracked only for tensors created with requires_grad=True
x = torch.tensor([3.0, 4.0], requires_grad=True)
y = torch.tensor([1.0, 2.0], requires_grad=True)

z = x + y
print(x)
print(z)          # [4.0, 6.0]
print(z.grad_fn)  # add

out = z.mean()
print(out)          # 5.0
print(out.grad_fn)  # mean

out.backward()  # possible because out is a scalar
print(x.grad)
print(y.grad)
print(z.grad)  # gradients are only populated for leaf variables, so this is None

# tensor([3., 4.], requires_grad=True)
# tensor([4., 6.], grad_fn=<AddBackward0>)
# <AddBackward0 object at 0x79130c9e7b50>
# tensor(5., grad_fn=<MeanBackward0>)
# <MeanBackward0 object at 0x79133693ae60>
# tensor([0.5000, 0.5000])
# tensor([0.5000, 0.5000])
# None
# UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being
# accessed. Its .grad attribute won't be populated during autograd.backward().
# If you indeed want the .grad field to be populated for a non-leaf Tensor,
# use .retain_grad() on the non-leaf Tensor.
# See github.com/pytorch/pytorch/pull/30531 for more information.
```
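As a side note (my own addition, not from the lecture), the same derivatives can be obtained without touching `.grad` by calling `torch.autograd.grad`; a minimal sketch:

```python
import torch

x = torch.tensor([3.0, 4.0], requires_grad=True)
y = torch.tensor([1.0, 2.0], requires_grad=True)
out = (x + y).mean()

# torch.autograd.grad returns the gradients directly instead of
# accumulating them into .grad. Since out = mean(x + y),
# d(out)/dx_i = d(out)/dy_i = 1/2 for every element.
grad_x, grad_y = torch.autograd.grad(out, (x, y))
print(grad_x)  # tensor([0.5000, 0.5000])
print(grad_y)  # tensor([0.5000, 0.5000])
```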
- When training a model, gradients are normally tracked
- When using an already trained model, however, the parameters are not updated, so gradients are usually not tracked
- Inside a `with torch.no_grad():` block, gradient computation is temporarily disabled. This avoids unnecessary gradient bookkeeping while evaluating a model and speeds up computation.
```python
temp = torch.tensor([3.0, 4.0], requires_grad=True)
print(temp.requires_grad)
print((temp ** 2).requires_grad)

# Without gradient tracking, the computation is faster.
with torch.no_grad():
    temp = torch.tensor([3.0, 4.0], requires_grad=True)
    print(temp.requires_grad)
    print((temp ** 2).requires_grad)

# True
# True
# True
# False
```
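Relatedly (my own addition), a single tensor can also be cut out of the autograd graph with `detach()`, which keeps the values but stops gradient tracking; a minimal sketch:

```python
import torch

a = torch.tensor([3.0, 4.0], requires_grad=True)
b = a ** 2      # part of the autograd graph
c = b.detach()  # same values, but detached from the graph

print(b.requires_grad)  # True
print(c.requires_grad)  # False
```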
2. Building Deep Models, Starting from a Neuron - PyTorch
2-1. Loading the Dataset (Load Dataset)
- When loading images, you can specify which transforms (rotation, cropping, flipping, etc.) to apply
- DataLoader( ) is then used to actually serve the data
- You specify which dataset to use, the batch size (batch size), whether to shuffle the data, and so on
```python
# Clone the dataset needed for training the deep learning model
!git clone https://github.com/ndb796/weather_dataset
%cd weather_dataset

# Import the libraries needed for training
import torch
import torchvision
import torchvision.transforms as transforms
import torchvision.models as models
import torchvision.datasets as datasets
import torch.optim as optim
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import random_split
import matplotlib.pyplot as plt
import matplotlib.image as image
import numpy as np

transform_train = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(
        mean=[0.5, 0.5, 0.5],
        std=[0.5, 0.5, 0.5]
    )
])

transform_val = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize(
        mean=[0.5, 0.5, 0.5],
        std=[0.5, 0.5, 0.5]
    )
])

transform_test = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize(
        mean=[0.5, 0.5, 0.5],
        std=[0.5, 0.5, 0.5]
    )
])

train_dataset = datasets.ImageFolder(
    root='train/',
    transform=transform_train
)
dataset_size = len(train_dataset)
train_size = int(dataset_size * 0.8)
val_size = dataset_size - train_size
train_dataset, val_dataset = random_split(train_dataset, [train_size, val_size])

test_dataset = datasets.ImageFolder(
    root='test/',
    transform=transform_test
)

train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=64, shuffle=True)
val_dataloader = torch.utils.data.DataLoader(val_dataset, batch_size=64, shuffle=False)
test_dataloader = torch.utils.data.DataLoader(test_dataset, batch_size=64, shuffle=False)
```
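As a quick sanity check (my own addition), the loaded splits and class labels can be inspected like this; note that `random_split` wraps the original `ImageFolder` in a `Subset`, so the class list lives on `.dataset`:

```python
# Split sizes and class labels discovered from the folder names.
print(len(train_dataset), len(val_dataset), len(test_dataset))
print(train_dataset.dataset.classes)  # Subset -> underlying ImageFolder
print(test_dataset.class_to_idx)      # e.g. {'Cloudy': 0, 'Rain': 1, ...}
```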
2-2. Data Visualization
- Using the next( ) function, you can pull a batch of data at a time in tensor form
```python
plt.rcParams['figure.figsize'] = [12, 8]
plt.rcParams['figure.dpi'] = 60
plt.rcParams.update({'font.size': 20})


def imshow(input):
    # torch.Tensor => numpy
    input = input.numpy().transpose((1, 2, 0))
    # undo image normalization
    mean = np.array([0.5, 0.5, 0.5])
    std = np.array([0.5, 0.5, 0.5])
    input = std * input + mean
    input = np.clip(input, 0, 1)
    # display images
    plt.imshow(input)
    plt.show()


class_names = {
    0: "Cloudy",
    1: "Rain",
    2: "Shine",
    3: "Sunrise"
}

# load a batch of train images
iterator = iter(train_dataloader)

# visualize a batch of train images
imgs, labels = next(iterator)
out = torchvision.utils.make_grid(imgs[:4])
imshow(out)
print([class_names[labels[i].item()] for i in range(4)])
```
2-3. Training the Deep Learning Model (Training)
- You can train on the dataset with a neural network you define yourself
- By increasing the depth of the layers, you can increase the number of parameters (a quick parameter count follows the model definitions below)
```python
class Model1(nn.Module):
    def __init__(self):
        super(Model1, self).__init__()
        self.linear1 = nn.Linear(256 * 256 * 3, 4)
        self.flatten = nn.Flatten()

    def forward(self, x):
        x = self.flatten(x)
        x = self.linear1(x)
        return x


class Model2(nn.Module):
    def __init__(self):
        super(Model2, self).__init__()
        self.linear1 = nn.Linear(256 * 256 * 3, 64)
        self.linear2 = nn.Linear(64, 4)
        self.flatten = nn.Flatten()

    def forward(self, x):
        x = self.flatten(x)
        x = self.linear1(x)
        x = self.linear2(x)
        return x


class Model3(nn.Module):
    def __init__(self):
        super(Model3, self).__init__()
        self.linear1 = nn.Linear(256 * 256 * 3, 128)
        self.dropout1 = nn.Dropout(0.5)
        self.linear2 = nn.Linear(128, 64)
        self.dropout2 = nn.Dropout(0.5)
        self.linear3 = nn.Linear(64, 32)
        self.dropout3 = nn.Dropout(0.5)
        self.linear4 = nn.Linear(32, 4)
        self.flatten = nn.Flatten()

    def forward(self, x):
        x = self.flatten(x)
        x = F.relu(self.linear1(x))
        x = self.dropout1(x)
        x = F.relu(self.linear2(x))
        x = self.dropout2(x)
        x = F.relu(self.linear3(x))
        x = self.dropout3(x)
        x = self.linear4(x)
        return x
```
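To back up the claim that deeper models carry more parameters, here is a quick check (my own addition) using the three classes just defined:

```python
# Compare the number of trainable parameters in each model.
for cls in (Model1, Model2, Model3):
    n_params = sum(p.numel() for p in cls().parameters() if p.requires_grad)
    print(cls.__name__, f"{n_params:,}")

# Model1 has 256*256*3*4 + 4 = 786,436 parameters; Model2 and Model3
# grow larger as hidden layers are added.
```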
```python
import time


def train():
    start_time = time.time()
    print(f'[Epoch: {epoch + 1} - Training]')
    model.train()
    total = 0
    running_loss = 0.0
    running_corrects = 0

    for i, batch in enumerate(train_dataloader):
        imgs, labels = batch
        imgs, labels = imgs.cuda(), labels.cuda()

        outputs = model(imgs)
        optimizer.zero_grad()
        _, preds = torch.max(outputs, 1)
        loss = criterion(outputs, labels)

        loss.backward()
        optimizer.step()

        total += labels.shape[0]
        running_loss += loss.item()
        running_corrects += torch.sum(preds == labels.data)

        if i % log_step == log_step - 1:
            print(f'[Batch: {i + 1}] running train loss: {running_loss / total}, running train accuracy: {running_corrects / total}')

    print(f'train loss: {running_loss / total}, accuracy: {running_corrects / total}')
    print("elapsed time:", time.time() - start_time)
    return running_loss / total, (running_corrects / total).item()


def validate():
    start_time = time.time()
    print(f'[Epoch: {epoch + 1} - Validation]')
    model.eval()
    total = 0
    running_loss = 0.0
    running_corrects = 0

    for i, batch in enumerate(val_dataloader):
        imgs, labels = batch
        imgs, labels = imgs.cuda(), labels.cuda()

        with torch.no_grad():
            outputs = model(imgs)
            _, preds = torch.max(outputs, 1)
            loss = criterion(outputs, labels)

        total += labels.shape[0]
        running_loss += loss.item()
        running_corrects += torch.sum(preds == labels.data)

        if (i == 0) or (i % log_step == log_step - 1):
            print(f'[Batch: {i + 1}] running val loss: {running_loss / total}, running val accuracy: {running_corrects / total}')

    print(f'val loss: {running_loss / total}, accuracy: {running_corrects / total}')
    print("elapsed time:", time.time() - start_time)
    return running_loss / total, (running_corrects / total).item()


def test():
    start_time = time.time()
    print(f'[Test]')
    model.eval()
    total = 0
    running_loss = 0.0
    running_corrects = 0

    for i, batch in enumerate(test_dataloader):
        imgs, labels = batch
        imgs, labels = imgs.cuda(), labels.cuda()

        with torch.no_grad():
            outputs = model(imgs)
            _, preds = torch.max(outputs, 1)
            loss = criterion(outputs, labels)

        total += labels.shape[0]
        running_loss += loss.item()
        running_corrects += torch.sum(preds == labels.data)

        if (i == 0) or (i % log_step == log_step - 1):
            print(f'[Batch: {i + 1}] running test loss: {running_loss / total}, running test accuracy: {running_corrects / total}')

    print(f'test loss: {running_loss / total}, accuracy: {running_corrects / total}')
    print("elapsed time:", time.time() - start_time)
    return running_loss / total, (running_corrects / total).item()
```
```python
def adjust_learning_rate(optimizer, epoch):
    # Start from learning_rate, divide by 10 from epoch 3,
    # and by 100 from epoch 7.
    lr = learning_rate
    if epoch >= 3:
        lr /= 10
    if epoch >= 7:
        lr /= 10
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr
```
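As a side note (not from the lecture), PyTorch ships this kind of step schedule as a built-in scheduler; a minimal sketch, assuming the `optimizer` created in the next section:

```python
from torch.optim.lr_scheduler import MultiStepLR

# Equivalent built-in schedule: lr is multiplied by 0.1 after epochs 3 and 7.
scheduler = MultiStepLR(optimizer, milestones=[3, 7], gamma=0.1)
# Inside the epoch loop, call scheduler.step() once per epoch
# instead of adjust_learning_rate(optimizer, epoch).
```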
2-4. Checking the Training Results
```python
learning_rate = 0.01
log_step = 20

model = Model1()
model = model.cuda()

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=learning_rate, momentum=0.9)

num_epochs = 20
best_val_acc = 0
best_epoch = 0

history = []
accuracy = []
for epoch in range(num_epochs):
    adjust_learning_rate(optimizer, epoch)
    train_loss, train_acc = train()
    val_loss, val_acc = validate()
    history.append((train_loss, val_loss))
    accuracy.append((train_acc, val_acc))

    if val_acc > best_val_acc:
        print("[Info] best validation accuracy!")
        best_val_acc = val_acc
        best_epoch = epoch
        torch.save(model.state_dict(), f"best_checkpoint_epoch_{epoch + 1}.pth")

torch.save(model.state_dict(), f"last_checkpoint_epoch_{num_epochs}.pth")

plt.plot([x[0] for x in accuracy], 'b', label='train')
plt.plot([x[1] for x in accuracy], 'r--', label='validation')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()

test_loss, test_accuracy = test()
print(f"Test loss: {test_loss:.8f}")
print(f"Test accuracy: {test_accuracy * 100.:.2f}%")
```
```
[Epoch: 1 - Training]
train loss: 0.2567662864261203, accuracy: 0.5733333230018616
elapsed time: 6.803460121154785
[Epoch: 1 - Validation]
[Batch: 1] running val loss: 0.4910295903682709, running val accuracy: 0.640625
val loss: 0.43044863367927144, accuracy: 0.668639063835144
elapsed time: 1.5926661491394043
[Info] best validation accuracy!
[Epoch: 2 - Training]
train loss: 0.33135583453708223, accuracy: 0.7051851749420166
elapsed time: 5.796816110610962
[Epoch: 2 - Validation]
[Batch: 1] running val loss: 0.5253145098686218, running val accuracy: 0.6875
val loss: 0.5071378290300539, accuracy: 0.692307710647583
elapsed time: 1.3913233280181885
[Info] best validation accuracy!
[Epoch: 3 - Training]
train loss: 0.3916583435623734, accuracy: 0.6948148012161255
elapsed time: 6.28375244140625
[Epoch: 3 - Validation]
[Batch: 1] running val loss: 0.48151618242263794, running val accuracy: 0.703125
val loss: 0.4469795452772513, accuracy: 0.7041420340538025
elapsed time: 1.4555256366729736
[Info] best validation accuracy!
...
[Epoch: 18 - Training]
train loss: 0.06829585605197483, accuracy: 0.8311111330986023
elapsed time: 6.123921632766724
[Epoch: 18 - Validation]
[Batch: 1] running val loss: 0.14488814771175385, running val accuracy: 0.71875
val loss: 0.15127370766634066, accuracy: 0.7278106808662415
elapsed time: 1.4212892055511475
[Epoch: 19 - Training]
train loss: 0.06393914293359827, accuracy: 0.8399999737739563
elapsed time: 5.9954869747161865
[Epoch: 19 - Validation]
[Batch: 1] running val loss: 0.17086072266101837, running val accuracy: 0.734375
val loss: 0.1706133735250439, accuracy: 0.7218934893608093
elapsed time: 2.9930226802825928
[Epoch: 20 - Training]
train loss: 0.06417858300385652, accuracy: 0.8370370268821716
elapsed time: 5.5381317138671875
[Epoch: 20 - Validation]
[Batch: 1] running val loss: 0.12306518852710724, running val accuracy: 0.75
val loss: 0.1439898479619675, accuracy: 0.7396450042724609
elapsed time: 1.4337537288665771
[Test]
[Batch: 1] running test loss: 0.2646869122982025, running test accuracy: 0.59375
test loss: 0.1993811190645359, accuracy: 0.7010676264762878
elapsed time: 1.0359320640563965
Test loss: 0.19938112
Test accuracy: 70.11%
```
```python
learning_rate = 0.01
log_step = 20

model = Model2()
model = model.cuda()

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=learning_rate, momentum=0.9)

num_epochs = 20
best_val_acc = 0
best_epoch = 0

history = []
accuracy = []
for epoch in range(num_epochs):
    adjust_learning_rate(optimizer, epoch)
    train_loss, train_acc = train()
    val_loss, val_acc = validate()
    history.append((train_loss, val_loss))
    accuracy.append((train_acc, val_acc))

    if val_acc > best_val_acc:
        print("[Info] best validation accuracy!")
        best_val_acc = val_acc
        best_epoch = epoch
        torch.save(model.state_dict(), f"best_checkpoint_epoch_{epoch + 1}.pth")

torch.save(model.state_dict(), f"last_checkpoint_epoch_{num_epochs}.pth")

plt.plot([x[0] for x in accuracy], 'b', label='train')
plt.plot([x[1] for x in accuracy], 'r--', label='validation')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()

test_loss, test_accuracy = test()
print(f"Test loss: {test_loss:.8f}")
print(f"Test accuracy: {test_accuracy * 100.:.2f}%")
```
```
...
[Epoch: 19 - Training]
train loss: 0.1639189843778257, accuracy: 0.8148148059844971
elapsed time: 5.705990314483643
[Epoch: 19 - Validation]
[Batch: 1] running val loss: 0.2906940281391144, running val accuracy: 0.734375
val loss: 0.31424146008914744, accuracy: 0.7396450042724609
elapsed time: 1.4762709140777588
[Epoch: 20 - Training]
train loss: 0.15077252564606844, accuracy: 0.8133333325386047
elapsed time: 6.453710317611694
[Epoch: 20 - Validation]
[Batch: 1] running val loss: 0.2633395493030548, running val accuracy: 0.703125
val loss: 0.3107570783626398, accuracy: 0.715976357460022
elapsed time: 1.4691383838653564
[Test]
[Batch: 1] running test loss: 0.5579845309257507, running test accuracy: 0.609375
test loss: 0.5404184144586855, accuracy: 0.7153024673461914
elapsed time: 1.0637791156768799
Test loss: 0.54041841
Test accuracy: 71.53%
```
```python
learning_rate = 0.01
log_step = 20

model = Model3()
model = model.cuda()

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=learning_rate, momentum=0.9)

num_epochs = 20
best_val_acc = 0
best_epoch = 0

history = []
accuracy = []
for epoch in range(num_epochs):
    adjust_learning_rate(optimizer, epoch)
    train_loss, train_acc = train()
    val_loss, val_acc = validate()
    history.append((train_loss, val_loss))
    accuracy.append((train_acc, val_acc))

    if val_acc > best_val_acc:
        print("[Info] best validation accuracy!")
        best_val_acc = val_acc
        best_epoch = epoch
        torch.save(model.state_dict(), f"best_checkpoint_epoch_{epoch + 1}.pth")

torch.save(model.state_dict(), f"last_checkpoint_epoch_{num_epochs}.pth")

plt.plot([x[0] for x in accuracy], 'b', label='train')
plt.plot([x[1] for x in accuracy], 'r--', label='validation')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()

test_loss, test_accuracy = test()
print(f"Test loss: {test_loss:.8f}")
print(f"Test accuracy: {test_accuracy * 100.:.2f}%")
```
```
...
[Epoch: 19 - Training]
train loss: 0.013266877509929515, accuracy: 0.6503703594207764
elapsed time: 5.3732171058654785
[Epoch: 19 - Validation]
[Batch: 1] running val loss: 0.011657175607979298, running val accuracy: 0.734375
val loss: 0.01267393735738901, accuracy: 0.7573964595794678
elapsed time: 1.9730148315429688
[Epoch: 20 - Training]
train loss: 0.012981251610649957, accuracy: 0.6429629325866699
elapsed time: 5.319707155227661
[Epoch: 20 - Validation]
[Batch: 1] running val loss: 0.011605185456573963, running val accuracy: 0.734375
val loss: 0.012672973807746842, accuracy: 0.7573964595794678
elapsed time: 1.3562369346618652
[Test]
[Batch: 1] running test loss: 0.017807412892580032, running test accuracy: 0.6875
test loss: 0.011031828814076784, accuracy: 0.7829181551933289
elapsed time: 0.9401159286499023
Test loss: 0.01103183
Test accuracy: 78.29%
```
3. Weather Image Classification Model - PyTorch
- Here, let's take a pre-trained model and train it on our dataset
- We will use the same weather dataset as above
- An FC layer is attached at the end of the network to match the number of classes
3-1. Training the Deep Learning Model (Training)
```python
learning_rate = 0.01
log_step = 20

model = models.resnet50(pretrained=True)
num_features = model.fc.in_features
model.fc = nn.Linear(num_features, 4)  # transfer learning
model = model.cuda()

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=learning_rate, momentum=0.9)
```
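A common transfer-learning variant, which this lecture does not use (here every weight is fine-tuned), is to freeze the pre-trained backbone and train only the new FC head; a minimal sketch:

```python
# Alternative: freeze the backbone, train only the new head.
model = models.resnet50(pretrained=True)
for param in model.parameters():
    param.requires_grad = False            # freeze all pre-trained weights
model.fc = nn.Linear(model.fc.in_features, 4)  # new layer defaults to requires_grad=True
model = model.cuda()
optimizer = optim.SGD(model.fc.parameters(), lr=learning_rate, momentum=0.9)
```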
- The train( ), validate( ), and test( ) functions are reused unchanged from section 2-3
```python
# adjust_learning_rate() is also reused unchanged from section 2-3.

num_epochs = 10
best_val_acc = 0
best_epoch = 0

history = []
accuracy = []
for epoch in range(num_epochs):
    adjust_learning_rate(optimizer, epoch)
    train_loss, train_acc = train()
    val_loss, val_acc = validate()
    history.append((train_loss, val_loss))
    accuracy.append((train_acc, val_acc))

    if val_acc > best_val_acc:
        print("[Info] best validation accuracy!")
        best_val_acc = val_acc
        best_epoch = epoch
        torch.save(model.state_dict(), f'best_checkpoint_epoch_{epoch + 1}.pth')

torch.save(model.state_dict(), f'last_checkpoint_epoch_{num_epochs}.pth')
```
3-2. Checking the Training Results
```python
plt.plot([x[0] for x in history], 'b', label='train')
plt.plot([x[1] for x in history], 'r--', label='validation')
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
```
```python
plt.plot([x[0] for x in accuracy], 'b', label='train')
plt.plot([x[1] for x in accuracy], 'r--', label='validation')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
```
```python
model = models.resnet50(pretrained=True)
num_features = model.fc.in_features
model.fc = nn.Linear(num_features, 4)  # transfer learning
model = model.cuda()

model_path = 'best_checkpoint_epoch_5.pth'
model.load_state_dict(torch.load(model_path))

test_loss, test_accuracy = test()
print(f"Test loss: {test_loss:.8f}")
print(f"Test accuracy: {test_accuracy * 100.:.2f}%")
```
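To wrap up, a minimal sketch (my own addition; `sample.jpg` is a placeholder path) of how the restored model could classify a single image, reusing `transform_test` and `class_names` from section 2:

```python
from PIL import Image

model.eval()
img = Image.open('sample.jpg').convert('RGB')  # placeholder image path
x = transform_test(img).unsqueeze(0).cuda()    # add a batch dimension

with torch.no_grad():
    pred = model(x).argmax(dim=1).item()

print(class_names[pred])  # e.g. "Sunrise"
```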