Learn AI series:

[1. Setup ubuntu server 20.04 for machine learning](https://www.helloyiyu.com/index.php/ai/28.html)
[2. Try run PyTorch Quickstart demo](https://www.helloyiyu.com/index.php/ai/32.html)
[3. Run the llama3 large language model locally with Ollama and OpenWebUI](https://www.helloyiyu.com/index.php/ai/39.html)

---

PyTorch learning resources: [datawhalechina/thorough-pytorch: an introductory PyTorch tutorial, readable online at https://datawhalechina.github.io/thorough-pytorch/](https://github.com/datawhalechina/thorough-pytorch)

---

Reference (official docs): [Quickstart — PyTorch Tutorials 2.3.0+cu121 documentation](https://pytorch.org/tutorials/beginner/basics/quickstart_tutorial.html)

This post runs the PyTorch Quickstart demo code on a local Ubuntu Server 20.04 machine.

### Working with data

PyTorch has two [primitives to work with data](https://pytorch.org/docs/stable/data.html): `torch.utils.data.DataLoader` and `torch.utils.data.Dataset`. `Dataset` stores the samples and their corresponding labels, and `DataLoader` wraps an iterable around the `Dataset`.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor
```

PyTorch offers domain-specific libraries such as [TorchText](https://pytorch.org/text/stable/index.html), [TorchVision](https://pytorch.org/vision/stable/index.html), and [TorchAudio](https://pytorch.org/audio/stable/index.html), all of which include datasets. For this tutorial, we will be using a TorchVision dataset.

The `torchvision.datasets` module contains `Dataset` objects for many real-world vision datasets like CIFAR and COCO ([full list here](https://pytorch.org/vision/stable/datasets.html)). In this tutorial, we use the FashionMNIST dataset. Every TorchVision `Dataset` includes two arguments, `transform` and `target_transform`, to modify the samples and labels respectively.

**Note**: This download step can be very slow. As a workaround, download `train-images-idx3-ubyte.gz` and `t10k-images-idx3-ubyte.gz` directly in a browser from [fashion-mnist · GitHub](https://github.com/zalandoresearch/fashion-mnist/tree/master/data/fashion), upload them to `{jupyter_root_dir}/data/FashionMNIST/raw/`, and then run the code below. (The label files are small and download quickly on their own, as the log below shows.)
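If you prefer to script that workaround, a minimal sketch along these lines can pre-fetch all four archives. It assumes the `raw.githubusercontent.com` URL pattern for the zalandoresearch/fashion-mnist repository and a working directory that matches `root="data"` below:

```python
import os
import urllib.request

# Assumed mirror: raw files served from the zalandoresearch/fashion-mnist GitHub repo.
BASE = "https://raw.githubusercontent.com/zalandoresearch/fashion-mnist/master/data/fashion/"
FILES = [
    "train-images-idx3-ubyte.gz",
    "train-labels-idx1-ubyte.gz",
    "t10k-images-idx3-ubyte.gz",
    "t10k-labels-idx1-ubyte.gz",
]

raw_dir = "data/FashionMNIST/raw"  # must match the root="data" argument used below
os.makedirs(raw_dir, exist_ok=True)
for name in FILES:
    dest = os.path.join(raw_dir, name)
    if not os.path.exists(dest):  # skip files already uploaded by hand
        print(f"Fetching {name} ...")
        urllib.request.urlretrieve(BASE + name, dest)
```

With the archives already in place, `datasets.FashionMNIST(..., download=True)` verifies the existing files instead of re-downloading them, which is what the "Using downloaded and verified file" lines in the log below indicate.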
```python
# Download training data from open datasets.
training_data = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor(),
)

# Download test data from open datasets.
test_data = datasets.FashionMNIST(
    root="data",
    train=False,
    download=True,
    transform=ToTensor(),
)
```

```
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz
Using downloaded and verified file: data/FashionMNIST/raw/train-images-idx3-ubyte.gz
Extracting data/FashionMNIST/raw/train-images-idx3-ubyte.gz to data/FashionMNIST/raw
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz to data/FashionMNIST/raw/train-labels-idx1-ubyte.gz
100.0%
Extracting data/FashionMNIST/raw/train-labels-idx1-ubyte.gz to data/FashionMNIST/raw
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz
Using downloaded and verified file: data/FashionMNIST/raw/t10k-images-idx3-ubyte.gz
Extracting data/FashionMNIST/raw/t10k-images-idx3-ubyte.gz to data/FashionMNIST/raw
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz to data/FashionMNIST/raw/t10k-labels-idx1-ubyte.gz
100.0%
Extracting data/FashionMNIST/raw/t10k-labels-idx1-ubyte.gz to data/FashionMNIST/raw
```

We pass the `Dataset` as an argument to `DataLoader`. This wraps an iterable over our dataset, and supports automatic batching, sampling, shuffling and multiprocess data loading. Here we define a batch size of 64, i.e. each element in the dataloader iterable will return a batch of 64 features and labels.

```python
batch_size = 64

# Create data loaders.
train_dataloader = DataLoader(training_data, batch_size=batch_size)
test_dataloader = DataLoader(test_data, batch_size=batch_size)

for X, y in test_dataloader:
    print(f"Shape of X [N, C, H, W]: {X.shape}")
    print(f"Shape of y: {y.shape} {y.dtype}")
    break
```

```
Shape of X [N, C, H, W]: torch.Size([64, 1, 28, 28])
Shape of y: torch.Size([64]) torch.int64
```

Read more about [loading data in PyTorch](https://pytorch.org/tutorials/beginner/basics/data_tutorial.html).

---

### Creating Models

To define a neural network in PyTorch, we create a class that inherits from [nn.Module](https://pytorch.org/docs/stable/generated/torch.nn.Module.html). We define the layers of the network in the `__init__` function and specify how data will pass through the network in the `forward` function. To accelerate operations in the neural network, we move it to the GPU or MPS if available.

```python
# Get cpu, gpu or mps device for training.
device = (
    "cuda"
    if torch.cuda.is_available()
    else "mps"
    if torch.backends.mps.is_available()
    else "cpu"
)
print(f"Using {device} device")

# Define model
class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28*28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10)
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

model = NeuralNetwork().to(device)
print(model)
```

```
Using cpu device
NeuralNetwork(
  (flatten): Flatten(start_dim=1, end_dim=-1)
  (linear_relu_stack): Sequential(
    (0): Linear(in_features=784, out_features=512, bias=True)
    (1): ReLU()
    (2): Linear(in_features=512, out_features=512, bias=True)
    (3): ReLU()
    (4): Linear(in_features=512, out_features=10, bias=True)
  )
)
```

Read more about [building neural networks in PyTorch](https://pytorch.org/tutorials/beginner/basics/buildmodel_tutorial.html).
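As a quick sanity check on shapes, you can push one random 28×28 "image" through the freshly built model. This snippet is an illustrative addition, not part of the quickstart itself; the `Softmax` call only converts the raw logits into probabilities for display and is not a layer of the model:

```python
# Illustrative shape check: one random 1x28x28 input through the untrained model.
X = torch.rand(1, 28, 28, device=device)
logits = model(X)                  # raw scores, shape [1, 10]
probs = nn.Softmax(dim=1)(logits)  # probabilities over the 10 classes
print(logits.shape, probs.argmax(1))
```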
---

### Optimizing the Model Parameters

To train a model, we need a [loss function](https://pytorch.org/docs/stable/nn.html#loss-functions) and an [optimizer](https://pytorch.org/docs/stable/optim.html).

```python
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
```

In a single training loop, the model makes predictions on the training dataset (fed to it in batches), and backpropagates the prediction error to adjust the model's parameters.

```python
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    model.train()
    for batch, (X, y) in enumerate(dataloader):
        X, y = X.to(device), y.to(device)

        # Compute prediction error
        pred = model(X)
        loss = loss_fn(pred, y)

        # Backpropagation
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

        if batch % 100 == 0:
            loss, current = loss.item(), (batch + 1) * len(X)
            print(f"loss: {loss:>7f}  [{current:>5d}/{size:>5d}]")
```

We also check the model's performance against the test dataset to ensure it is learning.

```python
def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    model.eval()
    test_loss, correct = 0, 0
    with torch.no_grad():
        for X, y in dataloader:
            X, y = X.to(device), y.to(device)
            pred = model(X)
            test_loss += loss_fn(pred, y).item()
            correct += (pred.argmax(1) == y).type(torch.float).sum().item()
    test_loss /= num_batches
    correct /= size
    print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
```

The training process is conducted over several iterations (*epochs*). During each epoch, the model learns parameters to make better predictions. We print the model's accuracy and loss at each epoch; we'd like to see the accuracy increase and the loss decrease with every epoch.
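Before launching the full run, a one-batch smoke test (an illustrative addition, reusing the objects defined above) can confirm everything is wired correctly. For a randomly initialized 10-class classifier, the cross-entropy loss should start near ln(10) ≈ 2.30:

```python
# Illustrative smoke test: evaluate the untrained model on a single batch.
X, y = next(iter(train_dataloader))
X, y = X.to(device), y.to(device)
with torch.no_grad():
    print(f"Initial loss: {loss_fn(model(X), y).item():.4f}")  # expect roughly ln(10) ≈ 2.30
```

The first logged loss below (≈2.289) matches that expectation. The full five-epoch run: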
```python
epochs = 5
for t in range(epochs):
    print(f"Epoch {t+1}\n-------------------------------")
    train(train_dataloader, model, loss_fn, optimizer)
    test(test_dataloader, model, loss_fn)
print("Done!")
```

```
Epoch 1
-------------------------------
loss: 2.289245  [   64/60000]
loss: 2.280316  [ 6464/60000]
loss: 2.262248  [12864/60000]
loss: 2.266677  [19264/60000]
loss: 2.236694  [25664/60000]
loss: 2.220209  [32064/60000]
loss: 2.223473  [38464/60000]
loss: 2.185845  [44864/60000]
loss: 2.181429  [51264/60000]
loss: 2.156792  [57664/60000]
Test Error:
 Accuracy: 41.9%, Avg loss: 2.140304

Epoch 2
-------------------------------
loss: 2.141585  [   64/60000]
loss: 2.131078  [ 6464/60000]
loss: 2.075279  [12864/60000]
loss: 2.101078  [19264/60000]
loss: 2.023978  [25664/60000]
loss: 1.982755  [32064/60000]
loss: 2.003413  [38464/60000]
loss: 1.913837  [44864/60000]
loss: 1.920139  [51264/60000]
loss: 1.854348  [57664/60000]
Test Error:
 Accuracy: 57.2%, Avg loss: 1.844530

Epoch 3
-------------------------------
loss: 1.865049  [   64/60000]
loss: 1.838099  [ 6464/60000]
loss: 1.725792  [12864/60000]
loss: 1.779518  [19264/60000]
loss: 1.646965  [25664/60000]
loss: 1.620810  [32064/60000]
loss: 1.635814  [38464/60000]
loss: 1.534571  [44864/60000]
loss: 1.559003  [51264/60000]
loss: 1.463222  [57664/60000]
Test Error:
 Accuracy: 61.6%, Avg loss: 1.481398

Epoch 4
-------------------------------
loss: 1.534660  [   64/60000]
loss: 1.510708  [ 6464/60000]
loss: 1.369240  [12864/60000]
loss: 1.450706  [19264/60000]
loss: 1.320927  [25664/60000]
loss: 1.332666  [32064/60000]
loss: 1.338850  [38464/60000]
loss: 1.266171  [44864/60000]
loss: 1.298449  [51264/60000]
loss: 1.206169  [57664/60000]
Test Error:
 Accuracy: 63.3%, Avg loss: 1.232588

Epoch 5
-------------------------------
loss: 1.298163  [   64/60000]
loss: 1.289273  [ 6464/60000]
loss: 1.130808  [12864/60000]
loss: 1.241744  [19264/60000]
loss: 1.112488  [25664/60000]
loss: 1.146212  [32064/60000]
loss: 1.159560  [38464/60000]
loss: 1.099317  [44864/60000]
loss: 1.136193  [51264/60000]
loss: 1.057784  [57664/60000]
Test Error:
 Accuracy: 64.8%, Avg loss: 1.077425

Done!
```

Read more about [Training your model](https://pytorch.org/tutorials/beginner/basics/optimization_tutorial.html).

---

### Saving Models

A common way to save a model is to serialize the internal state dictionary (containing the model parameters).

```python
torch.save(model.state_dict(), "model.pth")
print("Saved PyTorch Model State to model.pth")
```

```
Saved PyTorch Model State to model.pth
```

---

### Loading Models

The process for loading a model includes re-creating the model structure and loading the state dictionary into it.

```python
model = NeuralNetwork().to(device)
model.load_state_dict(torch.load("model.pth"))
```

```
<All keys matched successfully>
```

This model can now be used to make predictions.

```python
classes = [
    "T-shirt/top",
    "Trouser",
    "Pullover",
    "Dress",
    "Coat",
    "Sandal",
    "Shirt",
    "Sneaker",
    "Bag",
    "Ankle boot",
]

model.eval()
x, y = test_data[0][0], test_data[0][1]
with torch.no_grad():
    x = x.to(device)
    pred = model(x)
    predicted, actual = classes[pred[0].argmax(0)], classes[y]
    print(f'Predicted: "{predicted}", Actual: "{actual}"')
```

```
Predicted: "Ankle boot", Actual: "Ankle boot"
```
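The same pattern extends beyond a single sample. Here is a small batched variant (an illustrative sketch, reusing `model`, `test_data`, and `classes` from above) that classifies several test images in one forward pass:

```python
# Illustrative: classify the first five test images in one batched forward pass.
model.eval()
X = torch.stack([test_data[i][0] for i in range(5)]).to(device)  # shape [5, 1, 28, 28]
with torch.no_grad():
    preds = model(X).argmax(1)  # predicted class index per image
for i, p in enumerate(preds):
    print(f'{i}: Predicted "{classes[p]}", Actual "{classes[test_data[i][1]]}"')
```

Batching the inputs this way is also how you would serve the model in practice, since a single forward pass over a stack of inputs is much cheaper than one pass per image.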