
Chapter09/Semantic_Segmentation_with_U_Net.ipynb #79

Open
qwrjwq opened this issue Jan 26, 2024 · 2 comments
qwrjwq commented Jan 26, 2024

log = Report(n_epochs)
for ex in range(n_epochs):
    N = len(trn_dl)
    for bx, data in enumerate(trn_dl):
        loss, acc = train_batch(model, data, optimizer, criterion)
        log.record(ex+(bx+1)/N, trn_loss=loss, trn_acc=acc, end='\r')

    N = len(val_dl)
    for bx, data in enumerate(val_dl):
        loss, acc = validate_batch(model, data, criterion)
        log.record(ex+(bx+1)/N, val_loss=loss, val_acc=acc, end='\r')
        
    log.report_avgs(ex+1)

RuntimeError: only batches of spatial targets supported (3D tensors) but got targets of size: : [4, 224, 224, 3]

in UnetLoss(preds, targets)
      1 ce = nn.CrossEntropyLoss()
      2 def UnetLoss(preds, targets):
----> 3     ce_loss = ce(preds, targets)
      4     acc = (torch.max(preds, 1)[1] == targets).float().mean()
      5     return ce_loss, acc
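The error means that `nn.CrossEntropyLoss` expects its *targets* to be integer class indices of shape `[N, H, W]`, but the mask here is still an RGB image of shape `[N, H, W, 3]`. A minimal sketch of the mismatch (the 12-class count is just an illustrative assumption, not the chapter's actual number of classes):

```python
import torch
import torch.nn as nn

ce = nn.CrossEntropyLoss()
preds = torch.randn(4, 12, 224, 224)  # [N, C, H, W] logits; 12 classes is illustrative

# RGB mask kept as-is: 4-D targets -> the RuntimeError from this issue
bad_targets = torch.zeros(4, 224, 224, 3).long()
try:
    ce(preds, bad_targets)
except RuntimeError as e:
    print(e)

# Class-index mask: 3-D targets of shape [N, H, W] -> works
good_targets = torch.zeros(4, 224, 224).long()
loss = ce(preds, good_targets)
```

So the fix belongs in the dataset, not the loss: load each mask as a single channel of class ids rather than as a 3-channel RGB image.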

@andysingal

I am also getting this error:

RuntimeError                              Traceback (most recent call last)
[<ipython-input-11-74e7caf8d75b>](https://localhost:8080/#) in <cell line: 2>()
      3     N = len(trn_dl)
      4     for bx, data in enumerate(trn_dl):
----> 5         loss, acc = train_batch(model, data, optimizer, criterion)
      6         log.record(ex+(bx+1)/N, trn_loss=loss, trn_acc=acc, end='\r')
      7 

5 frames
[/usr/local/lib/python3.10/dist-packages/torch/nn/functional.py](https://localhost:8080/#) in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing)
   3057     if size_average is not None or reduce is not None:
   3058         reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 3059     return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
   3060 
   3061 

RuntimeError: only batches of spatial targets supported (3D tensors) but got targets of dimension: 4

@WalidGharianiEAGLE

  • Issue: The original read function from torch_snippets was causing issues when reading the mask images from SegData.
  • Solution: I replaced read with cv2.imread.
class SegData(Dataset):
    def __init__(self, split):
        self.items = stems(f'./dataset1/dataset1/images_prepped_{split}')
        self.split = split
    def __len__(self):
        return len(self.items)
    def __getitem__(self, ix):
        image = read(f'./dataset1/images_prepped_{self.split}/{self.items[ix]}.png', 1)
        image = cv2.resize(image, (224,224))
        # Key change: read the mask as single-channel class ids (H, W) instead of RGB (H, W, 3)
        mask = cv2.imread(f'./dataset1/annotations_prepped_{self.split}/{self.items[ix]}.png', cv2.IMREAD_GRAYSCALE)
        # Nearest-neighbour interpolation, so resizing doesn't blend class ids into invalid labels
        mask = cv2.resize(mask, (224,224), interpolation=cv2.INTER_NEAREST)
        return image, mask
    def choose(self): return self[randint(len(self))]
    def collate_fn(self, batch):
        ims, masks = list(zip(*batch))
        ims = torch.cat([tfms(im.copy()/255.)[None] for im in ims]).float().to(device)
        ce_masks = torch.cat([torch.Tensor(mask[None]) for mask in masks]).long().to(device)
        return ims, ce_masks
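A quick sanity check that this now produces the 3-D targets `CrossEntropyLoss` expects. Here the grayscale masks are simulated with `numpy` arrays (since `cv2.IMREAD_GRAYSCALE` returns a 2-D `uint8` array), and the stacking mirrors the `collate_fn` above:

```python
import numpy as np
import torch

# cv2.imread(..., cv2.IMREAD_GRAYSCALE) yields a 2-D uint8 array; simulate a batch of 4
masks = [np.random.randint(0, 12, (224, 224), dtype=np.uint8) for _ in range(4)]

# Same stacking as collate_fn: add a leading batch axis to each mask, then concatenate
ce_masks = torch.cat([torch.Tensor(mask[None]) for mask in masks]).long()
print(ce_masks.shape)  # torch.Size([4, 224, 224]) -- 3-D, as required
```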
