
Processing and Loading the UCF101 Video Dataset (Implemented in PyTorch)

2018-03-12 17:06 Source: QingPingShan

Preface

  1. I previously wrote a post that processed this video dataset without any deep learning framework: Processing and Loading the UCF101 Video Dataset (Without a Deep Learning Framework).
  2. That approach was simple and crude, with plenty of room for optimization. Over the past couple of days I studied PyTorch's built-in support for dataset loading: Getting Started with PyTorch (7): Data Loading and Processing.
  3. I had promised to re-implement the UCF101 processing the PyTorch way, so this post records how I did it. Only the concrete implementation is covered here; for the reasoning behind these choices, see the earlier post.

1 Specific Goals

  1. Use the lists in trainlist (testlist) to decide which videos belong to the dataset.
  2. For each video, sample 16 consecutive frames at a random position.
  3. Subtract the RGB mean from every frame.
  4. Resize each frame to (182, 242).
  5. Randomly crop each resized frame to (160, 160).
  6. Each iteration returns the video batch x[batch_size, 16, 3, 160, 160] and the labels y[batch_size].
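The shape bookkeeping in steps 3 to 6 can be sketched on a dummy clip. This is only a sanity check of the tensor shapes, not the article's real pipeline: the dummy `clip` array is made up, and for brevity the crop is taken straight from the raw 240x320 frame instead of a skimage-resized (182, 242) one.

```python
import numpy as np
import torch

# Hypothetical dummy clip: 16 RGB frames at the raw UCF101 resolution (240x320)
clip = np.random.randint(0, 256, size=(16, 240, 320, 3)).astype(np.float64)

# Step 3: subtract the per-channel RGB mean (broadcast over (16, H, W, 3))
clip -= np.array((123, 117, 104))

# Step 5: random (160, 160) crop; the real code crops the resized frame instead
top = np.random.randint(0, clip.shape[1] - 160)
left = np.random.randint(0, clip.shape[2] - 160)
cropped = clip[:, top:top + 160, left:left + 160, :]

# Step 6: channel-first tensor, one video = (16, 3, 160, 160)
x = torch.from_numpy(cropped.transpose((0, 3, 1, 2)))
print(x.shape)  # torch.Size([16, 3, 160, 160])
```

Stacking `batch_size` such tensors is what later yields the x[batch_size, 16, 3, 160, 160] shape.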

2 Implementation Approach

Since the dataset we need to handle is neither one that PyTorch provides directly nor stored in the common ImageFolder layout, we implement it step by step ourselves.

  • The biggest difference from the tutorial is that we are processing videos rather than single images, so the frame-loading work goes into __getitem__ .
  • The remaining transformations go into transform .
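That division of labor can be sketched as a minimal Dataset skeleton. The class name `VideoDatasetSketch` and the in-memory `samples` list are made up for illustration; the real class below reads frames from disk inside `__getitem__`.

```python
import numpy as np
from torch.utils.data import Dataset

class VideoDatasetSketch(Dataset):
    """Skeleton only: clip loading lives in __getitem__, all other work in transform."""

    def __init__(self, samples, transform=None):
        self.samples = samples            # e.g. a list of (clip, label) pairs
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        clip, label = self.samples[idx]   # the real code reads 16 frames from disk here
        sample = {'video_x': clip, 'video_label': label}
        if self.transform:                # everything else is delegated to transform
            sample = self.transform(sample)
        return sample

# quick check with one dummy item
ds = VideoDatasetSketch([(np.zeros((16, 240, 320, 3)), 0)])
print(len(ds), ds[0]['video_label'])  # 1 0
```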

The concrete steps are:

  1. First, define the dataset class UCF101, which inherits from the abstract class Dataset and implements __init__ , __len__ , and __getitem__ :

    • __init__ : reads and parses the info list, plus any other initialization.
    • __len__ : returns the size of the dataset.
    • __getitem__ : reads and returns 16 random consecutive frames of a single video.
    • Helper functions support the above.
  2. Then, implement the video-specific preprocessing steps and wrap each as a class:

    • Subtract the RGB mean.
    • Resize to (182, 242).
    • Randomly crop to (160, 160).
    • Convert to a Tensor.
    • Chain them together into a single transform .
  3. Pass the composed transform into the UCF101 class as a parameter and instantiate it to obtain a my_UCF101 object.

  4. Finally, hand my_UCF101 to torch.utils.data.DataLoader , configuring shuffling, batch size, and so on as needed.
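Step 4 can be previewed with a toy stand-in for my_UCF101, showing how the default collation stacks per-video tensors into batches. The `ToyClips` class is invented for this sketch; only the dict keys and per-item shapes match the real dataset.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class ToyClips(Dataset):
    """Stands in for my_UCF101: each item is one preprocessed clip plus a label."""
    def __len__(self):
        return 32

    def __getitem__(self, idx):
        return {'video_x': torch.zeros(16, 3, 160, 160),
                'video_label': torch.FloatTensor([idx % 101])}

loader = DataLoader(ToyClips(), batch_size=8, shuffle=True, num_workers=0)
batch = next(iter(loader))
print(batch['video_x'].size())      # torch.Size([8, 16, 3, 160, 160])
print(batch['video_label'].size())  # torch.Size([8, 1])
```

The default collate function stacks the (16, 3, 160, 160) clips along a new batch dimension, which is exactly the x[batch_size, 16, 3, 160, 160] shape promised in the goals.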

3 Complete Code

If any of the underlying principles are unclear, it is still worth revisiting: Getting Started with PyTorch (7): Data Loading and Processing.

No more repetition here; straight to the code.

from __future__ import print_function, division
import os
import torch
import pandas as pd
from skimage import io, transform
import numpy as np
import random
import matplotlib.pyplot as plt
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils

# Ignore warnings
import warnings
warnings.filterwarnings("ignore")
plt.ion()   # interactive mode



class ClipSubstractMean(object):
    """Subtract the per-channel RGB mean from every frame of the clip."""

    def __init__(self, b=104, g=117, r=123):
        self.means = np.array((r, g, b))

    def __call__(self, sample):
        video_x, video_label = sample['video_x'], sample['video_label']
        new_video_x = video_x - self.means
        return {'video_x': new_video_x, 'video_label': video_label}


class Rescale(object):
    """Rescale the image in a sample to a given size.

    Args:
        output_size (tuple or int): Desired output size. If tuple, output is
            matched to output_size. If int, smaller of image edges is matched
            to output_size keeping aspect ratio the same.
    """

    def __init__(self, output_size=(182,242)):
        assert isinstance(output_size, (int, tuple))
        self.output_size = output_size

    def __call__(self, sample):
        video_x, video_label = sample['video_x'], sample['video_label']

        h, w = video_x.shape[1], video_x.shape[2]
        if isinstance(self.output_size, int):
            if h > w:
                new_h, new_w = self.output_size * h / w, self.output_size
            else:
                new_h, new_w = self.output_size, self.output_size * w / h
        else:
            new_h, new_w = self.output_size

        new_h, new_w = int(new_h), int(new_w)
        new_video_x = np.zeros((16, new_h, new_w, 3))
        for i in range(16):
            image = video_x[i, :, :, :]
            img = transform.resize(image, (new_h, new_w))
            new_video_x[i, :, :, :] = img

        return {'video_x': new_video_x, 'video_label': video_label}


class RandomCrop(object):
    """Crop randomly the image in a sample.

    Args:
        output_size (tuple or int): Desired output size. If int, square crop
            is made.
    """

    def __init__(self, output_size=(160,160)):
        assert isinstance(output_size, (int, tuple))
        if isinstance(output_size, int):
            self.output_size = (output_size, output_size)
        else:
            assert len(output_size) == 2
            self.output_size = output_size

    def __call__(self, sample):
        video_x, video_label = sample['video_x'], sample['video_label']

        h, w = video_x.shape[1], video_x.shape[2]
        new_h, new_w = self.output_size

        top = np.random.randint(0, h - new_h)
        left = np.random.randint(0, w - new_w)

        new_video_x = np.zeros((16, new_h, new_w, 3))
        for i in range(16):
            image = video_x[i, :, :, :]
            image = image[top: top + new_h, left: left + new_w]
            new_video_x[i, :, :, :] = image

        return {'video_x': new_video_x, 'video_label': video_label}


class ToTensor(object):
    """Convert ndarrays in sample to Tensors."""

    def __call__(self, sample):
        video_x, video_label = sample['video_x'], sample['video_label']

        # swap the color axis:
        # numpy clip: num_frames x H x W x C
        # torch clip: num_frames x C x H x W
        video_x = video_x.transpose((0, 3, 1, 2))
        return {'video_x': torch.from_numpy(video_x),
                'video_label': torch.FloatTensor([video_label])}


class UCF101(Dataset):
    """UCF101 Landmarks dataset."""

    def __init__(self, info_list, root_dir, transform=None):
        """
        Args:
            info_list (string): Path to the info list file with annotations.
            root_dir (string): Directory with all the video frames.
            transform (callable, optional): Optional transform to be applied
                on a sample.
        """
        self.landmarks_frame = pd.read_csv(info_list,delimiter=' ', header=None)
        self.root_dir = root_dir
        self.transform = transform
            
    def __len__(self):
        return len(self.landmarks_frame)

    # returns a clip of shape (16, 240, 320, 3)
    def __getitem__(self, idx):
        video_path = os.path.join(self.root_dir,self.landmarks_frame.iloc[idx, 0])
        video_label=self.landmarks_frame.iloc[idx,1]
        video_x=self.get_single_video_x(video_path)
        sample = {'video_x':video_x, 'video_label':video_label}

        if self.transform:
            sample = self.transform(sample)
        return sample


    def get_single_video_x(self, video_path):
        # strip the extension (.avi) to get the directory of extracted frames
        video_jpgs_path = os.path.splitext(video_path)[0]
        # 'n_frames' is a one-line file holding the total frame count
        data = pd.read_csv(os.path.join(video_jpgs_path, 'n_frames'), delimiter=' ', header=None)
        frame_count = data[0][0]
        video_x = np.zeros((16, 240, 320, 3))

        # pick a random start so that frames start .. start+15 all exist
        image_start = random.randint(1, max(1, frame_count - 15))
        image_id = image_start
        for i in range(16):
            image_name = 'image_%05d.jpg' % image_id
            image_path = os.path.join(video_jpgs_path, image_name)
            video_x[i, :, :, :] = io.imread(image_path)
            image_id += 1
        return video_x





if __name__ == '__main__':
    # usage
    root_list = '/home/hl/Desktop/lovelyqian/CV_Learning/UCF101_jpg/'
    info_list = '/home/hl/Desktop/lovelyqian/CV_Learning/UCF101_TrainTestlist/trainlist01.txt'
    myUCF101 = UCF101(info_list, root_list,
                      transform=transforms.Compose([ClipSubstractMean(),
                                                    Rescale(),
                                                    RandomCrop(),
                                                    ToTensor()]))

    dataloader = DataLoader(myUCF101, batch_size=8, shuffle=True, num_workers=8)
    for i_batch, sample_batched in enumerate(dataloader):
        print(i_batch, sample_batched['video_x'].size(), sample_batched['video_label'].size())

Compared with the previous version, this code is a clear improvement in both logical clarity and line count, so it really pays to learn from well-designed frameworks; of course, having implemented it once by hand has its own charm too.
