This is adapted from a SlowFast blog post. That post only handles a single 30-second video rather than cutting one video into multiple 30-second clips, and its code errors out on Windows, so this article modifies the code to fix the problems encountered. Details follow.
First, the file layout:
ava
├── videos
├── videos_cut
├── rawframes
├── labelframes
└── annotations
First prepare the videos: put them under ./ava/videos, then cut them into 30-second clips stored in ./ava/videos_cut and named numerically. The code follows; instead of invoking the ffmpeg command line directly, it uses the ffmpeg package, installed with pip install ffmpeg-python.
File: video2img.py
import os
import shutil
from tqdm import tqdm

start = 0
seconds = 30
video_path = './ava/videos'
labelframes_path = './ava/labelframes'
rawframes_path = './ava/rawframes'
cut_videos_sh_path = './cut_videos.sh'
fps = 30
raw_frames = seconds * fps

# Rewrite the ffmpeg line in cut_videos.sh so each clip is `seconds` long at 30 fps
with open(cut_videos_sh_path, 'r') as f:
    sh = f.read()
sh = sh.replace(sh[sh.find(' ffmpeg'):],
                f' ffmpeg -ss {start} -t {seconds} -i "${{video}}" -r 30 -strict experimental "${{out_name}}"\n fi\ndone\n')
with open(cut_videos_sh_path, 'w') as f:
    f.write(sh)

# Copy one frame per second into labelframes (the '1_' prefix comes from the
# original single-video blog; raw2label.py below handles the multi-video naming)
os.makedirs(labelframes_path, exist_ok=True)
video_ids = [video_id[:-4] for video_id in os.listdir(video_path)]
for video_id in tqdm(video_ids):
    for img_id in range(2 * fps + 1, (seconds - 2) * 30, fps):
        shutil.copyfile(os.path.join(rawframes_path, video_id, '1_' + format(img_id, '05d') + '.jpg'),
                        os.path.join(labelframes_path, video_id + '_' + format(start + img_id // 30, '05d') + '.jpg'))
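Note that video2img.py only patches cut_videos.sh and copies label frames; the cutting step itself is not shown above. Below is a minimal sketch of that step with the ffmpeg-python package mentioned earlier. The file name cut_video.py and the exact numeric naming scheme are my own, and it assumes ffmpeg/ffprobe are on the PATH.
cut_video.py
import os
import ffmpeg

seconds = 30
in_dir = './ava/videos'
out_dir = './ava/videos_cut'
os.makedirs(out_dir, exist_ok=True)

count = 1
for name in sorted(os.listdir(in_dir)):
    src = os.path.join(in_dir, name)
    # Read the total duration from the container metadata
    duration = float(ffmpeg.probe(src)['format']['duration'])
    for clip_no in range(int(duration) // seconds):
        dst = os.path.join(out_dir, f'{count}.mp4')  # clips are named numerically
        (
            ffmpeg
            .input(src, ss=clip_no * seconds, t=seconds)
            .output(dst, r=30)
            .run(overwrite_output=True, quiet=True)
        )
        count += 1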
After running this, the videos_cut folder will contain multiple 30-second clips.
Extracting frames from the cut videos
The code is as follows:
extract_rgb_frames_ffmpeg.sh
IN_DATA_DIR="./ava/videos_cut"
OUT_DATA_DIR="./ava/rawframes"

if [[ ! -d "${OUT_DATA_DIR}" ]]; then
  echo "${OUT_DATA_DIR} doesn't exist. Creating it."
  mkdir -p ${OUT_DATA_DIR}
fi

for video in $(ls -A1 -U ${IN_DATA_DIR}/*)
do
  video_name=${video##*/}

  # Strip the extension (.webm is five characters, everything else assumed four)
  if [[ $video_name = *".webm" ]]; then
    video_name=${video_name::-5}
  else
    video_name=${video_name::-4}
  fi

  out_video_dir=${OUT_DATA_DIR}/${video_name}
  mkdir -p "${out_video_dir}"

  # Frames must be named "<video_name>_%05d.jpg" so the later scripts can find them
  out_name="${out_video_dir}/${video_name}_%05d.jpg"

  ffmpeg -i "${video}" -r 30 -q:v 1 "${out_name}"
done
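extract_rgb_frames_ffmpeg.sh needs a bash environment. Since the original commands failed on Windows, a rough Python equivalent using the same ffmpeg-python package could look like the sketch below (the file name extract_frames.py is my own).
extract_frames.py
import os
import ffmpeg

IN_DATA_DIR = './ava/videos_cut'
OUT_DATA_DIR = './ava/rawframes'

for video in os.listdir(IN_DATA_DIR):
    video_name = os.path.splitext(video)[0]
    out_video_dir = os.path.join(OUT_DATA_DIR, video_name)
    os.makedirs(out_video_dir, exist_ok=True)
    out_name = os.path.join(out_video_dir, f'{video_name}_%05d.jpg')
    # 30 fps, highest JPEG quality: same flags as the .sh script above
    (
        ffmpeg
        .input(os.path.join(IN_DATA_DIR, video))
        .output(out_name, r=30, **{'q:v': 1})
        .run(overwrite_output=True, quiet=True)
    )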
After running this, rawframes should contain one folder of extracted frames per video, e.g. rawframes/1/1_00001.jpg, 1_00002.jpg, and so on.
因?yàn)樵趕low fast 中是1秒抽30幀圖片,目的是用來(lái)訓(xùn)練,據(jù)說(shuō)因?yàn)閟lowfast在slow通道里1秒會(huì)采集到15幀,在fast通道里1秒會(huì)采集到2幀。所以我們的打標(biāo)文件要按照這個(gè)來(lái)。
The code for raw2label.py is as follows:
import os
import shutil
from tqdm import tqdm

fps = 30
seconds = 30
start = 0
video_path = './ava/videos_cut'
labelframes_path = './ava/labelframes'
rawframes_path = './ava/rawframes'

os.makedirs(labelframes_path, exist_ok=True)
video_ids = [video_id[:-4] for video_id in os.listdir(video_path)]
for video_id in tqdm(video_ids):
    # Take one frame every 30 (one per second), from frame 61 up to frame 840
    for img_id in range(2 * fps + 1, (seconds - 2) * 30, fps):
        shutil.copyfile(os.path.join(rawframes_path, video_id, video_id + '_' + format(img_id, '05d') + '.jpg'),
                        os.path.join(labelframes_path, video_id + '_' + format(start + img_id // 30, '05d') + '.jpg'))
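As a quick sanity check of the sampling arithmetic (my own snippet, not part of the pipeline), the loop takes frames 61, 91, ..., 811, which map to seconds 2 through 27:
fps, seconds, start = 30, 30, 0
for img_id in range(2 * fps + 1, (seconds - 2) * 30, fps):
    print(f'frame {img_id:05d} -> labelframe second {start + img_id // 30:05d}')
# frame 00061 -> second 00002, frame 00091 -> second 00003, ..., frame 00811 -> second 00027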
Now the labelframes folder contains the images to annotate.
Next, download the VIA annotation tool (https://www.robots.ox.ac.uk/~vgg/software/via/app/via_video_annotator.html) and annotate the images in labelframes. Save the project's csv export; the final step then converts the VIA annotations into an AVA-style dataset. (via2ava.py below assumes the standard VIA csv layout: attribute definitions on line 9 and annotation rows from line 11 onward.)
Converting the VIA export to an AVA dataset
The code for via2ava.py is as follows:
"""
Theme:ava format data transformer
author:Hongbo Jiang
time:2022/3/14/1:51:51
description:
這是一個(gè)數(shù)據(jù)格式轉(zhuǎn)換器,根據(jù)mmaction2的ava數(shù)據(jù)格式轉(zhuǎn)換規(guī)則將來(lái)自網(wǎng)站:
https://www.robots.ox.ac.uk/~vgg/software/via/app/via_video_annotator.html
的、標(biāo)注好的、視頻理解類(lèi)型的csv文件轉(zhuǎn)換為mmaction2指定的數(shù)據(jù)格式。
轉(zhuǎn)換規(guī)則:
# AVA Annotation Explained
In this section, we explain the annotation format of AVA in details:
```
mmaction2
├── data
│ ├── ava
│ │ ├── annotations
│ │ | ├── ava_dense_proposals_train.FAIR.recall_93.9.pkl
│ │ | ├── ava_dense_proposals_val.FAIR.recall_93.9.pkl
│ │ | ├── ava_dense_proposals_test.FAIR.recall_93.9.pkl
│ │ | ├── ava_train_v2.1.csv
│ │ | ├── ava_val_v2.1.csv
│ │ | ├── ava_train_excluded_timestamps_v2.1.csv
│ │ | ├── ava_val_excluded_timestamps_v2.1.csv
│ │ | ├── ava_action_list_v2.1.pbtxt
```
## The proposals generated by human detectors
In the annotation folder, `ava_dense_proposals_[train/val/test].FAIR.recall_93.9.pkl` are human proposals generated by a human detector. They are used in training, validation and testing respectively. Take `ava_dense_proposals_train.FAIR.recall_93.9.pkl` as an example. It is a dictionary of size 203626. The key consists of the `videoID` and the `timestamp`. For example, the key `-5KQ66BBWC4,0902` means the values are the detection results for the frame at the $$902_{nd}$$ second in the video `-5KQ66BBWC4`. The values in the dictionary are numpy arrays with shape $$N \times 5$$, where $$N$$ is the number of detected human bounding boxes in the corresponding frame. The format of bounding box is $$[x_1, y_1, x_2, y_2, score], 0 \le x_1, y_1, x_2, y_2, score \le 1$$. $$(x_1, y_1)$$ indicates the top-left corner of the bounding box, $$(x_2, y_2)$$ indicates the bottom-right corner of the bounding box; $$(0, 0)$$ indicates the top-left corner of the image, while $$(1, 1)$$ indicates the bottom-right corner of the image.
## The ground-truth labels for spatio-temporal action detection
In the annotation folder, `ava_[train/val]_v[2.1/2.2].csv` are ground-truth labels for spatio-temporal action detection, which are used during training & validation. Take `ava_train_v2.1.csv` as an example: it is a csv file with 837318 lines, each line is the annotation for a human instance in one frame. For example, the first line in `ava_train_v2.1.csv` is `'-5KQ66BBWC4,0902,0.077,0.151,0.283,0.811,80,1'`: the first two items `-5KQ66BBWC4` and `0902` indicate that it corresponds to the $$902_{nd}$$ second in the video `-5KQ66BBWC4`. The next four items ($$[0.077(x_1), 0.151(y_1), 0.283(x_2), 0.811(y_2)]$$) indicate the location of the bounding box, the bbox format is the same as human proposals. The next item `80` is the action label. The last item `1` is the ID of this bounding box.
## Excluded timestamps
`ava_[train/val]_excluded_timestamps_v[2.1/2.2].csv` contains excluded timestamps which are not used during training or validation. The format is `video_id, second_idx`.
## Label map
`ava_action_list_v[2.1/2.2]_for_activitynet_[2018/2019].pbtxt` contains the label map of the AVA dataset, which maps the action name to the label index.
"""
import csv
import os
import pickle

import cv2
import numpy as np
def transformer(origin_csv_path, frame_image_dir,
                train_output_pkl_path, train_output_csv_path,
                valid_output_pkl_path, valid_output_csv_path,
                exclude_train_output_csv_path, exclude_valid_output_csv_path,
                out_action_list, out_labelmap_path, dataset_percent=0.9):
    """
    Inputs:
        origin_csv_path: path of the csv file exported from the VIA website.
        frame_image_dir: directory of images named "<video name>_<nth second>.jpg", extracted one per second.
        *_output_pkl_path: output pkl file paths
        *_output_csv_path: output csv file paths
        out_labelmap_path: output labelmap.txt file path
        dataset_percent: train/validation split ratio
    Returns: nothing
    """
    # -----------------------------------------------------------------------------------------------
    get_label_map(origin_csv_path, out_action_list, out_labelmap_path)
    # -----------------------------------------------------------------------------------------------
    information_array = [[], [], []]
    # Read the box/label section of the input csv (annotation rows start at line 11)
    with open(origin_csv_path, 'r') as csvfile:
        count = 0
        content = csv.reader(csvfile)
        for line in content:
            if count >= 10:
                frame_image_name = eval(line[1])[0]  # str
                location_info = eval(line[4])[1:]  # list
                action_list = list(eval(line[5]).values())[0].split(',')
                action_list = [int(x) for x in action_list]  # list
                information_array[0].append(frame_image_name)
                information_array[1].append(location_info)
                information_array[2].append(action_list)
            count += 1
    # Collect frame image name, box location and action classes into one array
    information_array = np.array(information_array, dtype=object).transpose()
    # -----------------------------------------------------------------------------------------------
    num_train = int(dataset_percent * len(information_array))
    train_info_array = information_array[:num_train]
    valid_info_array = information_array[num_train:]
    get_pkl_csv(train_info_array, train_output_pkl_path, train_output_csv_path, exclude_train_output_csv_path,
                frame_image_dir)
    get_pkl_csv(valid_info_array, valid_output_pkl_path, valid_output_csv_path, exclude_valid_output_csv_path,
                frame_image_dir)
def get_label_map(origin_csv_path, out_action_list, out_labelmap_path):
    classes_list = 0
    classes_content = ""
    labelmap_strings = ""
    # Pull the action-class definitions out of row 9 of the csv
    with open(origin_csv_path, 'r') as csvfile:
        count = 0
        content = csv.reader(csvfile)
        for line in content:
            if count == 8:
                classes_list = line
                break
            count += 1
    # Slice out the class-dictionary segment
    st = 0
    ed = 0
    for i in range(len(classes_list)):
        if classes_list[i].startswith('options'):
            st = i
        if classes_list[i].startswith('default_option_id'):
            ed = i
    for i in range(st, ed):
        if i == st:
            classes_content = classes_content + classes_list[i][len('options:'):] + ','
        else:
            classes_content = classes_content + classes_list[i] + ','
    classes_dict = eval(classes_content)[0]
    # Write the action_list (.pbtxt) file
    with open(out_action_list, 'w') as f:
        for v, k in classes_dict.items():
            labelmap_strings = labelmap_strings + "label {{\n name: \"{}\"\n label_id: {}\n label_type: PERSON_MOVEMENT\n}}\n".format(
                k, int(v) + 1)
        f.write(labelmap_strings)
    labelmap_strings = ""
    # Write the labelmap.txt file
    with open(out_labelmap_path, 'w') as f:
        for v, k in classes_dict.items():
            labelmap_strings = labelmap_strings + "{}: {}\n".format(int(v) + 1, k)
        f.write(labelmap_strings)
def get_pkl_csv(information_array, output_pkl_path, output_csv_path, exclude_output_csv_path, frame_image_dir):
    # Initialise the containers before iterating
    pkl_data = dict()  # dict of pkl key/value pairs (values are plain lists)
    csv_data = []      # 2d array for the exported csv file
    read_data = {}     # dict of pkl key/value pairs (values converted to numpy arrays)
    for i in range(len(information_array)):
        img_name = information_array[i][0]
        # -------------------------------------------------------------------------------------------
        video_name, frame_name = '_'.join(img_name.split('_')[:-1]), format(int(img_name.split('_')[-1][:-4]),
                                                                            '04d')  # my naming is "<video name>_<frame name>"; adapt if yours differs
        # -------------------------------------------------------------------------------------------
        pkl_key = video_name + ',' + frame_name
        pkl_data[pkl_key] = []
    # Iterate over all images, read their info and populate the pkl data
    for i in range(len(information_array)):
        img_name = information_array[i][0]
        # -------------------------------------------------------------------------------------------
        video_name, frame_name = '_'.join(img_name.split('_')[:-1]), str(
            int(img_name.split('_')[-1][:-4]))  # my naming is "<video name>_<frame name>"; adapt if yours differs
        # -------------------------------------------------------------------------------------------
        imgpath = frame_image_dir + '/' + img_name
        location_list = information_array[i][1]
        action_info = information_array[i][2]
        image_array = cv2.imread(imgpath)
        h, w = image_array.shape[:2]
        # Normalise the box: VIA exports [x, y, width, height] in pixels,
        # AVA wants [x1, y1, x2, y2] in [0, 1]
        location_list[0] /= w
        location_list[1] /= h
        location_list[2] /= w
        location_list[3] /= h
        location_list[2] = location_list[2] + location_list[0]
        location_list[3] = location_list[3] + location_list[1]
        # The confidence score is fixed to 1 (appended below)
        # Assemble the csv and pkl entries
        for kind_idx in action_info:
            csv_info = [video_name, frame_name, *location_list, kind_idx + 1, 1]
            csv_data.append(csv_info)
        location_list = location_list + [1]
        pkl_key = video_name + ',' + format(int(frame_name), '04d')
        pkl_value = location_list
        pkl_data[pkl_key].append(pkl_value)
    for k, v in pkl_data.items():
        read_data[k] = np.array(v)
    with open(output_pkl_path, 'wb') as f:  # write the pkl file
        pickle.dump(read_data, f)
    with open(output_csv_path, 'w', newline='') as f:  # write the csv file; newline='' avoids blank lines
        f_csv = csv.writer(f)
        f_csv.writerows(csv_data)
    with open(exclude_output_csv_path, 'w', newline='') as f:  # write an empty excluded-timestamps csv
        f_csv = csv.writer(f)
        f_csv.writerows([])
def showpkl(pkl_path):
    with open(pkl_path, 'rb') as f:
        content = pickle.load(f)
    return content


def showcsv(csv_path):
    output = []
    with open(csv_path, 'r') as f:
        content = csv.reader(f)
        for line in content:
            output.append(line)
    return output


def showlabelmap(labelmap_path):
    classes_dict = dict()
    with open(labelmap_path, 'r') as f:
        content = (f.read().split('\n'))[:-1]
        for item in content:
            mid_idx = -1
            for i in range(len(item)):
                if item[i] == ":":
                    mid_idx = i
            classes_dict[item[:mid_idx]] = item[mid_idx + 1:]
    return classes_dict
os.makedirs('./ava/annotations', exist_ok=True)
transformer("./Unnamed-VIA Project13Jul2022_16h01m30s_export.csv", './ava/labelframes',
            './ava/annotations/ava_dense_proposals_train.FAIR.recall_93.9.pkl', './ava/annotations/ava_train_v2.1.csv',
            './ava/annotations/ava_dense_proposals_val.FAIR.recall_93.9.pkl', './ava/annotations/ava_val_v2.1.csv',
            './ava/annotations/ava_train_excluded_timestamps_v2.1.csv',
            './ava/annotations/ava_val_excluded_timestamps_v2.1.csv',
            './ava/annotations/ava_action_list_v2.1.pbtxt', './ava/annotations/labelmap.txt', 0.9)
print(showpkl('./ava/annotations/ava_dense_proposals_train.FAIR.recall_93.9.pkl'))
print(showcsv('./ava/annotations/ava_train_v2.1.csv'))
print(showlabelmap('./ava/annotations/labelmap.txt'))
With that, the annotations folder contains data samples in the AVA v2.1 format.
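To double-check the conversion, here is a small sketch of my own (assuming the default output paths above) that verifies every proposal key in the pkl also has ground-truth rows in the csv:
import csv
import pickle

with open('./ava/annotations/ava_dense_proposals_train.FAIR.recall_93.9.pkl', 'rb') as f:
    proposals = pickle.load(f)
with open('./ava/annotations/ava_train_v2.1.csv') as f:
    gt_keys = {f'{row[0]},{int(row[1]):04d}' for row in csv.reader(f) if row}

# An empty set here means the pkl and the csv agree
print('proposal keys without ground truth:', set(proposals) - gt_keys)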