
Out of GPU memory when tracking too many targets in a video #243

Open
102757017 opened this issue Jan 20, 2025 · 7 comments


@102757017

My GPU has only 6 GB of VRAM. I tried SAM2's multi-target video tracking and found I can track at most 4 targets before VRAM is nearly exhausted; annotating more targets runs out of VRAM and the program hangs.
Could the app support annotating in multiple passes, e.g. track only 4 targets per pass, then after the whole video is annotated track another 4 targets in the next pass, and finally merge the results for all 8 targets?

@yatengLG
Owner

Yes, that works. But you'll have to do the merging yourself; the software doesn't provide this feature.

See the json files the annotation produces: 1. each pass that annotates n targets produces jsons containing n objects; 2. merge the objects from the same-named json files of different passes into a single json.

@102757017
Author

Manually merging the json file of every frame isn't really practical. Could the software provide a script for this?

@yatengLG
Owner

> Manually merging the json file of every frame isn't really practical. Could the software provide a script for this?

No need to merge by hand, the workload would be insane. Just write a script that reads and writes the json.

@yatengLG
Owner

import json

json_file = open('ISAT_with_segment_anything/example/images/000000000113.json', 'r')
data = json.load(json_file)  # at this point it is already a dict

{'info': {'description': 'ISAT', 'folder': '/media/lg/disk2/PycharmProjects/ISAT_with_segment_anything/example/images', 'name': '000000000113.jpg', 'width': 416, 'height': 640, 'depth': 3, 'note': 'A cake is on the table.'}, 'objects': [{'category': 'person', 'group': 1, 'segmentation': [[36.0, 514.0], [35.0, 516.0], [34.0, 521.0], [36.0, 535.0], [36.0, 557.0], [38.0, 561.0], [38.0, 568.0], [40.0, 577.0], [41.0, 579.0], [49.0, 587.0], [50.0, 590.0], [51.0, 591.0], [53.0, 592.0], [59.0, 591.0], [62.0, 594.0], [63.0, 594.0], [65.0, 596.0], [65.0, 597.0], [67.0, 599.0], [68.0, 599.0], [71.0, 602.0], [72.0, 605.0], [75.0, 608.0], [78.0, 609.0], [80.0, 611.0], [81.0, 614.0], [83.0, 614.0], [86.0, 617.0], [88.0, 618.0], [94.0, 618.0], [96.0, 617.0], [97.0, 615.0], [97.0, 600.0], [96.0, 596.0], [94.0, 594.0], [94.0, 592.0], [88.0, 585.0], [87.0, 585.0], [85.0, 583.0], [82.0, 575.0], [81.0, 570.0], [81.0, 565.0], [80.0, 561.0], [68.0, 546.0], [63.0, 541.0], [59.0, 539.0], [57.0, 537.0], [55.0, 533.0], [49.0, 527.0], [48.0, 524.0], [40.0, 516.0]], 'area': 3010.5, 'layer': 1.0, 'bbox': [33.505025253169414, 513.32917960675, 97.5, 618.5], 'iscrowd': 0, 'note': 'man'}, {'category': 'person', 'group': 1, 'segmentation': [[108.0, 36.0], [98.0, 36.0], [87.0, 40.0], [81.0, 45.0], [75.0, 55.0], [71.0, 69.0], [72.0, 96.0], [77.0, 108.0], [77.0, 112.0], [62.0, 126.0], [26.0, 148.0], [16.0, 159.0], [10.0, 175.0], [6.0, 235.0], [8.0, 240.0], [6.0, 258.0], [7.0, 271.0], [16.0, 307.0], [27.0, 328.0], [30.0, 340.0], [29.0, 345.0], [33.0, 362.0], [33.0, 372.0], [40.0, 379.0], [42.0, 385.0], [39.0, 417.0], [43.0, 442.0], [36.0, 461.0], [34.0, 473.0], [35.0, 481.0], [46.0, 478.0], [47.0, 473.0], [43.0, 465.0], [45.0, 461.0], [88.0, 448.0], [89.0, 435.0], [87.0, 413.0], [90.0, 410.0], [139.0, 398.0], [142.0, 394.0], [147.0, 361.0], [147.0, 331.0], [144.0, 311.0], [151.0, 300.0], [154.0, 291.0], [155.0, 206.0], [150.0, 173.0], [143.0, 150.0], [136.0, 140.0], [125.0, 133.0], [125.0, 130.0], 
[133.0, 117.0], [135.0, 108.0], [135.0, 89.0], [138.0, 78.0], [137.0, 57.0], [127.0, 45.0]], 'area': 43511.5, 'layer': 2.0, 'bbox': [5.501107421071696, 35.5, 155.49996540151537, 481.48238191061887], 'iscrowd': 0, 'note': 'man'}, ...]}

The objects field is a list; pull its contents out of each file, merge them, and save them into another json.
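A minimal sketch of that merge for two same-named annotation files (the function name and paths are hypothetical, not part of the software):

```python
import json

def merge_objects(path_a, path_b, out_path):
    # Load both annotation files for the same frame.
    with open(path_a, 'r') as f:
        data_a = json.load(f)
    with open(path_b, 'r') as f:
        data_b = json.load(f)

    # Keep the 'info' block from the first file and
    # concatenate the 'objects' lists of both files.
    merged = {
        'info': data_a['info'],
        'objects': data_a.get('objects', []) + data_b.get('objects', []),
    }

    with open(out_path, 'w') as f:
        json.dump(merged, f, indent=4)
```

This keeps the group/layer numbering as-is, which is what the next question is about.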

@102757017
Author

When merging, won't the group and layer values below collide (since both json files count from 1)? Should they be renumbered sequentially?
[{'area': 1308.0,
'bbox': [61.5, 428.29289321881345, 100.5, 472.5],
'category': 'BigScrew',
'group': 1,
'iscrowd': False,
'layer': 1.0,
'note': '',
'segmentation': [[78.0, 429.0],

@yatengLG
Owner

> When merging, won't the group and layer values below collide (since both json files count from 1)? Should they be renumbered sequentially?

You're right, I forgot about that T.T. They do need to be renumbered sequentially, otherwise the group values collide.

group distinguishes instances; if you only do semantic segmentation you can ignore this field.
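A quick sketch of that sequential renumbering (a hypothetical helper, not part of the software): group values in the second list are shifted past the largest group in the first, so instance ids stay unique after merging.

```python
def offset_groups(objects_a, objects_b):
    # Largest group id already used by the first file's objects.
    max_group = max((obj['group'] for obj in objects_a), default=0)

    # Shift every group in the second list past it, keeping
    # layer consistent with the new group value.
    for obj in objects_b:
        obj['group'] += max_group
        obj['layer'] = float(obj['group'])

    return objects_a + objects_b
```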

@102757017
Author

I wrote a script to merge them, but for tracking and annotating a pile of screws the workflow is still fairly cumbersome.
I'd still hope the software could let you pick which objects a video tracks, instead of tracking all annotated objects together; after all, not everyone has a 4090.
```python
import os
import json
from collections import defaultdict

def renumber_groups_layers(objects):
    # Re-sort (to keep a stable order)
    objects.sort(key=lambda obj: (obj['category'], obj['group']))

    # Renumber group and layer, counting up from 1
    for i, obj in enumerate(objects, start=1):
        obj['group'] = i
        obj['layer'] = float(i)

    return objects

def merge_same_name_json_files(root_dir, output_dir):
    # Collect the paths of same-named json files
    files_by_name = defaultdict(list)

    # Walk the root directory and all of its subdirectories
    for dirpath, _, filenames in os.walk(root_dir):
        for filename in filenames:
            if filename.endswith('.json'):
                files_by_name[filename].append(os.path.join(dirpath, filename))

    # Create the output directory (if it does not exist)
    if not os.path.exists(output_dir):
        os.makedirs(output_dir)

    # Merge each group of same-named json files
    for filename, file_paths in files_by_name.items():
        merged_objects = []

        # Take the image info from the first file
        with open(file_paths[0], 'r') as f:
            first_data = json.load(f)
            info = first_data['info']

        # Merge the objects from all same-named files
        for file_path in file_paths:
            with open(file_path, 'r') as f:
                data = json.load(f)
                merged_objects.extend(data.get('objects', []))

        # Renumber group and layer
        merged_objects = renumber_groups_layers(merged_objects)

        # Build the merged result
        result = {
            'info': info,
            'objects': merged_objects
        }

        # Write the merged data to a new json file
        output_path = os.path.join(output_dir, filename)
        with open(output_path, 'w') as f:
            json.dump(result, f, indent=4)

# argument 1: root directory holding the same-named json files
# argument 2: output directory
merge_same_name_json_files(r'E:\python\Yolo\Lable', r'E:\python\Yolo\merged Lable')
```
