watchtowerss committed
Commit 297d2bb
1 Parent(s): bb879e5

Update README.md

Files changed (1): README.md +42 -9
README.md CHANGED
@@ -11,13 +11,40 @@ license: mit
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
- # Track-Anything

***Track-Anything*** is a flexible and interactive tool for video object tracking and segmentation. It is developed upon [Segment Anything](https://github.com/facebookresearch/segment-anything) and can track and segment anything specified via user clicks only. During tracking, users can flexibly change the objects they want to track, or correct the region of interest if any ambiguities arise. These characteristics make ***Track-Anything*** suitable for:
- Video object tracking and segmentation with shot changes.
- - Data annotation for video object tracking and segmentation.
- Object-centric downstream video tasks, such as video inpainting and editing.

## Demo

https://user-images.githubusercontent.com/28050374/232842703-8395af24-b13e-4b8e-aafb-e94b61e6c449.MP4
@@ -44,17 +71,23 @@ cd Track-Anything
# Install dependencies:
pip install -r requirements.txt

- # Install dependencies for inpainting:
- pip install -U openmim
- mim install mmcv
-
- # Install dependencies for editing:
- pip install madgrad
-
# Run the Track-Anything gradio demo.
python app.py --device cuda:0 --sam_model_type vit_h --port 12212
```

## Acknowledgements

The project is based on [Segment Anything](https://github.com/facebookresearch/segment-anything), [XMem](https://github.com/hkchengrex/XMem), and [E2FGVI](https://github.com/MCG-NKU/E2FGVI). Thanks to the authors for their efforts.
 
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ <!-- ![](./assets/track-anything-logo.jpg) -->
+
+ <div align=center>
+ <img src="./assets/track-anything-logo.jpg"/>
+ </div>
+ <br/>
+ <div align=center>
+ <a href="https://arxiv.org/abs/2304.11968">
+ <img src="https://img.shields.io/badge/%F0%9F%93%96-Arxiv_2304.11968-red.svg?style=flat-square">
+ </a>
+ <a href="https://huggingface.co/spaces/watchtowerss/Track-Anything">
+ <img src="https://img.shields.io/badge/%F0%9F%A4%97-Open_in_Spaces-informational.svg?style=flat-square">
+ </a>
+ <a href="https://zhengfenglab.com/">
+ <img src="https://img.shields.io/badge/%F0%9F%9A%80-SUSTech_VIP_Lab-important.svg?style=flat-square">
+ </a>
+ </div>

***Track-Anything*** is a flexible and interactive tool for video object tracking and segmentation. It is developed upon [Segment Anything](https://github.com/facebookresearch/segment-anything) and can track and segment anything specified via user clicks only. During tracking, users can flexibly change the objects they want to track, or correct the region of interest if any ambiguities arise. These characteristics make ***Track-Anything*** suitable for:
- Video object tracking and segmentation with shot changes.
+ - Visualized development and data annotation for video object tracking and segmentation.
- Object-centric downstream video tasks, such as video inpainting and editing.

+ <div align=center>
+ <img src="./assets/avengers.gif"/>
+ </div>
+
+ <!-- ![avengers]() -->
+
+ ## :rocket: Updates
+ - 2023/04/25: We are delighted to introduce [Caption-Anything](https://github.com/ttengwang/Caption-Anything) :writing_hand:, an inventive project from our lab that combines the capabilities of Segment Anything, Visual Captioning, and ChatGPT.
+
+ - 2023/04/20: We deployed a [demo](https://huggingface.co/spaces/watchtowerss/Track-Anything) on Hugging Face :hugs:!
+
## Demo

https://user-images.githubusercontent.com/28050374/232842703-8395af24-b13e-4b8e-aafb-e94b61e6c449.MP4
 
# Install dependencies:
pip install -r requirements.txt

# Run the Track-Anything gradio demo.
python app.py --device cuda:0 --sam_model_type vit_h --port 12212
```
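
The launch command above passes three flags to `app.py`. The repository defines its own argument handling; purely as an illustration, a minimal `argparse` sketch covering those three flags (flag names are taken from the command above, while the defaults and the `choices` list are assumptions) might look like:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Sketch of a CLI matching the demo command shown above.

    The defaults and the set of SAM variants are assumptions for
    illustration, not the repository's actual configuration.
    """
    parser = argparse.ArgumentParser(description="Track-Anything demo (sketch)")
    parser.add_argument("--device", default="cuda:0",
                        help="torch device string, e.g. cuda:0 or cpu")
    parser.add_argument("--sam_model_type", default="vit_h",
                        choices=["vit_h", "vit_l", "vit_b"],
                        help="SAM backbone variant")
    parser.add_argument("--port", type=int, default=12212,
                        help="port for the gradio server")
    return parser

# Example: parse a CPU-only invocation instead of the cuda:0 one above.
args = build_parser().parse_args(["--device", "cpu", "--sam_model_type", "vit_b"])
print(args.device, args.sam_model_type, args.port)
```

Machines without a CUDA GPU could, under this sketch, swap `--device cuda:0` for `--device cpu` and pick a smaller SAM variant.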

+ ## Citation
+ If you find this work useful for your research or applications, please cite using this BibTeX:
+ ```bibtex
+ @misc{yang2023track,
+   title={Track Anything: Segment Anything Meets Videos},
+   author={Jinyu Yang and Mingqi Gao and Zhe Li and Shang Gao and Fangjing Wang and Feng Zheng},
+   year={2023},
+   eprint={2304.11968},
+   archivePrefix={arXiv},
+   primaryClass={cs.CV}
+ }
+ ```
+
## Acknowledgements

The project is based on [Segment Anything](https://github.com/facebookresearch/segment-anything), [XMem](https://github.com/hkchengrex/XMem), and [E2FGVI](https://github.com/MCG-NKU/E2FGVI). Thanks to the authors for their efforts.
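
The components credited above compose into a simple loop: a user click selects an object, a segmenter (SAM's role) turns the click into a mask on the first frame, and a tracker (XMem's role) propagates that mask through later frames. A toy, dependency-free sketch of that control flow — every function here is a hypothetical stand-in, not the repository's API — might look like:

```python
from typing import List, Set, Tuple

Pixel = Tuple[int, int]
Mask = Set[Pixel]

def segment_from_click(frame: List[List[int]], click: Pixel) -> Mask:
    """Stand-in for the click-to-mask segmenter: flood-fill the
    connected region of identical intensity around the clicked pixel."""
    h, w = len(frame), len(frame[0])
    target = frame[click[0]][click[1]]
    mask: Mask = set()
    stack = [click]
    while stack:
        r, c = stack.pop()
        if (r, c) in mask or not (0 <= r < h and 0 <= c < w):
            continue
        if frame[r][c] != target:
            continue
        mask.add((r, c))
        stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return mask

def propagate(mask: Mask, frame: List[List[int]]) -> Mask:
    """Stand-in for the tracker: re-segment the next frame from the
    previous mask's centroid (the real tracker keeps a memory of masks)."""
    if not mask:
        return set()
    cr = round(sum(r for r, _ in mask) / len(mask))
    cc = round(sum(c for _, c in mask) / len(mask))
    return segment_from_click(frame, (cr, cc))

def track(frames: List[List[List[int]]], click: Pixel) -> List[Mask]:
    """One click on the first frame yields a mask per frame."""
    masks = [segment_from_click(frames[0], click)]
    for frame in frames[1:]:
        masks.append(propagate(masks[-1], frame))
    return masks
```

The interactive corrections described in the README would, in this picture, amount to replacing `masks[-1]` with a fresh `segment_from_click` result mid-sequence before propagation continues.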