Upload 11 files
- LICENSE +61 -0
- README.md +716 -0
- config.json +896 -0
- merges.txt +0 -0
- model.safetensors +3 -0
- processor_config.json +80 -0
- sam3.pt +3 -0
- special_tokens_map.json +30 -0
- tokenizer.json +0 -0
- tokenizer_config.json +33 -0
- vocab.json +0 -0
LICENSE
ADDED
@@ -0,0 +1,61 @@
SAM License
Last Updated: November 19, 2025

“Agreement” means the terms and conditions for use, reproduction, distribution and modification of the SAM Materials set forth herein.

“SAM Materials” means, collectively, Documentation and the models, software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code, and other elements of the foregoing distributed by Meta and made available under this Agreement.

“Documentation” means the specifications, manuals and documentation accompanying SAM Materials distributed by Meta.

“Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering into this Agreement on their behalf.

“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) or Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).

“Sanctions” means any economic or trade sanctions or restrictions administered or enforced by the United States (including the Office of Foreign Assets Control of the U.S. Department of the Treasury (“OFAC”), the U.S. Department of State and the U.S. Department of Commerce), the United Nations, the European Union, or the United Kingdom.

“Trade Controls” means any of the following: Sanctions and applicable export and import controls.

By using or distributing any portion or element of the SAM Materials, you agree to be bound by this Agreement.

1. License Rights and Redistribution.

a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the SAM Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the SAM Materials.

b. Redistribution and Use.

i. Distribution of SAM Materials, and any derivative works thereof, is subject to the terms of this Agreement. If you distribute or make the SAM Materials, or any derivative works thereof, available to a third party, you may only do so under the terms of this Agreement and you shall provide a copy of this Agreement with any such SAM Materials.

ii. If you submit for publication the results of research you perform on, using, or otherwise in connection with SAM Materials, you must acknowledge the use of SAM Materials in your publication.

iii. Your use of the SAM Materials must comply with applicable laws and regulations, including Trade Controls and applicable privacy and data protection laws.

iv. Your use of the SAM Materials will not involve or encourage others to reverse engineer, decompile or discover the underlying components of the SAM Materials.

v. You are not the target of Trade Controls and your use of SAM Materials must comply with Trade Controls. You agree not to use, or permit others to use, SAM Materials for any activities subject to the International Traffic in Arms Regulations (ITAR) or end uses prohibited by Trade Controls, including those related to military or warfare purposes, nuclear industries or applications, espionage, or the development or use of guns or illegal weapons.

2. User Support. Your use of the SAM Materials is done at your own discretion; Meta does not process any information nor provide any service in relation to such use. Meta is under no obligation to provide any support services for the SAM Materials. Any support provided is “as is”, “with all faults”, and without warranty of any kind.

3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE SAM MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE SAM MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE SAM MATERIALS AND ANY OUTPUT AND RESULTS.

4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY DIRECT OR INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.

5. Intellectual Property.

a. Subject to Meta’s ownership of SAM Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the SAM Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.

b. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the SAM Materials, outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the SAM Materials.

6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the SAM Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the SAM Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.

7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.

8. Modifications and Amendments. Meta may modify this Agreement from time to time, provided that the modifications are similar in spirit to the current version of the Agreement but may differ in detail to address new problems or concerns. All such changes will be effective immediately. Your continued use of the SAM Materials after any modification to this Agreement constitutes your agreement to such modification. Except as provided in this Agreement, no modification or addition to any provision of this Agreement will be binding unless it is in writing and signed by an authorized representative of both you and Meta.
README.md
ADDED
@@ -0,0 +1,716 @@
---
license: other
extra_gated_fields:
  First Name: text
  Last Name: text
  Date of birth: date_picker
  Country: country
  Affiliation: text
  Job title:
    type: select
    options:
      - Student
      - Research Graduate
      - AI researcher
      - AI developer/engineer
      - Reporter
      - Other
  geo: ip_location
  By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected, stored, processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: >-
  The information you provide will be collected, stored, processed and shared in
  accordance with the [Meta Privacy
  Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
language:
- en
pipeline_tag: mask-generation
library_name: transformers
tags:
- sam3
---

SAM 3 is a unified foundation model for promptable segmentation in images and videos. It can detect, segment, and track objects using text or visual prompts such as points, boxes, and masks. Compared to its predecessor [SAM 2](https://github.com/facebookresearch/sam2), SAM 3 introduces the ability to exhaustively segment all instances of an open-vocabulary concept specified by a short text phrase or exemplars. Unlike prior work, SAM 3 can handle a vastly larger set of open-vocabulary prompts. It achieves 75-80% of human performance on our new [SA-CO benchmark](https://github.com/facebookresearch/sam3#sa-co-dataset), which contains 270K unique concepts, over 50 times more than existing benchmarks.

[Hugging Face 🤗 app](https://huggingface.co/spaces/akhaliq/sam3)

### Basic Usage

```python
import torch

#################################### For Image ####################################
from PIL import Image
from sam3.model_builder import build_sam3_image_model
from sam3.model.sam3_image_processor import Sam3Processor

# Load the model
model = build_sam3_image_model()
processor = Sam3Processor(model)

# Load an image
image = Image.open("<YOUR_IMAGE_PATH.jpg>")
inference_state = processor.set_image(image)

# Prompt the model with text
output = processor.set_text_prompt(state=inference_state, prompt="<YOUR_TEXT_PROMPT>")

# Get the masks, bounding boxes, and scores
masks, boxes, scores = output["masks"], output["boxes"], output["scores"]

#################################### For Video ####################################

from sam3.model_builder import build_sam3_video_predictor

video_predictor = build_sam3_video_predictor()
video_path = "<YOUR_VIDEO_PATH>"  # a JPEG folder or an MP4 video file

# Start a session
response = video_predictor.handle_request(
    request=dict(
        type="start_session",
        resource_path=video_path,
    )
)
response = video_predictor.handle_request(
    request=dict(
        type="add_prompt",
        session_id=response["session_id"],
        frame_index=0,  # Arbitrary frame index
        text="<YOUR_TEXT_PROMPT>",
    )
)
output = response["outputs"]
```

The official code is publicly released in the [sam3 repo](https://github.com/facebookresearch/sam3).

## Usage with 🤗 Transformers

### SAM3 - Promptable Concept Segmentation (PCS) for Images

SAM3 performs Promptable Concept Segmentation (PCS) on images, taking text and/or image exemplars as prompts and returning segmentation masks for **all matching object instances** in the image.

#### Text-Only Prompts

```python
>>> from transformers import Sam3Processor, Sam3Model
>>> import torch
>>> from PIL import Image
>>> import requests

>>> device = "cuda" if torch.cuda.is_available() else "cpu"

>>> model = Sam3Model.from_pretrained("facebook/sam3").to(device)
>>> processor = Sam3Processor.from_pretrained("facebook/sam3")

>>> # Load image
>>> image_url = "http://images.cocodataset.org/val2017/000000077595.jpg"
>>> image = Image.open(requests.get(image_url, stream=True).raw).convert("RGB")

>>> # Segment using text prompt
>>> inputs = processor(images=image, text="ear", return_tensors="pt").to(device)

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # Post-process results
>>> results = processor.post_process_instance_segmentation(
...     outputs,
...     threshold=0.5,
...     mask_threshold=0.5,
...     target_sizes=inputs.get("original_sizes").tolist()
... )[0]

>>> print(f"Found {len(results['masks'])} objects")
>>> # Results contain:
>>> # - masks: Binary masks resized to original image size
>>> # - boxes: Bounding boxes in absolute pixel coordinates (xyxy format)
>>> # - scores: Confidence scores
```

You can display masks using a simple helper like the following:

```python
import numpy as np
import matplotlib
from PIL import Image

def overlay_masks(image, masks):
    image = image.convert("RGBA")
    masks = 255 * masks.cpu().numpy().astype(np.uint8)

    n_masks = masks.shape[0]
    cmap = matplotlib.colormaps.get_cmap("rainbow").resampled(n_masks)
    colors = [
        tuple(int(c * 255) for c in cmap(i)[:3])
        for i in range(n_masks)
    ]

    for mask, color in zip(masks, colors):
        mask = Image.fromarray(mask)
        overlay = Image.new("RGBA", image.size, color + (0,))
        alpha = mask.point(lambda v: int(v * 0.5))
        overlay.putalpha(alpha)
        image = Image.alpha_composite(image, overlay)
    return image
```

Then you can save the resulting composite image or display it in a notebook:

```python
>>> overlay_masks(image, results["masks"])
```
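
The boxes and scores can be visualized alongside the masks. A minimal sketch using PIL's `ImageDraw`, assuming the `results` dict from the text-prompt example above (boxes are absolute xyxy pixel coordinates):

```python
from PIL import ImageDraw

def draw_boxes(image, boxes, scores):
    # Draw each xyxy pixel box and its confidence score on a copy of the image
    image = image.copy().convert("RGB")
    draw = ImageDraw.Draw(image)
    for box, score in zip(boxes.tolist(), scores.tolist()):
        x1, y1, x2, y2 = box
        draw.rectangle((x1, y1, x2, y2), outline=(255, 0, 0), width=3)
        draw.text((x1, max(0, y1 - 12)), f"{score:.2f}", fill=(255, 0, 0))
    return image

draw_boxes(image, results["boxes"], results["scores"])
```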

#### Single Bounding Box Prompt

Segment objects using a bounding box:

```python
>>> # Box in xyxy format: [x1, y1, x2, y2] in pixel coordinates
>>> # Example: laptop region
>>> box_xyxy = [100, 150, 500, 450]
>>> input_boxes = [[box_xyxy]]  # [batch, num_boxes, 4]
>>> input_boxes_labels = [[1]]  # 1 = positive box

>>> inputs = processor(
...     images=image,
...     input_boxes=input_boxes,
...     input_boxes_labels=input_boxes_labels,
...     return_tensors="pt"
... ).to(device)

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # Post-process results
>>> results = processor.post_process_instance_segmentation(
...     outputs,
...     threshold=0.5,
...     mask_threshold=0.5,
...     target_sizes=inputs.get("original_sizes").tolist()
... )[0]
```

#### Multiple Box Prompts (Positive and Negative)

Use multiple boxes with positive and negative labels to refine the concept:

```python
>>> # Load kitchen image
>>> kitchen_url = "http://images.cocodataset.org/val2017/000000136466.jpg"
>>> kitchen_image = Image.open(requests.get(kitchen_url, stream=True).raw).convert("RGB")

>>> # Define two positive boxes (e.g., dial and button on oven)
>>> # Boxes are in xyxy format [x1, y1, x2, y2] in pixel coordinates
>>> box1_xyxy = [59, 144, 76, 163]  # Dial box
>>> box2_xyxy = [87, 148, 104, 159]  # Button box
>>> input_boxes = [[box1_xyxy, box2_xyxy]]
>>> input_boxes_labels = [[1, 1]]  # Both positive

>>> inputs = processor(
...     images=kitchen_image,
...     input_boxes=input_boxes,
...     input_boxes_labels=input_boxes_labels,
...     return_tensors="pt"
... ).to(device)

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # Post-process results
>>> results = processor.post_process_instance_segmentation(
...     outputs,
...     threshold=0.5,
...     mask_threshold=0.5,
...     target_sizes=inputs.get("original_sizes").tolist()
... )[0]
>>> overlay_masks(kitchen_image, results["masks"])
```

#### Combined Prompts (Text + Negative Box)

Use text prompts with negative visual prompts to refine the concept:

```python
>>> # Segment "handle" but exclude the oven handle using a negative box
>>> text = "handle"
>>> # Negative box covering oven handle area (xyxy): [40, 183, 318, 204]
>>> oven_handle_box = [40, 183, 318, 204]
>>> input_boxes = [[oven_handle_box]]

>>> inputs = processor(
...     images=kitchen_image,
...     text=text,
...     input_boxes=input_boxes,
...     input_boxes_labels=[[0]],  # 0 = negative (exclude this region)
...     return_tensors="pt"
... ).to(device)

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # Post-process results
>>> results = processor.post_process_instance_segmentation(
...     outputs,
...     threshold=0.5,
...     mask_threshold=0.5,
...     target_sizes=inputs.get("original_sizes").tolist()
... )[0]
>>> # This will segment pot handles but exclude the oven handle
```

#### Batched Inference with Text Prompts

Process multiple images with different text prompts in a single batch:

```python
>>> cat_url = "http://images.cocodataset.org/val2017/000000077595.jpg"
>>> kitchen_url = "http://images.cocodataset.org/val2017/000000136466.jpg"
>>> images = [
...     Image.open(requests.get(cat_url, stream=True).raw).convert("RGB"),
...     Image.open(requests.get(kitchen_url, stream=True).raw).convert("RGB")
... ]

>>> text_prompts = ["ear", "dial"]

>>> inputs = processor(images=images, text=text_prompts, return_tensors="pt").to(device)

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # Post-process results for both images
>>> results = processor.post_process_instance_segmentation(
...     outputs,
...     threshold=0.5,
...     mask_threshold=0.5,
...     target_sizes=inputs.get("original_sizes").tolist()
... )

>>> print(f"Image 1: {len(results[0]['masks'])} objects found")
>>> print(f"Image 2: {len(results[1]['masks'])} objects found")
```

#### Batched Mixed Prompts

Use different prompt types for different images in the same batch:

```python
>>> # Image 1: text prompt "laptop"
>>> # Image 2: visual prompt (dial box)
>>> box2_xyxy = [59, 144, 76, 163]

>>> inputs = processor(
...     images=images,
...     text=["laptop", None],  # Only first image has text
...     input_boxes=[None, [box2_xyxy]],  # Only second image has a box
...     input_boxes_labels=[None, [1]],  # Positive box for second image
...     return_tensors="pt"
... ).to(device)

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # Post-process results for both images
>>> results = processor.post_process_instance_segmentation(
...     outputs,
...     threshold=0.5,
...     mask_threshold=0.5,
...     target_sizes=inputs.get("original_sizes").tolist()
... )
>>> # Both images are processed in a single forward pass
```

#### Semantic Segmentation Output

SAM3 also provides a semantic segmentation map alongside the instance masks:

```python
>>> inputs = processor(images=image, text="ear", return_tensors="pt").to(device)

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # Instance segmentation masks
>>> instance_masks = torch.sigmoid(outputs.pred_masks)  # [batch, num_queries, H, W]

>>> # Semantic segmentation (single channel)
>>> semantic_seg = outputs.semantic_seg  # [batch, 1, H, W]

>>> print(f"Instance masks: {instance_masks.shape}")
>>> print(f"Semantic segmentation: {semantic_seg.shape}")
```
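
To turn the semantic map into a binary mask at the original image resolution, a minimal sketch; treating `semantic_seg` as logits at a reduced internal resolution is an assumption here, so verify against your outputs:

```python
import torch
import torch.nn.functional as F
from PIL import Image

# Upsample the single-channel semantic map to the original image size and binarize it.
# Treating semantic_seg as logits is an assumption; adjust the activation/threshold
# if it is already a probability map.
h, w = image.height, image.width
sem = F.interpolate(semantic_seg.float(), size=(h, w), mode="bilinear", align_corners=False)
binary = (torch.sigmoid(sem)[0, 0] > 0.5).cpu().numpy().astype("uint8") * 255
Image.fromarray(binary).save("semantic_mask.png")
```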

### SAM3 Video - Promptable Concept Segmentation (PCS) for Videos

SAM3 Video performs Promptable Concept Segmentation (PCS) on videos, taking text as prompts and detecting and tracking **all matching object instances** across video frames.

#### Pre-loaded Video Inference

Process a video with all frames already available, using text prompts:

```python
>>> from transformers import Sam3VideoModel, Sam3VideoProcessor
>>> from accelerate import Accelerator
>>> import torch

>>> device = Accelerator().device
>>> model = Sam3VideoModel.from_pretrained("facebook/sam3").to(device, dtype=torch.bfloat16)
>>> processor = Sam3VideoProcessor.from_pretrained("facebook/sam3")

>>> # Load video frames
>>> from transformers.video_utils import load_video
>>> video_url = "https://huggingface.co/datasets/hf-internal-testing/sam2-fixtures/resolve/main/bedroom.mp4"
>>> video_frames, _ = load_video(video_url)

>>> # Initialize video inference session
>>> inference_session = processor.init_video_session(
...     video=video_frames,
...     inference_device=device,
...     processing_device="cpu",
...     video_storage_device="cpu",
...     dtype=torch.bfloat16,
... )

>>> # Add text prompt to detect and track objects
>>> text = "person"
>>> inference_session = processor.add_text_prompt(
...     inference_session=inference_session,
...     text=text,
... )

>>> # Process all frames in the video
>>> outputs_per_frame = {}
>>> for model_outputs in model.propagate_in_video_iterator(
...     inference_session=inference_session, max_frame_num_to_track=50
... ):
...     processed_outputs = processor.postprocess_outputs(inference_session, model_outputs)
...     outputs_per_frame[model_outputs.frame_idx] = processed_outputs

>>> print(f"Processed {len(outputs_per_frame)} frames")
Processed 51 frames

>>> # Access results for a specific frame
>>> frame_0_outputs = outputs_per_frame[0]
>>> print(f"Detected {len(frame_0_outputs['object_ids'])} objects")
>>> print(f"Object IDs: {frame_0_outputs['object_ids'].tolist()}")
>>> print(f"Scores: {frame_0_outputs['scores'].tolist()}")
>>> print(f"Boxes shape (XYXY format, absolute coordinates): {frame_0_outputs['boxes'].shape}")
>>> print(f"Masks shape: {frame_0_outputs['masks'].shape}")
```
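
Since each frame's outputs carry stable object IDs, you can aggregate them into simple track statistics. A minimal sketch over the `outputs_per_frame` dict built above:

```python
from collections import defaultdict

# Count, for each tracked object ID, the number of frames in which it appears
frames_seen = defaultdict(int)
for frame_idx, frame_outputs in outputs_per_frame.items():
    for obj_id in frame_outputs["object_ids"].tolist():
        frames_seen[obj_id] += 1

for obj_id, count in sorted(frames_seen.items()):
    print(f"object {obj_id}: visible in {count}/{len(outputs_per_frame)} frames")
```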

#### Streaming Video Inference

For real-time applications, the Transformers implementation of SAM3 Video supports processing video frames as they arrive:

```python
>>> # Initialize session for streaming
>>> streaming_inference_session = processor.init_video_session(
...     inference_device=device,
...     processing_device="cpu",
...     video_storage_device="cpu",
...     dtype=torch.bfloat16,
... )

>>> # Add text prompt
>>> text = "person"
>>> streaming_inference_session = processor.add_text_prompt(
...     inference_session=streaming_inference_session,
...     text=text,
... )

>>> # Process frames one by one (streaming mode)
>>> streaming_outputs_per_frame = {}
>>> for frame_idx, frame in enumerate(video_frames[:50]):  # Process first 50 frames
...     # First, process the frame using the processor
...     inputs = processor(images=frame, device=device, return_tensors="pt")
...
...     # Process frame using streaming inference - pass the processed pixel_values
...     model_outputs = model(
...         inference_session=streaming_inference_session,
...         frame=inputs.pixel_values[0],  # Provide processed frame - this enables streaming mode
...         reverse=False,
...     )
...
...     # Post-process outputs with original_sizes for proper resolution handling
...     processed_outputs = processor.postprocess_outputs(
...         streaming_inference_session,
...         model_outputs,
...         original_sizes=inputs.original_sizes,  # Required for streaming inference
...     )
...     streaming_outputs_per_frame[frame_idx] = processed_outputs
...
...     if (frame_idx + 1) % 10 == 0:
...         print(f"Processed {frame_idx + 1} frames...")

>>> print(f"✓ Streaming inference complete! Processed {len(streaming_outputs_per_frame)} frames")
✓ Streaming inference complete! Processed 50 frames

>>> # Access results
>>> frame_0_outputs = streaming_outputs_per_frame[0]
>>> print(f"Detected {len(frame_0_outputs['object_ids'])} objects in first frame")
>>> print(f"Boxes are in XYXY format (absolute pixel coordinates): {frame_0_outputs['boxes'].shape}")
>>> print(f"Masks are at original video resolution: {frame_0_outputs['masks'].shape}")
```

<div class="warning">
⚠️ **Note on Streaming Inference Quality**: Streaming inference disables hotstart heuristics that remove unmatched and duplicate objects, as these require access to future frames to make informed decisions. This may result in more false positive detections and duplicate object tracks compared to pre-loaded video inference. For best results, use pre-loaded video inference when all frames are available.
</div>

### SAM3 Tracker - Promptable Visual Segmentation (PVS) for Images

Sam3Tracker performs Promptable Visual Segmentation (PVS) on images, taking interactive visual prompts (points, boxes, masks) to segment a **specific object instance** per prompt. It is an updated version of SAM2 that maintains the same API while providing improved performance, making it a drop-in replacement for SAM2 workflows.

#### Automatic Mask Generation with Pipeline

```python
>>> from transformers import pipeline

>>> generator = pipeline("mask-generation", model="facebook/sam3", device=0)
>>> image_url = "https://huggingface.co/datasets/hf-internal-testing/sam2-fixtures/resolve/main/truck.jpg"
>>> outputs = generator(image_url, points_per_batch=64)

>>> len(outputs["masks"])  # Number of masks generated
```
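
The pipeline returns the generated masks together with their scores; a minimal sketch that ranks them, assuming the usual SAM-family pipeline output layout (`masks` and `scores` keys):

```python
import numpy as np

# Rank the generated masks by predicted quality score and report their areas.
# Assumes each entry of outputs["masks"] is array-like and outputs["scores"] is
# iterable, as with SAM-family mask-generation pipelines; verify for your version.
pairs = sorted(zip(outputs["masks"], outputs["scores"]), key=lambda ms: float(ms[1]), reverse=True)
for i, (mask, score) in enumerate(pairs[:5]):
    area = int(np.asarray(mask).sum())
    print(f"mask {i}: score={float(score):.3f}, area={area} px")
```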

#### Basic Image Segmentation

##### Single Point Click

```python
>>> from transformers import Sam3TrackerProcessor, Sam3TrackerModel
>>> from accelerate import Accelerator
>>> import torch
>>> from PIL import Image
>>> import requests

>>> device = Accelerator().device

>>> model = Sam3TrackerModel.from_pretrained("facebook/sam3").to(device)
>>> processor = Sam3TrackerProcessor.from_pretrained("facebook/sam3")

>>> image_url = "https://huggingface.co/datasets/hf-internal-testing/sam2-fixtures/resolve/main/truck.jpg"
>>> raw_image = Image.open(requests.get(image_url, stream=True).raw).convert("RGB")

>>> input_points = [[[[500, 375]]]]  # Single point click, 4 dimensions (image_dim, object_dim, point_per_object_dim, coordinates)
>>> input_labels = [[[1]]]  # 1 for positive click, 0 for negative click, 3 dimensions (image_dim, object_dim, point_label)

>>> inputs = processor(images=raw_image, input_points=input_points, input_labels=input_labels, return_tensors="pt").to(model.device)

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> masks = processor.post_process_masks(outputs.pred_masks.cpu(), inputs["original_sizes"])[0]

>>> # The model outputs multiple mask predictions ranked by quality score
>>> print(f"Generated {masks.shape[1]} masks with shape {masks.shape}")
```
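
With multimask output, you typically keep the highest-scoring prediction. A minimal sketch; the `iou_scores` output name follows the SAM2 Transformers API and is an assumption for Sam3Tracker:

```python
# Pick the mask with the highest predicted quality score.
# `outputs.iou_scores` mirrors the SAM2 Transformers API and is an assumption here;
# confirm the field name for Sam3Tracker before relying on it.
best_idx = outputs.iou_scores[0, 0].argmax().item()
best_mask = masks[0, best_idx]
print(f"Best mask index: {best_idx}, shape: {best_mask.shape}")
```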

##### Multiple Points for Refinement

```python
>>> # Add extra points to refine the mask
>>> input_points = [[[[500, 375], [1125, 625]]]]  # Multiple points for refinement
>>> input_labels = [[[1, 1]]]  # Both positive clicks

>>> inputs = processor(images=raw_image, input_points=input_points, input_labels=input_labels, return_tensors="pt").to(device)

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> masks = processor.post_process_masks(outputs.pred_masks.cpu(), inputs["original_sizes"])[0]
```

##### Bounding Box Input

```python
>>> # Define bounding box as [x_min, y_min, x_max, y_max]
>>> input_boxes = [[[75, 275, 1725, 850]]]

>>> inputs = processor(images=raw_image, input_boxes=input_boxes, return_tensors="pt").to(device)

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> masks = processor.post_process_masks(outputs.pred_masks.cpu(), inputs["original_sizes"])[0]
```

##### Multiple Objects Segmentation

```python
>>> # Define points for two different objects
>>> input_points = [[[[500, 375]], [[650, 750]]]]  # Points for two objects in the same image
>>> input_labels = [[[1], [1]]]  # Positive clicks for both objects

>>> inputs = processor(images=raw_image, input_points=input_points, input_labels=input_labels, return_tensors="pt").to(model.device)

>>> with torch.no_grad():
...     outputs = model(**inputs, multimask_output=False)

>>> # Each object gets its own mask
>>> masks = processor.post_process_masks(outputs.pred_masks.cpu(), inputs["original_sizes"])[0]
>>> print(f"Generated masks for {masks.shape[0]} objects")
Generated masks for 2 objects
```

#### Batch Inference

```python
>>> # Load multiple images
>>> image_urls = [
...     "https://huggingface.co/datasets/hf-internal-testing/sam2-fixtures/resolve/main/truck.jpg",
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/dog-sam.png"
... ]
>>> raw_images = [Image.open(requests.get(url, stream=True).raw).convert("RGB") for url in image_urls]

>>> # Single point per image
>>> input_points = [[[[500, 375]]], [[[770, 200]]]]  # One point for each image
>>> input_labels = [[[1]], [[1]]]  # Positive clicks for both images

>>> inputs = processor(images=raw_images, input_points=input_points, input_labels=input_labels, return_tensors="pt").to(model.device)

>>> with torch.no_grad():
...     outputs = model(**inputs, multimask_output=False)

>>> # Post-process masks for each image
>>> all_masks = processor.post_process_masks(outputs.pred_masks.cpu(), inputs["original_sizes"])
>>> print(f"Processed {len(all_masks)} images, each with {all_masks[0].shape[0]} objects")
```
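
To inspect the batched results, a minimal sketch that reports per-object mask areas; the exact mask tensor layout is an assumption noted in the comments:

```python
# Report per-object mask areas for each image in the batch.
# Assumes each entry of all_masks has shape [num_objects, 1, H, W] when
# multimask_output=False; check your tensor shapes before relying on this.
for img_idx, masks_i in enumerate(all_masks):
    for obj_idx, mask in enumerate(masks_i):
        print(f"image {img_idx}, object {obj_idx}: {int(mask.sum())} mask pixels")
```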

### SAM3 Tracker Video - Promptable Visual Segmentation (PVS) for Videos

Sam3TrackerVideo performs Promptable Visual Segmentation (PVS) on videos, taking interactive visual prompts (points, boxes, masks) to track a **specific object instance** per prompt across video frames. It is an updated version of SAM2 Video that maintains the same API while providing improved performance, making it a drop-in replacement for SAM2 Video workflows.

#### Basic Video Tracking

```python
>>> from transformers import Sam3TrackerVideoModel, Sam3TrackerVideoProcessor
>>> from accelerate import Accelerator
>>> import torch

>>> device = Accelerator().device
>>> model = Sam3TrackerVideoModel.from_pretrained("facebook/sam3").to(device, dtype=torch.bfloat16)
>>> processor = Sam3TrackerVideoProcessor.from_pretrained("facebook/sam3")

>>> # Load video frames
>>> from transformers.video_utils import load_video
>>> video_url = "https://huggingface.co/datasets/hf-internal-testing/sam2-fixtures/resolve/main/bedroom.mp4"
>>> video_frames, _ = load_video(video_url)

>>> # Initialize video inference session
>>> inference_session = processor.init_video_session(
...     video=video_frames,
...     inference_device=device,
...     dtype=torch.bfloat16,
... )

>>> # Add click on first frame to select object
>>> ann_frame_idx = 0
>>> ann_obj_id = 1
>>> points = [[[[210, 350]]]]
>>> labels = [[[1]]]

>>> processor.add_inputs_to_inference_session(
...     inference_session=inference_session,
...     frame_idx=ann_frame_idx,
...     obj_ids=ann_obj_id,
...     input_points=points,
...     input_labels=labels,
... )

>>> # Segment the object on the first frame (optional, you can also propagate the masks through the video directly)
>>> outputs = model(
...     inference_session=inference_session,
...     frame_idx=ann_frame_idx,
... )
>>> video_res_masks = processor.post_process_masks(
...     [outputs.pred_masks], original_sizes=[[inference_session.video_height, inference_session.video_width]], binarize=False
... )[0]
>>> print(f"Segmentation shape: {video_res_masks.shape}")
Segmentation shape: torch.Size([1, 1, 480, 854])

>>> # Propagate through the entire video
>>> video_segments = {}
>>> for sam3_tracker_video_output in model.propagate_in_video_iterator(inference_session):
...     video_res_masks = processor.post_process_masks(
...         [sam3_tracker_video_output.pred_masks], original_sizes=[[inference_session.video_height, inference_session.video_width]], binarize=False
...     )[0]
...     video_segments[sam3_tracker_video_output.frame_idx] = video_res_masks

>>> print(f"Tracked object through {len(video_segments)} frames")
Tracked object through 180 frames
```
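
One way to persist the track is to write each frame's mask out as a PNG. A minimal sketch over the `video_segments` dict above (single object, hence index `[0, 0]`):

```python
import numpy as np
from PIL import Image

# Binarize the raw mask outputs (binarize=False above) and save one PNG per frame.
# Thresholding at 0.0 assumes the masks are logits; adjust if they are already binary.
for frame_idx, masks in video_segments.items():
    mask = (masks[0, 0] > 0.0).cpu().numpy().astype(np.uint8) * 255
    Image.fromarray(mask).save(f"mask_{frame_idx:04d}.png")
```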

#### Multi-Object Video Tracking

Track multiple objects simultaneously across video frames:

```python
>>> # Reset for new tracking session
>>> inference_session.reset_inference_session()

>>> # Add multiple objects on the first frame
>>> ann_frame_idx = 0
>>> obj_ids = [2, 3]
>>> input_points = [[[[200, 300]], [[400, 150]]]]  # Points for two objects (batched)
>>> input_labels = [[[1], [1]]]

>>> processor.add_inputs_to_inference_session(
...     inference_session=inference_session,
...     frame_idx=ann_frame_idx,
...     obj_ids=obj_ids,
...     input_points=input_points,
...     input_labels=input_labels,
... )

>>> # Get masks for both objects on the first frame (optional, you can also propagate the masks through the video directly)
>>> outputs = model(
...     inference_session=inference_session,
...     frame_idx=ann_frame_idx,
... )

>>> # Propagate both objects through the video
>>> video_segments = {}
>>> for sam3_tracker_video_output in model.propagate_in_video_iterator(inference_session):
...     video_res_masks = processor.post_process_masks(
...         [sam3_tracker_video_output.pred_masks], original_sizes=[[inference_session.video_height, inference_session.video_width]], binarize=False
...     )[0]
...     video_segments[sam3_tracker_video_output.frame_idx] = {
...         obj_id: video_res_masks[i]
...         for i, obj_id in enumerate(inference_session.obj_ids)
...     }

>>> print(f"Tracked {len(inference_session.obj_ids)} objects through {len(video_segments)} frames")
Tracked 2 objects through 180 frames
```

#### Streaming Video Inference

For real-time applications, Sam3TrackerVideo supports processing video frames as they arrive:

```python
>>> # Initialize session for streaming
>>> inference_session = processor.init_video_session(
...     inference_device=device,
...     dtype=torch.bfloat16,
... )

>>> # Process frames one by one
>>> for frame_idx, frame in enumerate(video_frames[:10]):  # Process first 10 frames
...     inputs = processor(images=frame, device=device, return_tensors="pt")
...
...     if frame_idx == 0:
...         # Add point input on first frame
...         processor.add_inputs_to_inference_session(
...             inference_session=inference_session,
...             frame_idx=0,
...             obj_ids=1,
...             input_points=[[[[210, 350], [250, 220]]]],
...             input_labels=[[[1, 1]]],
...             original_size=inputs.original_sizes[0],  # must be provided for streaming video inference
...         )
...
...     # Process current frame
...     sam3_tracker_video_output = model(inference_session=inference_session, frame=inputs.pixel_values[0])
...
...     video_res_masks = processor.post_process_masks(
...         [sam3_tracker_video_output.pred_masks], original_sizes=inputs.original_sizes, binarize=False
...     )[0]
...     print(f"Frame {frame_idx}: mask shape {video_res_masks.shape}")
```
config.json
ADDED
@@ -0,0 +1,896 @@
{
  "architectures": [
    "Sam3VideoModel"
  ],
  "assoc_iou_thresh": 0.1,
  "decrease_trk_keep_alive_for_empty_masklets": false,
  "det_nms_thresh": 0.1,
  "detector_config": {
    "detr_decoder_config": {
      "_name_or_path": "",
      "add_cross_attention": false,
      "architectures": null,
      "bad_words_ids": null,
      "begin_suppress_tokens": null,
      "bos_token_id": null,
      "box_rpb_mode": "log",
      "chunk_size_feed_forward": 0,
      "cross_attention_hidden_size": null,
      "decoder_start_token_id": null,
      "diversity_penalty": 0.0,
      "do_sample": false,
      "dropout": 0.1,
      "dtype": null,
      "early_stopping": false,
      "encoder_no_repeat_ngram_size": 0,
      "eos_token_id": null,
      "exponential_decay_length_penalty": null,
      "finetuning_task": null,
      "forced_bos_token_id": null,
      "forced_eos_token_id": null,
      "hidden_act": "relu",
      "hidden_dropout": 0.0,
      "hidden_size": 256,
      "id2label": {
        "0": "LABEL_0",
        "1": "LABEL_1"
      },
      "initializer_range": 0.02,
      "intermediate_size": 2048,
      "is_decoder": false,
      "is_encoder_decoder": false,
      "label2id": {
        "LABEL_0": 0,
        "LABEL_1": 1
      },
      "layer_norm_eps": 1e-06,
      "length_penalty": 1.0,
      "max_length": 20,
      "min_length": 0,
      "model_type": "sam3_detr_decoder",
      "no_repeat_ngram_size": 0,
      "num_attention_heads": 8,
      "num_beam_groups": 1,
      "num_beams": 1,
      "num_layers": 6,
      "num_queries": 200,
      "num_return_sequences": 1,
      "output_attentions": false,
      "output_hidden_states": false,
      "output_scores": false,
      "pad_token_id": null,
      "prefix": null,
      "problem_type": null,
      "remove_invalid_values": false,
      "repetition_penalty": 1.0,
      "return_dict": true,
      "return_dict_in_generate": false,
      "sep_token_id": null,
      "suppress_tokens": null,
      "task_specific_params": null,
      "temperature": 1.0,
      "tie_encoder_decoder": false,
      "tie_word_embeddings": true,
      "tokenizer_class": null,
      "top_k": 50,
      "top_p": 1.0,
      "typical_p": 1.0,
      "use_presence_token": true
    },
    "detr_encoder_config": {
      "_name_or_path": "",
      "add_cross_attention": false,
      "architectures": null,
      "bad_words_ids": null,
      "begin_suppress_tokens": null,
      "bos_token_id": null,
      "chunk_size_feed_forward": 0,
      "cross_attention_hidden_size": null,
      "decoder_start_token_id": null,
      "diversity_penalty": 0.0,
      "do_sample": false,
      "dropout": 0.1,
      "dtype": null,
      "early_stopping": false,
      "encoder_no_repeat_ngram_size": 0,
      "eos_token_id": null,
      "exponential_decay_length_penalty": null,
      "finetuning_task": null,
      "forced_bos_token_id": null,
      "forced_eos_token_id": null,
      "hidden_act": "relu",
      "hidden_dropout": 0.0,
      "hidden_size": 256,
      "id2label": {
        "0": "LABEL_0",
        "1": "LABEL_1"
      },
      "initializer_range": 0.02,
      "intermediate_size": 2048,
      "is_decoder": false,
      "is_encoder_decoder": false,
      "label2id": {
        "LABEL_0": 0,
        "LABEL_1": 1
      },
      "layer_norm_eps": 1e-06,
      "length_penalty": 1.0,
      "max_length": 20,
      "min_length": 0,
      "model_type": "sam3_detr_encoder",
      "no_repeat_ngram_size": 0,
      "num_attention_heads": 8,
      "num_beam_groups": 1,
      "num_beams": 1,
      "num_layers": 6,
      "num_return_sequences": 1,
      "output_attentions": false,
      "output_hidden_states": false,
      "output_scores": false,
      "pad_token_id": null,
      "prefix": null,
      "problem_type": null,
      "remove_invalid_values": false,
      "repetition_penalty": 1.0,
      "return_dict": true,
      "return_dict_in_generate": false,
      "sep_token_id": null,
      "suppress_tokens": null,
      "task_specific_params": null,
      "temperature": 1.0,
      "tie_encoder_decoder": false,
      "tie_word_embeddings": true,
      "tokenizer_class": null,
      "top_k": 50,
      "top_p": 1.0,
      "typical_p": 1.0
    },
    "geometry_encoder_config": {
      "_name_or_path": "",
      "add_cross_attention": false,
      "architectures": null,
      "bad_words_ids": null,
      "begin_suppress_tokens": null,
      "bos_token_id": null,
      "chunk_size_feed_forward": 0,
      "cross_attention_hidden_size": null,
      "decoder_start_token_id": null,
      "diversity_penalty": 0.0,
      "do_sample": false,
      "dropout": 0.1,
      "dtype": null,
      "early_stopping": false,
      "encoder_no_repeat_ngram_size": 0,
      "eos_token_id": null,
      "exponential_decay_length_penalty": null,
      "finetuning_task": null,
      "forced_bos_token_id": null,
      "forced_eos_token_id": null,
      "hidden_act": "relu",
      "hidden_dropout": 0.0,
      "hidden_size": 256,
      "id2label": {
        "0": "LABEL_0",
        "1": "LABEL_1"
      },
      "initializer_range": 0.02,
      "intermediate_size": 2048,
      "is_decoder": false,
      "is_encoder_decoder": false,
      "label2id": {
        "LABEL_0": 0,
        "LABEL_1": 1
      },
      "layer_norm_eps": 1e-06,
      "length_penalty": 1.0,
      "max_length": 20,
      "min_length": 0,
      "model_type": "sam3_geometry_encoder",
      "no_repeat_ngram_size": 0,
      "num_attention_heads": 8,
      "num_beam_groups": 1,
      "num_beams": 1,
      "num_layers": 3,
      "num_return_sequences": 1,
      "output_attentions": false,
      "output_hidden_states": false,
      "output_scores": false,
      "pad_token_id": null,
      "prefix": null,
      "problem_type": null,
      "remove_invalid_values": false,
      "repetition_penalty": 1.0,
      "return_dict": true,
      "return_dict_in_generate": false,
      "roi_size": 7,
      "sep_token_id": null,
      "suppress_tokens": null,
      "task_specific_params": null,
      "temperature": 1.0,
      "tie_encoder_decoder": false,
      "tie_word_embeddings": true,
      "tokenizer_class": null,
      "top_k": 50,
      "top_p": 1.0,
      "typical_p": 1.0
    },
    "initializer_range": 0.02,
    "mask_decoder_config": {
      "_name_or_path": "",
      "add_cross_attention": false,
      "architectures": null,
      "bad_words_ids": null,
      "begin_suppress_tokens": null,
      "bos_token_id": null,
      "chunk_size_feed_forward": 0,
      "cross_attention_hidden_size": null,
      "decoder_start_token_id": null,
      "diversity_penalty": 0.0,
      "do_sample": false,
      "dropout": 0.0,
      "dtype": null,
      "early_stopping": false,
      "encoder_no_repeat_ngram_size": 0,
      "eos_token_id": null,
      "exponential_decay_length_penalty": null,
      "finetuning_task": null,
      "forced_bos_token_id": null,
      "forced_eos_token_id": null,
      "hidden_size": 256,
      "id2label": {
        "0": "LABEL_0",
        "1": "LABEL_1"
      },
      "initializer_range": 0.02,
      "is_decoder": false,
      "is_encoder_decoder": false,
      "label2id": {
        "LABEL_0": 0,
        "LABEL_1": 1
      },
      "layer_norm_eps": 1e-06,
|
| 252 |
+
"length_penalty": 1.0,
|
| 253 |
+
"max_length": 20,
|
| 254 |
+
"min_length": 0,
|
| 255 |
+
"model_type": "sam3_mask_decoder",
|
| 256 |
+
"no_repeat_ngram_size": 0,
|
| 257 |
+
"num_attention_heads": 8,
|
| 258 |
+
"num_beam_groups": 1,
|
| 259 |
+
"num_beams": 1,
|
| 260 |
+
"num_return_sequences": 1,
|
| 261 |
+
"num_upsampling_stages": 3,
|
| 262 |
+
"output_attentions": false,
|
| 263 |
+
"output_hidden_states": false,
|
| 264 |
+
"output_scores": false,
|
| 265 |
+
"pad_token_id": null,
|
| 266 |
+
"prefix": null,
|
| 267 |
+
"problem_type": null,
|
| 268 |
+
"remove_invalid_values": false,
|
| 269 |
+
"repetition_penalty": 1.0,
|
| 270 |
+
"return_dict": true,
|
| 271 |
+
"return_dict_in_generate": false,
|
| 272 |
+
"sep_token_id": null,
|
| 273 |
+
"suppress_tokens": null,
|
| 274 |
+
"task_specific_params": null,
|
| 275 |
+
"temperature": 1.0,
|
| 276 |
+
"tie_encoder_decoder": false,
|
| 277 |
+
"tie_word_embeddings": true,
|
| 278 |
+
"tokenizer_class": null,
|
| 279 |
+
"top_k": 50,
|
| 280 |
+
"top_p": 1.0,
|
| 281 |
+
"typical_p": 1.0
|
| 282 |
+
},
|
| 283 |
+
"model_type": "sam3",
|
| 284 |
+
"text_config": {
|
| 285 |
+
"_name_or_path": "",
|
| 286 |
+
"add_cross_attention": false,
|
| 287 |
+
"architectures": null,
|
| 288 |
+
"attention_dropout": 0.0,
|
| 289 |
+
"bad_words_ids": null,
|
| 290 |
+
"begin_suppress_tokens": null,
|
| 291 |
+
"bos_token_id": 49406,
|
| 292 |
+
"chunk_size_feed_forward": 0,
|
| 293 |
+
"cross_attention_hidden_size": null,
|
| 294 |
+
"decoder_start_token_id": null,
|
| 295 |
+
"diversity_penalty": 0.0,
|
| 296 |
+
"do_sample": false,
|
| 297 |
+
"dtype": null,
|
| 298 |
+
"early_stopping": false,
|
| 299 |
+
"encoder_no_repeat_ngram_size": 0,
|
| 300 |
+
"eos_token_id": 49407,
|
| 301 |
+
"exponential_decay_length_penalty": null,
|
| 302 |
+
"finetuning_task": null,
|
| 303 |
+
"forced_bos_token_id": null,
|
| 304 |
+
"forced_eos_token_id": null,
|
| 305 |
+
"hidden_act": "gelu",
|
| 306 |
+
"hidden_size": 1024,
|
| 307 |
+
"id2label": {
|
| 308 |
+
"0": "LABEL_0",
|
| 309 |
+
"1": "LABEL_1"
|
| 310 |
+
},
|
| 311 |
+
"initializer_factor": 1.0,
|
| 312 |
+
"initializer_range": 0.02,
|
| 313 |
+
"intermediate_size": 4096,
|
| 314 |
+
"is_decoder": false,
|
| 315 |
+
"is_encoder_decoder": false,
|
| 316 |
+
"label2id": {
|
| 317 |
+
"LABEL_0": 0,
|
| 318 |
+
"LABEL_1": 1
|
| 319 |
+
},
|
| 320 |
+
"layer_norm_eps": 1e-05,
|
| 321 |
+
"length_penalty": 1.0,
|
| 322 |
+
"max_length": 20,
|
| 323 |
+
"max_position_embeddings": 32,
|
| 324 |
+
"min_length": 0,
|
| 325 |
+
"model_type": "clip_text_model",
|
| 326 |
+
"no_repeat_ngram_size": 0,
|
| 327 |
+
"num_attention_heads": 16,
|
| 328 |
+
"num_beam_groups": 1,
|
| 329 |
+
"num_beams": 1,
|
| 330 |
+
"num_hidden_layers": 24,
|
| 331 |
+
"num_return_sequences": 1,
|
| 332 |
+
"output_attentions": false,
|
| 333 |
+
"output_hidden_states": false,
|
| 334 |
+
"output_scores": false,
|
| 335 |
+
"pad_token_id": 1,
|
| 336 |
+
"prefix": null,
|
| 337 |
+
"problem_type": null,
|
| 338 |
+
"projection_dim": 512,
|
| 339 |
+
"remove_invalid_values": false,
|
| 340 |
+
"repetition_penalty": 1.0,
|
| 341 |
+
"return_dict": true,
|
| 342 |
+
"return_dict_in_generate": false,
|
| 343 |
+
"sep_token_id": null,
|
| 344 |
+
"suppress_tokens": null,
|
| 345 |
+
"task_specific_params": null,
|
| 346 |
+
"temperature": 1.0,
|
| 347 |
+
"tie_encoder_decoder": false,
|
| 348 |
+
"tie_word_embeddings": true,
|
| 349 |
+
"tokenizer_class": null,
|
| 350 |
+
"top_k": 50,
|
| 351 |
+
"top_p": 1.0,
|
| 352 |
+
"typical_p": 1.0,
|
| 353 |
+
"vocab_size": 49408
|
| 354 |
+
},
|
| 355 |
+
"vision_config": {
|
| 356 |
+
"_name_or_path": "",
|
| 357 |
+
"add_cross_attention": false,
|
| 358 |
+
"architectures": null,
|
| 359 |
+
"backbone_config": {
|
| 360 |
+
"_name_or_path": "",
|
| 361 |
+
"add_cross_attention": false,
|
| 362 |
+
"architectures": null,
|
| 363 |
+
"attention_dropout": 0.0,
|
| 364 |
+
"bad_words_ids": null,
|
| 365 |
+
"begin_suppress_tokens": null,
|
| 366 |
+
"bos_token_id": null,
|
| 367 |
+
"chunk_size_feed_forward": 0,
|
| 368 |
+
"cross_attention_hidden_size": null,
|
| 369 |
+
"decoder_start_token_id": null,
|
| 370 |
+
"diversity_penalty": 0.0,
|
| 371 |
+
"do_sample": false,
|
| 372 |
+
"dtype": null,
|
| 373 |
+
"early_stopping": false,
|
| 374 |
+
"encoder_no_repeat_ngram_size": 0,
|
| 375 |
+
"eos_token_id": null,
|
| 376 |
+
"exponential_decay_length_penalty": null,
|
| 377 |
+
"finetuning_task": null,
|
| 378 |
+
"forced_bos_token_id": null,
|
| 379 |
+
"forced_eos_token_id": null,
|
| 380 |
+
"global_attn_indexes": [
|
| 381 |
+
7,
|
| 382 |
+
15,
|
| 383 |
+
23,
|
| 384 |
+
31
|
| 385 |
+
],
|
| 386 |
+
"hidden_act": "gelu",
|
| 387 |
+
"hidden_dropout": 0.0,
|
| 388 |
+
"hidden_size": 1024,
|
| 389 |
+
"id2label": {
|
| 390 |
+
"0": "LABEL_0",
|
| 391 |
+
"1": "LABEL_1"
|
| 392 |
+
},
|
| 393 |
+
"image_size": 1008,
|
| 394 |
+
"initializer_range": 0.02,
|
| 395 |
+
"intermediate_size": 4736,
|
| 396 |
+
"is_decoder": false,
|
| 397 |
+
"is_encoder_decoder": false,
|
| 398 |
+
"label2id": {
|
| 399 |
+
"LABEL_0": 0,
|
| 400 |
+
"LABEL_1": 1
|
| 401 |
+
},
|
| 402 |
+
"layer_norm_eps": 1e-06,
|
| 403 |
+
"layer_scale_init_value": null,
|
| 404 |
+
"length_penalty": 1.0,
|
| 405 |
+
"max_length": 20,
|
| 406 |
+
"min_length": 0,
|
| 407 |
+
"model_type": "sam3_vit_model",
|
| 408 |
+
"no_repeat_ngram_size": 0,
|
| 409 |
+
"num_attention_heads": 16,
|
| 410 |
+
"num_beam_groups": 1,
|
| 411 |
+
"num_beams": 1,
|
| 412 |
+
"num_channels": 3,
|
| 413 |
+
"num_hidden_layers": 32,
|
| 414 |
+
"num_return_sequences": 1,
|
| 415 |
+
"output_attentions": false,
|
| 416 |
+
"output_hidden_states": false,
|
| 417 |
+
"output_scores": false,
|
| 418 |
+
"pad_token_id": null,
|
| 419 |
+
"patch_size": 14,
|
| 420 |
+
"prefix": null,
|
| 421 |
+
"pretrain_image_size": 336,
|
| 422 |
+
"problem_type": null,
|
| 423 |
+
"qkv_bias": true,
|
| 424 |
+
"remove_invalid_values": false,
|
| 425 |
+
"repetition_penalty": 1.0,
|
| 426 |
+
"return_dict": true,
|
| 427 |
+
"return_dict_in_generate": false,
|
| 428 |
+
"rope_theta": 10000.0,
|
| 429 |
+
"sep_token_id": null,
|
| 430 |
+
"suppress_tokens": null,
|
| 431 |
+
"task_specific_params": null,
|
| 432 |
+
"temperature": 1.0,
|
| 433 |
+
"tie_encoder_decoder": false,
|
| 434 |
+
"tie_word_embeddings": true,
|
| 435 |
+
"tokenizer_class": null,
|
| 436 |
+
"top_k": 50,
|
| 437 |
+
"top_p": 1.0,
|
| 438 |
+
"typical_p": 1.0,
|
| 439 |
+
"window_size": 24
|
| 440 |
+
},
|
| 441 |
+
"backbone_feature_sizes": [
|
| 442 |
+
[
|
| 443 |
+
288,
|
| 444 |
+
288
|
| 445 |
+
],
|
| 446 |
+
[
|
| 447 |
+
144,
|
| 448 |
+
144
|
| 449 |
+
],
|
| 450 |
+
[
|
| 451 |
+
72,
|
| 452 |
+
72
|
| 453 |
+
]
|
| 454 |
+
],
|
| 455 |
+
"bad_words_ids": null,
|
| 456 |
+
"begin_suppress_tokens": null,
|
| 457 |
+
"bos_token_id": null,
|
| 458 |
+
"chunk_size_feed_forward": 0,
|
| 459 |
+
"cross_attention_hidden_size": null,
|
| 460 |
+
"decoder_start_token_id": null,
|
| 461 |
+
"diversity_penalty": 0.0,
|
| 462 |
+
"do_sample": false,
|
| 463 |
+
"dtype": null,
|
| 464 |
+
"early_stopping": false,
|
| 465 |
+
"encoder_no_repeat_ngram_size": 0,
|
| 466 |
+
"eos_token_id": null,
|
| 467 |
+
"exponential_decay_length_penalty": null,
|
| 468 |
+
"finetuning_task": null,
|
| 469 |
+
"forced_bos_token_id": null,
|
| 470 |
+
"forced_eos_token_id": null,
|
| 471 |
+
"fpn_hidden_size": 256,
|
| 472 |
+
"fpn_kernel_size": 2,
|
| 473 |
+
"fpn_stride": 2,
|
| 474 |
+
"hidden_act": "gelu",
|
| 475 |
+
"id2label": {
|
| 476 |
+
"0": "LABEL_0",
|
| 477 |
+
"1": "LABEL_1"
|
| 478 |
+
},
|
| 479 |
+
"initializer_range": 0.02,
|
| 480 |
+
"is_decoder": false,
|
| 481 |
+
"is_encoder_decoder": false,
|
| 482 |
+
"label2id": {
|
| 483 |
+
"LABEL_0": 0,
|
| 484 |
+
"LABEL_1": 1
|
| 485 |
+
},
|
| 486 |
+
"layer_norm_eps": 1e-06,
|
| 487 |
+
"length_penalty": 1.0,
|
| 488 |
+
"max_length": 20,
|
| 489 |
+
"min_length": 0,
|
| 490 |
+
"model_type": "sam3_vision_model",
|
| 491 |
+
"no_repeat_ngram_size": 0,
|
| 492 |
+
"num_beam_groups": 1,
|
| 493 |
+
"num_beams": 1,
|
| 494 |
+
"num_feature_levels": 3,
|
| 495 |
+
"num_return_sequences": 1,
|
| 496 |
+
"output_attentions": false,
|
| 497 |
+
"output_hidden_states": false,
|
| 498 |
+
"output_scores": false,
|
| 499 |
+
"pad_token_id": null,
|
| 500 |
+
"prefix": null,
|
| 501 |
+
"problem_type": null,
|
| 502 |
+
"remove_invalid_values": false,
|
| 503 |
+
"repetition_penalty": 1.0,
|
| 504 |
+
"return_dict": true,
|
| 505 |
+
"return_dict_in_generate": false,
|
| 506 |
+
"scale_factors": [
|
| 507 |
+
4.0,
|
| 508 |
+
2.0,
|
| 509 |
+
1.0,
|
| 510 |
+
0.5
|
| 511 |
+
],
|
| 512 |
+
"sep_token_id": null,
|
| 513 |
+
"suppress_tokens": null,
|
| 514 |
+
"task_specific_params": null,
|
| 515 |
+
"temperature": 1.0,
|
| 516 |
+
"tie_encoder_decoder": false,
|
| 517 |
+
"tie_word_embeddings": true,
|
| 518 |
+
"tokenizer_class": null,
|
| 519 |
+
"top_k": 50,
|
| 520 |
+
"top_p": 1.0,
|
| 521 |
+
"typical_p": 1.0
|
| 522 |
+
}
|
| 523 |
+
},
|
| 524 |
+
"dtype": "float32",
|
| 525 |
+
"fill_hole_area": 16,
|
| 526 |
+
"high_conf_thresh": 0.8,
|
| 527 |
+
"high_iou_thresh": 0.8,
|
| 528 |
+
"hotstart_delay": 15,
|
| 529 |
+
"hotstart_dup_thresh": 8,
|
| 530 |
+
"hotstart_unmatch_thresh": 8,
|
| 531 |
+
"init_trk_keep_alive": 30,
|
| 532 |
+
"initializer_range": 0.02,
|
| 533 |
+
"low_res_mask_size": 288,
|
| 534 |
+
"max_num_objects": 10000,
|
| 535 |
+
"max_trk_keep_alive": 30,
|
| 536 |
+
"min_trk_keep_alive": -1,
|
| 537 |
+
"model_type": "sam3_video",
|
| 538 |
+
"new_det_thresh": 0.7,
|
| 539 |
+
"recondition_every_nth_frame": 16,
|
| 540 |
+
"recondition_on_trk_masks": false,
|
| 541 |
+
"score_threshold_detection": 0.5,
|
| 542 |
+
"suppress_overlapping_based_on_recent_occlusion_threshold": 0.7,
|
| 543 |
+
"suppress_unmatched_only_within_hotstart": true,
|
| 544 |
+
"tracker_config": {
|
| 545 |
+
"enable_occlusion_spatial_embedding": true,
|
| 546 |
+
"enable_temporal_pos_encoding_for_object_pointers": true,
|
| 547 |
+
"image_size": 1008,
|
| 548 |
+
"initializer_range": 0.02,
|
| 549 |
+
"mask_decoder_config": {
|
| 550 |
+
"_name_or_path": "",
|
| 551 |
+
"add_cross_attention": false,
|
| 552 |
+
"architectures": null,
|
| 553 |
+
"attention_downsample_rate": 2,
|
| 554 |
+
"bad_words_ids": null,
|
| 555 |
+
"begin_suppress_tokens": null,
|
| 556 |
+
"bos_token_id": null,
|
| 557 |
+
"chunk_size_feed_forward": 0,
|
| 558 |
+
"cross_attention_hidden_size": null,
|
| 559 |
+
"decoder_start_token_id": null,
|
| 560 |
+
"diversity_penalty": 0.0,
|
| 561 |
+
"do_sample": false,
|
| 562 |
+
"dtype": null,
|
| 563 |
+
"dynamic_multimask_stability_delta": 0.05,
|
| 564 |
+
"dynamic_multimask_stability_thresh": 0.98,
|
| 565 |
+
"dynamic_multimask_via_stability": true,
|
| 566 |
+
"early_stopping": false,
|
| 567 |
+
"encoder_no_repeat_ngram_size": 0,
|
| 568 |
+
"eos_token_id": null,
|
| 569 |
+
"exponential_decay_length_penalty": null,
|
| 570 |
+
"finetuning_task": null,
|
| 571 |
+
"forced_bos_token_id": null,
|
| 572 |
+
"forced_eos_token_id": null,
|
| 573 |
+
"hidden_act": "gelu",
|
| 574 |
+
"hidden_size": 256,
|
| 575 |
+
"id2label": {
|
| 576 |
+
"0": "LABEL_0",
|
| 577 |
+
"1": "LABEL_1"
|
| 578 |
+
},
|
| 579 |
+
"iou_head_depth": 3,
|
| 580 |
+
"iou_head_hidden_dim": 256,
|
| 581 |
+
"is_decoder": false,
|
| 582 |
+
"is_encoder_decoder": false,
|
| 583 |
+
"label2id": {
|
| 584 |
+
"LABEL_0": 0,
|
| 585 |
+
"LABEL_1": 1
|
| 586 |
+
},
|
| 587 |
+
"length_penalty": 1.0,
|
| 588 |
+
"max_length": 20,
|
| 589 |
+
"min_length": 0,
|
| 590 |
+
"mlp_dim": 2048,
|
| 591 |
+
"model_type": "",
|
| 592 |
+
"no_repeat_ngram_size": 0,
|
| 593 |
+
"num_attention_heads": 8,
|
| 594 |
+
"num_beam_groups": 1,
|
| 595 |
+
"num_beams": 1,
|
| 596 |
+
"num_hidden_layers": 2,
|
| 597 |
+
"num_multimask_outputs": 3,
|
| 598 |
+
"num_return_sequences": 1,
|
| 599 |
+
"output_attentions": false,
|
| 600 |
+
"output_hidden_states": false,
|
| 601 |
+
"output_scores": false,
|
| 602 |
+
"pad_token_id": null,
|
| 603 |
+
"prefix": null,
|
| 604 |
+
"problem_type": null,
|
| 605 |
+
"remove_invalid_values": false,
|
| 606 |
+
"repetition_penalty": 1.0,
|
| 607 |
+
"return_dict": true,
|
| 608 |
+
"return_dict_in_generate": false,
|
| 609 |
+
"sep_token_id": null,
|
| 610 |
+
"suppress_tokens": null,
|
| 611 |
+
"task_specific_params": null,
|
| 612 |
+
"temperature": 1.0,
|
| 613 |
+
"tie_encoder_decoder": false,
|
| 614 |
+
"tie_word_embeddings": true,
|
| 615 |
+
"tokenizer_class": null,
|
| 616 |
+
"top_k": 50,
|
| 617 |
+
"top_p": 1.0,
|
| 618 |
+
"typical_p": 1.0
|
| 619 |
+
},
|
| 620 |
+
"mask_downsampler_embed_dim": 256,
|
| 621 |
+
"mask_downsampler_hidden_act": "gelu",
|
| 622 |
+
"mask_downsampler_kernel_size": 3,
|
| 623 |
+
"mask_downsampler_padding": 1,
|
| 624 |
+
"mask_downsampler_stride": 2,
|
| 625 |
+
"mask_downsampler_total_stride": 16,
|
| 626 |
+
"max_cond_frame_num": 4,
|
| 627 |
+
"max_object_pointers_in_encoder": 16,
|
| 628 |
+
"memory_attention_downsample_rate": 1,
|
| 629 |
+
"memory_attention_dropout": 0.1,
|
| 630 |
+
"memory_attention_feed_forward_hidden_act": "relu",
|
| 631 |
+
"memory_attention_feed_forward_hidden_size": 2048,
|
| 632 |
+
"memory_attention_hidden_size": 256,
|
| 633 |
+
"memory_attention_num_attention_heads": 1,
|
| 634 |
+
"memory_attention_num_layers": 4,
|
| 635 |
+
"memory_attention_rope_dropout": 0.1,
|
| 636 |
+
"memory_attention_rope_feat_sizes": [
|
| 637 |
+
72,
|
| 638 |
+
72
|
| 639 |
+
],
|
| 640 |
+
"memory_attention_rope_theta": 10000,
|
| 641 |
+
"memory_encoder_hidden_size": 256,
|
| 642 |
+
"memory_encoder_output_channels": 64,
|
| 643 |
+
"memory_fuser_embed_dim": 256,
|
| 644 |
+
"memory_fuser_hidden_act": "gelu",
|
| 645 |
+
"memory_fuser_intermediate_dim": 1024,
|
| 646 |
+
"memory_fuser_kernel_size": 7,
|
| 647 |
+
"memory_fuser_layer_scale_init_value": 1e-06,
|
| 648 |
+
"memory_fuser_num_layers": 2,
|
| 649 |
+
"memory_fuser_padding": 3,
|
| 650 |
+
"model_type": "sam3_tracker_video",
|
| 651 |
+
"multimask_max_pt_num": 1,
|
| 652 |
+
"multimask_min_pt_num": 0,
|
| 653 |
+
"multimask_output_for_tracking": true,
|
| 654 |
+
"multimask_output_in_sam": true,
|
| 655 |
+
"num_maskmem": 7,
|
| 656 |
+
"prompt_encoder_config": {
|
| 657 |
+
"_name_or_path": "",
|
| 658 |
+
"add_cross_attention": false,
|
| 659 |
+
"architectures": null,
|
| 660 |
+
"bad_words_ids": null,
|
| 661 |
+
"begin_suppress_tokens": null,
|
| 662 |
+
"bos_token_id": null,
|
| 663 |
+
"chunk_size_feed_forward": 0,
|
| 664 |
+
"cross_attention_hidden_size": null,
|
| 665 |
+
"decoder_start_token_id": null,
|
| 666 |
+
"diversity_penalty": 0.0,
|
| 667 |
+
"do_sample": false,
|
| 668 |
+
"dtype": null,
|
| 669 |
+
"early_stopping": false,
|
| 670 |
+
"encoder_no_repeat_ngram_size": 0,
|
| 671 |
+
"eos_token_id": null,
|
| 672 |
+
"exponential_decay_length_penalty": null,
|
| 673 |
+
"finetuning_task": null,
|
| 674 |
+
"forced_bos_token_id": null,
|
| 675 |
+
"forced_eos_token_id": null,
|
| 676 |
+
"hidden_act": "gelu",
|
| 677 |
+
"hidden_size": 256,
|
| 678 |
+
"id2label": {
|
| 679 |
+
"0": "LABEL_0",
|
| 680 |
+
"1": "LABEL_1"
|
| 681 |
+
},
|
| 682 |
+
"image_size": 1008,
|
| 683 |
+
"is_decoder": false,
|
| 684 |
+
"is_encoder_decoder": false,
|
| 685 |
+
"label2id": {
|
| 686 |
+
"LABEL_0": 0,
|
| 687 |
+
"LABEL_1": 1
|
| 688 |
+
},
|
| 689 |
+
"layer_norm_eps": 1e-06,
|
| 690 |
+
"length_penalty": 1.0,
|
| 691 |
+
"mask_input_channels": 16,
|
| 692 |
+
"max_length": 20,
|
| 693 |
+
"min_length": 0,
|
| 694 |
+
"model_type": "",
|
| 695 |
+
"no_repeat_ngram_size": 0,
|
| 696 |
+
"num_beam_groups": 1,
|
| 697 |
+
"num_beams": 1,
|
| 698 |
+
"num_point_embeddings": 4,
|
| 699 |
+
"num_return_sequences": 1,
|
| 700 |
+
"output_attentions": false,
|
| 701 |
+
"output_hidden_states": false,
|
| 702 |
+
"output_scores": false,
|
| 703 |
+
"pad_token_id": null,
|
| 704 |
+
"patch_size": 14,
|
| 705 |
+
"prefix": null,
|
| 706 |
+
"problem_type": null,
|
| 707 |
+
"remove_invalid_values": false,
|
| 708 |
+
"repetition_penalty": 1.0,
|
| 709 |
+
"return_dict": true,
|
| 710 |
+
"return_dict_in_generate": false,
|
| 711 |
+
"scale": 1,
|
| 712 |
+
"sep_token_id": null,
|
| 713 |
+
"suppress_tokens": null,
|
| 714 |
+
"task_specific_params": null,
|
| 715 |
+
"temperature": 1.0,
|
| 716 |
+
"tie_encoder_decoder": false,
|
| 717 |
+
"tie_word_embeddings": true,
|
| 718 |
+
"tokenizer_class": null,
|
| 719 |
+
"top_k": 50,
|
| 720 |
+
"top_p": 1.0,
|
| 721 |
+
"typical_p": 1.0
|
| 722 |
+
},
|
| 723 |
+
"sigmoid_bias_for_mem_enc": -10.0,
|
| 724 |
+
"sigmoid_scale_for_mem_enc": 20.0,
|
| 725 |
+
"vision_config": {
|
| 726 |
+
"_name_or_path": "",
|
| 727 |
+
"add_cross_attention": false,
|
| 728 |
+
"architectures": null,
|
| 729 |
+
"backbone_config": {
|
| 730 |
+
"_name_or_path": "",
|
| 731 |
+
"add_cross_attention": false,
|
| 732 |
+
"architectures": null,
|
| 733 |
+
"attention_dropout": 0.0,
|
| 734 |
+
"bad_words_ids": null,
|
| 735 |
+
"begin_suppress_tokens": null,
|
| 736 |
+
"bos_token_id": null,
|
| 737 |
+
"chunk_size_feed_forward": 0,
|
| 738 |
+
"cross_attention_hidden_size": null,
|
| 739 |
+
"decoder_start_token_id": null,
|
| 740 |
+
"diversity_penalty": 0.0,
|
| 741 |
+
"do_sample": false,
|
| 742 |
+
"dtype": null,
|
| 743 |
+
"early_stopping": false,
|
| 744 |
+
"encoder_no_repeat_ngram_size": 0,
|
| 745 |
+
"eos_token_id": null,
|
| 746 |
+
"exponential_decay_length_penalty": null,
|
| 747 |
+
"finetuning_task": null,
|
| 748 |
+
"forced_bos_token_id": null,
|
| 749 |
+
"forced_eos_token_id": null,
|
| 750 |
+
"global_attn_indexes": [
|
| 751 |
+
7,
|
| 752 |
+
15,
|
| 753 |
+
23,
|
| 754 |
+
31
|
| 755 |
+
],
|
| 756 |
+
"hidden_act": "gelu",
|
| 757 |
+
"hidden_dropout": 0.0,
|
| 758 |
+
"hidden_size": 1024,
|
| 759 |
+
"id2label": {
|
| 760 |
+
"0": "LABEL_0",
|
| 761 |
+
"1": "LABEL_1"
|
| 762 |
+
},
|
| 763 |
+
"image_size": 1008,
|
| 764 |
+
"initializer_range": 0.02,
|
| 765 |
+
"intermediate_size": 4736,
|
| 766 |
+
"is_decoder": false,
|
| 767 |
+
"is_encoder_decoder": false,
|
| 768 |
+
"label2id": {
|
| 769 |
+
"LABEL_0": 0,
|
| 770 |
+
"LABEL_1": 1
|
| 771 |
+
},
|
| 772 |
+
"layer_norm_eps": 1e-06,
|
| 773 |
+
"layer_scale_init_value": null,
|
| 774 |
+
"length_penalty": 1.0,
|
| 775 |
+
"max_length": 20,
|
| 776 |
+
"min_length": 0,
|
| 777 |
+
"model_type": "sam3_vit_model",
|
| 778 |
+
"no_repeat_ngram_size": 0,
|
| 779 |
+
"num_attention_heads": 16,
|
| 780 |
+
"num_beam_groups": 1,
|
| 781 |
+
"num_beams": 1,
|
| 782 |
+
"num_channels": 3,
|
| 783 |
+
"num_hidden_layers": 32,
|
| 784 |
+
"num_return_sequences": 1,
|
| 785 |
+
"output_attentions": false,
|
| 786 |
+
"output_hidden_states": false,
|
| 787 |
+
"output_scores": false,
|
| 788 |
+
"pad_token_id": null,
|
| 789 |
+
"patch_size": 14,
|
| 790 |
+
"prefix": null,
|
| 791 |
+
"pretrain_image_size": 336,
|
| 792 |
+
"problem_type": null,
|
| 793 |
+
"qkv_bias": true,
|
| 794 |
+
"remove_invalid_values": false,
|
| 795 |
+
"repetition_penalty": 1.0,
|
| 796 |
+
"return_dict": true,
|
| 797 |
+
"return_dict_in_generate": false,
|
| 798 |
+
"rope_theta": 10000.0,
|
| 799 |
+
"sep_token_id": null,
|
| 800 |
+
"suppress_tokens": null,
|
| 801 |
+
"task_specific_params": null,
|
| 802 |
+
"temperature": 1.0,
|
| 803 |
+
"tie_encoder_decoder": false,
|
| 804 |
+
"tie_word_embeddings": true,
|
| 805 |
+
"tokenizer_class": null,
|
| 806 |
+
"top_k": 50,
|
| 807 |
+
"top_p": 1.0,
|
| 808 |
+
"typical_p": 1.0,
|
| 809 |
+
"window_size": 24
|
| 810 |
+
},
|
| 811 |
+
"backbone_feature_sizes": [
|
| 812 |
+
[
|
| 813 |
+
288,
|
| 814 |
+
288
|
| 815 |
+
],
|
| 816 |
+
[
|
| 817 |
+
144,
|
| 818 |
+
144
|
| 819 |
+
],
|
| 820 |
+
[
|
| 821 |
+
72,
|
| 822 |
+
72
|
| 823 |
+
]
|
| 824 |
+
],
|
| 825 |
+
"bad_words_ids": null,
|
| 826 |
+
"begin_suppress_tokens": null,
|
| 827 |
+
"bos_token_id": null,
|
| 828 |
+
"chunk_size_feed_forward": 0,
|
| 829 |
+
"cross_attention_hidden_size": null,
|
| 830 |
+
"decoder_start_token_id": null,
|
| 831 |
+
"diversity_penalty": 0.0,
|
| 832 |
+
"do_sample": false,
|
| 833 |
+
"dtype": null,
|
| 834 |
+
"early_stopping": false,
|
| 835 |
+
"encoder_no_repeat_ngram_size": 0,
|
| 836 |
+
"eos_token_id": null,
|
| 837 |
+
"exponential_decay_length_penalty": null,
|
| 838 |
+
"finetuning_task": null,
|
| 839 |
+
"forced_bos_token_id": null,
|
| 840 |
+
"forced_eos_token_id": null,
|
| 841 |
+
"fpn_hidden_size": 256,
|
| 842 |
+
"fpn_kernel_size": 2,
|
| 843 |
+
"fpn_stride": 2,
|
| 844 |
+
"hidden_act": "gelu",
|
| 845 |
+
"id2label": {
|
| 846 |
+
"0": "LABEL_0",
|
| 847 |
+
"1": "LABEL_1"
|
| 848 |
+
},
|
| 849 |
+
"initializer_range": 0.02,
|
| 850 |
+
"is_decoder": false,
|
| 851 |
+
"is_encoder_decoder": false,
|
| 852 |
+
"label2id": {
|
| 853 |
+
"LABEL_0": 0,
|
| 854 |
+
"LABEL_1": 1
|
| 855 |
+
},
|
| 856 |
+
"layer_norm_eps": 1e-06,
|
| 857 |
+
"length_penalty": 1.0,
|
| 858 |
+
"max_length": 20,
|
| 859 |
+
"min_length": 0,
|
| 860 |
+
"model_type": "sam3_vision_model",
|
| 861 |
+
"no_repeat_ngram_size": 0,
|
| 862 |
+
"num_beam_groups": 1,
|
| 863 |
+
"num_beams": 1,
|
| 864 |
+
"num_feature_levels": 3,
|
| 865 |
+
"num_return_sequences": 1,
|
| 866 |
+
"output_attentions": false,
|
| 867 |
+
"output_hidden_states": false,
|
| 868 |
+
"output_scores": false,
|
| 869 |
+
"pad_token_id": null,
|
| 870 |
+
"prefix": null,
|
| 871 |
+
"problem_type": null,
|
| 872 |
+
"remove_invalid_values": false,
|
| 873 |
+
"repetition_penalty": 1.0,
|
| 874 |
+
"return_dict": true,
|
| 875 |
+
"return_dict_in_generate": false,
|
| 876 |
+
"scale_factors": [
|
| 877 |
+
4.0,
|
| 878 |
+
2.0,
|
| 879 |
+
1.0,
|
| 880 |
+
0.5
|
| 881 |
+
],
|
| 882 |
+
"sep_token_id": null,
|
| 883 |
+
"suppress_tokens": null,
|
| 884 |
+
"task_specific_params": null,
|
| 885 |
+
"temperature": 1.0,
|
| 886 |
+
"tie_encoder_decoder": false,
|
| 887 |
+
"tie_word_embeddings": true,
|
| 888 |
+
"tokenizer_class": null,
|
| 889 |
+
"top_k": 50,
|
| 890 |
+
"top_p": 1.0,
|
| 891 |
+
"typical_p": 1.0
|
| 892 |
+
}
|
| 893 |
+
},
|
| 894 |
+
"transformers_version": "5.0.0.dev0",
|
| 895 |
+
"trk_assoc_iou_thresh": 0.5
|
| 896 |
+
}
|
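Taken together, config.json describes a composite sam3_video model: a promptable detector (DETR-style encoder and decoder over a windowed ViT backbone, with a CLIP text tower and a geometry encoder) plus a streaming tracker with mask memory and memory attention. A minimal sketch for inspecting the nested sub-configs from a local clone of this repo (the file path is illustrative; the key names are exactly those in the file above):

import json

# Load the raw config from a local clone of this repo.
with open("config.json") as f:
    cfg = json.load(f)

def walk(node, path="config"):
    # Print the model_type of every nested sub-config.
    if "model_type" in node:
        print(f"{path}: {node['model_type'] or '(unnamed)'}")
    for key, value in node.items():
        if isinstance(value, dict):
            walk(value, f"{path}.{key}")

walk(cfg)                                    # config: sam3_video, config.tracker_config: sam3_tracker_video, ...
print(cfg["tracker_config"]["num_maskmem"])  # 7 frames of mask memory

The file pins transformers_version 5.0.0.dev0, so loading it through AutoConfig presumably requires a transformers build recent enough to register the sam3 model types.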
merges.txt
ADDED
The diff for this file is too large to render.
See raw diff
model.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6d06f0a5f84e435071fe6603e61d0b4cc7b40e0d39d487cfd4d67d8cc11cc14a
size 3439938512
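Only this three-line Git LFS pointer is stored in git; the roughly 3.4 GB weights blob is fetched separately by LFS. A sketch for checking that a downloaded model.safetensors matches the pointer's oid and size (the local path is an assumption):

import hashlib
import os

path = "model.safetensors"  # the downloaded LFS object, not the pointer file
sha = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha.update(chunk)

assert os.path.getsize(path) == 3439938512  # "size" from the pointer
assert sha.hexdigest() == "6d06f0a5f84e435071fe6603e61d0b4cc7b40e0d39d487cfd4d67d8cc11cc14a"  # "oid"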
processor_config.json
ADDED
@@ -0,0 +1,80 @@
{
  "image_processor": {
    "crop_size": null,
    "data_format": "channels_first",
    "device": null,
    "disable_grouping": null,
    "do_center_crop": null,
    "do_convert_rgb": true,
    "do_normalize": true,
    "do_pad": null,
    "do_rescale": true,
    "do_resize": true,
    "image_mean": [
      0.5,
      0.5,
      0.5
    ],
    "image_processor_type": "Sam3ImageProcessorFast",
    "image_seq_length": null,
    "image_std": [
      0.5,
      0.5,
      0.5
    ],
    "input_data_format": null,
    "mask_size": {
      "height": 288,
      "width": 288
    },
    "pad_size": null,
    "processor_class": "Sam3VideoProcessor",
    "resample": 2,
    "rescale_factor": 0.00392156862745098,
    "return_tensors": null,
    "size": {
      "height": 1008,
      "width": 1008
    }
  },
  "processor_class": "Sam3VideoProcessor",
  "target_size": 1008,
  "video_processor": {
    "crop_size": null,
    "data_format": "channels_first",
    "default_to_square": true,
    "device": null,
    "do_center_crop": null,
    "do_convert_rgb": true,
    "do_normalize": true,
    "do_pad": null,
    "do_rescale": true,
    "do_resize": true,
    "do_sample_frames": null,
    "fps": null,
    "image_mean": [
      0.5,
      0.5,
      0.5
    ],
    "image_std": [
      0.5,
      0.5,
      0.5
    ],
    "input_data_format": null,
    "num_frames": null,
    "pad_size": null,
    "processor_class": "Sam3VideoProcessor",
    "resample": 2,
    "rescale_factor": 0.00392156862745098,
    "return_metadata": false,
    "return_tensors": null,
    "size": {
      "height": 1008,
      "width": 1008
    },
    "video_metadata": null,
    "video_processor_type": "Sam2VideoVideoProcessor"
  }
}
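Both the image and video processors apply the same preprocessing: resize to 1008x1008 (resample 2 is bilinear in PIL's resampling enum), rescale by 1/255 (the 0.00392156862745098 above), then normalize with per-channel mean and std of 0.5, mapping pixels into [-1, 1]. A numpy sketch of just that arithmetic (illustrative only; the real Sam3ImageProcessorFast also handles resizing, RGB conversion, and batching):

import numpy as np

frame = np.random.randint(0, 256, (1008, 1008, 3), dtype=np.uint8)  # stand-in for a resized RGB frame

rescale_factor = 1 / 255             # 0.00392156862745098 in the config
mean = np.float32([0.5, 0.5, 0.5])
std = np.float32([0.5, 0.5, 0.5])

pixels = frame.astype(np.float32) * rescale_factor
pixels = (pixels - mean) / std       # now in [-1, 1]
pixels = pixels.transpose(2, 0, 1)   # "data_format": "channels_first"

Since 1008 = 72 x 14, the ViT backbone (patch_size 14) produces a 72x72 patch grid, which the first three scale_factors in config.json (4.0, 2.0, 1.0) expand to the 288/144/72 backbone_feature_sizes; the 288x288 mask_size matches the finest level and the low_res_mask_size.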
sam3.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9999e2341ceef5e136daa386eecb55cb414446a00ac2b55eb2dfd2f7c3cf8c9e
size 3450062241
special_tokens_map.json
ADDED
@@ -0,0 +1,30 @@
{
  "bos_token": {
    "content": "<|startoftext|>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json
ADDED
The diff for this file is too large to render.
See raw diff
tokenizer_config.json
ADDED
@@ -0,0 +1,33 @@
{
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "49406": {
      "content": "<|startoftext|>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "49407": {
      "content": "<|endoftext|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "bos_token": "<|startoftext|>",
  "clean_up_tokenization_spaces": false,
  "do_lower_case": true,
  "eos_token": "<|endoftext|>",
  "errors": "replace",
  "extra_special_tokens": {},
  "max_length": 32,
  "model_max_length": 32,
  "pad_token": "<|endoftext|>",
  "processor_class": "Sam3VideoProcessor",
  "tokenizer_class": "CLIPTokenizer",
  "unk_token": "<|endoftext|>"
}
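Text prompts go through a stock CLIP BPE tokenizer: lower-cased (do_lower_case) and capped at 32 tokens, matching the text tower's max_position_embeddings of 32 in config.json. A usage sketch, assuming a local clone of this repo (CLIPTokenizer is the standard transformers class named above):

from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained(".")  # path to a local clone

enc = tokenizer("A Red Bicycle", padding="max_length", truncation=True)
print(enc["input_ids"][0])    # 49406, <|startoftext|>
print(len(enc["input_ids"]))  # 32: padded to model_max_length with <|endoftext|>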
vocab.json
ADDED
The diff for this file is too large to render.
See raw diff