NVJT committed on
Commit 3b554c2 · verified · 1 Parent(s): 52fd434

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,175 @@
+ ---
+ license: apache-2.0
+ license_link: https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct/blob/main/LICENSE
+ pipeline_tag: text-generation
+ base_model:
+ - Qwen/Qwen3-Coder-30B-A3B-Instruct
+ base_model_relation: quantized
+ library_name: Model Optimizer
+ tags:
+ - nvidia
+ - ModelOpt
+ - Qwen3
+ - quantized
+ - FP4
+ ---
+
+ # Qwen3-Coder-30B-A3B-Instruct
+ <a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
+ <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+
+ ## Highlights
+
+ **Qwen3-Coder** is available in multiple sizes. Today, we're excited to introduce **Qwen3-Coder-30B-A3B-Instruct**. This streamlined model maintains impressive performance and efficiency, featuring the following key enhancements:
+
+ - **Significant Performance** among open models on **Agentic Coding**, **Agentic Browser-Use**, and other foundational coding tasks.
+ - **Long-context Capabilities** with native support for **256K** tokens, extendable up to **1M** tokens using YaRN, optimized for repository-scale understanding (a configuration sketch follows the figure below).
+ - **Agentic Coding** support for most platforms, such as **Qwen Code** and **CLINE**, featuring a specially designed function-call format.
+
+ ![image/jpeg](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-Coder/qwen3-coder-30a3-main.jpg)
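+
+ The long-context bullet above refers to the standard `transformers` YaRN override. The snippet below is a minimal, hypothetical sketch of how that override could be applied; the `factor` of 4.0 and the `original_max_position_embeddings` value are assumptions scaled from the native 262,144-token window, not settings shipped with this repository, so validate long-context quality before relying on it.
+
+ ```python
+ from transformers import AutoConfig, AutoModelForCausalLM
+
+ model_name = "Qwen/Qwen3-Coder-30B-A3B-Instruct"
+
+ # Hypothetical YaRN rope scaling: factor 4.0 stretches the native
+ # 262,144-token window toward ~1M tokens.
+ config = AutoConfig.from_pretrained(model_name)
+ config.rope_scaling = {
+     "rope_type": "yarn",
+     "factor": 4.0,
+     "original_max_position_embeddings": 262144,
+ }
+
+ model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     config=config,
+     torch_dtype="auto",
+     device_map="auto",
+ )
+ ```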
+
+ ## Model Overview
+
+ **Qwen3-Coder-30B-A3B-Instruct** has the following features:
+ - Type: Causal Language Models
+ - Training Stage: Pretraining & Post-training
+ - Number of Parameters: 30.5B in total and 3.3B activated
+ - Number of Layers: 48
+ - Number of Attention Heads (GQA): 32 for Q and 4 for KV
+ - Number of Experts: 128
+ - Number of Activated Experts: 8
+ - Context Length: **262,144 tokens natively**
+
+ **NOTE: This model supports only non-thinking mode and does not generate ``<think></think>`` blocks in its output. Meanwhile, specifying `enable_thinking=False` is no longer required.**
+
+ For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3-coder/), [GitHub](https://github.com/QwenLM/Qwen3-Coder), and [Documentation](https://qwen.readthedocs.io/en/latest/).
+
+ ## Quickstart
+
+ We advise you to use the latest version of `transformers`.
+
+ With `transformers<4.51.0`, you will encounter the following error:
+ ```
+ KeyError: 'qwen3_moe'
+ ```
+
+ The following code snippet illustrates how to use the model to generate content from a given input.
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "Qwen/Qwen3-Coder-30B-A3B-Instruct"
+
+ # Load the tokenizer and the model
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     torch_dtype="auto",
+     device_map="auto"
+ )
+
+ # Prepare the model input
+ prompt = "Write a quick sort algorithm."
+ messages = [
+     {"role": "user", "content": prompt}
+ ]
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True,
+ )
+ model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
+
+ # Conduct text completion
+ generated_ids = model.generate(
+     **model_inputs,
+     max_new_tokens=65536
+ )
+ output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
+
+ content = tokenizer.decode(output_ids, skip_special_tokens=True)
+
+ print("content:", content)
+ ```
+
+ **Note: If you encounter out-of-memory (OOM) issues, consider reducing the context length to a shorter value, such as `32,768`.**
+
+ For local use, applications such as Ollama, LM Studio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
+
+ ## Agentic Coding
+
+ Qwen3-Coder excels at tool calling.
+
+ You can simply define or use any tools as in the following example.
+ ```python
+ from openai import OpenAI
+
+ # Your tool implementation
+ def square_the_number(num: float) -> float:
+     return num ** 2
+
+ # Define tools
+ tools = [
+     {
+         "type": "function",
+         "function": {
+             "name": "square_the_number",
+             "description": "output the square of the number.",
+             "parameters": {
+                 "type": "object",
+                 "required": ["input_num"],
+                 "properties": {
+                     "input_num": {
+                         "type": "number",
+                         "description": "input_num is a number that will be squared"
+                     }
+                 },
+             }
+         }
+     }
+ ]
+
+ # Define the LLM client
+ client = OpenAI(
+     # Use a custom endpoint compatible with the OpenAI API
+     base_url="http://localhost:8000/v1",  # api_base
+     api_key="EMPTY"
+ )
+
+ messages = [{"role": "user", "content": "square the number 1024"}]
+
+ completion = client.chat.completions.create(
+     messages=messages,
+     model="Qwen3-Coder-30B-A3B-Instruct",
+     max_tokens=65536,
+     tools=tools,
+ )
+
+ print(completion.choices[0])
+ ```
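+
+ As a hedged continuation of the example above (assuming the serving endpoint parses the model's function-call output into the standard OpenAI `tool_calls` field, as vLLM or SGLang can do when a tool parser is enabled), you could execute the requested tool and feed the result back for a final answer:
+
+ ```python
+ import json
+
+ message = completion.choices[0].message
+ if message.tool_calls:
+     call = message.tool_calls[0]
+     args = json.loads(call.function.arguments)
+
+     # Run the local tool implementation with the arguments the model chose.
+     result = square_the_number(args["input_num"])
+
+     # Append the assistant turn and the tool result, then ask for the final answer.
+     messages.append(message)
+     messages.append({
+         "role": "tool",
+         "tool_call_id": call.id,
+         "content": str(result),
+     })
+     final = client.chat.completions.create(
+         messages=messages,
+         model="Qwen3-Coder-30B-A3B-Instruct",
+         max_tokens=65536,
+         tools=tools,
+     )
+     print(final.choices[0].message.content)
+ ```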
+
+ ## Best Practices
+
+ To achieve optimal performance, we recommend the following settings:
+
+ 1. **Sampling Parameters**:
+    - We suggest using `temperature=0.7`, `top_p=0.8`, `top_k=20`, `repetition_penalty=1.05` (a usage sketch follows this list).
+
+ 2. **Adequate Output Length**: We recommend using an output length of 65,536 tokens for most queries, which is adequate for instruct models.
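+
+ Continuing from the Quickstart snippet above, a minimal sketch of these settings with `transformers` looks like this (the same values ship in this repository's `generation_config.json`):
+
+ ```python
+ # Recommended sampling settings applied at generation time.
+ generated_ids = model.generate(
+     **model_inputs,
+     max_new_tokens=65536,
+     do_sample=True,
+     temperature=0.7,
+     top_p=0.8,
+     top_k=20,
+     repetition_penalty=1.05,
+ )
+ ```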
+
+ ### Citation
+
+ If you find our work helpful, feel free to cite it.
+
+ ```
+ @misc{qwen3technicalreport,
+       title={Qwen3 Technical Report},
+       author={Qwen Team},
+       year={2025},
+       eprint={2505.09388},
+       archivePrefix={arXiv},
+       primaryClass={cs.CL},
+       url={https://arxiv.org/abs/2505.09388},
+ }
+ ```
added_tokens.json ADDED
@@ -0,0 +1,28 @@
+ {
+ "</think>": 151668,
+ "</tool_call>": 151658,
+ "</tool_response>": 151666,
+ "<think>": 151667,
+ "<tool_call>": 151657,
+ "<tool_response>": 151665,
+ "<|box_end|>": 151649,
+ "<|box_start|>": 151648,
+ "<|endoftext|>": 151643,
+ "<|file_sep|>": 151664,
+ "<|fim_middle|>": 151660,
+ "<|fim_pad|>": 151662,
+ "<|fim_prefix|>": 151659,
+ "<|fim_suffix|>": 151661,
+ "<|im_end|>": 151645,
+ "<|im_start|>": 151644,
+ "<|image_pad|>": 151655,
+ "<|object_ref_end|>": 151647,
+ "<|object_ref_start|>": 151646,
+ "<|quad_end|>": 151651,
+ "<|quad_start|>": 151650,
+ "<|repo_name|>": 151663,
+ "<|video_pad|>": 151656,
+ "<|vision_end|>": 151653,
+ "<|vision_pad|>": 151654,
+ "<|vision_start|>": 151652
+ }
chat_template.jinja ADDED
@@ -0,0 +1,131 @@
+ {% macro render_item_list(item_list, tag_name='required') %}
+ {%- if item_list is defined and item_list is iterable and item_list | length > 0 %}
+ {%- if tag_name %}{{- '\n<' ~ tag_name ~ '>' -}}{% endif %}
+ {{- '[' }}
+ {%- for item in item_list -%}
+ {%- if loop.index > 1 %}{{- ", "}}{% endif -%}
+ {%- if item is string -%}
+ {{ "`" ~ item ~ "`" }}
+ {%- else -%}
+ {{ item }}
+ {%- endif -%}
+ {%- endfor -%}
+ {{- ']' }}
+ {%- if tag_name %}{{- '</' ~ tag_name ~ '>' -}}{% endif %}
+ {%- endif %}
+ {% endmacro %}
+
+ {%- if messages[0]["role"] == "system" %}
+ {%- set system_message = messages[0]["content"] %}
+ {%- set loop_messages = messages[1:] %}
+ {%- else %}
+ {%- set loop_messages = messages %}
+ {%- endif %}
+
+ {%- if not tools is defined %}
+ {%- set tools = [] %}
+ {%- endif %}
+
+ {%- if system_message is defined %}
+ {{- "<|im_start|>system\n" + system_message }}
+ {%- else %}
+ {%- if tools is iterable and tools | length > 0 %}
+ {{- "<|im_start|>system\nYou are Qwen, a helpful AI assistant that can interact with a computer to solve tasks." }}
+ {%- endif %}
+ {%- endif %}
+ {%- if tools is iterable and tools | length > 0 %}
+ {{- "\n\nYou have access to the following functions:\n\n" }}
+ {{- "<tools>" }}
+ {%- for tool in tools %}
+ {%- if tool.function is defined %}
+ {%- set tool = tool.function %}
+ {%- endif %}
+ {{- "\n<function>\n<name>" ~ tool.name ~ "</name>" }}
+ {{- '\n<description>' ~ (tool.description | trim) ~ '</description>' }}
+ {{- '\n<parameters>' }}
+ {%- for param_name, param_fields in tool.parameters.properties|items %}
+ {{- '\n<parameter>' }}
+ {{- '\n<name>' ~ param_name ~ '</name>' }}
+ {%- if param_fields.type is defined %}
+ {{- '\n<type>' ~ (param_fields.type | string) ~ '</type>' }}
+ {%- endif %}
+ {%- if param_fields.description is defined %}
+ {{- '\n<description>' ~ (param_fields.description | trim) ~ '</description>' }}
+ {%- endif %}
+ {{- render_item_list(param_fields.enum, 'enum') }}
+ {%- set handled_keys = ['type', 'description', 'enum', 'required'] %}
+ {%- for json_key in param_fields.keys() | reject("in", handled_keys) %}
+ {%- set normed_json_key = json_key | replace("-", "_") | replace(" ", "_") | replace("$", "") %}
+ {%- if param_fields[json_key] is mapping %}
+ {{- '\n<' ~ normed_json_key ~ '>' ~ (param_fields[json_key] | tojson | safe) ~ '</' ~ normed_json_key ~ '>' }}
+ {%- else %}
+ {{-'\n<' ~ normed_json_key ~ '>' ~ (param_fields[json_key] | string) ~ '</' ~ normed_json_key ~ '>' }}
+ {%- endif %}
+ {%- endfor %}
+ {{- render_item_list(param_fields.required, 'required') }}
+ {{- '\n</parameter>' }}
+ {%- endfor %}
+ {{- render_item_list(tool.parameters.required, 'required') }}
+ {{- '\n</parameters>' }}
+ {%- if tool.return is defined %}
+ {%- if tool.return is mapping %}
+ {{- '\n<return>' ~ (tool.return | tojson | safe) ~ '</return>' }}
+ {%- else %}
+ {{- '\n<return>' ~ (tool.return | string) ~ '</return>' }}
+ {%- endif %}
+ {%- endif %}
+ {{- '\n</function>' }}
+ {%- endfor %}
+ {{- "\n</tools>" }}
+ {{- '\n\nIf you choose to call a function ONLY reply in the following format with NO suffix:\n\n<tool_call>\n<function=example_function_name>\n<parameter=example_parameter_1>\nvalue_1\n</parameter>\n<parameter=example_parameter_2>\nThis is the value for the second parameter\nthat can span\nmultiple lines\n</parameter>\n</function>\n</tool_call>\n\n<IMPORTANT>\nReminder:\n- Function calls MUST follow the specified format: an inner <function=...></function> block must be nested within <tool_call></tool_call> XML tags\n- Required parameters MUST be specified\n- You may provide optional reasoning for your function call in natural language BEFORE the function call, but NOT after\n- If there is no function call available, answer the question like normal with your current knowledge and do not tell the user about function calls\n</IMPORTANT>' }}
+ {%- endif %}
+ {%- if system_message is defined %}
+ {{- '<|im_end|>\n' }}
+ {%- else %}
+ {%- if tools is iterable and tools | length > 0 %}
+ {{- '<|im_end|>\n' }}
+ {%- endif %}
+ {%- endif %}
+ {%- for message in loop_messages %}
+ {%- if message.role == "assistant" and message.tool_calls is defined and message.tool_calls is iterable and message.tool_calls | length > 0 %}
+ {{- '<|im_start|>' + message.role }}
+ {%- if message.content is defined and message.content is string and message.content | trim | length > 0 %}
+ {{- '\n' + message.content | trim + '\n' }}
+ {%- endif %}
+ {%- for tool_call in message.tool_calls %}
+ {%- if tool_call.function is defined %}
+ {%- set tool_call = tool_call.function %}
+ {%- endif %}
+ {{- '\n<tool_call>\n<function=' + tool_call.name + '>\n' }}
+ {%- if tool_call.arguments is defined %}
+ {%- for args_name, args_value in tool_call.arguments|items %}
+ {{- '<parameter=' + args_name + '>\n' }}
+ {%- set args_value = args_value if args_value is string else args_value | string %}
+ {{- args_value }}
+ {{- '\n</parameter>\n' }}
+ {%- endfor %}
+ {%- endif %}
+ {{- '</function>\n</tool_call>' }}
+ {%- endfor %}
+ {{- '<|im_end|>\n' }}
+ {%- elif message.role == "user" or message.role == "system" or message.role == "assistant" %}
+ {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
+ {%- elif message.role == "tool" %}
+ {%- if loop.previtem and loop.previtem.role != "tool" %}
+ {{- '<|im_start|>user\n' }}
+ {%- endif %}
+ {{- '<tool_response>\n' }}
+ {{- message.content }}
+ {{- '\n</tool_response>\n' }}
+ {%- if not loop.last and loop.nextitem.role != "tool" %}
+ {{- '<|im_end|>\n' }}
+ {%- elif loop.last %}
+ {{- '<|im_end|>\n' }}
+ {%- endif %}
+ {%- else %}
+ {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>\n' }}
+ {%- endif %}
+ {%- endfor %}
+ {%- if add_generation_prompt %}
+ {{- '<|im_start|>assistant\n' }}
+ {%- endif %}
config.json ADDED
@@ -0,0 +1,123 @@
+ {
+ "architectures": [
+ "Qwen3MoeForCausalLM"
+ ],
+ "attention_bias": false,
+ "attention_dropout": 0.0,
+ "decoder_sparse_step": 1,
+ "eos_token_id": 151645,
+ "head_dim": 128,
+ "hidden_act": "silu",
+ "hidden_size": 2048,
+ "initializer_range": 0.02,
+ "intermediate_size": 5472,
+ "max_position_embeddings": 262144,
+ "max_window_layers": 28,
+ "mlp_only_layers": [],
+ "model_type": "qwen3_moe",
+ "moe_intermediate_size": 768,
+ "norm_topk_prob": true,
+ "num_attention_heads": 32,
+ "num_experts": 128,
+ "num_experts_per_tok": 8,
+ "num_hidden_layers": 48,
+ "num_key_value_heads": 4,
+ "output_router_logits": false,
+ "qkv_bias": false,
+ "rms_norm_eps": 1e-06,
+ "rope_scaling": null,
+ "rope_theta": 10000000,
+ "router_aux_loss_coef": 0.0,
+ "shared_expert_intermediate_size": 0,
+ "sliding_window": null,
+ "tie_word_embeddings": false,
+ "torch_dtype": "bfloat16",
+ "transformers_version": "4.53.1",
+ "use_cache": true,
+ "use_qk_norm": true,
+ "use_sliding_window": false,
+ "vocab_size": 151936,
+ "quantization_config": {
+ "config_groups": {
+ "group_0": {
+ "input_activations": {
+ "dynamic": false,
+ "num_bits": 4,
+ "type": "float",
+ "group_size": 16
+ },
+ "weights": {
+ "dynamic": false,
+ "num_bits": 4,
+ "type": "float",
+ "group_size": 16
+ },
+ "targets": [
+ "Linear"
+ ]
+ }
+ },
+ "ignore": [
+ "model.layers.0.mlp.gate",
+ "model.layers.1.mlp.gate",
+ "model.layers.10.mlp.gate",
+ "model.layers.11.mlp.gate",
+ "model.layers.12.mlp.gate",
+ "model.layers.13.mlp.gate",
+ "model.layers.14.mlp.gate",
+ "model.layers.15.mlp.gate",
+ "model.layers.16.mlp.gate",
+ "model.layers.17.mlp.gate",
+ "model.layers.18.mlp.gate",
+ "model.layers.19.mlp.gate",
+ "model.layers.2.mlp.gate",
+ "model.layers.20.mlp.gate",
+ "model.layers.21.mlp.gate",
+ "model.layers.22.mlp.gate",
+ "model.layers.23.mlp.gate",
+ "model.layers.24.mlp.gate",
+ "model.layers.25.mlp.gate",
+ "model.layers.26.mlp.gate",
+ "model.layers.27.mlp.gate",
+ "model.layers.28.mlp.gate",
+ "model.layers.29.mlp.gate",
+ "model.layers.3.mlp.gate",
+ "model.layers.30.mlp.gate",
+ "model.layers.31.mlp.gate",
+ "model.layers.32.mlp.gate",
+ "model.layers.33.mlp.gate",
+ "model.layers.34.mlp.gate",
+ "model.layers.35.mlp.gate",
+ "model.layers.36.mlp.gate",
+ "model.layers.37.mlp.gate",
+ "model.layers.38.mlp.gate",
+ "model.layers.39.mlp.gate",
+ "model.layers.4.mlp.gate",
+ "model.layers.40.mlp.gate",
+ "model.layers.41.mlp.gate",
+ "model.layers.42.mlp.gate",
+ "model.layers.43.mlp.gate",
+ "model.layers.44.mlp.gate",
+ "model.layers.45.mlp.gate",
+ "model.layers.46.mlp.gate",
+ "model.layers.47.mlp.gate",
+ "model.layers.5.mlp.gate",
+ "model.layers.6.mlp.gate",
+ "model.layers.7.mlp.gate",
+ "model.layers.8.mlp.gate",
+ "model.layers.9.mlp.gate",
+ "lm_head"
+ ],
+ "quant_algo": "NVFP4",
+ "kv_cache_scheme": {
+ "dynamic": false,
+ "num_bits": 8,
+ "type": "float"
+ },
+ "producer": {
+ "name": "modelopt",
+ "version": "0.33.1.dev3+ga65bc2f"
+ },
+ "quant_method": "modelopt"
+ }
+ }
generation_config.json ADDED
@@ -0,0 +1,13 @@
+ {
+ "do_sample": true,
+ "eos_token_id": [
+ 151645,
+ 151643
+ ],
+ "pad_token_id": 151643,
+ "repetition_penalty": 1.05,
+ "temperature": 0.7,
+ "top_k": 20,
+ "top_p": 0.8,
+ "transformers_version": "4.53.1"
+ }
hf_quant_config.json ADDED
@@ -0,0 +1,62 @@
+ {
+ "producer": {
+ "name": "modelopt",
+ "version": "0.33.1.dev3+ga65bc2f"
+ },
+ "quantization": {
+ "quant_algo": "NVFP4",
+ "kv_cache_quant_algo": "FP8",
+ "group_size": 16,
+ "exclude_modules": [
+ "model.layers.0.mlp.gate",
+ "model.layers.1.mlp.gate",
+ "model.layers.10.mlp.gate",
+ "model.layers.11.mlp.gate",
+ "model.layers.12.mlp.gate",
+ "model.layers.13.mlp.gate",
+ "model.layers.14.mlp.gate",
+ "model.layers.15.mlp.gate",
+ "model.layers.16.mlp.gate",
+ "model.layers.17.mlp.gate",
+ "model.layers.18.mlp.gate",
+ "model.layers.19.mlp.gate",
+ "model.layers.2.mlp.gate",
+ "model.layers.20.mlp.gate",
+ "model.layers.21.mlp.gate",
+ "model.layers.22.mlp.gate",
+ "model.layers.23.mlp.gate",
+ "model.layers.24.mlp.gate",
+ "model.layers.25.mlp.gate",
+ "model.layers.26.mlp.gate",
+ "model.layers.27.mlp.gate",
+ "model.layers.28.mlp.gate",
+ "model.layers.29.mlp.gate",
+ "model.layers.3.mlp.gate",
+ "model.layers.30.mlp.gate",
+ "model.layers.31.mlp.gate",
+ "model.layers.32.mlp.gate",
+ "model.layers.33.mlp.gate",
+ "model.layers.34.mlp.gate",
+ "model.layers.35.mlp.gate",
+ "model.layers.36.mlp.gate",
+ "model.layers.37.mlp.gate",
+ "model.layers.38.mlp.gate",
+ "model.layers.39.mlp.gate",
+ "model.layers.4.mlp.gate",
+ "model.layers.40.mlp.gate",
+ "model.layers.41.mlp.gate",
+ "model.layers.42.mlp.gate",
+ "model.layers.43.mlp.gate",
+ "model.layers.44.mlp.gate",
+ "model.layers.45.mlp.gate",
+ "model.layers.46.mlp.gate",
+ "model.layers.47.mlp.gate",
+ "model.layers.5.mlp.gate",
+ "model.layers.6.mlp.gate",
+ "model.layers.7.mlp.gate",
+ "model.layers.8.mlp.gate",
+ "model.layers.9.mlp.gate",
+ "lm_head"
+ ]
+ }
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model-00001-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:28093ed51163c2ed16d49b2c798374911eafa94159f6dbae212c51f977a2bc8b
+ size 5002180600
model-00002-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ed3ebbe1ea740f10c05b714d65cb134b4f9d1a0ed6abd540dc978f6789b3709d
+ size 5002610700
model-00003-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e885fe6bc262244f91b43e6aef3e938b20163a4615b6bbfc2478b56e39d28e3
+ size 5001922900
model-00004-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1942c8133715cc12564eb5066bbfcfc75ab233f2a325e95b43007741203bd5c3
+ size 3089614888
model.safetensors.index.json ADDED
The diff for this file is too large to render. See raw diff
 
special_tokens_map.json ADDED
@@ -0,0 +1,19 @@
+ {
+ "additional_special_tokens": [
+ "<|im_start|>",
+ "<|im_end|>",
+ "<|object_ref_start|>",
+ "<|object_ref_end|>",
+ "<|box_start|>",
+ "<|box_end|>",
+ "<|quad_start|>",
+ "<|quad_end|>",
+ "<|vision_start|>",
+ "<|vision_end|>",
+ "<|vision_pad|>",
+ "<|image_pad|>",
+ "<|video_pad|>"
+ ],
+ "eos_token": "<|endoftext|>",
+ "pad_token": "<|endoftext|>"
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aeb13307a71acd8fe81861d94ad54ab689df773318809eed3cbe794b4492dae4
+ size 11422654
tokenizer_config.json ADDED
@@ -0,0 +1,239 @@
+ {
+ "add_bos_token": false,
+ "add_prefix_space": false,
+ "added_tokens_decoder": {
+ "151643": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151644": {
+ "content": "<|im_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151645": {
+ "content": "<|im_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151646": {
+ "content": "<|object_ref_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151647": {
+ "content": "<|object_ref_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151648": {
+ "content": "<|box_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151649": {
+ "content": "<|box_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151650": {
+ "content": "<|quad_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151651": {
+ "content": "<|quad_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151652": {
+ "content": "<|vision_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151653": {
+ "content": "<|vision_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151654": {
+ "content": "<|vision_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151655": {
+ "content": "<|image_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151656": {
+ "content": "<|video_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151657": {
+ "content": "<tool_call>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151658": {
+ "content": "</tool_call>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151659": {
+ "content": "<|fim_prefix|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151660": {
+ "content": "<|fim_middle|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151661": {
+ "content": "<|fim_suffix|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151662": {
+ "content": "<|fim_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151663": {
+ "content": "<|repo_name|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151664": {
+ "content": "<|file_sep|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151665": {
+ "content": "<tool_response>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151666": {
+ "content": "</tool_response>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151667": {
+ "content": "<think>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151668": {
+ "content": "</think>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ }
+ },
+ "additional_special_tokens": [
+ "<|im_start|>",
+ "<|im_end|>",
+ "<|object_ref_start|>",
+ "<|object_ref_end|>",
+ "<|box_start|>",
+ "<|box_end|>",
+ "<|quad_start|>",
+ "<|quad_end|>",
+ "<|vision_start|>",
+ "<|vision_end|>",
+ "<|vision_pad|>",
+ "<|image_pad|>",
+ "<|video_pad|>"
+ ],
+ "bos_token": null,
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "<|endoftext|>",
+ "errors": "replace",
+ "extra_special_tokens": {},
+ "model_max_length": 1048576,
+ "pad_token": "<|endoftext|>",
+ "split_special_tokens": false,
+ "tokenizer_class": "Qwen2Tokenizer",
+ "unk_token": null
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff