Update Model Card for NPUv0530
#3 by EnzymeZoo - opened

README.md CHANGED
@@ -8,9 +8,13 @@ pipeline_tag: text-to-image
 library_name: diffusers
 ---
 
-# stabilityai/stable-diffusion-3-medium - AMD Optimized ONNX
+# stabilityai/stable-diffusion-3-medium - AMD Optimized ONNX and Ryzen(TM) AI NPU
 
-This repository hosts the **AMD Optimized version of Stable Diffusion 3 Medium** created in collaboration with [AMD](https://huggingface.co/amd).
+This repository hosts the **AMD Optimized version of Stable Diffusion 3 Medium** and the **AMD Ryzen™ AI optimized version of SD 3 Medium**, created in collaboration with [AMD](https://huggingface.co/amd).
+
+The AMDGPU model is an AMD-optimized ONNX port of the Stable Diffusion 3 Medium model, offering [significantly higher inference speeds](https://stability.ai/news/stable-diffusion-now-optimized-for-amd-radeon-gpus) than the base model on compatible AMD hardware.
+
+The ONNX-ported Ryzen™ AI model is the world’s first Block FP16 model, with the UNet and VAE decoder entirely in Block FP16. Built for the AMD XDNA™ 2 based NPU, this model combines the accuracy of FP16 with the performance of INT8.
 
 ## Model Description
 Refer to the [Stable Diffusion 3 Medium Model card](https://huggingface.co/stabilityai/stable-diffusion-3-medium) for more details.
@@ -22,8 +26,9 @@ Refer to the [Stable Diffusion 3 Medium Model card](https://huggingface.co/stabi
 
 ## Running
 
-Use Amuse GUI application to run it: https://www.amuse-ai.com
-
+GPU: Use the Amuse GUI application to run it: https://www.amuse-ai.com/. Use the *_io32 variant to run with the Amuse application.
+
+NPU: On a compatible XDNA(TM) 2 NPU device, install and open the Amuse GUI and click Advanced Mode. Download SD 3 Medium AMDGPU from the Model Manager, click Image Generation, change the variant to Ryzen AI, and load.
 
 
 ## Inference Result
@@ -66,7 +71,8 @@ The model was not trained to be factual or true representations of people or eve
 
 ## **Safety**
 
-As part of our safety-by-design and responsible AI
+As part of our safety-by-design and responsible AI development approach, we prioritize integrity from the earliest stages of model development. We implement safeguards throughout the development process to help reduce the risk of misuse. While we’ve built in protections to mitigate certain harms, we encourage developers to test responsibly based on their intended use cases and apply additional mitigations as needed.
+
 For more about our approach to Safety, please visit our [Safety page](https://stability.ai/safety).
 
 ### **Integrity Evaluation**
@@ -75,7 +81,7 @@ Our integrity evaluation methods include structured evaluations and red-teaming
 
 ### **Risks identified and mitigations:**
 
-* Harmful content: We have used filtered data sets when training our models and implemented safeguards that attempt to strike the right balance between usefulness and preventing harm. However, this does not guarantee that all possible harmful content has been removed.
+* Harmful content: We have used filtered data sets when training our models and implemented safeguards that attempt to strike the right balance between usefulness and preventing harm. However, this does not guarantee that all possible harmful content has been removed. All developers and deployers should exercise caution and implement content safety guardrails based on their specific product policies and application use cases.
 * Misuse: Technical limitations and developer and end-user education can help mitigate against malicious applications of models. All users are required to adhere to our [Acceptable Use Policy](https://stability.ai/use-policy), including when applying fine-tuning and prompt engineering mechanisms. Please reference the Stability AI Acceptable Use Policy for information on violative uses of our products.
 * Privacy violations: Developers and deployers are encouraged to adhere to privacy regulations with techniques that respect data privacy.
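For context on the "Block FP16" claim in the diff: block floating-point formats store one shared exponent per small block of values, so each element needs only a narrow integer mantissa, which is how the format keeps FP16-like dynamic range at near-INT8 storage and compute cost. The sketch below is a toy illustration of that idea only; the block size and 8-bit mantissa width are assumptions for illustration, not AMD's actual Block FP16 format.

```python
import math

def bfp_quantize(block, mantissa_bits=8):
    """Encode a block of floats as one shared exponent plus integer mantissas."""
    max_mag = max(abs(x) for x in block)
    if max_mag == 0.0:
        return 0, [0] * len(block)
    # Shared exponent chosen from the largest magnitude in the block.
    shared_exp = math.floor(math.log2(max_mag))
    # Scale so every value fits in a signed `mantissa_bits`-bit integer.
    scale = 2.0 ** (shared_exp - (mantissa_bits - 2))
    lo, hi = -(2 ** (mantissa_bits - 1)), 2 ** (mantissa_bits - 1) - 1
    mantissas = [max(lo, min(hi, round(x / scale))) for x in block]
    return shared_exp, mantissas

def bfp_dequantize(shared_exp, mantissas, mantissa_bits=8):
    """Recover approximate floats from the shared exponent and mantissas."""
    scale = 2.0 ** (shared_exp - (mantissa_bits - 2))
    return [m * scale for m in mantissas]
```

The trade-off is visible in the round trip: values near the block maximum are recovered almost exactly, while much smaller values in the same block lose precision, which is why putting whole tensors (here, the UNet and VAE decoder weights) in a block format works best when nearby values have similar magnitudes.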