VRAM usage with transformers
I conducted tests on Gemma 3n models using Transformers on an RTX 4090 GPU. The observed VRAM usage was as follows:
E2B: 11.733 GB
E4B: 16.229 GB
These results clearly exceed what a typical portable device can accommodate. The VRAM usage also varies little across modalities. This raises questions about the dynamic loading mechanism: is dynamic loading unsupported in Transformers, or does it only activate when VRAM becomes scarce?
The same behavior occurs on a Jetson AGX Orin.
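
For reference, here is a minimal sketch of how such a measurement could be taken. The checkpoint name and the AutoModelForImageTextToText class are assumptions, not details from the original report:

```python
# Minimal VRAM measurement sketch; checkpoint name and model class are assumed.
import torch
from transformers import AutoModelForImageTextToText

model_id = "google/gemma-3n-E2B-it"  # assumed checkpoint

model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype=torch.bfloat16
).to("cuda")

torch.cuda.synchronize()
print(f"allocated: {torch.cuda.memory_allocated() / 1024**3:.3f} GiB")
print(f"reserved:  {torch.cuda.memory_reserved() / 1024**3:.3f} GiB")
```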
The answer from ChatGPT (o3):
A: When Transformers ≥ 4.53 is used with Accelerate, PLE caching and offloading of the vision/audio weights are enabled automatically. The deciding factor is whether an OOM is likely:
If GPU memory is plentiful (as on an RTX 4090), Accelerate decides it is safe to place all weights on the GPU at once, so you see 11–16 GB.
Use device_map="auto" with max_memory={'cuda:0': '3GiB'}, or add 4-bit quantization, to force the PLE and related weights back onto the CPU; the VRAM usage of E2B/E4B then drops to the official 2 GB / 3 GB (see the sketch below).
To reduce it further, use Mix-n-Match to carve out smaller sub-models, or exclude the vision/audio weights.
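
The two workarounds mentioned in the answer could look roughly like the sketch below, under the same assumptions about the checkpoint and model class as above. Note that the documented max_memory format uses the GPU index and "cpu" as keys; the memory caps and the resulting VRAM figures here are assumptions, not measured results.

```python
# Sketch of the suggested workarounds; checkpoint name, memory caps and the
# expected savings are assumptions, not verified measurements.
import torch
from transformers import AutoModelForImageTextToText, BitsAndBytesConfig

model_id = "google/gemma-3n-E2B-it"  # assumed checkpoint

# Option 1: cap the GPU allocation so Accelerate offloads the rest to CPU RAM.
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    max_memory={0: "3GiB", "cpu": "48GiB"},  # keys: GPU index and "cpu"
)

del model
torch.cuda.empty_cache()

# Option 2: 4-bit quantization (requires the bitsandbytes package).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model_4bit = AutoModelForImageTextToText.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

print(f"allocated: {torch.cuda.memory_allocated() / 1024**3:.3f} GiB")
```

Mix-n-Match sub-model extraction is not shown, since the answer does not name a concrete API for it in Transformers.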