Cross-lingually Initialized Gemma 3 1B IT

A cross-lingual version of Google's Gemma 3 1B Instruction-Tuned model with extended vocabulary and initialized embeddings for multilingual support.

Model Details

  • Base Model: google/gemma-3-1b-it
  • Model Type: Causal Language Model with Cross-lingual Initialization
  • Initialization Method: Cross-lingual embedding initialization using English token mappings
  • Extended Vocabulary: Additional tokens for multilingual support
  • Parameters: ~1.06B (BF16, safetensors)

Description

This model extends the original Gemma 3 1B IT model with:

  • Extended tokenizer vocabulary for additional language support
  • Cross-lingual embedding initialization, where new language tokens are initialized with the embeddings of semantically equivalent English tokens (sketched after this list)
  • Preserved model architecture and instruction-tuning capabilities
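The initialization procedure is not shown in code in this card, but the idea can be sketched as follows. This is a minimal illustration, not the exact script used to build this checkpoint; the new tokens and their English mappings are hypothetical placeholders, and averaging the English subword embeddings is just one simple choice when the equivalent word splits into several subwords.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("google/gemma-3-1b-it")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-1b-it")

# Hypothetical mapping: new-language token -> semantically equivalent English word
new_token_to_english = {"ನಮಸ್ಕಾರ": "hello", "ಪುಸ್ತಕ": "book"}

# 1. Extend the tokenizer and resize the embedding matrices
tokenizer.add_tokens(list(new_token_to_english.keys()))
model.resize_token_embeddings(len(tokenizer))

# 2. Copy (averaged) English embeddings into the rows of the new tokens
input_emb = model.get_input_embeddings().weight
output_emb = model.get_output_embeddings().weight  # LM head
with torch.no_grad():
    for new_tok, en_word in new_token_to_english.items():
        new_id = tokenizer.convert_tokens_to_ids(new_tok)
        en_ids = tokenizer(en_word, add_special_tokens=False)["input_ids"]
        input_emb[new_id] = input_emb[en_ids].mean(dim=0)
        output_emb[new_id] = output_emb[en_ids].mean(dim=0)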

⚠️ Important Note

This model has NOT been further pretrained after the token extension and embedding initialization. It is the base model with an extended vocabulary and initialized embeddings only; the new language tokens require additional pretraining or fine-tuning to perform well. The model serves as a starting point for multilingual adaptation rather than a ready-to-use multilingual model.

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("pavan-naik/gemma-3-1b-it-exp-init")
tokenizer = AutoTokenizer.from_pretrained("pavan-naik/gemma-3-1b-it-exp-init")

# Use like any other Gemma model
inputs = tokenizer("Your multilingual text here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Technical Details

  • Initialization Strategy: New language tokens initialized with embeddings from mapped English equivalents
  • Preserved Components: Original model weights, architecture, and instruction-following capabilities
  • Extended Components: Input embeddings and output projection layer (LM head); see the quick shape check below
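A quick way to confirm which components were extended is to compare the embedding and LM-head shapes against the tokenizer size (the exact numbers depend on how many tokens were added):

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("pavan-naik/gemma-3-1b-it-exp-init")
tokenizer = AutoTokenizer.from_pretrained("pavan-naik/gemma-3-1b-it-exp-init")

print(len(tokenizer))                              # extended vocabulary size
print(model.get_input_embeddings().weight.shape)   # (vocab_size, hidden_size)
print(model.get_output_embeddings().weight.shape)  # LM head rows match the vocabulary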

Intended Use

This model serves as a starting point for multilingual model development. It is designed for:

  • Further pretraining on multilingual corpora
  • Fine-tuning for specific multilingual tasks
  • Research into cross-lingual transfer learning

This model requires additional training before production use: the extended tokens have only been initialized, not trained on actual multilingual data. A minimal continued-pretraining setup is sketched below.
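As one example of the first use case above, a continued-pretraining run with the Hugging Face Trainer might look like the following sketch. The corpus file and hyperparameters are placeholders to adapt to your data and hardware, not recommendations from the model author.

from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "pavan-naik/gemma-3-1b-it-exp-init"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Hypothetical multilingual corpus: one document per line in a plain-text file
raw = load_dataset("text", data_files={"train": "multilingual_corpus.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # causal LM objective

args = TrainingArguments(
    output_dir="gemma-3-1b-it-multilingual-cpt",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=1,
    bf16=True,              # assumes bf16-capable hardware
    logging_steps=50,
)

Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator).train()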

Limitations

  • Requires additional training: New language tokens are only initialized, not trained on multilingual data
  • Not production-ready: This is a base model for further development, not a finished multilingual model
  • Performance: Extended tokens will have limited performance without additional pretraining/fine-tuning
  • Cross-lingual initialization: Provides a starting point but may not capture all linguistic nuances
  • Token mapping quality: Performance depends on the quality of English token mappings used during initialization