Collections
Discover the best community collections!
Collections including paper arxiv:2204.02292

- A LoRA-Based Approach to Fine-Tuning LLMs for Educational Guidance in Resource-Constrained Settings
  Paper • 2504.15610 • Published • 1
- Train Small, Infer Large: Memory-Efficient LoRA Training for Large Language Models
  Paper • 2502.13533 • Published • 11
- LoRA-SP: Streamlined Partial Parameter Adaptation for Resource-Efficient Fine-Tuning of Large Language Models
  Paper • 2403.08822 • Published
- LoRA-Pro: Are Low-Rank Adapters Properly Optimized?
  Paper • 2407.18242 • Published

- Natural Logic-guided Autoregressive Multi-hop Document Retrieval for Fact Verification
  Paper • 2212.05276 • Published • 1
- Hybrid and Collaborative Passage Reranking
  Paper • 2305.09313 • Published • 1
- Parameter-Efficient Neural Reranking for Cross-Lingual and Multilingual Retrieval
  Paper • 2204.02292 • Published • 1
- Discrete Prompt Optimization via Constrained Generation for Zero-shot Re-ranker
  Paper • 2305.13729 • Published • 1

- LoftQ: LoRA-Fine-Tuning-Aware Quantization for Large Language Models
  Paper • 2310.08659 • Published • 28
- QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models
  Paper • 2309.14717 • Published • 44
- ModuLoRA: Finetuning 3-Bit LLMs on Consumer GPUs by Integrating with Modular Quantizers
  Paper • 2309.16119 • Published • 1
- LoRA ensembles for large language model fine-tuning
  Paper • 2310.00035 • Published • 2

- Dissecting In-Context Learning of Translations in GPTs
  Paper • 2310.15987 • Published • 6
- Monolingual or Multilingual Instruction Tuning: Which Makes a Better Alpaca
  Paper • 2309.08958 • Published • 2
- X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages
  Paper • 2305.04160 • Published • 2
- Ziya-VL: Bilingual Large Vision-Language Model via Multi-Task Instruction Tuning
  Paper • 2310.08166 • Published • 1

- LLaMA Beyond English: An Empirical Study on Language Capability Transfer
  Paper • 2401.01055 • Published • 56
- YAYI 2: Multilingual Open-Source Large Language Models
  Paper • 2312.14862 • Published • 15
- Order Matters in the Presence of Dataset Imbalance for Multilingual Learning
  Paper • 2312.06134 • Published • 3
- TaCo: Enhancing Cross-Lingual Transfer for Low-Resource Languages in LLMs through Translation-Assisted Chain-of-Thought Processes
  Paper • 2311.10797 • Published

- Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time
  Paper • 2310.17157 • Published • 14
- Dynamic Context Pruning for Efficient and Interpretable Autoregressive Transformers
  Paper • 2305.15805 • Published • 1
- Compress, Then Prompt: Improving Accuracy-Efficiency Trade-off of LLM Inference with Transferable Prompt
  Paper • 2305.11186 • Published • 1
- Composable Sparse Fine-Tuning for Cross-Lingual Transfer
  Paper • 2110.07560 • Published • 2

- Democratizing Reasoning Ability: Tailored Learning from Large Language Model
  Paper • 2310.13332 • Published • 16
- Teaching Language Models to Self-Improve through Interactive Demonstrations
  Paper • 2310.13522 • Published • 12
- Self-Convinced Prompting: Few-Shot Question Answering with Repeated Introspection
  Paper • 2310.05035 • Published • 1
- Tuna: Instruction Tuning using Feedback from Large Language Models
  Paper • 2310.13385 • Published • 10