Collections
Discover the best community collections!
Collections including paper arxiv:2205.14135

- DeBERTa: Decoding-enhanced BERT with Disentangled Attention
  Paper • 2006.03654 • Published • 3
  BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 21
  RoBERTa: A Robustly Optimized BERT Pretraining Approach
  Paper • 1907.11692 • Published • 9
  Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 16
- A Survey on Inference Engines for Large Language Models: Perspectives on Optimization and Efficiency
  Paper • 2505.01658 • Published • 39
  FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
  Paper • 2205.14135 • Published • 15
  Reward Reasoning Model
  Paper • 2505.14674 • Published • 37
- Self-Play Preference Optimization for Language Model Alignment
  Paper • 2405.00675 • Published • 28
  FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
  Paper • 2205.14135 • Published • 15
  Attention Is All You Need
  Paper • 1706.03762 • Published • 83
  FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning
  Paper • 2307.08691 • Published • 9
- Attention Is All You Need
  Paper • 1706.03762 • Published • 83
  LLaMA: Open and Efficient Foundation Language Models
  Paper • 2302.13971 • Published • 17
  Efficient Tool Use with Chain-of-Abstraction Reasoning
  Paper • 2401.17464 • Published • 21
  MoMa: Efficient Early-Fusion Pre-training with Mixture of Modality-Aware Experts
  Paper • 2407.21770 • Published • 23
- FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
  Paper • 2205.14135 • Published • 15
  FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning
  Paper • 2307.08691 • Published • 9
  FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-precision
  Paper • 2407.08608 • Published • 1
  1.58-bit FLUX
  Paper • 2412.18653 • Published • 85
- FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
  Paper • 2205.14135 • Published • 15
  FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning
  Paper • 2307.08691 • Published • 9
  FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-precision
  Paper • 2407.08608 • Published • 1
  Fast Transformer Decoding: One Write-Head is All You Need
  Paper • 1911.02150 • Published • 8
- Simple linear attention language models balance the recall-throughput tradeoff
  Paper • 2402.18668 • Published • 21
  Linear Transformers with Learnable Kernel Functions are Better In-Context Models
  Paper • 2402.10644 • Published • 82
  Repeat After Me: Transformers are Better than State Space Models at Copying
  Paper • 2402.01032 • Published • 25
  Zoology: Measuring and Improving Recall in Efficient Language Models
  Paper • 2312.04927 • Published • 2