Collections
Collections including paper arxiv:2403.05530

- RAFT: Adapting Language Model to Domain Specific RAG
  Paper • 2403.10131 • Published • 73
- Enhancing Large Language Model Performance To Answer Questions and Extract Information More Accurately
  Paper • 2402.01722 • Published
- Fine Tuning vs. Retrieval Augmented Generation for Less Popular Knowledge
  Paper • 2403.01432 • Published • 2
- Instruction-tuned Language Models are Better Knowledge Learners
  Paper • 2402.12847 • Published • 27

- Qwen2.5 Technical Report
  Paper • 2412.15115 • Published • 374
- Qwen2.5-Coder Technical Report
  Paper • 2409.12186 • Published • 151
- Qwen2.5-Math Technical Report: Toward Mathematical Expert Model via Self-Improvement
  Paper • 2409.12122 • Published • 4
- Qwen2.5-VL Technical Report
  Paper • 2502.13923 • Published • 203

- SciLitLLM: How to Adapt LLMs for Scientific Literature Understanding
  Paper • 2408.15545 • Published • 38
- Controllable Text Generation for Large Language Models: A Survey
  Paper • 2408.12599 • Published • 66
- To Code, or Not To Code? Exploring Impact of Code in Pre-training
  Paper • 2408.10914 • Published • 44
- Automated Design of Agentic Systems
  Paper • 2408.08435 • Published • 41

- Attention Is All You Need
  Paper • 1706.03762 • Published • 83
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 21
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • 1910.01108 • Published • 16
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 16

- Can Large Language Models Understand Context?
  Paper • 2402.00858 • Published • 24
- OLMo: Accelerating the Science of Language Models
  Paper • 2402.00838 • Published • 84
- Self-Rewarding Language Models
  Paper • 2401.10020 • Published • 152
- SemScore: Automated Evaluation of Instruction-Tuned LLMs based on Semantic Textual Similarity
  Paper • 2401.17072 • Published • 24

- OLMo: Accelerating the Science of Language Models
  Paper • 2402.00838 • Published • 84
- Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
  Paper • 2403.05530 • Published • 67
- StarCoder: may the source be with you!
  Paper • 2305.06161 • Published • 31
- SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling
  Paper • 2312.15166 • Published • 59

- EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters
  Paper • 2402.04252 • Published • 29
- Vision Superalignment: Weak-to-Strong Generalization for Vision Foundation Models
  Paper • 2402.03749 • Published • 13
- ScreenAI: A Vision-Language Model for UI and Infographics Understanding
  Paper • 2402.04615 • Published • 45
- EfficientViT-SAM: Accelerated Segment Anything Model Without Performance Loss
  Paper • 2402.05008 • Published • 24