arxiv:2304.06962

Prompt Engineering and Calibration for Zero-Shot Commonsense Reasoning

Published on Apr 14, 2023
Abstract

Investigating prompt engineering and calibration on smaller language models reveals mixed results, with joint effects generally negative across multiple commonsense reasoning tasks.

AI-generated summary

Prompt engineering and calibration are known to help large language models excel at reasoning tasks, including multiple-choice commonsense reasoning. From a practical perspective, we investigate and evaluate these strategies on smaller language models. Through experiments on five commonsense reasoning benchmarks, we find that each strategy favors certain models, but that their joint effects are mostly negative.
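To make the calibration strategy concrete, below is a minimal sketch of one widely used approach for multiple-choice tasks, contextual calibration: the model's score for each answer choice is divided by the score it assigns that choice given a content-free input (e.g. "N/A"), correcting for the model's prior bias toward certain choices. This is an illustrative assumption about the general technique, not necessarily the paper's exact method; the log-probability values below are hypothetical.

```python
import math

def calibrate(choice_logprobs, content_free_logprobs):
    """Contextual-calibration sketch (an assumption, not the paper's exact recipe):
    subtract each choice's content-free log-probability from its conditioned
    log-probability, then renormalize with a softmax."""
    scores = [lp - cf for lp, cf in zip(choice_logprobs, content_free_logprobs)]
    m = max(scores)                         # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Hypothetical log-probabilities for three answer choices A, B, C.
raw = [-1.0, -2.0, -3.0]    # model prefers A when conditioned on the question
bias = [-1.0, -3.0, -3.0]   # but A is also favored given a content-free prompt

print(calibrate(raw, bias))  # after debiasing, B becomes the top choice
```

In a real pipeline the log-probabilities would come from scoring each answer option with the language model; the calibration step itself is model-agnostic, which is what makes it easy to combine (for better or worse, per the paper's findings) with prompt engineering.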
