In the full paper, we also explore which parts of language are most important (spoiler: a little bit of everything) and how much language is needed for LSL to improve over models that don’t use language (spoiler: surprisingly little!).

Moving Forward

As NLP systems grow in their ability to understand and produce language, so too grows the potential for machine learning systems to learn from language to solve other challenging tasks. In the papers above, we’ve shown that deep neural language models can successfully learn from language explanations to improve generalization across a variety of vision and NLP tasks.

We think this is an exciting new avenue for training machine learning models, and similar ideas are already being explored in areas such as reinforcement learning (45). We envision a future where in order to solve a machine learning task, we no longer have to collect a large labeled dataset, but instead interact naturally and expressively with a model in the same way that humans have interacted with each other for millennia—through language.

Acknowledgments

Thanks to our coauthors (Pang Wei Koh, Percy Liang, and Noah Goodman), and to Nelson Liu, Pang Wei Koh, and the rest of the SAIL blog team for reviewing and publishing this blog post. This research was supported in part by the Facebook Fellowship (to Pang Wei Koh), the NSF Graduate Research Fellowship (to Jesse Mu), Toyota Research Institute, and the Office of Naval Research.
