Breaking Writer's Block: Low-cost Fine-tuning of Natural Language Generation Models

Alexandre Duval, Thomas Lamson, Gaël de Léséleuc de Kérouara, Matthias Gallé

Demo Paper

Gather-1F: Apr 21 (13:00-15:00 UTC)


Abstract: It is standard procedure these days to solve Information Extraction tasks by fine-tuning large pre-trained language models. This is not the case for generation tasks, which rely on a variety of techniques for controlled language generation. In this paper, we describe a system that fine-tunes a natural language generation model to address writer's block. The fine-tuning changes the conditioning to include the right context in addition to the left context, as well as an optional list of entities, the size, the genre, and a summary of the paragraph the human author wishes to generate. Our proposed fine-tuning obtains excellent results, even with a small number of epochs and a total cost of USD 150. The system can be accessed as a web service, and all the code is released. A video showcasing the interface and the model is also available.
NOTE: The video may display the authors in a random order. The correct author list is at the top of this page.
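
To make the conditioning scheme from the abstract concrete, below is a minimal sketch of how the control fields (genre, size, summary, optional entities) and the left/right contexts might be serialized into a single training sequence for a causal language model. The special-token names, field order, and function name are illustrative assumptions, not the authors' released format.

    # Hypothetical sketch of the conditioning described in the abstract.
    # All special-token names and the field order are assumptions,
    # not the authors' actual data format.

    def build_training_example(left_context, right_context, target_paragraph,
                               entities=None, size="medium", genre="fantasy",
                               summary=""):
        """Serialize control fields and both contexts into one training string."""
        fields = [f"[GENRE] {genre}", f"[SIZE] {size}", f"[SUMMARY] {summary}"]
        if entities:  # the entity list is optional
            fields.append("[ENTITIES] " + "; ".join(entities))
        # The model is conditioned on the right context as well as the left,
        # so the paragraph to be generated is placed last.
        return (" ".join(fields)
                + f" [LEFT] {left_context}"
                + f" [RIGHT] {right_context}"
                + f" [GENERATE] {target_paragraph}")

    example = build_training_example(
        left_context="The knight reached the gate.",
        right_context="Inside, the feast had already begun.",
        target_paragraph="He pushed it open and stepped into the courtyard.",
        entities=["knight", "gate"],
        size="small",
        summary="The knight enters the castle.")

At fine-tuning time, strings like this would be fed to a standard causal-LM objective (for instance GPT-2 via Hugging Face Transformers), with the loss optionally restricted to the tokens after [GENERATE].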

Similar Papers

Changing the Mind of Transformers for Topically-Controllable Language Generation
Haw-Shiuan Chang, Jiaming Yuan, Mohit Iyyer, Andrew McCallum
DRAG: Director-Generator Language Modelling Framework for Non-Parallel Author Stylized Rewriting
Hrituraj Singh, Gaurav Verma, Aparna Garimella, Balaji Vasan Srinivasan
GCM: A Toolkit for Generating Synthetic Code-mixed Text
Mohd Sanad Zaki Rizvi, Anirudh Srinivasan, Tanuja Ganu, Monojit Choudhury, Sunayana Sitaram