Papers
arxiv:2102.09777

Progressive Transformer-Based Generation of Radiology Reports

Published on Feb 19, 2021
Abstract

A consecutive image-to-text-to-text generation framework using transformers achieves state-of-the-art results in radiology report generation by breaking the process into two steps.

AI-generated summary

Inspired by Curriculum Learning, we propose a consecutive (i.e., image-to-text-to-text) generation framework that divides the problem of radiology report generation into two steps. Rather than generating the full radiology report from the image at once, the model first generates global concepts from the image and then reforms them into finer, coherent text using a transformer architecture. Each step follows the transformer-based sequence-to-sequence paradigm. We improve upon the state of the art on two benchmark datasets.
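The two-step pipeline can be sketched as follows. This is an illustrative mock-up, not the authors' code: in the paper, both stages are transformer-based sequence-to-sequence models, whereas here each stage is a hypothetical stand-in function so the control flow is clear.

```python
# Sketch of the consecutive image-to-text-to-text pipeline from the
# abstract. Both stages below are hypothetical placeholders; the paper
# implements each as a transformer seq2seq model.

def generate_concepts(image_features):
    """Step 1 (image-to-text): map image features to global concepts.

    Stand-in: pick a canned concept list keyed on the dominant
    feature (hypothetical feature names and concept bank)."""
    concept_bank = {
        "opacity": ["opacity", "right lower lobe"],
        "normal": ["lungs clear", "no effusion"],
    }
    dominant = max(image_features, key=image_features.get)
    return concept_bank.get(dominant, ["no findings"])

def concepts_to_report(concepts):
    """Step 2 (text-to-text): reform concepts into coherent report text.

    Stand-in: join the concept tokens into a single sentence."""
    return "Findings: " + "; ".join(concepts) + "."

def generate_report(image_features):
    """Full pipeline: image -> global concepts -> final report."""
    return concepts_to_report(generate_concepts(image_features))

print(generate_report({"opacity": 0.9, "normal": 0.1}))
```

The design point the sketch illustrates is the intermediate concept sequence: because the final report is conditioned on concepts rather than directly on the image, the second stage is a pure text-to-text problem, which is what lets each step use the same sequence-to-sequence paradigm.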

