In a sense, this means the training scenario is unrealistic: it does not match the situation the model faces at inference time. During training, the model is only ever conditioned on sequences of ground-truth tokens, but once deployed it must condition on its own previous outputs. As we shall see in the following discussion, this exposure bias may…
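The mismatch can be sketched with a toy example. Here a simple lookup table stands in for a learned next-token predictor (all names and the deliberate error after "sat" are hypothetical, for illustration only): under teacher forcing, every input comes from the ground-truth sequence, so a wrong prediction never contaminates later inputs; under free-running decoding, the model's own wrong output is fed back in and the rest of the sequence diverges.

```python
def predict_next(token):
    # Toy "model": a lookup table standing in for a learned predictor.
    # It is imperfect on purpose: after "sat" it predicts "in" instead of "on".
    table = {"<s>": "the", "the": "cat", "cat": "sat",
             "sat": "in", "in": "a", "a": "box"}
    return table.get(token, "<unk>")

ground_truth = ["<s>", "the", "cat", "sat", "on", "the", "mat"]

# Teacher forcing (training): each input is a ground-truth token, so the
# mistake after "sat" stays local -- the next input is still the true "on".
teacher_forced = [predict_next(tok) for tok in ground_truth[:-1]]

# Free-running decoding (inference): each input is the model's OWN previous
# output, so the wrong "in" feeds back and the sequence drifts off the
# ground-truth path entirely.
free_running = []
tok = "<s>"
for _ in range(len(ground_truth) - 1):
    tok = predict_next(tok)
    free_running.append(tok)

print(teacher_forced)  # mistake at one step, then back on ground truth
print(free_running)    # mistake compounds into a different continuation
```

The two loops differ only in where the next input comes from, which is exactly the gap that exposure bias names: the model is never trained on the distribution of prefixes it produces itself.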