
[NLP Paper Review] Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context 논문 리뷰 / Transformer-XL

Rmsid01 2021. 5. 8. 09:48

This post contains only the slides (PPT) from a presentation given at an NLP paper study group.

- An explanatory write-up will be added later.

 

arxiv.org/abs/1901.02860

 

Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context

Transformers have a potential of learning longer-term dependency, but are limited by a fixed-length context in the setting of language modeling. We propose a novel neural architecture Transformer-XL that enables learning dependency beyond a fixed length without disrupting temporal coherence. It consists of a segment-level recurrence mechanism and a novel positional encoding scheme.

arxiv.org
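
Before the slides, here is a minimal PyTorch sketch of the segment-level recurrence idea mentioned in the abstract: hidden states from the previous segment are cached with a stop-gradient and reused as extra context for the current segment. The class name `RecurrentSegmentAttention` and all dimensions are made up for illustration, and the paper's relative positional encoding and causal masking are omitted; this is only a rough sketch of the caching idea, not the authors' implementation.

```python
import torch
import torch.nn as nn
from typing import Optional


class RecurrentSegmentAttention(nn.Module):
    """Rough sketch of Transformer-XL style segment-level recurrence:
    the previous segment's hidden states are cached (gradients stopped)
    and prepended to the keys/values of the current segment, so queries
    can attend beyond the current segment boundary. Relative positional
    encoding and causal masking are omitted for brevity."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor, memory: Optional[torch.Tensor] = None):
        # x: (batch, seg_len, d_model), memory: (batch, mem_len, d_model) or None
        if memory is not None:
            # Keys/values cover [cached previous segment ; current segment];
            # detach() plays the role of the stop-gradient in the paper.
            context = torch.cat([memory.detach(), x], dim=1)
        else:
            context = x
        # Queries come only from the current segment.
        out, _ = self.attn(query=x, key=context, value=context, need_weights=False)
        # Cache this segment's input hidden states as memory for the next segment.
        new_memory = x.detach()
        return out, new_memory


# Usage: walk over a long sequence segment by segment, carrying the memory along.
layer = RecurrentSegmentAttention(d_model=64, n_heads=4)
long_seq = torch.randn(2, 128, 64)            # (batch, total_len, d_model)
memory = None
for segment in long_seq.split(32, dim=1):     # fixed-length segments of 32 tokens
    out, memory = layer(segment, memory)
```

Because the memory is detached, each segment is still trained with a fixed-length attention span, but at evaluation time information can propagate across segment boundaries through the cached states.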

 

Paper presentation slides (PPT)