File list:
通过可逆神经网络学习解释的非耦合语义空间【英文版】.pdf
Resource description
English title: Learning Disentangled Semantic Spaces of Explanations via Invertible Neural Networks

Chinese abstract (translated): This paper proposes a method for transforming the latent space of a BERT-GPT2 autoencoder into a more separable semantic space, using a flow-based invertible neural network (INN). Experimental results show that this approach achieves better semantic disentanglement and controllability than recent state-of-the-art models.

English abstract: Disentangling sentence representations over continuous spaces can be a critical process in improving interpretability and semantic control by localising explicit generative factors. Such process confers to neural-based language models some of the advantages that are characteristic of symbolic models, w
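
A minimal sketch of the kind of flow-based invertible mapping the abstract describes, assuming a RealNVP-style affine coupling layer in PyTorch applied to an autoencoder latent vector. The class name AffineCoupling, the 768-dimensional latent, and the hidden width are illustrative assumptions, not the paper's actual architecture or code.

```python
# A minimal sketch, assuming a RealNVP-style affine coupling layer in PyTorch.
# The class name, latent dimension (768), and hidden width are illustrative
# assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Invertible affine coupling: one half of the latent is rescaled and
    shifted conditioned on the other half, so the map can be inverted exactly."""
    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        z1, z2 = z[:, :self.half], z[:, self.half:]
        log_s, t = self.net(z1).chunk(2, dim=-1)
        return torch.cat([z1, z2 * torch.exp(log_s) + t], dim=-1)

    def inverse(self, y: torch.Tensor) -> torch.Tensor:
        y1, y2 = y[:, :self.half], y[:, self.half:]
        log_s, t = self.net(y1).chunk(2, dim=-1)
        return torch.cat([y1, (y2 - t) * torch.exp(-log_s)], dim=-1)

# Map a stand-in for a sentence latent (e.g. from a BERT encoder) into the
# transformed space and back; invertibility means the original latent can be
# recovered for decoding (e.g. by a GPT2 decoder).
coupling = AffineCoupling(dim=768)
z = torch.randn(4, 768)          # batch of 4 latent vectors
s = coupling(z)                  # transformed ("semantic") space
z_rec = coupling.inverse(s)      # exact inverse up to floating-point error
print(torch.allclose(z, z_rec, atol=1e-5))
```

In practice a normalizing flow stacks several such coupling layers with permutations between them; this single layer only illustrates the invertibility property the abstract relies on for mapping latents to a more disentangled space and back.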