Are we assuming the same length for every sentence in the position encoding?

https://github.com/dandelin/Dynamic-memory-networks-plus-Pytorch/blob/ad49955f907c03aade2f6c8ed13370ce7288d5a7/babi_main.py#L18

In the line linked above, every sentence's encoding is divided by the same number, `elen - 1`, regardless of each sentence's actual (unpadded) length.
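For context, here is a minimal sketch of that position-encoding scheme, i.e. the formula $l_{jk} = (1 - j/J) - (k/d)(1 - 2j/J)$ from Sukhbaatar et al. (2015), which DMN+ adopts. The tensor shape follows the docstring in the linked file, but the vectorized implementation and variable names are mine, not the repo's exact code. Note that both divisors (`slen - 1` and `elen - 1`) come from the padded tensor shape, so every sentence in the batch is weighted as if it had the same token length:

```python
import torch

def position_encoding(embedded_sentence):
    # embedded_sentence: (batch, n_sentences, n_tokens, embed_dim)
    # Weights from Sukhbaatar et al. (2015):
    #   l[j, k] = (1 - j/J) - (k/d) * (1 - 2j/J)
    # J (slen) is the *padded* token count shared by every sentence,
    # so all sentences are weighted with the same fixed divisors.
    _, _, slen, elen = embedded_sentence.size()

    j = torch.arange(slen, dtype=torch.float).unsqueeze(1) / (slen - 1)  # token position, normalized
    k = torch.arange(elen, dtype=torch.float).unsqueeze(0) / (elen - 1)  # embedding index, normalized
    l = (1 - j) - k * (1 - 2 * j)  # (slen, elen) weight matrix

    # Weight each token embedding, then sum over tokens -> one vector per sentence.
    weighted = embedded_sentence * l          # broadcasts over batch and sentence dims
    return weighted.sum(dim=2)                # (batch, n_sentences, embed_dim)
```

If this reading is right, shorter sentences padded up to `slen` still get weights computed for the full padded length, which seems to be exactly the assumption being questioned here.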