    From Multilingual to Multimodal Processing
    Published: 2019-12-26

    Lecture Topic

    From Multilingual to Multimodal Processing

    Speaker and Biography

    Chenhui Chu received his B.S. in Software Engineering from Chongqing University in 2008, and his M.S. and Ph.D. in Informatics from Kyoto University in 2012 and 2015, respectively. He is currently a research assistant professor at Osaka University. His research won the MSRA Collaborative Research 2019 grant award, the 2018 AAMT Nagao Award, and the CICLing 2014 Best Student Paper Award. He serves on the editorial boards of the Journal of Natural Language Processing and the Journal of Information Processing, and is a steering committee member of the Young Researcher Association for NLP Studies. His research interests center on natural language processing, particularly machine translation and language-and-vision understanding.

    Abstract

    In this talk, I will introduce three of our recent works, covering topics from multilingual to multimodal processing. The first work addresses how to exploit multilingualism for low-resource neural machine translation. The second work identifies visually grounded paraphrases from image-and-language multimodal data. The last work explores knowledge for visual question answering in videos. Throughout the talk, I would like to discuss the research challenges and opportunities in multilingual and multimodal processing.
