Ph.D. in Engineering, postdoctoral researcher, Associate Professor, and master's supervisor. He received joint master's training in Computer Systems Architecture at The University of Electro-Communications, Japan, earned his Ph.D. in Computer Software and Theory from Chongqing University, and completed postdoctoral research at the Control Science and Engineering postdoctoral station of Chongqing University. He is a member of the China Computer Federation (CCF) and a committee member of the Chongqing Digital Diagnosis and Treatment Special Committee. His research interests include natural language processing, computer vision, and large language models. He has led projects funded by the National Natural Science Foundation of China, the Natural Science Foundation of Chongqing, the China Postdoctoral Science Foundation, and the Fundamental Research Funds for the Central Universities, and has participated as a key researcher in National Key R&D Program projects and national major special projects of the Ministry of Science and Technology. In recent years he has published more than 240 deep-learning-related research papers in well-known domestic and international journals and conferences; his most-cited SCI paper has received 144 citations by others, with 508 total citations and an H-index of 10. He has been recognized as a Baidu PaddlePaddle Outstanding AI Lecturer and received the MOE-Huawei "Intelligent Base" Outstanding Teacher (栋梁之师) award, and he leads a Huawei "Intelligent Gold Course". Many of the master's students he has supervised have gone on to work at major internet companies such as Alibaba, Tencent, and Huawei.
1. National Natural Science Foundation of China (NSFC) Key Program Project, 82241059, 2.60 million CNY, 2023.01-2025.12, key participant
2. Sichuan Province Key R&D Program Project, 22ZDYF0318, 1.00 million CNY, 2022.01-2023.12, key participant
3. National Key R&D Program of the Ministry of Science and Technology (sub-project), 2019YFC0850104, 5.33 million CNY, 2019.11-2022.12, key participant
4. Chongqing Technology Innovation and Application Development General Project, 2022TIAD-GPX0195, 200,000 CNY, 2022.09-2024.08, Principal Investigator
5. Chongqing Technology Innovation and Application Development Special Key Project, cstc2019jscx-fxydX0088, 200,000 CNY, 2019.09-2021.09, key participant
6. NSFC General Program Project, 71471023, 615,000 CNY, 2015.01-2018.12, key participant
7. NSFC Young Scientists Fund Project, 61309013, 230,000 CNY, 2014.01-2016.12, Principal Investigator
8. Ministry of Education Humanities and Social Sciences Research General Project, 21YJAZH013, 100,000 CNY, 2021.09-2024.12, key participant
9. Chongqing Science and Technology Program, Basic and Frontier Research Project, cstc2014jcyjA40042, 50,000 CNY, 2014.08-2017.06, Principal Investigator
10. China Postdoctoral Science Foundation General Project, 20110490807, 30,000 CNY, 2011.05-2013.06, Principal Investigator
[1] Yuming Yang, Dongsheng Zou. AdaStyleSpeech: A Fast Stylized Synthesis Model based on Adaptive Instance Normalization[C]. 2024 IEEE International Conference on Multimedia and Expo (ICME).
[2] Yi Yu, Dongsheng Zou, Yuming Yang. Entity and Evidence Guided Attention for Document-Level Relation Extraction [C]. 2024 International Joint Conference on Neural Networks (IJCNN).
[3] Yi Yu, Dongsheng Zou, Xinyi Song. MRC-FEE: Machine Reading Comprehension for Chinese Financial Event Extraction [C]. 27th International Conference on Computer Supported Cooperative Work in Design (CSCWD), 2024.
[4] Yuming Yang, Dongsheng Zou, Xinyi Song, Xiaotong Zhang. DehazeDM: Image Dehazing via Patch Autoencoder Based on Diffusion Models. SMC 2023: 3783-3788.
[5] Xinyi Song, Dongsheng Zou, Yi Yu, Xiaotong Zhang. FW-ECPE: An Emotion-Cause Pair Extraction Model Based on Fusion Word Vectors. IJCNN 2023: 1-7.
[6] Hengyi Chen, Xiaohui Chen, Yi Chen, Chong Zhang, Zixin Sun, Jiaxi Mo, Yongzhong Wang, Jichun Yang, Dongsheng Zou, Yang Luo. High-fidelity imaging of intracellular microRNA via a bioorthogonal nanoprobe [J]. Analyst, 148(8): 1682-1693, 2023.
[7] Dongsheng Zou, Xiaotong Zhang, Xinyi Song, Yi Yu, Yuming Yang, Kang Xi. Multiway Bidirectional Attention and External Knowledge for Multiple-choice Reading Comprehension. SMC 2022: 694-699
[8] Xiaotong Zhang, Dongsheng Zou, Xinyi Song, Yi Yu. Emotion-Cause Pair Extraction Model Based on Fusion Word Vector. SMC 2022.
[9] Wei Tang, Dongsheng Zou, Su Yang, Jing Shi, Jingpei Dan, Guowu Song. A Two-Stage Approach for Automatic Liver Segmentation with Faster R-CNN and DeepLab [J]. Neural Computing and Applications, 32(11): 6769-6778, 2020.
[10] Lei Hu, Dongsheng Zou, Xiwang Guo, Liang Qi, Ying Tang, Haohao Song, Jieying Yuan. Four-way Bidirectional Attention for Multiple-choice Reading Comprehension. SMC 2021.
[11] Haohao Song, Dongsheng Zou, Weijia Li. Learning Discrete Sentence Representations via Construction & Decomposition [C]. International Conference on Neural Information Processing 2020.
[12] Haohao Song, Dongsheng Zou, Lei Hu, Jieying Yuan. Embedding Compression with Right Triangle Similarity Transformations [C]. International Conference on Artificial Neural Networks 2020.
[13] Wei Tang, Dongsheng Zou, Jing Shi. DSL: Automatic Liver Segmentation with Faster R-CNN and DeepLab [C]. International Conference on Artificial Neural Networks, 2018.
[14] Yinghao Li, Zhongshi He, Hao Zhu, Dongsheng Zou, Weiwei Zhang. A coarse-to-fine scheme for groupwise registration of multisensor images [J]. International Journal of Advanced Robotic Systems.
[15] Dongsheng Zou, Zhongshi He, Jingyuan He, Yuxian Xia. Supersecondary structure prediction using Chou’s pseudo amino acid composition [J]. Journal of Computational Chemistry, Vol.32, No.2, pp.271-278, 2011.
[16] Dongsheng Zou, Zhongshi He, Jingyuan He. β-hairpin prediction with quadratic discriminant analysis using diversity measure [J]. Journal of Computational Chemistry, Vol.30, No.14, pp.2277-2284, 2009.
[1] A multi-attribute speech synthesis method based on generative adversarial networks [P]. Chinese invention patent, CN202211094041.2.
[2] An implicit sentiment element extraction method based on supervised contrastive learning [P]. Chinese invention patent, CN202311672984.3.
[3] A method and system for entity-relation extraction from financial texts [P]. Chinese invention patent, CN202110855621.8.