<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="description" content="Ronglai Zuo (左镕来)">
<!-- <meta name="viewport" content="width=device-width, initial-scale=1"> -->
<link rel="stylesheet" href="./files/jemdoc.css" type="text/css">
<link rel="shortcut icon" href="./files/ten_white.ico"/>
<title>Ronglai Zuo (左镕来)</title>
</head>
<body>
<div id="layout-content" style="margin-top:25px">
<table>
<tbody>
<tr>
<td width="670">
<div id="toptitle">
<h1>Ronglai Zuo (左镕来)</h1>
</div>
<h3 style="font-size:125% !important;">Research Associate</h3>
<p>
Huxley Building <br>
South Kensington Campus <br>
Imperial College London <br>
</p>
<p>
Email: <a href="mailto:r.zuo@imperial.ac.uk">r.zuo [at] imperial.ac.uk</a>; <a href="mailto:zrl2016ustc@outlook.com">zrl2016ustc [at] outlook.com</a>
</p>
<p>
<a href="./files/Ronglai_CV_hm.pdf">[CV]</a> <a href="https://scholar.google.com/citations?user=vyCvXx8AAAAJ&hl=en">[Google Scholar]</a> <a href="https://www.linkedin.com/in/%E9%95%95%E6%9D%A5-%E5%B7%A6-298108180/?locale=en_US">[LinkedIn]</a> <a href="https://x.com/Ron1404861985">[X]</a>
</p>
</td>
<td>
<img src="./files/ronglai_2.jpg" alt="Ronglai Zuo" width="200">
</td>
</tr>
</tbody>
</table>
<h2>Biography</h2>
<p>I am currently a Research Associate at Imperial College London, working with <a href="https://jiankangdeng.github.io/">Dr. Jiankang Deng</a> and <a href="https://scholar.google.com/citations?user=QKOH5iYAAAAJ&hl=en">Prof. Stefanos Zafeiriou</a>.
I completed my Ph.D. at the Hong Kong University of Science and Technology (HKUST) in 2024, under the supervision of <a href="http://home.cse.ust.hk/~mak/profile.html">Prof. Brian Mak</a>.
Before that, I received my B.Eng. degree from the Special Class for the Gifted Young at the University of Science and Technology of China (USTC) in 2020.
I was also a research intern at Microsoft Research Asia (MSRA), supervised by <a href="https://scholar.google.com/citations?user=-ncz2s8AAAAJ&hl=en">Fangyun Wei</a>.</p>
<p>My research focuses on sign language processing (recognition, translation, and generation). I am also interested in video understanding and multimodal learning.</p>
<!-- <p><font color="#FF0000">xxx</font></p> -->
<h2>News</h2>
<div style="height:200px;overflow:auto;background:#FFFFFF;">
<ul>
<li>
<p>[10/2024] Joined Imperial College London as a Research Associate!</p>
</li>
<li>
<p>[09/2024] One paper accepted by EMNLP 2024.</p>
</li>
<li>
<p>[08/2024] Successfully defended my Ph.D. thesis!</p>
</li>
<li>
<p>[07/2024] One paper accepted by ECCV 2024.</p>
</li>
<li>
<p>[02/2024] One paper accepted by LREC-COLING 2024.</p>
</li>
<li>
<p>[01/2024] One paper accepted by ACM TOMM.</p>
</li>
<li>
<p>[02/2023] One paper accepted by CVPR 2023.</p>
</li>
<li>
<p>[09/2022] One paper accepted by NeurIPS 2022.</p>
</li>
<li>
<p>[06/2022] One paper accepted by Interspeech 2022.</p>
</li>
<li>
<p>[04/2022] Started my internship at MSRA!</p>
</li>
<li>
<p>[03/2022] One paper accepted by CVPR 2022.</p>
</li>
<li>
<p>[12/2021] Passed my Ph.D. qualifying exam. Now I am a Ph.D. candidate!</p>
</li>
<li>
<p>[09/2020] Started my Ph.D. journey at HKUST!</p>
</li>
<li>
<p>[07/2020] Finished my undergraduate study at USTC. A memorable four years in Hefei!</p>
</li>
</ul>
</div>
<h2>Publications</h2>
<p>(* denotes co-first authors)</p>
<ul>
<li>
<a href='https://aclanthology.org/2024.emnlp-main.619/'>Towards Online Continuous Sign Language Recognition and Translation</a><br>
<strong><u>Ronglai Zuo</u></strong>, Fangyun Wei, and Brian Mak<br>
Conference on Empirical Methods in Natural Language Processing <strong>(EMNLP)</strong>, Miami, USA, 2024<br>
<a href="https://arxiv.org/pdf/2401.05336">[pdf]</a>
<a href="https://github.com/FangyunWei/SLRT">[code]</a>
</li>
<li>
<a href='https://www.ecva.net/papers/eccv_2024/papers_ECCV/html/6499_ECCV_2024_paper.php'>A Simple Baseline for Spoken Language to Sign Language Translation with 3D Avatars</a> <br>
<strong><u>Ronglai Zuo</u>*</strong>, Fangyun Wei*, Zenggui Chen, Brian Mak, Jiaolong Yang, and Xin Tong<br>
European Conference on Computer Vision <strong>(ECCV)</strong>, Milan, Italy, 2024, <strong><i>Oral</i></strong> <br>
<a href="https://arxiv.org/pdf/2401.04730.pdf">[pdf]</a>
<a href="https://github.com/FangyunWei/SLRT">[code]</a>
</li>
<li>
<a href='https://aclanthology.org/2024.lrec-main.55/'>A Hong Kong Sign Language Corpus Collected from Sign-interpreted TV News</a> <br>
Zhe Niu*, <strong><u>Ronglai Zuo</u>*</strong>, Brian Mak, and Fangyun Wei<br>
Joint International Conference on Computational Linguistics, Language Resources and Evaluation <strong>(LREC-COLING)</strong>, Turin, Italy, 2024, <strong><i>Oral</i></strong> <br>
<a href="https://arxiv.org/pdf/2405.00980">[pdf]</a>
<a href="https://tvb-hksl-news.github.io/">[dataset]</a>
</li>
<li>
<a href='https://dl.acm.org/doi/10.1145/3640815'>Improving Continuous Sign Language Recognition with Consistency Constraints and Signer Removal</a> <br>
<strong><u>Ronglai Zuo</u></strong> and Brian Mak<br>
ACM Transactions on Multimedia Computing, Communications and Applications <strong>(TOMM)</strong>, 2024<br>
<a href="https://arxiv.org/pdf/2212.13023.pdf">[pdf]</a>
<a href="https://github.com/2000ZRL/LCSA_C2SLR_SRM">[code]</a>
</li>
<li>
<a href='https://openaccess.thecvf.com/content/CVPR2023/html/Zuo_Natural_Language-Assisted_Sign_Language_Recognition_CVPR_2023_paper.html'>Natural Language-Assisted Sign Language Recognition</a> <br>
<strong><u>Ronglai Zuo</u></strong>, Fangyun Wei, and Brian Mak<br>
IEEE/CVF Conference on Computer Vision and Pattern Recognition <strong>(CVPR)</strong>, Vancouver, Canada, 2023 <br>
<a href="https://arxiv.org/pdf/2303.12080.pdf">[pdf]</a>
<a href="https://github.com/FangyunWei/SLRT">[code]</a>
</li>
<li>
<a href='https://papers.nips.cc/paper_files/paper/2022/hash/6cd3ac24cdb789beeaa9f7145670fcae-Abstract-Conference.html'>Two-Stream Network for Sign Language Recognition and Translation</a> <br>
Yutong Chen*, <strong><u>Ronglai Zuo</u>*</strong>, Fangyun Wei*, Yu Wu, Shujie Liu, and Brian Mak<br>
Advances in Neural Information Processing Systems <strong>(NeurIPS)</strong>, New Orleans, USA, 2022, <strong><i>Spotlight</i></strong> <br>
<a href="https://arxiv.org/pdf/2211.01367.pdf">[pdf]</a>
<a href="https://github.com/FangyunWei/SLRT">[code]</a>
</li>
<li>
<a href='https://openaccess.thecvf.com/content/CVPR2022/html/Zuo_C2SLR_Consistency-Enhanced_Continuous_Sign_Language_Recognition_CVPR_2022_paper.html'>C2SLR: Consistency-enhanced Continuous Sign Language Recognition</a> <br>
<strong><u>Ronglai Zuo</u></strong> and Brian Mak<br>
IEEE/CVF Conference on Computer Vision and Pattern Recognition <strong>(CVPR)</strong>, New Orleans, USA, 2022 <br>
<a href="https://openaccess.thecvf.com/content/CVPR2022/papers/Zuo_C2SLR_Consistency-Enhanced_Continuous_Sign_Language_Recognition_CVPR_2022_paper.pdf">[pdf]</a>
<a href="https://github.com/2000ZRL/LCSA_C2SLR_SRM">[code]</a>
</li>
<li>
<a href='https://www.isca-archive.org/interspeech_2022/zuo22_interspeech.html'>Local Context-aware Self-attention for Continuous Sign Language Recognition</a> <br>
<strong><u>Ronglai Zuo</u></strong> and Brian Mak <br>
Annual Conference of the International Speech Communication Association <strong>(Interspeech)</strong>, Incheon, Korea, 2022 <br>
<a href="https://www.isca-archive.org/interspeech_2022/zuo22_interspeech.pdf">[pdf]</a>
<a href="https://github.com/2000ZRL/LCSA_C2SLR_SRM">[code]</a>
</li>
</ul>
<!-- <h3 style="color:black; font-weight:normal">Preprints</h3>
<ul>
<li>
<a href='https://arxiv.org/abs/2401.05336'>Towards Online Sign Language Recognition and Translation</a><br>
<strong><u>Ronglai Zuo</u></strong>, Fangyun Wei, and Brian Mak<br>
Under Review, 2024<br>
<a href="https://arxiv.org/pdf/2401.05336.pdf">[pdf]</a>
<a href="https://github.com/FangyunWei/SLRT">[code]</a>
</li>
</ul> -->
<h2>Awards</h2>
<ul>
<li>
Stars of Tomorrow, Microsoft Research Asia, 2023
</li>
<li>
Outstanding Graduate, USTC, 2020
</li>
<li>
Outstanding Student, USTC, 2017-2019
</li>
</ul>
<h2>Invited Talks</h2>
<ul>
<li>
Vision-Based Sign Language Processing, DERI, Queen Mary University of London, 11/2024
</li>
<li>
Vision-Based Sign Language Processing, iBUG Group, Imperial College London, 01/2024
</li>
</ul>
<h2>Services</h2>
<ul>
<li>
Conference Reviewer: CVPR, ICCV, ECCV, ACCV, NeurIPS, ICLR, AAAI
</li>
<li>
Journal Reviewer: IJCV, TMM, TCSVT, THMS, PR, IPM
</li>
</ul>
<h2>Teaching</h2>
<ul>
<li>
TA in COMP2012 Object-Oriented Programming and Data Structures, Fall 2023
</li>
<li>
TA in COMP2011 Programming with C++, Spring 2021, Fall 2021
</li>
</ul>
<div id="footer">
<div id="footer-text">© 2024 Ronglai Zuo. Last updated: 11/2024.</div>
</div>
</body></html>