<!DOCTYPE HTML>
<!--
Introspect by TEMPLATED
templated.co @templatedco
Released for free under the Creative Commons Attribution 3.0 license (templated.co/license)
-->
<html>
<head>
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-34679733-4"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-34679733-4');
</script>
<title>INK Research Lab - USC Computer Science</title>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="INK Research Lab @ USC">
<meta name="author" content="INK Research Lab @ USC">
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no" />
<link rel="stylesheet" href="assets/css/main.css" />
<link rel="apple-touch-icon" sizes="180x180" href="/apple-touch-icon.png">
<link rel="icon" type="image/png" sizes="32x32" href="/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="16x16" href="/favicon-16x16.png">
<link rel="manifest" href="/site.webmanifest">
<link rel="mask-icon" href="/safari-pinned-tab.svg" color="#5bbad5">
<meta name="msapplication-TileColor" content="#da532c">
<meta name="theme-color" content="#ffffff">
</head>
<body>
<!-- Header -->
<header id="header">
<div class="inner">
<a href="index.html" class="logo"> <img width="55" style="padding: 0.5em 0 0 0" src="images/logo.png"> </a>
<nav id="nav">
<a style="color:#d5d5d5" onMouseOver="this.style.color='#FF0433'" onMouseOut="this.style.color='#d5d5d5'" href="index.html">Home</a>
<a style="color:#d5d5d5" onMouseOver="this.style.color='#FF0433'" onMouseOut="this.style.color='#d5d5d5'" href="people.html">People</a>
<a style="color:#FF0433" onMouseOver="this.style.color='#FF0433'" onMouseOut="this.style.color='#d5d5d5'" href="research.html"><u>Research</u></a>
<a style="color:#d5d5d5" onMouseOver="this.style.color='#FF0433'" onMouseOut="this.style.color='#d5d5d5'" href="publications.html">Publications</a>
<a style="color:#d5d5d5;" onMouseOver="this.style.color='#FF0433'" onMouseOut="this.style.color='#d5d5d5'" href="software.html">Software</a>
<a style="color:#d5d5d5" onMouseOver="this.style.color='#FF0433'" onMouseOut="this.style.color='#d5d5d5'" href="sponsors.html">Sponsors</a>
<a style="color:#d5d5d5" onMouseOver="this.style.color='#FF0433'" onMouseOut="this.style.color='#d5d5d5'" href="teaching.html">Teaching</a>
<a style="color:#d5d5d5" onMouseOver="this.style.color='#FF0433'" onMouseOut="this.style.color='#d5d5d5'" href="contact.html">Join Us</a>
</nav>
<div class="inner">
<img class="usclogo" src="images/usc-shield-name-white.png" >
</div>
</div>
</header>
<a href="#menu" class="navPanelToggle"><span class="fa fa-bars"></span></a>
<!-- Banner -->
<!-- <section id="banner" style="height:10pt">
<div class="inner" style="height:10pt">
<h1 style="padding: -10 0 0 0">INK Lab @ USC</h1>
</div>
</section> -->
<!-- One -->
<!-- Two -->
<section id="two" style="background:white">
<!-- <div class="inner">
<h2>Publications @ INK</h2> <br><br><br>
</div> -->
<div class="inner">
<div class="content">
<h2 style="margin: 0em 0em 0em 0em">Learning from Distant, High-level Human Supervision</h2>
<hr style="margin: 0em 0em 1em 0em">
<div style="margin-left: 0" class="row">
<div class="6u 12u$(xsmall)">
<p>State-of-the-art neural models have achieved impressive results on a range of NLP tasks, but they remain data-hungry to build: training (or fine-tuning) them for a specific task or domain may require hundreds of thousands of labeled examples, which places a heavy labor and time burden on manual data annotation. Going beyond the standard instance-label training design, we are developing next-generation training paradigms for building neural NLP systems. The key ideas are to translate high-level human supervision into machine-executable, modularized programs for model training, and to leverage pre-existing knowledge resources for automatic data annotation. We focus on building new datasets and algorithms for digesting high-level human supervision and exploiting distant supervision, in order to accelerate model construction and improve the label efficiency of current NLP systems.</p>
<img style="max-width:100%; border:1px solid #ddd; padding: 0;" src="images/paper_gists/treenet.gif"/>
</div>
<div class="6u$ 12u$(xsmall)">
<ul style="font-size:16px; line-height:25px">
<p><b><a href="https://openreview.net/forum?id=rJlUt0EYwS" target="_blank">Learning from Explanations with Neural Module Execution Tree</a></b><br />
Ziqi Wang*, Yujia Qin*, Wenxuan Zhou, Jun Yan, Qinyuan Ye, Leonardo Neves, Zhiyuan Liu, Xiang Ren. <b>ICLR</b> 2020.
<a class="button pub" href="http://inklab.usc.edu/project-NExT/" target="_blank">Project</a>
<a class="button pub" href="https://github.com/INK-USC/NExT" target="_blank">Github</a>
</p>
<p><b><a href="https://arxiv.org/abs/1910.04289" target="_blank">Learning to Contextually Aggregate Multi-Source Supervision for Sequence Labeling</a></b><br />
Ouyu Lan, Xiao Huang, Bill Yuchen Lin, He Jiang, Liyuan Liu, Xiang Ren. <b>ACL</b> 2020
<a class="button pub" href="https://github.com/INK-USC/ConNet" target="_blank">Code</a>
</p>
<p><b><a href="https://arxiv.org/abs/2004.07493" target="_blank">TriggerNER: Learning with Entity Triggers as Explanation for Named Entity Recognition</a></b>.<br>
Bill Yuchen Lin*, Dong-Ho Lee*, Ming Shen, Xiao Huang, Ryan Moreno, Prashant Shiralkar, and Xiang Ren. <b>ACL</b>, 2020. <a class="button pub" href="https://github.com/INK-USC/TriggerNER" target="_blank">Github</a><br/>
</p>
<p><b><a href="https://arxiv.org/abs/1908.10383">LEAN-LIFE: A Label-Efficient Annotation Framework Towards Learning from Annotator Explanation</a></b><br />
Dong-Ho Lee*, Rahul Khanna*, Bill Yuchen Lin, Seyeon Lee, Qinyuan Ye, Elizabeth Boschee, Leonardo Neves and Xiang Ren. <b>ACL 2020 (demo)</b>
<a class="button pub" href="http://inklab.usc.edu/leanlife/" target="_blank">Project</a>
</p>
<p><b><a href="https://arxiv.org/abs/1909.02177">NERO: A Neural Rule Grounding Framework for Label-Efficient Relation Extraction</a></b><br />
Wenxuan Zhou, Hongtao Lin, Bill Yuchen Lin, Ziqi Wang, Junyi Du, Leonardo Neves, Xiang Ren. <b>The Web Conference</b> 2020.
<a class="button pub" href="https://github.com/INK-USC/NERO" target="_blank">Github</a>
</p>
<p><b><a href="acl19_alpaca.pdf" target="_blank">AlpacaTag: An Active Learning-based Crowd Annotation Framework for Sequence Tagging</a></b><br />
Bill Yuchen Lin*, Dong-Ho Lee*, Frank F. Xu, Ouyu Lan, Xiang Ren. <b>ACL</b> 2019 (System Demo).
<a class="button pub" href="http://inklab.usc.edu/AlpacaTag" target="_blank">Project</a> |
<a class="button pub" href="https://github.com/INK-USC/AlpacaTag/wiki" target="_blank">Wiki</a> |
<a class="button pub" href="https://github.com/INK-USC/AlpacaTag" target="_blank">Github</a> |
<a class="button pub" href="http://ink-ron.usc.edu:22033/static/file/poster.pdf" target="_blank">Poster</a>
</p>
<p><b><a href="https://arxiv.org/abs/1707.00166" target="_blank">Heterogeneous Supervision for Relation Extraction: A Representation Learning Approach</a></b><br />
Liyuan Liu*, Xiang Ren*, Qi Zhu, Shi Zhi, Huan Gui, Heng Ji, Jiawei Han. <b>EMNLP</b> 2017.
<a class="button pub" href="https://github.com/LiyuanLucasLiu/ReHession" target="_blank">Github</a>
<a class="button pub" href="https://liyuanlucasliu.github.io/ReHession/" target="_blank">Project</a>
</p>
</ul>
</div>
</div>
<h2 style="margin: 0em 0em 0em 0em">Common Sense Reasoning for Artificial General Intelligence</h2>
<hr style="margin: 0em 0em 1em 0em">
<div style="margin-left: 0" class="row">
<div class="6u 12u$(xsmall)">
<p>Humans rely on commonsense knowledge to make decisions in everyday situations, while even state-of-the-art AI models can make wrong decisions because they lack commonsense reasoning (CSR) ability. To teach machines to reason with common sense the way humans do, we have been developing new reasoning methods and benchmark datasets for CSR. For the multiple-choice reasoning setting, we have focused on knowledge-aware methods that exploit commonsense knowledge graphs with graph neural networks. We have also been studying commonsense reasoning in generative and open-ended settings, which are closer to realistic applications (e.g., dialogue systems and search engines). Beyond the language modality, we are also studying CSR in multi-modal environments (e.g., language + vision). We hope our research in commonsense reasoning can provide fundamental building blocks for future Artificial General Intelligence (AGI) systems. </p>
<img style="max-width:100%; border:1px solid #ddd; padding: 0;" src="images/paper_gists/mcs.png"/>
</div>
<div class="6u$ 12u$(xsmall)">
<ul style="font-size:16px; line-height:25px">
<p>
<a class="pub_title" href="https://arxiv.org/abs/2011.07956" target="_blank"><b>Pre-training Text-to-Text Transformers for Concept-centric Common Sense</b></a>
<br />Wangchunshu Zhou*, Dong-Ho Lee*, Ravi Kiran Selvam, Seyeon Lee, Bill Yuchen Lin, Xiang Ren
<b class="conf_name">ICLR</b> 2021
</p>
<p>
<a class="pub_title" href="https://arxiv.org/abs/1911.03705" target="_blank">
<b>CommonGen: A Constrained Text Generation Challenge for Generative Commonsense
Reasoning</b></a>
<br />Bill Yuchen Lin, Ming Shen, Pei Zhou, Chandra Bhagavatula, Yejin Choi, Xiang Ren
<b class="conf_name">EMNLP</b>
2020 (<i>Findings</i>).
<a class="button pub" href="http://inklab.usc.edu/CommonGen/"
target="_blank">
Project</a> <a class="button pub"
href="http://inklab.usc.edu/CommonGen/" target="_blank">
Data</a> <a class="button pub"
href="https://huggingface.co/datasets/common_gen" target="_blank">Huggingface</a> <a
class="button pub"
href="https://inklab.usc.edu/CommonGen/leaderboard.html" target="_blank">
Leaderboard</a> <a class="button pub code label label-info"
href="https://github.com/INK-USC/CommonGen" target="_blank">Code</a>
<font>
Media coverage</font>: <a class=""
href="https://www.theregister.com/2020/11/20/machine_learning_language/"
target="_blank">The Register</a>, <a class=""
href="https://techxplore.com/news/2020-11-reveals-ai-lacks-common.html"
target="_blank">Tech Xplore</a>, <a class=""
href="https://www.techzine.eu/news/trends/52449/study-finds-that-an-ais-machine-language-still-lacks-common-sense/#:~:text=Assistant%20Professor%20Xiang%20Ren%20and,make%20sense%20to%20a%20human."
target="_blank">Techzine</a>, <a class=""
href="https://www.radio.com/fm1019/news/ai-still-lacks-common-sense-according-to-new-research"
target="_blank">Radio.com</a>, <a class=""
href="https://www.sciencedaily.com/releases/2020/11/201118141702.htm?"
target="_blank">ScienceDaily</a>
<meta
title="CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning"
venue="EMNLP" year="2020" />
</p>
<p>
<a class="pub_title" href="https://arxiv.org/abs/2005.00683" target="_blank"> <b>Birds have four legs?! NumerSense:
Probing Numerical Commonsense Knowledge of Pre-trained Language Models </b></a>
Bill Yuchen Lin, Seyeon Lee, Rahul Khanna, Xiang Ren <b class="conf_name">EMNLP</b>
2020 <a class="button pub project label label-info" href="https://inklab.usc.edu/NumerSense/"
target="_blank">Project</a> <a class="button pub misc label label-info"
href="https://github.com/INK-USC/NumerSense/tree/master/data" target="_blank">Data</a> <a
class="button pub misc label label-info" href="https://inklab.usc.edu/NumerSense/#exp"
target="_blank">Leaderboard</a> <a class="button pub code label label-info"
href="https://github.com/INK-USC/NumerSense" target="_blank">Code</a>
<meta
title="Birds have four legs?! NumerSense: Probing Numerical Commonsense Knowledge of Pre-trained Language Models"
venue="EMNLP" year="2020" />
</p>
<p><b>
<a class="pub_title" href="https://yuchenlin.xyz/opencsr_naacl21_draft.pdf" target="_blank">Differentiable
Open-Ended Commonsense Reasoning</a> </b>
<br />
Bill Yuchen Lin, Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Xiang Ren, William W. Cohen
<b class="conf_name">NAACL</b> 2021
<meta title="Differentiable Open-Ended Commonsense Reasoning" venue="NAACL" year="2021" />
</p>
<p>
<b><a class="pub_title" href="https://arxiv.org/abs/2005.00646" target="_blank">Scalable Multi-Hop
Relational Reasoning for Knowledge-Aware Question Answering</a> </b>
<br>Yanlin Feng*, Xinyue Chen*, Bill Yuchen Lin, Peifeng Wang, Jun Yan, Xiang
Ren <b class="conf_name">EMNLP</b>
2020
<a class="button pub code label label-info" href="https://github.com/INK-USC/MHGRN"
target="_blank">Code</a>
<meta title="Scalable Multi-Hop Relational Reasoning for Knowledge-Aware Question Answering"
venue="EMNLP" year="2020" /> </p>
<p>
<b><a class="pub_title" href="https://arxiv.org/abs/2005.00691" target="_blank">Connecting the Dots: A Knowledgeable
Path Generator for Commonsense Question Answering</a></b>
<br />Peifeng Wang, Nanyun Peng, Pedro Szekely, Xiang Ren <br /><b class="conf_name">EMNLP</b>
2020 (<i>Findings</i>) <br /> <a class="button pub project label label-info" href="https://wangpf3.github.io/pathgen-project-page/"
target="_blank">Project</a> <a class="button pub misc label label-info"
href="https://github.com/wangpf3/Commonsense-Path-Generator/blob/main/A-Commonsense-Path-Generator-for-Connecting-Entities.ipynb" target="_blank">Notebook Tutorial</a> <a class="button pub code label label-info"
href="https://github.com/wangpf3/Commonsense-Path-Generator" target="_blank">Code</a> <a class="button pub misc label label-info"
href="https://drive.google.com/file/d/1dQNxyiP4g4pdFQD6EPMQdzNow9sQevqD/view" target="_blank">Model checkpoints</a>
</p>
</ul>
</div>
</div>
<h2 style="margin: 0em 0em 0em 0em">Learning with Structured Inductive Biases</h2>
<hr style="margin: 0em 0em 1em 0em">
<div style="margin-left: 0" class="row">
<div class="6u 12u$(xsmall)">
<p>Deep neural networks have demonstrated a strong capability to fit large datasets in order to master a task, but at the same time they show poor generalization in terms of task/domain transferability. One main reason is that the common mechanisms shared across tasks (i.e., inductive biases), such as model components and constraints, are not explicitly specified in the model architectures. We are exploring various ways of designing structural inductive biases that are task-general and human-readable, and developing novel model architectures and learning algorithms to impose such inductive biases. This will yield NLP systems that work effectively in low-data regimes while demonstrating good task/domain transferability.
</p>
<img style="max-width:100%; border:1px solid #ddd; padding: 0;" src="https://github.com/INK-USC/KagNet/raw/master/figures/kagnet.png"/>
</div>
<div class="6u$ 12u$(xsmall)">
<ul style="font-size:16px; line-height:25px">
<p><b><a href="https://arxiv.org/abs/2005.02439">Contextualizing Hate Speech Classifiers with Post-hoc Explanation</a></b><br />
Brendan Kennedy*, Xisen Jin*, Aida Mostafazadeh Davani, Morteza Dehghani and Xiang Ren. <b>ACL 2020</b>.
<a class="button pub" href="https://inklab.usc.edu/contextualize-hate-speech/" target="_blank">Project</a>
<a class="button pub" href="https://github.com/BrendanKennedy/contextualizing-hate-speech-models-with-explanations" target="_blank">Github</a>
<br />
</p>
<p><b><a href="papers/emnlp19_kagnet.pdf" target="_blank">KagNet: Knowledge-Aware Graph Networks for Commonsense Reasoning</a></b><br />
Bill Yuchen Lin, Xinyue Chen, Jamin Chen, Xiang Ren. <b>EMNLP</b> 2019 (long).
<a class="button pub" href="https://github.com/INK-USC/KagNet" target="_blank">Github</a>
</p>
<p><b><a href="https://arxiv.org/abs/1911.03705" target="_blank">CommonGen: A Constrained Text Generation Dataset Towards Generative Commonsense Reasoning</a></b><br />
Bill Yuchen Lin, Ming Shen, Yu Xing, Pei Zhou, Xiang Ren. <b>AKBC</b> 2020.
<a class="button pub" href="http://inklab.usc.edu/CommonGen/" target="_blank">Leaderboard</a>
<a class="button pub" href="https://github.com/INK-USC/CommonGen" target="_blank">Github</a>
</p>
<p><b><a href="https://openreview.net/forum?id=rJlUt0EYwS" target="_blank">Learning from Explanations with Neural Execution Tree</a></b><br />
Ziqi Wang*, Yujia Qin*, Wenxuan Zhou, Jun Yan, Qinyuan Ye, Leonardo Neves, Zhiyuan Liu, Xiang Ren. <b>ICLR</b> 2020 (poster).
</p>
<p><b><a href="https://openreview.net/forum?id=BkxRRkSKwr" target="_blank">Towards Hierarchical Importance Attribution: Explaining Compositional Semantics for Neural Sequence Models</a></b><br />
Xisen Jin, Zhongyu Wei, Junyi Du, Xiangyang Xue, Xiang Ren. <b>ICLR 2020 (spotlight).</b>
<a class="button pub" href="https://inklab.usc.edu/hiexpl" target="_blank">Project</a> |
<a class="button pub" href="https://github.com/INK-USC/hierarchical-explanation" target="_blank">Github</a>
</p>
</ul>
</div>
</div>
<br> <br>
<h2 style="margin: 0em 0em 0em 0em">Knowledge Reasoning over Heterogeneous Data</h2>
<hr style="margin: 0em 0em 1em 0em">
<div style="margin-left: 0" class="row">
<div class="6u 12u$(xsmall)">
<p>Rule-based symbolic reasoning systems offer precise grounding and induction, but they fall short at fuzzy matching and handling uncertainty. In contrast, embedding-based reasoning methods follow the data-driven machine learning paradigm and can fit an effective model given large amounts of data, yet they lack good generalization. We are working on neural-symbolic reasoning methods that combine fuzzy reasoning with good generalization, and on extending the reasoning target from static, graph-structured data to heterogeneous sources such as time-variant graph structures and unstructured text.
</p>
<img style="max-width:100%; border:1px solid #ddd; padding: 0;" src="images/paper_gists/cpl.gif"/>
</div>
<div class="6u$ 12u$(xsmall)">
<ul style="font-size:16px; line-height:25px">
<p><b><a href="https://arxiv.org/abs/1909.00230" target="_blank">Collaborative Policy Learning for Open Knowledge Graph Reasoning</a></b><br />
Cong Fu, Tong Chen, Meng Qu, Woojeong Jin, Xiang Ren. <b>EMNLP</b> 2019.
<a class="button pub" href="https://github.com/INK-USC/CPL" target="_blank">Github</a>
<p><b><a href="https://arxiv.org/abs/1904.05530">Recurrent Event Network for Reasoning over Temporal Knowledge Graphs</a></b><br />
Woojeong Jin, Changlin Zhang, Pedro Szekely, Xiang Ren. <b>ICLR-RLGM</b> 2019.
<a class="button pub" href="https://github.com/INK-USC/RENet" target="_blank">Github</a> | <a class="button pub" href="https://github.com/woojeongjin/dynamic-KG" target="_blank">Survey</a>
</p>
<p><b><a href="papers/emnlp19_kagnet.pdf" target="_blank">KagNet: Knowledge-Aware Graph Networks for Commonsense Reasoning</a></b><br />
Bill Yuchen Lin, Xinyue Chen, Jamin Chen, Xiang Ren. <b>EMNLP</b> 2019 (long).
<a class="button pub" href="https://github.com/INK-USC/KagNet" target="_blank">Github</a>
</p>
<p><b><a href="https://arxiv.org/abs/1806.08804" target="_blank">Hierarchical Graph Representation Learning with Differentiable Pooling</a></b><br />
Rex Ying, Jiaxuan You, Christopher Morris, Xiang Ren, William L. Hamilton, Jure Leskovec. <b>NeurIPS</b> 2018 (Spotlight).<br/>
<a class="button pub" href="https://arxiv.org/abs/1806.08804" target="_blank">ArXiv</a>
<a class="button pub" href="https://github.com/RexYing/graph-pooling" target="_blank">Github</a><br/>
</p>
</ul>
</div>
</div>
</div>
</div>
</section>
<!-- Footer -->
<section id="footer" >
<div class="inner" style="padding: 0 0 0 0;">
<div class="copyright" style="height: 0pt;" style="padding: 0 0 0 0;">
<div class="row" style="padding: 0 0 0 0;">
<div class="col-md-6 col-sm-6" style="padding: 0 0 0 0;">
<p class="small-text" style="color:white"> Copyright 2022 ©. INK Lab @ USC/ISI</p>
</div> <!-- /.col-md-6 -->
</div> <!-- /.row -->
</div> <!-- /.copyright -->
</div> <!-- /.container -->
</section>
<!-- Scripts -->
<script src="assets/js/jquery.min.js"></script>
<script src="assets/js/skel.min.js"></script>
<script src="assets/js/util.js"></script>
<script src="assets/js/main.js"></script>
</body>
</html>