% thesis_resqs.tex
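% These macros require the xstring package (for \IfEqCase); the `resqbox'
% environment and the \swlms macro used in the question texts are presumably
% defined elsewhere in the thesis preamble.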
\newcommand{\resqcontent}[1]{%
\IfEqCase{#1}{%
% -------------MAIN I-------------
% {main}{How can we improve the learning process for language understanding tasks when the supervision signal, i.e., data and/or labels, is noisy or of limited quantity?}
% {main}{How can we improve the learning process for language understanding tasks, if the supervision signal is based on training data and labels that are noisy in quality or limited in quantity?}
{main}{How can we improve the learning process for language understanding tasks if the supervision signal is noisy in quality or limited in quantity?}
% -------------PART I-------------
{p1}{How can we use the structure of the data as prior knowledge to learn robust and effective representations of entities and concepts when the data is noisy or variable over time?}
% -------------chapter 2-------------
{c2}{How can we learn robust representations for entities and abstract concepts that are affected by neither undiscerning general features nor noisy accidental features, given the structural relations in the data?}%
%
{c2.1}{How can we estimate a representation for a set of entities that captures all, and only, the essential shared commonalities of these entities?}
%
{c2.2}{How do \swlms capture the mutual notion of relevance for a set of feedback documents and prevent noisy terms by controlling the contribution of each of the documents in the feedback model?}%
%
{c2.3}{How well can \swlms profile groups of entities and how effective are these profiles in content customization tasks?}%
% -------------chapter 3-------------
{c3}{How can we learn separable representations for hierarchically structured entities that are less sensitive to structural changes in the data and more transferable across time?}%
%
{c3.1}{What makes separability of representations a desirable property for classifiers?}%
%
{c3.2}{How can we estimate horizontally and vertically separable representations for hierarchically structured entities?}%
%
{c3.3}{How can separability of representations for hierarchical entities improve their transferability?}%
% -------------PART II-------------
{p2}{How can we design learning algorithms that learn from weakly annotated samples while generalizing over the imperfections in their labels?}
% -------------chapter 4-------------
{c4}{How can we train neural networks using programmatically generated pseudo-labels as a weak supervision signal, such that they exhibit superior generalization capabilities?}%
%
{c4.1}{Can labels from an unsupervised heuristic-based model be used as a programmatically generated weak supervision signal to train an effective neural network?}
%
{c4.2}{What setup in terms of input representation and learning objective is most suitable for a neural ranker when training on programmatically generated labeled data?}
%
{c4.3}{How can learning from weak supervision signals help to preserve privacy while training neural networks on sensitive data?}
% -------------chapter 5-------------
{c5}{Given a large set of weakly annotated samples and a small set of samples with high-quality labels, how can we best leverage the information in both sets to train a neural network?}%
%
{c5.1}{When learning from samples of variable quality, can we meta-learn an adjustment for the magnitude of the parameter updates in backpropagation, based on the merit of the labels?}
%
{c5.2}{When learning from samples of variable quality, can we re-annotate these samples to provide (hopefully) better labels, each associated with a fidelity score that regulates the learning rate?}
% -------------PART III-------------
{p3}{How can inductive biases help to improve the generalization and data efficiency of learning algorithms?}
% -------------chapter 6-------------
{c6}{How can we improve the generalization and data efficiency of self-attentive feed-forward sequence models by injecting a recurrent inductive bias?}
%
{c6.1}{How do Universal Transformers combine the recurrent inductive bias of RNNs with the parallelizability and global receptive field of the Transformer?}
%
{c6.2}{How effective are Universal Transformers at complex reasoning tasks with limited data, at algorithmic tasks that need generalization over observed training samples, and at real-world language understanding tasks?}
}[\PackageError{rq}{Undefined option to rq: #1}{}]%
}%
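%
% \resqcontent{<key>} expands to the bare question text, so a chapter can
% restate a question inline; a minimal sketch, assuming these macros are
% loaded in the preamble:
%   ... we now return to \emph{``\resqcontent{c2}''} ...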
\newcommand{\resqname}[1]{%
\IfEqCase{#1}{%
% -------------MAIN I-------------
{main}{RQ-Main}
% -------------PART I-------------
{p1}{RQ-1}
% -------------chapter 2-------------
{c2}{RQ-1.1}
%
{c2.1}{RQ-1.1.1}
%
{c2.2}{RQ-1.1.2}
%
{c2.3}{RQ-1.1.3}
% -------------chapter 3-------------
{c3}{RQ-1.2}
%
{c3.1}{RQ-1.2.1}
%
{c3.2}{RQ-1.2.2}
%
{c3.3}{RQ-1.2.3}
% -------------PART II-------------
{p2}{RQ-2}
% -------------chapter 4-------------
{c4}{RQ-2.1}%
%
{c4.1}{RQ-2.1.1}
%
{c4.2}{RQ-2.1.2}
%
{c4.3}{RQ-2.1.3}
% -------------chapter 5-------------
{c5}{RQ-2.2}%
%
{c5.1}{RQ-2.2.1}
%
{c5.2}{RQ-2.2.2}
% -------------PART III-------------
{p3}{RQ-3}
% -------------chapter 6-------------
{c6}{RQ-3.1}%
%
{c6.1}{RQ-3.1.1}
%
{c6.2}{RQ-3.1.2}
}[\PackageError{rq}{Undefined option to rq: #1}{}]%
}%
\newcommand{\resq}[1]{%
\begin{resqbox}
\begin{enumerate}
\item[\textbf{\resqname{#1}}] \emph{\resqcontent{#1}}
\end{enumerate}
\end{resqbox}
}%
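%
% Usage sketch (a minimal example, assuming a `resqbox' environment, e.g. a
% tcolorbox, is defined in the thesis preamble): \resq{<key>} typesets the
% boxed, labeled question, while \resqname{<key>} gives just the label for
% inline cross-references; an unknown key raises the
% `Undefined option to rq' error.
%
%   \resq{c4.1}
%   We answer \textbf{\resqname{c4.1}} in the remainder of this chapter.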