<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" lang="" xml:lang="">
<head>
<meta charset="utf-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<title>C References | Machine Learning for Factor Investing</title>
<meta name="description" content="C References | Machine Learning for Factor Investing" />
<meta name="generator" content="bookdown 0.16 and GitBook 2.6.7" />
<meta property="og:title" content="C References | Machine Learning for Factor Investing" />
<meta property="og:type" content="book" />
<meta name="twitter:card" content="summary" />
<meta name="twitter:title" content="C References | Machine Learning for Factor Investing" />
<meta name="author" content="Guillaume Coqueret and Tony Guida" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<meta name="apple-mobile-web-app-capable" content="yes" />
<meta name="apple-mobile-web-app-status-bar-style" content="black" />
<link rel="prev" href="solution-to-exercises.html"/>
<script src="libs/jquery-2.2.3/jquery.min.js"></script>
<link href="libs/gitbook-2.6.7/css/style.css" rel="stylesheet" />
<link href="libs/gitbook-2.6.7/css/plugin-table.css" rel="stylesheet" />
<link href="libs/gitbook-2.6.7/css/plugin-bookdown.css" rel="stylesheet" />
<link href="libs/gitbook-2.6.7/css/plugin-highlight.css" rel="stylesheet" />
<link href="libs/gitbook-2.6.7/css/plugin-search.css" rel="stylesheet" />
<link href="libs/gitbook-2.6.7/css/plugin-fontsettings.css" rel="stylesheet" />
<link href="libs/gitbook-2.6.7/css/plugin-clipboard.css" rel="stylesheet" />
<script src="libs/kePrint-0.0.1/kePrint.js"></script>
<style type="text/css">
a.sourceLine { display: inline-block; line-height: 1.25; }
a.sourceLine { pointer-events: none; color: inherit; text-decoration: inherit; }
a.sourceLine:empty { height: 1.2em; }
.sourceCode { overflow: visible; }
code.sourceCode { white-space: pre; position: relative; }
pre.sourceCode { margin: 0; }
@media screen {
div.sourceCode { overflow: auto; }
}
@media print {
code.sourceCode { white-space: pre-wrap; }
a.sourceLine { text-indent: -1em; padding-left: 1em; }
}
pre.numberSource a.sourceLine
{ position: relative; left: -4em; }
pre.numberSource a.sourceLine::before
{ content: attr(data-line-number);
position: relative; left: -1em; text-align: right; vertical-align: baseline;
border: none; pointer-events: all; display: inline-block;
-webkit-touch-callout: none; -webkit-user-select: none;
-khtml-user-select: none; -moz-user-select: none;
-ms-user-select: none; user-select: none;
padding: 0 4px; width: 4em;
color: #aaaaaa;
}
pre.numberSource { margin-left: 3em; border-left: 1px solid #aaaaaa; padding-left: 4px; }
div.sourceCode
{ }
@media screen {
a.sourceLine::before { text-decoration: underline; }
}
code span.al { color: #ff0000; font-weight: bold; } /* Alert */
code span.an { color: #60a0b0; font-weight: bold; font-style: italic; } /* Annotation */
code span.at { color: #7d9029; } /* Attribute */
code span.bn { color: #40a070; } /* BaseN */
code span.bu { } /* BuiltIn */
code span.cf { color: #007020; font-weight: bold; } /* ControlFlow */
code span.ch { color: #4070a0; } /* Char */
code span.cn { color: #880000; } /* Constant */
code span.co { color: #60a0b0; font-style: italic; } /* Comment */
code span.cv { color: #60a0b0; font-weight: bold; font-style: italic; } /* CommentVar */
code span.do { color: #ba2121; font-style: italic; } /* Documentation */
code span.dt { color: #902000; } /* DataType */
code span.dv { color: #40a070; } /* DecVal */
code span.er { color: #ff0000; font-weight: bold; } /* Error */
code span.ex { } /* Extension */
code span.fl { color: #40a070; } /* Float */
code span.fu { color: #06287e; } /* Function */
code span.im { } /* Import */
code span.in { color: #60a0b0; font-weight: bold; font-style: italic; } /* Information */
code span.kw { color: #007020; font-weight: bold; } /* Keyword */
code span.op { color: #666666; } /* Operator */
code span.ot { color: #007020; } /* Other */
code span.pp { color: #bc7a00; } /* Preprocessor */
code span.sc { color: #4070a0; } /* SpecialChar */
code span.ss { color: #bb6688; } /* SpecialString */
code span.st { color: #4070a0; } /* String */
code span.va { color: #19177c; } /* Variable */
code span.vs { color: #4070a0; } /* VerbatimString */
code span.wa { color: #60a0b0; font-weight: bold; font-style: italic; } /* Warning */
</style>
</head>
<body>
<div class="book without-animation with-summary font-size-2 font-family-1" data-basepath=".">
<div class="book-summary">
<nav role="navigation">
<ul class="summary">
<li class="chapter" data-level="1" data-path="preface.html"><a href="preface.html"><i class="fa fa-check"></i><b>1</b> Preface</a><ul>
<li class="chapter" data-level="1.1" data-path="preface.html"><a href="preface.html#what-this-book-is-not-about"><i class="fa fa-check"></i><b>1.1</b> What this book is not about</a></li>
<li class="chapter" data-level="1.2" data-path="preface.html"><a href="preface.html#the-targeted-audience"><i class="fa fa-check"></i><b>1.2</b> The targeted audience</a></li>
<li class="chapter" data-level="1.3" data-path="preface.html"><a href="preface.html#how-this-book-is-structured"><i class="fa fa-check"></i><b>1.3</b> How this book is structured</a></li>
<li class="chapter" data-level="1.4" data-path="preface.html"><a href="preface.html#companion-website"><i class="fa fa-check"></i><b>1.4</b> Companion website</a></li>
<li class="chapter" data-level="1.5" data-path="preface.html"><a href="preface.html#why-r"><i class="fa fa-check"></i><b>1.5</b> Why R?</a></li>
<li class="chapter" data-level="1.6" data-path="preface.html"><a href="preface.html#coding-instructions"><i class="fa fa-check"></i><b>1.6</b> Coding instructions</a></li>
<li class="chapter" data-level="1.7" data-path="preface.html"><a href="preface.html#acknowledgements"><i class="fa fa-check"></i><b>1.7</b> Acknowledgements</a></li>
<li class="chapter" data-level="1.8" data-path="preface.html"><a href="preface.html#future-developments"><i class="fa fa-check"></i><b>1.8</b> Future developments</a></li>
</ul></li>
<li class="chapter" data-level="2" data-path="notdata.html"><a href="notdata.html"><i class="fa fa-check"></i><b>2</b> Notations and data</a><ul>
<li class="chapter" data-level="2.1" data-path="notdata.html"><a href="notdata.html#notations"><i class="fa fa-check"></i><b>2.1</b> Notations</a></li>
<li class="chapter" data-level="2.2" data-path="notdata.html"><a href="notdata.html#dataset"><i class="fa fa-check"></i><b>2.2</b> Dataset</a></li>
</ul></li>
<li class="chapter" data-level="3" data-path="intro.html"><a href="intro.html"><i class="fa fa-check"></i><b>3</b> Introduction</a><ul>
<li class="chapter" data-level="3.1" data-path="intro.html"><a href="intro.html#context"><i class="fa fa-check"></i><b>3.1</b> Context</a></li>
<li class="chapter" data-level="3.2" data-path="intro.html"><a href="intro.html#portfolio-construction-the-workflow"><i class="fa fa-check"></i><b>3.2</b> Portfolio construction: the workflow</a></li>
<li class="chapter" data-level="3.3" data-path="intro.html"><a href="intro.html#machine-learning-is-no-magic-wand"><i class="fa fa-check"></i><b>3.3</b> Machine Learning is no Magic Wand</a></li>
</ul></li>
<li class="chapter" data-level="4" data-path="factor.html"><a href="factor.html"><i class="fa fa-check"></i><b>4</b> Factor investing and asset pricing anomalies</a><ul>
<li class="chapter" data-level="4.1" data-path="factor.html"><a href="factor.html#introduction"><i class="fa fa-check"></i><b>4.1</b> Introduction</a></li>
<li class="chapter" data-level="4.2" data-path="factor.html"><a href="factor.html#detecting-anomalies"><i class="fa fa-check"></i><b>4.2</b> Detecting anomalies</a><ul>
<li class="chapter" data-level="4.2.1" data-path="factor.html"><a href="factor.html#simple-portfolio-sorts"><i class="fa fa-check"></i><b>4.2.1</b> Simple portfolio sorts</a></li>
<li class="chapter" data-level="4.2.2" data-path="factor.html"><a href="factor.html#factors"><i class="fa fa-check"></i><b>4.2.2</b> Factors</a></li>
<li class="chapter" data-level="4.2.3" data-path="factor.html"><a href="factor.html#predictive-regressions-sorts-and-p-value-issues"><i class="fa fa-check"></i><b>4.2.3</b> Predictive regressions, sorts, and p-value issues</a></li>
<li class="chapter" data-level="4.2.4" data-path="factor.html"><a href="factor.html#fama-macbeth-regressions"><i class="fa fa-check"></i><b>4.2.4</b> Fama-Macbeth regressions</a></li>
<li class="chapter" data-level="4.2.5" data-path="factor.html"><a href="factor.html#factor-competition"><i class="fa fa-check"></i><b>4.2.5</b> Factor competition</a></li>
<li class="chapter" data-level="4.2.6" data-path="factor.html"><a href="factor.html#advanced-techniques"><i class="fa fa-check"></i><b>4.2.6</b> Advanced techniques</a></li>
</ul></li>
<li class="chapter" data-level="4.3" data-path="factor.html"><a href="factor.html#factors-or-characteristics"><i class="fa fa-check"></i><b>4.3</b> Factors or characteristics?</a></li>
<li class="chapter" data-level="4.4" data-path="factor.html"><a href="factor.html#momentum-and-timing"><i class="fa fa-check"></i><b>4.4</b> Momentum and timing</a><ul>
<li class="chapter" data-level="4.4.1" data-path="factor.html"><a href="factor.html#factor-momentum"><i class="fa fa-check"></i><b>4.4.1</b> Factor momentum</a></li>
<li class="chapter" data-level="4.4.2" data-path="factor.html"><a href="factor.html#factor-timing"><i class="fa fa-check"></i><b>4.4.2</b> Factor timing</a></li>
</ul></li>
<li class="chapter" data-level="4.5" data-path="factor.html"><a href="factor.html#the-link-with-machine-learning"><i class="fa fa-check"></i><b>4.5</b> The link with machine learning</a><ul>
<li class="chapter" data-level="4.5.1" data-path="factor.html"><a href="factor.html#a-short-list-of-recent-references"><i class="fa fa-check"></i><b>4.5.1</b> A short list of recent references</a></li>
<li class="chapter" data-level="4.5.2" data-path="factor.html"><a href="factor.html#explicit-connexions-with-asset-pricing-models"><i class="fa fa-check"></i><b>4.5.2</b> Explicit connexions with asset pricing models</a></li>
</ul></li>
<li class="chapter" data-level="4.6" data-path="factor.html"><a href="factor.html#coding-exercises"><i class="fa fa-check"></i><b>4.6</b> Coding exercises</a></li>
</ul></li>
<li class="chapter" data-level="5" data-path="Data.html"><a href="Data.html"><i class="fa fa-check"></i><b>5</b> Data preprocessing</a><ul>
<li class="chapter" data-level="5.1" data-path="Data.html"><a href="Data.html#know-your-data"><i class="fa fa-check"></i><b>5.1</b> Know your data</a></li>
<li class="chapter" data-level="5.2" data-path="Data.html"><a href="Data.html#missing-data"><i class="fa fa-check"></i><b>5.2</b> Missing data</a></li>
<li class="chapter" data-level="5.3" data-path="Data.html"><a href="Data.html#outlier-detection"><i class="fa fa-check"></i><b>5.3</b> Outlier detection</a></li>
<li class="chapter" data-level="5.4" data-path="Data.html"><a href="Data.html#feateng"><i class="fa fa-check"></i><b>5.4</b> Feature engineering</a><ul>
<li class="chapter" data-level="5.4.1" data-path="Data.html"><a href="Data.html#feature-selection"><i class="fa fa-check"></i><b>5.4.1</b> Feature selection</a></li>
<li class="chapter" data-level="5.4.2" data-path="Data.html"><a href="Data.html#scaling"><i class="fa fa-check"></i><b>5.4.2</b> Scaling the predictors</a></li>
</ul></li>
<li class="chapter" data-level="5.5" data-path="Data.html"><a href="Data.html#labelling"><i class="fa fa-check"></i><b>5.5</b> Labelling</a><ul>
<li class="chapter" data-level="5.5.1" data-path="Data.html"><a href="Data.html#simple-labels"><i class="fa fa-check"></i><b>5.5.1</b> Simple labels</a></li>
<li class="chapter" data-level="5.5.2" data-path="Data.html"><a href="Data.html#categorical-labels"><i class="fa fa-check"></i><b>5.5.2</b> Categorical labels</a></li>
<li class="chapter" data-level="5.5.3" data-path="Data.html"><a href="Data.html#the-triple-barrier-method"><i class="fa fa-check"></i><b>5.5.3</b> The triple barrier method</a></li>
<li class="chapter" data-level="5.5.4" data-path="Data.html"><a href="Data.html#filtering-the-sample"><i class="fa fa-check"></i><b>5.5.4</b> Filtering the sample</a></li>
<li class="chapter" data-level="5.5.5" data-path="Data.html"><a href="Data.html#horizons"><i class="fa fa-check"></i><b>5.5.5</b> Return horizons</a></li>
</ul></li>
<li class="chapter" data-level="5.6" data-path="Data.html"><a href="Data.html#pers"><i class="fa fa-check"></i><b>5.6</b> Handling persistence</a></li>
<li class="chapter" data-level="5.7" data-path="Data.html"><a href="Data.html#extensions"><i class="fa fa-check"></i><b>5.7</b> Extensions</a><ul>
<li class="chapter" data-level="5.7.1" data-path="Data.html"><a href="Data.html#transforming-features"><i class="fa fa-check"></i><b>5.7.1</b> Transforming features</a></li>
<li class="chapter" data-level="5.7.2" data-path="Data.html"><a href="Data.html#macrovar"><i class="fa fa-check"></i><b>5.7.2</b> Macro-economic variables</a></li>
</ul></li>
<li class="chapter" data-level="5.8" data-path="Data.html"><a href="Data.html#additional-code-and-results"><i class="fa fa-check"></i><b>5.8</b> Additional code and results</a><ul>
<li class="chapter" data-level="5.8.1" data-path="Data.html"><a href="Data.html#impact-of-rescaling-graphical-representation"><i class="fa fa-check"></i><b>5.8.1</b> Impact of rescaling: graphical representation</a></li>
<li class="chapter" data-level="5.8.2" data-path="Data.html"><a href="Data.html#impact-of-rescaling-toy-example"><i class="fa fa-check"></i><b>5.8.2</b> Impact of rescaling: toy example</a></li>
</ul></li>
<li class="chapter" data-level="5.9" data-path="Data.html"><a href="Data.html#coding-exercises-1"><i class="fa fa-check"></i><b>5.9</b> Coding exercises</a></li>
</ul></li>
<li class="chapter" data-level="6" data-path="lasso.html"><a href="lasso.html"><i class="fa fa-check"></i><b>6</b> Penalized regressions and sparse hedging for minimum variance portfolios</a><ul>
<li class="chapter" data-level="6.1" data-path="lasso.html"><a href="lasso.html#penalised-regressions"><i class="fa fa-check"></i><b>6.1</b> Penalised regressions</a><ul>
<li class="chapter" data-level="6.1.1" data-path="lasso.html"><a href="lasso.html#simple-regressions"><i class="fa fa-check"></i><b>6.1.1</b> Simple regressions</a></li>
<li class="chapter" data-level="6.1.2" data-path="lasso.html"><a href="lasso.html#forms-of-penalizations"><i class="fa fa-check"></i><b>6.1.2</b> Forms of penalizations</a></li>
<li class="chapter" data-level="6.1.3" data-path="lasso.html"><a href="lasso.html#illustrations"><i class="fa fa-check"></i><b>6.1.3</b> Illustrations</a></li>
</ul></li>
<li class="chapter" data-level="6.2" data-path="lasso.html"><a href="lasso.html#sparse-hedging-for-minimum-variance-portfolios"><i class="fa fa-check"></i><b>6.2</b> Sparse hedging for minimum variance portfolios</a><ul>
<li class="chapter" data-level="6.2.1" data-path="lasso.html"><a href="lasso.html#presentation-and-derivations"><i class="fa fa-check"></i><b>6.2.1</b> Presentation and derivations</a></li>
<li class="chapter" data-level="6.2.2" data-path="lasso.html"><a href="lasso.html#sparseex"><i class="fa fa-check"></i><b>6.2.2</b> Example</a></li>
</ul></li>
<li class="chapter" data-level="6.3" data-path="lasso.html"><a href="lasso.html#predictive-regressions"><i class="fa fa-check"></i><b>6.3</b> Predictive regressions</a><ul>
<li class="chapter" data-level="6.3.1" data-path="lasso.html"><a href="lasso.html#literature-review-and-principle"><i class="fa fa-check"></i><b>6.3.1</b> Literature review and principle</a></li>
<li class="chapter" data-level="6.3.2" data-path="lasso.html"><a href="lasso.html#code-and-results"><i class="fa fa-check"></i><b>6.3.2</b> Code and results</a></li>
</ul></li>
<li class="chapter" data-level="6.4" data-path="lasso.html"><a href="lasso.html#coding-exercises-2"><i class="fa fa-check"></i><b>6.4</b> Coding exercises</a></li>
</ul></li>
<li class="chapter" data-level="7" data-path="trees.html"><a href="trees.html"><i class="fa fa-check"></i><b>7</b> Tree-based methods</a><ul>
<li class="chapter" data-level="7.1" data-path="trees.html"><a href="trees.html#simple-trees"><i class="fa fa-check"></i><b>7.1</b> Simple trees</a><ul>
<li class="chapter" data-level="7.1.1" data-path="trees.html"><a href="trees.html#principle"><i class="fa fa-check"></i><b>7.1.1</b> Principle</a></li>
<li class="chapter" data-level="7.1.2" data-path="trees.html"><a href="trees.html#further-details-on-classification"><i class="fa fa-check"></i><b>7.1.2</b> Further details on classification</a></li>
<li class="chapter" data-level="7.1.3" data-path="trees.html"><a href="trees.html#pruning-criteria"><i class="fa fa-check"></i><b>7.1.3</b> Pruning criteria</a></li>
<li class="chapter" data-level="7.1.4" data-path="trees.html"><a href="trees.html#code-and-interpretation"><i class="fa fa-check"></i><b>7.1.4</b> Code and interpretation</a></li>
</ul></li>
<li class="chapter" data-level="7.2" data-path="trees.html"><a href="trees.html#random-forests"><i class="fa fa-check"></i><b>7.2</b> Random forests</a><ul>
<li class="chapter" data-level="7.2.1" data-path="trees.html"><a href="trees.html#principle-1"><i class="fa fa-check"></i><b>7.2.1</b> Principle</a></li>
<li class="chapter" data-level="7.2.2" data-path="trees.html"><a href="trees.html#code-and-results-1"><i class="fa fa-check"></i><b>7.2.2</b> Code and results</a></li>
</ul></li>
<li class="chapter" data-level="7.3" data-path="trees.html"><a href="trees.html#adaboost"><i class="fa fa-check"></i><b>7.3</b> Boosted trees: Adaboost</a><ul>
<li class="chapter" data-level="7.3.1" data-path="trees.html"><a href="trees.html#methodology"><i class="fa fa-check"></i><b>7.3.1</b> Methodology</a></li>
<li class="chapter" data-level="7.3.2" data-path="trees.html"><a href="trees.html#illustration"><i class="fa fa-check"></i><b>7.3.2</b> Illustration</a></li>
</ul></li>
<li class="chapter" data-level="7.4" data-path="trees.html"><a href="trees.html#boosted-trees-extreme-gradient-boosting"><i class="fa fa-check"></i><b>7.4</b> Boosted trees: extreme gradient boosting</a><ul>
<li class="chapter" data-level="7.4.1" data-path="trees.html"><a href="trees.html#managing-loss"><i class="fa fa-check"></i><b>7.4.1</b> Managing Loss</a></li>
<li class="chapter" data-level="7.4.2" data-path="trees.html"><a href="trees.html#penalisation"><i class="fa fa-check"></i><b>7.4.2</b> Penalisation</a></li>
<li class="chapter" data-level="7.4.3" data-path="trees.html"><a href="trees.html#aggregation"><i class="fa fa-check"></i><b>7.4.3</b> Aggregation</a></li>
<li class="chapter" data-level="7.4.4" data-path="trees.html"><a href="trees.html#tree-structure"><i class="fa fa-check"></i><b>7.4.4</b> Tree structure</a></li>
<li class="chapter" data-level="7.4.5" data-path="trees.html"><a href="trees.html#boostext"><i class="fa fa-check"></i><b>7.4.5</b> Extensions</a></li>
<li class="chapter" data-level="7.4.6" data-path="trees.html"><a href="trees.html#boostcode"><i class="fa fa-check"></i><b>7.4.6</b> Code and results</a></li>
<li class="chapter" data-level="7.4.7" data-path="trees.html"><a href="trees.html#instweight"><i class="fa fa-check"></i><b>7.4.7</b> Instance weighting</a></li>
</ul></li>
<li class="chapter" data-level="7.5" data-path="trees.html"><a href="trees.html#discussion"><i class="fa fa-check"></i><b>7.5</b> Discussion</a></li>
<li class="chapter" data-level="7.6" data-path="trees.html"><a href="trees.html#coding-exercises-3"><i class="fa fa-check"></i><b>7.6</b> Coding exercises</a></li>
</ul></li>
<li class="chapter" data-level="8" data-path="NN.html"><a href="NN.html"><i class="fa fa-check"></i><b>8</b> Neural networks</a><ul>
<li class="chapter" data-level="8.1" data-path="NN.html"><a href="NN.html#the-original-perceptron"><i class="fa fa-check"></i><b>8.1</b> The original perceptron</a></li>
<li class="chapter" data-level="8.2" data-path="NN.html"><a href="NN.html#multilayer-perceptron"><i class="fa fa-check"></i><b>8.2</b> Multilayer perceptron</a><ul>
<li class="chapter" data-level="8.2.1" data-path="NN.html"><a href="NN.html#introduction-and-notations"><i class="fa fa-check"></i><b>8.2.1</b> Introduction and notations</a></li>
<li class="chapter" data-level="8.2.2" data-path="NN.html"><a href="NN.html#universal-approximation"><i class="fa fa-check"></i><b>8.2.2</b> Universal approximation</a></li>
<li class="chapter" data-level="8.2.3" data-path="NN.html"><a href="NN.html#backprop"><i class="fa fa-check"></i><b>8.2.3</b> Learning via back-propagation</a></li>
<li class="chapter" data-level="8.2.4" data-path="NN.html"><a href="NN.html#further-details-on-classification-1"><i class="fa fa-check"></i><b>8.2.4</b> Further details on classification</a></li>
</ul></li>
<li class="chapter" data-level="8.3" data-path="NN.html"><a href="NN.html#howdeep"><i class="fa fa-check"></i><b>8.3</b> How deep should we go? And other practical issues</a><ul>
<li class="chapter" data-level="8.3.1" data-path="NN.html"><a href="NN.html#architectural-choices"><i class="fa fa-check"></i><b>8.3.1</b> Architectural choices</a></li>
<li class="chapter" data-level="8.3.2" data-path="NN.html"><a href="NN.html#frequency-of-weight-updates-and-learning-duration"><i class="fa fa-check"></i><b>8.3.2</b> Frequency of weight updates and learning duration</a></li>
<li class="chapter" data-level="8.3.3" data-path="NN.html"><a href="NN.html#penalizations-and-dropout"><i class="fa fa-check"></i><b>8.3.3</b> Penalizations and dropout</a></li>
</ul></li>
<li class="chapter" data-level="8.4" data-path="NN.html"><a href="NN.html#code-samples-and-comments-for-vanilla-mlp"><i class="fa fa-check"></i><b>8.4</b> Code samples and comments for vanilla MLP</a><ul>
<li class="chapter" data-level="8.4.1" data-path="NN.html"><a href="NN.html#regression-example"><i class="fa fa-check"></i><b>8.4.1</b> Regression example</a></li>
<li class="chapter" data-level="8.4.2" data-path="NN.html"><a href="NN.html#classification-example"><i class="fa fa-check"></i><b>8.4.2</b> Classification example</a></li>
</ul></li>
<li class="chapter" data-level="8.5" data-path="NN.html"><a href="NN.html#recurrent-networks"><i class="fa fa-check"></i><b>8.5</b> Recurrent networks</a><ul>
<li class="chapter" data-level="8.5.1" data-path="NN.html"><a href="NN.html#presentation"><i class="fa fa-check"></i><b>8.5.1</b> Presentation</a></li>
<li class="chapter" data-level="8.5.2" data-path="NN.html"><a href="NN.html#code-and-results-2"><i class="fa fa-check"></i><b>8.5.2</b> Code and results</a></li>
</ul></li>
<li class="chapter" data-level="8.6" data-path="NN.html"><a href="NN.html#other-common-architectures"><i class="fa fa-check"></i><b>8.6</b> Other common architectures</a><ul>
<li class="chapter" data-level="8.6.1" data-path="NN.html"><a href="NN.html#generative-aversarial-networks"><i class="fa fa-check"></i><b>8.6.1</b> Generative adversarial networks</a></li>
<li class="chapter" data-level="8.6.2" data-path="NN.html"><a href="NN.html#autoencoders"><i class="fa fa-check"></i><b>8.6.2</b> Auto-encoders</a></li>
<li class="chapter" data-level="8.6.3" data-path="NN.html"><a href="NN.html#a-word-on-convolutional-networks"><i class="fa fa-check"></i><b>8.6.3</b> A word on convolutional networks</a></li>
<li class="chapter" data-level="8.6.4" data-path="NN.html"><a href="NN.html#advanced-architectures"><i class="fa fa-check"></i><b>8.6.4</b> Advanced architectures</a></li>
</ul></li>
<li class="chapter" data-level="8.7" data-path="NN.html"><a href="NN.html#coding-exercise"><i class="fa fa-check"></i><b>8.7</b> Coding exercise</a></li>
</ul></li>
<li class="chapter" data-level="9" data-path="svm.html"><a href="svm.html"><i class="fa fa-check"></i><b>9</b> Support vector machines</a><ul>
<li class="chapter" data-level="9.1" data-path="svm.html"><a href="svm.html#svm-for-classification"><i class="fa fa-check"></i><b>9.1</b> SVM for classification</a></li>
<li class="chapter" data-level="9.2" data-path="svm.html"><a href="svm.html#svm-for-regression"><i class="fa fa-check"></i><b>9.2</b> SVM for regression</a></li>
<li class="chapter" data-level="9.3" data-path="svm.html"><a href="svm.html#practice"><i class="fa fa-check"></i><b>9.3</b> Practice</a></li>
<li class="chapter" data-level="9.4" data-path="svm.html"><a href="svm.html#coding-exercises-4"><i class="fa fa-check"></i><b>9.4</b> Coding exercises</a></li>
</ul></li>
<li class="chapter" data-level="10" data-path="bayes.html"><a href="bayes.html"><i class="fa fa-check"></i><b>10</b> Bayesian methods</a><ul>
<li class="chapter" data-level="10.1" data-path="bayes.html"><a href="bayes.html#the-bayesian-framework"><i class="fa fa-check"></i><b>10.1</b> The Bayesian framework</a></li>
<li class="chapter" data-level="10.2" data-path="bayes.html"><a href="bayes.html#bayesian-sampling"><i class="fa fa-check"></i><b>10.2</b> Bayesian sampling</a><ul>
<li class="chapter" data-level="10.2.1" data-path="bayes.html"><a href="bayes.html#gibbs-sampling"><i class="fa fa-check"></i><b>10.2.1</b> Gibbs sampling</a></li>
<li class="chapter" data-level="10.2.2" data-path="bayes.html"><a href="bayes.html#metropolis-hastings-sampling"><i class="fa fa-check"></i><b>10.2.2</b> Metropolis-Hastings sampling</a></li>
</ul></li>
<li class="chapter" data-level="10.3" data-path="bayes.html"><a href="bayes.html#bayesian-linear-regression"><i class="fa fa-check"></i><b>10.3</b> Bayesian linear regression</a></li>
<li class="chapter" data-level="10.4" data-path="bayes.html"><a href="bayes.html#naive-bayes-classifier"><i class="fa fa-check"></i><b>10.4</b> Naive Bayes classifier</a></li>
<li class="chapter" data-level="10.5" data-path="bayes.html"><a href="bayes.html#BART"><i class="fa fa-check"></i><b>10.5</b> Bayesian additive trees</a><ul>
<li class="chapter" data-level="10.5.1" data-path="bayes.html"><a href="bayes.html#general-formulation"><i class="fa fa-check"></i><b>10.5.1</b> General formulation</a></li>
<li class="chapter" data-level="10.5.2" data-path="bayes.html"><a href="bayes.html#priors"><i class="fa fa-check"></i><b>10.5.2</b> Priors</a></li>
<li class="chapter" data-level="10.5.3" data-path="bayes.html"><a href="bayes.html#sampling-and-predictions"><i class="fa fa-check"></i><b>10.5.3</b> Sampling and predictions</a></li>
<li class="chapter" data-level="10.5.4" data-path="bayes.html"><a href="bayes.html#code"><i class="fa fa-check"></i><b>10.5.4</b> Code</a></li>
</ul></li>
</ul></li>
<li class="chapter" data-level="11" data-path="valtune.html"><a href="valtune.html"><i class="fa fa-check"></i><b>11</b> Validating and tuning</a><ul>
<li class="chapter" data-level="11.1" data-path="valtune.html"><a href="valtune.html#mlmetrics"><i class="fa fa-check"></i><b>11.1</b> Learning metrics</a><ul>
<li class="chapter" data-level="11.1.1" data-path="valtune.html"><a href="valtune.html#regression-analysis"><i class="fa fa-check"></i><b>11.1.1</b> Regression analysis</a></li>
<li class="chapter" data-level="11.1.2" data-path="valtune.html"><a href="valtune.html#classification-analysis"><i class="fa fa-check"></i><b>11.1.2</b> Classification analysis</a></li>
</ul></li>
<li class="chapter" data-level="11.2" data-path="valtune.html"><a href="valtune.html#validation"><i class="fa fa-check"></i><b>11.2</b> Validation</a><ul>
<li class="chapter" data-level="11.2.1" data-path="valtune.html"><a href="valtune.html#the-variance-bias-tradeoff-theory"><i class="fa fa-check"></i><b>11.2.1</b> The variance-bias tradeoff: theory</a></li>
<li class="chapter" data-level="11.2.2" data-path="valtune.html"><a href="valtune.html#the-variance-bias-tradeoff-illustration"><i class="fa fa-check"></i><b>11.2.2</b> The variance-bias tradeoff: illustration</a></li>
<li class="chapter" data-level="11.2.3" data-path="valtune.html"><a href="valtune.html#the-risk-of-overfitting-principle"><i class="fa fa-check"></i><b>11.2.3</b> The risk of overfitting: principle</a></li>
<li class="chapter" data-level="11.2.4" data-path="valtune.html"><a href="valtune.html#the-risk-of-overfitting-some-solutions"><i class="fa fa-check"></i><b>11.2.4</b> The risk of overfitting: some solutions</a></li>
</ul></li>
<li class="chapter" data-level="11.3" data-path="valtune.html"><a href="valtune.html#the-search-for-good-hyperparameters"><i class="fa fa-check"></i><b>11.3</b> The search for good hyperparameters</a><ul>
<li class="chapter" data-level="11.3.1" data-path="valtune.html"><a href="valtune.html#methods"><i class="fa fa-check"></i><b>11.3.1</b> Methods</a></li>
<li class="chapter" data-level="11.3.2" data-path="valtune.html"><a href="valtune.html#example-grid-search"><i class="fa fa-check"></i><b>11.3.2</b> Example: grid search</a></li>
<li class="chapter" data-level="11.3.3" data-path="valtune.html"><a href="valtune.html#example-bayesian-optimization"><i class="fa fa-check"></i><b>11.3.3</b> Example: Bayesian optimization</a></li>
</ul></li>
<li class="chapter" data-level="11.4" data-path="valtune.html"><a href="valtune.html#short-discussion-on-validation-in-backtests"><i class="fa fa-check"></i><b>11.4</b> Short discussion on validation in backtests</a></li>
</ul></li>
<li class="chapter" data-level="12" data-path="ensemble.html"><a href="ensemble.html"><i class="fa fa-check"></i><b>12</b> Ensemble models</a><ul>
<li class="chapter" data-level="12.1" data-path="ensemble.html"><a href="ensemble.html#linear-ensembles"><i class="fa fa-check"></i><b>12.1</b> Linear ensembles</a><ul>
<li class="chapter" data-level="12.1.1" data-path="ensemble.html"><a href="ensemble.html#principles"><i class="fa fa-check"></i><b>12.1.1</b> Principles</a></li>
<li class="chapter" data-level="12.1.2" data-path="ensemble.html"><a href="ensemble.html#example"><i class="fa fa-check"></i><b>12.1.2</b> Example</a></li>
</ul></li>
<li class="chapter" data-level="12.2" data-path="ensemble.html"><a href="ensemble.html#stacked-ensembles"><i class="fa fa-check"></i><b>12.2</b> Stacked ensembles</a><ul>
<li class="chapter" data-level="12.2.1" data-path="ensemble.html"><a href="ensemble.html#two-stage-training"><i class="fa fa-check"></i><b>12.2.1</b> Two-stage training</a></li>
<li class="chapter" data-level="12.2.2" data-path="ensemble.html"><a href="ensemble.html#code-and-results-3"><i class="fa fa-check"></i><b>12.2.2</b> Code and results</a></li>
</ul></li>
<li class="chapter" data-level="12.3" data-path="ensemble.html"><a href="ensemble.html#extensions-1"><i class="fa fa-check"></i><b>12.3</b> Extensions</a><ul>
<li class="chapter" data-level="12.3.1" data-path="ensemble.html"><a href="ensemble.html#exogenous-variables"><i class="fa fa-check"></i><b>12.3.1</b> Exogenous variables</a></li>
<li class="chapter" data-level="12.3.2" data-path="ensemble.html"><a href="ensemble.html#shrinking-inter-model-correlations"><i class="fa fa-check"></i><b>12.3.2</b> Shrinking inter-model correlations</a></li>
</ul></li>
<li class="chapter" data-level="12.4" data-path="ensemble.html"><a href="ensemble.html#exercise"><i class="fa fa-check"></i><b>12.4</b> Exercise</a></li>
</ul></li>
<li class="chapter" data-level="13" data-path="backtest.html"><a href="backtest.html"><i class="fa fa-check"></i><b>13</b> Portfolio backtesting</a><ul>
<li class="chapter" data-level="13.1" data-path="backtest.html"><a href="backtest.html#protocol"><i class="fa fa-check"></i><b>13.1</b> Setting the protocol</a></li>
<li class="chapter" data-level="13.2" data-path="backtest.html"><a href="backtest.html#turning-signals-into-portfolio-weights"><i class="fa fa-check"></i><b>13.2</b> Turning signals into portfolio weights</a></li>
<li class="chapter" data-level="13.3" data-path="backtest.html"><a href="backtest.html#perfmet"><i class="fa fa-check"></i><b>13.3</b> Performance metrics</a><ul>
<li class="chapter" data-level="13.3.1" data-path="backtest.html"><a href="backtest.html#discussion-1"><i class="fa fa-check"></i><b>13.3.1</b> Discussion</a></li>
<li class="chapter" data-level="13.3.2" data-path="backtest.html"><a href="backtest.html#pure-performance-and-risk-indicators"><i class="fa fa-check"></i><b>13.3.2</b> Pure performance and risk indicators</a></li>
<li class="chapter" data-level="13.3.3" data-path="backtest.html"><a href="backtest.html#factor-based-evaluation"><i class="fa fa-check"></i><b>13.3.3</b> Factor-based evaluation</a></li>
<li class="chapter" data-level="13.3.4" data-path="backtest.html"><a href="backtest.html#risk-adjusted-measures"><i class="fa fa-check"></i><b>13.3.4</b> Risk-adjusted measures</a></li>
<li class="chapter" data-level="13.3.5" data-path="backtest.html"><a href="backtest.html#transaction-costs-and-turnover"><i class="fa fa-check"></i><b>13.3.5</b> Transaction costs and turnover</a></li>
</ul></li>
<li class="chapter" data-level="13.4" data-path="backtest.html"><a href="backtest.html#common-errors-and-issues"><i class="fa fa-check"></i><b>13.4</b> Common errors and issues</a><ul>
<li class="chapter" data-level="13.4.1" data-path="backtest.html"><a href="backtest.html#forward-looking-data"><i class="fa fa-check"></i><b>13.4.1</b> Forward looking data</a></li>
<li class="chapter" data-level="13.4.2" data-path="backtest.html"><a href="backtest.html#backtest-overfitting"><i class="fa fa-check"></i><b>13.4.2</b> Backtest overfitting</a></li>
<li class="chapter" data-level="13.4.3" data-path="backtest.html"><a href="backtest.html#simple-saveguards"><i class="fa fa-check"></i><b>13.4.3</b> Simple safeguards</a></li>
</ul></li>
<li class="chapter" data-level="13.5" data-path="backtest.html"><a href="backtest.html#implication-of-non-stationarity-forecasting-is-hard"><i class="fa fa-check"></i><b>13.5</b> Implication of non-stationarity: forecasting is hard</a><ul>
<li class="chapter" data-level="13.5.1" data-path="backtest.html"><a href="backtest.html#general-comments"><i class="fa fa-check"></i><b>13.5.1</b> General comments</a></li>
<li class="chapter" data-level="13.5.2" data-path="backtest.html"><a href="backtest.html#the-no-free-lunch-theorem"><i class="fa fa-check"></i><b>13.5.2</b> The no free lunch theorem</a></li>
</ul></li>
<li class="chapter" data-level="13.6" data-path="backtest.html"><a href="backtest.html#example-1"><i class="fa fa-check"></i><b>13.6</b> Example</a></li>
<li class="chapter" data-level="13.7" data-path="backtest.html"><a href="backtest.html#coding-exercises-5"><i class="fa fa-check"></i><b>13.7</b> Coding exercises</a></li>
</ul></li>
<li class="chapter" data-level="14" data-path="interp.html"><a href="interp.html"><i class="fa fa-check"></i><b>14</b> Interpretability</a><ul>
<li class="chapter" data-level="14.1" data-path="interp.html"><a href="interp.html#global-interpretations"><i class="fa fa-check"></i><b>14.1</b> Global interpretations</a><ul>
<li class="chapter" data-level="14.1.1" data-path="interp.html"><a href="interp.html#variable-importance"><i class="fa fa-check"></i><b>14.1.1</b> Variable importance (tree-based)</a></li>
<li class="chapter" data-level="14.1.2" data-path="interp.html"><a href="interp.html#variable-importance-agnostic"><i class="fa fa-check"></i><b>14.1.2</b> Variable importance (agnostic)</a></li>
<li class="chapter" data-level="14.1.3" data-path="interp.html"><a href="interp.html#partial-dependence-plot"><i class="fa fa-check"></i><b>14.1.3</b> Partial dependence plot</a></li>
</ul></li>
<li class="chapter" data-level="14.2" data-path="interp.html"><a href="interp.html#local-interpretations"><i class="fa fa-check"></i><b>14.2</b> Local interpretations</a><ul>
<li class="chapter" data-level="14.2.1" data-path="interp.html"><a href="interp.html#lime"><i class="fa fa-check"></i><b>14.2.1</b> LIME</a></li>
<li class="chapter" data-level="14.2.2" data-path="interp.html"><a href="interp.html#shapley-values"><i class="fa fa-check"></i><b>14.2.2</b> Shapley values</a></li>
<li class="chapter" data-level="14.2.3" data-path="interp.html"><a href="interp.html#breakdown"><i class="fa fa-check"></i><b>14.2.3</b> Breakdown</a></li>
</ul></li>
</ul></li>
<li class="chapter" data-level="15" data-path="causality.html"><a href="causality.html"><i class="fa fa-check"></i><b>15</b> Two key concepts: causality and non-stationarity</a><ul>
<li class="chapter" data-level="15.1" data-path="causality.html"><a href="causality.html#causality-1"><i class="fa fa-check"></i><b>15.1</b> Causality</a><ul>
<li class="chapter" data-level="15.1.1" data-path="causality.html"><a href="causality.html#granger"><i class="fa fa-check"></i><b>15.1.1</b> Granger causality</a></li>
<li class="chapter" data-level="15.1.2" data-path="causality.html"><a href="causality.html#causal-additive-models"><i class="fa fa-check"></i><b>15.1.2</b> Causal additive models</a></li>
<li class="chapter" data-level="15.1.3" data-path="causality.html"><a href="causality.html#structural-time-series-models"><i class="fa fa-check"></i><b>15.1.3</b> Structural time-series models</a></li>
</ul></li>
<li class="chapter" data-level="15.2" data-path="causality.html"><a href="causality.html#nonstat"><i class="fa fa-check"></i><b>15.2</b> Dealing with changing environments</a><ul>
<li class="chapter" data-level="15.2.1" data-path="causality.html"><a href="causality.html#non-stationarity-an-obvious-illustration"><i class="fa fa-check"></i><b>15.2.1</b> Non-stationarity: an obvious illustration</a></li>
<li class="chapter" data-level="15.2.2" data-path="causality.html"><a href="causality.html#online-learning"><i class="fa fa-check"></i><b>15.2.2</b> Online learning</a></li>
<li class="chapter" data-level="15.2.3" data-path="causality.html"><a href="causality.html#homogeneous-transfer-learning"><i class="fa fa-check"></i><b>15.2.3</b> Homogeneous transfer learning</a></li>
<li class="chapter" data-level="15.2.4" data-path="causality.html"><a href="causality.html#active-learning"><i class="fa fa-check"></i><b>15.2.4</b> Active learning</a></li>
</ul></li>
</ul></li>
<li class="chapter" data-level="16" data-path="unsup.html"><a href="unsup.html"><i class="fa fa-check"></i><b>16</b> Unsupervised learning</a><ul>
<li class="chapter" data-level="16.1" data-path="unsup.html"><a href="unsup.html#corpred"><i class="fa fa-check"></i><b>16.1</b> The problem with correlated predictors</a></li>
<li class="chapter" data-level="16.2" data-path="unsup.html"><a href="unsup.html#principal-component-analysis-and-autoencoders"><i class="fa fa-check"></i><b>16.2</b> Principal component analysis and autoencoders</a><ul>
<li class="chapter" data-level="16.2.1" data-path="unsup.html"><a href="unsup.html#a-bit-of-algebra"><i class="fa fa-check"></i><b>16.2.1</b> A bit of algebra</a></li>
<li class="chapter" data-level="16.2.2" data-path="unsup.html"><a href="unsup.html#pca"><i class="fa fa-check"></i><b>16.2.2</b> PCA</a></li>
<li class="chapter" data-level="16.2.3" data-path="unsup.html"><a href="unsup.html#ae"><i class="fa fa-check"></i><b>16.2.3</b> Autoencoders</a></li>
<li class="chapter" data-level="16.2.4" data-path="unsup.html"><a href="unsup.html#application"><i class="fa fa-check"></i><b>16.2.4</b> Application</a></li>
</ul></li>
<li class="chapter" data-level="16.3" data-path="unsup.html"><a href="unsup.html#clustering-via-k-means"><i class="fa fa-check"></i><b>16.3</b> Clustering via k-means</a></li>
<li class="chapter" data-level="16.4" data-path="unsup.html"><a href="unsup.html#nearest-neighbors"><i class="fa fa-check"></i><b>16.4</b> Nearest neighbors</a></li>
<li class="chapter" data-level="16.5" data-path="unsup.html"><a href="unsup.html#coding-exercise-1"><i class="fa fa-check"></i><b>16.5</b> Coding exercise</a></li>
</ul></li>
<li class="chapter" data-level="17" data-path="RL.html"><a href="RL.html"><i class="fa fa-check"></i><b>17</b> Reinforcement learning</a><ul>
<li class="chapter" data-level="17.1" data-path="RL.html"><a href="RL.html#theoretical-layout"><i class="fa fa-check"></i><b>17.1</b> Theoretical layout</a><ul>
<li class="chapter" data-level="17.1.1" data-path="RL.html"><a href="RL.html#general-framework"><i class="fa fa-check"></i><b>17.1.1</b> General framework</a></li>
<li class="chapter" data-level="17.1.2" data-path="RL.html"><a href="RL.html#q-learning"><i class="fa fa-check"></i><b>17.1.2</b> Q-learning</a></li>
<li class="chapter" data-level="17.1.3" data-path="RL.html"><a href="RL.html#sarsa"><i class="fa fa-check"></i><b>17.1.3</b> SARSA</a></li>
</ul></li>
<li class="chapter" data-level="17.2" data-path="RL.html"><a href="RL.html#the-curse-of-dimensionality"><i class="fa fa-check"></i><b>17.2</b> The curse of dimensionality</a></li>
<li class="chapter" data-level="17.3" data-path="RL.html"><a href="RL.html#policy-gradient"><i class="fa fa-check"></i><b>17.3</b> Policy gradient</a><ul>
<li class="chapter" data-level="17.3.1" data-path="RL.html"><a href="RL.html#principle-2"><i class="fa fa-check"></i><b>17.3.1</b> Principle</a></li>
<li class="chapter" data-level="17.3.2" data-path="RL.html"><a href="RL.html#extensions-2"><i class="fa fa-check"></i><b>17.3.2</b> Extensions</a></li>
</ul></li>
<li class="chapter" data-level="17.4" data-path="RL.html"><a href="RL.html#simple-examples"><i class="fa fa-check"></i><b>17.4</b> Simple examples</a><ul>
<li class="chapter" data-level="17.4.1" data-path="RL.html"><a href="RL.html#q-learning-with-simulations"><i class="fa fa-check"></i><b>17.4.1</b> Q-learning with simulations</a></li>
<li class="chapter" data-level="17.4.2" data-path="RL.html"><a href="RL.html#RLemp2"><i class="fa fa-check"></i><b>17.4.2</b> Q-learning with market data</a></li>
</ul></li>
<li class="chapter" data-level="17.5" data-path="RL.html"><a href="RL.html#concluding-remarks"><i class="fa fa-check"></i><b>17.5</b> Concluding remarks</a></li>
<li class="chapter" data-level="17.6" data-path="RL.html"><a href="RL.html#exercises"><i class="fa fa-check"></i><b>17.6</b> Exercises</a></li>
</ul></li>
<li class="chapter" data-level="18" data-path="NLP.html"><a href="NLP.html"><i class="fa fa-check"></i><b>18</b> Natural Language Processing</a></li>
<li class="appendix"><span><b>Appendix</b></span></li>
<li class="chapter" data-level="A" data-path="data-description.html"><a href="data-description.html"><i class="fa fa-check"></i><b>A</b> Data Description</a></li>
<li class="chapter" data-level="B" data-path="solution-to-exercises.html"><a href="solution-to-exercises.html"><i class="fa fa-check"></i><b>B</b> Solution to exercises</a><ul>
<li class="chapter" data-level="B.1" data-path="solution-to-exercises.html"><a href="solution-to-exercises.html#chapter-4"><i class="fa fa-check"></i><b>B.1</b> Chapter 4</a></li>
<li class="chapter" data-level="B.2" data-path="solution-to-exercises.html"><a href="solution-to-exercises.html#chapter-5"><i class="fa fa-check"></i><b>B.2</b> Chapter 5</a></li>
<li class="chapter" data-level="B.3" data-path="solution-to-exercises.html"><a href="solution-to-exercises.html#chapter-6"><i class="fa fa-check"></i><b>B.3</b> Chapter 6</a></li>
<li class="chapter" data-level="B.4" data-path="solution-to-exercises.html"><a href="solution-to-exercises.html#chapter-7"><i class="fa fa-check"></i><b>B.4</b> Chapter 7</a></li>
<li class="chapter" data-level="B.5" data-path="solution-to-exercises.html"><a href="solution-to-exercises.html#chapter-8-the-autoencoder-model"><i class="fa fa-check"></i><b>B.5</b> Chapter 8: the autoencoder model</a></li>
<li class="chapter" data-level="B.6" data-path="solution-to-exercises.html"><a href="solution-to-exercises.html#chapter-9"><i class="fa fa-check"></i><b>B.6</b> Chapter 9</a></li>
<li class="chapter" data-level="B.7" data-path="solution-to-exercises.html"><a href="solution-to-exercises.html#chapter-12-ensemble-neural-network"><i class="fa fa-check"></i><b>B.7</b> Chapter 12: ensemble neural network</a></li>
<li class="chapter" data-level="B.8" data-path="solution-to-exercises.html"><a href="solution-to-exercises.html#chapter-13"><i class="fa fa-check"></i><b>B.8</b> Chapter 13</a><ul>
<li class="chapter" data-level="B.8.1" data-path="solution-to-exercises.html"><a href="solution-to-exercises.html#advanced-weighting-function"><i class="fa fa-check"></i><b>B.8.1</b> Advanced weighting function</a></li>
<li class="chapter" data-level="B.8.2" data-path="solution-to-exercises.html"><a href="solution-to-exercises.html#functional-programming-in-the-backtest"><i class="fa fa-check"></i><b>B.8.2</b> Functional programming in the backtest</a></li>
</ul></li>
<li class="chapter" data-level="B.9" data-path="solution-to-exercises.html"><a href="solution-to-exercises.html#chapter-17"><i class="fa fa-check"></i><b>B.9</b> Chapter 17</a></li>
</ul></li>
<li class="chapter" data-level="C" data-path="references.html"><a href="references.html"><i class="fa fa-check"></i><b>C</b> References</a></li>
</ul>
</nav>
</div>
<div class="book-body">
<div class="body-inner">
<div class="book-header" role="navigation">
<h1>
<i class="fa fa-circle-o-notch fa-spin"></i><a href="./">Machine Learning for Factor Investing</a>
</h1>
</div>
<div class="page-wrapper" tabindex="-1" role="main">
<div class="page-inner">
<section class="normal" id="section-">
<div id="references" class="section level1">
<h1><span class="header-section-number">C</span> References</h1>
<div id="refs" class="references">
<div>
<p>Abbasi, Ahmed, Conan Albrecht, Anthony Vance, and James Hansen. 2012. “Metafraud: A Meta-Learning Framework for Detecting Financial Fraud.” <em>MIS Quarterly</em>, 1293–1327.</p>
</div>
<div>
<p>Aboussalah, Amine Mohamed, and Chi-Guhn Lee. 2020. “Continuous Control with Stacked Deep Dynamic Recurrent Reinforcement Learning for Portfolio Optimization.” <em>Expert Systems with Applications</em> 140: 112891.</p>
</div>
<div>
<p>Agarwal, Amit, Elad Hazan, Satyen Kale, and Robert E Schapire. 2006. “Algorithms for Portfolio Management Based on the Newton Method.” In <em>Proceedings of the 23rd International Conference on Machine Learning</em>, 9–16. ACM.</p>
</div>
<div>
<p>Aggarwal, Charu C. 2013. <em>Outlier Analysis</em>. Springer.</p>
</div>
<div>
<p>Aldridge, Irene, and Marco Avellaneda. 2019. “Neural Networks in Finance: Design and Performance.” <em>Journal of Financial Data Science</em> 1 (4): 39–62.</p>
</div>
<div>
<p>Allison, Paul D. 2001. <em>Missing Data</em>. Vol. 136. Sage Publications.</p>
</div>
<div>
<p>Almahdi, Saud, and Steve Y Yang. 2017. “An Adaptive Portfolio Trading System: A Risk-Return Portfolio Optimization Using Recurrent Reinforcement Learning with Expected Maximum Drawdown.” <em>Expert Systems with Applications</em> 87: 267–79.</p>
</div>
<div>
<p>———. 2019. “A Constrained Portfolio Trading System Using Particle Swarm Algorithm and Recurrent Reinforcement Learning.” <em>Expert Systems with Applications</em> 130: 145–56.</p>
</div>
<div>
<p>Ammann, Manuel, Guillaume Coqueret, and Jan-Philip Schade. 2016. “Characteristics-Based Portfolio Choice with Leverage Constraints.” <em>Journal of Banking & Finance</em> 70: 23–37.</p>
</div>
<div>
<p>Amrhein, Valentin, Sander Greenland, and Blake McShane. 2019. “Scientists Rise up Against Statistical Significance.” <em>Nature</em> 567: 305–7.</p>
</div>
<div>
<p>Anderson, James A, and Edward Rosenfeld. 2000. <em>Talking Nets: An Oral History of Neural Networks</em>. MIT Press.</p>
</div>
<div>
<p>Ang, Andrew. 2014. <em>Asset Management: A Systematic Approach to Factor Investing</em>. Oxford University Press.</p>
</div>
<div>
<p>Ang, Andrew, Robert J Hodrick, Yuhang Xing, and Xiaoyan Zhang. 2006. “The Cross-Section of Volatility and Expected Returns.” <em>Journal of Finance</em> 61 (1): 259–99.</p>
</div>
<div>
<p>Ang, Andrew, and Dennis Kristensen. 2012. “Testing Conditional Factor Models.” <em>Journal of Financial Economics</em> 106 (1): 132–56.</p>
</div>
<div>
<p>Ang, Andrew, Jun Liu, and Krista Schwarz. 2018. “Using Individual Stocks or Portfolios in Tests of Factor Models.” <em>SSRN Working Paper</em> 1106463.</p>
</div>
<div>
<p>Arik, Sercan O, and Tomas Pfister. 2019. “TabNet: Attentive Interpretable Tabular Learning.” <em>arXiv Preprint</em>, no. 1908.07442.</p>
</div>
<div>
<p>Arjovsky, Martin, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. 2019. “Invariant Risk Minimization.” <em>arXiv Preprint</em>, no. 1907.02893.</p>
</div>
<div>
<p>Arnott, Robert D, Mark Clements, Vitali Kalesnik, and Juhani T Linnainmaa. 2019. “Factor Momentum.” <em>SSRN Working Paper</em> 3116974.</p>
</div>
<div>
<p>Arnott, Robert D, Jason C Hsu, Jun Liu, and Harry Markowitz. 2014. “Can Noise Create the Size and Value Effects?” <em>Management Science</em> 61 (11): 2569–79.</p>
</div>
<div>
<p>Arnott, Rob, Campbell R Harvey, Vitali Kalesnik, and Juhani Linnainmaa. 2019. “Alice’s Adventures in Factorland: Three Blunders That Plague Factor Investing.” <em>Journal of Portfolio Management</em> 45 (4): 18–36.</p>
</div>
<div>
<p>Arnott, Rob, Campbell R Harvey, and Harry Markowitz. 2019. “A Backtesting Protocol in the Era of Machine Learning.” <em>Journal of Financial Data Science</em> 1 (1): 64–74.</p>
</div>
<div>
<p>Asness, Clifford, Swati Chandra, Antti Ilmanen, and Ronen Israel. 2017. “Contrarian Factor Timing Is Deceptively Difficult.” <em>Journal of Portfolio Management</em> 43 (5): 72–87.</p>
</div>
<div>
<p>Asness, Clifford, Andrea Frazzini, Ronen Israel, Tobias J Moskowitz, and Lasse H Pedersen. 2018. “Size Matters, If You Control Your Junk.” <em>Journal of Financial Economics</em> 129 (3): 479–509.</p>
</div>
<div>
<p>Asness, Clifford S, Tobias J Moskowitz, and Lasse Heje Pedersen. 2013. “Value and Momentum Everywhere.” <em>Journal of Finance</em> 68 (3): 929–85.</p>
</div>
<div>
<p>Astakhov, Anton, Tomas Havranek, and Jiri Novak. 2019. “Firm Size and Stock Returns: A Quantitative Survey.” <em>Journal of Economic Surveys</em> XXX: XX–XX.</p>
</div>
<div>
<p>Back, Kerry. 2010. <em>Asset Pricing and Portfolio Choice Theory</em>. Oxford University Press.</p>
</div>
<div>
<p>Baesens, Bart, Veronique Van Vlasselaer, and Wouter Verbeke. 2015. <em>Fraud Analytics Using Descriptive, Predictive, and Social Network Techniques: A Guide to Data Science for Fraud Detection</em>. John Wiley & Sons.</p>
</div>
<div>
<p>Bailey, David H, and Marcos López de Prado. 2014. “The Deflated Sharpe Ratio: Correcting for Selection Bias, Backtest Overfitting, and Non-Normality.” <em>Journal of Portfolio Management</em> 40 (5): 39–59.</p>
</div>
<div>
<p>Bailey, T, and A.K. Jain. 1978. “A Note on Distance-Weighted K-Nearest Neighbor Rules.” <em>IEEE Transactions on Systems, Man, and Cybernetics</em> 8 (4): 311–13.</p>
</div>
<div>
<p>Bajgrowicz, Pierre, and Olivier Scaillet. 2012. “Technical Trading Revisited: False Discoveries, Persistence Tests, and Transaction Costs.” <em>Journal of Financial Economics</em> 106 (3): 473–91.</p>
</div>
<div>
<p>Baker, Malcolm, Brendan Bradley, and Jeffrey Wurgler. 2011. “Benchmarks as Limits to Arbitrage: Understanding the Low-Volatility Anomaly.” <em>Financial Analysts Journal</em> 67 (1): 40–54.</p>
</div>
<div>
<p>Baker, Malcolm, Patrick Luo, and Ryan Taliaferro. 2017. “Detecting Anomalies: The Relevance and Power of Standard Asset Pricing Tests.”</p>
</div>
<div>
<p>Bali, Turan G, Robert F Engle, and Scott Murray. 2016. <em>Empirical Asset Pricing: The Cross Section of Stock Returns</em>. John Wiley & Sons.</p>
</div>
<div>
<p>Ballings, Michel, Dirk Van den Poel, Nathalie Hespeels, and Ruben Gryp. 2015. “Evaluating Multiple Classifiers for Stock Price Direction Prediction.” <em>Expert Systems with Applications</em> 42 (20): 7046–56.</p>
</div>
<div>
<p>Ban, Gah-Yi, Noureddine El Karoui, and Andrew EB Lim. 2016. “Machine Learning and Portfolio Optimization.” <em>Management Science</em> 64 (3): 1136–54.</p>
</div>
<div>
<p>Bansal, Ravi, David A Hsieh, and S Viswanathan. 1993. “A New Approach to International Arbitrage Pricing.” <em>Journal of Finance</em> 48 (5): 1719–47.</p>
</div>
<div>
<p>Bansal, Ravi, and Salim Viswanathan. 1993. “No Arbitrage and Arbitrage Pricing: A New Approach.” <em>Journal of Finance</em> 48 (4): 1231–62.</p>
</div>
<div>
<p>Banz, Rolf W. 1981. “The Relationship Between Return and Market Value of Common Stocks.” <em>Journal of Financial Economics</em> 9 (1): 3–18.</p>
</div>
<div>
<p>Barberis, Nicholas. 2018. “Psychology-Based Models of Asset Prices and Trading Volume.” In <em>Handbook of Behavioral Economics-Foundations and Applications</em>.</p>
</div>
<div>
<p>Barberis, Nicholas, Robin Greenwood, Lawrence Jin, and Andrei Shleifer. 2015. “X-CAPM: An Extrapolative Capital Asset Pricing Model.” <em>Journal of Financial Economics</em> 115 (1): 1–24.</p>
</div>
<div>
<p>Barberis, Nicholas, Lawrence J Jin, and Baolian Wang. 2019. “Prospect Theory and Stock Market Anomalies.” <em>SSRN Working Paper</em> 3477463.</p>
</div>
<div>
<p>Barberis, Nicholas, Abhiroop Mukherjee, and Baolian Wang. 2016. “Prospect Theory and Stock Returns: An Empirical Test.” <em>Review of Financial Studies</em> 29 (11): 3068–3107.</p>
</div>
<div>
<p>Barberis, Nicholas, and Andrei Shleifer. 2003. “Style Investing.” <em>Journal of Financial Economics</em> 68 (2): 161–99.</p>
</div>
<div>
<p>Barillas, Francisco, and Jay Shanken. 2018. “Comparing Asset Pricing Models.” <em>Journal of Finance</em> 73 (2): 715–54.</p>
</div>
<div>
<p>Barron, Andrew R. 1993. “Universal Approximation Bounds for Superpositions of a Sigmoidal Function.” <em>IEEE Transactions on Information Theory</em> 39 (3): 930–45.</p>
</div>
<div>
<p>———. 1994. “Approximation and Estimation Bounds for Artificial Neural Networks.” <em>Machine Learning</em> 14 (1): 115–33.</p>
</div>
<div>
<p>Barroso, Pedro, and Pedro Santa-Clara. 2015. “Momentum Has Its Moments.” <em>Journal of Financial Economics</em> 116 (1): 111–20.</p>
</div>
<div>
<p>Basak, Jayanta. 2004. “Online Adaptive Decision Trees.” <em>Neural Computation</em> 16 (9): 1959–81.</p>
</div>
<div>
<p>Bates, John M, and Clive WJ Granger. 1969. “The Combination of Forecasts.” <em>Journal of the Operational Research Society</em> 20 (4): 451–68.</p>
</div>
<div>
<p>Baz, Jamil, Nicolas Granger, Campbell R Harvey, Nicolas Le Roux, and Sandy Rattray. 2015. “Dissecting Investment Strategies in the Cross Section and Time Series.” <em>SSRN Working Paper</em> 2695101.</p>
</div>
<div>
<p>Beery, Sara, Grant Van Horn, and Pietro Perona. 2018. “Recognition in Terra Incognita.” In <em>Proceedings of the European Conference on Computer Vision (ECCV)</em>, 456–73.</p>
</div>
<div>
<p>Belsley, David A, Edwin Kuh, and Roy E Welsch. 2005. <em>Regression Diagnostics: Identifying Influential Data and Sources of Collinearity</em>. Vol. 571. John Wiley & Sons.</p>
</div>
<div>
<p>Ben-David, Shai, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. 2010. “A Theory of Learning from Different Domains.” <em>Machine Learning</em> 79 (1-2): 151–75.</p>
</div>
<div>
<p>Bengio, Yoshua. 2012. “Practical Recommendations for Gradient-Based Training of Deep Architectures.” In <em>Neural Networks: Tricks of the Trade</em>, 437–78. Springer.</p>
</div>
<div>
<p>Bergstra, James, and Yoshua Bengio. 2012. “Random Search for Hyper-Parameter Optimization.” <em>Journal of Machine Learning Research</em> 13 (Feb): 281–305.</p>
</div>
<div>
<p>Berk, Jonathan B, Richard C Green, and Vasant Naik. 1999. “Optimal Investment, Growth Options, and Security Returns.” <em>Journal of Finance</em> 54 (5): 1553–1607.</p>
</div>
<div>
<p>Bertoluzzo, Francesco, and Marco Corazza. 2012. “Testing Different Reinforcement Learning Configurations for Financial Trading: Introduction and Applications.” <em>Procedia Economics and Finance</em> 3: 68–77.</p>
</div>
<div>
<p>Bertsekas, Dimitri P. 2017. <em>Dynamic Programming and Optimal Control - Volume II, Fourth Edition</em>. Athena Scientific.</p>
</div>
<div>
<p>Betermier, Sebastien, Laurent E Calvet, and Evan Jo. 2019. “A Supply and Demand Approach to Equity Pricing.” <em>SSRN Working Paper</em> 3440147.</p>
</div>
<div>
<p>Betermier, Sebastien, Laurent E Calvet, and Paolo Sodini. 2017. “Who Are the Value and Growth Investors?” <em>Journal of Finance</em> 72 (1): 5–46.</p>
</div>
<div>
<p>Bhamra, Harjoat S, and Raman Uppal. 2019. “Does Household Finance Matter? Small Financial Errors with Large Social Costs.” <em>American Economic Review</em> 109 (3): 1116–54.</p>
</div>
<div>
<p>Bhatia, Nitin, and others. 2010. “Survey of Nearest Neighbor Techniques.” <em>arXiv Preprint</em>, no. 1007.0085.</p>
</div>
<div>
<p>Bhattacharyya, Siddhartha, Sanjeev Jha, Kurian Tharakunnel, and J Christopher Westland. 2011. “Data Mining for Credit Card Fraud: A Comparative Study.” <em>Decision Support Systems</em> 50 (3): 602–13.</p>
</div>
<div>
<p>Biau, Gérard. 2012. “Analysis of a Random Forests Model.” <em>Journal of Machine Learning Research</em> 13 (Apr): 1063–95.</p>
</div>
<div>
<p>Biau, Gérard, Luc Devroye, and Gábor Lugosi. 2008. “Consistency of Random Forests and Other Averaging Classifiers.” <em>Journal of Machine Learning Research</em> 9 (Sep): 2015–33.</p>
</div>
<div>
<p>Black, Fischer, and Robert Litterman. 1992. “Global Portfolio Optimization.” <em>Financial Analysts Journal</em> 48 (5): 28–43.</p>
</div>
<div>
<p>Blank, Herbert, Richard Davis, and Shannon Greene. 2019. “Using Alternative Research Data in Real-World Portfolios.” <em>Journal of Investing</em> 28 (4): 95–103.</p>
</div>
<div>
<p>Blum, Avrim, and Adam Kalai. 1999. “Universal Portfolios with and Without Transaction Costs.” <em>Machine Learning</em> 35 (3): 193–205.</p>
</div>
<div>
<p>Boehmke, Brad, and Brandon Greenwell. 2019. <em>Hands-on Machine Learning with R</em>. Chapman & Hall.</p>
</div>
<div>
<p>Boloorforoosh, Ali, Peter Christoffersen, Christian Gourieroux, and Mathieu Fournier. 2019. “Beta Risk in the Cross-Section of Equities.” <em>Review of Financial Studies</em> Forthcoming.</p>
</div>
<div>
<p>Bonaccolto, Giovanni, and Sandra Paterlini. 2019. “Developing New Portfolio Strategies by Aggregation.” <em>Annals of Operations Research</em>, 1–39.</p>
</div>
<div>
<p>Boriah, Shyam, Varun Chandola, and Vipin Kumar. 2008. “Similarity Measures for Categorical Data: A Comparative Evaluation.” In <em>Proceedings of the 2008 SIAM International Conference on Data Mining</em>, 243–54.</p>
</div>
<div>
<p>Boser, Bernhard E, Isabelle M Guyon, and Vladimir N Vapnik. 1992. “A Training Algorithm for Optimal Margin Classifiers.” In <em>Proceedings of the Fifth Annual Workshop on Computational Learning Theory</em>, 144–52. ACM.</p>
</div>
<div>
<p>Bouchaud, Jean-Philippe, Philipp Krueger, Augustin Landier, and David Thesmar. 2019. “Sticky Expectations and the Profitability Anomaly.” <em>Journal of Finance</em> 74 (2): 639–74.</p>
</div>
<div>
<p>Bouthillier, Xavier, and Gaël Varoquaux. 2020. “Survey of machine-learning experimental methods at NeurIPS2019 and ICLR2020.” Research Report. Inria Saclay Ile de France.</p>
</div>
<div>
<p>Boyd, Stephen, and Lieven Vandenberghe. 2004. <em>Convex Optimization</em>. Cambridge University Press.</p>
</div>
<div>
<p>Brandt, Michael W, Pedro Santa-Clara, and Rossen Valkanov. 2009. “Parametric Portfolio Policies: Exploiting Characteristics in the Cross-Section of Equity Returns.” <em>Review of Financial Studies</em> 22 (9): 3411–47.</p>
</div>
<div>
<p>Braun, Helmut, and John S Chandler. 1987. “Predicting Stock Market Behavior Through Rule Induction: An Application of the Learning-from-Example Approach.” <em>Decision Sciences</em> 18 (3): 415–29.</p>
</div>
<div>
<p>Breiman, Leo. 1996. “Stacked Regressions.” <em>Machine Learning</em> 24 (1): 49–64.</p>
</div>
<div>
<p>———. 2001. “Random Forests.” <em>Machine Learning</em> 45 (1): 5–32.</p>
</div>
<div>
<p>Breiman, Leo, Jerome Friedman, Charles J. Stone, and R.A. Olshen. 1984. <em>Classification and Regression Trees</em>. Chapman & Hall.</p>
</div>
<div>
<p>Breiman, Leo, and others. 2004. “Population Theory for Boosting Ensembles.” <em>Annals of Statistics</em> 32 (1): 1–11.</p>
</div>
<div>
<p>Brodersen, Kay H, Fabian Gallusser, Jim Koehler, Nicolas Remy, Steven L Scott, and others. 2015. “Inferring Causal Impact Using Bayesian Structural Time-Series Models.” <em>Annals of Applied Statistics</em> 9 (1): 247–74.</p>
</div>
<div>
<p>Brodie, Joshua, Ingrid Daubechies, Christine De Mol, Domenico Giannone, and Ignace Loris. 2009. “Sparse and Stable Markowitz Portfolios.” <em>Proceedings of the National Academy of Sciences</em> 106 (30): 12267–72.</p>
</div>
<div>
<p>Brown, Iain, and Christophe Mues. 2012. “An Experimental Comparison of Classification Algorithms for Imbalanced Credit Scoring Data Sets.” <em>Expert Systems with Applications</em> 39 (3): 3446–53.</p>
</div>
<div>
<p>Bryzgalova, Svetlana. 2019. “Spurious Factors in Linear Asset Pricing Models.” <em>Working Paper</em>.</p>
</div>
<div>
<p>Bryzgalova, Svetlana, Jiantao Huang, and Christian Julliard. 2019. “Bayesian Solutions for the Factor Zoo: We Just Ran Two Quadrillion Models.” <em>SSRN Working Paper</em> 3481736.</p>
</div>
<div>
<p>Bryzgalova, Svetlana, Markus Pelger, and Jason Zhu. 2019. “Forest Through the Trees: Building Cross-Sections of Stock Returns.” <em>SSRN Working Paper</em> 3493458.</p>
</div>
<div>
<p>Buehler, Hans, Lukas Gonon, Josef Teichmann, and Ben Wood. 2019. “Deep Hedging.” <em>Quantitative Finance</em>, 1–21.</p>
</div>
<div>
<p>Burrell, Phillip R., and Bukola Otulayo Folarin. 1997. “The Impact of Neural Networks in Finance.” <em>Neural Computing & Applications</em> 6 (4): 193–200.</p>
</div>
<div>
<p>Bühlmann, Peter, Jonas Peters, Jan Ernest, and others. 2014. “CAM: Causal Additive Models, High-Dimensional Order Search and Penalized Regression.” <em>Annals of Statistics</em> 42 (6): 2526–56.</p>
</div>
<div>
<p>Cao, Li-Juan, and Francis Eng Hock Tay. 2003. “Support Vector Machine with Adaptive Parameters in Financial Time Series Forecasting.” <em>IEEE Transactions on Neural Networks</em> 14 (6): 1506–18.</p>
</div>
<div>
<p>Carhart, Mark M. 1997. “On Persistence in Mutual Fund Performance.” <em>Journal of Finance</em> 52 (1): 57–82.</p>
</div>
<div>
<p>Carlson, Murray, Adlai Fisher, and Ron Giammarino. 2004. “Corporate Investment and Asset Price Dynamics: Implications for the Cross-Section of Returns.” <em>Journal of Finance</em> 59 (6): 2577–2603.</p>
</div>
<div>
<p>Cattaneo, Matias D, Richard K Crump, Max Farrell, and Ernst Schaumburg. 2020. “Characteristic-Sorted Portfolios: Estimation and Inference.” <em>Review of Economics and Statistics</em>, Forthcoming.</p>
</div>
<div>
<p>Cazalet, Zélia, and Thierry Roncalli. 2014. “Facts and Fantasies About Factor Investing.” <em>SSRN Working Paper</em> 2524547.</p>
</div>
<div>
<p>Chandola, Varun, Arindam Banerjee, and Vipin Kumar. 2009. “Anomaly Detection: A Survey.” <em>ACM Computing Surveys (CSUR)</em> 41 (3): 15.</p>
</div>
<div>
<p>Chang, Chih-Chung, and Chih-Jen Lin. 2011. “LIBSVM: A Library for Support Vector Machines.” <em>ACM Transactions on Intelligent Systems and Technology (TIST)</em> 2 (3): 27.</p>
</div>
<div>
<p>Che, Zhengping, Sanjay Purushotham, Kyunghyun Cho, David Sontag, and Yan Liu. 2018. “Recurrent Neural Networks for Multivariate Time Series with Missing Values.” <em>Scientific Reports</em> 8 (1): 6085.</p>
</div>
<div>
<p>Chen, Andrew Y, and Tom Zimmermann. 2019. “Publication Bias and the Cross-Section of Stock Returns.” <em>Review of Asset Pricing Studies</em> Forthcoming.</p>
</div>
<div>
<p>———. 2020. “Publication Bias and the Cross-Section of Stock Returns.” <em>Review of Asset Pricing Studies</em>, Forthcoming.</p>
</div>
<div>
<p>Chen, Huifen. 2001. “Initialization for Norta: Generation of Random Vectors with Specified Marginals and Correlations.” <em>INFORMS Journal on Computing</em> 13 (4): 312–31.</p>
</div>
<div>
<p>Chen, Jianbo, Le Song, Martin J Wainwright, and Michael I Jordan. 2018. “L-Shapley and c-Shapley: Efficient Model Interpretation for Structured Data.” <em>arXiv Preprint</em>, no. 1808.02610.</p>
</div>
<div>
<p>Chen, Jou-Fan, Wei-Lun Chen, Chun-Ping Huang, Szu-Hao Huang, and An-Pin Chen. 2016. “Financial Time-Series Data Analysis Using Deep Convolutional Neural Networks.” In <em>2016 7th International Conference on Cloud Computing and Big Data (CCBD)</em>, 87–92. IEEE.</p>
</div>
<div>
<p>Chen, Long, Zhi Da, and Richard Priestley. 2012. “Dividend Smoothing and Predictability.” <em>Management Science</em> 58 (10): 1834–53.</p>
</div>
<div>
<p>Chen, Luyang, Markus Pelger, and Jason Zhu. 2019. “Deep Learning in Asset Pricing.” <em>SSRN Working Paper</em> 3350138.</p>
</div>
<div>
<p>Chen, Tianqi, and Carlos Guestrin. 2016. “XGBoost: A Scalable Tree Boosting System.” In <em>Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</em>, 785–94. ACM.</p>
</div>
<div>
<p>Chib, Siddhartha, Xiaming Zeng, and Lingxiao Zhao. 2019. “On Comparing Asset Pricing Models.” <em>Journal of Finance</em> Forthcoming.</p>
</div>
<div>
<p>Chinco, Alexander, Adam D Clark-Joseph, and Mao Ye. 2019. “Sparse Signals in the Cross-Section of Returns.” <em>Journal of Finance</em> 74 (1): 449–92.</p>
</div>
<div>
<p>Chinco, Alexander, Andreas Neuhierl, and Michael Weber. 2019. “Estimating the Anomaly Baserate.” <em>SSRN Working Paper</em> 3344499.</p>
</div>
<div>
<p>Chinco, Alex, Samuel M Hartzmark, and Abigail B Sussman. 2019. “Risk-Factor Irrelevance.” <em>SSRN Working Paper</em> 3487624.</p>
</div>
<div>
<p>Chipman, Hugh A, Edward I George, and Robert E McCulloch. 2010. “BART: Bayesian Additive Regression Trees.” <em>Annals of Applied Statistics</em> 4 (1): 266–98.</p>
</div>
<div>
<p>Choi, Seung Mo, and Hwagyun Kim. 2014. “Momentum Effect as Part of a Market Equilibrium.” <em>Journal of Financial and Quantitative Analysis</em> 49 (1): 107–30.</p>
</div>
<div>
<p>Chollet, François. 2017. <em>Deep Learning with Python</em>. Manning Publications Company.</p>
</div>
<div>
<p>Chordia, Tarun, Amit Goyal, and Jay Shanken. 2015. “Cross-Sectional Asset Pricing with Individual Stocks: Betas Versus Characteristics.” <em>SSRN Working Paper</em> 2549578.</p>
</div>
<div>
<p>Chung, Junyoung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2015. “Gated Feedback Recurrent Neural Networks.” In <em>International Conference on Machine Learning</em>, 2067–75.</p>
</div>
<div>
<p>Claeskens, Gerda, and Nils Lid Hjort. 2008. <em>Model Selection and Model Averaging</em>. Cambridge University Press.</p>
</div>
<div>
<p>Clark, Todd E, and Michael W McCracken. 2009. “Improving Forecast Accuracy by Combining Recursive and Rolling Forecasts.” <em>International Economic Review</em> 50 (2): 363–95.</p>
</div>
<div>
<p>Cocco, Joao F, Francisco Gomes, and Paula Lopes. 2019. “Evidence on Expectations of Household Finances.” <em>SSRN Working Paper</em> 3362495.</p>
</div>
<div>
<p>Cochrane, John H. 2009. <em>Asset Pricing: Revised Edition</em>. Princeton University Press.</p>
</div>
<div>
<p>Cong, Lin William, Tengyuan Liang, and Xiao Zhang. 2019a. “Analyzing Textual Information at Scale.” <em>SSRN Working Paper</em> 3449822.</p>
</div>
<div>
<p>———. 2019b. “Textual Factors: A Scalable, Interpretable, and Data-Driven Approach to Analyzing Unstructured Information.” <em>SSRN Working Paper</em> 3307057.</p>
</div>
<div>
<p>Cong, Lin William, and Douglas Xu. 2019. “Rise of Factor Investing: Asset Prices, Informational Efficiency, and Security Design.” <em>SSRN Working Paper</em> 2800590.</p>
</div>
<div>
<p>Connor, Gregory, and Robert A Korajczyk. 1988. “Risk and Return in an Equilibrium Apt: Application of a New Test Methodology.” <em>Journal of Financial Economics</em> 21 (2): 255–89.</p>
</div>
<div>
<p>Cont, Rama. 2007. “Volatility Clustering in Financial Markets: Empirical Facts and Agent-Based Models.” In <em>Long Memory in Economics</em>, 289–309. Springer.</p>
</div>
<div>
<p>Cooper, Ilan, and Paulo F Maio. 2019. “New Evidence on Conditional Factor Models.” <em>Journal of Financial and Quantitative Analysis</em> 54 (5): 1975–2016.</p>
</div>
<div>
<p>Coqueret, Guillaume. 2015. “Diversified Minimum-Variance Portfolios.” <em>Annals of Finance</em> 11 (2): 221–41.</p>
</div>
<div>
<p>———. 2017. “Approximate Norta Simulations for Virtual Sample Generation.” <em>Expert Systems with Applications</em> 73: 69–81.</p>
</div>
<div>
<p>———. 2018. “The Economic Value of Firm-Specific News Sentiment.” <em>SSRN Working Paper</em> 3248925.</p>
</div>
<div>
<p>Coqueret, Guillaume, and Tony Guida. 2019. “Training Trees on Tails with Applications to Portfolio Choice.” <em>SSRN Working Paper</em> 3403009.</p>
</div>
<div>
<p>Cornuejols, Antoine, Laurent Miclet, and Vincent Barra. 2018. <em>Apprentissage Artificiel: Deep Learning, Concepts et Algorithmes</em>. Eyrolles.</p>
</div>
<div>
<p>Cortes, Corinna, and Vladimir Vapnik. 1995. “Support-Vector Networks.” <em>Machine Learning</em> 20 (3): 273–97.</p>
</div>
<div>
<p>Costarelli, Danilo, Renato Spigler, and Gianluca Vinti. 2016. “A Survey on Approximation by Means of Neural Network Operators.” <em>Journal of NeuroTechnology</em> 1 (1).</p>
</div>
<div>
<p>Cover, Thomas M. 1991. “Universal Portfolios.” <em>Mathematical Finance</em> 1 (1): 1–29.</p>
</div>
<div>
<p>Cover, Thomas M, and Erik Ordentlich. 1996. “Universal Portfolios with Side Information.” <em>IEEE Transactions on Information Theory</em> 42 (2): 348–63.</p>
</div>
<div>
<p>Crammer, Koby, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. 2006. “Online Passive-Aggressive Algorithms.” <em>Journal of Machine Learning Research</em> 7 (Mar): 551–85.</p>
</div>
<div>
<p>Cronqvist, Henrik, Alessandro Previtero, Stephan Siegel, and Roderick E White. 2015. “The Fetal Origins Hypothesis in Finance: Prenatal Environment, the Gender Gap, and Investor Behavior.” <em>Review of Financial Studies</em> 29 (3): 739–86.</p>
</div>
<div>
<p>Cronqvist, Henrik, Stephan Siegel, and Frank Yu. 2015. “Value Versus Growth Investing: Why Do Different Investors Have Different Styles?” <em>Journal of Financial Economics</em> 117 (2): 333–49.</p>
</div>
<div>
<p>Cuchiero, Christa, Irene Klein, and Josef Teichmann. 2016. “A New Perspective on the Fundamental Theorem of Asset Pricing for Large Financial Markets.” <em>Theory of Probability & Its Applications</em> 60 (4): 561–79.</p>
</div>
<div>
<p>Cybenko, George. 1989. “Approximation by Superpositions of a Sigmoidal Function.” <em>Mathematics of Control, Signals and Systems</em> 2 (4): 303–14.</p>
</div>
<div>
<p>Dangl, Thomas, and Michael Halling. 2012. “Predictive Regressions with Time-Varying Coefficients.” <em>Journal of Financial Economics</em> 106 (1): 157–81.</p>
</div>
<div>
<p>Daniel, Kent D, David Hirshleifer, and Avanidhar Subrahmanyam. 2001. “Overconfidence, Arbitrage, and Equilibrium Asset Pricing.” <em>Journal of Finance</em> 56 (3): 921–65.</p>
</div>
<div>
<p>Daniel, Kent, David Hirshleifer, and Lin Sun. 2019. “Short and Long Horizon Behavioral Factors.” <em>Review of Financial Studies</em>, Forthcoming.</p>
</div>
<div>
<p>Daniel, Kent, and Tobias J Moskowitz. 2016. “Momentum Crashes.” <em>Journal of Financial Economics</em> 122 (2): 221–47.</p>
</div>
<div>
<p>Daniel, Kent, and Sheridan Titman. 1997. “Evidence on the Characteristics of Cross Sectional Variation in Stock Returns.” <em>Journal of Finance</em> 52 (1): 1–33.</p>
</div>
<div>
<p>———. 2012. “Testing Factor-Model Explanations of Market Anomalies.” <em>Critical Finance Review</em> 1 (1): 103–39.</p>
</div>
<div>
<p>Daniel, Kent, Sheridan Titman, and KC John Wei. 2001. “Explaining the Cross-Section of Stock Returns in Japan: Factors or Characteristics?” <em>Journal of Finance</em> 56 (2): 743–66.</p>
</div>
<div>
<p>d’Aspremont, Alexandre. 2011. “Identifying Small Mean-Reverting Portfolios.” <em>Quantitative Finance</em> 11 (3): 351–64.</p>
</div>
<div>
<p>Delbaen, Freddy, and Walter Schachermayer. 1994. “A General Version of the Fundamental Theorem of Asset Pricing.” <em>Mathematische Annalen</em> 300 (1): 463–520.</p>
</div>
<div>
<p>DeMiguel, Victor, Lorenzo Garlappi, and Raman Uppal. 2009. “Optimal Versus Naive Diversification: How Inefficient Is the 1/N Portfolio Strategy?” <em>Review of Financial Studies</em> 22 (5): 1915–53.</p>
</div>
<div>
<p>DeMiguel, Victor, Alberto Martin Utrera, and Raman Uppal. 2019. “What Alleviates Crowding in Factor Investing?” <em>SSRN Working Paper</em> 3392875.</p>
</div>
<div>
<p>De Moor, Lieven, Geert Dhaene, and Piet Sercu. 2015. “On Comparing Zero-Alpha Tests Across Multifactor Asset Pricing Models.” <em>Journal of Banking & Finance</em> 61: S235–S240.</p>
</div>
<div>
<p>Denil, Misha, David Matheson, and Nando De Freitas. 2014. “Narrowing the Gap: Random Forests in Theory and in Practice.” In <em>International Conference on Machine Learning</em>, 665–73.</p>
</div>
<div>
<p>De Prado, Marcos Lopez. 2018. <em>Advances in Financial Machine Learning</em>. John Wiley & Sons.</p>
</div>
<div>
<p>Dichtl, Hubert, Wolfgang Drobetz, Harald Lohre, Carsten Rother, and Patrick Vosskamp. 2019. “Optimal Timing and Tilting of Equity Factors.” <em>Financial Analysts Journal</em> 75 (4): 84–102.</p>
</div>
<div>
<p>Dichtl, Hubert, Wolfgang Drobetz, Andreas Neuhierl, and Viktoria-Sophie Wendt. 2019. “Data Snooping in Equity Premium Prediction.” <em>SSRN Working Paper</em> 2972011.</p>
</div>
<div>
<p>Dingli, Alexiei, and Karl Sant Fournier. 2017. “Financial Time Series Forecasting – A Deep Learning Approach.” <em>International Journal of Machine Learning and Computing</em> 7 (5): 118–22.</p>
</div>
<div>
<p>Donaldson, R Glen, and Mark Kamstra. 1996. “Forecast Combining with Neural Networks.” <em>Journal of Forecasting</em> 15 (1): 49–61.</p>
</div>
<div>
<p>Drucker, Harris. 1997. “Improving Regressors Using Boosting Techniques.” In <em>International Conference on Machine Learning</em>, 97:107–15.</p>
</div>
<div>
<p>Drucker, Harris, Christopher JC Burges, Linda Kaufman, Alex J Smola, and Vladimir Vapnik. 1997. “Support Vector Regression Machines.” In <em>Advances in Neural Information Processing Systems</em>, 155–61.</p>
</div>
<div>
<p>Du, Ke-Lin, and Madisetti NS Swamy. 2013. <em>Neural Networks and Statistical Learning</em>. Springer Science & Business Media.</p>
</div>
<div>
<p>Duchi, John, Elad Hazan, and Yoram Singer. 2011. “Adaptive Subgradient Methods for Online Learning and Stochastic Optimization.” <em>Journal of Machine Learning Research</em> 12 (Jul): 2121–59.</p>
</div>
<div>
<p>Dunis, Christian L, Spiros D Likothanassis, Andreas S Karathanasopoulos, Georgios S Sermpinis, and Konstantinos A Theofilatos. 2013. “A Hybrid Genetic Algorithm–Support Vector Machine Approach in the Task of Forecasting and Trading.” <em>Journal of Asset Management</em> 14 (1): 52–71.</p>
</div>
<div>
<p>Eakins, Stanley G, Stanley R Stansell, and James F Buck. 1998. “Analyzing the Nature of Institutional Demand for Common Stocks.” <em>Quarterly Journal of Business and Economics</em>, 33–48.</p>
</div>
<div>
<p>Efimov, Dmitry, and Di Xu. 2019. “Using Generative Adversarial Networks to Synthesize Artificial Financial Datasets.” <em>Proceedings of the Conference on Neural Information Processing Systems</em>.</p>
</div>
<div>
<p>Ehsani, Sina, and Juhani T Linnainmaa. 2019. “Factor Momentum and the Momentum Factor.” <em>SSRN Working Paper</em> 3014521.</p>
</div>
<div>
<p>Elliott, Graham, Nikolay Kudrin, and Kaspar Wuthrich. 2019. “Detecting P-Hacking.” <em>arXiv Preprint</em>, no. 1906.06711.</p>
</div>
<div>
<p>Elman, Jeffrey L. 1990. “Finding Structure in Time.” <em>Cognitive Science</em> 14 (2): 179–211.</p>
</div>
<div>
<p>Enders, Craig K. 2001. “A Primer on Maximum Likelihood Algorithms Available for Use with Missing Data.” <em>Structural Equation Modeling</em> 8 (1): 128–41.</p>
</div>
<div>
<p>———. 2010. <em>Applied Missing Data Analysis</em>. Guilford Press.</p>
</div>
<div>
<p>Engilberge, Martin, Louis Chevallier, Patrick Pérez, and Matthieu Cord. 2019. “SoDeep: A Sorting Deep Net to Learn Ranking Loss Surrogates.” In <em>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</em>, 10792–10801.</p>
</div>
<div>
<p>Engle, Robert F. 1982. “Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation.” <em>Econometrica</em>, 987–1007.</p>
</div>
<div>
<p>Enke, David, and Suraphan Thawornwong. 2005. “The Use of Data Mining and Neural Networks for Forecasting Stock Market Returns.” <em>Expert Systems with Applications</em> 29 (4): 927–40.</p>
</div>
<div>
<p>Fabozzi, Frank J, and Marcos López de Prado. 2018. “Being Honest in Backtest Reporting: A Template for Disclosing Multiple Tests.” <em>Journal of Portfolio Management</em> 45 (1): 141–47.</p>
</div>
<div>
<p>Fama, Eugene F, and Kenneth R French. 1992. “The Cross-Section of Expected Stock Returns.” <em>Journal of Finance</em> 47 (2): 427–65.</p>
</div>
<div>
<p>———. 1993. “Common Risk Factors in the Returns on Stocks and Bonds.” <em>Journal of Financial Economics</em> 33 (1): 3–56.</p>
</div>
<div>
<p>———. 2015. “A Five-Factor Asset Pricing Model.” <em>Journal of Financial Economics</em> 116 (1): 1–22.</p>
</div>
<div>
<p>———. 2018. “Choosing Factors.” <em>Journal of Financial Economics</em> 128 (2): 234–52.</p>
</div>
<div>
<p>Fama, Eugene F, and James D MacBeth. 1973. “Risk, Return, and Equilibrium: Empirical Tests.” <em>Journal of Political Economy</em> 81 (3): 607–36.</p>
</div>
<div>
<p>Farmer, Leland, Lawrence Schmidt, and Allan Timmermann. 2019. “Pockets of Predictability.” <em>SSRN Working Paper</em> 3152386.</p>
</div>
<div>
<p>Fastrich, Björn, Sandra Paterlini, and Peter Winker. 2015. “Constructing Optimal Sparse Portfolios Using Regularization Methods.” <em>Computational Management Science</em> 12 (3): 417–34.</p>
</div>
<div>
<p>Feng, Guanhao, Stefano Giglio, and Dacheng Xiu. 2019. “Taming the Factor Zoo: A Test of New Factors.” <em>Journal of Finance</em> Forthcoming.</p>
</div>
<div>
<p>Feng, Guanhao, Nicholas G Polson, and Jianeng Xu. 2019. “Deep Learning in Characteristics-Sorted Factor Models.” <em>SSRN Working Paper</em> 3243683.</p>
</div>
<div>
<p>Fischer, Thomas, and Christopher Krauss. 2018. “Deep Learning with Long Short-Term Memory Networks for Financial Market Predictions.” <em>European Journal of Operational Research</em> 270 (2): 654–69.</p>
</div>
<div>
<p>Fisher, Aaron, Cynthia Rudin, and Francesca Dominici. 2018. “All Models Are Wrong but Many Are Useful: Variable Importance for Black-Box, Proprietary, or Misspecified Prediction Models, Using Model Class Reliance.” <em>arXiv Preprint</em>, no. 1801.01489.</p>
</div>
<div>
<p>Frazier, Peter I. 2018. “A Tutorial on Bayesian Optimization.” <em>arXiv Preprint</em>, no. 1807.02811.</p>
</div>
<div>
<p>Frazzini, Andrea, and Lasse Heje Pedersen. 2014. “Betting Against Beta.” <em>Journal of Financial Economics</em> 111 (1): 1–25.</p>
</div>
<div>
<p>Freeman, Robert N, and Senyo Y Tse. 1992. “A Nonlinear Model of Security Price Responses to Unexpected Earnings.” <em>Journal of Accounting Research</em>, 185–209.</p>