<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="description" content="">
<meta name="author" content="">
<link rel="shortcut icon" href="../../assets/ico/favicon.ico">
<title>ICONS 2018 - Program</title>
<!-- Bootstrap core CSS -->
<link href="./dist/css/bootstrap.min.css" rel="stylesheet">
<!-- Just for debugging purposes. Don't actually copy this line! -->
<!--[if lt IE 9]><script src="../../assets/js/ie8-responsive-file-warning.js"></script><![endif]-->
<!-- HTML5 shim and Respond.js IE8 support of HTML5 elements and media queries -->
<!--[if lt IE 9]>
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
<![endif]-->
<!-- Custom styles for this template -->
<link href="rrcf2014.css" rel="stylesheet">
</head>
<!-- NAVBAR
================================================== -->
<body>
<div class="navbar-wrapper">
<div class="container">
<div class="navbar navbar-inverse navbar-static-top" role="navigation">
<div class="container">
<div class="navbar-header">
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".navbar-collapse">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="brand" href="index.html"> <img src="images/logo_no_text_gray.png" alt="ICONS 2018"></a>
<!-- <a class="brand" href="index.html"> <img src="images/mlhpc.png" alt=""Rapid Response Cyber Forensics"MLHPC2015"></a>-->
</div>
<div class="navbar-collapse collapse pull-right">
<ul class="nav navbar-nav">
<li class="active"><a href="index.html">Home</a></li>
<!--<li><a href="https://www.ornl.gov/content/come-see-us">Venue</a></li>-->
<li><a href="index.html#important-dates">Important Dates</a></li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown">Information<b class="caret"></b></a>
<ul class="dropdown-menu">
<li><a href="registration.html">Registration and Hotel</a></li>
<li><a href="cfp.html">Call for Papers</a></li>
<li><a href="keynotespeakers.html">Keynotes</a></li>
<li><a href="program.html">Program</a></li>
<!-- <li><a href="files/ORNLNeuromorphicComputingWorkshop2016Report.pdf">Report PDF</a></li>-->
</ul>
</li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown">Committees<b class="caret"></b></a>
<ul class="dropdown-menu">
<li><a href="organizingcommittee.html">Organizing Committee</a></li>
<li><a href="programcommittee.html">Program Committee</a></li>
</ul>
</li>
<li><a href="index.html#contact">Contact</a></li>
</ul>
</div>
</div>
</div>
</div>
</div>
<!-- Marketing messaging and featurettes
================================================== -->
<!-- Wrap the rest of the page in another container to center all the content. -->
<div class="container marketing">
<!-- START THE FEATURETTES -->
<hr class="featurette-divider">
<div class="row featurette">
<div class="col-md-7">
<h2 class="featurette-heading">Conference <span class="text-muted">Program</span></h2>
<!--
<p><b>Detailed schedule to follow.</b></p>
<p class="lead">Monday, July 23, 2018</p>
<ul>
<li>Symposiums and Workshops, including a hands-on workshop for Intel's neuromorphic chip Loihi.</li>
</ul>
<p class="lead">Tuesday, July 24, 2018</p>
<ul>
<li>9:00 AM - 5:00 PM - Regular Conference</li>
<li>6:30 PM - Conference Dinner at Lonesome Dove</li>
</ul>
<p class="lead">Wednesday, July 25, 2018</p>
<ul>
<li>9:00 AM - 4:00 PM - Regular Conference</li>
<li>4:00 PM - 5:30 PM - Poster Session</li>
</ul>
<p class="lead">Thursday, July 26, 2018<p>
<ul>
<li>9:00 AM - 12:30 PM - Regular Conference</li>
</ul>
-->
<p class="lead">Monday, July 23, 2018</p>
<ul>
<li>9:00 - 10:00 AM - Registration</li>
<li>10:00 AM - 5:30 PM - Intel Loihi Workshop (Break for lunch on your own 12:00-1:30). Loihi workshop participation is limited. Please register your interest <a href="https://goo.gl/forms/9Xo1j9vjWDbmpeie2">here</a>, and we will let you know if there is room in the workshop.</li>
<ul>
<li>10:00 AM - 12:00 PM - Loihi Overview and the INRC Program -- Mike Davies</li>
<li>12:00 - 1:30 PM - Lunch on your own</li>
<li>1:30 - 4:00 PM - Nx SDK (Presentation and Live Demonstration) -- David Florey and Andreas Wild</li>
<li>4:00 - 5:30 PM - Q&A and Open Discussion</li>
</ul>
<li>11:00 AM - 12:00 PM - Reservoir Computing Tutorial</li>
<li>1:30 PM - 5:30 PM - Special Session on Memristors, organized by Ronald Tetzlaff (<a href="https://cmc-dresden.org">Chua Memristor Center</a>)
<ul>
<li>1:30 - 2:00 PM - "Experimental demonstrations of unconventional computing with memristive devices" -- J. Joshua Yang</li>
<li>2:00 - 2:30 PM - "Neuromorphic spiking networks with resistive switching memory (RRAM) synapses" -- Daniele Ielmini</li>
<li>2:30 - 3:00 PM - "Ge<sub>2</sub>Se<sub>3</sub>-Doped Devices: Towards Optically-Gated Transistors and Memristors for Neuromorphic Computing" -- Kristy A. Campbell</li>
<li>3:00 - 3:30 PM - "Feature extraction and information processing using memristor networks" -- Wei Lu</li>
<li>3:30 - 4:00 PM - Break</li>
<li>4:00 - 4:30 PM - "Real Processing-in-Memory with Memristive Memory Processing Unit" -- Shahar Kvatinsky</li>
<li>4:30 - 5:00 PM - "Computing with Bio–inspired Memristor Networks: Complexity and Nonlinear Dynamics via the Flux–Charge Analysis Method" -- Fernando Corinto</li>
<li>5:00 - 5:30 PM - "Unveiling the nonlinear dynamics of a TaO resistance switching memory" - Alon Ascoli </li>
</ul></li>
</ul>
<p class="lead">Tuesday, July 24, 2018</p>
<ul>
<li>8:00 - 9:00 AM - Registration/Coffee</li>
<li>9:00 - 9:15 AM - Welcome/Introduction</li>
<li>9:15 - 10:15 AM - Keynote Presentation: Tom Potok</li>
<li>10:15 - 10:45 AM - Break</li>
<li>10:45 AM - 12:00 PM - Presentations
<ul>
<li>10:45 - 11:10 AM - “Saccadic Predictive Vision Model with a Fovea” -- <b>Michael Hazoglou</b> and Todd Hylton</li>
<li>11:10 - 11:35 AM - “<a href="presentations/comparison_reynolds.pdf">A Comparison of Neuromorphic Classification Tasks</a>” -- <b>John J. M. Reynolds</b>, James S. Plank, Catherine D. Schuman, Grant Bruer, Adam Disney, Mark E. Dean and Garret S. Rose</li>
<li>11:35 AM - 12:00 PM - “Sparse Coding Enables the Reconstruction of High-Fidelity Images and Video from Retinal Spike Trains” -- <b>Yijing Watkins</b>, Austin Thresher, David Mascarenas and Garrett Kenyon</li>
</ul>
<li>12:00 - 1:30 PM - Lunch and Lunch Talk: “Bio-inspired computing with memristive neural networks” -- Zhongrui Wang, Can Li, Saumil Joshi, Rivu Midya, Qiangfei Xia, <b>J. Joshua Yang</b></li>
<li>1:30 - 3:10 PM - Presentations</li>
<ul>
<li>1:30 - 1:55 PM - “Gridbot: An autonomous robot controlled by a Spiking Neural Network mimicking the brain's navigational system” -- <b>Guangzhi Tang</b> and Konstantinos Michmizos</li>
<li>1:55 - 2:20 PM - “A Neural-Astrocytic Network Architecture: Astrocytic calcium waves modulate synchronous neuronal activity” -- Ioannis Polykretis, Vladimir Ivanov and <b>Konstantinos Michmizos</b></li>
<li>2:20 - 2:45 PM - “Neuromorphic hardware implementation of spiking algorithms for Markov random walks” -- <b>James Aimone</b>, Aaron Hill, Rich Lehoucq, Ojas Parekh and William Severa</li>
<li>2:45 - 3:10 PM - “A Summary of Neuromorphic Computing Research at Duke Center for Evolutionary Intelligence" - <b>Yiran Chen</b></li>
</ul>
<li>3:10 - 3:40 PM - Break</li>
<li>3:40 - 4:55 PM - Lightning Talks</li>
<ul>
<li>3:40 - 3:55 PM - “Design of superconducting optoelectronic networks for neuromorphic computing” -- <b>Sonia Buckley</b>, Adam McCaughan, Jeffrey Chiles, Richard Mirin, Sae Woo Nam, Jeffrey Shainline, Catherine Schuman and James Plank</li>
<li>3:55 - 4:10 PM - “Harnessing network dynamics in self-assembled atomic-switch networks for brain-inspired computation” -- <b>Saurabh Bose</b>, Shota Shirai, Josh Mallinson, Susant Acharya, Edoardo Galli and Simon Brown</li>
<li>4:10 - 4:25 PM - “Nano/Micro-Electro-Mechanical System Neuromorphic Computers” -- Mostafa Rafaie, Mohammad Hasan and <b>Fadi Alsaleem</b></li>
<li>4:25 - 4:40 PM - “<a href="presentations/mitchell_danna2_icons.pdf">DANNA 2: Dynamic Adaptive Neural Network Arrays</a>” -- <b>J. Parker Mitchell</b>, Mark Dean, Grant Bruer, James Plank and Garrett Rose</li>
<li>4:40 - 4:55 PM - “<a href="presentations/ICONS_2018_Najem.pdf">Electrostriction, capacitive susceptibility, and neuromorphic computing with biomembrane</a>” -- <b>Joseph Najem</b>, Graham Taylor, Ryan Weiss, Md-Sakib Hasan, Garrett Rose, Catherine Schuman, A. Belianinov, Stephen Sarles and Charles Collier</li>
</ul>
<li>4:55 - 5:10 PM - First day wrap-up and instructions for dinner</li>
<li>6:00 - 8:00 PM - Dinner at Lonesome Dove, sponsored by Knowm and Duke University</li>
</ul>
<p class="lead">Wednesday, July 25, 2018</p>
<ul>
<li>8:00 - 9:00 AM - Registration/Coffee</li>
<li>9:00 - 9:15 AM - Welcome/Introduction</li>
<li>9:15 - 10:15 AM - Keynote Presentation: Elisabetta Chicca</li>
<li>10:15 - 10:45 AM - Break</li>
<li>10:45 AM - 12:00 PM - Presentations
<ul>
<li>10:45 - 11:10 AM - “Modeling Memristor Radiation Interaction Events and the Effect on Neuromorphic Learning Circuits” -- <b>Sumedha Dahl</b>, Robert Ivans and Kurtis Cantley</li>
<li>11:10 - 11:35 AM - “Relative Efficiency of Memristive and Digital Neuromorphic Crossbars” -- <b>Christopher Krieger</b>, David Mountain and Mark McLean</li>
<li>11:35 AM - 12:00 PM - “<a href="presentations/ICONS_2018_Hongyu_15.pdf">Learning Accuracy Analysis of Memristor-based Nonlinear Computing Module on Long Short-term Memory</a>” -- <b>Hongyu An</b>, Mohammad Al-Mamun, Zhen Zhou, Marius Orlowski and Yang Yi</li>
</ul>
<li>12:00 - 1:30 PM - Lunch and Lunch Talk: "<a href="presentations/2018-07-25-DOE_Pino_ICONS.pdf">DOE Programmatic Activities in Advanced Computing Technologies: Beyond Moore’s Law Computing</a>" -- Robinson Pino</li>
<li>1:30 - 3:05 PM - Presentation and Lightning Talk</li>
<ul>
<li>1:30 - 1:55 PM - “Four Simulators of the DANNA Neuromorphic Computing Architecture” -- <b>Adam Disney</b>, James Plank and Mark Dean</li>
<li>1:55 - 2:20 PM - “Towards adaptive spiking label propagation” -- <b>Kathleen Hamilton</b> and Catherine Schuman</li>
<li>2:20 - 2:35 PM - “Whetstone: An accessible, platform-independent method for training spiking deep neural networks for neuromorphic processors” -- <b>William Severa</b>, Craig Vineyard, Ryan Dellana and James Aimone</li>
<li>2:35 - 2:50 PM - “Radiographic Inference Based on a Model of V1 Simple Cells Implemented on the D-Wave 2X Quantum Annealing Computer” -- Nga Nguyen and <b>Garrett Kenyon</b></li>
<li>2:50 - 3:05 PM - “Efficient Classification of Supercomputer Failures Using Neuromorphic Computing” -- <b>Prasanna Date</b>, Christopher Carothers, Malik Magdon-Ismail and James Hendler</li>
</ul>
<li>3:05 - 3:30 PM - Break</li>
<li>3:30 - 4:00 PM - Lightning Talks</li>
<ul>
<li>3:30-3:45 PM - “Retinal-Inspired Algorithms for Detection of Moving Objects” -- <b>Frances Chance</b> and Christina Warrender</li>
<li>3:45 - 4:00 PM - “<a href="presentations/HRL-ICONS2018.pptx">A Dynamical Systems Approach to Neuromorphic Computation of Conditional Probabilities</a>” -- <b>Nigel Stepp</b> and Aruna Jammalamadaka</li>
</ul>
<li>4:00 - 4:30 PM - Poster Introductions</li>
<li>4:30 - 6:00 PM - Poster Session and <a href="https://goo.gl/forms/APXdnjsXfjjrVGkw1">Student Poster Competition</a>, sponsored by Intel</li>
</ul>
<p class="lead">Thursday, July 26, 2018</p>
<ul>
<li>8:00 - 9:00 AM - Registration/Coffee</li>
<li>9:00 - 9:30 AM - Welcome/Introduction</li>
<li>9:30 - 10:00 AM - Invited Talk: "<a href="presentations/20180726_Womble_ICONS_presentation.pdf">Artificial Intelligence and Machine Learning: Issues and Opportunities</a>" -- David Womble</li>
<li>10:00 - 10:30 AM - Break</li>
<li>Lightning Talks</li>
<ul>
<li>10:30 - 10:45 AM - “<a href="presentations/ICONS_2018_Kendall.pptx">Memristive Liquid State Machine with MN3 Technology</a>” -- <b>Jack Kendall</b> and Vishal Pathak</li>
<li>10:45 - 11:00 AM - “Stochastic Digital Spike-timing-dependent Plasticity Implementation for Memristive Neuromorphic System” -- <b>Bon Woong Ku</b>, Md Musabbir Adnan, Catherine D. Schuman, Tiffany Mintz, Raphael Pooser, Garrett S. Rose and Sung Kyu Lim</li>
<li>11:00 - 11:15 AM - “<a href="presentations/ICONS_2018_RossPantone_SmallWorld.pdf">Small-World Connectivity Exhibited in Memristive Nanowires</a>” -- <b>Ross Pantone</b>, Jack Kendall and Juan Nino</li>
</ul>
<li>11:15 AM - 12:00 PM - Concluding Remarks</li>
</ul>
<!--
<hr class="featurette-divider">
<p class="lead">Monday, July 17, 2017</p>
<ul>
<li>8:00-9:00 AM - Registration/Coffee</li>
<li>9:00-9:30 AM - Welcome</li>
<li>9:30-10:30 AM - <a href="presentations/cschuman_NeuromorphicSurveyKeynote.pdf">Neuromorphic Community Overview</a> Keynote Presentation: <a href="http://catherineschuman.com">Catherine Schuman</a></li>
<li>10:30-11:00 AM - Break</li>
<li>11:00 AM-12:05 PM - Presentations</li>
<ul>
<li>11:00-11:25 AM - “Efficient Hardware Implementation of Cellular Neural Networks with Powers-of-Two Based Incremental Quantization” -- Xiaowei Xu, Qing Lu, Tianchen Wang, Jinglan Liu, Yu Hu and <b>Yiyu Shi</b></li>
<li>11:25-11:50 AM - “A Multi-Level Optimization Framework for Efficient FPGA-Based Cellular Neural Network Implementation” -- Zhongyang Liu, Shaoheng Luo, Xiaowei Xu, <b>Yiyu Shi</b> and Cheng Zhuo</li>
<li>11:50 AM-12:05 PM - “<a href="presentations/mitchell_bruer_NeuromorphicNavigation.pdf">Neuromorphic Navigation with DANNA</a>” – <b>J. Parker Mitchell, Grant Bruer</b>, and Mark Dean</li>
</ul>
<li>12:05-1:30 PM - Working Lunch - Keynote: “Nexus of Machine Learning, Neuromorphic Computing, High Performance Computing, and the Million Veteran Program (MVP)” - <a href="https://www.energy.gov/diversity/contributors/dimitri-kusnezov">Dimitri Kusnezov</a></li>
<li>1:30-3:10 PM - Presentations</li>
<ul>
<li>1:30-1:55 PM - “<a href="presentations/cschuman_TemporalScientificData_NeuromorphicSymposium_2017.pdf">Neuromorphic Computing for Temporal Scientific Data Classification</a>” - <b>Catherine Schuman</b>, Thomas Potok, Robert Patton, Steven Young, Gangotree Chakma, Austin Wyer and Garrett Rose</li>
<li>1:55-2:20 PM - “Neuromorphic Data Microscope” -- John Naegle, <b>David Follett</b>, Conrad James, Brad Aimone, Roger Suppona, Duncan Townsend and Gabe Karpman</li>
<li>2:20-2:45 PM - “Community detection with spiking neural networks for neuromorphic hardware” - <b>Kathleen Hamilton</b>, Neena Imam and Travis Humble</li>
<li>2:45-3:10 PM - “<a href="presentations/Aimone-NCAMA2017-Presentation.pdf">Neural Computing for Scientific Computing Applications - More than Just Machine Learning</a>” – <b>Brad Aimone</b>, Ojas Parekh and William Severa</li>
</ul>
<li>3:10-3:30 PM - Break</li>
<li>3:30-4:00 PM - Poster Slam</li>
<li>4:00-5:30 PM - Poster/Demo Session</li>
<li>Adjourn at 5:30 PM</li>
<li>6:00-8:00 PM - Dinner at The Lonesome Dove in Knoxville, sponsored by Knowm and Duke University</li>
</ul>
<hr class="featurette-divider">
<p class="lead">Tuesday, July 18, 2017</p>
<ul>
<li> 8:00-9:00 AM - Registration/Coffee</li>
<li>9:00-9:10 AM - Welcome/Recap</li>
<li>9:10-10:10 AM - Keynote Presentation: "<a href="presentations/CNMS_presentation_2017-0718_Christen.pdf">Research and user capabilities at ORNL’s Center for Nanophase Materials Sciences</a>" -- <a href="https://www.ornl.gov/staff-profile/hans-m-christen">Hans Christen</a></li>
<li>10:10-10:30 AM - Break</li>
<li>10:30 AM -12:15 PM - Presentations</li>
<ul>
<li>10:30-10:55 AM - “Improving Neuromorphic Computing Efficiency with Sparse and Light Neural Networks” – <b>Yiran Chen</b></li>
<li>10:55-11:20 AM - “From Meta-Stable Switch Collections to Compositional Machine Learning” – <b>Alex Nugent</b> and Tim Molter</li>
<li>11:20-11:45 AM - “Spatio-Temporal Features on an Energy Budget: Transfer Learning with Deep CNNs and Reservoirs” -- Dillon Graham, Seyed Hamed F. Longroudi, Christopher Kanan, <b>Dhireesha Kudithipudi</b></li>
<li>11:45 AM-12:00 PM - “From Topological Skyrmion to Biological Spike: Developing All-Skyrmion Spiking Neural Network” – Zhezhi He and <b>Deliang Fan</b></li>
<li>12:00-12:15 PM - “<a href="presentations/Kendall_Memristive_Nanowire_Neural_Networks.pdf">Memristive Nanowire Networks</a>” - <b>Jack Kendall</b> and Juan Nino</li>
</ul>
<li>12:15-1:30 PM - Working Lunch - Importance of Co-Design</li>
<li>1:30-2:30 PM - Keynote Presentation: "<a href="presentations/RSWilliams_DoE_NeuroSymposium_2017.pdf">Inspired by the Brain: Computing from Architecture to Devices</a>" – <a href="https://www.hpe.com/h20195/v2/getpdf.aspx/c05139649.pdf?ver=1.0">Stan Williams</a></li>
<li>2:30-3:00 PM - Break</li>
<li>3:00-4:15 PM - Presentations</li>
<ul>
<li>3:00-3:25 PM - “When Energy Efficient Spike-Based Temporal Encoding Meets Resistive Crossbar: From Circuit Design to Application” -- Chenyuan Zhao, Jialing Li, <b>Hongyu An</b>, and Yang Yi</li>
<li>3:25-3:50 PM - “<a href="presentations/wyer_evaluating_online_learning.pdf">Evaluating Online-Learning in Memristive Neuromorphic Circuits</a>” - <b>Austin Wyer</b>, Md Musabbir Adnan, Bon Woong Ku, Sung Kyu Lim, Catherine D. Schuman, Raphael C. Pooser and Garrett S. Rose</li>
<li>3:50-4:15 PM - “IMC: Energy-Efficient In-Memory Convolver for Accelerating Binarized Deep Neural Network” -- Shaahin Angizi and <b>Deliang Fan</b></li>
</ul>
<li>4:15-5:00 PM - Posters/Demos</li>
<li>5:00 PM - Adjourn – Dinner on your own</li>
</ul>
<hr class="featurette-divider">
<p class="lead">Wednesday, July 19, 2017</p>
<ul>
<li> 8:00-9:00 AM - Registration/Coffee</li>
<li>9:00-9:10 AM - Welcome/Recap</li>
<li>9:10-10:10 AM - Keynote Presentation: “The DOE Neuromorphic Computing Research Program” - <a href="https://science.energy.gov/ascr/about/dr-robinson-e-pino/">Robinson Pino</a></li>
<li>10:10-10:40 AM - Break</li>
<li>10:40-11:30 AM - Presentations</li>
<ul>
<li>10:40-11:05 AM - “3D Memristor-based Adjustable Deep Recurrent Neural Network with Programmable Attention Mechanism” – <b>Hongyu An</b> and Yang Yi</li>
<li>11:05-11:30 AM - “<a href="presentations/yu_cao_M3D_v1.pdf">Monolithic 3D IC Design for Deep Neural Networks</a>” -- Kyungwook Chang, Deepak Kadetotad, <b>Yu Cao</b>, Jae-Sun Seo and Sung-Kyu Lim</li>
</ul>
<li>11:30 AM - 1:00 PM - Working Lunch - Paths Forward</li>
<li>1:00-1:40 PM - Presentations</li>
<ul>
<li>1:00-1:25 PM - "<a href="presentations/plank-2017-07-19-Neuro-Comp-Sym.pdf">A Software Stack for Neuromorphic Computing</a>” – <b>James Plank</b>, Mark Dean, Garrett Rose, and Catherine D. Schuman</li>
<li>1:25-1:40 PM - “<a href="presentations/plagge-NeMo2-Viz-Short-final.pdf">Simulation and Visualization of Custom Neuromorphic Hardware using NeMo</a>” - <b>Mark Plagge</b>, Neil McGlohon, Caitlin Ross and Christopher D. Carothers</li>
</ul>
<li>1:40-2:05 PM - “Memristor Crossbar Based Winner Take All Circuit Design for Self-organization” – Raqibul Hasan and <b>Tarek Taha</b></li>
<li>2:05-2:30 PM - “Peptide-doped lipid membranes as synaptic mimics for neuromorphic computing” – <b>Pat Collier</b>, Andy Sarles, and Joseph Najem</li>
<li>2:30-3:00 PM - End Remarks</li>
<li>3:00 - Adjourn </li>
</ul>
-->
<!--
<p class="lead">Wednesday, June 29, 2016</p>
<ul>
<li>10:30-11:30 - Check-In/Badge Pick-Up (ORNL Visitor's Center)</li>
<li>11:00-12:30 - Welcome/Working Lunch
<ul>
<li>Shaun Gleason, ORNL <a href="presentations/ShaunGleasonORNL&CCSDNeuromorphicWorkshop.pdf">(presentation)</a></li>
<li>Robinson Pino, DOE ASCR <a href="presentations/Pino_DOE_SC_ASCR_Neuromorphic_v1.pdf">(presentation)</a></li>
<li>Tom Potok, ORNL</li>
</ul>
</li>
<li>12:30-1:15 - Keynote: Todd Hylton, Brain Corporation <a href="presentations/Hylton-ORNLNeuromorphicComputingtalk-June2016.pdf">(presentation)</a></li>
<li>1:15-2:00 - Keynote: Catherine Schuman, Oak Ridge National Laboratory <a href="presentations/KatieSchumanKeynote.pdf">(presentation)</a></li>
<li>2:00-2:45 - Keynote: Cindy Leiton, Stony Brook University <a href="presentations/NCW_slides_final_cindy_leiton.pdf">(presentation)</a></li>
<li>2:45-3:30 - Break</li>
<li>3:30-4:00 - Presentation: Lloyd Whitman, White House Office of Science and Technology Policy <a href="presentations/Whitman-DOE-Neuromorphic-GrandChallenge-2016-06-29-handouts.pdf">(presentation)</a></li>
<li>4:00-5:00 - Focus Area Overview: Architectures/Models, Algorithms, Applications</li>
<li>5:00-5:30 - Summary</li>
</ul>
<hr class="featurette-divider">
<p class="lead">Thursday, June 30, 2016</p>
<ul>
<li>8:00-8:30 - Coffee/Light Breakfast</li>
<li>8:30-8:40 - Welcome/Recap</li>
<li>8:40-10:00 - Short Presentations (20 minutes)
<ul>
<li>James Aimone, Kristofor Carlson and Fredrick Rothganger: <a href="presentations/Aimone_NeuralComputingScaleComplexity-OakRidge.pdf">Neural Computing: What Scale and Complexity is Needed?</a></li>
<li>Alice Parker: <a href="presentations/Parker_OakRidge_Final.pptx">Object Recognition and Learning using the BioRC Biomimetic Real-Time Cortex Neurons</a></li>
<li>Kathleen Hamilton, Alexander McCaskey, Jonathan Schrock, Neena Imam and Travis Humble: <a href="presentations/Associative_Memory_Models_with_Adiabatic_Quantum_Computation.pdf">Associative Memory Models with Adiabatic Quantum Optimization</a></li>
<li>Tinoosh Mohsenin and Farinaz Koushanfar: <a href="presentations/DOE2016-mohsenin-koushanfar.pdf">Bringing Physical Dimensions to the Deep Networks for Neuromorphic Computing</a></li>
</ul>
<li>10:00-10:30 - Break</li>
<li>10:30-12:00 - Short Presentations (20 minutes)
<ul>
<li>Yuan Xie: Architecture, ISA support, and Software Toolchain for Neuromorphic Computing in ReRAM Based Main Memory</li>
<li>Angel Yanguas-Gil: <a href="presentations/angel_yanguas-gil_neuro.pdf">Beyond the crossbar: materials based design and emulation of neuromemristive devices and architectures</a></li>
<li>Matthew J. Marinella, Sapan Agarwal, A. Alec Talin, Conrad D. James and F. Rick McCormick: Device to System Modeling Framework to Enable a 10 fJ per Instruction Neuromorphic Computer</li>
<li>Chris Carothers, Noah Wolfe, Prasanna Date, Mark Plagge, and Jim Hendler: <a href="presentations/Wolfe-Large-Scale_Hybrid_Neuromorphic_HPC_Simulations_Algorithms_and_Applications.pdf">Large-Scale Hybrid Neuromorphic HPC Simulations, Algorithms and Applications</a></li>
</ul>
<li>12:00-1:00 - Lunch and <a href="presentations/stanwilliams_doe_neuromorphic_workshop_july.pptx">Plenary by Stan Williams</a>.</li>
<li>1:00-2:30 - Short Presentations (20 minutes)
<ul>
<li>Priyadarshini Panda and Kaushik Roy: Enabling on-chip intelligence with low-power neuromorphic computing</li>
<li>Yu Cao, Steven Skorheim, Ming Tu, Pai-Yu Chen, Shimeng Yu, Jae-Sun Seo, Visar Berisha, Maxim Bazhenov and Zihan Xu: <a href="presentations/Yu-Cao-RHINO-v2.pdf">Efficient Neuromorphic Learning with Motifs of Feedforward Inhibition</a></li>
<li>Praveen Pilly, Nigel Stepp and Jose Cruz-Albrecht: <a href="presentations/ORNL_HRLs_Neuromorphic_redux.pdf">Exploiting Criticality in HRL's Latigo Neuromorphic Device</a></li>
<li>Yiran Chen: <a href="presentations/YiranChen_ORNL2016.pdf">Algorithm Innovations of Enhancing Scalability and Adaptability of Learning Systems</a></li>
</ul>
<li>2:30-3:00 - Break</li>
<li>3:00-3:45 - Lightning Talks (10 minutes)</li>
<ul>
<li>Vishal Saxena and Xinyu Wu: <a href="presentations/Oak_Ridge_Presentation_June_16_Public_Release_vishal_saxena.pdf">Addressing Challenges in Neuromorphic Computing with Memristive Synapses</a></li>
<li>Tarek Taha, Raqibul Hasan and Chris Yakopcic: <a href="presentations/taha_dayton_nca.pdf">Energy Efficiency and Throughput of Multicore Memristor Crossbar Based Neuromorphic Architectures</a></li>
<li>Dhireesha Kudithipudi, James Mnatzaganian, Anvesh Polepalli, Nicholas Soures, and Cory Merkel: <a href="presentations/Kudithipudi-ORNL-Neuromorphic-Workshop-2016.pdf">Energy Efficient and Scalable Neuromemristive Computing Substrates</a></li>
<li>Gangotree Chakma, Elvis Offor, Mark Dean and Garrett Rose: <a href="presentations/GRose_mrDANNA_NCAMA16.pdf">A Reconfigurable Memristive DANNA Circuit with Implementations in Pattern Recognition</a></li>
</ul>
<li>3:45-5:00 - Presenter Panel Discussion, moderated by Mark Dean</li>
<li>5:00-5:30 - Summary</li>
</ul>
<hr class="featurette-divider">
<p class="lead">Friday, July 1, 2016</p>
<ul>
<li>7:30-8:00 - Coffee/Light Breakfast</li>
<li>8:00-10:00 - Breakout Discussion Sessions
<ul>
<li>Workshop Report Content Discussions</li>
<li>Example DOE Workshop Report: <a href="http://science.energy.gov/~/media/bes/pdf/reports/2016/NCFMtSA_rpt.pdf">Neuromorphic Computing: From Materials to Systems Architecture</a></li>
</ul>
</li>
<li>10:00-10:30 - Break</li>
<li>10:30-11:45 - Breakout Discussion Sessions
<ul>
<li>Workshop Report Content Discussions</li>
<li>Example DOE Workshop Report: <a href="http://science.energy.gov/~/media/bes/pdf/reports/2016/NCFMtSA_rpt.pdf">Neuromorphic Computing: From Materials to Systems Architecture</a></li>
</ul>
<li>11:45-12:00 - Final Wrap-Up</li>
</ul>
</li>
-->
<!--<p class="lead">Location: Oak Ridge National Laboratory Conference Center</p>-->
<!-- <p class="lead">Date: July 17, 2017 - July 19, 2016 </p>
<p class="lead"> Schedule: TBA </p>-->
<!--<p class="lead">Time: 2:00pm - 5:30pm</p>-->
</div>
<div class="col-md-5">
<!-- <img class="featurette-image img-responsive" data-src="holder.js/500x500/auto" alt="Generic placeholder image"> -->
</div>
</div>
<!-- <hr class="featurette-divider">-->
<div class="row featurette">
<!--
<div class="col-md-7">
<h2 class="lead">Keynote: Bryan Catanzaro<br>
2:00pm - 3:00pm</h2>
<p class="lead"><span class="text-muted">Baidu Research Silicon Valley Artificial Intelligence Laboratory</span></p>
<p>During the past few years, deep learning has made incredible progress towards solving many previously difficult Artificial Intelligence (AI) tasks. Although the techniques behind deep learning have been studied for decades, they rely on large datasets and large computational resources, and so have only recently become practical for many problems. Training deep neural networks is very computationally intensive: training one model takes tens of exaflops of work, and so HPC techniques are key to creating these models. As in other fields, progress in AI is iterative, building on previous ideas. This means that the turnaround time in training models is a key bottleneck to progress in AI: the quicker an idea can be realized as a trainable model, trained on a large dataset, and tested, the quicker ways can be found to improve the models. In this talk, Catanzaro will discuss the key insights that make deep learning work for many problems, describe the training problem, and detail the use of standard HPC techniques that allow him to rapidly iterate on his models. He will explain how HPC ideas are becoming increasingly central to progress in AI and will also show several examples of how deep learning is helping solve difficult AI problems.</p>
</div>
-->
<!--<div class="col-md-5">-->
<!--<img class="featurette-image img-responsive" src="bio_images/Catanzaro.jpg" alt="Bryan Catanzaro">-->
<!--
</div>
-->
</div>
<!-- <hr class="featurette-divider">-->
<div class="row featurette">
<!--
<div class="col-md-7">
<h2 class="lead">Coffee Break<br>
3:00pm - 3:30pm</h2>
</div>
</div>
<hr class="featurette-divider">
<div class="row featurette">
<div class="col-md-7">
<h2 class="lead">Asynchronous Parallel Stochastic Gradient Descent - A Numeric Core for Scalable Distributed Machine Learning Algorithms<br>
3:30pm - 3:55pm</h2>
<p class="lead"><span class="text-muted">Janis Keuper and Franz-Josef Pfreundt</h2></span></p>
<p>The implementation of a vast majority of machine learning (ML) algorithms boils down
to solving a numerical optimization problem. In this context, Stochastic
Gradient Descent (SGD) methods have long proven to provide good results, both
in terms of convergence and accuracy. Recently, several parallelization approaches
have been proposed in order to scale SGD to solve very large ML problems.
At their core, most of these approaches are following a MapReduce scheme.
This paper presents a novel parallel updating algorithm for SGD, which utilizes
the asynchronous single-sided communication paradigm.
Compared to existing methods, Asynchronous Parallel Stochastic Gradient Descent (ASGD) provides faster convergence,
at linear scalability and stable accuracy.</p>
</div>
</div>
<hr class="featurette-divider">
<div class="row featurette">
<div class="col-md-7">
<h2 class="lead">HPDBSCAN – Highly Parallel DBSCAN<br>
3:55pm - 4:20pm</h2>
<p class="lead"><span class="text-muted">Markus Götz, Christian Bodenstein and Morris Riedel</h2></span></p>
<p>Clustering algorithms in the field of data-mining are used
to aggregate similar objects into common groups. One of
the best-known of these algorithms is called DBSCAN. Its
distinct design enables the search for an a priori unknown
number of arbitrarily shaped clusters, and at the same time
allows filtering out noise. Due to its sequential formulation, the parallelization of DBSCAN presents a challenge. In
this paper we present a new parallel approach which we call
HPDBSCAN. It employs three major techniques in order
to break the sequentiality, empower workload-balancing as
well as speed up neighborhood searches in distributed parallel processing environments: i) a computation split heuristic
for domain decomposition, ii) a data index preprocessing
step and iii) a rule-based cluster merging scheme.
As a proof-of-concept we implemented HPDBSCAN as an
OpenMP/MPI hybrid application. Using real-world data
sets, such as a point cloud from the old town of Bremen,
Germany, we demonstrate that our implementation is able
to achieve a significant speed-up and scale-up in common
HPC setups. Moreover, we compare our approach with previous attempts to parallelize DBSCAN showing an order of
magnitude improvement in terms of computation time and
memory consumption.</p>
</div>
</div>
<hr class="featurette-divider">
<div class="row featurette">
<div class="col-md-7">
<h2 class="lead">LBANN: Livermore Big Artificial Neural Network HPC Toolkit<br>
4:20pm - 4:45</h2>
<p class="lead"><span class="text-muted">Brian Van Essen, Hyojin Kim, Roger Pearce, Kofi Boakye and Barry Chen</span></p>
<p>Recent successes of deep learning have been largely driven by the ability to train large models on vast amounts of data. We believe that High Performance Computing (HPC) will play an increasingly important role in helping deep learning achieve the next level of innovation fueled by neural network models that are orders of magnitude larger and trained on commensurately more training data. We are targeting the unique capabilities of both current and upcoming HPC systems to train massive neural networks and are developing the Livermore Big Artificial Neural Network (LBANN) toolkit to exploit both model and data parallelism optimized for large scale HPC resources. This paper presents our preliminary results in scaling the size of model that can be trained with the LBANN toolkit.</p>
</div>
</div>
<hr class="featurette-divider">
<div class="row featurette">
<div class="col-md-7">
<h2 class="lead">Optimizing Deep Learning Hyper-Parameters Through an Evolutionary Algorithm<br>
4:45pm - 5:10pm</h2>
<p class="lead"><span class="text-muted">Steven Young, Derek Rose, Thomas Karnowski, Seung-Hwan Lim and Robert Patton</h2></span></p>
<p>There has been a recent surge of success in utilizing Deep Learning (DL) in imaging and speech applications for its relatively automatic feature generation and, in particular for convolutional neural networks (CNNs), high accuracy classification abilities. While these models learn their parameters through data-driven methods, model selection (as architecture construction) through hyper-parameter choices remains a tedious and highly intuition driven task. To address this, Multi-node Evolutionary Neural Networks for Deep Learning (MENNDL) is proposed as a method for automating network selection on computational clusters through hyper-parameter optimization performed via genetic algorithms.</p>
</div>
</div>
<hr class="featurette-divider">
<div class="row featurette">
<div class="col-md-7">
<h2 class="lead">Dynamic Adaptive Neural Network Arrays: A Neuromorphic Architecture<br>
5:10pm - 5:30pm</h2>
<p class="lead"><span class="text-muted">Catherine Schuman, Adam Disney and John Reynolds</h2></span></p>
<p>Dynamic Adaptive Neural Network Array (DANNA) is a
neuromorphic hardware implementation. It differs from most
other neuromorphic projects in that it allows for programmability of structure, and it is trained or designed using evolutionary optimization. This paper describes the DANNA
structure, how DANNA is trained using evolutionary optimization, and an application of DANNA to a very simple
classification task.</p>
</div>
</div>
-->
<hr class="featurette-divider">
<!--
<hr class="featurette-divider">
Program
<div class="row featurette">
<div class="col-md-7">
<h2 class="featurette-heading">And lastly, this one. <span class="text-muted">Checkmate.</span></h2>
<p class="lead">Donec ullamcorper nulla non metus auctor fringilla. Vestibulum id ligula porta felis euismod semper. Praesent commodo cursus magna, vel scelerisque nisl consectetur. Fusce dapibus, tellus ac cursus commodo.</p>
</div>
<div class="col-md-5">
<img class="featurette-image img-responsive" data-src="holder.js/500x500/auto" alt="Generic placeholder image">
</div>
</div>
<hr class="featurette-divider">
-->
<!-- /END THE FEATURETTES -->
<!-- FOOTER -->
<footer>
<div style="width: 100%;overflow:auto;">
<div style="float:left; width: 50%">
<a name="contact"></a>
<p><b>Contact: Thomas Potok</b>, potokte "at" ornl.gov</p>
<p>© 2015 Oak Ridge National Laboratory</p>
</div>
<!--
<div style="float:right;">
<p>In cooperation with</p>
<a class="brand" href="http://www.sighpc.org/"> <img src="images/sighpc_logo_72dpi.jpg" alt="Machine Learning in HPC Environments"></a>
</div>
-->
</div>
<p class="pull-right"><a href="#">Back to top</a></p>
</footer>
</div><!-- /.container -->
<!-- Bootstrap core JavaScript
================================================== -->
<!-- Placed at the end of the document so the pages load faster -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
<script src="./dist/js/bootstrap.min.js"></script>
<script src="./assets/js/docs.min.js"></script>
</body>
</html>