<!DOCTYPE html>
<html lang="zxx" class="no-js">
<head>
<!-- Mobile Specific Meta -->
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<!-- Favicon-->
<link rel="shortcut icon" href="img/fav.png">
<!-- Author Meta -->
<meta name="author" content="colorlib">
<!-- Meta Description -->
<meta name="description" content="">
<!-- Meta Keyword -->
<meta name="keywords" content="">
<!-- meta character set -->
<meta charset="UTF-8">
<!-- Site Title -->
<title>PeAR WPI</title>
<link href="https://fonts.googleapis.com/css?family=Poppins:100,200,400,300,500,600,700" rel="stylesheet">
<!--
CSS
============================================= -->
<link rel="stylesheet" href="css/linearicons.css">
<link rel="stylesheet" href="css/font-awesome.min.css">
<link rel="stylesheet" href="css/bootstrap.css">
<link rel="stylesheet" href="css/magnific-popup.css">
<link rel="stylesheet" href="css/nice-select.css">
<link rel="stylesheet" href="css/animate.min.css">
<link rel="stylesheet" href="css/owl.carousel.css">
<link rel="stylesheet" href="css/jquery-ui.css">
<link rel="stylesheet" href="css/main.css">
<link href="css/icofont/icofont.min.css" rel="stylesheet">
<link href="css/remixicon/remixicon.css" rel="stylesheet">
<link href="css/owl.carousel/assets/owl.carousel.min.css" rel="stylesheet">
<link href="css/boxicons/css/boxicons.min.css" rel="stylesheet">
<link rel="stylesheet" href="https://cdn.rawgit.com/jpswalsh/academicons/master/css/academicons.min.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css">
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-171009851-1"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-171009851-1');
</script>
</head>
<body>
<!-- EDIT ME -->
<header id="header">
<div class="container main-menu">
<div class="row align-items-center justify-content-between d-flex">
<!-- style="margin-left: -36vh; margin-right: -36vh" -->
<div id="logo">
<a href="https://www.wpi.edu/" style="font-size: 24px; font-weight: 600; color: #ddd"><img src="img/logos/WPILogo2.png" width="48px" alt="" title=""> </a><a href="index.html" style="font-size: 24px; font-weight: 600; color: #ddd"><img src="img/logos/LogoWhiteRed.png" width="48px" alt="" title=""> Perception and Autonomous Robotics Group</a>
</div>
<nav id="nav-menu-container">
<ul class="nav-menu">
<li><a title="Home" href="index.html" style="position: relative; top: -4px"><i style="font-size: 28px" class="fa fa-home"></i></a></li>
<li class="menu-has-children"><a title="Research" href="research.html">Research</a>
<ul>
<li><a href="research.html">Research Areas</a></li>
<!-- <li><a href="softwares.html">Softwares/Datasets</a></li> -->
<li><a href="publications.html">Publications/Softwares/Datasets</a></li>
<li><a href="labs.html">Research Labs And Facilities</a></li>
</ul>
</li>
<li><a title="Teaching" href="teaching.html">Teaching</a></li>
<li><a title="Media" href="media.html">Media</a></li>
<li><a title="Openings" href="openings.html">Openings</a></li>
<li><a title="Events" href="events.html">Events</a></li>
</ul>
</nav><!-- #nav-menu-container -->
</div>
</div>
</header> <!-- EDIT ME -->
<!-- Start Sample Area -->
<section class="sample-text-area">
<div class="container">
<h3 class="text-heading">Neuromorphic Event-based Sensing and Computing</h3>
<p class="sample-text">
Neuromorphic event-based sensors are bio-inspired sensors that work like our eyes. Instead of transmitting entire frames, they only report pixel-level changes in light intensity, called events, caused by motion of the scene or the camera. These events have a high dynamic range, low latency (on the order of microseconds) and no motion blur. We couple these sensors with neuromorphic neural networks called Spiking Neural Networks for perception and autonomy.
</p>
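<p class="sample-text">
As a rough illustration only (not taken from any of the works below), an event can be thought of as a tuple of pixel location, timestamp and polarity. The sketch below accumulates such events into a signed per-pixel count image for visualization; the function and field names are our own assumptions, not any specific camera driver's API.
</p>
<pre><code>
// Illustrative sketch only: accumulate DVS-style events (x, y, t, polarity)
// into a 2D count image. Field names and layout are assumptions for this
// example, not the representation used by any paper on this page.
function eventsToFrame(events, width, height) {
  const frame = new Int16Array(width * height); // signed per-pixel event count
  for (const { x, y, polarity } of events) {
    // +1 for a brightness increase, -1 for a decrease
    frame[y * width + x] += polarity > 0 ? 1 : -1;
  }
  return frame;
}

// Example: three synthetic events on a 4x4 sensor (timestamps in seconds)
const demo = [
  { x: 1, y: 2, t: 0.000010, polarity: +1 },
  { x: 1, y: 2, t: 0.000025, polarity: +1 },
  { x: 3, y: 0, t: 0.000031, polarity: -1 },
];
console.log(eventsToFrame(demo, 4, 4));
</code></pre>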
</div>
</section>
<!-- End Sample Area -->
<!-- Start Align Area -->
<div class="whole-wrap">
<div class="container">
<div class="section-top-border">
<h3 class="mb-30">EVPropNet</h3>
The rapidly growing accessibility of unmanned aerial vehicles, or drones, poses a threat to general security and confidentiality. Most commercially available or custom-built drones are multi-rotors comprising multiple propellers. Since these propellers rotate at high speed, they are generally the fastest moving parts in an image and cannot be directly "seen" by a classical camera without severe motion blur. We utilize a class of sensors that are particularly suitable for such scenarios called event cameras, which have a high temporal resolution, low latency, and high dynamic range.<br><br>
In this paper, we model the geometry of a propeller and use it to generate simulated events, which are used to train a deep neural network called EVPropNet to detect propellers from the data of an event camera. EVPropNet transfers directly to the real world without any fine-tuning or retraining. We present two applications of our network: (a) tracking and following an unmarked drone and (b) landing on a near-hover drone. We successfully evaluate and demonstrate the proposed approach in many real-world experiments with different propeller shapes and sizes. Our network can detect propellers at a rate of 85.1% even when 60% of the propeller is occluded and can run at up to 35 Hz on a 2 W power budget. To our knowledge, this is the first deep learning-based solution for detecting propellers (to detect drones). Finally, our applications also show an impressive success rate of 92% and 90% for the tracking and landing tasks, respectively.<br><br>
<h3> References</h3><br>
<div class="rowunmod">
<div class="col-lg-6 col-md-6 mt-sm-20 left-align-p" style="padding-left:0; padding-right:0">
<h4><a href="https://arxiv.org/abs/2106.15045">EVPropNet: Detecting Drones By Finding Propellers For Mid-Air Landing And Following</a></h4><br>
<div class="highlight-sec">
<h6>RSS 2021</h6>
</div>
<p>
<b>Nitin J. Sanket</b>, Chahat Deep Singh, Chethan M. Parameshwara, Cornelia Fermuller, Guido C.H.E. de Croon, Yiannis Aloimonos, <i>Robotics: Science and Systems (RSS)</i>, 2021.<br>
</p>
<h6>
<a href="https://arxiv.org/abs/2106.15045"><i class="fa fa-file-text-o"></i> Paper </a> <a href="http://prg.cs.umd.edu/EVPropNet"><i class="fa fa-globe"></i> Project Page </a> <a href="https://github.com/prgumd/EVPropNet"><i class="fa fa-github"></i> Code </a> <a href="http://umd.edu"><i class="fa fa-map-marker"></i> UMD </a>
<!-- <a href="research/evpropnet.html"><i class="fa fa-quote-right"></i> Cite </a> -->
</h6>
</div>
<div class="col-lg-6 col-md-6 mt-sm-20 right-align-p">
<img src="img/research/evpropnet.png" alt="" class="img-fluid" style="border-radius: 16px;">
</div>
</div>
<hr>
<h3 class="mb-30">SpikeMS</h3>
Spiking Neural Networks (SNNs) are the so-called third generation of neural networks, which attempt to more closely match the functioning of the biological brain. They inherently encode temporal data, allow for training with less energy usage, and can be extremely energy efficient when implemented on neuromorphic hardware. In addition, they are well suited for tasks involving event-based sensors, which match the event-based nature of the SNN. However, SNNs have not been as effectively applied to real-world, large-scale tasks as standard Artificial Neural Networks (ANNs) due to their algorithmic and training complexity. To exacerbate the situation further, the input representation is unconventional and requires careful analysis and deep understanding. In this paper, we propose SpikeMS, the first deep encoder-decoder SNN architecture for the real-world large-scale problem of motion segmentation using the event-based DVS camera as input. To accomplish this, we introduce a novel spatio-temporal loss formulation that includes both spike counts and classification labels, in conjunction with new techniques for SNN backpropagation. In addition, we show that SpikeMS is capable of incremental predictions, i.e., predictions from smaller amounts of test data than it was trained on. This is invaluable for providing outputs even with partial input data for low-latency applications and those requiring fast predictions. We evaluate SpikeMS on challenging synthetic and real-world sequences from the EV-IMO, EED and MOD datasets, achieving results on par with a comparable ANN method while using potentially 50 times less power.<br><br>
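<p>
As a rough, hedged illustration of how a spiking neuron differs from a standard artificial neuron, the sketch below implements a textbook leaky integrate-and-fire (LIF) unit; the decay and threshold values are arbitrary assumptions, and this is not the neuron model or code used in SpikeMS.
</p>
<pre><code>
// Illustrative sketch only: a single leaky integrate-and-fire (LIF) neuron.
// Parameter values are arbitrary assumptions, not SpikeMS's actual model.
function lifNeuron(inputs, decay, threshold) {
  let v = 0;                 // membrane potential
  const spikes = [];
  for (const current of inputs) {
    v = decay * v + current; // leak the old potential, integrate the input
    if (v >= threshold) {    // fire once the threshold is crossed...
      spikes.push(1);
      v = 0;                 // ...then reset the potential
    } else {
      spikes.push(0);
    }
  }
  return spikes;
}

// Example: two weak inputs followed by a stronger one trigger a single spike
console.log(lifNeuron([0.2, 0.2, 0.8, 0.9, 0.1, 0.0], 0.9, 1.0));
</code></pre>
<br>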
<h3> References</h3><br>
<div class="rowunmod">
<div class="col-lg-6 col-md-6 mt-sm-20 left-align-p" style="padding-left:0; padding-right:0">
<h4><a href="https://arxiv.org/abs/2105.06562">SpikeMS: Deep Spiking Neural Network for Motion Segmentation</a></h4><br>
<div class="highlight-sec">
<h6>IROS 2021</h6>
</div>
<p>
Chethan M. Parameshwara*, Simin Li*, Cornelia Fermuller, <b>Nitin J. Sanket</b>, Matthew S. Evanusa, Yiannis Aloimonos, <i>IEEE International Conference on Intelligent Robots and Systems (IROS)</i>, 2021.<br>
* Equal Contribution
</p>
<h6>
<a href="https://arxiv.org/abs/2105.06562"><i class="fa fa-file-text-o"></i> Paper </a> <a href="https://prg.cs.umd.edu/SpikeMS"><i class="fa fa-globe"></i> Project Page </a> <a href="https://github.com/prgumd/SpikeMS"><i class="fa fa-github"></i> Code </a> <a href="http://umd.edu"><i class="fa fa-map-marker"></i> UMD </a>
<!-- <a href="research/spikems.html"><i class="fa fa-quote-right"></i> Cite </a> -->
<!-- <a href="https://arxiv.org/abs/2006.06753"><i class="fa fa-file-text-o"></i> Paper </a> <a href="http://prg.cs.umd.edu/PRGFlow"><i class="fa fa-globe"></i> Project Page </a> <a href="https://github.com/prgumd/PRGFlow"><i class="fa fa-github"></i> Code </a> <a href="http://umd.edu"><i class="fa fa-map-marker"></i> UMD </a> <a href="research/prgflow.html"><i class="fa fa-quote-right"></i> Cite </a> -->
</h6>
</div>
<div class="col-lg-6 col-md-6 mt-sm-20 right-align-p">
<img src="img/research/spikems.png" alt="" class="img-fluid" style="border-radius: 16px;">
</div>
</div>
<hr>
<h3 class="mb-30">0-MMS</h3>
Segmentation of moving objects in dynamic scenes is a key process in scene understanding for navigation tasks. Classical cameras suffer from motion blur in such scenarios, rendering them ineffective. In contrast, event cameras, because of their high temporal resolution and lack of motion blur, are tailor-made for this problem. We present an approach for monocular multi-motion segmentation that combines bottom-up feature tracking and top-down motion compensation into a unified pipeline, the first of its kind to our knowledge. Using the events within a time interval, our method segments the scene into multiple motions by splitting and merging. We further speed up our method by using the concepts of motion propagation and cluster keyslices. The approach was successfully evaluated on both challenging real-world and synthetic scenarios from the EV-IMO, EED, and MOD datasets and outperformed the state-of-the-art detection rate by 12%, achieving new state-of-the-art average detection rates of 81.06%, 94.2% and 82.35% on the aforementioned datasets. To enable further research and systematic evaluation of multi-motion segmentation, we present and open-source a new dataset/benchmark called MOD++, which includes challenging sequences and extensive data stratification in terms of camera and object motion, velocity magnitudes, direction, and rotational speeds.<br><br>
<h3> References</h3><br>
<div class="rowunmod">
<div class="col-lg-6 col-md-6 mt-sm-20 left-align-p" style="padding-left:0; padding-right:0">
<h4><a href="https://arxiv.org/abs/2006.06158" style="font-weight: 600;"> 0-MMS: Zero-Shot Multi-Motion Segmentation With A Monocular Event Camera</a></h4><br>
<div class="highlight-sec">
<h6>ICRA 2021</h6>
</div>
<p>
Chethan M. Parameshwara, <b>Nitin J. Sanket</b>, Chahat Deep Singh, Cornelia Fermuller, Yiannis Aloimonos, <i>IEEE International Conference on Robotics and Automation (ICRA)</i>, 2021.<br>
<!-- Add text background in p tag with div -->
</p>
<h6>
<a href="https://arxiv.org/abs/2006.06158"><i class="fa fa-file-text-o"></i> Paper </a> <a href="http://prg.cs.umd.edu/0-MMS"><i class="fa fa-globe"></i> Project Page </a> <a href="https://github.com/prgumd/0-MMS"><i class="fa fa-github"></i> Code </a><a href="http://umd.edu"><i class="fa fa-map-marker"></i> UMD </a>
<!-- <a href="research/zeromms.html"><i class="fa fa-quote-right"></i> Cite </a> -->
</h6>
</div>
<div class="col-lg-6 col-md-6 mt-sm-20 right-align-p">
<img src="img/research/momswithevents.png" alt="" class="img-fluid" style="border-radius: 16px;">
</div>
</div>
<br><br>
<hr>
<div class="rowunmod">
<h3>2020</h3>
</div>
<hr><br><br>
<h3 class="mb-30">EVDodgeNet</h3>
Dynamic obstacle avoidance on quadrotors requires low latency. A class of sensors particularly suitable for such scenarios is event cameras. In this paper, we present a deep learning-based solution for dodging multiple dynamic obstacles on a quadrotor with a single event camera and onboard computation. Our approach uses a series of shallow neural networks for estimating both the ego-motion and the motion of independently moving objects. The networks are trained in simulation and transfer directly to the real world without any fine-tuning or retraining. We successfully evaluate and demonstrate the proposed approach in many real-world experiments with obstacles of different shapes and sizes, achieving an overall success rate of 70%, including objects of unknown shape and a low-light testing scenario. To our knowledge, this is the first deep learning-based solution to the problem of dynamic obstacle avoidance using event cameras on a quadrotor. Finally, we also extend our work to the pursuit task by merely reversing the control policy, demonstrating that our navigation stack can cater to different scenarios.<br><br>
<h3> References</h3><br>
<div class="rowunmod">
<div class="col-lg-6 col-md-6 mt-sm-20 left-align-p" style="padding-left:0; padding-right:0">
<h4><a href="https://arxiv.org/abs/1906.02919" style="font-weight: 600;"> EVDodgeNet: Deep Dynamic Obstacle Dodging with Event Cameras</a></h4><br>
<div class="highlight-sec">
<h6>ICRA 2020</h6>
</div>
<p>
<b>Nitin J. Sanket*</b>, Chethan M. Parameshwara*, Chahat Deep Singh, Ashwin V. Kuruttukulam, Cornelia Fermuller, Davide Scaramuzza, Yiannis Aloimonos, <i>IEEE International Conference on Robotics and Automation (ICRA)</i>, Paris, 2020.<br>
* Equal Contribution
<!-- Add text background in p tag with div -->
</p>
<h6>
<a href="https://arxiv.org/abs/1906.02919"><i class="fa fa-file-text-o"></i> Paper </a> <a href="http://prg.cs.umd.edu/EVDodgeNet"><i class="fa fa-globe"></i> Project Page </a> <a href="https://github.com/prgumd/EVDodgeNet"><i class="fa fa-github"></i> Code </a> <a href="http://umd.edu"><i class="fa fa-map-marker"></i> UMD </a> <br><br>
<!-- <a href="research/evdodgenet.html"><i class="fa fa-quote-right"></i> Cite </a> -->
<h4>Featured in</h4> <br>
<a href="https://mashable.com/video/drone-uses-ai-to-dodge-objects-thrown-at-it/"><img src="img/logos/Mashable.png" width="140px" alt="" class="img-fluid"></a> <a href="https://futurism.com/the-byte/watch-drones-dodge-stuff-thrown"><img src="img/logos/Futurism.png" width="140px" alt="" class="img-fluid"></a>
</h6>
</div>
<div class="col-lg-6 col-md-6 mt-sm-20 right-align-p">
<img src="img/research/EVDodgeNet.gif" alt="" class="img-fluid" style="border-radius: 16px;">
</div>
</div>
<br><br>
<hr>
</div>
</div>
</div>
<!-- EDIT FOOT -->
<!-- start footer Area -->
<section class="facts-area section-gap" id="facts-area" style="background-color: rgba(255, 255, 255, 1.0); padding: 40px">
<div class="container">
<div class="title text-center">
<p> <a href="index.html"><img src="img/logos/LogoBlackRed.png" width="128px" alt="" title=""></a><br><br>
Perception and Autonomous Robotics Group <br>
Worcester Polytechnic Institute <br>
Copyright © 2023<br>
<span style="font-size: 10px">Website based on <a href="https://colorlib.com" target="_blank">Colorlib</a></span>
</p>
</div>
</div>
</section>
<!-- End footer Area --> <!-- EDIT FOOT -->
<script src="js/vendor/jquery-2.2.4.min.js"></script>
<script src="js/popper.min.js"></script>
<script src="js/vendor/bootstrap.min.js"></script>
<script src="https://maps.googleapis.com/maps/api/js?key=AIzaSyBhOdIF3Y9382fqJYt5I_sswSrEw5eihAA"></script>
<script src="js/easing.min.js"></script>
<script src="js/hoverIntent.js"></script>
<script src="js/superfish.min.js"></script>
<script src="js/jquery.ajaxchimp.min.js"></script>
<script src="js/jquery.magnific-popup.min.js"></script>
<script src="js/jquery.tabs.min.js"></script>
<script src="js/jquery.nice-select.min.js"></script>
<script src="js/isotope.pkgd.min.js"></script>
<script src="js/waypoints.min.js"></script>
<script src="js/jquery.counterup.min.js"></script>
<script src="js/simple-skillbar.js"></script>
<script src="js/owl.carousel.min.js"></script>
<script src="js/mail-script.js"></script>
<script src="js/main.js"></script>
</body>
</html>