Commit 2039389 (parent d36f577): 11 changed files with 249 additions and 18 deletions.
@@ -1,4 +1,5 @@
source "https://rubygems.org"
#gemspec
gem 'minimal-mistakes-jekyll', '>= 0'
gem 'kramdown-parser-gfm'
gem 'faraday-retry'
@@ -0,0 +1,33 @@
---
title: "(ICRA) Integrated Motion Planner for Real-time Aerial Videography with a Drone in a Dense Environment"
categories:
  - 2ndAuthor
tags:
  - Aerial Tracking
header:
  teaser: /assets/image/thumbnail/tro2023.gif
authors: Boseong Jeon, <u>Yunwoo Lee*</u>, and H. Jin Kim
links:
  - paper:
      link: https://ieeexplore.ieee.org/document/9196703
      name: "Paper"
  - bibtex:
      name: "Bibtex"
---
{% include video id="zxpBws6kxNI" provider="youtube" %}
**Abstract:** This work suggests an integrated approach for a drone (or multirotor) to perform an autonomous videography task in a 3-D obstacle environment by following a moving object. The proposed system includes 1) a target motion prediction module that can be applied to dense environments and 2) a hierarchical chasing planner. Leveraging covariant optimization, the prediction module estimates the future motion of the target under the assumption that the target tries to avoid the obstacles. The chasing planner has a bi-level structure composed of a preplanner and a smooth planner. In the first phase, we exploit a graph-search method to plan a chasing corridor that incorporates safety and visibility of the target. In the subsequent phase, we generate a smooth and dynamically feasible trajectory within the corridor using quadratic programming (QP). We validate our approach with multiple complex scenarios and actual experiments. The source code and the experiment video can be found at https://github.com/icsl-Jeon/traj_gen_vis and https://www.youtube.com/watch?v=_JSwXBwYRl8.
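The corridor-then-QP idea in the second phase can be illustrated with a minimal sketch. The code below is not the paper's implementation: it assumes a hypothetical 1-D corridor given as per-step bounds (`corridor_lo`/`corridor_hi` are invented for illustration) and minimizes a discrete acceleration cost with SciPy, standing in for the paper's full 3-D QP.

```
import numpy as np
from scipy.optimize import minimize

# Hypothetical 1-D corridor: per-time-step lower/upper bounds on position.
corridor_lo = np.array([0.0, 0.0, 1.0, 2.0, 3.0, 3.0])
corridor_hi = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 4.0])

def smoothness(x):
    # Sum of squared second differences approximates integrated acceleration.
    acc = x[2:] - 2.0 * x[1:-1] + x[:-2]
    return float(acc @ acc)

x0 = 0.5 * (corridor_lo + corridor_hi)        # initialize at corridor centers
bounds = list(zip(corridor_lo, corridor_hi))  # stay inside the corridor
res = minimize(smoothness, x0, bounds=bounds, method="L-BFGS-B")
print(np.round(res.x, 3))
```

Box bounds make the corridor constraint trivially convex; the paper's QP additionally enforces dynamic feasibility and visibility, which this sketch omits.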
## Bibtex <a id="bibtex"></a>
```
@INPROCEEDINGS{9196703,
  author={Jeon, Boseong and Lee, Yunwoo and Kim, H. Jin},
  booktitle={2020 IEEE International Conference on Robotics and Automation (ICRA)},
  title={Integrated Motion Planner for Real-time Aerial Videography with a Drone in a Dense Environment},
  year={2020},
  volume={},
  number={},
  pages={1243-1249},
  keywords={Drones;Trajectory;Safety;Optimization;Measurement;Shape;Real-time systems},
  doi={10.1109/ICRA40945.2020.9196703}}
```
@@ -0,0 +1,33 @@
---
title: "(IROS) Navigation-Assistant Path Planning within a MAV team"
categories:
  - Conference
tags:
  - Aerial Tracking
header:
  teaser: /assets/image/thumbnail/tro2023.gif
authors: Youngseok Jang*, <u>Yunwoo Lee*</u>, and H. Jin Kim
links:
  - paper:
      link: https://ieeexplore.ieee.org/document/9340792
      name: "Paper"
  - bibtex:
      name: "Bibtex"
---
{% include video id="o6fgrlVUq9k" provider="youtube" %}
**Abstract:** In micro aerial vehicle (MAV) operations, the success of a mission is highly dependent on navigation performance, which has raised recent interest in navigation-aware path planning. One of the challenges lies in that the optimal motions for successful navigation and for the designated mission are often different in unknown, unstructured environments, and only sub-optimality may be obtained in each aspect. We aim to organize a two-MAV team that can effectively execute the mission while guaranteeing navigation quality; the team consists of a main-agent responsible for the mission and a sub-agent responsible for the team's navigation. In particular, this paper focuses on path planning of the sub-agent to provide navigational assistance to the main-agent using a monocular camera. We adopt a graph-based receding horizon planner to find a dynamically feasible path that lets the sub-agent support the main-agent's navigation. In this process, we present a metric for evaluating localization performance using the distribution of the features projected onto the image plane. We also design a map management strategy and a pose-estimation support mechanism in a monocular camera setup, and validate their effectiveness in two scenarios.
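A feature-distribution metric of this flavor can be sketched as follows. This is an illustrative proxy rather than the paper's metric: it scores localization quality by how widely tracked features spread over the image plane, via the covariance of normalized pixel coordinates (all names and sizes here are invented).

```
import numpy as np

def feature_distribution_score(pixels, width=640, height=480):
    # pixels: (N, 2) array of tracked feature locations in image coordinates.
    if len(pixels) < 2:
        return 0.0
    normalized = pixels / np.array([width, height])       # scale to [0, 1]
    cov = np.cov(normalized.T)                            # 2x2 covariance
    return float(np.sqrt(max(np.linalg.det(cov), 0.0)))   # area-like spread

rng = np.random.default_rng(0)
clustered = rng.normal([320, 240], 20, size=(50, 2))      # features bunched up
spread = rng.uniform([0, 0], [640, 480], size=(50, 2))    # features well spread
print(feature_distribution_score(clustered), feature_distribution_score(spread))
```

Well-spread features constrain pose estimation better than a tight cluster, which is the intuition such a metric rewards.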
## Bibtex <a id="bibtex"></a>
```
@INPROCEEDINGS{9340792,
  author={Jang, Youngseok and Lee, Yunwoo and Kim, H. Jin},
  booktitle={2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  title={Navigation-Assistant Path Planning within a MAV team},
  year={2020},
  volume={},
  number={},
  pages={1436-1443},
  keywords={Measurement;Location awareness;Simultaneous localization and mapping;Navigation;Cameras;Path planning;Intelligent robots},
  doi={10.1109/IROS45743.2020.9340792}}
```
@@ -0,0 +1,33 @@
---
title: "(T-RO) Multirobot Collaborative Monocular SLAM Utilizing Rendezvous"
categories:
  - 2ndAuthor
tags:
  - Multi Robot System
header:
  teaser: /assets/image/thumbnail/tro2023.gif
authors: Youngseok Jang, Changsuk Oh, <u>Yunwoo Lee*</u>, and H. Jin Kim
links:
  - paper:
      link: https://ieeexplore.ieee.org/document/9381949
      name: "Paper"
  - bibtex:
      name: "Bibtex"
---
{% include video id="zxpBws6kxNI" provider="youtube" %}
**Abstract:** Multirobot simultaneous localization and mapping (SLAM) requires technical ingredients such as systematic construction of multiple SLAM systems and collection of information from each robot. In particular, map fusion is an essential process of multirobot SLAM that combines multiple local maps estimated by team robots into a global map. Fusion of multiple local maps is usually based on interloop detection that recognizes the same scene visited by multiple robots, or robot rendezvous where a member(s) of a robot team is observed in another member's images. This article proposes a collaborative monocular SLAM including a map fusion algorithm that utilizes rendezvous, which can happen when multirobot team members operate in close proximity. Unlike existing rendezvous-based approaches that require additional sensors, the proposed system uses a monocular camera only. Our system can recognize robot rendezvous using nonstatic features (NSFs) without fiducial markers to identify team robots. NSFs, which are abandoned as outliers in typical SLAM systems for not supporting ego-motion, can include relative bearing measurements between robots in a rendezvous situation. The proposed pipeline consists of the following: first, a feature identification module that extracts the relative bearing measurements between robots from NSFs consisting of anonymous bearing vectors with false positives, and second, a map fusion module that integrates the map from the observer robot with the maps from the observed robots using identified relative measurements. The feature identification module can operate quickly using the proposed alternating minimization algorithm formulated by two subproblems with closed-form solutions. The experimental results confirm that our collaborative monocular SLAM system recognizes rendezvous rapidly and robustly, and fuses local maps of team robots into a global map accurately.
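The two-subproblem alternating minimization can be illustrated with a toy sketch. The code below is a loose, hypothetical analogue of the structure rather than the paper's algorithm: it alternates two closed-form steps to separate anonymous unit bearing vectors into teammate-pointing inliers and outliers (all names and thresholds are invented for illustration).

```
import numpy as np

def identify_bearings(bearings, iters=10, cos_thresh=0.95):
    # bearings: (N, 3) array of unit vectors, some toward a teammate.
    direction = bearings.mean(axis=0)
    direction /= np.linalg.norm(direction)
    inliers = np.ones(len(bearings), dtype=bool)
    for _ in range(iters):
        # Subproblem 1 (closed form): direction = normalized mean of inliers.
        direction = bearings[inliers].mean(axis=0)
        direction /= np.linalg.norm(direction)
        # Subproblem 2 (closed form): reassign inliers by angular agreement.
        inliers = bearings @ direction > cos_thresh
        if not inliers.any():
            break
    return direction, inliers

rng = np.random.default_rng(1)
true_dir = np.array([1.0, 0.0, 0.0])
good = true_dir + 0.05 * rng.normal(size=(30, 3))   # bearings toward teammate
bad = rng.normal(size=(10, 3))                      # false-positive clutter
data = np.vstack([good, bad])
data /= np.linalg.norm(data, axis=1, keepdims=True)
print(identify_bearings(data))
```

Each step has a closed-form solution, which is what makes this family of alternating schemes fast enough for online use.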
## Bibtex <a id="bibtex"></a>
```
@ARTICLE{9381949,
  author={Jang, Youngseok and Oh, Changsuk and Lee, Yunwoo and Kim, H. Jin},
  journal={IEEE Transactions on Robotics},
  title={Multirobot Collaborative Monocular SLAM Utilizing Rendezvous},
  year={2021},
  volume={37},
  number={5},
  pages={1469-1486},
  keywords={Robots;Simultaneous localization and mapping;Robot kinematics;Collaboration;Cameras;Sensors;Robot vision systems;Alternating minimization;collaborative monocular simultaneous localization and mapping (SLAM);map fusion (MF);multirobot systems;robot rendezvous},
  doi={10.1109/TRO.2021.3058502}}
```
@@ -0,0 +1,34 @@
---
title: "(IROS) Target-visible Polynomial Trajectory Generation within an MAV Team"
categories:
  - Conference
tags:
  - Aerial Tracking
header:
  teaser: /assets/image/thumbnail/tro2023.gif
authors: <u>Yunwoo Lee</u>, Jungwon Park, Boseong Jeon, and H. Jin Kim
links:
  - paper:
      link: https://ieeexplore.ieee.org/document/9636446
      name: "Paper"
  - bibtex:
      name: "Bibtex"
---
{% include video id="UYYO1Al1rjU" provider="youtube" %}
**Abstract:** Autonomous aerial videography is a challenging task that involves collision avoidance against obstacles and visibility-guaranteed target tracking in unstructured environments. In this paper, we organize a two micro aerial vehicle (MAV) team, which consists of a target agent responsible for a specific mission and a camera agent for filming the target agent. In particular, this paper focuses on trajectory planning of the camera agent to chase the target agent without occlusion. Our trajectory planner guarantees target visibility in two phases. In the first phase, we generate a homotopic safe flight corridor (SFC) to obtain target-visible regions. In the subsequent phase, we generate a safe and smooth trajectory with a continuous visibility constraint based on the SFC, using quadratic programming (QP). Regardless of map complexity, our planner converts the overall problem into a single QP and generates a steady flight trajectory without undesirable fluctuating motion, while guaranteeing all-time visibility. We validate our approach in Gazebo simulations and a real-world experiment.
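Visibility in such planners ultimately reduces to line-of-sight tests between the camera and the target. As a minimal sketch (not the paper's constraint formulation), the following checks a camera-target segment against a hypothetical axis-aligned obstacle box using the standard slab method; a visibility-guaranteed corridor keeps this test free of hits at all times.

```
import numpy as np

def segment_hits_box(p, q, box_lo, box_hi, eps=1e-9):
    # Clip segment p->q against each axis slab of the box; a nonempty
    # parameter interval [t0, t1] means the line of sight is blocked.
    d = q - p
    t0, t1 = 0.0, 1.0
    for k in range(3):
        if abs(d[k]) < eps:
            if p[k] < box_lo[k] or p[k] > box_hi[k]:
                return False          # parallel to and outside this slab
        else:
            a = (box_lo[k] - p[k]) / d[k]
            b = (box_hi[k] - p[k]) / d[k]
            t0, t1 = max(t0, min(a, b)), min(t1, max(a, b))
            if t0 > t1:
                return False
    return True

camera = np.array([0.0, 0.0, 1.0])
target = np.array([4.0, 0.0, 1.0])
obstacle_lo = np.array([1.5, -0.5, 0.0])   # invented obstacle box
obstacle_hi = np.array([2.5, 0.5, 2.0])
print(segment_hits_box(camera, target, obstacle_lo, obstacle_hi))  # True: occluded
```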
## Bibtex <a id="bibtex"></a>
```
@INPROCEEDINGS{9636446,
  author={Lee, Yunwoo and Park, Jungwon and Jeon, Boseong and Kim, H. Jin},
  booktitle={2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  title={Target-visible Polynomial Trajectory Generation within an MAV Team},
  year={2021},
  volume={},
  number={},
  pages={1982-1989},
  keywords={Target tracking;Trajectory planning;Cameras;Complexity theory;Quadratic programming;Task analysis;Collision avoidance},
  doi={10.1109/IROS51168.2021.9636446}}
```
@@ -0,0 +1,34 @@
---
title: "(ACCESS) Autonomous Aerial Dual-Target Following Among Obstacles"
categories:
  - 2ndAuthor
tags:
  - Aerial Tracking
header:
  teaser: /assets/image/thumbnail/tro2023.gif
authors: Boseong Jeon, <u>Yunwoo Lee*</u>, Jeongjun Choi, Jungwon Park, and H. Jin Kim
links:
  - paper:
      link: https://ieeexplore.ieee.org/document/9557293
      name: "Paper"
  - bibtex:
      name: "Bibtex"
---
{% include video id="zxpBws6kxNI" provider="youtube" %}
**Abstract:** In contrast to recent developments in online motion planning to follow a single target with a drone among obstacles, the multi-target case with a single chaser drone has hardly been discussed in similar settings. Following more than one target is challenging due to inter-target occlusion and the limited field-of-view, in addition to possible occlusion and collision with obstacles. Also, reflecting multiple targets in the planning objectives or constraints increases the computational load and the numerical issues in the optimization compared to the single-target case. To resolve these issues, we first develop a visibility score field for multiple targets that incorporates the field-of-view limit and inter-occlusion between targets. Next, we develop a fast sweeping algorithm to compute the field efficiently enough for real-time applications. Last, we build an efficient hierarchical planning pipeline that outputs a chasing motion for multiple targets while ensuring the key objectives and constraints. For reliable chasing, we also present a prediction algorithm to forecast the movement of targets considering obstacles. The online performance of the proposed algorithm is extensively validated in challenging scenarios, including a large-scale simulation and multiple real-world experiments in indoor and outdoor scenes. The full code implementation of the proposed method is released here: https://github.com/icsl-Jeon/dual_chaser.
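Fast sweeping itself is a classic pattern: Gauss-Seidel passes over a grid in alternating orderings propagate a field outward from seed cells in a handful of sweeps. The sketch below solves the standard eikonal equation |∇u| = 1 on a 2-D grid to produce a distance-like field; the paper's visibility score field is a different quantity, so treat this only as an illustration of the sweeping pattern.

```
import numpy as np
from itertools import product

def fast_sweep_distance(seed_mask, h=1.0, sweeps=4):
    # Distance-like field: u = 0 on seeds, |grad u| = 1 elsewhere.
    u = np.where(seed_mask, 0.0, np.inf)
    ny, nx = u.shape
    orderings = list(product((range(ny), range(ny - 1, -1, -1)),
                             (range(nx), range(nx - 1, -1, -1))))
    for _ in range(sweeps):
        for ys, xs in orderings:           # four sweep directions per pass
            for i in ys:
                for j in xs:
                    if seed_mask[i, j]:
                        continue
                    a = min(u[max(i - 1, 0), j], u[min(i + 1, ny - 1), j])
                    b = min(u[i, max(j - 1, 0)], u[i, min(j + 1, nx - 1)])
                    if not np.isfinite(min(a, b)):
                        continue           # no informed neighbor yet
                    if abs(a - b) >= h:    # one-sided upwind update
                        cand = min(a, b) + h
                    else:                  # two-sided quadratic update
                        cand = 0.5 * (a + b + np.sqrt(2.0 * h * h - (a - b) ** 2))
                    u[i, j] = min(u[i, j], cand)
    return u

seeds = np.zeros((20, 20), dtype=bool)
seeds[10, 10] = True                       # single seed in the middle
print(np.round(fast_sweep_distance(seeds)[10, 5:16], 2))
```

The appeal for real-time use is that a fixed, small number of sweeps over the grid suffices, with no priority queue as in Dijkstra-style fast marching.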
## Bibtex <a id="bibtex"></a>
```
@ARTICLE{9557293,
  author={Jeon, Boseong Felipe and Lee, Yunwoo and Choi, Jeongjun and Park, Jungwon and Kim, H. Jin},
  journal={IEEE Access},
  title={Autonomous Aerial Dual-Target Following Among Obstacles},
  year={2021},
  volume={9},
  number={},
  pages={143104-143120},
  keywords={Drones;Planning;Cameras;Trajectory;Safety;Reliability;Prediction algorithms;Collision avoidance;cinematography;motion planning;trajectory optimization},
  doi={10.1109/ACCESS.2021.3117314}}
```
@@ -0,0 +1,34 @@
---
title: "(R-AL) Mono-Camera-Only Target Chasing for a Drone in a Dense Environment by Cross-Modal Learning"
categories:
  - 2ndAuthor
tags:
  - deep learning
header:
  teaser: /assets/image/thumbnail/tro2023.gif
authors: Seungyeon Yoo, Seungwoo Jung, <u>Yunwoo Lee</u>, Dongseok Shim, and H. Jin Kim
links:
  - paper:
      link: https://ieeexplore.ieee.org/abstract/document/10542210
      name: "Paper"
  - bibtex:
      name: "Bibtex"
---
{% include video id="9WIFe66S9I8" provider="youtube" %}
**Abstract:** Chasing a dynamic target in a dense environment is one of the challenging applications of autonomous drones. The task requires multi-modal data, such as RGB and depth, to accomplish safe and robust maneuvering. However, using different modalities can be difficult due to the limited capacity of drones in terms of hardware complexity and sensor cost. Our framework resolves these restrictions in the target chasing task by using only a monocular camera instead of multiple sensor inputs. From an RGB input, the perception module extracts a cross-modal representation containing information from multiple data modalities. To learn cross-modal representations at training time, we employ variational autoencoder (VAE) structures and a joint objective function across the heterogeneous data. Subsequently, using latent vectors acquired from the pre-trained perception module, the planning module generates a proper next-time-step waypoint by imitating an expert that performs numerical optimization using privileged RGB-D data. Furthermore, the planning module considers temporal information of the target to improve tracking performance through consecutive cross-modal representations. Ultimately, we demonstrate the effectiveness of our framework through the reconstruction results of the perception module, the target chasing performance of the planning module, and the zero-shot sim-to-real deployment of a drone.
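The cross-modal VAE idea (one RGB encoder, a latent that must reconstruct both RGB and depth, a joint loss) can be sketched as follows. This is a minimal, hypothetical PyTorch sketch with invented shapes and names, not the paper's architecture; the depth target here is a random placeholder standing in for real training data.

```
import torch
import torch.nn as nn

class CrossModalVAE(nn.Module):
    # Encode RGB into a latent that must reconstruct both RGB and depth,
    # so the latent is forced to carry depth cues.
    def __init__(self, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 16, 4, 2, 1), nn.ReLU(),
                                 nn.Conv2d(16, 32, 4, 2, 1), nn.ReLU(), nn.Flatten())
        self.fc_mu = nn.Linear(32 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(32 * 16 * 16, latent_dim)
        self.dec_rgb = nn.Sequential(nn.Linear(latent_dim, 3 * 64 * 64), nn.Sigmoid())
        self.dec_depth = nn.Sequential(nn.Linear(latent_dim, 1 * 64 * 64), nn.Sigmoid())

    def forward(self, rgb):
        h = self.enc(rgb)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec_rgb(z), self.dec_depth(z), mu, logvar

model = CrossModalVAE()
rgb = torch.rand(2, 3, 64, 64)
rgb_hat, depth_hat, mu, logvar = model(rgb)
# Joint objective: reconstruct both modalities plus the KL regularizer.
depth_target = torch.rand(2, 1 * 64 * 64)  # placeholder for real depth labels
loss = (nn.functional.mse_loss(rgb_hat, rgb.flatten(1))
        + nn.functional.mse_loss(depth_hat, depth_target)
        - 0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp()))
print(loss.item())
```

At deployment only the encoder is kept, so the drone pays the sensing cost of a monocular camera while the latent retains depth-like information.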
## Bibtex <a id="bibtex"></a>
```
@ARTICLE{10542210,
  author={Yoo, Seungyeon and Jung, Seungwoo and Lee, Yunwoo and Shim, Dongseok and Kim, H. Jin},
  journal={IEEE Robotics and Automation Letters},
  title={Mono-Camera-Only Target Chasing for a Drone in a Dense Environment by Cross-Modal Learning},
  year={2024},
  volume={9},
  number={8},
  pages={7254-7261},
  keywords={Drones;Task analysis;Planning;Target tracking;Vehicle dynamics;Training;Image reconstruction;Vision-based navigation;visual learning;deep learning for visual perception;deep learning methods},
  doi={10.1109/LRA.2024.3407412}}
```
@@ -0,0 +1,12 @@
---
title: "(RA-L, submitted) Distributed Multi-Agent Trajectory Planning for Target Tracking Using Dynamic Buffered Voronoi and Inter-Visibility Cells"
categories:
  - Journal
tags:
  - Aerial Tracking
header:
  teaser: /assets/image/thumbnail/homepage_thumbnail.jpeg
journals: RA-L
authors: <u>Yunwoo Lee</u>, Jungwon Park, and H. Jin Kim
---
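This entry lists only a title, but the buffered Voronoi cell (BVC) it names is a standard construct: each agent's Voronoi region shrunk inward by a safety radius, so agents that each stay inside their own cell cannot collide. A minimal membership-test sketch follows, using the standard BVC half-space form with invented names; it is not the paper's dynamic or inter-visibility variant.

```
import numpy as np

def in_buffered_voronoi(x, p_i, others, r):
    # x is inside agent i's BVC if, for every neighbor j, x lies on agent
    # i's side of the separating hyperplane shifted inward by radius r.
    for p_j in others:
        n = p_j - p_i                    # outward normal toward neighbor j
        mid = 0.5 * (p_i + p_j)          # midpoint of the pair
        if (x - mid) @ n + r * np.linalg.norm(n) > 0.0:
            return False                 # violates this buffered half-space
    return True

p_i = np.array([0.0, 0.0])
others = [np.array([2.0, 0.0]), np.array([0.0, 2.0])]
print(in_buffered_voronoi(np.array([0.2, 0.2]), p_i, others, r=0.3))  # True
print(in_buffered_voronoi(np.array([0.9, 0.0]), p_i, others, r=0.3))  # False
```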