Intelligent Systems


2024


Event-based Non-Rigid Reconstruction of Low-Rank Parametrized Deformations from Contours

Xue, Y., Li, H., Leutenegger, S., Stueckler, J.

International Journal of Computer Vision (IJCV), 2024 (article)

Abstract
Visual reconstruction of fast non-rigid object deformations over time is a challenge for conventional frame-based cameras. In recent years, event cameras have gained significant attention due to their bio-inspired properties, such as high temporal resolution and high dynamic range. In this paper, we propose a novel approach for reconstructing such deformations using event measurements. Under the assumption of a static background, where all events are generated by the object's motion, our approach estimates the deformation of objects from events generated at the object contour in a probabilistic optimization framework. It associates events to mesh faces on the contour and maximizes the alignment of the line of sight through the event pixel with the associated face. In experiments on synthetic and real data of human body motion, we demonstrate the advantages of our method over state-of-the-art optimization and learning-based approaches for reconstructing the motion of human arms and hands. In addition, we propose an efficient event stream simulator to synthesize realistic event data for human motion.
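The geometric core of the abstract — aligning the line of sight through an event pixel with an associated contour face — can be illustrated with a minimal sketch. Everything here (the intrinsics, the pinhole back-projection, the point-to-ray residual) is a generic illustration, not the paper's implementation:

```python
import numpy as np

def backproject(pixel, K):
    """Unit viewing ray through a pixel under a pinhole camera model."""
    u, v = pixel
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return ray / np.linalg.norm(ray)

def ray_to_point_distance(ray, point):
    """Perpendicular distance from a 3D point to a camera-centered ray.
    Minimizing such residuals over face points aligns ray and face."""
    t = np.dot(point, ray)   # projection of the point onto the ray
    closest = t * ray        # closest point on the ray
    return np.linalg.norm(point - closest)

# Hypothetical intrinsics; an event at the principal point maps to the optical axis.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
ray = backproject((320.0, 240.0), K)
d = ray_to_point_distance(ray, np.array([0.0, 0.0, 2.0]))  # point on the axis
```

An optimizer over deformation parameters would drive residuals like `d` toward zero for all associated event–face pairs.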

DOI [BibTex]


2022


Weakly Supervised Learning of Multi-Object 3D Scene Decompositions Using Deep Shape Priors

Elich, C., Oswald, M. R., Pollefeys, M., Stueckler, J.

Computer Vision and Image Understanding (CVIU), 2022 (article) Accepted

Abstract
Representing scenes at the granularity of objects is a prerequisite for scene understanding and decision making. We propose PriSMONet, a novel approach based on Prior Shape knowledge for learning Multi-Object 3D scene decomposition and representations from single images. Our approach learns to decompose images of synthetic scenes with multiple objects on a planar surface into their constituent scene objects and to infer their 3D properties from a single view. A recurrent encoder regresses a latent representation of 3D shape, pose and texture of each object from an input RGB image. By differentiable rendering, we train our model to decompose scenes from RGB-D images in a self-supervised way. The 3D shapes are represented continuously in function-space as signed distance functions which we pre-train from example shapes in a supervised way. These shape priors provide weak supervision signals to better condition the challenging overall learning task. We evaluate the accuracy of our model in inferring 3D scene layout, demonstrate its generative capabilities, assess its generalization to real images, and point out benefits of the learned representation.
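The function-space shape representation mentioned in the abstract can be made concrete with the simplest possible signed distance function. This is a generic analytic example (a sphere), not the learned SDF decoder used in the paper:

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Signed distance to a sphere: negative inside, zero on the
    surface, positive outside. A learned SDF replaces this analytic
    form with a network conditioned on a latent shape code."""
    return np.linalg.norm(points - center, axis=-1) - radius

pts = np.array([[0.0, 0.0, 0.0],   # center -> inside
                [2.0, 0.0, 0.0],   # outside
                [1.0, 0.0, 0.0]])  # on the surface
d = sphere_sdf(pts, np.array([0.0, 0.0, 0.0]), 1.0)
```

Because the representation is a continuous function of query position, the surface can be extracted at any resolution as the zero level set, which is what makes SDFs convenient for differentiable rendering.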

Link Preprint link (url) DOI Project Page [BibTex]



Visual-Inertial Odometry with Online Calibration of Velocity-Control Based Kinematic Motion Models

Li, H., Stueckler, J.

IEEE Robotics and Automation Letters (RA-L), 2022, accepted for oral presentation at IEEE ICRA 2023 (article)

Abstract
Visual-inertial odometry (VIO) is an important technology for autonomous robots with power and payload constraints. In this paper, we propose a novel approach for VIO with stereo cameras which integrates and calibrates the velocity-control-based kinematic motion model of wheeled mobile robots online. Including such a motion model can help to improve the accuracy of VIO. Compared to several previous approaches proposed to integrate wheel odometer measurements for this purpose, our method does not require wheel encoders and can be applied when the robot motion can be modeled with a velocity-control-based kinematic motion model. We use radial basis function (RBF) kernels to compensate for the time delay and deviations between control commands and actual robot motion. The motion model is calibrated online by the VIO system and can be used as a forward model for motion control and planning. We evaluate our approach with data obtained in variously sized indoor environments, demonstrate improvements over a pure VIO method, and evaluate the prediction accuracy of the online calibrated model.
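The idea of using RBF kernels to absorb the delay between control commands and the executed motion can be sketched as a linear regression whose delay profile is parameterized by Gaussian kernels over candidate lags. All data and parameters below are hypothetical (a simulated delayed, attenuated velocity response), not the paper's calibration pipeline:

```python
import numpy as np

# Hypothetical calibration data: the robot executes commanded forward
# velocities with an unknown delay and gain.
dt = 0.02
t = np.arange(0.0, 10.0, dt)
commanded = np.sin(t)
measured = 0.8 * np.sin(t - 0.3)  # delayed, attenuated response

# Parameterize the impulse response over candidate lags with Gaussian
# RBF kernels, so only a few smooth weights must be estimated.
lags = np.arange(40)                      # candidate delays (samples)
centers = np.linspace(0.0, 39.0, 8)      # RBF centers on the lag axis
rbf = np.exp(-((lags[:, None] - centers[None, :]) ** 2) / (2 * 4.0**2))

# Lagged command matrix: column j holds the command shifted by lags[j].
n = len(t)
lagged = np.stack([np.concatenate([np.zeros(l), commanded[:n - l]])
                   for l in lags], axis=1)
Phi = lagged @ rbf                        # RBF-smoothed delay features

# Fit the RBF weights by least squares (skip the start-up transient).
w, *_ = np.linalg.lstsq(Phi[50:], measured[50:], rcond=None)
pred = Phi @ w
rmse = np.sqrt(np.mean((pred[50:] - measured[50:]) ** 2))
```

Once fitted, the model predicts the executed velocity from the command stream alone, which is what makes it usable as a forward model for planning.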

preprint [BibTex]


2020


Numerical Quadrature for Probabilistic Policy Search

Vinogradska, J., Bischoff, B., Achterhold, J., Koller, T., Peters, J.

IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(1):164-175, 2020 (article)

DOI [BibTex]



TUM Flyers: Vision-Based MAV Navigation for Systematic Inspection of Structures

Usenko, V., Stumberg, L. V., Stückler, J., Cremers, D.

In Bringing Innovative Robotic Technologies from Research Labs to Industrial End-users: The Experience of the European Robotics Challenges, 136, pages: 189-209, Springer International Publishing, 2020 (inbook)

link (url) [BibTex]



Visual-Inertial Mapping with Non-Linear Factor Recovery

Usenko, V., Demmel, N., Schubert, D., Stückler, J., Cremers, D.

IEEE Robotics and Automation Letters (RA-L), 5, 2020, presented at IEEE International Conference on Robotics and Automation (ICRA) 2020, preprint arXiv:1904.06504 (article)

Abstract
Cameras and inertial measurement units are complementary sensors for ego-motion estimation and environment mapping. Their combination makes visual-inertial odometry (VIO) systems more accurate and robust. For globally consistent mapping, however, combining visual and inertial information is not straightforward. To estimate the motion and geometry with a set of images, large baselines are required. Because of that, most systems operate on keyframes that have large time intervals between each other. Inertial data, on the other hand, quickly degrades with the duration of the intervals; after several seconds of integration, it typically contains little useful information. In this paper, we propose to extract relevant information for visual-inertial mapping from visual-inertial odometry using non-linear factor recovery. We reconstruct a set of non-linear factors that make an optimal approximation of the information on the trajectory accumulated by VIO. To obtain a globally consistent map we combine these factors with loop-closing constraints using bundle adjustment. The VIO factors make the roll and pitch angles of the global map observable, and improve the robustness and the accuracy of the mapping. In experiments on a public benchmark, we demonstrate superior performance of our method over state-of-the-art approaches.
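The step of combining recovered odometry factors with loop-closing constraints can be illustrated with a toy 1D pose graph solved by linear least squares. The poses, factor values, and gauge prior below are invented for illustration; the paper's factors are non-linear and six-dimensional:

```python
import numpy as np

# Toy 1D pose graph over four poses x0..x3:
# three relative "VIO factors" (each says x_{i+1} - x_i = 1.0)
# and one loop-closure factor (x3 - x0 = 2.7) that disagrees slightly.
A = np.array([
    [-1.0, 1.0, 0.0, 0.0],
    [0.0, -1.0, 1.0, 0.0],
    [0.0, 0.0, -1.0, 1.0],
    [-1.0, 0.0, 0.0, 1.0],
])
b = np.array([1.0, 1.0, 1.0, 2.7])

# Fix the gauge freedom by anchoring x0 = 0 with a strong prior row.
A = np.vstack([A, [1000.0, 0.0, 0.0, 0.0]])
b = np.append(b, 0.0)

# Least squares spreads the loop-closure correction evenly over the
# three odometry factors: each increment shrinks from 1.0 to 0.925.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
```

In the real system the same balancing happens inside bundle adjustment, with the recovered factors carrying the (approximated) information accumulated by VIO.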

Code Preprint link (url) Project Page [BibTex]


2018


Omnidirectional DSO: Direct Sparse Odometry with Fisheye Cameras

Matsuki, H., von Stumberg, L., Usenko, V., Stueckler, J., Cremers, D.

IEEE Robotics and Automation Letters (RA-L) & International Conference on Intelligent Robots and Systems (IROS), IEEE, 2018 (article)

[BibTex]


2016


NimbRo Explorer: Semi-Autonomous Exploration and Mobile Manipulation in Rough Terrain

Stueckler, J., Schwarz, M., Schadler, M., Topalidou-Kyniazopoulou, A., Behnke, S.

Journal of Field Robotics (JFR), 33(4):411-430, Wiley, 2016 (article)

[BibTex]



Multi-Layered Mapping and Navigation for Autonomous Micro Aerial Vehicles

Droeschel, D., Nieuwenhuisen, M., Beul, M., Stueckler, J., Holz, D., Behnke, S.

Journal of Field Robotics (JFR), 33(4):451-475, 2016 (article)

[BibTex]


2015


Perception of Deformable Objects and Compliant Manipulation for Service Robots

Stueckler, J., Behnke, S.

In Soft Robotics: From Theory to Applications, Springer, 2015 (inbook)

link (url) [BibTex]



Efficient Dense Rigid-Body Motion Segmentation and Estimation in RGB-D Video

Stueckler, J., Behnke, S.

International Journal of Computer Vision (IJCV), 113(3):233-245, 2015 (article)

link (url) DOI Project Page [BibTex]


2014


Rough Terrain Mapping and Navigation using a Continuously Rotating 2D Laser Scanner

Schadler, M., Stueckler, J., Behnke, S.

Künstliche Intelligenz (KI), 28(2):93-99, Springer, 2014 (article)

link (url) DOI [BibTex]



Dense Real-Time Mapping of Object-Class Semantics from RGB-D Video

Stueckler, J., Waldvogel, B., Schulz, H., Behnke, S.

Journal of Real-Time Image Processing (JRTIP), 10(4):599-609, Springer, 2014 (article)

link (url) DOI [BibTex]



Multi-Resolution Surfel Maps for Efficient Dense 3D Modeling and Tracking

Stueckler, J., Behnke, S.

Journal of Visual Communication and Image Representation (JVCI), 25(1):137-147, 2014 (article)

link (url) DOI [BibTex]



Active Recognition and Manipulation for Mobile Robot Bin Picking

Holz, D., Nieuwenhuisen, M., Droeschel, D., Stueckler, J., Berner, A., Li, J., Klein, R., Behnke, S.

In Gearing Up and Accelerating Cross-fertilization between Academic and Industrial Robotics Research in Europe: Technology Transfer Experiments from the ECHORD Project, pages: 133-153, Springer, 2014 (inbook)

link (url) DOI [BibTex]



Increasing Flexibility of Mobile Manipulation and Intuitive Human-Robot Interaction in RoboCup@Home

Stueckler, J., Droeschel, D., Gräve, K., Holz, D., Schreiber, M., Topaldou-Kyniazopoulou, A., Schwarz, M., Behnke, S.

In RoboCup 2013, Robot Soccer World Cup XVII, pages: 135-146, Springer, 2014 (inbook)

link (url) DOI [BibTex]


2013


Efficient 3D Object Perception and Grasp Planning for Mobile Manipulation in Domestic Environments

Stueckler, J., Steffens, R., Holz, D., Behnke, S.

Robotics and Autonomous Systems (RAS), 61(10):1106-1115, 2013 (article)

link (url) DOI [BibTex]



NimbRo@Home: Winning Team of the RoboCup@Home Competition 2012

Stueckler, J., Badami, I., Droeschel, D., Gräve, K., Holz, D., McElhone, M., Nieuwenhuisen, M., Schreiber, M., Schwarz, M., Behnke, S.

In RoboCup 2012, Robot Soccer World Cup XVI, pages: 94-105, Springer, 2013 (inbook)

link (url) DOI [BibTex]


2012


RoboCup@Home: Demonstrating Everyday Manipulation Skills in RoboCup@Home

Stueckler, J., Holz, D., Behnke, S.

IEEE Robotics and Automation Magazine (RAM), 19(2):34-42, 2012 (article)

link (url) DOI [BibTex]



Towards Robust Mobility, Flexible Object Manipulation, and Intuitive Multimodal Interaction for Domestic Service Robots

Stueckler, J., Droeschel, D., Gräve, K., Holz, D., Kläß, J., Schreiber, M., Steffens, R., Behnke, S.

In RoboCup 2011, Robot Soccer World Cup XV, pages: 51-62, Springer, 2012 (inbook)

link (url) DOI [BibTex]


2008


Hierarchical Reactive Control for Humanoid Soccer Robots

Behnke, S., Stueckler, J.

International Journal of Humanoid Robots (IJHR), 5(3):375-396, 2008 (article)

link (url) [BibTex]
