Tom Krajnik edited this page Dec 10, 2022 · 50 revisions

What is FreMEn?

The Frequency Map Enhancement (FreMEn) is an enabling technology for long-term mobile robot autonomy in changing environments [1], but it is also applicable to problems outside the robotics domain. It introduces the notion of dynamics into most spatial models used in mobile robotics, improving the robots' ability to cope with naturally occurring environment changes. FreMEn is based on the assumption that, from a mid- to long-term perspective, some of the environment dynamics are periodic. To reflect this, FreMEn models the uncertainty of elementary environment states by a combination of periodic functions rather than by a constant probability, as is usual in environment models used in mobile robotics.

Figure: FreMEn-based spatio-temporal occupancy grid of the Lincoln Centre for Autonomous Systems (L-CAS) office. Static cells are shown in green; cells that exhibit daily periodicity are shown in red.

How is that useful?

Modeling the uncertainties as probabilistic functions of time allows long-term observations of the same environment to be integrated into memory-efficient spatio-temporal models. These models can predict future environment states with a given level of confidence. The predictive power of the FreMEn models improves the ability of mobile robots to operate in changing environments over long periods of time. In several long-term experiments, FreMEn was shown to improve mapping [1,2,3], localization [1,3,4,5,6], path planning [7], robotic search [8], activity recognition [9], patrolling [10], exploration [11,12], task scheduling [13] and human-robot interaction [14]. Moreover, it allows for temporal-context-based novelty and anomaly detection [1,2]. The FreMEn method outperformed other temporal models (e.g. Gaussian processes) in both prediction accuracy and computational efficiency [8,9,10,12,15].

How does it work?

The concept is based on the idea of frequency transforms, which represent functions of time by the frequencies that make them up. FreMEn simply takes a given sequence of long-term observations of a particular environment state, calculates its frequency spectrum by means of the Fourier transform and stores the most prominent spectral components. These components correspond to the observed periodicities of the given environment state. Knowledge of the spectral components makes it possible to calculate the probability of the environment state at any given time.
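The procedure above can be sketched in a few lines of Python. This is a minimal illustration, not the actual FreMEn implementation: it assumes uniformly sampled binary observations, and the function names (`build_model`, `predict`) are invented for this example.

```python
import numpy as np

def build_model(observations, dt, n_components=2):
    """Minimal FreMEn-style model: the mean probability of a binary
    state plus its most prominent periodic (spectral) components.

    observations: uniformly sampled 0/1 state measurements
    dt:           sampling interval in seconds
    """
    obs = np.asarray(observations, dtype=float)
    mean = obs.mean()
    # Fourier transform of the zero-mean observation sequence
    spectrum = np.fft.rfft(obs - mean) / len(obs)
    freqs = np.fft.rfftfreq(len(obs), d=dt)
    # Keep the n most prominent non-DC components as
    # (frequency, amplitude, phase) triples
    order = np.argsort(np.abs(spectrum[1:]))[::-1][:n_components] + 1
    return mean, [(freqs[i], 2 * np.abs(spectrum[i]), np.angle(spectrum[i]))
                  for i in order]

def predict(mean, components, t):
    """Probability of the state being 1 at time t, clipped to [0, 1]."""
    p = mean + sum(a * np.cos(2 * np.pi * f * t + phi)
                   for f, a, phi in components)
    return float(np.clip(p, 0.0, 1.0))
```

For a state that toggles with a period of 10 samples, `build_model` recovers the 0.1 Hz component, and `predict` returns a high probability during the "on" half of the cycle and a low one during the "off" half.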

The picture below illustrates the use of FreMEn for visual localization in changing environments. The long-term observations of a particular image feature visibility (red,centre), are transferred to the spectral domain (left). Transferring the most prominent spectral components (left, green) to the time domain provides an analytic expression (centre) representing the probability of the feature being visible at a given time (green, centre). This allows to predict the feature visibility at a time when the robot performs self-localization (blue). In this case, the FreMEn model is applied to all features visible at a given location, which allows to predict its appearance for a specific time.
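One simple way to use such per-feature visibility predictions for localization is to score each candidate location by the likelihood of the currently observed feature set, treating features as independent. The scheme and the function `location_likelihood` below are an illustrative assumption, not the exact matching procedure from the papers; each feature model is a `(mean, components)` pair as produced by a FreMEn-style fit.

```python
import math

def location_likelihood(feature_models, observed, t):
    """Likelihood that the observed feature set comes from this location:
    the product over features of p(visible at t) if the feature was
    observed, and (1 - p) otherwise (naive independence assumption).

    feature_models: list of (mean, [(freq_hz, amplitude, phase)]) models
    observed:       list of booleans, one per feature
    t:              query time in seconds
    """
    logp = 0.0
    for (mean, comps), seen in zip(feature_models, observed):
        p = mean + sum(a * math.cos(2 * math.pi * f * t + phi)
                       for f, a, phi in comps)
        p = min(max(p, 0.01), 0.99)  # clamp away from 0/1 for stability
        logp += math.log(p if seen else 1.0 - p)
    return math.exp(logp)
```

A feature with a daily visibility cycle then makes the same observation much more likely at the time of day when the model predicts it to be visible.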

Figure: FreMEn for visual localization. Click the picture to see a detailed explanation (with sound).

Does it apply only to image features?

No, the concept is quite universal - it works with any environment model that represents the environment by a set of discrete components with binary states, e.g. occupancy grids whose cells are occupied or free, or topological maps whose edges are traversable or not. Classic models represent the uncertainty of these states by a probability that remains constant unless updated through direct observation. FreMEn represents the uncertainty by a combination of periodic functions obtained through frequency analysis. An overview of experiments with various FreMEn models is provided as a poster.
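Applied to an occupancy grid, each cell only needs to keep a mean and a small set of spectral states that can be updated incrementally as new binary observations arrive, which is what makes the representation memory-efficient. The sketch below assumes a fixed set of candidate periods (e.g. one day and one week) whose amplitudes and phases are estimated online; the class name `FremenCell` and this particular update rule are illustrative choices, not the released code.

```python
import cmath

class FremenCell:
    """One binary environment element (e.g. an occupancy-grid cell),
    modelled as a mean probability plus periodic components at fixed
    candidate periods, estimated incrementally from 0/1 observations."""

    def __init__(self, periods=(86400.0, 604800.0)):  # day, week (seconds)
        self.n = 0
        self.mean = 0.0
        # one complex spectral accumulator per candidate period
        self.states = {T: 0j for T in periods}

    def update(self, value, t):
        """Integrate one binary observation (0.0 or 1.0) taken at time t."""
        self.n += 1
        self.mean += (value - self.mean) / self.n
        for T in self.states:
            self.states[T] += value * cmath.exp(-2j * cmath.pi * t / T)

    def predict(self, t):
        """Predicted probability that the cell is occupied at time t."""
        p = self.mean
        for T, s in self.states.items():
            c = s / max(self.n, 1)  # normalized spectral coefficient
            p += 2 * (c * cmath.exp(2j * cmath.pi * t / T)).real
        return min(max(p, 0.0), 1.0)
```

Feeding a cell hourly observations of a state that is occupied only during the daytime, the daily component dominates and `predict` returns a high occupancy probability at midday and a low one at night. Note that when the observation window does not span a whole number of candidate periods, the mean leaks into the spectral estimates, so in practice longer observation histories give cleaner components.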

Where can I download it?

FreMEn is implemented as a ROS action server and is available at this github repository. An early version of the FreMEn method is part of a software release of the EU-funded STRANDS project. You can get its source code on the STRANDS github or as an Ubuntu package.

References

  1. T.Krajnik, J.P.Fentanes, J.Santos, T.Duckett: FreMEn: Frequency Map Enhancement for Long-Term Mobile Robot Autonomy in Changing Environments. IEEE Transactions on Robotics, 2017. [bibtex]
  2. T.Krajnik, J.P.Fentanes, G.Cielniak, C.Dondrup, T.Duckett: Spectral Analysis for Long-Term Robotic Mapping. In proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2014. [bibtex]
  3. T.Krajnik, J.M.Santos, B.Seemann, T.Duckett: FROctomap: An Efficient Spatio-Temporal Environment Representation. In proceedings of Towards Autonomous Robotic Systems (TAROS), 2014. [bibtex]
  4. T.Krajnik, J.P.Fentanes, O.M.Mozos, T.Duckett, J.Ekekrantz, M.Hanheide: Long-term topological localisation for service robots in dynamic environments using spectral maps. In proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2014. [bibtex]
  5. T.Krajnik, J.P.Fentanes, J.Santos, K.Kusumam, T.Duckett: FreMEn: Frequency Map Enhancement for Long-Term Mobile Robot Autonomy in Changing Environments. In proceedings of the ICRA Workshop on Visual Place Recognition in Changing Environments (VPRiCE), 2014. [bibtex]
  6. J.P.Fentanes, T.Krajnik, J.Santos, T.Duckett: Persistent Localization and Life-long Mapping in Changing Environments using the Frequency Map Enhancement. In proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016. [bibtex]
  7. J.P.Fentanes, B.Lacerda, T.Krajnik, N.Hawes, M.Hanheide: Now or later? predicting and maximising success of navigation actions from long-term experience. In proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2015. [bibtex]
  8. T.Krajnik, M.Kulich, L.Mudrova, R.Ambrus, T.Duckett: Where is waldo at time t? using spatio-temporal models for mobile robot search. In proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2015. [bibtex]
  9. C.Coppola, T.Krajnik, N.Bellotto, T.Duckett: Learning temporal context for activity recognition. In proceedings of the European Conference on Artificial Intelligence (ECAI), 2016. [bibtex]
  10. T.Krajnik, J.M.Santos, T.Duckett: Life-Long Spatio-Temporal Exploration of Dynamic Environments. In proceedings of the European Conference on Mobile Robotics (ECMR), 2015. [bibtex]
  11. J.M.Santos, T.Krajnik, J.P.Fentanes, T.Duckett: Lifelong Information-driven Exploration to Complete and Refine 4D Spatio-Temporal Maps. Robotics and Automation Letters (RAL), 2016. [bibtex]
  12. J.M.Santos, T.Krajnik, T.Duckett: Spatio-temporal Exploration Strategies for Long-term Autonomy of Mobile Robots. Robotics and Autonomous Systems (RAS), 2016. [bibtex]
  13. M.Kulich, T.Krajnik, L.Preucil and T.Duckett: To Explore or to Exploit? Learning Humans' Behaviour to Maximize Interactions with Them. In proceedings of the International Workshop on Modelling and Simulation for Autonomous Systems (MESAS), 2016. [bibtex]
  14. M.Hanheide, D.Hebesberger, T.Krajnik: The When, Where, and How: An Adaptive Robotic Info-Terminal for Care Home Residents - A long-term Study. In proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2017. [bibtex]
  15. F.Jovan, J.Wyatt, N.Hawes, T.Krajnik: A Poisson-Spectral Model for Modelling the Spatio-Temporal Patterns in Human Data Observed by a Robot. In proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016. [bibtex]