My overarching research goal is to provide robots with perceptual abilities that allow safe, intelligent interactions with humans in real-world environments. To develop these abilities, I study the principles of the human, animal, and insect visual systems, use them to design new computer vision and machine learning algorithms, and validate their effectiveness in intelligent robotic systems. I am enthusiastic about this forward/reverse engineering approach, which combines concepts from computer science and engineering with those from biology and the social sciences: it offers the dual benefit of uncovering principles inherent in biological visual systems and of applying those principles to their artificial counterparts. I am a recipient of two best thesis awards and two best poster awards and have acquired grants in excess of 400,000 AUD.
We considered #placerecognition as a #classification task: Which of N places is the robot in, given significant appearance change since it last observed this place? Classification has long been studied with spiking nets, e.g. in digit recognition. 2/n
But place recognition is a harder problem with many more classes. To deal with this, we introduce a novel weighted assignment scheme using ambiguity-informed #salience: does a #neuron represent (i.e. fire for) just a single place, or multiple places? 3/n
In experiments on four challenging datasets, we show that our spiking net achieves performance comparable to the #SoTA and degrades gracefully as the number of reference places increases. 4/n
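The thread does not give implementation details, but the ambiguity-informed salience idea can be sketched as follows. This is a hypothetical illustration, not the authors' code: the function names, the binary neuron-to-place firing matrix, and the specific 1/k down-weighting of a neuron that fires for k places are all my assumptions about one plausible realization.

```python
import numpy as np

def ambiguity_informed_salience(firing):
    """Hypothetical salience weighting: firing[i, j] = 1 if neuron i
    fired for reference place j during training. A neuron selective
    for a single place gets weight 1.0; a neuron that fires for k
    places gets 1/k, so ambiguous neurons count less in the vote."""
    places_per_neuron = firing.sum(axis=1)
    return np.where(places_per_neuron > 0,
                    1.0 / np.maximum(places_per_neuron, 1), 0.0)

def classify_place(query_spikes, firing, salience):
    """Weighted assignment: each neuron that spikes for the query adds
    its salience to every place it was associated with in training;
    the place with the highest weighted vote wins."""
    votes = (query_spikes[:, None] * salience[:, None] * firing).sum(axis=0)
    return int(np.argmax(votes))
```

For example, with three neurons where neuron 1 fires for two places, its vote is halved relative to the place-selective neurons 0 and 2, so a selective neuron can break the tie.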
Here we go: @rosorg #ROS2 on a shiny new @Apple M1 Silicon (osx-arm64), thanks to @RoboStack and @condaforge. Still not quite ready for the public (some issues in the installation process), but we are nearly there! Stay tuned and follow @RoboStack for updates.