Learning Object Affordances from Sensory-Motor Interaction via Bayesian Networks with Auto-Encoder Features

Authors

M. T. Akbulut, E. Ugur

DOI:

https://doi.org/10.18201/ijisae.2020261584

Keywords:

affordance, cognitive robotics, developmental robotics, perception, learning

Abstract

In this paper, we study learning the relationships between objects, actions, and effects. "Affordance" is a concept from ecological psychology that addresses how humans learn these relationships; it is also studied in cognitive robotics with the aim of transferring the same ability to robots. Our model builds on two existing models in this field and combines their strengths into a novel system in which an anthropomorphic robot observes its environment and the changes in that environment after executing pre-learned actions. The robot transforms these observations into object and effect properties in a shared space, and object affordances are learned using Bayesian networks. The dimensionality of the features is reduced with autoencoders to obtain a compact network. The probabilistic model allows our system to deal with missing information and to make predictions not only for effect properties but also for object properties and actions. We illustrate the advantages of our model by comparing it with the two aforementioned models.
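
As a concrete illustration of the pipeline described in the abstract, the following is a minimal sketch, not the authors' implementation: an autoencoder compresses raw object features into a compact code, the code is discretized, and a discrete Bayesian network over object, action, and effect variables is queried with partial evidence. All sizes, variable names, the toy interaction data, and the choice of PyTorch and pgmpy are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination


class AutoEncoder(nn.Module):
    """Compress raw sensory features to a small latent code (assumed sizes)."""
    def __init__(self, n_in=64, n_latent=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, 16), nn.ReLU(),
                                     nn.Linear(16, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 16), nn.ReLU(),
                                     nn.Linear(16, n_in))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z


def train_autoencoder(model, data, epochs=200, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        recon, _ = model(data)
        loss = nn.functional.mse_loss(recon, data)
        opt.zero_grad()
        loss.backward()
        opt.step()


# Toy interaction log: 200 interactions with 64-d raw object features.
rng = np.random.default_rng(0)
raw = torch.tensor(rng.random((200, 64)), dtype=torch.float32)
ae = AutoEncoder()
train_autoencoder(ae, raw)
with torch.no_grad():
    _, z = ae(raw)

# Discretize one latent dimension into a binary object descriptor, and
# invent an action and a placeholder effect rule so the network has
# something to learn; the real system uses observed effect features.
obj = (z[:, 0] > z[:, 0].median()).int().numpy()
act = rng.integers(0, 2, size=200)          # e.g. 0 = push, 1 = lift
eff = obj ^ act                             # placeholder effect rule

df = pd.DataFrame({"object": obj, "action": act, "effect": eff})
bn = BayesianNetwork([("object", "effect"), ("action", "effect")])
bn.fit(df, estimator=MaximumLikelihoodEstimator)

# Probabilistic inference copes with missing variables: predict the effect
# from object + action, or infer which action explains an observed effect.
inf = VariableElimination(bn)
print(inf.query(["effect"], evidence={"object": 1, "action": 0}))
print(inf.query(["action"], evidence={"object": 1, "effect": 1}))
```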

Published

26.06.2020

How to Cite

Akbulut, M. T., & Ugur, E. (2020). Learning Object Affordances from Sensory-Motor Interaction via Bayesian Networks with Auto-Encoder Features. International Journal of Intelligent Systems and Applications in Engineering, 8(2), 52–59. https://doi.org/10.18201/ijisae.2020261584

Issue

Vol. 8 No. 2 (2020)

Section

Research Article