[Kwi17] M. Kwiatkowska. Cognitive Reasoning and Trust in Human-Robot Interactions. In Proc. 14th Annual Conference on Theory and Applications of Models of Computation (TAMC 2017), pages 3–11. Springer, 2017. [pdf] [bib]
Downloads: pdf (173 KB), bib
Notes: The original publication is available at link.springer.com.
Abstract. We are witnessing accelerating technological advances in autonomous systems, of which driverless cars and home-assistive robots are prominent examples. As mobile autonomy becomes embedded in our society, we increasingly depend on decisions made by mobile autonomous robots and interact with them socially. Key questions are how to ensure safety and trust in such interactions: how do we know when to trust a robot? How much should we trust? And how much should the robots trust us? This paper gives an overview of a probabilistic logic for expressing trust between human or robotic agents, such as “agent A has 99% trust in agent B’s ability or willingness to perform a task”, and the role it can play in explaining trust-based decisions and agents’ dependence on one another. The logic is founded on a probabilistic notion of belief, supports cognitive reasoning about goals and intentions, and admits quantitative verification via model checking, which can be used to evaluate trust in human-robot interactions. The paper concludes by summarising future challenges for modelling and verification in this important field.
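To make the flavour of such statements concrete, here is a minimal sketch of how the quoted assertion might be formalised in a probabilistic belief/trust logic. The operators $B$ (belief) and $T$ (trust), the temporal operator $\mathrm{F}$ (eventually), and the proposition $\mathit{task\_done}_B$ are illustrative assumptions, not necessarily the paper's exact syntax:

\[
% A's subjective belief: with probability at least 0.99,
% agent B eventually completes the task.
B_{A}^{\geq 0.99}\,\big(\mathrm{F}\,\mathit{task\_done}_{B}\big)
\qquad
% A's trust in B, at level at least 0.99, for the same goal,
% derived from A's probabilistic beliefs about B's ability or willingness.
T_{A,B}^{\geq 0.99}\,\big(\mathrm{F}\,\mathit{task\_done}_{B}\big)
\]

Because belief here is probabilistic, formulas of this kind can in principle be checked against a stochastic multiagent model by probabilistic model checking, which is the quantitative verification route the abstract refers to.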
