Abstract.
We consider the setting of stochastic multiagent systems and
formulate an automated verification framework for quantifying
and reasoning about agents’ trust. To capture human trust,
we work with a cognitive notion of trust defined as a subjective
evaluation that agent A makes about agent B’s ability to
complete a task, which in turn may lead to a decision by A
to rely on B. We propose a probabilistic rational temporal
logic PRTL, which extends the logic PCTL with reasoning
about mental attitudes (beliefs, goals and intentions), and
includes novel operators that can express concepts of social
trust such as competence, disposition and dependence. The
logic can express, for example, that “agent A will eventually trust agent B, with probability at least p, that B will behave in a way that ensures the successful completion of a given task”. We study the complexity of the automated verification
problem and, while the general problem is undecidable, we
identify restrictions on the logic and the system that result in
decidable, or even tractable, subproblems.
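As an illustration only, the quoted property might be rendered in a PRTL-style formula along the following lines; the trust operator's symbol, the placement of the probability bound, and the atomic proposition task_done are assumptions made for this sketch, not the paper's actual notation:

% Hypothetical PRTL-style rendering of the example property (notation assumed):
\[
  \mathrm{F}\; \mathrm{T}^{A \rightarrow B}_{\geq p}\big(\mathrm{F}\; \mathit{task\_done}\big)
\]

Here $\mathrm{F}$ is the standard PCTL “eventually” modality; $\mathrm{T}^{A \rightarrow B}_{\geq p}\,\varphi$ is an assumed trust operator stating that A's subjective evaluation that B will satisfy $\varphi$ is at least $p$; and $\mathit{task\_done}$ is a hypothetical atomic proposition marking successful completion of the task.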