Trust as a Precursor to Belief Revision
[Abstract] Belief revision is concerned with incorporating new information into a pre-existing set of beliefs. When the new information comes from another agent, we must first determine whether that agent should be trusted. In this paper, we define trust as a pre-processing step before revision. We emphasize that trust in an agent is often restricted to a particular domain of expertise. We demonstrate that this form of trust can be captured by associating a state partition with each agent and then relativizing all reports to this partition before revising. We position the resulting family of trust-sensitive revision operators within the class of selective revision operators of Fermé and Hansson, and we prove a representation result that characterizes the class of trust-sensitive revision operators in terms of a set of postulates. We also show that trust-sensitive revision is manipulable, in the sense that agents can sometimes have an incentive to pass on misleading information.
[Subject Classification] Artificial Intelligence
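To make the construction described in the abstract concrete, the following is a minimal illustrative sketch in Python, not drawn from the paper itself. It assumes a toy propositional vocabulary ATOMS, represents belief states and reports as sets of states, and encodes an agent's domain of expertise as a state partition; the names trust_relativize, revise, and trust_sensitive_revise, and the simple "drastic" revision operator used here, are illustrative assumptions rather than the operators defined in the paper.

# Sketch of trust as a pre-processing step before revision (assumptions noted above).
from itertools import product

ATOMS = ("p", "q")                       # toy vocabulary (assumption)
STATES = set(product((True, False), repeat=len(ATOMS)))

def trust_relativize(report, partition):
    """Weaken a report to the union of partition cells it intersects.

    Within a cell the reporting agent is not trusted to distinguish states,
    so any state sharing a cell with a reported state remains possible."""
    return {s for cell in partition if cell & report for s in cell}

def revise(beliefs, evidence):
    """A simple stand-in revision on model sets: keep the consistent part
    of the current beliefs if possible, otherwise adopt the evidence."""
    common = beliefs & evidence
    return common if common else evidence

def trust_sensitive_revise(beliefs, report, partition):
    """Trust as pre-processing: relativize the report, then revise."""
    return revise(beliefs, trust_relativize(report, partition))

if __name__ == "__main__":
    # Hypothetical example: the reporting agent is only trusted on p,
    # so its partition groups states by the value of p alone.
    partition = [
        {s for s in STATES if s[0]},      # states where p is true
        {s for s in STATES if not s[0]},  # states where p is false
    ]
    beliefs = {(True, True)}              # currently believe p and q
    report = {(True, False)}              # agent reports "p and not q"
    print(trust_sensitive_revise(beliefs, report, partition))
    # Prints {(True, True)}: the q-component of the report is discarded,
    # since the agent is not trusted on q.

In the example, the report "p and not q" is first weakened to "p" (the union of the partition cells it meets), so only the trusted part of the report influences the revision; this mirrors the abstract's point that trust restricted to a domain of expertise is applied before, and independently of, the revision operator itself.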