
I'm having a little trouble with the definition of rationality, which goes something like:

An agent is rational if it maximizes its performance measure given its current knowledge.

I've read that a simple reflex agent will not act rationally in a lot of environments. For example, a simple reflex agent can't act rationally when driving a car, as it needs previous perceptions to make correct decisions.

However, if it does its best with the information it's got, wouldn't that be rational behaviour, as the definition contains "given its current knowledge"? Or is it more like: "given the knowledge it could have had at this point if it had stored all the knowledge it has ever received"?

Another question about the definition of rationality: Is a chess engine rational as it picks the best move given the time it's allowed to use, or is it not rational as it doesn't actually (always) find the best solution (would need more time to do so)?

nbro
Mr. Eivind

2 Answers


When we use the term rationality in AI, it tends to conform to the game theory/decision theory definition of rational agent.

In a solved or tractable game, an agent can have perfect rationality. If the game is intractable, rationality is necessarily bounded. (Here, "game" can be taken to mean any problem.)

There is also the issue of imperfect information and incomplete information.

Rationality isn't restricted to objectively optimal decisions but includes subjectively optimal decisions, where the optimality can only be presumed. (That's why defection is the optimal strategy in the one-shot Prisoner's Dilemma, where the agents don't know the decision-making process of the competitor.)
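To make that concrete, here is a small sketch of why defection dominates in the one-shot case. The payoff values are the conventional textbook ones (an assumption for illustration), not from any particular source:

```python
# One-shot Prisoner's Dilemma, conventional payoffs (assumed):
# T=5 (temptation), R=3 (mutual cooperation), P=1 (mutual defection),
# S=0 (sucker's payoff).

PAYOFF = {  # (my move, opponent's move) -> my payoff
    ("defect", "cooperate"): 5,
    ("cooperate", "cooperate"): 3,
    ("defect", "defect"): 1,
    ("cooperate", "defect"): 0,
}

def best_response(opponent_move):
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFF[(my_move, opponent_move)])

# Defection is the best response whatever the opponent does, so it is the
# subjectively optimal choice even though mutual cooperation pays more.
for opp in ("cooperate", "defect"):
    assert best_response(opp) == "defect"
```

The point of the sketch is that the agent never needs to know the opponent's decision process: defection maximizes its payoff in both columns of the payoff table.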

  • Rationality here conforms to Russell & Norvig's definition, where it is related to performance in an environment.

What may be rational in one environment may not be rational in a different environment. Additionally, what may be locally rational for a simple reflex agent will not appear rational from the perspective of an agent with more knowledge, or a learning agent.

Iterated Prisoner's Dilemmas, where there is communication in the form of prior choices, may provide an analogy. An agent that always defects, even where the competitor has shown willingness to cooperate, may not be regarded as rational, because defecting against a cooperative agent does not maximize utility. A simple reflex agent wouldn't have the capacity to alter its strategy.
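A quick simulation illustrates this (again with the conventional, assumed payoffs): against a tit-for-tat opponent, an always-defecting agent ends up worse off than one that cooperates.

```python
# Iterated Prisoner's Dilemma sketch: always defecting against a
# tit-for-tat opponent earns less total utility than cooperating with it.
# Payoffs T=5, R=3, P=1, S=0 are the conventional assumed values.

PAYOFF = {  # (move A, move B) -> (payoff A, payoff B)
    ("defect", "cooperate"): (5, 0),
    ("cooperate", "defect"): (0, 5),
    ("cooperate", "cooperate"): (3, 3),
    ("defect", "defect"): (1, 1),
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "cooperate"

def always_defect(opponent_history):
    return "defect"

def always_cooperate(opponent_history):
    return "cooperate"

def play(strategy_a, strategy_b, rounds=10):
    """Return the total payoffs of two strategies over repeated rounds."""
    hist_a, hist_b = [], []  # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

defector_score, _ = play(always_defect, tit_for_tat)
cooperator_score, _ = play(always_cooperate, tit_for_tat)
assert cooperator_score > defector_score
```

Over 10 rounds the defector collects one temptation payoff and then mutual punishment forever, while the cooperator collects the mutual-cooperation reward every round.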

However, rationality used in the most general sense might allow that, to the agent making the decision, if the decision is based on achieving an objective and is reached using the information available to that agent, the decision may be regarded as rational, regardless of actual optimality.

DukeZhou

I've read that a simple reflex agent will not act rationally in a lot of environments. E.g. a simple reflex agent can't act rationally when driving a car as it needs previous perceptions to make correct decisions.

I wouldn't say that the need for previous perceptions is the reason why a simple reflex agent doesn't act rationally. I'd say the more serious issue with simple reflex agents is that they do not perform long-term planning. I think that is the primary cause of them not always acting rationally, and it is also consistent with the definition of rationality you provided. A reflex-based agent typically doesn't involve any long-term planning, and that's why it often fails to do its best given the knowledge it has.
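To show what's missing, here's a minimal sketch of a simple reflex agent. The percepts and rules are invented driving-flavoured placeholders, not from any real system:

```python
# A simple reflex agent: a fixed condition-action table that maps only the
# *current* percept to an action. The rules and percepts are hypothetical.

RULES = {
    "light_red": "brake",
    "light_green": "accelerate",
    "car_ahead_braking": "brake",
}

def simple_reflex_agent(percept):
    """Act on the current percept alone: no memory, no look-ahead."""
    return RULES.get(percept, "coast")

# The agent reacts sensibly to each percept in isolation...
assert simple_reflex_agent("light_red") == "brake"
# ...but it has no internal state in which to remember earlier percepts,
# and no mechanism for reasoning about the future consequences of actions.
```

Each individual reaction can be locally sensible, yet the agent cannot trade a worse action now for a better outcome later, which is exactly what long-term planning requires.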

Another question about the definition of rationality: Is a chess engine rational as it picks the best move given the time it's allowed to use, or is it not rational as it doesn't actually (always) find the best solution (would need more time to do so)?

An algorithm like minimax in its "purest" formulation (without a limit on search depth) would be rational for games like chess, since it would play optimally. However, that is not feasible in practice; it would take far too long to run. In practice, we run algorithms with a limit on search depth, to make sure that they stop thinking and pick a move in a reasonable amount of time. Those will not necessarily be rational. This gets back to bounded rationality as described by DukeZhou in his answer.
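As a sketch of the trade-off (the game tree and its values are invented for illustration), a depth-limited minimax looks like this:

```python
# Depth-limited minimax on a toy game tree. With enough depth it returns
# the optimal value; the cutoff trades optimality for bounded running time.

def minimax(node, depth, maximizing, evaluate, children):
    """Return the minimax value of `node`, cutting the search off at `depth`."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    values = (minimax(k, depth - 1, not maximizing, evaluate, children)
              for k in kids)
    return max(values) if maximizing else min(values)

# Toy tree: internal nodes are tuples of children; leaves are numbers.
tree = ((3, 12), (2, 4))

def children(node):
    return node if isinstance(node, tuple) else ()

def evaluate(node):
    # At a cutoff a real engine would use a heuristic evaluation; here we
    # crudely score unexplored internal nodes as 0 (an assumption).
    return node if isinstance(node, (int, float)) else 0

full_search = minimax(tree, 2, True, evaluate, children)      # optimal: 3
shallow_search = minimax(tree, 1, True, evaluate, children)   # cut off: 0
```

With depth 2 the search reaches the leaves and finds the optimal value; with depth 1 it stops early and falls back on the (here, useless) heuristic, which is the bounded-rationality trade-off in miniature.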

The story is not really clear if we try to talk about this in terms of "picking the best move given the time it's allowed to use" though, because what is or isn't possible given a certain amount of time depends very much on factors such as:

  • algorithm we choose to implement
  • speed of our hardware
  • efficiency of implementation / programming language used
  • etc.

For example, hypothetically I could implement an algorithm that requires a database of pre-computed optimal solutions; the algorithm just looks up the solutions in the database and instantly plays the optimal moves. Such an algorithm could truly be rational, even given a highly limited amount of time. It would be difficult to implement in practice because we'd have difficulty constructing such a database in the first place, but the algorithm itself is well-defined. So, you can't really include something like "given the time it's allowed to use" in your definition of rationality.
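The hypothetical agent is trivially short to write down; everything hard is hidden in building the table. The positions and moves below are invented placeholders, not real chess theory:

```python
# Sketch of the hypothetical lookup-table agent: if a complete database of
# optimal moves existed, play would be both optimal and effectively instant.
# The position keys and moves are made-up placeholders.

OPTIMAL_MOVES = {  # position identifier -> precomputed "optimal" move
    "start": "e2e4",
    "after_e2e4_e7e5": "g1f3",
}

def lookup_agent(position):
    """O(1) move selection from a (hypothetically complete) database."""
    return OPTIMAL_MOVES[position]
```

The time the agent is allowed per move is irrelevant to its rationality here; the cost was paid once, up front, when the (practically unobtainable) database was built.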

Dennis Soemers