An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
Mathematically speaking, an agent's behavior is described by the agent function, which maps any given percept sequence to an action.
Internally, the agent function for an artificial agent will be implemented by the agent program.
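To make the distinction concrete, here is a minimal Python sketch of a table-driven agent program implementing an agent function. The two-location vacuum world, its percepts, its actions, and the table entries are illustrative assumptions, not part of the notes above.

```python
# A table-driven agent program: the (abstract) agent function maps
# percept sequences to actions; this program implements it by table
# lookup. Percepts, actions, and the table are illustrative.

def table_driven_agent_program(percept, percepts, table):
    percepts.append(percept)                   # full percept sequence so far
    return table.get(tuple(percepts), "NoOp")  # agent function as a lookup

# Hypothetical two-location vacuum world: a percept is (location, status).
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("B", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

history = []
print(table_driven_agent_program(("A", "Clean"), history, table))  # -> Right
print(table_driven_agent_program(("B", "Dirty"), history, table))  # -> Suck
```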
For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
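As a sketch of "expected to maximize its performance measure": given a model that assigns probabilities to the outcomes of each action (the `outcomes` and `performance` helpers below are hypothetical, supplied by the caller), a rational choice is the action with the highest expected performance.

```python
def rational_action(actions, outcomes, performance):
    """Choose the action with the highest expected performance.
    `outcomes(a)` yields (state, probability) pairs and `performance(s)`
    scores a state; both are assumed helpers, not fixed interfaces."""
    def expected_performance(action):
        return sum(p * performance(s) for s, p in outcomes(action))
    return max(actions, key=expected_performance)

# Toy usage with made-up numbers:
acts = ["stay", "move"]
out = {"stay": [("s0", 1.0)], "move": [("s1", 0.8), ("s2", 0.2)]}
perf = {"s0": 0.0, "s1": 1.0, "s2": -1.0}
print(rational_action(acts, out.__getitem__, perf.__getitem__))  # -> move
```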
An omniscient agent knows the actual outcome of its actions and can act accordingly, but omniscience is impossible in reality; rationality maximizes expected performance, whereas perfection maximizes actual performance.
To the extent that an agent relies on the prior knowledge of its designer rather than on its own percepts, we say that the agent lacks autonomy.
If an agent's sensors give it access to the complete state of the environment at each point in time, we say the task environment is fully observable.
If the next state of the environment is completely determined by the current state and the action executed by the agent, we say the environment is deterministic; otherwise, it is stochastic.
In an episodic task environment, the agent's experience is divided into atomic episodes. In each episode the agent receives a percept and then performs a single action. Crucially, the next episode does not depend on the actions taken in previous episodes.
If the environment can change while an agent is deliberating, the environment is dynamic; otherwise, it is static.
In a known environment, the outcomes (or outcome probabilities, if the environment is stochastic) for all actions are given.
Agent = architecture + program
Four basic kinds of agent programs:
Simple reflex agents
Model-based reflex agents
Goal-based agents
Utility-based agents
Simple reflex agents select actions on the basis of the current percept, ignoring the rest of the percept history.
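For example, a simple reflex agent for the (illustrative) two-location vacuum world can be written as a handful of condition-action rules over the current percept alone:

```python
def simple_reflex_vacuum_agent(percept):
    # Condition-action rules on the current percept only;
    # the percept history plays no role.
    location, status = percept
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
```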
Model-based reflex agents maintain some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state.
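A minimal sketch of the same vacuum world with internal state: the agent remembers what it has learned about each square, so it can stop once it believes both are clean. The state representation and rules are assumptions for illustration.

```python
class ModelBasedReflexVacuumAgent:
    def __init__(self):
        # Internal state: best guess about each square, built up
        # from the percept history ("Unknown" until observed).
        self.model = {"A": "Unknown", "B": "Unknown"}

    def program(self, percept):
        location, status = percept
        self.model[location] = status       # update state from the percept
        if status == "Dirty":
            self.model[location] = "Clean"  # predicted effect of Suck
            return "Suck"
        if self.model["A"] == self.model["B"] == "Clean":
            return "NoOp"                   # believes the world is clean
        return "Right" if location == "A" else "Left"

agent = ModelBasedReflexVacuumAgent()
print(agent.program(("A", "Clean")))  # -> Right (B still unknown)
```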
A general learning agent consists of a critic, a learning element, a performance element, and a problem generator.
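One way to picture how the four components fit together; the component interfaces below are invented for this sketch, which shows only the wiring, not any particular learning algorithm.

```python
class LearningAgent:
    """Wiring of the four components of a general learning agent.
    Each component is a caller-supplied function; the signatures
    are assumptions made for illustration."""

    def __init__(self, performance_element, critic,
                 learning_element, problem_generator):
        self.performance_element = performance_element  # maps percepts to actions
        self.critic = critic                            # scores behavior against a fixed standard
        self.learning_element = learning_element        # improves the performance element
        self.problem_generator = problem_generator      # proposes exploratory actions

    def step(self, percept):
        feedback = self.critic(percept)
        # The learning element uses the critic's feedback to produce
        # an improved performance element.
        self.performance_element = self.learning_element(
            self.performance_element, feedback)
        # The problem generator may override with an exploratory action.
        exploratory = self.problem_generator(percept)
        if exploratory is not None:
            return exploratory
        return self.performance_element(percept)
```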