9.2 Problem Solving and Searching Techniques

This section focuses on how to model and solve problems using search techniques. Problem solving in AI often involves representing problems as state spaces and using search algorithms to find solutions. The section covers both uninformed and informed search techniques, as well as adversarial search for game playing, including minimax and alpha-beta pruning.


1. Definition of Problem Solving in AI

Problem solving in AI involves finding a path from an initial state to a goal state in a search space. The process generally involves:

  • State space: The set of all possible configurations or states of the problem.

  • Initial state: The starting point of the problem.

  • Goal state: The desired solution or the target state.

  • Actions: Operations that move from one state to another.

  • Transition model: Describes the effect of each action on the state.


2. State Space

A state space is a conceptual space that contains all possible states of the problem. Problem-solving in AI can be visualized as a search through this state space, where the agent explores states by applying actions until it reaches a goal.

  • State: A representation of a specific configuration in the problem.

  • Path: A sequence of actions leading from one state to another.

  • Goal State: The state that solves the problem.

  • Search: The process of exploring states to find a solution.


3. Problem Formulation

Problem formulation is the process of defining a problem in terms of states, actions, and goals. A well-defined problem includes:

  • Initial state: The starting point of the problem.

  • Actions: The set of actions that can be performed.

  • Transition model: Describes how actions lead from one state to another.

  • Goal state: The state that represents a solution.

  • Path cost: A function that assigns a cost to each path.


4. Well-Defined Problems

A well-defined problem has clearly specified components: an initial state (starting point), a goal state (desired outcome), a set of valid actions (permissible moves), and a transition model (how actions change states). It is deterministic, meaning the result of each action is predictable. These characteristics make it easy to apply algorithms and systematically find a solution.

  • Examples: the 8-puzzle, the robot navigation problem, Sudoku, etc.

Robot navigation problem:

  • Initial State: The robot starts at a specific location in a grid (e.g., coordinates (0,0)).

  • Goal State: The robot needs to reach a target location (e.g., coordinates (3,3)).

  • Valid Actions: The robot can move up, down, left, or right.

  • Transition Model: Moving the robot from one grid cell to another based on the chosen action.
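
To make this concrete, here is a minimal Python sketch of the robot navigation formulation above. The grid size, start, and goal coordinates are illustrative assumptions, not part of any standard specification.

```python
# A minimal sketch of the robot navigation formulation above.
# Grid size, start, and goal are illustrative assumptions.
from typing import List, Tuple

State = Tuple[int, int]  # (row, col) position on the grid

GRID_SIZE = 4            # 4x4 grid, so coordinates run 0..3
INITIAL_STATE: State = (0, 0)
GOAL_STATE: State = (3, 3)

def actions(state: State) -> List[str]:
    """Valid actions: moves that keep the robot inside the grid."""
    r, c = state
    moves = []
    if r > 0: moves.append("up")
    if r < GRID_SIZE - 1: moves.append("down")
    if c > 0: moves.append("left")
    if c < GRID_SIZE - 1: moves.append("right")
    return moves

def transition(state: State, action: str) -> State:
    """Transition model: the effect of each action on the state."""
    r, c = state
    return {"up": (r - 1, c), "down": (r + 1, c),
            "left": (r, c - 1), "right": (r, c + 1)}[action]

def is_goal(state: State) -> bool:
    return state == GOAL_STATE
```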


5. Constraint Satisfaction Problem (CSP)

A constraint satisfaction problem (CSP) is a problem where the goal is to find a solution that satisfies a set of constraints. It consists of:

  • Variables: Entities that need to be assigned values.

  • Domains: The set of possible values that can be assigned to each variable.

  • Constraints: Conditions that the variables must satisfy.

Examples include Sudoku, the 8-Queens problem, and map coloring.

Example: Sudoku

  • Variables: The cells of a 9x9 grid.

  • Domains: Each variable (cell) must take a value from 1 to 9.

  • Constraints: Each number (1-9) must appear exactly once in each row, column, and 3x3 subgrid.

The objective is to fill in the grid while satisfying all of these constraints; a minimal backtracking sketch follows.
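
Below is a minimal backtracking sketch of this Sudoku formulation in Python. It is an illustration, not a production solver: `grid` is assumed to be a 9x9 list of lists with 0 marking an empty cell.

```python
# A minimal sketch of the Sudoku CSP described above.
# `grid` is a 9x9 list of lists, with 0 marking an empty cell (an assumption).
from typing import List

def consistent(grid: List[List[int]], row: int, col: int, value: int) -> bool:
    """Check the three Sudoku constraints for assigning `value` at (row, col)."""
    # Row and column constraints: value must not already appear.
    if any(grid[row][c] == value for c in range(9)):
        return False
    if any(grid[r][col] == value for r in range(9)):
        return False
    # 3x3 subgrid constraint.
    br, bc = 3 * (row // 3), 3 * (col // 3)
    for r in range(br, br + 3):
        for c in range(bc, bc + 3):
            if grid[r][c] == value:
                return False
    return True

def solve(grid: List[List[int]]) -> bool:
    """Backtracking search: assign each empty cell a consistent value."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:                 # unassigned variable
                for value in range(1, 10):      # domain: 1..9
                    if consistent(grid, r, c, value):
                        grid[r][c] = value
                        if solve(grid):
                            return True
                        grid[r][c] = 0          # undo and backtrack
                return False
    return True  # every cell assigned: all constraints satisfied
```

Calling `solve(grid)` fills the grid in place and returns True if the constraints can be satisfied from the given clues.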

Applications of CSPs

  1. Scheduling Problems: CSPs are widely used in scheduling tasks where multiple tasks must be assigned to time slots or resources.

  2. Puzzle Solving: Problems like Sudoku, the 8-Queens problem, and crossword puzzles are classic examples of CSPs.

  3. Resource Allocation: In operations research, CSPs are used to allocate limited resources to tasks under various constraints.

  4. Artificial Intelligence: CSPs are used to model many types of problems in AI, including planning and decision-making under constraints.


6. Uninformed Search Techniques

Uninformed search techniques, also known as blind search techniques, explore the state space without using any domain-specific knowledge to guide the search. These methods explore all possible states or actions systematically. Although uninformed searches do not make use of heuristics, they are essential for solving problems where no prior knowledge about the problem space is available. Below are the most commonly used uninformed search techniques:


a. Depth First Search (DFS)

Description: Depth First Search (DFS) is one of the most straightforward search techniques. It starts from the initial state and explores as far down a branch of the search tree as possible before backtracking. When it reaches a node with no further possible moves (i.e., a leaf node), it backtracks to the most recent node with unexplored children and continues searching.

Working:

  • DFS uses a stack (LIFO - Last In First Out) to keep track of the nodes to visit next. Alternatively, it can also be implemented using recursion, where each recursive call represents a deeper level in the tree.

  • It traverses down one branch until no further nodes are left in that path, then backtracks to try the next available branch.

Pros:

  • Space-efficient: It only stores the current path from the root to the current node in memory, meaning the space complexity is much lower than other methods like BFS.

Cons:

  • Can get stuck in deep or infinite branches: If the search tree has infinite branches or very deep paths, DFS might never return to explore other branches. For example, if the search goes down a path without a solution, it will explore it entirely before backtracking, potentially missing solutions elsewhere.

  • No guarantee of finding the shortest path: DFS does not necessarily find the shortest path because it does not consider all possible paths at each level before going deeper.
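
The following is a minimal Python sketch of stack-based DFS. The small adjacency-dictionary graph is an illustrative assumption.

```python
# A minimal sketch of stack-based DFS, assuming the graph is given
# as an adjacency dictionary (the graph below is illustrative).
def dfs(graph, start, goal):
    """Return a path from start to goal using depth-first search."""
    stack = [(start, [start])]          # LIFO frontier: (node, path so far)
    visited = set()
    while stack:
        node, path = stack.pop()        # take the most recently added node
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                stack.append((neighbor, path + [neighbor]))
    return None                          # no path found

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": ["G"], "G": []}
print(dfs(graph, "A", "G"))              # e.g. ['A', 'C', 'E', 'G']
```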


b. Breadth First Search (BFS)

Description: Breadth First Search (BFS) is another classic search technique where the search explores all possible nodes at the present depth level before moving on to nodes at the next depth level. This search guarantees that the solution is found by expanding the shallowest nodes first.

Working:

  • BFS uses a queue (FIFO - First In First Out) to store the nodes to be explored next. Each time a node is expanded, its children are added to the queue.

  • The process continues until a goal is found or there are no more nodes left to explore.

Pros:

  • Guaranteed to find the shortest path (in terms of number of edges) in an unweighted problem because BFS explores all nodes at a given depth before moving to the next level.

  • Complete search: If a solution exists, BFS will always find it.

Cons:

  • Space-inefficient: BFS stores all nodes at the current level in memory, meaning it can quickly use a lot of space, especially if the search space is large.

  • Performance: If the state space has many branches, BFS may explore a significant number of irrelevant nodes before reaching the goal.
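
Here is a minimal Python sketch of BFS, using the same adjacency-dictionary convention as the DFS example above.

```python
# A minimal sketch of queue-based BFS over an illustrative adjacency dictionary.
from collections import deque

def bfs(graph, start, goal):
    """Return a shortest path (fewest edges) from start to goal."""
    queue = deque([(start, [start])])   # FIFO frontier: (node, path so far)
    visited = {start}
    while queue:
        node, path = queue.popleft()    # take the shallowest node
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, path + [neighbor]))
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": ["G"], "E": ["G"], "G": []}
print(bfs(graph, "A", "G"))              # ['A', 'B', 'D', 'G'] (3 edges)
```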


c. Depth Limited Search (DLS)

Description: Depth Limited Search (DLS) is a modification of Depth First Search (DFS) where the depth of the search is limited to a fixed value, preventing the search from going deeper than a predefined depth limit.

Working:

  • DLS uses the same idea as DFS, but with a constraint on the maximum depth the search can go.

  • If the depth of a node exceeds the limit, the search backtracks and tries other unexplored paths.

  • It is a more controlled version of DFS, ensuring that it doesn’t get stuck in deep or infinite branches.

Pros:

  • Prevents infinite loops: It avoids the problem of getting stuck in infinite loops by setting a depth limit.

  • Useful for large search spaces: In large or infinite spaces, DLS can be more efficient than DFS since it limits the depth of search.

Cons:

  • May miss solutions beyond the depth limit: If the goal is located beyond the set depth, DLS won’t find it. This is particularly problematic in dynamic or large search spaces where the goal's location is uncertain.

  • Not complete: Unlike BFS, DLS does not guarantee a solution if one exists beyond the depth limit.
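
A minimal recursive Python sketch of DLS follows; the graph and the depth limits are illustrative.

```python
# A minimal sketch of recursive depth-limited search; `limit` caps
# how many edges deeper the recursion may go, as described above.
def dls(graph, node, goal, limit, path=None):
    """Return a path to goal within `limit` edges of `node`, else None."""
    if path is None:
        path = [node]
    if node == goal:
        return path
    if limit == 0:                       # depth limit reached: backtrack
        return None
    for neighbor in graph.get(node, []):
        if neighbor not in path:         # avoid cycles along this path
            result = dls(graph, neighbor, goal, limit - 1, path + [neighbor])
            if result is not None:
                return result
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": ["G"], "G": []}
print(dls(graph, "A", "G", limit=2))     # None: the goal lies at depth 3
print(dls(graph, "A", "G", limit=3))     # ['A', 'B', 'D', 'G']
```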


d. Iterative Deepening Search (IDS)

Description: Iterative Deepening Search (IDS) is a hybrid search method that combines the space efficiency of DFS with the completeness of BFS. IDS performs DFS with increasing depth limits until a solution is found.

Working:

  • IDS starts with a depth limit of 0, performs a depth-limited DFS, and increments the depth limit after each iteration.

  • It continues this process until a solution is found. Each new iteration starts a new DFS from the root, but the depth limit is increased each time.

Pros:

  • Combines space efficiency of DFS with completeness of BFS: IDS doesn’t require much memory since it’s based on DFS but guarantees a solution by eventually exploring all depths.

  • Avoids the need for infinite memory: Unlike BFS, it doesn't store all nodes at each depth level, making it more efficient in terms of memory.

Cons:

  • Redundant exploration: each new iteration re-expands all the nodes visited in previous iterations, repeating work near the root of the tree.

  • May be slower than BFS for shallow solutions: because of this repeated re-expansion, IDS can take longer than BFS when the goal lies at a shallow depth.
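
A minimal Python sketch of IDS is shown below, repeating the depth-limited search from the previous example so the block is self-contained.

```python
# A minimal sketch of iterative deepening built on depth-limited DFS
# (repeated here so the example is self-contained).
def dls(graph, node, goal, limit, path=None):
    if path is None:
        path = [node]
    if node == goal:
        return path
    if limit == 0:
        return None
    for neighbor in graph.get(node, []):
        if neighbor not in path:
            result = dls(graph, neighbor, goal, limit - 1, path + [neighbor])
            if result is not None:
                return result
    return None

def ids(graph, start, goal, max_depth=50):
    """Run depth-limited DFS with limits 0, 1, 2, ... until goal is found."""
    for limit in range(max_depth + 1):
        result = dls(graph, start, goal, limit)
        if result is not None:
            return result                # found at the shallowest possible depth
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": ["G"], "G": []}
print(ids(graph, "A", "G"))              # ['A', 'B', 'D', 'G']
```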


e. Bidirectional Search

Description: Bidirectional Search is an optimization technique that performs two simultaneous searches: one from the initial state and one from the goal state. The two searches meet somewhere in the middle.

Working:

  • One search starts from the initial state and explores forward, while the second search starts from the goal state and explores backward.

  • When the two searches meet, the solution is found, combining the results from both sides.

Pros:

  • Faster than BFS: each search only needs to reach about half the solution depth, so the number of nodes explored drops from roughly b^d to about 2b^(d/2) for branching factor b and solution depth d. This is especially beneficial for large problems.

Cons:

  • Requires the problem to be reversible: the backward search must be able to generate the predecessors of a state, so actions must be invertible for the search to progress both forward and backward.

  • Goal state must be known: Bidirectional search requires that the goal state be predefined or known before searching.
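
A minimal Python sketch of bidirectional BFS on an undirected graph follows. For brevity it returns only the node where the two frontiers meet; a full implementation would keep parent pointers on both sides and splice the two half-paths together.

```python
# A minimal sketch of bidirectional BFS on an undirected graph,
# alternating one expansion step from each side until the frontiers meet.
from collections import deque

def bidirectional_search(graph, start, goal):
    """Return a meeting node reachable from both start and goal, or None."""
    if start == goal:
        return start
    frontier_f, frontier_b = deque([start]), deque([goal])
    visited_f, visited_b = {start}, {goal}
    while frontier_f and frontier_b:
        # Expand one node on the forward side.
        node = frontier_f.popleft()
        for neighbor in graph.get(node, []):
            if neighbor in visited_b:            # the two searches meet
                return neighbor
            if neighbor not in visited_f:
                visited_f.add(neighbor)
                frontier_f.append(neighbor)
        # Expand one node on the backward side.
        node = frontier_b.popleft()
        for neighbor in graph.get(node, []):
            if neighbor in visited_f:
                return neighbor
            if neighbor not in visited_b:
                visited_b.add(neighbor)
                frontier_b.append(neighbor)
    return None

graph = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(bidirectional_search(graph, "A", "D"))     # a middle node, e.g. 'C'
```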

Uninformed search techniques are fundamental in AI search algorithms but can be inefficient in terms of time and space. While methods like DFS and BFS are easy to implement, they may not always be the most efficient in handling large or complex search spaces. Techniques like IDS and Bidirectional Search help improve efficiency by combining different search strategies. However, these methods are still limited by the problem's size and the search space, and they may require enhancements like heuristics to further optimize their performance in real-world applications.


7. Informed Search Techniques

Informed search techniques use domain-specific knowledge (heuristics) to guide the search toward the goal more efficiently. These techniques are more advanced than uninformed search and aim to reduce the search space, improving performance.


a. Greedy Best-First Search

Greedy best-first search selects the node that appears to be closest to the goal, based solely on a heuristic function h(n), which estimates the cost to the goal from the current node.

  • Working:

    • It evaluates nodes based on the heuristic value h(n) and always expands the node with the smallest h(n).

    • Does not consider the cost of the path so far (unlike A* search).

    • Often used in solving problems like pathfinding in graphs.

  • Pros:

    • Can be faster than uninformed search as it uses heuristic knowledge to guide the search directly toward the goal.

    • Efficient for problems where the heuristic is well-suited to the domain.

  • Cons:

    • Not guaranteed to find the optimal solution.

    • Can get stuck in local minima (a solution that appears to be the best locally but is not optimal globally).

    • Susceptible to choosing suboptimal paths due to lack of path cost consideration.
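
A minimal Python sketch of greedy best-first search is given below; the heuristic values h(n) are illustrative assumptions, not derived from real distances.

```python
# A minimal sketch of greedy best-first search; the heuristic values
# h(n) below are illustrative assumptions.
import heapq

def greedy_best_first(graph, h, start, goal):
    """Always expand the frontier node with the smallest h(n)."""
    frontier = [(h[start], start, [start])]   # priority queue ordered by h(n)
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier,
                               (h[neighbor], neighbor, path + [neighbor]))
    return None

graph = {"A": ["B", "C"], "B": ["G"], "C": ["G"], "G": []}
h = {"A": 3, "B": 2, "C": 1, "G": 0}          # estimated cost to goal
print(greedy_best_first(graph, h, "A", "G"))  # ['A', 'C', 'G']
```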


b. A* Search

A* search uses a combination of the cost to reach the node (g(n)) and a heuristic estimate of the cost to the goal (h(n)) to evaluate nodes. It uses the evaluation function:

f(n) = g(n) + h(n)

  • g(n): Cost from the start node to the current node.

  • h(n): Heuristic estimate of the cost from the current node to the goal.

Working:

  • Expands the node with the smallest f(n).

  • Combines the benefits of uniform cost search (path cost consideration) and greedy best-first search (goal-directed search).

  • Guaranteed to find the optimal solution if the heuristic h(n) is admissible (does not overestimate the cost to the goal).

  • Pros:

    • Guarantees the optimal solution if the heuristic is admissible and consistent.

    • More efficient than uninformed searches for most problems.

  • Cons:

    • Can be slow for very large search spaces, as it may explore many nodes.

    • Memory-intensive, as it stores all visited nodes in memory.
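
A minimal Python sketch of A* on a small weighted graph follows; the edge costs and heuristic values are illustrative, with h chosen to be admissible.

```python
# A minimal sketch of A* on a weighted graph; edge costs and heuristic
# values are illustrative, and h never overestimates the true cost.
import heapq

def a_star(graph, h, start, goal):
    """Expand the node with the smallest f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}                          # cheapest known g per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbor, cost in graph.get(node, []):
            new_g = g + cost                      # cost from start via this node
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                heapq.heappush(frontier,
                               (new_g + h[neighbor], new_g, neighbor,
                                path + [neighbor]))
    return None, float("inf")

graph = {"A": [("B", 1), ("C", 4)], "B": [("G", 5)], "C": [("G", 1)], "G": []}
h = {"A": 4, "B": 5, "C": 1, "G": 0}
print(a_star(graph, h, "A", "G"))                # (['A', 'C', 'G'], 5)
```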


c. Hill Climbing

Hill climbing is an iterative algorithm that always moves in the direction of the neighbor with the highest value or the steepest gradient based on an evaluation function.

  • Working:

    • Starts with an initial solution.

    • Continuously selects the neighbor with the best evaluation until no better neighbors exist.

    • Stops when it reaches a peak (local or global).

  • Types of Hill Climbing:

    • Simple Hill Climbing: Evaluates one neighbor at a time and chooses the first better neighbor.

    • Steepest-Ascent Hill Climbing: Evaluates all neighbors and chooses the best one.

  • Pros:

    • Simple to implement and computationally efficient.

    • Works well for problems where the evaluation function is smooth and has few local optima.

  • Cons:

    • May get stuck in local optima.

    • Prone to plateauing (flat areas with no improvement).

    • Does not consider previously visited states, leading to inefficiency in some cases.
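
A minimal Python sketch of steepest-ascent hill climbing on a toy one-dimensional objective is shown below; the objective function and neighborhood are assumptions made for illustration.

```python
# A minimal sketch of steepest-ascent hill climbing on a toy
# one-dimensional objective; function and neighborhood are assumptions.
def hill_climb(evaluate, start, neighbors):
    """Move to the best neighbor until no neighbor improves the score."""
    current = start
    while True:
        best = max(neighbors(current), key=evaluate)
        if evaluate(best) <= evaluate(current):
            return current                 # local (or global) peak reached
        current = best

# Toy objective with a single peak at x = 3.
evaluate = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]       # step left or right by 1
print(hill_climb(evaluate, start=0, neighbors=neighbors))   # 3
```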


d. Simulated Annealing

Simulated annealing is a probabilistic search technique inspired by the annealing process in metallurgy. It aims to avoid local minima by allowing occasional moves to worse solutions, especially in the early stages of the search.

  • Working:

    • Starts with an initial solution and a high "temperature."

    • Gradually reduces the temperature over time.

    • At higher temperatures, the algorithm allows worse moves with higher probability.

    • As the temperature decreases, the algorithm becomes more selective, eventually converging on a solution.

  • Key Components:

    • Temperature: Controls the probability of accepting worse solutions.

    • Cooling Schedule: Determines how the temperature decreases over iterations.

  • Pros:

    • Can escape local minima by exploring worse solutions early in the process.

    • Suitable for problems with many local optima.

  • Cons:

    • Computationally slow, especially for large problems.

    • Requires careful tuning of parameters like the initial temperature and cooling schedule.
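
Below is a minimal Python sketch of simulated annealing on the same toy objective used in the hill-climbing example; the initial temperature and geometric cooling rate are assumptions that would need tuning in practice.

```python
# A minimal sketch of simulated annealing on a toy objective;
# t0 and cooling are assumed parameters that would need tuning.
import math
import random

def simulated_annealing(evaluate, start, neighbor,
                        t0=10.0, cooling=0.95, steps=1000):
    current = start
    temperature = t0
    for _ in range(steps):
        candidate = neighbor(current)
        delta = evaluate(candidate) - evaluate(current)
        # Always accept improvements; accept worse moves with
        # probability exp(delta / T), which shrinks as T cools.
        if delta > 0 or random.random() < math.exp(delta / temperature):
            current = candidate
        temperature *= cooling            # geometric cooling schedule
    return current

evaluate = lambda x: -(x - 3) ** 2
neighbor = lambda x: x + random.choice([-1, 1])
print(simulated_annealing(evaluate, start=0, neighbor=neighbor))  # usually 3
```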



8. Game Playing and Adversarial Search

Game playing in artificial intelligence involves developing strategies and techniques to enable machines to play competitive games like chess, checkers, or go. These games are characterized by opponents, where each player’s goal is to maximize their own chances of winning while minimizing their opponent's. Such games are often modeled using adversarial search techniques.


a. Mini-Max Search

  • What is it? Mini-max is a decision-making algorithm used in zero-sum games, where one player’s gain is equivalent to the other player’s loss. The algorithm assumes that both players play optimally.

  • How it works:

    1. The game is represented as a tree, where each node corresponds to a game state, and the edges represent possible moves.

    2. The maximizing player (e.g., Player A) aims to choose a move that maximizes their score.

    3. The minimizing player (e.g., Player B) aims to counteract by minimizing Player A’s score.

    4. The algorithm works recursively, propagating the scores from the leaf nodes (endgame outcomes) to the root, where the best move is chosen.

  • Pros: Guarantees optimal moves in zero-sum games if both players play optimally.

  • Cons: Computationally expensive for deep game trees.
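
To illustrate, here is a minimal Python sketch of minimax over an explicit game tree, represented as nested lists whose leaves are terminal scores (an assumption made for brevity; a real game would generate moves on the fly).

```python
# A minimal sketch of minimax over an explicit game tree given as
# nested lists whose leaves are terminal scores (an assumption).
def minimax(node, maximizing):
    """Return the optimal score for the player to move at `node`."""
    if isinstance(node, (int, float)):     # leaf: endgame outcome
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Depth-2 tree: MAX to move at the root, MIN at the two inner nodes.
tree = [[3, 5], [2, 9]]
print(minimax(tree, maximizing=True))      # min(3,5)=3 vs min(2,9)=2 -> 3
```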

b. Alpha-Beta Pruning

  • What is it? Alpha-beta pruning is an optimization technique for Mini-Max Search that eliminates unnecessary evaluations of nodes in the search tree, improving efficiency. It determines whether exploring a particular branch is worth the computational effort.

  • How it works:

    1. Two parameters are maintained:

      • Alpha: The best value found so far for the maximizer.

      • Beta: The best value found so far for the minimizer.

    2. During the search, if a node’s value shows that its branch cannot influence the final decision (i.e., alpha ≥ beta), that branch is pruned (skipped).

    3. This reduces the number of nodes that need to be evaluated, making the search faster.

  • Pros: Significantly reduces the number of nodes evaluated, improving efficiency.

  • Cons: Still requires a good heuristic to maximize its effectiveness.
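
A minimal Python sketch of alpha-beta pruning on the same nested-list game-tree representation as the minimax example is shown below; note that the leaf 9 is never evaluated.

```python
# A minimal sketch of minimax with alpha-beta pruning on the same
# nested-list game trees used in the minimax example.
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Minimax value of `node`, skipping branches that cannot matter."""
    if isinstance(node, (int, float)):       # leaf: endgame outcome
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)        # best guaranteed for maximizer
            if alpha >= beta:
                break                        # prune: minimizer avoids this branch
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)          # best guaranteed for minimizer
            if alpha >= beta:
                break                        # prune
        return value

tree = [[3, 5], [2, 9]]                      # the leaf 9 is never visited
print(alphabeta(tree, maximizing=True))      # 3
```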


Conclusion

AI problem-solving and search techniques are foundational for intelligent systems. Uninformed search techniques are useful when no domain knowledge is available, whereas informed search techniques, such as A* and greedy best-first search, leverage heuristics to improve search efficiency. Game-playing strategies, including minimax and alpha-beta pruning, allow agents to make optimal decisions in competitive environments.
