Lab 7 Solutions
Optimizing Paths with Intelligent Guesses
1. Best-First Search: The Nearest Apple
Problem Scenario:
Spider-Man needs to find the minimum number of steps to reach the nearest apple cart ('A') from his starting position ('S') on a city grid. The grid contains obstacles (buildings represented by '#'). He can move up, down, left, or right.
Sample Input
Grid:
[
['S', '.', '.', '#', '.', '.'],
['.', '#', '.', '#', '.', '#'],
['.', '#', '.', '.', '.', '.'],
['.', '.', '#', '#', '.', '.'],
['#', '.', '#', 'A', '.', '#']
]
Expected Output
Minimum steps required: 9
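(For reference, one valid 9-step route is (0,0) → (0,1) → (0,2) → (1,2) → (2,2) → (2,3) → (2,4) → (3,4) → (4,4) → (4,3).)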
# Define a helper function to calculate Manhattan distance (our heuristic)
def manhattan_distance(p1, p2):
    # p1 and p2 are tuples like (row, col)
    # Calculate and return the sum of the absolute differences of the row and column coordinates

# Main search function
def best_first_search(grid):
    # Find start ('S') and apple ('A') positions
    # Initialize a priority queue to store (heuristic_cost, steps, current_position)
    # The priority queue will automatically sort by the first element (heuristic_cost)
    # Initialize a 'visited' set to keep track of visited cells
    # Add the starting position to the priority queue with 0 steps

    # Loop while the priority queue is not empty:
        # Get the cell with the smallest heuristic cost from the queue
        # If this cell is the apple, return the number of steps taken
        # If this cell has been visited, continue to the next iteration
        # Mark the current cell as visited
        # Explore neighbors (up, down, left, right):
            # For each neighbor, check if it's a valid move (within grid, not an obstacle)
            # If valid, calculate its heuristic cost to the apple
            # Add (heuristic_cost, new_steps, neighbor_position) to the priority queue

    # If the loop finishes and the apple wasn't found, return -1
This solution uses a priority queue (implemented with Python's `heapq` module), which is the core of Best-First Search.
1. Heuristic: We use the Manhattan distance `(|x1 - x2| + |y1 - y2|)` as our heuristic. For four-directional grid movement it equals the true distance when there are no obstacles, and with obstacles it can only underestimate the remaining distance.
2. Priority Queue: We store tuples in the format `(heuristic_cost, steps, position)`. The queue always keeps the entry with the lowest `heuristic_cost` at the front, so we explore the most promising cell next (the short `heapq` sketch after this list shows how the tuples are ordered).
3. Search Process: The algorithm repeatedly extracts the "closest" cell (by heuristic) from the queue, explores its valid neighbors, calculates their heuristics, and adds them to the queue. A `visited` set prevents us from getting stuck in loops.
4. Goal: When the cell extracted from the queue is the apple's location, the search is done and `steps` holds the length of the path that was followed. Because the priority is the heuristic alone and ignores the steps already taken, this greedy approach is fast, but it is not guaranteed to return the fewest steps on every grid; on this grid it does, giving 9.
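To see how the queue in point 2 orders its entries, here is a minimal sketch (illustration only, not part of the required solution) that pushes a few `(heuristic_cost, steps, position)` tuples for cells from the sample grid, where the start is (0, 0) and the apple is at (4, 3). `heapq` compares tuples element by element, so the smallest heuristic is always popped first.

import heapq

pq = []
heapq.heappush(pq, (7, 0, (0, 0)))  # start cell: Manhattan distance 4 + 3 = 7
heapq.heappush(pq, (6, 1, (0, 1)))  # neighbor one step right, distance 6
heapq.heappush(pq, (6, 1, (1, 0)))  # neighbor one step down, distance 6

print(heapq.heappop(pq))  # (6, 1, (0, 1))  -- ties broken by steps, then position
print(heapq.heappop(pq))  # (6, 1, (1, 0))
print(heapq.heappop(pq))  # (7, 0, (0, 0))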
import heapq

def solve_best_first_search():
    grid = [
        ['S', '.', '.', '#', '.', '.'],
        ['.', '#', '.', '#', '.', '#'],
        ['.', '#', '.', '.', '.', '.'],
        ['.', '.', '#', '#', '.', '.'],
        ['#', '.', '#', 'A', '.', '#']
    ]

    def manhattan_distance(p1, p2):
        return abs(p1[0] - p2[0]) + abs(p1[1] - p2[1])

    rows, cols = len(grid), len(grid[0])
    start_pos, apple_pos = None, None
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 'S':
                start_pos = (r, c)
            elif grid[r][c] == 'A':
                apple_pos = (r, c)

    if not start_pos or not apple_pos:
        return "Start or Apple not found"

    # Priority queue: (heuristic_cost, steps, position_tuple)
    pq = [(manhattan_distance(start_pos, apple_pos), 0, start_pos)]
    visited = set()

    while pq:
        heuristic, steps, current_pos = heapq.heappop(pq)
        if current_pos == apple_pos:
            return f"Minimum steps required: {steps}"
        if current_pos in visited:
            continue
        visited.add(current_pos)

        r, c = current_pos
        # Explore neighbors: right, left, down, up
        for dr, dc in [(0, 1), (0, -1), (1, 0), (-1, 0)]:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#' and (nr, nc) not in visited:
                new_steps = steps + 1
                new_heuristic = manhattan_distance((nr, nc), apple_pos)
                heapq.heappush(pq, (new_heuristic, new_steps, (nr, nc)))

    return "Path not found"
# To test:
# print(solve_best_first_search())
2. Beam Search: Tracking Villains
Problem Scenario:
Spider-Man is tracking multiple leads to find a villain's lair. He starts at a location ('A') and explores connected locations (nodes). To be efficient, he uses Beam Search with a beam width of 2. This means at each level of his search, he only considers the 2 most promising paths based on heuristic values (estimated distance to the goal).
Sample Input
Graph Heuristics (estimated distance to Goal 'G'):
A: 10, B: 8, C: 9, D: 6, E: 4, F: 5, G: 0
Connections:
A -> B, C
B -> D, E
C -> F, G
Beam Width (β): 2
Start Node: 'A'
Goal Node: 'G'
Expected Output
Path Found: ['A', 'C', 'G']
(Note: With β = 2 nothing is pruned at the first level, because 'A' has only two successors, B and C. At the second level the candidates D(6), E(4), F(5) and G(0) are cut down to the best two, G and E, and the path A -> C -> G is returned. Beam search is still not guaranteed to be optimal, or even to find a path at all: with β = 1 the route through C would be pruned at the first level because B's heuristic (8) is lower than C's (9), and the goal would never be reached.)
# Main beam search function
def beam_search(graph, heuristics, start, goal, beam_width):
    # Initialize the beam with the starting path: [(heuristic, [path_nodes])]
    # For the start node, this would be [(heuristics[start], [start])]

    # Loop until the goal is found or the beam becomes empty
        # Create a list to store all possible next paths (candidates)

        # For each path in the current beam:
            # Get the last node of the path
            # If the last node is the goal, return this path
            # For each neighbor of the last node:
                # Create a new path by extending the current path
                # Calculate the heuristic of the neighbor
                # Add (heuristic, new_path) to the candidates list

        # If no candidates were generated, the search fails

        # Sort the candidates based on their heuristic values (ascending)
        # Select the top 'beam_width' candidates to form the new beam
        # This is the "pruning" step

    # If the loop finishes, no path was found
Beam Search is a heuristic search algorithm that balances thoroughness and efficiency.
1. Beam Width (β): This is the key parameter. It defines the maximum number of paths kept at each level of the search. A small beam width is faster but risks pruning the very path that leads to the goal (the short experiment after the solution code shows this with β = 1); a large beam width is more thorough but slower.
2. Level-by-Level Search: The search proceeds in levels. At each level, it generates all possible next steps from the current set of paths (the "beam").
3. Sorting and Pruning: All these potential next paths (candidates) are then sorted based on their heuristic value. The algorithm "prunes" the list by keeping only the top `β` candidates. This new, smaller set becomes the beam for the next level.
4. In this example (β=2): From 'A' the successors are 'B' (heuristic 8) and 'C' (heuristic 9); with only two candidates nothing is pruned and both are kept. From 'B' and 'C' the next candidates are 'D'(6), 'E'(4), 'F'(5) and 'G'(0). Sorting gives G, E, F, D, and only the top 2 survive: the paths ending in 'G' and 'E'. On the next pass the path ending in 'G' is examined first, its last node is the goal, and ['A', 'C', 'G'] is returned, as the worked pruning step below shows.
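The snippet below (illustration only) replays that second-level pruning step with the candidate list written out by hand; it is the same sort-and-slice that the full solution performs on every iteration.

# Candidates generated at the second level: (heuristic, path)
candidates = [
    (6, ['A', 'B', 'D']),
    (4, ['A', 'B', 'E']),
    (5, ['A', 'C', 'F']),
    (0, ['A', 'C', 'G']),
]

beam_width = 2
candidates.sort(key=lambda x: x[0])  # order: G(0), E(4), F(5), D(6)
beam = candidates[:beam_width]       # pruning: keep only the best two paths
print(beam)  # [(0, ['A', 'C', 'G']), (4, ['A', 'B', 'E'])]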
def solve_beam_search():
    graph = {
        'A': ['B', 'C'],
        'B': ['D', 'E'],
        'C': ['F', 'G'],
        'D': [], 'E': [], 'F': [], 'G': []
    }
    heuristics = {'A': 10, 'B': 8, 'C': 9, 'D': 6, 'E': 4, 'F': 5, 'G': 0}
    start = 'A'
    goal = 'G'
    beam_width = 2

    # The beam stores tuples of (heuristic, path_list)
    beam = [(heuristics[start], [start])]

    while beam:
        candidates = []
        # Generate all possible next steps from current beam
        for h, path in beam:
            last_node = path[-1]
            if last_node == goal:
                return f"Path Found: {path}"
            for neighbor in graph[last_node]:
                new_path = path + [neighbor]
                new_heuristic = heuristics[neighbor]
                candidates.append((new_heuristic, new_path))

        if not candidates:
            break

        # Sort all candidates by heuristic
        candidates.sort(key=lambda x: x[0])
        # Prune the candidates to the beam width
        beam = candidates[:beam_width]

    return "Path not found"
# To test:
# print(solve_beam_search())
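To make the beam-width trade-off concrete, here is a small extra experiment (not part of the required solution) that repeats just the first level of the search with β = 1. The only route to 'G' runs through 'C', but 'C' is pruned immediately because 'B' has the lower heuristic, so a β = 1 search on this graph can never reach the goal.

heuristics = {'A': 10, 'B': 8, 'C': 9, 'D': 6, 'E': 4, 'F': 5, 'G': 0}

# Level 1: candidate paths reachable from 'A'
candidates = [(heuristics['B'], ['A', 'B']), (heuristics['C'], ['A', 'C'])]

beam_width = 1
candidates.sort(key=lambda x: x[0])
beam = candidates[:beam_width]
print(beam)  # [(8, ['A', 'B'])]

# The beam now holds only the path through 'B'. From 'B' the search can reach
# only the dead ends 'D' and 'E', so with beam_width = 1 the goal 'G' is never
# found, while beam_width = 2 keeps 'C' and finds ['A', 'C', 'G'].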