tsp.py
from functools import lru_cache
from typing import List, Tuple

import numpy as np


def dynamic_programming(
    distance_matrix: np.ndarray,
) -> Tuple[List[int], float]:
    """
    Solve TSP to optimality with dynamic programming.

    Parameters
    ----------
    distance_matrix
        Distance matrix of shape (n x n) with the (i, j) entry indicating the
        distance from node i to j. It does not need to be symmetric.

    Returns
    -------
    permutation
        A permutation of nodes from 0 to n - 1 that produces the least total
        distance.
    distance
        The total distance the optimal permutation produces.

    Notes
    -----
    Algorithm: cost of the optimal path
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Consider a TSP instance with 3 nodes: {0, 1, 2}. Let dist(0, {1, 2}) be
    the distance from 0, visiting all nodes in {1, 2} and going back to 0.
    This can be computed recursively as:

        dist(0, {1, 2}) = min(
            c_{0, 1} + dist(1, {2}),
            c_{0, 2} + dist(2, {1}),
        )

    wherein c_{0, 1} is the cost of going from 0 to 1 in the distance matrix.
    The inner dist(1, {2}) is computed as:

        dist(1, {2}) = min(
            c_{1, 2} + dist(2, {}),
        )

    and similarly for dist(2, {1}). The stopping point in the recursion is:

        dist(2, {}) = c_{2, 0}

    This process can be generalized as:

        dist(ni, N) = min_{nj in N} ( c_{ni, nj} + dist(nj, N - {nj}) )

    and

        dist(ni, {}) = c_{ni, 0}

    with starting point dist(0, {1, 2, ..., tsp_size}). The notation
    N - {nj} is the set difference, meaning set N without node nj.

    Algorithm: compute the optimal path
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    The previous process returns the distance of the optimal path. To find
    the actual path, we need to store in a memo the following key/values:

        memo[(ni, N)] = nj_min

    with nj_min the node in N that provided the smallest value of
    dist(ni, N). Then, the process goes backwards starting from
    memo[(0, {1, 2, ..., tsp_size})].

    In the previous example, suppose memo[(0, {1, 2})] = 1. Then, look for
    memo[(1, {2})] = 2. Then, since the next step would be memo[(2, {})],
    stop there. The optimal path would be 0 -> 1 -> 2 -> 0.

    References
    ----------
    https://en.wikipedia.org/wiki/Held%E2%80%93Karp_algorithm
    """
    N = frozenset(range(1, distance_matrix.shape[0]))
    memo = {}

    # Step 1: get minimum distance. Caching the (ni, N) subproblems is what
    # turns the plain recursion into the O(n^2 * 2^n) Held-Karp dynamic
    # program; without it each subproblem is recomputed exponentially often.
    @lru_cache(maxsize=None)
    def dist(ni, N):
        if not N:
            return distance_matrix[ni, 0]

        # Store the costs in the form (nj, dist(nj, N))
        costs = [
            (nj, distance_matrix[ni, nj] + dist(nj, N.difference({nj})))
            for nj in N
        ]
        nmin, min_cost = min(costs, key=lambda x: x[1])
        memo[(ni, N)] = nmin
        return min_cost

    best_distance = dist(0, N)

    # Step 2: get the path with the minimum distance
    ni = 0  # start at the origin
    solution = [0]

    while N:
        ni = memo[(ni, N)]
        solution.append(ni)
        N = N.difference({ni})

    return solution, best_distance
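For a quick sanity check of the module above, the optimum on a small instance can be reproduced by brute-force enumeration of all tours. The sketch below is self-contained and illustrative only: the 4-node `distance_matrix` and the `brute_force` helper are hypothetical names, not part of this module, and the matrix is rigged so the cheap edges form the cycle 0 -> 1 -> 2 -> 3 -> 0.

```python
import itertools

import numpy as np

# Hypothetical asymmetric instance: the unit-cost edges form the
# cycle 0 -> 1 -> 2 -> 3 -> 0, so that tour is optimal with cost 4.
distance_matrix = np.array([
    [0, 1, 10, 10],
    [10, 0, 1, 10],
    [10, 10, 0, 1],
    [1, 10, 10, 0],
])


def brute_force(distance_matrix):
    """Enumerate every tour starting at node 0 and keep the cheapest."""
    n = distance_matrix.shape[0]
    best_perm, best_dist = None, float("inf")
    for tail in itertools.permutations(range(1, n)):
        tour = (0,) + tail
        # Sum the consecutive edges, closing the cycle back to node 0.
        dist = sum(
            distance_matrix[tour[i], tour[(i + 1) % n]] for i in range(n)
        )
        if dist < best_dist:
            best_perm, best_dist = list(tour), dist
    return best_perm, int(best_dist)


print(brute_force(distance_matrix))  # ([0, 1, 2, 3], 4)
```

On small instances, `dynamic_programming` should return the same permutation and distance as this O(n!) enumeration; brute force is only feasible up to roughly 10 nodes, which is exactly why the O(n^2 * 2^n) Held-Karp recursion is used instead.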