sTriangulation.neighbour_simplices() method is expensive #86
For many of the tasks we use stripy for, it is not necessary to navigate the mesh but rather to identify nearby nodes / centroids, and for this we generally use the k-d tree interface. We are entirely aware that this is not the same thing as identifying the natural neighbours or the neighbour simplices, but it certainly does not suffer the same issues as finding points using the underlying Fortran routines. Also, other than conversion to f90, we left the TRMESH etc. codes alone, and any other functionality we added was through the Python-level interface. We are happy to consider pull requests for any enhancements, of course.
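For reference, here is a minimal sketch of this kind of k-d tree lookup, built with scipy.spatial.cKDTree directly on the triangulation's (x, y, z) node positions rather than through any stripy-specific helper. The small octahedral point set and the variable names (`tri`, `queries`) are purely illustrative:

```python
import numpy as np
import scipy.spatial
import stripy

# Illustrative octahedral point set (lon/lat in radians); in practice `tri`
# would be whatever sTriangulation the application already holds.
lons = np.radians([0.0, 90.0, 0.0, 180.0, 270.0, 0.0])
lats = np.radians([0.0, 0.0, 90.0, 0.0, 0.0, -90.0])
tri = stripy.sTriangulation(lons, lats)

# Build a k-d tree on the node positions, then find the nearest nodes to
# arbitrary query points on the unit sphere.
tree = scipy.spatial.cKDTree(tri.points)
queries = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
distances, node_indices = tree.query(queries, k=3)
```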
In my experiments, the Fortran subroutine trlist2() is fast for relatively small meshes (thousands or tens of thousands of elements), but suffers from the scaling problem mentioned above for larger meshes. Would you consider adding an option to use a k-d tree to perform the same task on larger meshes? That approach seems to have better scaling properties, although it is slower for small meshes. The following is a quick attempt at what this might look like. It works iteratively, applying a larger and larger search area on each iteration until all neighbours have been found. One would still need to sort the neighbour array at the end, but this is relatively cheap as it can be done in a single pass through the neighbour array (a possible sketch of that step follows the code).
```python
import numpy as np
import scipy.spatial


class sTriangulation(object):
    ...

    def neighbour_simplices_python(self, iterations=10):
        """
        Find the three neighbour simplices of every simplex by querying a
        k-d tree built on the simplex centroids, examining progressively
        more candidates on each iteration.
        """
        n_simplices = self.simplices.shape[0]
        k_values = [4**n for n in range(1, iterations) if 4**n < n_simplices]
        k_values.append(n_simplices)

        # Compute element centroids and project them onto the unit sphere
        mids = self.points[self.simplices].mean(axis=1)
        mids /= np.linalg.norm(mids, axis=1).reshape(-1, 1)

        # Form KDTree on the centroids
        tree = scipy.spatial.cKDTree(mids)

        # Flag identifying whether all neighbours have been found for each element
        found = np.zeros(n_simplices)

        # Neighbour array
        nbe = -1 * np.ones_like(self.simplices)

        # Iteratively find all neighbours for all elements.
        for k in k_values:
            # Only re-query the elements whose neighbours are still missing
            simplex_indices = np.asarray(found == 0).nonzero()[0]
            mids_subset = mids[simplex_indices]
            _, nbe_tree = tree.query(mids_subset, k=k, distance_upper_bound=2.0)
            n_simplices_subset = simplex_indices.shape[0]

            for i in range(n_simplices_subset):
                simplex_idx = simplex_indices[i]
                counter = 0
                for j in range(k):
                    neighbour_idx = nbe_tree[i, j]
                    # cKDTree pads missing neighbours with index == n_simplices
                    if neighbour_idx == n_simplices:
                        continue
                    # A neighbour shares exactly two vertices (one edge)
                    if np.in1d(self.simplices[simplex_idx, :],
                               self.simplices[neighbour_idx, :]).sum() == 2:
                        nbe[simplex_idx, counter] = neighbour_idx
                        counter += 1
                        # Check if all neighbours were found
                        if counter == 3:
                            found[simplex_idx] = 1
                            break

            # print('Found {} %'.format(100 * np.count_nonzero(found) / n_simplices))
            if np.count_nonzero(found) == n_simplices:
                return nbe

        return nbe
```
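For completeness, here is one possible sketch of the single-pass reordering step mentioned above. It assumes the convention (as in scipy.spatial.Delaunay.neighbors) that column j of the neighbour array should hold the simplex opposite vertex j; whether trlist2() uses exactly this convention would need to be checked:

```python
import numpy as np

def sort_neighbour_array(simplices, nbe):
    """Place each neighbour in the column opposite the vertex it does not
    share with its simplex, in a single pass over the neighbour array."""
    nbe_sorted = -1 * np.ones_like(nbe)
    for i in range(nbe.shape[0]):
        for m in nbe[i]:
            if m < 0:
                continue  # neighbour not found (should not occur on a sphere)
            # The one vertex of simplex i that simplex m does not contain
            # is the vertex this neighbour sits opposite to.
            opposite = np.flatnonzero(~np.in1d(simplices[i], simplices[m]))[0]
            nbe_sorted[i, opposite] = m
    return nbe_sorted
```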
From some tests I have done, the time taken to identify neighbour simplices in a spherical triangulation scales with N^2, where N is the number of simplices. This seems to be consistent with the underlying Fortran code, which contains two nested loops over the element set. Have you thought about ways in which the efficiency of this operation could be improved, for example by exploiting aspects of the geometry or by using threading to perform the task in parallel?
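As an aside (not part of the original report), one common way to avoid the nested loops altogether is to exploit the fact that every edge of a spherical triangulation is shared by exactly two simplices: a hash map keyed by sorted vertex pairs identifies all neighbour pairs in a single pass over the elements, i.e. in roughly O(N) time. A rough sketch:

```python
import numpy as np
from collections import defaultdict

def neighbour_simplices_edge_map(simplices):
    """Sketch: map each edge (sorted vertex pair) to the simplices that
    contain it, then pair up the two simplices sharing each edge."""
    edge_map = defaultdict(list)
    for s, (a, b, c) in enumerate(simplices):
        for edge in ((a, b), (b, c), (a, c)):
            edge_map[tuple(sorted(edge))].append(s)

    nbe = -1 * np.ones_like(simplices)
    counts = np.zeros(len(simplices), dtype=int)
    for members in edge_map.values():
        if len(members) == 2:  # on a sphere every edge is shared by two simplices
            s0, s1 = members
            nbe[s0, counts[s0]] = s1
            nbe[s1, counts[s1]] = s0
            counts[s0] += 1
            counts[s1] += 1
    return nbe
```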