Additional NCFlex features and updates #197

Merged · 43 commits · Nov 15, 2023

Commits
f79b7b1
TST: Fixed surface test calculator
Fraser-Birks Sep 23, 2023
ff59112
ENH: Added ability to calculate gradients of Cauchy-Born corrector re…
Fraser-Birks Sep 23, 2023
8a2fd25
TST: Added tests for Cauchy-Born shift gradient
Fraser-Birks Sep 23, 2023
f19006e
ENH: Added ability to map and apply 111 diamond Pandey reconstructions
Fraser-Birks Sep 23, 2023
4e068b3
WIP: Cauchy-born sublattices are now set on unit cell before cluster
Fraser-Birks Sep 23, 2023
8c1dbea
WIP: Improved preconditioning, added shift gradients, added parallel
Fraser-Birks Sep 23, 2023
7241aad
Merge branch 'master' into ParallelNCFlex
Fraser-Birks Sep 23, 2023
fd773bd
WIP: Added parallel NCFlex script
Fraser-Birks Sep 24, 2023
541b904
WIP: Added kill confirm queue for parallelism
Fraser-Birks Sep 24, 2023
8338085
Merge branch 'adaptivecont' of github.com:libAtoms/matscipy into adap…
Fraser-Birks Sep 24, 2023
83b5621
Merge branch 'adaptivecont' into ParallelNCFlex
Fraser-Birks Sep 24, 2023
10aa552
BUG: Changed default of cb to not be None due to errors with parameter
Fraser-Birks Sep 26, 2023
c5ba684
WIP: Added debugging print statements for alpha_range
Fraser-Birks Sep 26, 2023
a4eae34
BUG: Fixed bug to ensure correct behaviour of alpha period finder wh…
Fraser-Birks Sep 26, 2023
1d036e9
BUG: Fixed crash when searching for new K, alpha when no value yet
Fraser-Birks Sep 26, 2023
eb581b7
BUG: Fixed bug with xmask not defined
Fraser-Birks Sep 27, 2023
822e613
BUG: Fixed divide by 0 error if no K values available
Fraser-Birks Sep 27, 2023
471cd78
WIP: Added functions for building 3D Sinclair kink-crack geometries
Fraser-Birks Oct 3, 2023
5ecb44c
WIP: added explore direction choice
Fraser-Birks Oct 6, 2023
77148dd
WIP: Improved functionality for mapping 2x1 111 reconstructions and a…
Fraser-Birks Oct 9, 2023
110a436
WIP: Added functionality to create kink-periodic fracture cells for
Fraser-Birks Oct 9, 2023
8680f8f
WIP: Added parallel script that allows for measurement of simultaneous
Fraser-Birks Oct 10, 2023
7072308
BUG: Fixed bug where occasionally manager would receive message from
Fraser-Birks Oct 10, 2023
8c303e9
ENH: Added ability to map 3x1 surface reconstructions on 110 silicon
Fraser-Birks Oct 25, 2023
18cd409
WIP: Allowed control over alpha backtracking
Fraser-Birks Oct 25, 2023
17bf399
ENH: Added feature to allow alpha backtracking
Fraser-Birks Oct 25, 2023
fbdbe53
Merge branch 'ParallelNCFlex' into 3D_NCFlex
Fraser-Birks Oct 25, 2023
1c00d45
WIP: Added expressions and test for mode II crack fields
Fraser-Birks Oct 26, 2023
450e0f3
TST: Added more mode II unit tests
Fraser-Birks Oct 26, 2023
7ae42c4
WIP: Added option to follow a constant G contour NCFlex
Fraser-Birks Oct 31, 2023
ba6e97d
ENH: Added a stable sort based on r, theta and z for fracture clusters
Fraser-Birks Nov 2, 2023
140c411
TST: Added another iron system for testing
Fraser-Birks Nov 2, 2023
40508fa
ENH: Made modeII file reading back-compatible with old scripts
Fraser-Birks Nov 3, 2023
8b678dc
WIP: Added parallel scripts for massively parallel mode II and rI
Fraser-Birks Nov 3, 2023
980ccf2
TST: Weakened cauchy-born corrector test tol a little
Fraser-Birks Nov 3, 2023
e8dc327
Merge pull request #195 from libAtoms/MixedModeNCFlex
Fraser-Birks Nov 3, 2023
36a4689
TST: Added 3D build cell unit tests
Fraser-Birks Nov 6, 2023
45fc3fa
Merge pull request #196 from libAtoms/3D_NCFlex
Fraser-Birks Nov 6, 2023
3931d00
Merge branch 'master' into ParallelNCFlex
Fraser-Birks Nov 6, 2023
6d6463c
MAINT: PEP8 compliance
Fraser-Birks Nov 6, 2023
9b4b00e
MAINT: Fixed bug with incorrect Voigt notation and cleaned code
Fraser-Birks Nov 15, 2023
9213fc3
MAINT: Code restructuring and error catching
Fraser-Birks Nov 15, 2023
802e6a6
MAINT: Removed debugging print statements
Fraser-Birks Nov 15, 2023

505 changes: 355 additions & 150 deletions matscipy/cauchy_born.py (large diff, not rendered)

326 changes: 269 additions & 57 deletions matscipy/fracture_mechanics/clusters.py (large diff, not rendered)

1,468 changes: 921 additions & 547 deletions matscipy/fracture_mechanics/crack.py (large diff, not rendered)

1,107 changes: 1,084 additions & 23 deletions matscipy/surface_reconstruction.py (large diff, not rendered)

778 changes: 778 additions & 0 deletions scripts/fracture_mechanics/parallel_NCFlex.py (large diff, not rendered)

439 changes: 439 additions & 0 deletions scripts/fracture_mechanics/parallel_NCFlex_seperate_curves.py (large diff, not rendered)

195 changes: 195 additions & 0 deletions scripts/fracture_mechanics/parallel_mode_II_NCFlex.py
@@ -0,0 +1,195 @@
import matscipy
print(matscipy.__file__)  # confirm which matscipy installation is in use
from matscipy import parameter

from mpi4py import MPI
from matscipy.fracture_mechanics.crack import CubicCrystalCrack, SinclairCrack

from ase.units import GPa
import numpy as np

from copy import deepcopy

import h5py

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
import params
import os

import time

num_processors = parameter('num_processors')
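# This script follows a manager/worker pattern: rank 0 reads an existing
# continuation curve, partitions its points across all MPI ranks, and every
# rank (rank 0 included) then runs an independent NCFlex continuation in K_II
# from each of its assigned points.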

# read parameters in from the params file
calc = parameter('calc')
fmax = parameter('fmax', 1e-3)
max_opt_steps = parameter('max_opt_steps', 100)
max_steps = parameter('max_arc_steps', 10)
vacuum = parameter('vacuum', 10.0)
flexible = parameter('flexible', True)
continuation = parameter('continuation', False)
ds = parameter('ds', 1e-2)
nsteps = parameter('nsteps', 10)
a0 = parameter('a0') # lattice constant
k0 = parameter('k0', 1.0)
extended_far_field = parameter('extended_far_field', False)
alpha0 = parameter('alpha0', 0.0) # initial guess for crack position
dump = parameter('dump', False)
precon = parameter('precon', False)
traj_file = parameter('traj_file', 'x_traj.h5')
restart_file = parameter('restart_file', traj_file)
traj_interval = parameter('traj_interval', 1)
direction = parameter('direction', +1)
ds_max = parameter('ds_max', 0.1)
ds_min = parameter('ds_min', 1e-6)
ds_aggressiveness = parameter('ds_aggressiveness', 1.25)
opt_method = parameter('opt_method', 'krylov')
cb = parameter('cb', 'None')
rI = parameter('r_I')
rIII = parameter('r_III')
cutoff = parameter('cutoff')
dk = parameter('dk', 1e-4)
dalpha = parameter('dalpha', 1e-1)
explore_direction = parameter('explore_direction', 0)
allow_alpha_backtracking = parameter('allow_alpha_backtracking', False)
k1_curve_file = parameter('k1_curve_file')
num_data_points = parameter('num_data_points')
follow_G_contour = parameter('follow_G_contour', False)
folder_name = parameter('folder_name', 'data')
if cb == 'None':
cb = None

if cb is not None:
    if rank == 0:
        # on first use, fit and save the model instead with
        # cb.initial_regression_fit() and cb.save_regression_model()
        cb.load_regression_model()
        cb_mod = cb.get_model()
    else:
        cb_mod = 0
    # communicate the Cauchy-Born model to all other processors via MPI broadcast
    cb_mod = comm.bcast(cb_mod, root=0)
    cb.set_model(cb_mod)

cryst = params.cryst.copy()

pos = cryst.get_positions()
sx, sy, sz = cryst.cell.diagonal()
xpos = pos[:,0] - sx/2
ypos = pos[:,1] - sy/2
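
# CubicCrystalCrack provides the anisotropic linear-elastic near-tip field for
# the given crack plane and front, built from the cubic elastic constants C
# (expressed in GPa), with the optional Cauchy-Born corrector attached.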

crk = CubicCrystalCrack(parameter('crack_surface'),
                        parameter('crack_front'),
                        C=parameter('C')/GPa, cauchy_born=cb)


k1g = crk.k1g(parameter('surface_energy'))
print('Griffith k1:', k1g)
cluster = params.cluster.copy()
if crk.cauchy_born is not None:
    crk.cauchy_born.set_sublattices(cluster, np.transpose(crk.RotationMatrix),
                                    read_from_atoms=True)


sc = SinclairCrack(crk, cluster, calc, k0 * k1g,
                   alpha=alpha0,
                   vacuum=vacuum,
                   variable_alpha=flexible,
                   extended_far_field=extended_far_field, rI=rI,
                   rIII=rIII, cutoff=cutoff, incl_rI_f_alpha=True)
sc.k1g = k1g
sc.variable_alpha = True
sc.variable_k = True
sc.cont_k = 'k2'
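# continuation is carried out in K_II; K_I stays fixed unless follow_G_contour
# rescales it below to hold the energy release rate G constant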


if rank == 0:
#now read in the initial data file and start to send over data
hf = h5py.File(k1_curve_file, 'r')
x_traj = hf['x']
x = x_traj[:,:]
hf.close()

    total_data_points = np.shape(x)[0]
    # uniform subsample: keep every sample_stride-th point of the curve
    sample_stride = int(total_data_points/num_data_points)
    x = x[::sample_stride, :]

    data_points_per_proc = int(np.floor(np.shape(x)[0]/num_processors))

    # distribute any remainder one point at a time to the first few ranks
    num_data_point_array = [data_points_per_proc for i in range(num_processors)]
    sum_diff = np.shape(x)[0] - np.sum(num_data_point_array)
    for i in range(sum_diff):
        num_data_point_array[i] += 1
    time.sleep(3)
    # now communicate the data partitions to the worker ranks
    for proc_num in range(1, num_processors):
        comm.send(num_data_point_array[proc_num], dest=proc_num, tag=12)
        start = sum(num_data_point_array[:proc_num])
        end = sum(num_data_point_array[:proc_num + 1])
        comm.send(x[start:end, :], dest=proc_num, tag=10)
        comm.send(os.getpid(), dest=proc_num, tag=13)
    # the first chunk (proc_num 0) is kept by rank 0 (this process)
    num_data_points = num_data_point_array[0]
    x = x[0:num_data_points, :]
pid = os.getpid()

else:
    num_data_points = comm.recv(source=0, tag=12)
    x = comm.recv(source=0, tag=10)

    # receive the manager's pid (used to label output files)
    pid = comm.recv(source=0, tag=13)

for i in range(num_data_points):
    assert sc.cont_k == 'k2'
    # load the starting state (including kII0) for this data point
    sc.variable_k = True
    x0 = x[i, :]
    sc.set_dofs(x0)
    sc.variable_k = False
    converged = False
    dkcurr = dk
    # retry with progressively smaller K_II steps if the optimizer
    # fails to converge
    for attempt in range(10):
        kII0 = x[i, -1]
        k_x0 = kII0/sc.k1g
        alpha_x0 = x[i, -3]
        print(f'Rescaling K_II/k1g from {k_x0} to {k_x0 + dkcurr}')
        k_x1 = k_x0 + dkcurr
        sc.kII = k_x1*sc.k1g
        if follow_G_contour:
            # rescale K_I so that K_I**2 + K_II**2 (and hence G) is unchanged
            sc.kI = np.sqrt(((k_x0*sc.k1g)**2 + sc.kI**2) - (k_x1*sc.k1g)**2)
            time.sleep(5)

        sc.update_atoms()
        print('starting search...')
        try:
            sc.optimize(ftol=0.0001, steps=100, method='krylov')
            converged = True
            break
        except RuntimeError:
            print('No convergence, retrying with smaller k step')
            dkcurr = dkcurr/2

    if not converged:
        print('No convergence, skipping this point')
        continue
    x1 = np.r_[sc.get_dofs(), k_x1 * sc.k1g]
    alpha_x1 = sc.alpha
    print(f'k0={k_x0}, k1={k_x1} --> alpha0={alpha_x0}, alpha1={alpha_x1}')

    # start the arc-length continuation from this point
    sc.variable_k = True
    traj_file_name = f'{folder_name}/main_process_{pid}_sub_process_{rank}_point_{i}.h5'
    print('starting ncflex...')
    sc.arc_length_continuation(x0, x1, N=nsteps,
                               ds=ds, ds_max=0.05, ftol=fmax, max_steps=10,
                               direction=explore_direction,
                               continuation=continuation,
                               traj_file=traj_file_name,
                               traj_interval=traj_interval,
                               precon=precon, opt_method='krylov', parallel=False,
                               allow_alpha_backtracking=allow_alpha_backtracking,
                               follow_G_contour=follow_G_contour)
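
The script pulls all of its settings through matscipy's parameter() helper,
which looks names up in the params.py module imported at the top. A minimal
illustrative params.py is sketched below; every value is a placeholder
assumption (the Lennard-Jones stand-in potential, the diamond cluster builder
call, and all numbers), not the settings used in this PR:

# params.py -- illustrative sketch only, all values are placeholders
import numpy as np
from ase.calculators.lj import LennardJones
from ase.units import GPa
from matscipy.fracture_mechanics.clusters import diamond

num_processors = 4
calc = LennardJones()            # stand-in for a real interatomic potential
a0 = 5.43                        # lattice constant (Angstrom)
surface_energy = 1.0             # placeholder value (ASE units)
crack_surface = [1, 1, 1]        # crack plane normal
crack_front = [1, -1, 0]         # crack front direction
C = 100.0 * np.eye(6) * GPa      # placeholder 6x6 elastic constant matrix
n = [8, 8, 1]                    # cluster size in unit cells
cryst = diamond('Si', a0, n, crack_surface, crack_front)
cluster = cryst.copy()           # real setups carve a cylindrical cluster with region labels
r_I = 20.0                       # region I radius (Angstrom)
r_III = 40.0                     # region III radius (Angstrom)
cutoff = 6.0                     # potential cutoff (Angstrom)
k1_curve_file = 'x_traj.h5'      # continuation curve from a previous NCFlex run
num_data_points = 8              # number of points sampled from that curve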

190 changes: 190 additions & 0 deletions scripts/fracture_mechanics/parallel_rI_convergence_test.py
@@ -0,0 +1,190 @@
import matscipy
print(matscipy.__file__)  # confirm which matscipy installation is in use
from matscipy import parameter

from mpi4py import MPI
from matscipy.fracture_mechanics.crack import CubicCrystalCrack, SinclairCrack

from ase.units import GPa
import numpy as np

import h5py

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
import params
import os

import time

num_processors = parameter('num_processors')
# read parameters in from the params file
calc = parameter('calc')
fmax = parameter('fmax', 1e-3)
cb = parameter('cb', 'None')
r_I_vals = parameter('r_I_vals')
r_III_vals = parameter('r_III_vals')
cutoff = parameter('cutoff')
k1_curve_file = parameter('k1_curve_file')
num_data_points = parameter('num_data_points')
follow_G_contour = parameter('follow_G_contour', False)
folder_name = parameter('folder_name', 'data')
a0 = parameter('a0') # lattice constant
k0 = parameter('k0', 1.0)
alpha0 = parameter('alpha0', 0.0) # initial guess for crack position
traj_file = parameter('traj_file', 'x_traj.h5')
extended_far_field = parameter('extended_far_field', False)
vacuum = parameter('vacuum', 10.0)
flexible = parameter('flexible', True)
cont_k = parameter('cont_k', 'k1')
clusters = parameter('clusters')  # full set of clusters being used

if cb == 'None':
cb = None

if cb is not None:
    if rank == 0:
        # on first use, fit and save the model instead with
        # cb.initial_regression_fit() and cb.save_regression_model()
        cb.load_regression_model()
        cb_mod = cb.get_model()
    else:
        cb_mod = 0
    # communicate the Cauchy-Born model to all other processors via MPI broadcast
    cb_mod = comm.bcast(cb_mod, root=0)
    cb.set_model(cb_mod)

crk = CubicCrystalCrack(parameter('crack_surface'),
                        parameter('crack_front'),
                        C=parameter('C')/GPa, cauchy_born=cb)

k1g = crk.k1g(parameter('surface_energy'))
print('Griffith k1:', k1g)

if rank == 0:
#now read in the initial data file and start to send over data
hf = h5py.File(k1_curve_file, 'r')
x_traj = hf['x']
x = x_traj[:,:]
hf.close()

    total_data_points = np.shape(x)[0]
    # uniform subsample: keep every sample_stride-th point of the curve
    sample_stride = int(total_data_points/num_data_points)
    x = x[::sample_stride, :]

    data_points_per_proc = int(np.floor(np.shape(x)[0]/num_processors))

    # distribute any remainder one point at a time to the first few ranks
    num_data_point_array = [data_points_per_proc for i in range(num_processors)]
    sum_diff = np.shape(x)[0] - np.sum(num_data_point_array)
    for i in range(sum_diff):
        num_data_point_array[i] += 1
    time.sleep(3)
    # now communicate the data partitions to the worker ranks
    for proc_num in range(1, num_processors):
        comm.send(num_data_point_array[proc_num], dest=proc_num, tag=12)
        start = sum(num_data_point_array[:proc_num])
        end = sum(num_data_point_array[:proc_num + 1])
        comm.send(x[start:end, :], dest=proc_num, tag=10)
        comm.send(os.getpid(), dest=proc_num, tag=13)
    # the first chunk (proc_num 0) is kept by rank 0 (this process)
    num_data_points = num_data_point_array[0]
    x = x[0:num_data_points, :]
pid = os.getpid()
else:
    num_data_points = comm.recv(source=0, tag=12)
    x = comm.recv(source=0, tag=10)

    # receive the manager's pid (used to label output files)
    pid = comm.recv(source=0, tag=13)
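
# Each rank now loops over its assigned data points. For every point, the
# crack is re-relaxed on a sequence of clusters with increasing region-I
# radius, warm-starting each solve from the previous, smaller cluster: a
# convergence test of the solution with respect to r_I.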

for dp in range(num_data_points):
    # each rank writes one HDF5 file per assigned data point, inside folder_name

    # create the folder if it doesn't exist (guard against ranks racing)
    os.makedirs(folder_name, exist_ok=True)

    file_name = f'{folder_name}/{pid}_rank_{rank}_point_{dp}.h5'
    # create (or truncate) the file; per-radius datasets are appended below
    h5py.File(file_name, 'w').close()
    for i, curr_cluster in enumerate(clusters):
        cluster = curr_cluster.copy()
        if crk.cauchy_born is not None:
            crk.cauchy_born.set_sublattices(cluster, np.transpose(crk.RotationMatrix),
                                            read_from_atoms=True)

        sc = SinclairCrack(crk, cluster, calc, k0 * k1g,
                           alpha=alpha0,
                           vacuum=vacuum,
                           variable_alpha=flexible,
                           extended_far_field=extended_far_field, rI=r_I_vals[i],
                           rIII=r_III_vals[i], cutoff=cutoff, incl_rI_f_alpha=True)
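        # a fresh SinclairCrack is built for each cluster so that the region
        # radii rI/rIII match the sizes being tested at this step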

        sc.write_atoms_to_file()
        sc.k1g = k1g
        sc.variable_alpha = True
        sc.variable_k = True
        sc.cont_k = cont_k

        if i == 0:
            # on the first iteration, just record the initial configuration
            # and move on to the next (larger) cluster
            x_initial = x[dp]
            sc.set_dofs(x_initial[:])
            # if both kI and kII are present in x[dp], strip out the one that
            # is NOT under continuation and store it so it can be held fixed
            if len(x_initial) == len(sc)+1:
                if sc.cont_k == 'k1':
                    idx = -1
                    sc.kII = x_initial[idx]
                    kII0 = sc.kII
                elif sc.cont_k == 'k2':
                    idx = -2
                    sc.kI = x_initial[idx]
                    kI0 = sc.kI
                x_initial = np.delete(x_initial, idx)

            prev_u, prev_alph, prev_k = sc.unpack(x_initial[:], reshape=True)
            continue

        # warm start: seed atoms inside the previous region-I radius with the
        # displacements relaxed on the previous (smaller) cluster
        sx, sy, sz = cluster.cell.diagonal()
        pos_rI = cluster.get_positions()[sc.regionI]
        xcoord, ycoord = pos_rI[:, 0], pos_rI[:, 1]
        cx, cy = sx/2, sy/2
        r = np.sqrt((xcoord - cx)**2 + (ycoord - cy)**2)
        mask = r < r_I_vals[i-1]
        sc.alpha = prev_alph
        if sc.cont_k == 'k1':
            sc.kI = prev_k
            sc.kII = kII0
        elif sc.cont_k == 'k2':
            sc.kII = prev_k
            sc.kI = kI0

        sc.u[mask] = prev_u
        sc.update_atoms()

        sc.variable_k = False
        # relax at fixed K on the current cluster
        try:
            sc.optimize(ftol=0.0001, steps=100, method='krylov')
        except RuntimeError:
            # no convergence: abandon the remaining radii for this data point
            break
        sc.write_atoms_to_file()
        sc.variable_k = True

        x_new = sc.get_dofs()
        prev_u, prev_alph, prev_k = sc.unpack(x_new, reshape=True)

        # append the relaxed DOFs as a dataset named after the current radius
        with h5py.File(file_name, 'a') as hf:
            hf.create_dataset(f'r_I_{r_I_vals[i]}', data=x_new,
                              compression='gzip')
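
Each worker file then holds one dataset per region-I radius. A minimal
post-processing sketch for checking convergence of the relaxed crack position
with r_I could look as follows (the file pattern and the DOF layout
[u..., alpha, K] are assumptions based on the script above):

import glob
import h5py

for fname in sorted(glob.glob('data/*_rank_*_point_*.h5')):
    with h5py.File(fname, 'r') as hf:
        for key in sorted(hf.keys()):   # datasets are named 'r_I_<radius>'
            x = hf[key][()]
            alpha = x[-2]               # crack position, assuming [u..., alpha, K]
            print(fname, key, 'alpha =', alpha)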
