more examples? #26
See this section of the developer guide. It describes how precision modes are handled.
.cc files are for Common Compute, allowing the same code to be used for both CUDA and OpenCL.
You can find lots of examples in https://github.com/openmm/openmm/blob/master/platforms/common/src/CommonKernels.cpp. Be sure to read through the full developer guide first; it provides essential background for understanding the code.
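As a rough illustration (not code from this repo), a kernel written against the `real` type and the Common Compute macros compiles for both backends. Something along these lines:

```c
// Minimal illustrative sketch, not taken from this repo. "real"/"real4" expand to
// float/float4 or double/double4 depending on the precision mode, and KERNEL,
// GLOBAL, RESTRICT, GLOBAL_ID, and GLOBAL_SIZE expand to the matching CUDA or
// OpenCL keywords, so the same .cc source serves both platforms.
KERNEL void scalePositions(GLOBAL real4* RESTRICT posq, real scale, int numAtoms) {
    // Grid-stride loop: each thread handles every GLOBAL_SIZE-th atom.
    for (int index = GLOBAL_ID; index < numAtoms; index += GLOBAL_SIZE) {
        real4 pos = posq[index];
        pos.x *= scale;
        pos.y *= scale;
        pos.z *= scale;
        posq[index] = pos;   // the w component (the charge) is left unchanged
    }
}
```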
Cool. I'm going to have to apologize for missing a couple of things in the docs there (the .rst files are just the package that builds the docs you see on the site for the main repo, kind of like Markdown files on GitHub nowadays), but I think I need a mem_fence and a couple of other things here (this is re: the other issue over on openmm/openmm). I really think some kind of implementation like the one I have in mind would be more general purpose for everyone: you say which GlobalVariables you want buffered, and they get copied over every 5000 steps or whatever interval you specify. Maybe it could be a plugin, but the way I was thinking of it, it almost has to go into the base classes so it can integrate with anything like CustomDihedralForce or CustomCVForce, unfortunately. We are allowed to try something new here, right? I'm not charging you anything, and you can test it however you like before putting it out there :D. The aMD implementation is a case where getting output every single step helps a lot; I have a spreadsheet with the graphs I could email you offline so you can see how noisy it looks compared to some much shorter jobs I never published but have on GitHub. I already sent it to John at MSKCC.
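To make the buffering idea concrete, here is a very rough sketch (all names are hypothetical, this is not an existing OpenMM API, and the right fence primitive depends on the backend):

```c
// Hypothetical sketch of the buffering idea, not an existing OpenMM kernel.
// Each step, one thread appends the current value of a monitored global
// variable to a device-side history buffer; the host only downloads the
// buffer (and resets writeIndex) every 5000 steps or whatever interval the
// user specified, instead of copying after every single step.
KERNEL void recordGlobalVariable(GLOBAL real* RESTRICT history,
                                 GLOBAL int* RESTRICT writeIndex,
                                 real currentValue) {
    if (GLOBAL_ID == 0) {
        int slot = *writeIndex;
        history[slot] = currentValue;
        *writeIndex = slot + 1;
        // A mem_fence (or whatever the Common Compute equivalent is) would go
        // here so the write is visible before the step completes.
    }
}
```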
NAMD made it easy to just say give me colvars every single step, but we're doing OpenMM here, and I have to give 2 distances and 10 dihedrals, so it was quite a lot. There's no other technique out there that really does enhanced sampling like this where the dihedrals in particular are involved. I'm even wondering whether putting CMAP into the same force group would help too, but that's a simple one that's already implemented of course, and NAMD already had it; it seems kind of silly not to have that boosted as well, though the dihedral picture without it is much easier to explain to people (and can still be exceptionally difficult for those who haven't run it before). There's no chance at all of producing good 2D PMFs when the 1D PMFs are this noisy with 1.3 microseconds of data already (vs. 200 and 300 ns with about the same or better data for another similar protein system with more amino acids, but with output every 2 fs)!
Hey, I had some fun with the archive but the kernel is just this:
So a lot of the rest is the sort of boilerplate you need. I like that you used the ternary ? : (fun in JavaScript too, with React), and I can see how, since you probably also have some definition somewhere in the main source that defines "real" as double (or float, depending on the precision mode), you might paste this inside a CUDA or OpenCL kernel for compilation. But there's no .cl file here to work with at all.
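(I'm imagining the preamble that gets prepended before the kernel source is compiled as CUDA or OpenCL C looks roughly like this; hypothetical, the actual definitions live somewhere in the main OpenMM platform code:)

```c
// Hypothetical preamble; the real definitions are generated by the platform.
#ifdef USE_DOUBLE_PRECISION
typedef double real;
typedef double4 real4;
#else
typedef float real;
typedef float4 real4;
#endif
```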
Anyway, the kernel itself basically does close to nothing 😺. Are there some more examples floating around anywhere, or should I go back to the main openmm/openmm source and just dig further into the full source code? I'm looking for more sophisticated examples where, for example, we declare some memory on the GPU in OpenCL (something like the sketch below is what I'm picturing). My issue is that I'm trying to build a faster solution than a loop that does one simulation step and then saves some global variables, and I have very little to go on here.
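A hypothetical sketch of what I mean, in the Common Compute style (I'm guessing at the LOCAL/SYNC_THREADS/GROUP_ID macro names, so don't take them as gospel):

```c
// Hypothetical sketch of a kernel that works with GPU memory directly:
// per-atom values are summed using a LOCAL (shared/local) scratch buffer,
// and each work-group writes one partial sum for the host to add up.
KERNEL void sumPerAtomValues(GLOBAL const real* RESTRICT values, int numAtoms,
                             GLOBAL real* RESTRICT partialSums) {
    LOCAL real scratch[64];          // assumes a work-group size of 64
    real sum = 0;
    for (int i = GLOBAL_ID; i < numAtoms; i += GLOBAL_SIZE)
        sum += values[i];
    scratch[LOCAL_ID] = sum;
    SYNC_THREADS;
    // Tree reduction within the work-group.
    for (int step = 32; step > 0; step /= 2) {
        if (LOCAL_ID < step)
            scratch[LOCAL_ID] += scratch[LOCAL_ID + step];
        SYNC_THREADS;
    }
    if (LOCAL_ID == 0)
        partialSums[GROUP_ID] = scratch[0];
}
```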
I do like, however, that you can skip Lepton. I was trying to implement a Lowe-Andersen thermostat and realized that Lepton's limitations make it close to impossible to write a "simple" solution in it: I need atoms within certain distances, and I could piggyback off of existing code there.
So an example where the kernels are more detailed and work with memory and/or the existing cutoff implementations would be more helpful for seeing what the options are for my two use cases (saving global variables regularly, and Lowe-Andersen). This is nice as a start, though.
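For the Lowe-Andersen side, this rough brute-force sketch is the kind of thing I mean (names are hypothetical, periodic boundaries are ignored, and the real version should piggyback on the existing cutoff/neighbor-list machinery rather than an O(N^2) loop):

```c
// Hypothetical brute-force sketch of the pair search a Lowe-Andersen thermostat
// needs: collect all atom pairs closer than the cutoff so their relative
// velocities can later be re-thermalized. ATOMIC_INCREMENT is a placeholder
// for whatever atomic primitive the backend provides for reserving a slot.
KERNEL void findPairsWithinCutoff(GLOBAL const real4* RESTRICT posq, int numAtoms,
                                  real cutoffSquared,
                                  GLOBAL int2* RESTRICT pairs,
                                  GLOBAL int* RESTRICT numPairs) {
    for (int i = GLOBAL_ID; i < numAtoms; i += GLOBAL_SIZE) {
        real4 pi = posq[i];
        for (int j = i + 1; j < numAtoms; j++) {
            real4 pj = posq[j];
            real dx = pi.x - pj.x;
            real dy = pi.y - pj.y;
            real dz = pi.z - pj.z;
            if (dx*dx + dy*dy + dz*dz < cutoffSquared) {
                int slot = ATOMIC_INCREMENT(numPairs);  // placeholder atomic
                int2 pair;
                pair.x = i;
                pair.y = j;
                pairs[slot] = pair;
            }
        }
    }
}
```

With something like that producing the pair list, the actual re-thermalization of pair relative velocities would be a second, much simpler kernel.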