Syntax and capability enhancements #14

Open · 4 tasks
dgasmith opened this issue Jun 7, 2016 · 2 comments

Comments

@dgasmith (Collaborator)

dgasmith commented Jun 7, 2016

The DF-MP2 example as per our discussion.

# Initialize base objects
iaQ = DiskTensor("iaQ DF Tensor", oshape, vshape, Qshape)
eps_o = CoreTensor("Occupied eigenvalues", oshape)
eps_v = CoreTensor("Virtual eigenvalues", vshape)

# Initialize temps
Qv_1 = CoreTensor("Qv_1", vshape, Qshape)
Qv_2 = CoreTensor("Qv_2", vshape, Qshape)

vv_numer_1 = CoreTensor("vv_numer_1", vshape, vshape)
vv_numer_2 = CoreTensor("vv_numer_2", vshape, vshape)

denom = CoreTensor("denom", vshape, vshape)
vv_denom = CoreTensor("vv_denom", vshape, vshape)
vv_denom["ab"] = - eps_v["a"] - eps_v["b"]

MP2corr_OS = 0.0
MP2corr_SS = 0.0
for i in range(ndocc):

    Qv_1["aQ"] = READ(iaQ, {i, "a", "Q"})
    for j in range(ndocc):
        Qv_2["aQ"] = READ(iaQ, {j, "a", "Q"})

        # Form integrals
        vv_numer_1["ab"] = Qv_1["aQ"] * Qv_2["bQ"]

        # Form denominator
        denom["ab"] = vv_denom["ab"]
        denom["ab"] += (eps_o[i] + eps_o[j])

        # OS correlation
        vv_numer_2["ab"] = vv_numer_1["ab"]
        vv_numer_2["ab"] *= vv_numer_1["ab"]

        MP2corr_OS += vv_numer_2["ab"] / denom["ab"]

        # SS correlation
        vv_numer_2["ab"] = vv_numer_1["ab"]
        vv_numer_2["ab"] -= vv_numer_1["ba"]
        vv_numer_2["ab"] *= vv_numer_1["ab"]

        MP2corr_SS += vv_numer_2["ab"] / denom["ab"]
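For reference, the loop above can be written today in plain NumPy. This is a sketch of the intended semantics only, not the proposed API; the function name and the assumption that `iaQ[i]` yields the full (virtual × auxiliary) block for occupied index `i` are mine:

```python
import numpy as np

def df_mp2_energy(iaQ, eps_o, eps_v):
    """OS/SS MP2 correlation energies from DF integrals.

    iaQ   : (ndocc, nvir, naux) three-index integrals (ia|Q)
    eps_o : (ndocc,) occupied orbital energies
    eps_v : (nvir,)  virtual orbital energies
    """
    ndocc = eps_o.size
    # vv_denom["ab"] = -eps_v["a"] - eps_v["b"], via broadcasting
    vv_denom = -eps_v[:, None] - eps_v[None, :]
    os_corr = 0.0
    ss_corr = 0.0
    for i in range(ndocc):
        Qv_1 = iaQ[i]                    # READ(iaQ, {i, "a", "Q"})
        for j in range(ndocc):
            Qv_2 = iaQ[j]
            v_ab = Qv_1 @ Qv_2.T         # (ia|jb) for fixed i, j
            denom = vv_denom + eps_o[i] + eps_o[j]
            os_corr += np.sum(v_ab * v_ab / denom)
            ss_corr += np.sum(v_ab * (v_ab - v_ab.T) / denom)
    return os_corr, ss_corr
```

Note that `np.sum(... / denom)` still materializes the quotient as a temporary, which is exactly what the lazy `reduce` feature below would avoid.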

Features needed

  • Slicing syntax for reading/writing from/to Core/Disk tensors. As a note, if we do like the READ syntax, it can be implemented in C++ with std::tuple instead of std::vector.
  • Element-wise arithmetic: -, +, /, *.
  • Broadcasting. In its simplest form this is the ability to add a single number to a tensor.
  • Reduce (sum) operations. It would be nice if reduce(vv_numer_2["ab"] / denom["ab"]) did not require a second tensor to hold the elementwise quotient. Some describe this as lazy evaluation.
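The lazy-evaluation idea in the last bullet can be sketched in a few lines of Python (this is a toy, not the ambit API; all class and method names are invented): the division composes element accessors instead of filling a buffer, and `reduce` consumes elements one at a time, so no intermediate tensor is ever allocated.

```python
class LazyExpr:
    """Toy lazy tensor expression: holds an element accessor, not a buffer."""

    def __init__(self, get, size):
        self.get, self.size = get, size

    @classmethod
    def wrap(cls, data):
        # wrap concrete storage in an accessor
        return cls(lambda i: data[i], len(data))

    def __truediv__(self, other):
        # no temporary is materialized; we just compose the accessors
        return LazyExpr(lambda i: self.get(i) / other.get(i), self.size)

    def reduce(self):
        # stream over elements, accumulating the sum
        return sum(self.get(i) for i in range(self.size))

numer = LazyExpr.wrap([2.0, 4.0])
denom = LazyExpr.wrap([1.0, 2.0])
print((numer / denom).reduce())  # 2/1 + 4/2 = 4.0
```

C++ expression-template libraries implement the same trick at compile time, which is why the READ/tuple machinery and this feature could share infrastructure.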
@kannon92 (Contributor)

kannon92 commented Jun 7, 2016

I was not able to attend this meeting, but here is one more feature I would ask for.

Handling of repeated indices: F["pq"] = h["pq"] + B["Qpq"] * B["Qrr"] - ...

At one point, ambit would complain about this type of contraction. It's not hard to work around this, but it seems silly that ambit can't handle this simple contraction from SCF.
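In einsum terms, the repeated index r is a trace; a NumPy illustration with assumed shapes (B[Q, p, q] with a small auxiliary dimension) shows what the contraction means and the two-step workaround:

```python
import numpy as np

rng = np.random.default_rng(7)
B = rng.normal(size=(5, 3, 3))          # B[Q, p, q], shapes assumed

# the contraction from the Fock build: repeated index r is traced out
J = np.einsum('Qpq,Qrr->pq', B, B)

# equivalent two-step workaround when repeated indices are unsupported:
d = np.einsum('Qrr->Q', B)              # trace each Q-slice of B
J_ref = np.einsum('Q,Qpq->pq', d, B)    # then contract over Q
```

The workaround costs an extra intermediate vector, which is harmless here but illustrates why native support would be nicer.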

@amjames

amjames commented Jun 7, 2016

Would it be very difficult to enable the syntax for permutations to work with slices? I took a look under the hood, but wasn't able to work anything out.

Here is an example from a toy code I was playing with, adapted from the Hartree-Fock examples in libint2's tests. I couldn't get around using an intermediate tensor to do the permutation and then slicing that into the "full" result.

//… looping over s1,s2,s3 
for(auto s4 =0; s4 <= s4_max; ++s4){
    // the same already done for s1,s2,s3 above
    size_t shell4start = shell2bf[s4];
    size_t shell4size = basis_[s4].size();
    std::vector<size_t> s4result = {shell4start,shell4start+shell4size};
    std::vector<size_t> s4alone = {0,shell4size};


    // libint2::Engine object, engine.compute() returns computed integrals for this quartet
    const double* buf = engine.compute(basis_[s1],basis_[s2], basis_[s3], basis_[s4]);
    // build a temporary tensor for this block
    Tensor pqrs = Tensor::build(CoreTensor, "pqrs", {shell1size, shell2size, shell3size, shell4size});
    // copy the integral buffer into the tensor's storage
    std::vector<double>& pqrs_data = pqrs.data();
    pqrs_data.assign(buf, buf + pqrs_data.size());

    //Set the full result pqrs slice fullResult is a core tensor with dims {nbf,nbf,nbf,nbf}
    fullResult({s1result,s2result,s3result,s4result}) = pqrs({s1alone,s2alone,s3alone,s4alone});

    // now permute indices and write a different slice of fullResult with the same data;
    // it would be nice to be able to permute slices like this:
    fullResult({s1result,s2result,s4result,s3result})("pqsr") = pqrs({s1alone,s2alone,s3alone,s4alone})("pqrs");
    // but that syntax is invalid. The only way I could figure out to do this was to create an
    // intermediate tensor with the correct dimensions for the permutation:
    Tensor temp_pqsr = Tensor::build(CoreTensor, "temp", {shell1size,shell2size,shell4size,shell3size});
    temp_pqsr("pqsr") = pqrs("pqrs");
    fullResult({s1result,s2result,s4result,s3result}) = temp_pqsr({s1alone,s2alone,s4alone,s3alone});

// and so on .. 
}

I wonder if there is a way to add a ()(std::string) operator to SlicedTensor that returns a LabeledTensor, or possibly a new class that merges the two (LabeledSlicedTensor). This could be nonsense, but there's my $0.02.
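For comparison, NumPy expresses the permuted slice write in one statement via slice assignment plus `transpose`. A toy sketch with invented shapes and slice bounds (nothing here is ambit API):

```python
import numpy as np

nbf = 4
full = np.zeros((nbf, nbf, nbf, nbf))

# a computed shell-quartet block with shape (shell1size, shell2size, shell3size, shell4size)
block = np.arange(2 * 2 * 1 * 3, dtype=float).reshape(2, 2, 1, 3)

# slice objects play the role of the {start, stop} vectors above
s1, s2, s3, s4 = slice(0, 2), slice(1, 3), slice(2, 3), slice(0, 3)

full[s1, s2, s3, s4] = block                         # plain slice write
full[s1, s2, s4, s3] = block.transpose(0, 1, 3, 2)   # permuted slice write, no intermediate
```

The key is that `block.transpose(...)` is a view, so the permutation and the slice write happen in one pass; a `LabeledSlicedTensor` could offer the same one-statement form.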
