
[ENH] using TFT without past target values #1585

Open
mahaassr opened this issue Jul 18, 2024 · 5 comments
Labels: enhancement, feature request

Comments

@mahaassr

Hi,

I have a question regarding the use of the Temporal Fusion Transformer (TFT) model.
Is it possible to use the TFT model effectively without providing past target values among the known or unknown inputs? Specifically, I only pass the target column as the target of the TimeSeriesDataSet class and never include past target values in the known or unknown inputs.
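
For illustration, here is a minimal sketch of the setup I mean (the dataframe and column names are placeholders):

    from pytorch_forecasting import TimeSeriesDataSet

    # the target column is passed only via `target`; it is deliberately
    # left out of time_varying_unknown_reals, so past target values are
    # never declared as model inputs
    dataset = TimeSeriesDataSet(
        df,
        time_idx="time_idx",
        target="value",
        group_ids=["series"],
        max_encoder_length=30,
        max_prediction_length=7,
        time_varying_known_reals=["time_idx"],
        time_varying_unknown_reals=[],  # note: target not listed here
    )
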
Could you please provide some guidance for such scenarios?
Thank you for your assistance!
Best regards,

Maha

@moogoofoo

Did you find an answer to this question? I have the same problem/question.

@fkiraly (Collaborator) commented Sep 13, 2024

I think it is fixed by this: #1667

Generally, it is hard to understand the bug without a minimal reproducible example. It would be appreciated if you could post code, or check whether that PR fixes the failure in your case.

@fkiraly changed the title from "Question on using TFT without past target values" to "[ENH] using TFT without past target values" on Sep 13, 2024
@fkiraly added the enhancement and feature request labels on Sep 13, 2024
@moogoofoo commented Sep 14, 2024

For my issue, I didn't want the target values to be sent to the encoder, which in my case causes leakage because the target values contain some future information. I am not at all sure that this is the best approach, but it seems like it might work for me.

from typing import Dict, Tuple

import torch
from pytorch_forecasting import TimeSeriesDataSet


class MyTimeSeriesDataSet(TimeSeriesDataSet):

    def __getitem__(self, idx: int) -> Tuple[Dict[str, torch.Tensor], torch.Tensor]:
        """
        Get sample for model.

        Args:
            idx (int): index of prediction (between ``0`` and ``len(dataset) - 1``)

        Returns:
            Tuple[Dict[str, torch.Tensor], torch.Tensor]: x and y for model
        """

        [......]

At the end of the function, I changed the multi-target case as follows:

        if self.multi_target:
            encoder_target = [t[:encoder_length] for t in target]
            # Hack: zero out the encoder_target values so that the encoder
            # cannot use the past target as an input
            for each_encoder_target in encoder_target:
                each_encoder_target[:] = 0.0
            target = [t[encoder_length:] for t in target]
        else:
            encoder_target = target[0][:encoder_length]
            target = target[0][encoder_length:]
            target_scale = target_scale[0]
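
A usage sketch under the same assumptions (placeholder dataframe and column names; the subclass is a drop-in replacement for TimeSeriesDataSet):

    # encoder target history is zeroed in __getitem__, so the encoder
    # never sees the raw target values
    training = MyTimeSeriesDataSet(
        df,
        time_idx="time_idx",
        target=["value_a", "value_b"],  # the multi-target case from above
        group_ids=["series"],
        max_encoder_length=30,
        max_prediction_length=7,
    )
    dataloader = training.to_dataloader(train=True, batch_size=64)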

@moogoofoo commented Sep 14, 2024

More appropriately, shouldn't there be some way of specifying which target variables should not be sent to the encoder? As for the documentation, it was not at all clear to me that this is what was happening, and it took me a while to understand it. The documentation should be abundantly clear about this.

@fkiraly (Collaborator) commented Sep 14, 2024

Does this issue summarize the documentation request well?
#1591

What would help a lot is if, in #1591, you could point exactly to the classes or methods, with import locations, where you think the documentation is currently unclear, @moogoofoo.
(Pull requests, of course, are also always appreciated)

Further, if you think the interface should change to a specific target state, an explicit explanation in this issue would be helpful.
