
How to interpret the trained model? #698

Closed

landkwon94 opened this issue Jul 19, 2023 · 1 comment

Comments

@landkwon94

Description

Hello Sir!

First of all, thank you so much for sharing your amazing code!

I am using the TFT model for training/validation/testing in a time-series forecasting task.
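For reference, my setup looks roughly like the minimal sketch below (the dataset and hyperparameter values are just illustrative placeholders, using the sample AirPassengersDF from neuralforecast.utils):

```python
# Minimal sketch of the training setup; dataset and
# hyperparameters are illustrative placeholders.
from neuralforecast import NeuralForecast
from neuralforecast.models import TFT
from neuralforecast.utils import AirPassengersDF

# AirPassengersDF is in the long format neuralforecast expects:
# columns `unique_id`, `ds` (timestamp), and `y` (target).
model = TFT(h=12, input_size=24, max_steps=100)
nf = NeuralForecast(models=[model], freq='M')
nf.fit(df=AirPassengersDF)
forecasts = nf.predict()
```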

After training the model, I want to visualize the 'feature importance' and 'attention weights', as in the figures below (from the original paper).

[Figures from the TFT paper: variable importance and attention-weight visualizations]

Is there any module we can use to perform this visualization in neuralforecast?

I will wait for your reply!

Many thanks for your contributions :)

Link: No response

@marcopeix
Contributor

#1104 solves this issue.
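In the meantime, once you have extracted an attention matrix from the model, plotting it as a heatmap takes only a few lines of matplotlib. A minimal sketch; the `attn` array here is random placeholder data standing in for real attention weights:

```python
# Sketch: plotting an attention-weight matrix as a heatmap.
# `attn` stands in for whatever attention matrix the model
# exposes; random data is used here as a placeholder.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
attn = rng.random((12, 24))                    # (horizon steps, input steps)
attn = attn / attn.sum(axis=1, keepdims=True)  # row-normalize, like softmax output

fig, ax = plt.subplots(figsize=(8, 4))
im = ax.imshow(attn, aspect='auto', cmap='viridis')
ax.set_xlabel('Input time step')
ax.set_ylabel('Forecast horizon step')
ax.set_title('TFT attention weights (placeholder data)')
fig.colorbar(im, ax=ax, label='Attention weight')
plt.tight_layout()
plt.show()
```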
