Adding model with cycle consistency and VampPrior #2421
base: main
Conversation
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

@@            Coverage Diff             @@
##             main    #2421      +/-   ##
==========================================
- Coverage   84.81%   84.62%    -0.20%
==========================================
  Files         173      183      +10
  Lines       14793    15436     +643
==========================================
+ Hits        12547    13063     +516
- Misses       2246     2373     +127
Hey, thanks for putting in this PR! A couple of high-level comments for now.
Could you restructure the files as follows?
- csi/model/_model.py -> csi/_model.py
- csi/model/_utils.py -> csi/_utils.py
- csi/module/_module.py -> csi/_module.py
- csi/module/_priors.py -> csi/_priors.py
- csi/nn/_base_components.py -> csi/_components.py
- csi/train/_trainingplans.py -> csi/_trainingplans.py
- Refactor csi/module/_module.py to use scvi.module.base.LossOutput instead of our previous LossRecorder, and remove csi/module/_loss_recoder.py (a minimal sketch of this change follows after this list).
- If the plan is to directly inherit from UnsupervisedTrainingMixin without any major changes except setting _training_plan_cls, this can be done within _model.py by directly subclassing UnsupervisedTrainingMixin and setting _training_plan_cls there instead, so I would recommend doing that and deleting csi/model/_training.py
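For the LossOutput refactor mentioned above, here is a minimal sketch of what the module's loss method could look like. The tensor keys ("X", "px", "qz", "pz") and the exact loss terms are assumptions for illustration only, not the PR's actual code:

```python
import torch
from scvi.module.base import LossOutput


def loss(self, tensors, inference_outputs, generative_outputs):
    # Hypothetical per-cell loss terms; the dictionary keys are placeholders,
    # not the names used in this PR.
    reconst_loss = -generative_outputs["px"].log_prob(tensors["X"]).sum(-1)
    kl_local = torch.distributions.kl_divergence(
        inference_outputs["qz"], generative_outputs["pz"]
    ).sum(-1)
    total = torch.mean(reconst_loss + kl_local)

    # LossOutput replaces the old LossRecorder: it records the scalar training
    # loss plus the per-cell components used for logging.
    return LossOutput(loss=total, reconstruction_loss=reconst_loss, kl_local=kl_local)
```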
I can take a look at compatibility with scArches in the next round.
We're also slowly migrating to newer typing annotations in Python. Could you add a from __future__ import annotations line to the top of every file (except the __init__.py files) and migrate the typing as follows (a brief before/after example is sketched after this list)?
- Optional[x] -> x | None
- Union[x, y] -> x | y
- List[x] -> list[x]
- Tuple[x] -> tuple[x]
- Callable -> callable
- There's more, I don't remember all of them. The pre-commit hooks will warn about missing ones.
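Purely as an illustration of the requested migration (the function and argument names below are made up):

```python
from __future__ import annotations

from typing import Optional, Union  # only needed for the "before" variant


# Before: old typing constructs.
def project_before(x, n_hidden: Optional[int] = None, scale: Union[int, float] = 1.0) -> list[str]:
    ...


# After: PEP 604/585 syntax; the __future__ import keeps the annotations
# working on older Python versions since they are no longer evaluated at runtime.
def project_after(x, n_hidden: int | None = None, scale: int | float = 1.0) -> list[str]:
    ...
```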
I added the changes and adapted the tutorial accordingly.
Thanks for addressing the comments! I'll take another pass at it soon.
Some minor typing/style comments as well as implementation ones. I still don't understand all parts of the implementation, so I will take another pass at this.
For code simplicity, the use of mock covariates was replaced with real covariates in the cycle pass (roughly sketched below). This still needs testing on real data.
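To make the idea concrete, here is a generic sketch of a cycle pass that re-encodes with the real covariate of the target system; the encode/decode helpers and argument names are hypothetical and do not reflect the PR's actual implementation:

```python
import torch


def cycle_latent_penalty(module, x, system, other_system):
    # Hypothetical helpers standing in for the module's inference/generative passes.
    z = module.encode(x, system)                  # latent for the original system
    x_hat = module.decode(z, other_system)        # "translate" to the other system
    z_cycle = module.encode(x_hat, other_system)  # re-encode using the real target covariate
    # Cycle-consistency penalty: the latent should be preserved after the round trip.
    return torch.nn.functional.mse_loss(z_cycle, z)
```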
@canergen @ori-kron-wis The PR is now ready for review. @ori-kron-wis Can you help me fix the CUDA test error? For some reason it cannot import SysVI, although it works for me locally. I was also not able to test on a GPU since I only have macOS access, and I get an error where MPS is detected but not used. If you have any solutions for that, please let me know.
    self.var_param = Parameter(torch.zeros(1, n_output))
else:
    raise ValueError("Mode not recognised.")
self.activation = torch.exp if activation is None else activation
Do you still need nan_to_num if you use a softplus activation here? You should consider changing it; it is still exp in scvi only for backward compatibility.
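A small sketch of the comparison, assuming the variance head currently applies torch.exp followed by nan_to_num (the epsilon below is an added assumption, not part of the PR):

```python
import torch

var_logits = torch.randn(4, 10)  # stand-in for the raw variance output of the network

# Current pattern (exp is kept in scvi for backward compatibility and can overflow):
var_exp = torch.nan_to_num(torch.exp(var_logits))

# Suggested alternative: softplus stays finite for large inputs, so nan_to_num is
# likely redundant; the small epsilon keeps the variance strictly positive.
var_softplus = torch.nn.functional.softplus(var_logits) + 1e-4
```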
super().train(*args, **kwargs)

@torch.inference_mode()
def get_latent_representation(
Does it need its own get_latent_representation, or can it inherit it from the VAEMixin class?
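For reference, a minimal sketch of the inheritance the comment refers to: scvi.model.base.VAEMixin provides get_latent_representation, so a model can reuse it instead of defining its own (the class name below is a placeholder):

```python
from scvi.model.base import BaseModelClass, UnsupervisedTrainingMixin, VAEMixin


class MyCycleModel(VAEMixin, UnsupervisedTrainingMixin, BaseModelClass):
    """Placeholder model class.

    get_latent_representation is inherited from VAEMixin, which works as long
    as the underlying module exposes a standard inference pass that returns
    the latent distribution.
    """
```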
At first look this looks good. Two comments: should we wait for FCLayers to be compatible and then use FCLayers, or do you think it would not work anyway?
The FCLayers I use now are the base for the suggestion you are working on, so it should work with minor modifications in how the API is accessed in SysVI. If that is planned to be finished soon, then let's wait; otherwise I suggest merging, so that we finally get it into main, and switching afterwards. Also, @canergen, can you please help with the tests? I cannot check the code on a GPU locally (I don't have one), and the CI tests fail. I don't think it is on my side, as I can run the import below locally.
@canergen I think that all the requests that could be resolved at this time have been implemented. I would very much appreciate it if we could resolve this, as keeping it open as a PR is creating double the work for me, since others keep using different SysVI versions from my fork and local repository. Last thing:
When running the full benchmark on the new version, we realised it no longer performs as well as the version from before the autumn changes. I will have a look at it, but I am afraid the cause may be one of the many "non-straightforward" changes required to reuse more of the scvi-tools code. @canergen If that is the case, I will probably give up on this PR, and people will then need to use the stable version of my fork from before the requested changes, as I also lack the capacity to work on this.
See my mail. Happy to look through it once and see whether we can make it work again. Otherwise, we should close here, which would be quite unfortunate.
Thanks for the reminder. As also mentioned in the email reply, I will inspect the performance in more detail over the holidays and then let you know about merging/closing by early January.
Closes #2383