
[Bug]: Blue screen error while using DeFooocus on RTX 3050 (6 GB VRAM) and 16 GB RAM #28

Open
2 tasks done
jake200m opened this issue Aug 24, 2024 · 0 comments
Labels
bug (Something isn't working)

Comments

@jake200m

Prerequisites

Describe the problem

I was using DeFooocus on my laptop (Ryzen 7 7840HS CPU, RTX 3050 GPU with 6 GB VRAM) and it was working fine, but suddenly the screen froze, a blue screen error appeared, and the laptop restarted automatically. The laptop works now, but I am curious why this happened and what I can do to avoid it. Could it harm my device (for example, a dead motherboard) if I keep using DeFooocus on this laptop? Please guide me.

Full console log output

Already up-to-date
Update succeeded.
[System ARGV] ['DeFooocus\\entry_with_update.py', '--attention-split', '--in-browser', '--theme', 'dark', '--preset', 'anime']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 0.2
Loaded preset: D:\Firefox Downloads\defooocus_portable\DeFooocus\presets\anime.json
Total VRAM 6144 MB, total RAM 15655 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce RTX 3050 6GB Laptop GPU : native
VAE dtype: torch.bfloat16
Using split optimization for cross attention
Refiner unloaded.
model_type EPS
UNet ADM Dimension 2816
Running on local URL:  http://127.0.0.1:7865

To create a public link, set `share=True` in `launch()`.
IMPORTANT: You are using gradio version 3.41.2, however version 4.29.0 is available, please upgrade.
--------
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale'}
Base model loaded: D:\Firefox Downloads\defooocus_portable\DeFooocus\models\checkpoints\animaPencilXL_v100.safetensors
Request to load LoRAs [['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [D:\Firefox Downloads\defooocus_portable\DeFooocus\models\checkpoints\animaPencilXL_v100.safetensors].
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 1.31 seconds
Started worker with PID 35700
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865

Version

DeFooocus 0.2

Where are you running Fooocus?

Locally

Operating System

Windows 11

What browsers are you seeing the problem on?

Firefox

jake200m added the bug label on Aug 24, 2024