
gui: display info-level logs when starting daemon #748

Draft · wants to merge 2 commits into master from show-daemon-info-level-logs

Conversation

@jp1ac4 (Collaborator) commented Oct 26, 2023

This is to resolve #475.

I've tried to follow the approach suggested in #475 (comment).

I was having trouble using an mpsc::channel in the iced::Subscription closure, as it can't be shared between threads, so for now I've created a channel using crossbeam_channel::unbounded instead. Perhaps there's a different way of creating the subscription that would allow me to use mpsc::channel.
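
For context, here's a minimal sketch of this crossbeam-based approach, using the tracing-subscriber Layer and Visit traits; the ChannelLayer and MessageVisitor names are hypothetical, not this PR's actual code. crossbeam's Sender is Send + Sync, which is what lets it be moved into the subscription.

use tracing::{field::{Field, Visit}, Event, Subscriber};
use tracing_subscriber::layer::{Context, Layer};

// Hypothetical layer that forwards each event's "message" field into a
// channel the GUI can poll from its iced subscription.
struct ChannelLayer {
    sender: crossbeam_channel::Sender<String>,
}

impl<S: Subscriber> Layer<S> for ChannelLayer {
    fn on_event(&self, event: &Event<'_>, _ctx: Context<'_, S>) {
        // Visit the event's fields and forward the formatted message.
        let mut visitor = MessageVisitor { sender: &self.sender };
        event.record(&mut visitor);
    }
}

struct MessageVisitor<'a> {
    sender: &'a crossbeam_channel::Sender<String>,
}

impl Visit for MessageVisitor<'_> {
    fn record_debug(&mut self, field: &Field, value: &dyn std::fmt::Debug) {
        if field.name() == "message" {
            // Ignore the error if the receiving side has gone away.
            let _ = self.sender.send(format!("{:?}", value));
        }
    }
}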

I'm using the same Message variant for both Liana and bitcoind logs and distinguishing between them in on_log() based on the current step. I could also use a different message for each case.
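
As a rough illustration of that dispatch (the Step variants and field names below are hypothetical, not the actual loader states):

// One log message is attributed to bitcoind or to the Liana daemon
// depending on which step the loader is currently in.
enum Step {
    StartingBitcoind { latest_log: Option<String> },
    StartingDaemon { latest_log: Option<String> },
    Ready,
}

fn on_log(step: &mut Step, log: String) {
    match step {
        Step::StartingBitcoind { latest_log } => *latest_log = Some(log),
        Step::StartingDaemon { latest_log } => *latest_log = Some(log),
        Step::Ready => {}
    }
}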

@edouardparis (Member)

Concept ACK

@jp1ac4 force-pushed the show-daemon-info-level-logs branch from 7516ba3 to 6f09708 on October 27, 2023 07:46
@darosior (Member)

I tested it on mainnet and it works fine. I'll let @edouardparis judge the approach (especially whether introducing this new dependency is necessary). I wanted to have this in v3, but I'm starting to lean toward having it in v4 instead, given the bugs we already have to deal with in v3.

@darosior (Member) commented Oct 30, 2023

Is there anything left to do here? I can't speak to the implementation, but I unloaded my lianad watchonly wallet from my local mainnet node for a couple of days in order to test this, and it worked like a charm. It displayed "Loading watchonly wallet." for around 10 seconds before starting up, whereas before it would just appear to freeze.

@jp1ac4 (Collaborator, author) commented Oct 30, 2023

That's great to hear. I don't have any further changes planned for now. I'll change it to ready for review.

@jp1ac4 marked this pull request as ready for review on October 30, 2023 15:03
@darosior (Member) commented Nov 1, 2023

Just a self reminder to re-assess the approach here in light of the bugs with the way we were connecting the logs from bitcoind: #773.

@darosior (Member)

> Just a self reminder to re-assess the approach here in light of the bugs with the way we were connecting the logs from bitcoind: #773.

Since this is using an unbounded channel it appears it would never create a deadlock of the kind we had with bitcoind's logs?

@jp1ac4 (Collaborator, author) commented Nov 10, 2023

> Since this is using an unbounded channel it appears it would never create a deadlock of the kind we had with bitcoind's logs?

I suppose not, unless some external "global limit" is imposed on its capacity; and since we won't be logging as much as bitcoind's IBD, we'd be unlikely to reach such a limit in any case. But otherwise, given that we won't be reading from the channel most of the time, is there a chance it might grow too large and consume a lot of memory? Ideally, there'd be a way to close the channel once we're done with it.

fn record_debug(&mut self, field: &tracing::field::Field, value: &dyn std::fmt::Debug) {
    if field.name() == "message" {
        let msg = format!("{:?}", value);
        // The send result is ignored here; see the review comments below.
        let _ = self.sender.send(msg);
    }
}
Review comment (Member):

We should check for the error indicating that the other side of the channel is disconnected, in order to stop sending messages into it.

Reply (@jp1ac4, author):

I haven't found a way to use the error to tell this layer to stop being triggered. I tried to set a flag of some kind, but the on_event method only takes &self, so I didn't manage to.
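
For reference, one way to set a flag through &self is interior mutability, e.g. an AtomicBool; a hypothetical sketch, extending the ChannelLayer sketched in the description above, not code from this PR:

use std::sync::atomic::{AtomicBool, Ordering};

struct ChannelLayer {
    sender: crossbeam_channel::Sender<String>,
    // Interior mutability: can be flipped through a shared reference.
    disconnected: AtomicBool,
}

impl ChannelLayer {
    fn try_send(&self, msg: String) {
        // Stop early once we know the receiver has been dropped.
        if self.disconnected.load(Ordering::Relaxed) {
            return;
        }
        if self.sender.send(msg).is_err() {
            self.disconnected.store(true, Ordering::Relaxed);
        }
    }
}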

Reply (@jp1ac4, author):

With my latest changes, we may not need to do anything here as we only send messages to the channel while start_bitcoind_and_daemon is running.

@@ -36,6 +39,7 @@ pub struct Logger {
         Registry,
     >,
     level_handle: reload::Handle<filter::LevelFilter, Registry>,
+    receiver: crossbeam_channel::Receiver<String>,
Review comment (Member):

I believe it should be removed. The layer can be added in set_running_mode and the receiver returned as the Result of that method.

Review comment (Member):

If that's not possible, change it to an Option<> and use take to pass it to the loader. Ideally, this Receiver would be dropped once the loader changes its state.
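
For illustration, with the field as an Option (the take_receiver method name is hypothetical):

pub struct Logger {
    // ... other fields ...
    receiver: Option<crossbeam_channel::Receiver<String>>,
}

impl Logger {
    // Hand the receiver to the loader exactly once; later calls return
    // None, and dropping it disconnects the channel's sending side.
    pub fn take_receiver(&mut self) -> Option<crossbeam_channel::Receiver<String>> {
        self.receiver.take()
    }
}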

Reply (@jp1ac4, author):

When adding it in set_running_mode, I'm not sure how to make it active without calling init as is done in setup. The subscriber returns a message saying it's trying to read from a disconnected channel (and this message is shown in the GUI on the loader screen).

The second approach, using take, seems to work in terms of making the channel become disconnected once the loader has completed (or shortly thereafter).

Reply (@jp1ac4, author):

We might be able to create an mpsc channel inside the iced subscription closure itself, as in this example:
https://github.com/iced-rs/iced/blob/dd249a1d11c68b8fee1828d58bae158946ee2ebd/futures/src/subscription.rs#L286-L355
The channel should then be dropped once the loader completes.

We would pass the sender back to the loader and would just need a way to subscribe to log events while the loader is running, perhaps by using the layer with https://docs.rs/tracing/latest/tracing/dispatcher/fn.set_default.html.
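
For illustration, a sketch of scoping the layer with set_default, assuming the hypothetical ChannelLayer from the description above; note the returned guard is thread-local, so it only redirects events emitted on the thread it's created on:

fn run_with_gui_logs(sender: crossbeam_channel::Sender<String>) {
    use tracing::dispatcher::{self, Dispatch};
    use tracing_subscriber::layer::SubscriberExt;

    // A dispatcher containing only the channel layer.
    let dispatch = Dispatch::new(tracing_subscriber::registry().with(ChannelLayer { sender }));

    // set_default returns a guard; the previous default dispatcher is
    // restored (detaching the channel layer) when the guard is dropped.
    let _guard = dispatcher::set_default(&dispatch);
    // ... run the loading step whose logs should appear in the GUI ...
}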

@jp1ac4 (Collaborator, author) commented Nov 10, 2023

Thanks for the review. I'm working on those changes now.

Just wondering whether it would also be worth considering an approach similar to the one for the bitcoind logs, i.e. reading from the GUI log file?

@darosior (Member)

Reading the logs back from the file we write them to certainly sounds more roundabout and brittle (as in, there are more moving parts, so more could go wrong). Another avenue I thought of, though I don't know whether it's possible, is to "hook" the log messages sent by the daemon. Maybe the GUI's tracing can do that?

@jp1ac4 force-pushed the show-daemon-info-level-logs branch from 6f09708 to 8bb05df on December 15, 2023 00:12
@jp1ac4 (Collaborator, author) commented Dec 15, 2023

I've made a change so that only the function start_bitcoind_and_daemon is instrumented with the channel layer, which should mean that no log messages are sent to the channel otherwise. Furthermore, the channel is created as part of Loader::new() and so should be dropped once loading completes and the app starts.

I haven't yet managed to wrap the channel layer around the global default subscriber, which means that while start_bitcoind_and_daemon is executing, logs are not written to any other layers and are only displayed in the Liana GUI. Once that function completes, the usual logging resumes.
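
For illustration, attaching a subscriber to a single future can be done with tracing's WithSubscriber extension trait; a sketch assuming the hypothetical ChannelLayer from above (the zero-argument call to start_bitcoind_and_daemon is a placeholder). It also shows the limitation described here: the attached subscriber replaces, rather than wraps, the default one for the future's scope.

use tracing::instrument::WithSubscriber;
use tracing_subscriber::layer::SubscriberExt;

async fn load(sender: crossbeam_channel::Sender<String>) {
    // Events emitted while the future runs go to this subscriber
    // instead of the global default.
    let subscriber = tracing_subscriber::registry().with(ChannelLayer { sender });
    start_bitcoind_and_daemon()
        .with_subscriber(subscriber)
        .await;
}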

@jp1ac4 marked this pull request as draft on December 18, 2023 16:52
@jp1ac4 (Collaborator, author) commented Dec 18, 2023

I've set this back to draft for now as it needs further work to keep existing logging while the daemon is starting.

Development

Successfully merging this pull request may close issue #475: GUI: Daemon "info" level logs when starting it.