
[bug] UI jank when joining room, closing room, and idle in room #663

Open
willsmanley opened this issue Dec 18, 2024 · 7 comments

Comments

@willsmanley

Not sure how to get a better stack trace for what is causing the slow frames, but there seem to be some synchronous pieces of code, particularly when joining and leaving a WebRTC room. Even when idle in an active room, frames are really slow.

I don't know much more about it than being able to visibly see the on-screen animations freeze when the room is opened, but I'm considering opening an invisible WebView to handle the WebRTC room via the browser instead of relying on the Flutter implementation.

[Screenshot attached: 2024-12-18, showing the slow frames]

@cloudwebrtc have you noticed this before? Maybe it is flutter_webrtc related and not LiveKit, but I'm not sure.

@cloudwebrtc
Contributor

cloudwebrtc commented Dec 21, 2024

Hi @willsmanley, does this issue mainly occur on Flutter web?

@willsmanley
Author

I've only experienced this on a physical iOS device. I haven't tested web or Android yet.

I'm thinking about trying a hidden WebView to leverage browser-based WebRTC to see if that helps. That's how I got transparent video backgrounds working, since they're supported in Safari and Chrome but not natively in Flutter.

@willsmanley
Author

willsmanley commented Dec 22, 2024

Since this library does not render any widgets, shouldn't it be possible to run it in an isolate? I can help investigate if you have any pointers on where to start.

I tried this, but it seems to silently hang on .connect():

  _room = await Isolate.run(() async {
    final room = livekit.Room();
    await room.connect(
      url!,
      accessToken!,
      roomOptions: const livekit.RoomOptions(
        adaptiveStream: true,
        dynacast: true,
        defaultAudioCaptureOptions: livekit.AudioCaptureOptions(
          echoCancellation: true,
          autoGainControl: true,
          noiseSuppression: true,
        ),
      ),
    );
    return room;
  });

@willsmanley
Author

willsmanley commented Dec 22, 2024

Since the isolate strategy didn't work for me, I tried reimplementing this in a WebView, which actually worked well:

  1. Deploy this simple WebRTC relay page to a web-hosting service (doing this gives you HTTPS, which isn't feasible when serving it as a local HTML file):
<script src="https://cdn.jsdelivr.net/npm/livekit-client/dist/livekit-client.umd.min.js"></script>
<script>
  let room;
  
  async function startCall(token, wsURL) {
    room = new LivekitClient.Room();
    await room.connect(wsURL, token);
    console.log('connected to room', room.name);

    await room.localParticipant.setMicrophoneEnabled(true);

    room.on(LivekitClient.RoomEvent.TrackSubscribed, (track, publication, participant) => {
      console.log('subscribed to track', track);

      if (track.kind === LivekitClient.Track.Kind.Audio) {
        const audioEl = track.attach();
        document.body.appendChild(audioEl);

        audioEl.play().catch((error) => {
          console.warn('Could not autoplay audio:', error);
        });
      }
    });
  }

  async function stopCall() {
    if (room) {
      await room.localParticipant.setMicrophoneEnabled(false);
      await room.disconnect();
      room = null;
    }
  }
  
  async function requestMicrophone() {
      try {
          const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
          console.log('Microphone access granted', stream);
      } catch (error) {
          console.error('Microphone access denied:', error);
      }
  }

  window.onload = requestMicrophone;
</script>
  2. Then refactor your Dart widget to render an invisible in-app WebView pointing at the URL of the page you deployed above:
import 'package:flutter/material.dart';
import 'dart:async';
import 'dart:convert';
import 'package:flutter/cupertino.dart';
import 'package:http/http.dart' as http;
import 'package:permission_handler/permission_handler.dart';
import 'package:flutter_inappwebview/flutter_inappwebview.dart';

class VoiceService {
  String? accessToken;
  String? url;
  InAppWebViewController? webViewController;
  bool _initialized = false;

  void showMicrophonePermissionDialog() {
    showCupertinoDialog(
      context: navigatorKey.currentContext!, // assumes a global navigatorKey defined elsewhere in the app
      barrierDismissible: false,
      builder: (BuildContext context) {
        return CupertinoAlertDialog(
          title: const Text("Microphone Permission Required"),
          content: const Text(
              "Please enable microphone access in your settings to continue."),
          actions: [
            TextButton(
              child: const Text(
                "Go to Settings",
              ),
              onPressed: () {
                openAppSettings();
              },
            ),
          ],
        );
      },
    );
  }

  Future<void> getAccessTokenAndUrl() async {
    try {
      final response = await http.Client().post(
        Uri.parse('https://myurl.com/get-access-token-and-url'),
      );
      final responseData = jsonDecode(response.body);
      accessToken = responseData['accessToken'];
      url = responseData['url'];
    } catch (e) {
      print('Error getting accessToken and url to initialize agent call: $e');
    }
  }

  Future<void> stopCall() async {
    accessToken = null;
    url = null;
    _initialized = false;
    if (webViewController != null) {
      await webViewController!.evaluateJavascript(source: 'stopCall()');
      webViewController = null;
    }
    print('voice service cleared');
  }

  Future<void> startCall() async {
    // 1. Ensure mic permission is granted
    await ensureMicPermissionGranted();

    // 2. Set initialization variable so only one call is started at a time
    if (_initialized) {
      print('call already initialized, skipping');
      return;
    } else {
      _initialized = true;
    }

    // 3. Get access token and url
    print('getting access token');
    if (accessToken == null || url == null) {
      await getAccessTokenAndUrl();
    }
    if (accessToken == null || url == null) {
      print('failed to fetch access token and url, stopping call');
      stopCall();
      return;
    }

    // 4. Check if stopCall was called while fetching access token and url
    // this could be done by leaving the screen or other means
    if (_initialized == false) {
      print('stopCall was called while fetching access token and url, stopping call');
      stopCall();
      return;
    }

    // 5. Ensure webview controller was already set
    if (webViewController == null) {
      print('webview controller not set, stopping call');
      stopCall();
      return;
    }

    // 6. Join webrtc room in the webview
    try {
      await webViewController?.evaluateJavascript(
          source: 'startCall("${accessToken!}", "${url!}")');
    } catch (error) {
      print('Error starting call in webview: $error');
      await stopCall();
      return;
    }

    // 7. Check if stopcall was called while joining webrtc room
    if (_initialized == false) {
      stopCall();
      return;
    }
  }

  Future<void> ensureMicPermissionGranted() async {
    final status = await Permission.microphone.request();
    if (status != PermissionStatus.granted) {
      print('Microphone permission not granted');
      showMicrophonePermissionDialog();

      // If mic was not granted, check every second and then restart call once granted
      Timer.periodic(const Duration(seconds: 1), (timer) async {
        var status = await Permission.microphone.status;
        if (status == PermissionStatus.granted) {
          timer.cancel();
          startCall();
        }
      });
    }
  }
}

final voiceService = VoiceService();

class VoiceScreen extends StatefulWidget {
  final VoidCallback? onClose;

  const VoiceScreen({
    super.key,
    this.onClose,
  });

  @override
  VoiceScreenState createState() => VoiceScreenState();
}

class VoiceScreenState extends State<VoiceScreen> with WidgetsBindingObserver {
  @override
  void initState() {
    WidgetsBinding.instance.addObserver(this);
    super.initState();
  }

  @override
  void didChangeAppLifecycleState(AppLifecycleState state) {
    if (state == AppLifecycleState.resumed) {
      setState(() {});
    }
    super.didChangeAppLifecycleState(state);
  }

  @override
  void dispose() {
    WidgetsBinding.instance.removeObserver(this);
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(),
      body: Stack(
        children: [
          const Text('visible content goes here'),
          SizedBox(
            width: 0,
            height: 0,
            child: InAppWebView(
              onConsoleMessage:
                  (InAppWebViewController controller, ConsoleMessage message) {
                print('webview console message: ${message.message}');
              },
              initialUrlRequest:
                  URLRequest(url: WebUri('https://mywebrtcrelaypage.com/')),
              onPermissionRequest: (InAppWebViewController controller,
                  PermissionRequest request) async {
                return PermissionResponse(
                  action: PermissionResponseAction.GRANT,
                  resources: [PermissionResourceType.MICROPHONE],
                );
              },
              onWebViewCreated: (controller) {
                voiceService.webViewController = controller;
              },
              onLoadStop: (controller, url) async {
                await voiceService.startCall();
              },
            ),
          )
        ],
      ),
    );
  }
}
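One caveat with step 2 as written: interpolating `accessToken` and `url` directly into the `evaluateJavascript` string breaks (or allows injection) if either value ever contains quotes or backslashes. A sketch of a safer way to build the injected call (hypothetical helper name; JSON-encoding each argument yields a valid, fully escaped JS string literal):

```javascript
// Sketch: JSON.stringify produces a valid, fully escaped JS string literal,
// so quotes or backslashes in the token cannot break out of the call.
function buildStartCallSource(token, wsUrl) {
  return `startCall(${JSON.stringify(token)}, ${JSON.stringify(wsUrl)})`;
}
```

On the Dart side, the equivalent would be running the arguments through `jsonEncode` (from `dart:convert`) before interpolating them into the `evaluateJavascript` source.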

This seems to be more performant than using this package directly. Hopefully the performance issues can be fixed, but this is a pretty good workaround in the sense that the JS SDK seems very reliable. I don't have a use case for video, but this is totally fine for audio.

The only weird thing is that, unlike with this library, physical iOS devices play a "recording start" and "recording stop" sound when you enter or leave the room.

I would still like to do some more investigation within this library to determine whether the UI lag is coming from the LiveKit SDK layer or the flutter_webrtc layer.

@willsmanley
Author

Yep, I think I've conclusively determined this is on the LiveKit SDK side. I reimplemented everything with raw flutter_webrtc and don't see any jank issues.

@cloudwebrtc
Contributor

It may be because the connect/disconnect process is expanded into many awaited steps, causing the UI to wait for many asynchronous events to complete.
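That explanation is consistent with what a profiler would show: `await` yields to the event loop between steps, but any heavy synchronous work inside each step still runs on the UI thread. A minimal JavaScript illustration (the busy-wait stands in for per-step SDK work; this is not LiveKit code):

```javascript
// Each awaited step yields to the event loop between steps, but the
// synchronous portion of every step still blocks the thread while it runs.
function busyWait(ms) {
  const start = Date.now();
  while (Date.now() - start < ms) {} // stands in for synchronous SDK work
}

async function connectLike(stepDurationsMs) {
  for (const ms of stepDurationsMs) {
    busyWait(ms);            // blocks rendering for `ms`, despite being "async"
    await Promise.resolve(); // yield between steps
  }
  return stepDurationsMs.length;
}
```

So a connect path split across many awaits can still drop frames if the work between awaits is synchronous and expensive.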

@willsmanley
Author

That sounds right. I'm switching to raw flutter_webrtc for better control. I'm still trying to figure out the destroy/dispose-on-hot-reload issue; I filed a ticket there, which affects the LiveKit and Stream wrappers as well.
