These notes are a mixture of guidelines, configuration details and setup
recommendations for using Jack Audio Server on
FreeBSD. Most of it is based on my experiments and
findings while doing a rewrite of the Jack driver backend for FreeBSD. Since
then I’ve also become involved with development of the FreeBSD kernel audio
subsystem.
The hardware and system requirements vary greatly depending on the use case. It makes sense to understand the different scenarios before diving into the details of settings and configuration.
Playback is the simplest use case, given that there is no need to synchronize the output with external events. Latency settings can be relaxed: large buffers and longer processing cycles keep the timing constraints and the CPU load of the system at a minimum. If you do have to synchronize with external events, see the section on live processing.
Usually recording also includes playback, e.g. to capture a guitar playing along a previously recorded drum track. Naturally you want the timing of the guitar track to match the drums. There is no need to go low latency, since audio software can compensate for the latency while recording. But it is essential to have a known, stable latency, both across multiple takes and between recording sessions.
Note: While it's technically possible to use different devices for recording and playback, this is not recommended. There will be problems when the devices are not synchronized and run at a slightly different speed.
Obviously you want to hear your instrument playing while recording. It is highly preferable to circumvent the software stack altogether and have an external monitoring solution in hardware. This could be a complete mixing desk or just some direct monitoring knobs on your audio interface. Going through the software stack will introduce noticeable latency.
If you have to do monitoring through Jack, use extra low latency settings. It may help to run Jack with realtime priority and setup the monitoring directly as connections on the Jack server, bypassing other Jack clients.
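As an illustration, a direct monitor connection from the first capture channel to both playback channels can be made with the jack_connect utility. The port names below are the usual defaults of the oss driver and may differ on your system:

# jack_connect system:capture_1 system:playback_1
# jack_connect system:capture_1 system:playback_2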
Live processing covers scenarios where low latency is crucial, like adding plugin effects to a live performance, emulating guitar amps, or playing a software synthesizer. Hardware, driver and applications all have to work together under very tight timing constraints. This is the most demanding use case.
Improvements in the FreeBSD audio stack and the upcoming Jack 1.9.23 release should make this feasible in the near future. In my experiments, I did get roundtrip latency down to around 5-6ms, which is ok for many live processing tasks. But this is only possible with some specific hardware and tuning; don't expect it to just work.
There seem to be only a few drivers geared towards audio interfaces for music creation and recording. The other pcm drivers (man sound) do work with Jack, but a Sound Blaster or internal sound card may have problems recording a guitar or driving your studio headphones.
- snd_uaudio: USB audio interfaces, most class compliant devices should be supported.
- snd_hdspe: RME HDSPe AIO and HDSPe RayDAT, professional PCIe sound cards from RME.
- snd_hdsp (coming in 15.0-RELEASE): RME HDSP 9632 and HDSP 9652, professional PCI (not PCIe) sound cards from RME.
The RME cards work really well, with very low latency if desired. They are expensive though, even the used ones often carry a hefty price tag for a sound card. With an old PCI slot (or a compatible adapter) available, a used HDSP 9632 or HDSP 9652 may be an affordable option for a high quality audio interface.
USB audio interfaces are more common, inexpensive, and come in many varieties suitable for different setups. The snd_uaudio driver implements the protocol for generic USB audio class devices, so look out for a Class Compliant device.
Tip: Class Compliant devices are often marketed as being "iPad compatible".
There are limitations with USB audio devices though:
- Some are not completely Class Compliant, only tested on Windows and Mac.
- Some need USB quirks to get them running.
- High sample rates with lots of channels may not be supported.
- Hardware mixing and routing is typically not accessible in software.
If your hardware is recognized correctly, it appears in the system logs as a pcm device:
# dmesg | grep pcm
pcm0: <ATI R6xx (HDMI)> at nid 3 on hdaa0
pcm1: <Realtek ALC262 (Analog)> at nid 21 and 24,25,26 on hdaa1
pcm2: <Realtek ALC262 (Front Analog Headphones)> at nid 27 on hdaa1
pcm3: <USB audio> on uaudio0
and also by device driver:
# dmesg | grep uaudio
uaudio0 on uhub2
uaudio0: <Roland EDIROL UA-25EX, rev 1.10/1.00, addr 2> on usbus4
uaudio0: Play[0]: 48000 Hz, 2 ch, 24-bit S-LE PCM format, 2x2ms buffer.
uaudio0: Record[0]: 48000 Hz, 2 ch, 24-bit S-LE PCM format, 2x2ms buffer.
uaudio0: MIDI sequencer.
pcm3: <USB audio> on uaudio0
uaudio0: No HID volume keys found.
Otherwise check the Handbook on how to set up your sound card. If unsure, try to load all available sound drivers through
# kldload snd_driver
and see if the device appears in the logs.
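Recognized devices can also be listed by reading /dev/sndstat, which marks the current default device:

# cat /dev/sndstat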
Most drivers don’t need any special treatment and work just fine. They are loaded at boot time or when the device is attached. Some drivers for less common hardware have to be loaded explicitly, like the RME cards:
/boot/loader.conf
snd_hdspe_load="YES"
Another reason to load a driver explicitly is to apply some driver specific settings. Consult the driver’s man page for what settings are available.
USB audio is a bit special due to the variety of hardware and supported formats. Some important parameters can be set prior to attaching the device (man snd_uaudio). In case the USB audio device is permanently connected, this has to be done at boot time:
/boot/loader.conf
snd_uaudio_load="YES"                                           # (1)
hw.usb.uaudio.default_channels="2"                              # (2)
hw.usb.uaudio.default_bits="24"
hw.usb.uaudio.default_rate="48000"
hw.usb.quirk.0="0x0a4a 0xc150 0x0000 0xffff UQ_CFG_INDEX_1"     # (3)
hw.usb.quirk.1="0x0582 0x00e6 0x0000 0xffff UQ_AU_VENDOR_CLASS"
(1) Force loading the driver, prerequisite for other settings.
(2) Default number of channels, sample size and sample rate.
(3) Quirks to make some incompatible devices work.
If a USB device supports multiple configurations, the driver will choose the "best" one. You can make it prefer a different channel count, sample size and sample rate by setting the defaults here. Usually the sample rate can be changed at runtime, but the other settings are fixed when the device is attached.
Caution: Only set defaults here when really needed. They are applied to all USB audio devices, even to those they were not intended for.
Quirks are needed when devices don't adhere to standards and only work with some special treatment. See man usb_quirk.
These are system-wide settings to manage sound devices. Sound devices are numbered for identification, with an unnumbered alias /dev/dsp which represents the default device.
/etc/sysctl.conf
hw.snd.verbose=2        # (1)
hw.snd.default_auto=0   # (2)
hw.snd.default_unit=1   # (3)
(1) Get more info from /dev/sndstat, recommended!
(2) Automatically assign the default sound device /dev/dsp.
(3) Manually set the default sound device /dev/dsp.
See man sound
for more details and possible values. The default sound device is
picked up by desktop environments and other software like browsers. I prefer to
set it to some internal sound card, and not to my main audio interface, to
avoid conflicts.
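To double-check which device is currently the default, the sysctl can also be queried at runtime:

# sysctl hw.snd.default_unit
hw.snd.default_unit: 1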
Warning: Order and numbering of sound devices is not fixed and may change on reboot if new hardware is added.
One important latency factor is the number of samples that the device driver processes at once. For USB devices this can be set at boot time:
/boot/loader.conf
hw.usb.uaudio.buffer_ms="2"
I highly recommend using the minimum value here, which is 2 milliseconds of sample data. Apart from reducing the transfer latency, it also has another effect: even if Jack processes a larger block of samples per cycle, this smooths out the cycle times.
Note: In 14.1-RELEASE, the minimum buffer_ms value was reduced to 1ms for USB audio devices. This is recommended for low latency setups.
The redesigned driver backend of the upcoming Jack 1.9.23 will be largely independent of the hardware processing cycles. Small buffer_ms values are then only relevant for low latency use cases.
For non-USB devices have a look at the corresponding man page. If the driver provides no dedicated knobs, it may be worth a try to lower the generic sound latency tunables (man sound):
/etc/sysctl.conf
hw.snd.latency=0
hw.snd.latency_profile=0
These mainly affect the buffering latency, which is irrelevant to Jack. But some device drivers adapt to these tunables and process smaller blocks of samples at once.
Although not directly involved, timing accuracy also plays a role with latency. Inaccurate timer wakeups contribute to buffer over- and underruns, especially with low-latency setups. The following increases overall timing accuracy of the system and is recommended for all use cases:
/etc/sysctl.conf
kern.timecounter.alloweddeviation=0
The downside of short processing cycles and timing accuracy is more frequent system wakeups, which translates to higher power and battery consumption on laptops.
Sound devices on FreeBSD accept various sample formats, sample rates and channel configurations. They also support concurrent access with separate volume control per application. This is achieved by an automatic conversion stage in front of the hardware driver, dynamically composed of format conversion, channel mixing and volume control stages as needed.
Conversion stages show up in cat /dev/sndstat as feeder_format, feeder_mixer or feeder_volume:
...
pcm3: <USB audio> at ? kld snd_uaudio (1p:1v/1r:1v) default
        snddev flags=0x2e6<AUTOVCHAN,SOFTPCMVOL,BUSY,MPSAFE,REGISTERED,VPC>
        [pcm3:play:dsp3.p0]: spd 48000, fmt 0x00200010/0x00210000, flags 0x00002100, 0x00000006
        interrupts 1053, underruns 0, feed 1052, ready 0 [b:4608/2304/2|bs:4096/2048/2]
        channel flags=0x2100<BUSY,HAS_VCHAN>
        {userland} -> feeder_mixer(0x00200010) -> feeder_format(0x00200010 -> 0x00210000) -> {hardware}
...
Note: Automatic conversion is not applied to audio interfaces with more than 8 channels.
While very convenient in general, this behaviour has some drawbacks when using Jack. The conversion stages introduce noticeable latency and irregular processing cycles, and hinder buffer management by reporting incorrect buffer statistics.
There are two solutions, depending on how the audio interface is used.
- Bitperfect Mode: Completely disable conversion and concurrent access. This makes sense if Jack is the only program to open the device.

  /etc/sysctl.conf
  dev.pcm.3.play.vchans=0
  dev.pcm.3.rec.vchans=0
  dev.pcm.3.bitperfect=1
- Exclusive Mode: Configure Jack to open the device in exclusive mode (see Jack configuration). Make sure the device is not used by any other program at the same time. Also we have to set the sample rate and format of the device to match exactly what we want to use with Jack.

  /etc/sysctl.conf
  dev.pcm.3.play.vchanformat=s24le:2.0
  dev.pcm.3.play.vchanrate=48000
  dev.pcm.3.rec.vchanformat=s24le:2.0
  dev.pcm.3.rec.vchanrate=48000
When running Jack, we can check cat /dev/sndstat again to make sure there is no conversion going on - there should be only feeder_root between userland and hardware:
...
        pcm3:play:dsp3.p0[pcm3:virtual:dsp3.vp0]: spd 48000, fmt 0x00210000, flags 0xb000010c, 0x00000001, pid 1779 (jackdbus)
        interrupts 0, underruns 0, feed 2467, ready 8928 [b:0/0/0|bs:16368/2046/8]
        channel flags=0xb000010c<RUNNING,TRIGGERED,BUSY,VIRTUAL,BITPERFECT,EXCLUSIVE>
        {userland} -> feeder_root(0x00210000) -> {hardware}
...
Jack tries to lock part of its memory in RAM, to prevent it from being swapped out by the operating system. We have to explicitly allow this in /etc/login.conf, for the users that want to run Jack. For simplicity I just change the resource limit of the default login class: search for "memorylocked" and increase it to at least
...
        :memorylocked=128M:\
...
This should be sufficient for Jack. Remember to run
# cap_mkdb /etc/login.conf
afterwards, and then log out and log in again with the user running Jack.
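To verify that the new limit is in effect, check the locked memory limit from a fresh login shell of that user, e.g. with ulimit in sh. It should report at least 131072, since the value is given in kilobytes and 128M corresponds to 131072:

% sh -c 'ulimit -l'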
The FreeBSD scheduler is able to run processes with so-called realtime priority, which means these processes will not be preempted by ordinary time-sharing processes. Running Jack with realtime priority can help a great deal to avoid gaps in audio processing, in particular with modern desktop environments.
Traditionally, only root was allowed to run processes at realtime priority.
Starting with FreeBSD 13.1, this privilege can be granted to individual users.
We have to load the mac_priority
kernel module, through
# kldload mac_priority
or at system boot for a permanent setup:
/etc/rc.conf
kld_list="mac_priority"
Then we just add the audio user to the realtime
group.
# pw groupmod realtime -m joe
The man pages have more info on this, see man mac_priority and man rtprio.
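To confirm the privileges are in place, check that the user now appears in the group's member list:

# pw groupshow realtime
# id joe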
Warning: Misbehaving processes running at realtime priority can render a system unusable by starving all other processes. Only selected processes or threads should be run at realtime priority.
Fortunately, Jack and Jack clients take care of this and only elevate threads to realtime priority when needed. This can be enabled in the Jack settings.
Obviously we have to install Jack from packages (pkg install jackit) or ports (audio/jack) first. Some USB devices provide MIDI ports; they can be made accessible via audio/jack_umidi. To make any use of Jack we probably need additional software like:
- Jack GUI: Both audio/qjackctl and audio/cadence are GUI utilities for managing a Jack server and its audio connections in a graphical way.
  Warning: The Jack settings produced by the Cadence GUI are incompatible with FreeBSD. QjackCtl creates valid Jack configurations with the oss driver, but doesn't support all settings.

- DAW: I'd suggest audio/ardour for full-blown projects, but there are some alternatives with different scope and varying state of completeness, like audio/muse-sequencer, audio/traverso or audio/zrythm.

- Plugins: Search for package names that end in "-lv2", there's plenty of 'em. E.g. audio/calf-lv2 and audio/lsp-plugins-lv2 will cover the basics, there's audio/guitarix-lv2 for guitar effects, and audio/avldrums-lv2 is a decent MIDI drum set.

- Synthesizers and Samplers: I have no experience with standalone synthesizers, but searching the packages for "synth" will give you some options. The same goes for samplers; I only use audio/hydrogen to prototype MIDI drum tracks.
Tip: A quick and easy way to test Jack sound output is to start Hydrogen and open one of the demos coming with it.
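As a starting point, a basic selection of the packages mentioned above can be installed in one go. The package names correspond to the ports listed; adjust the selection to your needs:

# pkg install jackit qjackctl ardour calf-lv2 lsp-plugins-lv2 hydrogen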
Starting the Jack server via DBUS is the "modern" approach and most current Jack clients assume that to be the default. Usually a DBUS service is required to run desktop environments anyway; it is enabled in rc.conf:
dbus_enable="YES"
Then you should be able to configure and run Jack through the jack_control command line interface (see Jack Settings). The DBUS approach is quite flexible. It is common for audio software to start a Jack server on demand if it's not already running.
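Besides changing settings, jack_control also provides subcommands to start, check and stop the server from the command line:

# jack_control start
# jack_control status
# jack_control stop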
The alternative is to start Jack server by an RC service which can be enabled at boot time:
/etc/rc.conf
jackd_enable="YES"   # (1)
jackd_user="joe"     # (2)
jackd_args="--realtime -d oss --rate 48000 --period 1024 --device /dev/dsp0"   # (3)
(1) Enable Jack RC service at boot time, set "NO" to start it manually.
(2) The user for which the Jack server is started.
(3) Command line arguments to start Jack server with, see man jackd.
If you want Jack server and clients to run with realtime priority, set the --realtime command line argument and grant the Jack user realtime privileges.
Note: The jackd_rtprio variable in rc.conf was used to start Jack server with realtime priority, regardless of the user's privileges. This is not supported anymore, due to problems with Jack clients.
The Jack RC service can be convenient for some rather static setups, but it is restricted to a single audio software user. Keeping Jack permanently running in the background is also useful when running other sound servers on top. PulseAudio for example can integrate Jack as an audio sink and source.
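As a rough sketch of such a setup, assuming PulseAudio was built with JACK support, its Jack sink and source modules can be loaded while the Jack server is running:

# pactl load-module module-jack-sink
# pactl load-module module-jack-source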
Caution: Mixing the DBUS and RC service methods can be problematic, make sure you only run one instance of Jack server at a time.
Let's focus on the DBUS approach here. Don't worry, the RC service takes the same setting parameters, just as command line arguments. The idea of the DBUS control interface is to change the current settings as needed, and then start the Jack server with these settings. Settings are persistently stored in ~/.config/jack/conf.xml.
# jack_control help
prints an overview of the subcommands of the control interface. Before anything else we have to set the driver backend, OSS in our case.
# jack_control ds oss
Then we can examine the driver specific settings.
# jack_control dp
Individual driver parameters can be modified as follows:
# jack_control dps rate 48000
Relevant parameters for the OSS driver backend are:

- rate: Sample rate used by the audio interface, like 44100, 48000, 96000.

- period: Length of a Jack processing cycle in samples (per channel). The duration depends on the sample rate, e.g. a period of 384 at 48kHz results in 384 / 48000 = 8ms. A lower value means less overall latency, but also more risk of playback and recording gaps.
  Tip: I recommend using a multiple of what the device driver processes at once. With 96 (2ms) for a USB device at 48kHz, a period of 192 (4ms), 384 (8ms) or 768 (16ms) would be feasible. If unsure, search the logs for "read blocks" and "write blocks" - Jack tries to detect the block size processed by the driver when opening the device.
  Warning: A few Jack clients expect the period to be a power of two (256, 512, 1024…), but most software and plugins do not rely on that. Some processing methods require the period to be a multiple of 16 though.

- nperiods: Additional output buffer, in number of periods. Increasing this value by one will increase playback latency by one period. With the default value of 1, the buffer-induced latency stays between 0 and 1 period for input, and between 1 and 2 periods for output in normal operation. Also Jack processing can be 1 period late before playback and recording gaps occur.
  Tip: The default of 1 extra period works well in most setups, there's rarely any need to change this.

- wordlength: Sample size in bits (16, 24, 32).

- inchannels: Number of recording channels of the sound device.

- outchannels: Number of playback channels of the sound device.

- excl: Exclusive access, no other application can use the sound device while Jack is running. Recommended!

- capture: Path to the recording device, /dev/dsp2 for example. Reset the playback and device settings to open the device in recording only mode.

- playback: Path to the playback device. If in duplex mode, the device must represent the same audio interface as capture, or you will likely run into synchronization problems. Reset the capture and device settings to open the device in playback only mode.

- device: Alternative to set the same device path for both recording and playback. Do not combine with capture and playback settings.

- input-latency: External recording latency in samples, includes the whole path from analog input through the audio interface to the device driver. This is used for latency correction by Jack clients.

- output-latency: Same as input-latency, but for the whole path from device driver to analog output. Also used for latency correction.
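Putting some of these parameters together, a duplex stereo setup on a hypothetical /dev/dsp3 with exclusive access could look like this (device path and values are examples, adjust them to your interface):

# jack_control dps device /dev/dsp3
# jack_control dps excl True
# jack_control dps wordlength 24
# jack_control dps inchannels 2
# jack_control dps outchannels 2
# jack_control dps period 384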
The engine parameters affect the general behaviour of Jack, most of them should be left untouched. Available parameters can be listed with
# jack_control ep
- realtime: Let Jack and Jack clients elevate select threads to be scheduled with realtime priority. Set this to False if the user does not have realtime privileges, see Realtime Priority.

- verbose: This can be temporarily enabled to help with debugging. Jack will spit out a lot of details into the logs, turn it off for normal operation.

- sync: By default, Jack operates in what it calls asynchronous mode. A processing cycle fetches a period of capture samples first, then outputs the playback samples processed in the previous cycle, and finally lets the clients process the samples of the current cycle. Jack will not wait for all clients to finish, and proceeds with the next cycle when time is up.
  If synchronous mode is enabled here, Jack will output the playback samples processed in the current cycle, bypassing the extra buffer with one period of latency. This comes at the cost of strictly waiting for all clients to finish, thus clients and plugins that misbehave may quickly render Jack operation unstable.
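Engine parameters are changed the same way as driver parameters, just with the eps subcommand. For example, to enable realtime scheduling and turn off verbose logging:

# jack_control eps realtime True
# jack_control eps verbose False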
Sometimes you want to switch between different audio interfaces, or even just different configurations of the same interface. Since GUI tools like QjackCtl and Cadence can’t handle Jack settings correctly on FreeBSD, we have to resort to shell scripts:
#!/bin/sh
# Jack config for the Focusrite 18i20
jack_control dps rate 48000
jack_control dps period 768
jack_control dps wordlength 32
jack_control dps inchannels 18
jack_control dps outchannels 20
jack_control dps input-latency 80
jack_control dps output-latency 80
The Jack DBUS server stores its settings persistently, which means we can adjust only the settings that change from interface to interface, and leave the others untouched. If the device numbers change, symbolic links may be used to create a device alias on the fly.
# ln -s /dev/dsp3 /dev/dsp_jack
If started as an RC service, the settings have to be changed in /etc/rc.conf before doing a restart of the service.
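For example, the arguments can be adjusted with sysrc and the service restarted afterwards. This assumes the rc script installed by the Jack package is named jackd:

# sysrc jackd_args="--realtime -d oss --rate 48000 --period 512 --device /dev/dsp_jack"
# service jackd restart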
As mentioned previously, software like Ardour can compensate for the playback and recording latency of a system when recording. This not only involves the internal buffer and driver delays, but also the bus communication (PCI, USB), and conversion between analog and digital on the audio interface.
Even with numbers from the interface manufacturer, it is difficult to foretell
the actual latency of a specific setup. So our goal is to measure the latency
of our setup. The easiest way to do this is to create a physical loopback
connection on the audio interface, connecting a playback output back to a
recording input. Then we can measure the roundtrip latency as seen by Jack
clients, using the jack_iodelay
utility.
The idea is to model the recording process, and use a loopback cable in place of the musician. Let’s say you are listening to a drum track on headphones, and recording a guitar performance to that. Then you would use the loopback cable to connect the headphone output to the guitar recording input, for latency measurement. In principle this should also include speaker and effect delays, but that’s usually not practicable.
- Turn down the volume on the audio interface, where available.
- Connect the loopback cable to your playback output or a similar (analog) output.
- Connect the other end to your recording input or a similar (analog) input.
- Make sure the output is compatible with the input, there are different impedances (guitars, microphones) and line levels.
- Find out which channels these are mapped to in Jack.
Warning: Double-check that your audio interface does not monitor the loopback input on the output - you don't want to create a feedback loop on the interface!
If direct monitoring can't be disabled on the audio interface, there's the possibility to use a left stereo channel as output and a right channel as input, given the monitoring is in stereo, not mono.
With the loopback connection in place, we can start the Jack server and do the measurement. Set the latency correction parameters, input-latency and output-latency, to 0. Make sure the other settings are the same as you would use for recording. A change of the sample rate for example will produce a different latency.
- Start the Jack server.
- Open a graphical Jack connection manager, like QJackCtl or Catia.
- Run jack_iodelay in a separate terminal, Jack connectors will appear.
- Connect the output of jack_iodelay to the loopback playback channel.
- Connect the loopback recording channel to the input of jack_iodelay.
- Turn up the volume on the audio interface, if available.
Now jack_iodelay should print a series of measured latencies, with corresponding correction parameters for Jack.
...
2157.119 frames   44.940 ms total roundtrip latency
        extra loopback latency: 1005 frames
        use 502 for the backend arguments -I and -O
...
Note the correction parameter values, "backend arguments -I and -O" corresponds to the input-latency and output-latency settings. Then disconnect jack_iodelay in Jack and stop it in the terminal by pressing Ctrl + C.
The correction parameters printed will split the total roundtrip latency equally between input and output. Actual latencies may be asymmetric, but that doesn’t matter for recording.
For comparison, in my measurements OSS and USB audio hardware add about 20ms of roundtrip latency. That results in correction parameters of around 480 samples (10ms each) when running at 48kHz. If you get unusually high values, or even unstable values, there may be something wrong with the measurement or your setup.
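Taking the example output above, the measured values would then be applied to the OSS driver backend before the next recording session (substitute your own numbers, of course):

# jack_control dps input-latency 502
# jack_control dps output-latency 502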
Note: Up until Jack 1.9.21, the minimum internal latency Jack computes is off by one period. This means the measuring method still leads to valid correction parameters, but does not reflect the actual roundtrip latency of OSS and hardware. A fix is underway.
After setting the latency correction parameters in Jack, recordings should match the timing of existing tracks within about 1-2 ms.