mtlwstlk
Established Member
Posts: 5
Joined: Tue Aug 20, 2019 8:06 pm

Re: A decent sound card seems to eliminate pops and clicks in the bridge between ALSA and JACK.

Post by mtlwstlk »

If you're doing streaming and want to shorten latencies over the network card, there are kernel parameters such as these
(to be used in sysctl.d settings):
net.core.busy_poll = 1
net.core.busy_read = 1
net.core.netdev_budget_usecs = 10000
net.ipv4.tcp_low_latency = 1
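
Drop those four lines into a file such as /etc/sysctl.d/90-net-lowlatency.conf (the filename is just an example) and apply them without a reboot:

sysctl --system

(Note: net.ipv4.tcp_low_latency has been a no-op since around kernel 4.14, so it only has an effect on older kernels.)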

If you're doing this -> "IRQ_FORCED_THREADING" (i.e. booting with threadirqs), then you should be pinning those IRQ threads to an isolated CPU core..

Here I do the CPU pinning manually with taskset, but it should also be possible with cgroup settings.

If you've got systemd running on your system, you can use this:

/etc/systemd/system.conf
"
[Manager]
CPUAffinity=1 2 3
"

user.conf inherits this value, I think.

So say you put all the IRQ threads onto CPU core 0 (taskset -apc 0 <pid of IRQ thread>; see the sketch below).

This gives more CPU time to user applications, which now default to CPU cores 1 to 3 and stay off core 0, which is getting all the noise from the IRQ threads. (IRQ threads are noisy and create jitter. However, keep some IRQ threads across all 4 cores, such as the one for video, as that takes a lot of resources.)
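
Here's a minimal sketch of that pinning loop, run as root. It assumes the forced IRQ threads show up as kernel threads named irq/<number>-<device>, which is what threadirqs gives you:

# pin every IRQ thread to core 0
for pid in $(pgrep '^irq/'); do
    taskset -apc 0 "$pid"
done

# put an exception such as the video card's IRQ thread back on all 4 cores;
# "131" is a hypothetical IRQ number here, look yours up in /proc/interrupts
taskset -apc 0-3 $(pgrep '^irq/131')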

This example only makes sense if you've got more than 2 cores. Otherwise the better thing is to stick with irqbalance and not bother with enabling IRQ threads at all, as it won't make much of a difference. Enabling IRQ threads only makes sense when you want to move those threads to an isolated core; that's their purpose..

Note: if you're pinning IRQ threads, be sure you're not running irqbalance, which would take those IRQ threads and place them back on your user cores 1-3..
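
If your distro ships irqbalance as a systemd service, switching it off is just:

systemctl disable --now irqbalance.service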

That's just the example system I'm using (a 4-core machine); if you've got more cores, you can list more of them in the system.conf setting.

There's another tip or two I can give, and that is to add these to your boot line:

skew_tick=1 acpi_irq_nobalance rcutree.jiffies_till_first_fqs=0
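
Assuming you boot with GRUB, these get appended to the GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub (together with threadirqs, if you're using forced IRQ threading):

GRUB_CMDLINE_LINUX_DEFAULT="threadirqs skew_tick=1 acpi_irq_nobalance rcutree.jiffies_till_first_fqs=0"

Then regenerate the config (update-grub on Debian/Ubuntu, or grub-mkconfig -o /boot/grub/grub.cfg) and reboot.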

The other thing going for systemd is that it's got extra scheduling options built into its settings...

You can set CPU weights and priorities, etc..

E.g., I've noticed that altering the weight of systemd-udevd.service has helped somewhat on workstations..

You can always allocate more CPU time with systemd unit files, and it doesn't always have to be "less CPU time" for a service; you can reserve more CPU time for the more important applications..

Here there is very little use in having systemd-udevd run at its default CPU weight of 100, so tone it down to "1" --
About the only thing udevd does is handle USB plug/unplug events and create device nodes in /dev, and nothing else..

A copy of /lib/systemd/system/systemd-udevd.service can be made at /etc/systemd/system/systemd-udevd.service,

What happens here is pinning the udevd service to just the first core of the system (core 0) and lowering its CPU weight:
"
CPUAffinity=0
Nice=19
CPUWeight=1
"
(Goes under "[Service]")
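
An alternative to copying the whole unit file is a drop-in override, which survives package updates better:

systemctl edit systemd-udevd.service

Paste the [Service] block above into the editor that opens (systemd saves it as /etc/systemd/system/systemd-udevd.service.d/override.conf), then restart the service:

systemctl restart systemd-udevd.service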

The default scheduling policy is "other", and it should remain at that.

I use these on all my workstations, and perhaps they may help in your case.

gl&hf