[Swan] Libreswan Performance
Craig Marker
cmarker at inspeednetworks.com
Tue Mar 28 20:10:22 UTC 2017
I am running some iperf3 bandwidth tests and noticing poor performance on Libreswan v3.19. When I ran similar tests on v3.15, I didn’t run into these performance issues. On v3.15
I would see bandwidth around 700Mbps, whereas now the max I’ve seen is 300Mbps. The process ‘ksoftirqd’ appears to be hogging the CPU. I’ve tried playing around with Linux IRQ
settings, as well as TX and RX queue lengths and CPU affinity, with no luck. I’m wondering if anyone has experienced this and, if so, how they handled it.
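Since ksoftirqd is pegged, it may help to confirm which softirq class is actually eating the CPU before tuning further. A minimal sketch of what I mean (the interface name eth0 and the IRQ number are placeholders, not taken from your setup):

```shell
#!/bin/sh
# Sample the per-CPU softirq counters twice; whichever column grows
# fastest on the pegged CPU (here CPU 1) tells you whether NET_RX,
# NET_TX, or something else is keeping ksoftirqd busy.
grep -E 'NET_RX|NET_TX' /proc/softirqs
sleep 1
grep -E 'NET_RX|NET_TX' /proc/softirqs

# To move NIC interrupts off CPU 1 (IRQ 25 is hypothetical -- find the
# real one with: grep eth0 /proc/interrupts):
#   echo 2 > /proc/irq/25/smp_affinity

# Receive Packet Steering can fan softirq work out to other CPUs
# (eth0 and the CPU mask "f" are placeholders):
#   echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus
```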
The iperf3 client is on one end of the IPsec tunnel’s local network. The iperf3 server is on the other end’s local network. This is a representation of my setup:
iperf3 client —> tunnel endpoint A —> tunnel endpoint B —> iperf3 server
10.200.1.1       10.200.1.210 /       10.200.0.92 /       10.200.2.11
                 10.200.0.210         10.200.2.2
The iperf3 client command I’m running is iperf3 -c 10.200.2.11 -p 54321 -t 120 -R, and the iperf3 server command is iperf3 -s -p 54321. I’ve also tried running
multiple iperf3 client/server pairs at once, and the performance results are about the same.
Here’s the configuration for tunnel endpoint B:
config setup
    dumpdir=/var/run/pluto/
    virtual-private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12,%v4:25.0.0.0/8,%v4:100.64.0.0/10,%v6:fd00::/8,%v6:fe80::/10
    protostack=netkey

# begin conn local
conn local
    left=10.200.0.210
    leftid="@client"
    leftsubnet=10.200.1.0/24
    rightid="@is1"
    rightsubnet=10.200.2.0/24
    rightcert=server
    right=10.200.0.92
    authby=rsasig
    auto=ignore
    type=tunnel
    compress=no
    pfs=yes
    ikepad=yes
    phase2=esp
    ikev2=permit
    esn=no
# end conn local
Here’s the configuration for tunnel endpoint A:
config setup
    dumpdir=/var/run/pluto/
    virtual-private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12,%v4:25.0.0.0/8,%v4:100.64.0.0/10,%v6:fd00::/8,%v6:fe80::/10
    protostack=netkey

# begin conn local
conn local
    left=10.200.0.210
    leftid="@client"
    leftsubnet=10.200.1.0/24
    leftcert=client
    rightid="@is1"
    rightsubnet=10.200.2.0/24
    right=10.200.0.92
    authby=rsasig
    auto=ignore
    type=tunnel
    compress=no
    pfs=yes
    ikepad=yes
    phase2=esp
    ikev2=permit
    esn=no
# end conn local
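Neither conn pins esp= or ike=, so one thing worth checking between 3.15 and 3.19 is which ESP algorithm actually gets negotiated; a change in defaults, or a fall back from AES-NI to generic software AES, would show up exactly as ksoftirqd burning a core. A sketch of what to compare on both installs (the xfrm dump needs root, so errors are suppressed here):

```shell
#!/bin/sh
# Dump the kernel's IPsec SAs to see the negotiated ESP algorithms;
# compare this output between the 3.15 and 3.19 boxes.
# (Needs CAP_NET_ADMIN; errors suppressed so the sketch still runs.)
ip xfrm state 2>/dev/null | grep -E 'aead|enc |auth ' || true

# Confirm the CPU advertises AES-NI so ESP crypto is hardware-assisted:
grep -m1 -o 'aes' /proc/cpuinfo || echo "no aes flag in cpuinfo"
```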
This is what I’m seeing when I run top on tunnel endpoint A:
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
   13 root      20   0       0      0      0 R  99.7  0.0  24:32.94 ksoftirqd/1
  434 root      20   0       0      0      0 S   0.3  0.0   0:08.04 xfsaild/dm-1
  913 root       0 -20       0      0      0 S   0.3  0.0   0:00.39 kworker/1:1H
 9683 root      20   0       0      0      0 S   0.3  0.0   0:03.94 kworker/0:3
21067 root      20   0       0      0      0 S   0.3  0.0   0:01.26 kworker/1:0
25264 cmarker   20   0  157696   2216   1556 R   0.3  0.0   0:00.76 top
    1 root      20   0   46168   6684   3956 S   0.0  0.1   0:10.36 systemd
    2 root      20   0       0      0      0 S   0.0  0.0   0:00.02 kthreadd
Let me know if any other information would be useful, and I’ll do my best to provide it!
Thanks!
--
cm