[Swan-dev] [libreswan/libreswan] pluto segfault in nat_traversal (#367) (fwd)

Paul Wouters paul at nohats.ca
Tue Sep 15 16:52:35 UTC 2020

---------- Forwarded message ----------
Date: Tue, 15 Sep 2020 11:23:20
From: Daniel Wendler <notifications at github.com>
Cc: Subscribed <subscribed at noreply.github.com>
To: libreswan/libreswan <libreswan at noreply.github.com>
Subject: [libreswan/libreswan] pluto segfault in nat_traversal (#367)

Over the last weeks we upgraded our VPN gateways from v3.27 / v3.29 to mostly v3.31.
After the update we have problems with the NAT detection code, although we don't use NAT-T on most
of our IKEv1 tunnels:

#55534: EXPECTATION FAILED: i != NULL (in natify_initiator_endpoints() at nat_traversal.c:1082)

The tunnel couldn't be established, and we have to work around this with nat-ikev1-method=none; the
error message still appears, but the tunnel can then be established. (Conversely, we then have to set
encapsulation=yes on the tunnels where we do want to use NAT-T.)
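For reference, the workaround in ipsec.conf looks roughly like this (the conn names are placeholders, not our real configuration):

```
conn site-a-no-nat          # placeholder: IKEv1 tunnel without NAT between peers
    keyexchange=ikev1
    nat-ikev1-method=none   # suppress the failing IKEv1 NAT-T detection

conn site-b-natted          # placeholder: IKEv1 tunnel that does traverse NAT
    keyexchange=ikev1
    encapsulation=yes       # force UDP encapsulation explicitly
```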

On one of our Gateways we also see this message paired with another EXPECTATION FAILED:

2020-09-15T04:22:54+02:00 ... #9319: EXPECTATION FAILED: i != NULL (in natify_initiator_endpoints() at nat_traversal.c:1082)
2020-09-15T04:22:54+02:00 ... #9319: EXPECTATION FAILED: state #9319 is not an IKE SA (in pexpect_ike_sa() at state.c:516)
2020-09-15T04:22:54+02:00 ... #9319: EXPECTATION FAILED: state #9319 is not an IKE SA (in pexpect_ike_sa() at state.c:516)

After that it gets even worse, as pluto crashes:

2020-09-15T04:22:54+02:00 ... err ipsec__plutorun[5597]: !pluto failure!:  exited with error status 139 (signal 11)
2020-09-15T04:22:54+02:00 ... err ipsec__plutorun[5599]: restarting IPsec after pause...

We do have a core dump; taking a look into it, we got the following backtrace:

Reading symbols from OBJ.linux.amd64/programs/pluto/pluto...done.
[New LWP 30885]
[New LWP 30888]
[New LWP 30890]
[New LWP 30889]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `/usr/lib/ipsec/pluto --config /etc/ipsec.conf --nofork'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  update_pending (old_ike=0x0, new_ike=0x0) at ./programs/pluto/pending.c:376
376             pp = host_pair_first_pending(old_ike->sa.st_connection);
[Current thread is 1 (Thread 0x7f67498c1840 (LWP 30885))]
(gdb) bt
#0  update_pending (old_ike=0x0, new_ike=0x0) at ./programs/pluto/pending.c:376
#1  0x000056498c85c17b in v1_maybe_natify_initiator_endpoints (st=st@entry=0x56498ce6ce90, where=...) at ./programs/pluto/nat_traversal.c:986
#2  0x000056498c81cca2 in quick_outI1_tail (r=r@entry=0x56498d2beb38, st=st@entry=0x56498ce6ce90) at 
#3  0x000056498c81d12a in quick_outI1_continue (st=0x56498ce6ce90, mdp=0x7ffdb8744a30, r=0x56498d2beb38) at ./programs/pluto/ikev1_quick.c:616
#4  0x000056498c853e28 in pcr_completed (st=0x56498ce6ce90, mdp=0x7ffdb8744a30, task=0x56498d2beae0) at ./programs/pluto/pluto_crypt.c:676
#5  0x000056498c853291 in handle_helper_answer (st=0x56498ce6ce90, mdp=0x7ffdb8744a30, arg=0x56498d2beae0) at ./programs/pluto/pluto_crypt.c:660
#6  0x000056498c803f6d in resume_handler (fd=<optimized out>, events=<optimized out>, arg=0x7f67340cda60) at ./programs/pluto/server.c:827
#7  0x00007f6747b8f5a0 in event_base_loop () from /usr/lib/x86_64-linux-gnu/libevent-2.0.so.5
#8  0x000056498c807b3d in call_server (conffile=0x56498cc57110 "/etc/ipsec.conf") at ./programs/pluto
#9  0x000056498c7c2f7d in main (argc=<optimized out>, argv=<optimized out>) at ./programs/pluto
(gdb) quit

As this happens a few times a day, it hurts a little, as we get interruptions in production.

Maybe a quick fix could be the following (we compiled it, but don't yet have proof that it works):

--- pending.c	2020-03-03 20:48:13.000000000 +0100
+++ pending.c.new	2020-09-15 17:09:14.006863409 +0200
@@ -373,6 +373,8 @@
  	struct pending *p, **pp;

+	if (old_ike == NULL)
+		return;
  	pp = host_pair_first_pending(old_ike->sa.st_connection);
  	if (pp == NULL)

