[Swan] IPv6 host-to-host using klips

Erik Andersson erik at ingate.com
Fri Oct 9 16:55:24 UTC 2015


Hi,

Running libreswan 3.15 on CentOS 7. I'm trying to set up a host-to-host 
tunnel between two IPv6 endpoints on the same subnet, using the 
following configuration:

config setup
     protostack=klips
     interfaces="ipsec0=eth0"

conn mytunnel
     left=2001:470:dc8c:1000::28:60
     right=2001:470:dc8c:1000::28:70
     connaddrfamily=ipv6
     authby=secret
     auto=add

conn v6neighbor-hole-in
     left=::1
     leftsubnet=::0/0
     leftprotoport=58/34560
     rightprotoport=58/34816
     rightsubnet=::0/0
     right=::0
     connaddrfamily=ipv6
     authby=never
     type=passthrough
     auto=route
     priority=1

conn v6neighbor-hole-out
     left=::1
     leftsubnet=::0/0
     leftprotoport=58/34816
     rightprotoport=58/34560
     rightsubnet=::0/0
     right=::0
     connaddrfamily=ipv6
     authby=never
     type=passthrough
     auto=route
     priority=1
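
(The two passthrough conns are there so IPv6 neighbor discovery keeps 
working underneath the IPsec policy: protocol 58 is ICMPv6, and as I 
understand it pluto encodes the ICMPv6 message type in the high byte of 
the protoport's port field, so 135 * 256 = 34560 matches neighbor 
solicitations and 136 * 256 = 34816 matches neighbor advertisements.)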

When I try to bring up the tunnel I get the following output:

On host 28:60:

[root@vpn-f1 ~]# ipsec auto --up mytunnel
002 "mytunnel" #1: initiating Main Mode
104 "mytunnel" #1: STATE_MAIN_I1: initiate
003 "mytunnel" #1: received Vendor ID payload [Dead Peer Detection]
003 "mytunnel" #1: received Vendor ID payload [FRAGMENTATION]
003 "mytunnel" #1: received Vendor ID payload [RFC 3947]
002 "mytunnel" #1: enabling possible NAT-traversal with method RFC 3947 
(NAT-Traversal)
002 "mytunnel" #1: transition from state STATE_MAIN_I1 to state 
STATE_MAIN_I2
106 "mytunnel" #1: STATE_MAIN_I2: sent MI2, expecting MR2
003 "mytunnel" #1: NAT-Traversal: Result using RFC 3947 (NAT-Traversal) 
sender port 500: no NAT detected
002 "mytunnel" #1: transition from state STATE_MAIN_I2 to state 
STATE_MAIN_I3
108 "mytunnel" #1: STATE_MAIN_I3: sent MI3, expecting MR3
003 "mytunnel" #1: received Vendor ID payload [CAN-IKEv2]
002 "mytunnel" #1: Main mode peer ID is ID_IPV6_ADDR: 
'2001:470:dc8c:1000::28:70'
002 "mytunnel" #1: transition from state STATE_MAIN_I3 to state 
STATE_MAIN_I4
004 "mytunnel" #1: STATE_MAIN_I4: ISAKMP SA established 
{auth=PRESHARED_KEY cipher=aes_256 integ=sha group=MODP2048}
002 "mytunnel" #2: initiating Quick Mode 
PSK+ENCRYPT+TUNNEL+PFS+UP+IKEV1_ALLOW+IKEV2_ALLOW+SAREF_TRACK+IKE_FRAG_ALLOW 
{using isakmp#1 msgid:c4b5581b proposal=defaults 
pfsgroup=OAKLEY_GROUP_MODP2048}
117 "mytunnel" #2: STATE_QUICK_I1: initiate
002 "mytunnel" #2: transition from state STATE_QUICK_I1 to state 
STATE_QUICK_I2
004 "mytunnel" #2: STATE_QUICK_I2: sent QI2, IPsec SA established tunnel 
mode {ESP=>0x19bf43c9 <0x291c2985 xfrm=AES_128-HMAC_SHA1 NATOA=none 
NATD=none DPD=passive}

On host 28:70:

[root@vpn-f1 ~]# ipsec auto --up mytunnel
002 "mytunnel" #3: initiating Quick Mode 
PSK+ENCRYPT+TUNNEL+PFS+UP+IKEV1_ALLOW+IKEV2_ALLOW+SAREF_TRACK+IKE_FRAG_ALLOW 
{using isakmp#1 msgid:bc8dedb6 proposal=defaults 
pfsgroup=OAKLEY_GROUP_MODP2048}
117 "mytunnel" #3: STATE_QUICK_I1: initiate
010 "mytunnel" #3: STATE_QUICK_I1: retransmission; will wait 500ms for 
response
010 "mytunnel" #3: STATE_QUICK_I1: retransmission; will wait 1000ms for 
response
010 "mytunnel" #3: STATE_QUICK_I1: retransmission; will wait 2000ms for 
response
010 "mytunnel" #3: STATE_QUICK_I1: retransmission; will wait 4000ms for 
response
010 "mytunnel" #3: STATE_QUICK_I1: retransmission; will wait 8000ms for 
response
010 "mytunnel" #3: STATE_QUICK_I1: retransmission; will wait 16000ms for 
response
010 "mytunnel" #3: STATE_QUICK_I1: retransmission; will wait 32000ms for 
response
031 "mytunnel" #3: max number of retransmissions (8) reached 
STATE_QUICK_I1.  No acceptable response to our first Quick Mode message: 
perhaps peer likes no proposal
002 "mytunnel" #3: deleting state #3 (STATE_QUICK_I1)

If I use netkey instead of klips, the tunnel comes up successfully. Am I 
missing any configuration options that klips requires?
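
For reference, the netkey variant that works differs only in the setup 
section; the conn definitions are unchanged. NETKEY/XFRM hooks into the 
kernel's routing stack directly, so there is no ipsecN device to bind:

config setup
     protostack=netkey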

One more thing: while browsing the archives I noticed the post 
https://lists.libreswan.org/pipermail/swan/2015/001168.html. I don't 
know whether that was ever resolved. It's a long shot, but we saw pluto 
leak memory when there was a PFS group mismatch on a large number of 
tunnels (approx. 40). The following patch mitigated the issue for us:

--- a/programs/pluto/ikev1_quick.c
+++ b/programs/pluto/ikev1_quick.c
@@ -2252,6 +2252,10 @@ static void quick_inI1_outR1_cryptocontinue1(
                                 complete_v1_state_transition(&qke->qke_md, e);
                                 release_any_md(&qke->qke_md);
                         }
+                } else if (e == STF_FAIL + NO_PROPOSAL_CHOSEN) {
+                        /* No PFS */
+                        if (md)
+                                release_md(qke->qke_md);
                 }
         }
         reset_cur_state();
@@ -2300,6 +2304,10 @@ static void quick_inI1_outR1_cryptocontinue2(
                         complete_v1_state_transition(&dh->dh_md, e);
                         release_any_md(&dh->dh_md);
                 }
+        } else if (e == STF_FAIL + NO_PROPOSAL_CHOSEN) {
+                /* No PFS */
+                if (dh->dh_md)
+                        release_md(dh->dh_md);
         }
 
         reset_cur_state();
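
(The reasoning behind the patch: when the crypto continuation returns 
STF_FAIL + NO_PROPOSAL_CHOSEN, complete_v1_state_transition() is never 
called, so the msg_digest held by the continuation is never released; 
the added branches free it explicitly.)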

Thanks in advance,

/Erik

