[Swan] IPv6 host-to-host using klips

Erik Andersson erik@ingate.com
Mon Oct 12 14:34:43 UTC 2015


Hi Paul,

Thanks for the quick reply.

On 10/09/2015 08:31 PM, Paul Wouters wrote:
> On Fri, 9 Oct 2015, Erik Andersson wrote:
>
>> conn mytunnel
>>    left=2001:470:dc8c:1000::28:60
>>    right=2001:470:dc8c:1000::28:70
>>    connaddrfamily=ipv6
>>    authby=secret
>>    auto=add
>
>> When I try to bring up the tunnel I get the following output:
>
>> On host 28:60:
>
>> [root@vpn-f1 ~]# ipsec auto --up mytunnel
>
>> 004 "mytunnel" #2: STATE_QUICK_I2: sent QI2, IPsec SA established
>> tunnel mode {ESP=>0x19bf43c9 <0x291c2985 xfrm=AES_128-HMAC_SHA1
>> NATOA=none NATD=none DPD=passive}
>
> So that's good. The tunnel came up.
>
Below is the output of ipsec status on the two endpoints after running 
ipsec auto --up mytunnel on 28:60. Note that 28:70 still shows the 
connection as "active 0" and its #2 state stuck in STATE_QUICK_R1 
(expecting QI2), even though 28:60 considers the IPsec SA established.
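
For reference, the commands were simply (prompts omitted):

On 28:60:
    ipsec auto --up mytunnel
    ipsec status

On 28:70:
    ipsec status

(Piping ipsec status through "grep mytunnel" gives just the 
per-connection lines if the full output is too noisy.)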

Host 28:60:

000 Total IPsec connections: loaded 3, active 1
000
000 State Information: DDoS cookies not required, Accepting new IKE 
connections
000 IKE SAs: total(1), half-open(0), open(0), authenticated(1), anonymous(0)
000 IPsec SAs: total(1), authenticated(1), anonymous(0)
000
000 #2: "mytunnel":500 STATE_QUICK_I2 (sent QI2, IPsec SA established); 
EVENT_SA_REPLACE in 28043s; newest IPSEC; eroute owner; isakmp#1; idle; 
import:admin initiate
000 #2: "mytunnel" esp:16a2f0d6 at 2001:470:dc8c:1000::28:70 
esp:e9295d09 at 2001:470:dc8c:1000::28:60 
tun:1000 at 2001:470:dc8c:1000::28:70 tun:1001 at 2001:470:dc8c:1000::28:60 
ref=0 refhim=4294901761 Traffic:! ESPmax=4194303B
000 #1: "mytunnel":500 STATE_MAIN_I4 (ISAKMP SA established); 
EVENT_SA_REPLACE in 2602s; newest ISAKMP; lastdpd=-1s(seq in:0 out:0); 
idle; import:admin initiate
000
000 Bare Shunt list:
000

Host 28:70:

000 Total IPsec connections: loaded 3, active 0
000
000 State Information: DDoS cookies not required, Accepting new IKE 
connections
000 IKE SAs: total(1), half-open(0), open(0), authenticated(1), anonymous(0)
000 IPsec SAs: total(1), authenticated(1), anonymous(0)
000
000 #2: "mytunnel":500 STATE_QUICK_R1 (sent QR1, inbound IPsec SA 
installed, expecting QI2); EVENT_v1_RETRANSMIT in 8s; isakmp#1; idle; 
import:not set
000 #2: "mytunnel" esp:e9295d09 at 2001:470:dc8c:1000::28:60 
esp:16a2f0d6 at 2001:470:dc8c:1000::28:70 
tun:1000 at 2001:470:dc8c:1000::28:60 tun:1001 at 2001:470:dc8c:1000::28:70 
ref=0 refhim=4294901761 Traffic:! ESPmax=4194303B
000 #1: "mytunnel":500 STATE_MAIN_R3 (sent MR3, ISAKMP SA established); 
EVENT_SA_REPLACE in 3274s; newest ISAKMP; lastdpd=-1s(seq in:0 out:0); 
idle; import:not set
000
000 Bare Shunt list:
000
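
For completeness, the KLIPS kernel state can also be inspected directly 
on each host via the standard KLIPS /proc interface (assuming those 
files and the look wrapper are present with this build):

    cat /proc/net/ipsec_eroute   # eroutes/policies KLIPS has installed
    cat /proc/net/ipsec_spi      # SAs and SPIs known to the kernel
    ipsec look                   # summary of the above plus routes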

>> On host 28:70:
>>
>> [root@vpn-f1 ~]# ipsec auto --up mytunnel
>> 002 "mytunnel" #3: initiating Quick Mode
>
> It detected the tunnel was already up, so it is doing a rekey of phase2
> only:
>
>> PSK+ENCRYPT+TUNNEL+PFS+UP+IKEV1_ALLOW+IKEV2_ALLOW+SAREF_TRACK+IKE_FRAG_ALLOW
>> {using isakmp#1 msgid:bc8dedb6 proposal=defaults
>> pfsgroup=OAKLEY_GROUP_MODP2048}
>> 117 "mytunnel" #3: STATE_QUICK_I1: initiate
>> 010 "mytunnel" #3: STATE_QUICK_I1: retransmission; will wait 500ms for
>> response
>> 010 "mytunnel" #3: STATE_QUICK_I1: retransmission; will wait 1000ms
>> for response
>
>> 031 "mytunnel" #3: max number of retransmissions (8) reached
>> STATE_QUICK_I1. No acceptable response to our first Quick Mode
>> message: perhaps peer likes no proposal
>
> but failing. What does the pluto log on the other end say when this
> happens?
I've uploaded the pluto logs for both endpoints (host A, 
2001:470:dc8c:1000::28:60, and host B, 2001:470:dc8c:1000::28:70). The 
log files can be downloaded via these links:

https://www.ingate.com/plutologs/pluto_host_a.log
https://www.ingate.com/plutologs/pluto_host_b.log

If you look in pluto_host_a.log, below the injected marker 
"Host A after ipsec auto --up mytunnel on Host B", you will see what 
happens on the other end (host A).

The log line that caught my attention is '"mytunnel" #2: discarding 
duplicate packet -- exhausted retransmission; already STATE_QUICK_I2', 
but I cannot tell whether that is normal or not.
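
For anyone following along, something like this pulls the relevant 
lines out of the two logs (file names as in the links above):

    grep -n "Host A after ipsec auto --up mytunnel on Host B" pluto_host_a.log
    grep -n "retransmission" pluto_host_a.log pluto_host_b.log
    grep -n "discarding duplicate packet" pluto_host_a.log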

>
>> If I use netkey instead of klips, the tunnel is set up successfully.
>> Am I missing any necessary configuration options for klips?
>
> Odd. No you are not missing anything.
>
>> Another thing. When I browsed the archives I noticed the post
>> https://lists.libreswan.org/pipermail/swan/2015/001168.html. Don't
>> know if that ever got resolved. This is just a long shot but we
>> experienced memory leak issues with pluto when there was a PFS group
>> mismatch on a large number of tunnels (approx. 40). The following
>> patch mitigated our issue:
>>
>> --- a/programs/pluto/ikev1_quick.c
>> +++ b/programs/pluto/ikev1_quick.c
>> @@ -2252,6 +2252,10 @@ static void quick_inI1_outR1_cryptocontinue1(
>>
>> complete_v1_state_transition(&qke->qke_md, e);
>>                                release_any_md(&qke->qke_md);
>>                        }
>> +               } else if (e == STF_FAIL + NO_PROPOSAL_CHOSEN) {
>> +                       /* No PFS */
>> +                       if(md)
>> +                               release_md(qke->qke_md);
>
> That looks reasonable, but we should take a closer look. Since this is
> in inI1_outR1, any STF_FAIL should cause us to delete the entire state.
> So perhaps there is a better place where this can be deleted to prevent
> the memory loss more generally. For example, an STF_FATAL might also
> need to release the md to prevent the leak.
>
> Adding Hugh to the CC: since he looked at this code last.
>
OK. Note that we used this patch on fairly old pluto code, from before 
the fork, so it may already be fixed in recent libreswan releases.
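
If someone wants to check whether that code path still looks the same 
upstream, something along these lines should show it (assuming the 
current libreswan tree is the one on GitHub; adjust the URL if not):

    git clone https://github.com/libreswan/libreswan.git
    cd libreswan
    git grep -n "quick_inI1_outR1_cryptocontinue" programs/pluto/ikev1_quick.c
    git grep -n "release_any_md" programs/pluto/ikev1_quick.c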

Thanks,

/Erik
>>                }
>>        }
>>        reset_cur_state();
>> @@ -2300,6 +2304,10 @@ static void quick_inI1_outR1_cryptocontinue2(
>>                        complete_v1_state_transition(&dh->dh_md, e);
>>                        release_any_md(&dh->dh_md);
>>                }
>> +       } else if (e == STF_FAIL + NO_PROPOSAL_CHOSEN) {
>> +               /* No PFS */
>> +               if(dh->dh_md)
>> +                       release_md(dh->dh_md);
>
> same here.
>
> Paul

