[Swan] Setting up Libreswan with AWS server

Mattias Mattsson ratatosk71 at yahoo.com
Mon Mar 23 23:55:33 UTC 2020


Hi,
I’ve been trying to set up Libreswan with a client behind a NAT and a server on an AWS instance, and I need some help completing this setup. Any assistance would be appreciated.
The basic setup is as follows (I've changed the public IP to 1.1.1.100 in all text below):
<client ubuntu 10.0.2.15> -- <NAT 1.1.1.100> -- <AWS 3.101.15.189> -- <AWS Private IP 172.31.10.176>
I’ve created an Ubuntu 18.04 client in a VM using VirtualBox and started an Ubuntu 18.04 EC2 instance in AWS. I installed Libreswan from source (to ensure I get exactly the same version on both client and server), roughly following these directions: https://computingforgeeks.com/how-to-install-libreswan-on-ubuntu/
I’ve also disabled ICMP redirects and rp_filter according to these directions:
https://libreswan.org/wiki/FAQ#Why_is_it_recommended_to_disable_send_redirects_in_.2Fproc.2Fsys.2Fnet_.3F
https://libreswan.org/wiki/FAQ#Why_is_it_recommended_to_disable_rp_filter_in_.2Fproc.2Fsys.2Fnet_.3F
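Concretely, that amounts to sysctls roughly like the ones below on both hosts (a sketch of what I applied; the per-interface entries may differ depending on the system):

# disable ICMP redirects and reverse-path filtering as recommended in the FAQ
sysctl -w net.ipv4.conf.all.send_redirects=0
sysctl -w net.ipv4.conf.default.send_redirects=0
sysctl -w net.ipv4.conf.all.accept_redirects=0
sysctl -w net.ipv4.conf.default.accept_redirects=0
sysctl -w net.ipv4.conf.all.rp_filter=0
sysctl -w net.ipv4.conf.default.rp_filter=0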
To focus on establishing the connection I’ve created minimal configuration files for the client and the server.
ipsec.secrets (common for client and server)
10.0.2.15 172.31.10.176 : PSK "abcdefghijklmnopqrstuvwxyz0123456789"
ipsec.conf (client)
conn ipsec_aws
  authby=secret
  encapsulation=yes
  right=3.101.15.189
  rightid=172.31.10.176
  left=10.0.2.15
  ikev2=no
ipsec.conf (server)
config setup
  protostack=netkey

conn ipsec_aws
  authby=secret
  encapsulation=yes
  right=172.31.10.176
  left=1.1.1.100
  leftid=10.0.2.15
  ikev2=no
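For reference, I load and start the connection roughly like this (a sketch from memory; the conn is added on both sides and brought up from the client):

# add the connection on both sides, then initiate it from the client
ipsec auto --add ipsec_aws
ipsec auto --up ipsec_aws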
With this setup, the tunnel seems to have been established and I see keep-alives between client and server. However, I cannot connect from the client to the server using e.g. ssh.
 
The server shows the following IPsec connection from 'ipsec auto --status'
000 Connection list:
000  
000 "ipsec_aws": 172.31.10.176<172.31.10.176>...1.1.1.100<1.1.1.100>[10.0.2.15]; erouted; eroute owner: #2
 
The server shows the following IP XFRM from 'ip xfrm state'
# ip xfrm state
src 1.1.1.100 dst 172.31.10.176
        proto esp spi 0x614eef4e reqid 16389 mode tunnel
        replay-window 32 flag af-unspec
        auth-trunc hmac(sha1) 0x8a6dd1d895008c75bd7a20e0160f05e8e83cb4ae 96
        enc cbc(aes) 0x60b4a07fda9d290b1ad74b060a557721
        encap type espinudp sport 65255 dport 4500 addr 0.0.0.0
        anti-replay context: seq 0xd, oseq 0x0, bitmap 0x00001fff
src 172.31.10.176 dst 1.1.1.100
        proto esp spi 0xb531180d reqid 16389 mode tunnel
        replay-window 32 flag af-unspec
        auth-trunc hmac(sha1) 0xbded8cbbbd1c49056a3b72d7a15fa3a24b5ab77c 96
        enc cbc(aes) 0xb8e88ca3ac6e67b3130f745570f968d8
        encap type espinudp sport 4500 dport 65255 addr 0.0.0.0
        anti-replay context: seq 0x0, oseq 0x0, bitmap 0x00000000
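The selectors steering traffic into the tunnel can be listed in the same way; I can send that output from both sides too if it helps:

ip xfrm policy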
 
Here is a tcpdump on the server (excluding the jump host, to avoid ssh traffic from the jump host) while trying to connect from the client to the server via ssh:
# tcpdump -ni eth0 not host xx.xx.xx.xx
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
22:28:25.884430 IP 1.1.1.100.65255 > 172.31.10.176.4500: UDP-encap: ESP(spi=0x614eef4e,seq=0x7), length 100
22:28:25.884430 IP 10.0.2.15.54602 > 3.101.15.189.22: Flags [S], seq 2809184565, win 64240, options [mss 1460,sackOK,TS val 2951550712 ecr 0,nop,wscale 7], length 0
 
I can see that the initial TCP SYN arrives from the client as a UDP-encapsulated ESP packet from the client's public IP (1.1.1.100.65255 > 172.31.10.176.4500). The packet is decrypted and handed on addressed to the server's public IP, but sourced from the client's private IP (10.0.2.15.54602 > 3.101.15.189.22). Since the AWS instance does not hold the public IP itself, it will not process the packet.
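For reference, the public IP is not configured on the instance; AWS maps the elastic/public IP to the private address at the VPC edge, which can be confirmed by listing the addresses on the interface:

# only the private 172.31.10.176 address shows up locally
ip -4 addr show dev eth0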
 
The question is what should be done here. Should I try to DNAT the packet to the AWS instance’s private IP (e.g. something like the rule sketched below)? Or should I use the server's private IP in the client's 'right' setting instead? If I change the ipsec.conf settings that way, I can't get the tunnel to connect. But if I keep them as above, the client installs an ip xfrm policy between the client's private IP and the server's public IP, so I have to ssh to the server's public IP to get the traffic to go through the tunnel.
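To be concrete, the DNAT variant I have in mind would be roughly the rule below on the AWS instance (just a sketch; I have not verified that it catches the decapsulated ESP traffic):

# rewrite packets addressed to the unheld public IP so the local stack accepts them
iptables -t nat -A PREROUTING -d 3.101.15.189 -j DNAT --to-destination 172.31.10.176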
Any help would be appreciated. Thanks!
/ Mattias
 
