[Swan] VPN setup

Darko Luketic info at icod.de
Tue Jan 20 16:40:33 EET 2015


-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Hi Michael,

What's the advantage of this (GRE) over assigning an additional
private IP to each server's NIC and going with IPsec alone?
In my case each server doesn't need more than one private IP, since
I'll have various private services listening on different ports, no
VMs/virtualization going on, and no need for broadcasts (unless I
missed something). Unauthenticated requests won't be allowed through
pluto, and the datacenter's switches make sure private IP addresses
won't get routed.

2: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
state UP group default qlen 1000
    link/ether 01:23:45:67:89:ab brd ff:ff:ff:ff:ff:ff
    inet 123.123.123.123/27 brd 123.123.123.127 scope global enp5s0
       valid_lft forever preferred_lft forever
    inet 10.0.1.1/32 scope global enp5s0
       valid_lft forever preferred_lft forever
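For concreteness, here is a minimal sketch of the IPsec-alone setup I
mean (interface and addresses taken from the output above; s2's
address stays a placeholder, the conn options follow the config from
my first mail, and none of this is tested here):

```shell
# Add the private /32 as a secondary address on the public NIC --
# this is what produces the second "inet" line in the output above.
ip addr add 10.0.1.1/32 dev enp5s0

# Host-to-host libreswan conn that also covers the private /32s, so
# 10.0.1.1 <-> 10.0.1.2 traffic goes through the tunnel encrypted.
cat > /etc/ipsec.d/s1s2.conf <<'EOF'
conn s1s2
    left=123.123.123.123            # s1 public IPv4
    leftid=@s1
    leftsubnet=10.0.1.1/32          # s1 private address
    leftrsasigkey=theleftkey_s1
    right=publicIPv4_of_s2
    rightid=@s2
    rightsubnet=10.0.1.2/32         # s2 private address
    rightrsasigkey=therightkey_s2
    authby=rsasig
    auto=start
EOF
ipsec auto --add s1s2
ipsec auto --up s1s2
```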

Regarding load-balancing, this is all least-cost.
I'll have

- DNS LB
- Each server running nginx (or haproxy) at the front
- Each server load balancing through nginx, local upstream first,
remote second

Application Layer
- Each server has one (Go-compiled) program listening on
[Private_IFaceAddr]:[9000+UserIDnum]
- Each program will do CRUD operations against the Database Layer
- Each (sub)domain will be its own user with its own set of static assets

Database/Storage Layer
- Each server will have one replica-set member on a partition of one
of its two drives
- All replica sets will form a cluster to distribute partitioned data

For instance:
Server1 will have RS0 on HDD0, RS1 on HDD1, RS2 arbiter on rootfs
Server2 will have RS0 on HDD0, RS1 arbiter on rootfs, RS2 on HDD1
Server3 will have RS0 arbiter on rootfs, RS1 on HDD0, RS2 on HDD1

The replica sets will listen on port 20000+RSid, for instance:
S1:20000 RS0
S2:20000 RS0
S3:20000 RS0
where
S1=10.0.1.1
S2=10.0.1.2
S3=10.0.1.3

RS0+RS1+RS2 = a shard of 3 replica sets
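Spelled out, the addressing scheme maps a (server, replica set) pair
to a listen address like so (the function name is mine, just for
illustration):

```python
def rs_listen_addr(server_num: int, rs_id: int) -> str:
    """Listen address for the member of replica set rs_id on server server_num.

    Servers are 10.0.1.<server_num>; replica sets listen on 20000+RSid.
    """
    return f"10.0.1.{server_num}:{20000 + rs_id}"

# RS0's three members, one per server:
for s in (1, 2, 3):
    print(rs_listen_addr(s, 0))
```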

Each server has a mongos instance listening for requests by programs.
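To make that concrete, registering the three replica sets as shards
from one of those mongos instances would look roughly like this
(database name and shard key are made-up examples; untested sketch):

```javascript
// mongo shell connected to a mongos, e.g.: mongo 10.0.1.1:27017
// Each replica set spans all three servers on port 20000+RSid.
sh.addShard("RS0/10.0.1.1:20000,10.0.1.2:20000,10.0.1.3:20000")
sh.addShard("RS1/10.0.1.1:20001,10.0.1.2:20001,10.0.1.3:20001")
sh.addShard("RS2/10.0.1.1:20002,10.0.1.2:20002,10.0.1.3:20002")

sh.enableSharding("appdb")                         // placeholder db name
sh.shardCollection("appdb.assets", { userId: 1 })  // example shard key
```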

Two nodes will be in one DC, one in another.

Expanding the cluster will require 3 more servers, but with ~15 TB of
storage for data and files distributed across 3 nodes, this will last
for a while, and at a price where you'd have to pay 5-fold if not
10-fold to have it all on one server (and then you'd not have high
availability).

48GB ECC RAM (sure could be more)
6x3TB HDDs (this is the expensive part on those 1 server solutions)
12 cores (24 logical)
600Mbit/s guaranteed (that's 200 / node x 3)
60TB outbound traffic (20TB / node x 3, also the expensive part on
self-owned hardware/shared rack solutions)

= ~5k-15k concurrent connections possible

for ~120€/month (incl. VAT)
If you can beat that I'll be the first to be your customer :)

Sure there is some latency because it's distributed, and the hardware
is refurbished and two years old. But as long as I don't have to
replace parts out of my own pocket and it works, that's good enough
for me.

I believe that's a pretty solid setup:
- a distributed, highly available, replicated database/storage layer
at the back
- website/application listeners in the middle
- load balancing via nginx (for now) at the front
- load balancing via DNS in front of that
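The "local first, remote second" nginx balancing can be expressed with
the backup flag on the remote upstreams; a minimal sketch for s1
(server_name, port 9001, and the peer addresses are illustrative,
following the [9000+UserIDnum] scheme):

```nginx
upstream website1_app {
    server 10.0.1.1:9001;         # local app instance, tried first
    server 10.0.1.2:9001 backup;  # remote peers, only used when the
    server 10.0.1.3:9001 backup;  # local upstream is unavailable
}

server {
    listen 80;
    server_name website1.example;

    location / {
        proxy_pass http://website1_app;
    }
}
```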

Whatever can go wrong? (famous last words ;) )

On 01/19/2015 07:21 PM, Michael Schwartzkopff wrote:
> On Monday, 19 January 2015 at 13:41:14, Darko Luketic wrote:
>> Hello,
>> 
>> I'm not sure if ipsec/libreswan is the way to go.
>> 
>> What I want is 2 (or more) servers to share the same private
>> subnet.
> 
> No. IPsec is a layer 3 protocol. You can connect two networks. What
> you are looking for is a layer 2 tunnel over a layer 3 network. I
> would suggest that you have a look at
> 
> http://lartc.org/howto/lartc.tunnel.gre.html
> 
> Additionally you could (and perhaps you should) encrypt the traffic
> of the GRE tunnel. Here IPsec and strongSwan can help you.
> 
> 
>> Let's take the 2 servers scenario for starters.
>> 
>> Both servers have one public IPv4 address and a /64 IPv6 prefix.
>> Both servers should share the same private subnet, 10.0.0.0:
>> s1 should have 10.0.0.1, s2 should have 10.0.0.2 (and likewise
>> sX should have 10.0.0.X for 4, 6, 8... servers).
>> 
>> I'm not sure where to start or what the configuration should be.
>> 
>> I have created hostkeys on both. s1s2.conf:
>> 
>> ###
>> config setup
>>     protostack=netkey
>> 
>> conn s1s2
>>     leftid=@s1                # does this need the fqdn?
>>     left=publicIPv4_of_s1
>>     leftrsasigkey=theleftkey_s1
>>     rightid=@s2               # or is this just an internal identifier?
>>     right=publicIPv4_of_s2
>>     rightrsasigkey=therightkey_s2
>>     authby=rsasig
>>     auto=add
>> ###
>> 
>> I'm not sure how to proceed next.
>> 
>> So the end result should be something like:
>> 
>> mongodb replicaset_s1s2: listen on 10.0.0.1:27017 & 10.0.0.2:27017
>> website1 service: listen on 10.0.0.1:10000 & 10.0.0.2:10000
>> 
>> So I can have nginx listening on s1_public_IPs & s2_public_IPs, and
>> this should load balance to 10.0.0.1:10000 & 10.0.0.2:10000, and
>> those should likewise connect to 10.0.0.1:27017 & 10.0.0.2:27017,
>> so I don't need TLS overhead for DB connections. ^ This is just to
>> visualize what I had in mind, so that it's clear why I need a
>> specific subnet for each server.
>> 
>> And the next question is: let's say I expand those 2 servers to 3
>> (because mongodb needs an arbiter, a 3rd server to decide who's
>> the primary and who's the replica), and the 3rd server should be
>> part of the VPN as 10.0.0.3.
> 
> 
> Perhaps a load balancer is what you are looking for?
> 
> Best regards,
> 
> Michael Schwartzkopff
> 

-- 
Freundliche Grüße / Best regards

Darko Luketic

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iQEcBAEBCAAGBQJUvmjeAAoJENrR4EaH4PXFMyUH/A09ZV5bQxGNCIxMBlQINDBo
TLBfEiD/MM9nA9RMXPs+q5KG2cpp0qIjKr9VCCsfo0dScwyx1XcPsbYOdMHJ8X4u
994JhRiWh3o82KkHGp6zOxv4xeCz8cDl/ud81LS76bWtQyBS2vUuu6oT2s35ZgfK
SJa5Z1lklkk0KSqNNBV3S5VizTVXGP7MwugahQzzTZfa7jwCOQI+Anzl/xMDsNmU
b/wrwFM0LyyBHgYVEcyb1lR/+UYajTFj/fWR7FndxWTNveFHyzV2pXEWeAK0drtS
80wwkKSZhDHGgrFjKFFZ24CQhW5K7a56R/F9QGgKVcjYgyrAl67WGIeGC2JEXNs=
=RpVI
-----END PGP SIGNATURE-----

