[Swan] VPN setup

Michael Schwartzkopff ms at sys4.de
Tue Jan 20 17:05:21 EET 2015


On Tuesday, 20 January 2015 at 15:40:33, Darko Luketic wrote:
> Hi Michael,
> 
> what's the advantage of this (GRE) over assigning an additional
> private IP to each server's NIC and going with IPsec alone?

Layer 2 networking will not work across a plain Layer 3 setup. In Layer 2 the 
destination is found via an ARP request (or an IPv6 neighbour solicitation ;-). In 
Layer 3 you have the routing table. How do you tell a server on the left side, 
which routes the packet, that it is supposed to send an ARP request?
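
Just to illustrate (every address below is made up), a GRE tunnel between two 
servers, with the GRE traffic itself protected by a transport-mode conn, could 
look roughly like this:

  # left server (public 192.0.2.1)
  ip tunnel add gre1 mode gre local 192.0.2.1 remote 198.51.100.1 ttl 255
  ip addr add 10.0.1.1/30 dev gre1
  ip link set gre1 up

  # right server (public 198.51.100.1), mirrored
  ip tunnel add gre1 mode gre local 198.51.100.1 remote 192.0.2.1 ttl 255
  ip addr add 10.0.1.2/30 dev gre1
  ip link set gre1 up

and an ipsec.conf sketch, untested and with the PSK still to be put into 
ipsec.secrets, that encrypts the GRE packets (IP protocol 47) between the two 
public addresses:

  conn gre-left-right
      left=192.0.2.1
      right=198.51.100.1
      type=transport
      leftprotoport=47
      rightprotoport=47
      authby=secret
      auto=start

The private 10.0.1.x addresses are then ordinary routed hops over gre1, so the 
routing table does the work and no ARP ever has to cross the wire.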

> In my case each server doesn't need more than 1 private IP, since I'll
> have various private services listening on different ports and no
> VMery going on and I don't need broadcasts (unless I missed something).

ARP requests are Layer 2 broadcasts.
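If you want to watch them on the wire (the interface name is just an example):

  # ARP requests go to ff:ff:ff:ff:ff:ff, so every host on the segment sees them
  tcpdump -n -i eth0 arp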

> Unauthenticated requests won't be allowed through pluto.
> And the datacenter's switches make sure private IP addresses won't get
> routed.

Of course, you could make the effort to get things working somehow. I presume 
that it might even work. But it will take a lot of time, and it will be a 
solution that no one can debug. So why don't you follow the "internet" way of 
doing things?


> Regarding load-balancing, this is all least-cost.
> I'll have

Load balancing is low cost.

> - DNS LB

WHY in the world do you need DNS load balancing? The DNS protocol has intrinsic 
redundancy. You can add as many resolvers as you want to resolv.conf. Use the 
force, Luke! Use the protocols wisely.
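
As an illustration (the resolver addresses are placeholders), the client side 
needs nothing more than:

  # /etc/resolv.conf
  nameserver 192.0.2.53
  nameserver 198.51.100.53
  options timeout:2 attempts:2

The stub resolver walks on to the next entry by itself when the first one times 
out.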

> - Each server running nginx (or haproxy) at the front
> - Each server LB through nginx with local first, remote second

How many servers do you have? Two or more?
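
For your "local first, remote second" idea, one way to spell that in nginx is 
the backup flag on the remote peers. Just a sketch, assuming three nodes 
(10.0.1.1-3) and your 9000+UserID port scheme with a made-up user 42; on server 
1 that could be:

  upstream app_user42 {
      server 10.0.1.1:9042;
      server 10.0.1.2:9042 backup;
      server 10.0.1.3:9042 backup;
  }

  server {
      listen 80;
      server_name user42.example.com;
      location / {
          proxy_pass http://app_user42;
      }
  }

nginx only touches the backup servers when the local one is unavailable.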

> Application Layer
> - Each server has one (Go compiled) program listening on
> [Private_IFaceAddr]:[9000+UserIDnum]
> - Each program will do CRUD operations with the Database Layer
> - Each (sub)domain will be its own user with its own set of static assets
> Database/Storage Layer
> - Each server will have 1 replicaset on 1 of the 2 drive's partitions
> - All replica sets will form a cluster to distribute partitioned data
> for instance
> Server1 will have RS0 on HDD0, RS1 on HDD1, RS2 arbiter on rootfs
> Server2 will have RS0 on HDD0, RS1 arbiter on rootfs, RS2 on HDD1
> Server3 will have RS0 arbiter on rootfs, RS1 on HDD0, RS2 on HDD1
> The replica sets will listen on 20000+RSid
> for instance
> S1:20000 RS0
> S2:20000 RS0
> S3:20000 RS0
> where
> S1=10.0.1.1
> S2=10.0.1.2
> S3=10.0.1.3
> 
> RS0+RS1+RS2 = a shard of 3 replica sets
> 
> Each server has a mongos instance listening for requests by programs.
> 
> 2 nodes will be in 1 dc 1 in another.
> 
> Expanding the cluster will require 3 more servers, but with ~15 TB
> storage for data and files distributed across 3 nodes, this will last
> for a while and for a price where you'd have to pay 5-fold if not
> 10-fold to have it all on 1 server(and then you'd not have high
> availability).
> 
> 48GB ECC RAM (sure could be more)
> 6x3TB HDDs (this is the expensive part on those 1 server solutions)
> 12 cores (24 logical)
> 600Mbit/s guaranteed (that's 200 / node x 3)
> 60TB outbound traffic (20TB / node x 3, also the expensive part on
> self-owned hardware/shared rack solutions)
> 
> = ~5k-15k concurrent connections possible
> 
> for ~120€/month (incl. VAT)
> If you can beat that I'll be the first to be your customer :)
> 
> Sure there is some latency because it's distributed, and hardware is
> refurbished and 2 years old. But as long as I don't have to replace
> parts out of my own pocket and it works that's good enough for me.
> 
> I believe that's a pretty solid setup
> I have a distributed/highly available, replicated database/storage on
> the back
> Website/Application listeners in the middle
> Load balancing via nginx (for now) on the front
> Load balancing via DNS on the front
> 
> What ever can go wrong? (famous last words ;) )

Wow! Here things get complicated. Without a proper understanding of the complete 
design of your service I would rather not comment on your setup. Better to shut 
up than to say something stupid.

I fear I do not have the time to dig deeper into your setup. I estimate it would 
take about one working day to understand the whole thing in enough depth to give 
a qualified judgement. I am quite busy satisfying paying customers, and please 
understand that I cannot invest that much time in mailing list consulting. It 
sounds like a really interesting project, so I would help you if I did not have 
other work. Sorry.


Kind regards,

Michael Schwartzkopff

-- 
[*] sys4 AG

http://sys4.de, +49 (89) 30 90 46 64, +49 (162) 165 0044
Franziskanerstraße 15, 81669 München

Registered office: München, Amtsgericht München: HRB 199263
Management board: Patrick Ben Koetter, Marc Schiffbauer
Chairman of the supervisory board: Florian Kirstein

