[Swan] VPN setup

Darko Luketic info at icod.de
Tue Jan 20 20:57:30 EET 2015


-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256



On 01/20/2015 04:05 PM, Michael Schwartzkopff wrote:
> Am Dienstag, 20. Januar 2015, 15:40:33 schrieb Darko Luketic:
>> Hi Michael,
>> 
>> what's the advantage of this (GRE) over assigning an additional 
>> private IP to each server's NIC and going with IPsec alone?
> 
> Layer2 networking will not work with a Layer3 instance. In Layer2
> the destination is found via an ARP request (or an IPv6 neighbour
> solicitation ;-). In Layer3 you have the routing table. How do you
> tell a server on the left side that, in order to route a packet, it
> is supposed to send an ARP request?
> 
>> In my case each server doesn't need more than 1 private IP, since
>> I'll have various private services listening on different ports
>> and no VMery going on and I don't need broadcasts (unless I
>> missed something).
> 
> ARP requests are layer2 broadcasts.
> 
>> Unauthenticated requests won't be allowed through pluto. And the
>> datacenter's switches make sure private IP addresses won't get 
>> routed.
> 
> Of course, you could take the effort to make things work somehow. I
> presume that it might even work somehow. But it will take a lot of
> time. It will be a solution that no one can debug. So why don't you
> follow the "internet" way of doing things?
> 
> 

Ok suppose I use GRE to connect
S1 with S2
S1 with S3
S2 with S3

I now have a GRE tunnel between S1 and S2: a 4-byte (32-bit) GRE
header plus the 20-byte outer IP header, i.e. an MTU that is 24 bytes
lower than the default 1500.
Now I create another tunnel with IPsec that authenticates with
3712-bit RSA keys, encrypts with AES-128 and hashes with HMAC-SHA1
(or GCM, or null-cipher GCM, as Paul suggested).
But still with those extra GRE bytes in every packet.
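A quick back-of-the-envelope check of that overhead; this is only a sketch assuming IPv4, minimal headers (no GRE key/checksum options), and a rough, illustrative ESP figure:

```python
# Per-packet overhead arithmetic for the stacked tunnels discussed above.
# Header sizes are the common minimums; real values vary with options
# (GRE key/checksum flags, ESP cipher and ICV choice, padding, etc.).

ETH_MTU = 1500
IPV4_HDR = 20   # outer IPv4 header added by the GRE tunnel
GRE_HDR = 4     # minimal GRE header (no key, no checksum)

gre_mtu = ETH_MTU - IPV4_HDR - GRE_HDR
print(gre_mtu)  # 1476 -- the "24 bytes lower than 1500" figure

# ESP in tunnel mode adds its own outer IP header, ESP header, padding,
# and an ICV; ~57 bytes is only a ballpark for AES-128-CBC + HMAC-SHA1.
ESP_OVERHEAD_APPROX = 57  # assumption: illustrative, not exact

gre_over_esp_mtu = ETH_MTU - ESP_OVERHEAD_APPROX - IPV4_HDR - GRE_HDR
print(gre_over_esp_mtu)
```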
So I'm sorry, but I don't see the benefit of adding unnecessary
overhead when VTI does about the same as GRE (in the broadest sense).
Unless, that is, you can explain why I should tunnel twice: as far as
I can make out, GRE uses no AH or signature, only a CRC if enabled
(which is spoofable).
Sorry, but "the internet way of doing things" is not a good enough
explanation. :)
I need a specific solution for my use case: encrypted traffic over a
potentially insecure connection, with a private subnet, so I can get
rid of the TLS handshake overhead for the DB servers and also use
other services like distcc without SSH, or the scenario described
above.
I understand, however, if you're preoccupied (I know the situation);
no worries, as stated further down below :), and you have no
obligation to respond. It's just that you threw it out there, so I
was asking.

>> Regarding load-balancing, this is all least-cost. I'll have
> 
> Load balancing is low cost.
> 
>> - DNS LB
> 
> WHY in the world do you need DNS load balancing? The DNS protocol
> has intrinsic redundancy. You can add as many resolvers as you want
> to resolv.conf. Use the force, Luke! Use the protocols wisely.
> 

I meant LB via DNS as in: each of the public IPs listed in a single
response to a DNS query :) from the client's perspective.
Not round-robin DNS LBing, hah.
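To make that client-side picture concrete, here is a minimal sketch: a client walks through a multi-address DNS answer in order, falling back on failure. The IPs are hypothetical, and connect() is a stand-in function rather than a real socket call, so the logic is testable without a network:

```python
def connect_first_working(addresses, connect):
    """Try each address from the DNS answer in order; return the first
    (address, connection) pair that works."""
    for addr in addresses:
        try:
            return addr, connect(addr)
        except OSError:
            continue
    raise OSError("no address in the DNS answer was reachable")

# Hypothetical multi-A-record answer listing every node's public IP:
answer = ["203.0.113.1", "203.0.113.2", "203.0.113.3"]

def fake_connect(addr):
    # Simulate the first node being down.
    if addr == "203.0.113.1":
        raise OSError("node down")
    return "connected"

addr, _ = connect_first_working(answer, fake_connect)
print(addr)  # 203.0.113.2 -- client fell through to the second node
```

A real client would get the address list from getaddrinfo() instead of a literal, but the failover order is the same.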

>> - Each server running nginx (or haproxy) at the front - Each
>> server LB through nginx with local first, remote second
> 
> How many servers do you have? Two or more?

2 now, 3 soon.

> 
>> Application Layer
>> - Each server has one (Go compiled) program listening on
>>   [Private_IFaceAddr]:[9000+UserIDnum]
>> - Each program will do CRUD operations with the Database Layer
>> - Each (sub)domain will be its own user with its own set of static
>>   assets
>> 
>> Database/Storage Layer
>> - Each server will have 1 replica set on 1 of the 2 drives'
>>   partitions
>> - All replica sets will form a cluster to distribute partitioned
>>   data, for instance:
>>   Server1 will have RS0 on HDD0, RS1 on HDD1, RS2 arbiter on rootfs
>>   Server2 will have RS0 on HDD0, RS1 arbiter on rootfs, RS2 on HDD1
>>   Server3 will have RS0 arbiter on rootfs, RS1 on HDD0, RS2 on HDD1
>> - The replica sets will listen on 20000+RSid, for instance
>>   S1:20000 RS0, S2:20000 RS0, S3:20000 RS0, where
>>   S1=10.0.1.1, S2=10.0.1.2, S3=10.0.1.3
>> 
>> RS0+RS1+RS2 = a shard of 3 replica sets
>> 
>> Each server has a mongos instance listening for requests by
>> programs.
>> 
>> 2 nodes will be in 1 dc 1 in another.
>> 
>> Expanding the cluster will require 3 more servers, but with ~15
>> TB storage for data and files distributed across 3 nodes, this
>> will last for a while and for a price where you'd have to pay
>> 5-fold if not 10-fold to have it all on 1 server(and then you'd
>> not have high availability).
>> 
>> - 48GB ECC RAM (sure could be more)
>> - 6x3TB HDDs (this is the expensive part of those 1-server
>>   solutions)
>> - 12 cores (24 logical)
>> - 600Mbit/s guaranteed (that's 200 / node x 3)
>> - 60TB outbound traffic (20TB / node x 3, also the expensive part
>>   of self-owned hardware/shared rack solutions)
>> 
>> = ~5k-15k concurrent connections possible
>> 
>> for ~120€/month (incl. VAT). If you can beat that, I'll be the
>> first to be your customer :)
>> 
>> Sure there is some latency because it's distributed, and hardware
>> is refurbished and 2 years old. But as long as I don't have to
>> replace parts out of my own pocket and it works that's good
>> enough for me.
>> 
>> I believe that's a pretty solid setup:
>> - a distributed, highly available, replicated database/storage
>>   layer at the back
>> - Website/Application listeners in the middle
>> - Load balancing via nginx (for now) at the front
>> - Load balancing via DNS at the front
>> 
>> What ever can go wrong? (famous last words ;) )
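For reference, the shard layout quoted above can be written out as data. This is only a sketch: the 10.0.1.x addresses and the 20000+RSid port rule are taken from the description, nothing here is running configuration:

```python
# The three servers and their private IPs, per the layout above.
servers = {"S1": "10.0.1.1", "S2": "10.0.1.2", "S3": "10.0.1.3"}

# Role of each replica set member on each server, per the layout above.
layout = {
    "S1": {"RS0": "HDD0", "RS1": "HDD1", "RS2": "arbiter (rootfs)"},
    "S2": {"RS0": "HDD0", "RS1": "arbiter (rootfs)", "RS2": "HDD1"},
    "S3": {"RS0": "arbiter (rootfs)", "RS1": "HDD0", "RS2": "HDD1"},
}

def member_addr(server, rs):
    """host:port a replica set member binds to (port = 20000 + RS id)."""
    return f"{servers[server]}:{20000 + int(rs[2:])}"

print(member_addr("S1", "RS0"))  # 10.0.1.1:20000
print(member_addr("S3", "RS2"))  # 10.0.1.3:20002
```

Each replica set spans all three servers, with the arbiter rotated so that no server carries data for every set.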
> 
> Wow! Here things get complicated. Without a proper understanding of
> the complete design of your service I would not like to comment on
> your setup. Better to shut up than to say something stupid.
> 
> I fear I do not have the time to dig deeper into your setup. I
> estimate about one working day to understand the whole thing in
> enough depth to judge it in a qualified way. I am quite busy
> satisfying paying customers, and please understand that I am not
> able to invest so much time in mailing-list consulting. It sounds
> like a really interesting project, so I would help you if I did not
> have any other work. Sorry.
> 

No problem, and I didn't mean to imply anything. The whole idea has
grown over the course of ~1½ years (or more), so no worries.
Don't let yourself be troubled by me; take it easy, and good luck
with your projects/customers :) Thanks for the feedback and the GRE
hint, Michael.
Have a nice evening.
> 
> Mit freundlichen Grüßen,
> 
> Michael Schwartzkopff
> 

- -- 
Freundliche Grüße / Best regards

Darko Luketic

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iQEcBAEBCAAGBQJUvqUUAAoJENrR4EaH4PXFLR8H/Azw5f8aIMAyGpEBntfq22ZJ
D0jQ3YJQTE/ibqNSM1ImCWnwTdLjgdAZDRtWmGKmNvJYJfpfaxxwO/AklRprvjuV
6IRY8VK326oZZRU9hoPr5g0qJseGsjVQJdFKvZ7BD/owk9Dt3A88EQKdq3Fvx7Z+
D+D+9sPPKAZ1ZWh27DxP3EQzWRayf12MVrUNYJCdlTns7T4HRC8E7fqbLhPpv1at
4ThZNK4jRetJBLBSdubKn5/kWGrOrSB0Mv2r7/ypR1gEToAIqgUGY+G7w+cWH4sF
5/eVEz2ogMMJkphzkrEDqbkDGiiZsuWLopQRkKWF+sjBjKB/B2ecPmMev6L3/P4=
=Cd+G
-----END PGP SIGNATURE-----

