[Swan-dev] generating x509 certificates

Andrew Cagney andrew.cagney at gmail.com
Tue Feb 3 22:21:15 EET 2015


Antony,


On 3 February 2015 at 12:08, Antony Antony <antony at phenome.org> wrote:
> well in the past it would only run after the 10th or 12th of each month or so:)
>
> I have a suspicion you committed this change without a full run and comparing the results of a known 'good' run. If this is the case, there is a chance others may waste time chasing this because dist_certs.* wipes their certificates.

Did I do a "good" run?  Yes, several, and the x509 tests consistently
fail.  I then tested each of my changes, re-ran the applicable builds,
and ran x509-pluto-01 as a unit test.  Since that test went from fail
to pass, and the change clearly adds the missing certificates, I
consider it good to go.  A full test run won't change this result.

Anyway, this raises a question: what is the correct way to establish a
baseline so I can test my changes?  I prefer that term as it suggests
something unchanging and reproducible.  While there should be zero
failures, trunk always has a bit of give-and-take, and getting
repeatable test results can be difficult.

My process is as follows:

- install/configure a "supported development environment"; I'm using
Fedora 20 on an i5, which is certainly not sluggish

- check out a *fresh* copy of the repo, as in: cd ~ && git clone
https://../libreswan.git
-- I had to add kvmsetup.sh by hand; I think that is a bug
-- I had to add a Makefile.inc.local to add -Werror; I think that is a bug
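For reference, the Makefile.inc.local I drop in looks roughly like the
following; the USERCOMPILE variable name is an assumption on my part,
so check Makefile.inc for the override the build actually honours:

```shell
# Hypothetical sketch only: create a Makefile.inc.local at the top of
# the tree to force -Werror.  USERCOMPILE is an assumed variable name;
# verify against Makefile.inc before relying on it.
cat > Makefile.inc.local <<'EOF'
USERCOMPILE += -Werror
EOF
grep -- '-Werror' Makefile.inc.local
```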

- run testing/libvirt/install.sh to set up the test framework
-> if I think the VMs are corrupt, I should be able to run
uninstall.sh; install.sh to rebuild them

- build/install: swan-update on west, then swan-install on the others
-> it would be nice to automate this
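Something like the loop below is what I have in mind; the guest names
and the idea of driving each guest non-interactively are assumptions
(today I run these steps by hand), so treat it as a sketch:

```shell
# Hypothetical sketch of automating the install step: swan-update on
# west, swan-install everywhere else.  The guest list is an assumption;
# the echo stands in for whatever mechanism runs a command on a guest.
for host in west east north road; do
    case $host in
        west) cmd=swan-update ;;
        *)    cmd=swan-install ;;
    esac
    echo "$host: run $cmd"
done
```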

- cd testing/pluto && make check

The result was a 'baseline' ('good' run) with over 100 failures before it hung.

Clearly this isn't very good, so far I've found:

- strongswan in FC21 doesn't include GCM or CTR; for the GCM and CTR
interop tests to work, a custom version of strongswan is needed

- the "wip" tests need to be disabled; it was one of those that hung
(if wip results can be clearly identified as something to ignore, and
they can be prevented from hanging, then running them is probably
mostly harmless; google for "KFAIL")
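A sketch of how that filtering might look, assuming a three-column
TESTLIST format (kind, name, expectation); the column layout and the
sample entries here are assumptions for illustration, not the real
contents of testing/pluto/TESTLIST:

```shell
# Hypothetical sketch: skip TESTLIST entries whose expectation column
# says "wip".  The format and the sample entries are assumptions.
cat > TESTLIST.sample <<'EOF'
kvmplutotest basic-pluto-01 good
kvmplutotest x509-pluto-01 good
kvmplutotest shiny-new-test wip
EOF
awk '$3 != "wip" { print $2 }' TESTLIST.sample
```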

- there are no certificates (hence this thread) so the x509 tests can't work

However, none of these "fixes" fit into the model of a reproducible
test framework.

Andrew
