[Swan-dev] test runs seem slow

Antony Antony antony at phenome.org
Sun Aug 30 21:36:04 EEST 2015


Hi Hugh,
The Python blow-up was my mistake, sorry about that.
I pushed a change about 10 hours ago. Your test run did complete, so the results are not lost.
From your mail I gather the results are in:
/home/build/results/bluebird.mimosa.com/2015-08-29-bluebird.mimosa.com-v3.15-+23-g2042b16-dirty-master/

The part that blew up is where swantest tries to summarize the test runs.

Now on to run time. Lately, with the introduction of Auth NULL, we added many tests, and a test run now takes over 10 hours for me, including grepping through the historic logs. I am working on some optimizations so that only the last run is grepped. Even so, a full run of 354 tests takes 8+ hours; many tests have sleeps of 1 to 5 minutes in them.
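
Something along these lines is what I have in mind for the "grep only the last run" part (the path layout just follows the example above, and the pattern is only a stand-in for whatever the summary step actually looks for):

  # pick the newest run directory instead of walking all the old results
  latest=$(ls -1dt /home/build/results/*/*/ | head -n 1)
  # and restrict the summarizing grep to that one tree
  grep -r "pattern-the-summary-looks-for" "$latest"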

See the change in runtime over the year.
http://hal.phenome.org:8081/results/

Maybe you need some other quick way.

-antony




On Sun, Aug 30, 2015 at 12:37:12AM -0400, D. Hugh Redelmeier wrote:
> I'm running the test system on a reasonably beefy machine:
> 
> - Fedora 21 x86-64 + updates
> - Intel(R) Core(TM) i5-2400S CPU @ 2.50GHz [4 cores]
> - 8G of RAM
> - Crucial_CT240M50 SSD
> - nothing else using the machine
> 
> It seems slow: 8.5 hours for a run of the test suite yesterday.  It's
> not at all clear what is using the time, but there is some anomalous
> behaviour which I will describe.
> 
> It is very slow to wake up the Gnome desktop after a screen-blank.  (I
> usually access via ssh but I run the tests from the desktop.)  The
> only window open is a Gnome Terminal with the test run.
> 
> The results of top are not stable (which is reasonable), but there are
> some surprising periods.
> 
>   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                                                                    
>  2655 build     20   0 1196860 887508   4268 R 100.3 11.0 287:44.40 mission-control                                                                                                            
>  2558 build     20   0 2290640 1.092g   9920 R  70.4 14.2 269:35.93 gnome-settings-                                                                                                            
>  2596 build     20   0 2730324 999584  29976 R  69.4 12.4 316:33.94 gnome-shell                                                                                                                
> 29569 build     20   0 1680804 246612  23744 R  52.8  3.1   0:09.93 qemu-system-x86                                                                                                            
> 29576 build     20   0 3053908 256432  23428 R  50.5  3.2   0:09.63 qemu-system-x86                                                                                                            
> 
> Look at the cumulative processor time for the top 3 programs.  The
> machine has been up 5 days and in that time has basically run the test
> suite twice and done nothing else.
> 
> What is mission control?  It seems to be a dbus thing.  Why would it
> take 100% of a core?
> 
> Why would gnome-settings- take 70% of a core?
> 
> Why would the gnome-shell have taken more than 5 hours of CPU when I
> barely use the console?
> 
> dmesg is full of messages like these:
> 
> [509806.845867] device vnet0 entered promiscuous mode
> [509806.851888] swan02: port 2(vnet0) entered listening state
> [509806.851894] swan02: port 2(vnet0) entered listening state
> [509806.870880] device vnet1 entered promiscuous mode
> [509806.873900] swan12: port 2(vnet1) entered listening state
> [509806.873906] swan12: port 2(vnet1) entered listening state
> [509806.891902] device vnet2 entered promiscuous mode
> [509806.894926] swan92: port 2(vnet2) entered listening state
> [509806.894932] swan92: port 2(vnet2) entered listening state
> [509806.953005] device vnet3 entered promiscuous mode
> [509806.959010] swan01: port 2(vnet3) entered listening state
> [509806.959017] swan01: port 2(vnet3) entered listening state
> [509806.983038] device vnet4 entered promiscuous mode
> [509806.989078] swan12: port 3(vnet4) entered listening state
> [509806.989086] swan12: port 3(vnet4) entered listening state
> [509807.013070] device vnet5 entered promiscuous mode
> [509807.019071] swan94: port 2(vnet5) entered listening state
> [509807.019077] swan94: port 2(vnet5) entered listening state
> [509807.374278] kvm: zapping shadow pages for mmio generation wraparound
> [509807.988840] kvm: zapping shadow pages for mmio generation wraparound
> 
> I guess this is from all the tcpdump commands in the test suite.  Kind
> of reasonable.
> 
> journalctl has replaced /var/log/*.  I'm still not used to it.  When I
> fire it up and try to get to the latest entries, it takes a couple of
> minutes of CPU to get there.  There are over 5 million lines, dating
> back as far as 8 months.  I didn't configure it and so this seems to
> be a dumb default configuration.  Remember, this is on a fairly inactive
> machine, one that has been off most of the time and only used for
> running libreswan tests.
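> 
> If I wanted to cap it, I suppose something like this would do (the
> sizes and ages below are only a guess at something sane, and
> --vacuum-time needs a reasonably recent journalctl):
> 
>   # one-off: throw away entries older than two weeks
>   journalctl --vacuum-time=2weeks
>   # persistent cap, set in /etc/systemd/journald.conf:
>   #   SystemMaxUse=200M
>   #   MaxRetentionSec=2week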
> 
> The end of the journal has a lot of SELinux messages.  I guess our
> test suite is annoying it.  I have enforcement set to permissive but
> the messages misleadingly suggest that SELinux is actually enforcing
> restrictions.
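> 
> (A quick way to double-check is getenforce, plus looking for the
> permissive=1 field in the AVC records themselves; when that field is
> 1, nothing is actually being blocked.)
> 
>   getenforce
>   ausearch -m avc -ts recent | grep -c 'permissive=1'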
> 
> The journal seems to go through phases.  One phase is the same junk as
> is in dmesg plus more stuff like NetworkManager messages, and the other
> phase is pages of SELinux messages.
> 
> I just tried "journalctl -f" and it was fast.  But boring: the test
> run finished while I was composing this message.
> 
> A new run of top shows none of the pigs listed above taking any time
> now.  It is as if something the test suite is doing is causing these
> other processes to spin.  I don't understand d-bus -- could the tests
> be swamping it?
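> 
> (If I wanted to check, I suppose watching the bus traffic for a minute
> during a run would show it; mission-control and gnome-settings-daemon
> live on the session bus, so something like:)
> 
>   dbus-monitor --session | head -n 200
>   dbus-monitor --system  | head -n 200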
> 
> (Starting journalctl and typing "G" to get to the most recent entries
> burns almost 2 minutes of CPU even with the system quiescent.)
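> 
> (Apparently journalctl -e only loads the tail of the journal before
> jumping to the end, and journalctl -n just prints the last few lines,
> so either should get there without that two-minute wait:)
> 
>   journalctl -e                  # pager, already positioned at the end
>   journalctl -n 100 --no-pager   # or just dump the last 100 lines
> 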
> _______________________________________________
> Swan-dev mailing list
> Swan-dev at lists.libreswan.org
> https://lists.libreswan.org/mailman/listinfo/swan-dev
> 

