Monolith 2018 04

2018-04-03

I have finally gotten tired of using WiFi; 65 WiFi devices in such a concentrated area can't be good for the radios or for the humans nearby. As we saw when I first began using WiFi, performance was abysmal, to say the least. I put together some 12 VDC-powered Ethernet switches in an enclosure that now sits on top of the monolith chassis: 66 CPUs attached to 66 ports, plus the cables that wire the 5 switches together and one uplink, which comes to about 72 ports.

2018-04-18-001-monolith.jpg

I still have a lot of work left to do. I am planning to put a base on the monolith chassis, but first I need to add some power control (solid-state relays) so that power to each tier can be switched independently. Once that support is in place I will be able to work on hiding it all inside some sort of base assembly.
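
The rough idea for the power control is one solid-state relay per tier, driven from GPIO on a small controller so the tiers can be brought up in sequence. Nothing is wired yet; the sketch below is purely illustrative and assumes a Raspberry Pi-style controller with the RPi.GPIO library and made-up pin assignments.

# tier_power.py -- illustrative sketch only; the controller, pins, and
# wiring here are assumptions, not the actual build.
import time
import RPi.GPIO as GPIO

TIER_PINS = {1: 17, 2: 27, 3: 22}   # hypothetical BCM pins, one SSR per tier

GPIO.setmode(GPIO.BCM)
for pin in TIER_PINS.values():
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)   # start with every tier off

def tier_on(tier):
    GPIO.output(TIER_PINS[tier], GPIO.HIGH)

def tier_off(tier):
    GPIO.output(TIER_PINS[tier], GPIO.LOW)

# stagger startup so all three tiers do not pull inrush current at once
for tier in (1, 2, 3):
    tier_on(tier)
    time.sleep(10)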

2018-04-18-002-monolith.jpg

This is a close-up of the five Ethernet switches and the myriad cables. It is somewhat organized now.

2018-04-18-003-monolith.jpg

This piece of furniture has finally become useful: it holds all of the power-supply components and the 35 Ah battery. As documented elsewhere, the 35 Ah battery serves as a UPS and is also sized to carry the initial current load of starting up all three tiers, which could be significant.
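
For a rough sense of why that startup load matters, here is a back-of-envelope estimate. Every number in it is an assumption for illustration (5 V nodes, per-node boot current, converter efficiency, and a 12 V battery), not a measurement:

# startup_load.py -- purely illustrative numbers, not measurements
NODES = 66            # CPUs on the monolith
BOOT_AMPS_5V = 0.25   # assumed per-node draw at 5 V while booting
EFFICIENCY = 0.85     # assumed 12 V -> 5 V conversion efficiency

amps_5v = NODES * BOOT_AMPS_5V                 # total current on the 5 V side
amps_12v = amps_5v * 5.0 / 12.0 / EFFICIENCY   # reflected back to the 12 V battery
print("approx. %.1f A at 5 V, %.1f A from the 12 V battery" % (amps_5v, amps_12v))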

2018-04-09

I am only running 64 CPUs, since node47 now seems to be failing with "308" communication errors.

mpi@master:/master/mpi_tests$ mpirun -np 64 -f /master/mpi_tests/machinefile /master/mpi_tests/cpi
Process 2 of 64 is on node02
Process 3 of 64 is on node03
Process 5 of 64 is on node05
Process 6 of 64 is on node06
Process 7 of 64 is on node07
Process 10 of 64 is on node10
Process 9 of 64 is on node09
Process 11 of 64 is on node11
Process 13 of 64 is on node13
Process 14 of 64 is on node14
Process 18 of 64 is on node18
Process 19 of 64 is on node19
Process 20 of 64 is on node20
Process 22 of 64 is on node22
Process 21 of 64 is on node21
Process 27 of 64 is on node27
Process 28 of 64 is on node28
Process 29 of 64 is on node29
Process 24 of 64 is on node24
Process 32 of 64 is on node32
Process 30 of 64 is on node30
Process 34 of 64 is on node34
Process 36 of 64 is on node36
Process 37 of 64 is on node37
Process 38 of 64 is on node38
Process 41 of 64 is on node41
Process 47 of 64 is on node48
Process 50 of 64 is on node51
Process 44 of 64 is on node44
Process 51 of 64 is on node52
Process 54 of 64 is on node55
Process 55 of 64 is on node56
Process 49 of 64 is on node50
Process 56 of 64 is on node57
Process 48 of 64 is on node49
Process 52 of 64 is on node53
Process 53 of 64 is on node54
Process 60 of 64 is on node61
Process 57 of 64 is on node58
Process 58 of 64 is on node59
Process 46 of 64 is on node46
Process 4 of 64 is on node04
Process 8 of 64 is on node08
Process 12 of 64 is on node12
Process 16 of 64 is on node16
Process 17 of 64 is on node17
Process 15 of 64 is on node15
Process 26 of 64 is on node26
Process 23 of 64 is on node23
Process 25 of 64 is on node25
Process 33 of 64 is on node33
Process 35 of 64 is on node35
Process 31 of 64 is on node31
Process 39 of 64 is on node39
Process 40 of 64 is on node40
Process 42 of 64 is on node42
Process 43 of 64 is on node43
Process 62 of 64 is on node63
Process 59 of 64 is on node60
Process 63 of 64 is on node64
Process 64 of 64 is on node65
Process 61 of 64 is on node62
Process 45 of 64 is on node45
Process 1 of 64 is on node01
pi is approximately 3.1415926544231265, Error is 0.0000000008333334
wall clock time = 0.752705
mpi@master:/master/mpi_tests$
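
Node47 had to come out of the machinefile for that run (which is why process 47 lands on node48 above). A quick way to spot which hosts have dropped off the network is to ping everything listed in the machinefile; this is just a hypothetical helper, not one of the stock MPI tests:

# check_nodes.py -- hypothetical helper: ping every host in the MPI
# machinefile and report any that do not answer, so a failing node
# (node47 in this case) can be pulled out before the next run.
import subprocess

MACHINEFILE = "/master/mpi_tests/machinefile"

with open(MACHINEFILE) as f:
    hosts = [line.split(":")[0].strip() for line in f
             if line.strip() and not line.startswith("#")]

for host in hosts:
    # single ping with a two-second timeout, output discarded
    ok = subprocess.call(["ping", "-c", "1", "-W", "2", host],
                         stdout=subprocess.DEVNULL,
                         stderr=subprocess.DEVNULL) == 0
    print("%-8s %s" % (host, "up" if ok else "DOWN"))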

2018-04-16

Script started on Mon 16 Apr 2018 10:55:55 PM MDT
mpi@master:/master/mpi_tests$ ./stress
+ mpirun -np 65 -f /master/mpi_tests/machinefile /master/mpi_tests/system
node02 22:56:04 up 45 min, 0 users, load average: 0.16, 0.17, 0.15
node01 22:56:05 up 46 min, 0 users, load average: 0.02, 0.07, 0.11
node03 22:56:05 up 45 min, 0 users, load average: 0.12, 0.12, 0.10
node05 22:56:05 up 45 min, 0 users, load average: 0.21, 0.13, 0.12
node04 22:56:05 up 45 min, 0 users, load average: 0.06, 0.10, 0.13
node06 22:56:05 up 45 min, 0 users, load average: 0.22, 0.16, 0.13
node07 22:56:05 up 45 min, 0 users, load average: 0.07, 0.09, 0.08
node09 22:56:05 up 45 min, 0 users, load average: 0.10, 0.10, 0.10
node11 22:56:05 up 44 min, 0 users, load average: 0.09, 0.10, 0.13
node08 22:56:05 up 45 min, 0 users, load average: 0.20, 0.12, 0.13
node10 22:56:05 up 44 min, 0 users, load average: 0.02, 0.08, 0.11
node12 22:56:06 up 44 min, 0 users, load average: 0.02, 0.09, 0.13
node14 22:56:06 up 44 min, 0 users, load average: 0.13, 0.08, 0.09
node15 22:38:45 up 44 min, 0 users, load average: 0.03, 0.08, 0.13
node13 22:56:06 up 44 min, 0 users, load average: 0.05, 0.12, 0.13
node17 22:56:06 up 44 min, 0 users, load average: 0.13, 0.11, 0.07
node19 22:56:06 up 44 min, 0 users, load average: 0.14, 0.17, 0.14
node18 22:56:06 up 44 min, 0 users, load average: 0.20, 0.11, 0.10
node21 22:56:06 up 44 min, 0 users, load average: 0.30, 0.28, 0.21
node20 22:56:06 up 44 min, 0 users, load average: 0.05, 0.18, 0.21
node22 22:56:06 up 44 min, 0 users, load average: 0.09, 0.16, 0.11
node16 22:56:06 up 44 min, 0 users, load average: 0.07, 0.10, 0.11
node23 22:56:06 up 43 min, 0 users, load average: 0.03, 0.10, 0.13
node24 22:56:06 up 43 min, 0 users, load average: 0.06, 0.09, 0.09
node25 22:56:06 up 43 min, 0 users, load average: 0.05, 0.10, 0.13
node29 22:56:07 up 43 min, 0 users, load average: 0.19, 0.17, 0.15
node28 22:56:07 up 43 min, 0 users, load average: 0.09, 0.17, 0.16
node26 22:56:07 up 43 min, 0 users, load average: 0.15, 0.13, 0.14
node31 22:56:07 up 43 min, 0 users, load average: 0.03, 0.08, 0.12
node30 22:56:07 up 43 min, 0 users, load average: 0.09, 0.12, 0.13
node27 22:56:07 up 43 min, 0 users, load average: 0.08, 0.09, 0.12
node36 22:56:07 up 42 min, 0 users, load average: 0.13, 0.12, 0.14
node38 22:56:07 up 42 min, 0 users, load average: 0.50, 0.27, 0.19
node37 22:56:07 up 42 min, 0 users, load average: 0.08, 0.12, 0.18
node40 22:56:07 up 42 min, 0 users, load average: 0.06, 0.10, 0.13
node41 22:56:07 up 42 min, 0 users, load average: 0.02, 0.09, 0.13
node42 22:56:07 up 42 min, 0 users, load average: 0.12, 0.09, 0.11
node45 22:56:07 up 42 min, 0 users, load average: 0.06, 0.08, 0.12
node39 22:56:07 up 42 min, 0 users, load average: 0.31, 0.17, 0.15
node32 22:56:07 up 43 min, 0 users, load average: 0.17, 0.20, 0.15
node34 22:56:07 up 43 min, 0 users, load average: 0.22, 0.17, 0.15
node33 22:56:07 up 43 min, 0 users, load average: 0.05, 0.04, 0.05
node44 22:56:07 up 42 min, 0 users, load average: 0.05, 0.13, 0.13
node49 22:56:07 up 42 min, 0 users, load average: 0.03, 0.07, 0.08
node48 22:56:07 up 42 min, 0 users, load average: 0.07, 0.11, 0.13
node52 22:56:07 up 41 min, 0 users, load average: 0.19, 0.17, 0.14
node50 22:56:07 up 42 min, 0 users, load average: 0.15, 0.11, 0.12
node53 22:56:07 up 41 min, 0 users, load average: 0.01, 0.10, 0.12
node56 22:56:07 up 41 min, 0 users, load average: 0.04, 0.07, 0.11
node43 22:56:08 up 42 min, 0 users, load average: 0.21, 0.12, 0.13
node47 22:56:08 up 42 min, 0 users, load average: 0.02, 0.11, 0.14
node35 22:56:08 up 42 min, 0 users, load average: 0.09, 0.10, 0.07
node54 22:56:08 up 41 min, 0 users, load average: 0.16, 0.15, 0.16
node57 22:56:08 up 41 min, 0 users, load average: 0.12, 0.18, 0.15
node65 22:56:08 up 41 min, 0 users, load average: 0.12, 0.10, 0.11
node62 22:56:08 up 41 min, 0 users, load average: 0.05, 0.10, 0.13
node46 22:56:08 up 42 min, 0 users, load average: 0.15, 0.13, 0.12
node60 22:56:08 up 41 min, 0 users, load average: 0.03, 0.11, 0.13
node59 22:56:08 up 41 min, 0 users, load average: 0.03, 0.10, 0.13
node51 22:56:08 up 42 min, 0 users, load average: 0.05, 0.16, 0.15
node58 22:56:08 up 41 min, 0 users, load average: 0.13, 0.15, 0.14
node55 22:56:08 up 41 min, 0 users, load average: 0.12, 0.11, 0.07
node64 22:56:08 up 41 min, 0 users, load average: 0.08, 0.08, 0.07
node63 22:56:08 up 41 min, 0 users, load average: 0.12, 0.13, 0.14
node61 22:56:08 up 41 min, 0 users, load average: 0.06, 0.10, 0.13
+ mpirun -np 65 -f /master/mpi_tests/machinefile /master/mpi_tests/sample3
Process 1 on host node01 has the partial result of 0.024157
Process 8 on host node08 has the partial result of 0.023763
Process 9 on host node09 has the partial result of 0.023650
Process 11 on host node11 has the partial result of 0.023385
Process 14 on host node14 has the partial result of 0.022884
Process 13 on host node13 has the partial result of 0.023064
Process 18 on host node18 has the partial result of 0.022030
Process 26 on host node26 has the partial result of 0.019715
Process 41 on host node41 has the partial result of 0.013482
Process 42 on host node42 has the partial result of 0.012994
Process 2 on host node02 has the partial result of 0.024142
Process 3 on host node03 has the partial result of 0.024114
Process 4 on host node04 has the partial result of 0.024072
Process 5 on host node05 has the partial result of 0.024016
Process 6 on host node06 has the partial result of 0.023945
Process 7 on host node07 has the partial result of 0.023861
Process 10 on host node10 has the partial result of 0.023524
Process 12 on host node12 has the partial result of 0.023231
Process 17 on host node17 has the partial result of 0.022263
Process 15 on host node15 has the partial result of 0.022690
Process 19 on host node19 has the partial result of 0.021784
Process 20 on host node20 has the partial result of 0.021525
Process 21 on host node21 has the partial result of 0.021254
Process 23 on host node23 has the partial result of 0.020674
Process 22 on host node22 has the partial result of 0.020970
Process 24 on host node24 has the partial result of 0.020366
Process 28 on host node28 has the partial result of 0.019017
Process 29 on host node29 has the partial result of 0.018651
Process 31 on host node31 has the partial result of 0.017888
Process 34 on host node34 has the partial result of 0.016665
Process 33 on host node33 has the partial result of 0.017083
Process 30 on host node30 has the partial result of 0.018275
Process 35 on host node35 has the partial result of 0.016237
Process 32 on host node32 has the partial result of 0.017490
Process 25 on host node25 has the partial result of 0.020046
Process 36 on host node36 has the partial result of 0.015800
Process 37 on host node37 has the partial result of 0.015354
Process 38 on host node38 has the partial result of 0.014899
Process 39 on host node39 has the partial result of 0.014435
Process 40 on host node40 has the partial result of 0.013963
Process 45 on host node45 has the partial result of 0.011485
Process 48 on host node48 has the partial result of 0.009915
Process 49 on host node49 has the partial result of 0.009380
Process 55 on host node55 has the partial result of 0.006065
Process 57 on host node57 has the partial result of 0.004928
Process 58 on host node58 has the partial result of 0.004355
Process 64 on host node64 has the partial result of 0.000876
Process 27 on host node27 has the partial result of 0.019371
Process 43 on host node43 has the partial result of 0.012498
Process 44 on host node44 has the partial result of 0.011995
Process 46 on host node46 has the partial result of 0.010968
Process 50 on host node50 has the partial result of 0.008839
Process 51 on host node51 has the partial result of 0.008293
Process 52 on host node52 has the partial result of 0.007742
Process 54 on host node54 has the partial result of 0.006628
Process 53 on host node53 has the partial result of 0.007187
Process 56 on host node56 has the partial result of 0.005498
Process 60 on host node60 has the partial result of 0.003202
Process 47 on host node47 has the partial result of 0.010444
Process 59 on host node59 has the partial result of 0.003779
Process 63 on host node63 has the partial result of 0.001459
Process 62 on host node62 has the partial result of 0.002041
Process 65 on host node65 has the partial result of 0.000292
Process 61 on host node61 has the partial result of 0.002622
Process 16 on host node16 has the partial result of 0.022483
The result =0.999704
+ mpirun -np 65 -f /master/mpi_tests/machinefile /master/mpi_tests/cpi
Process 2 of 65 is on node02
Process 3 of 65 is on node03
Process 6 of 65 is on node06
Process 4 of 65 is on node04
Process 5 of 65 is on node05
Process 7 of 65 is on node07
Process 8 of 65 is on node08
Process 13 of 65 is on node13
Process 9 of 65 is on node09
Process 10 of 65 is on node10
Process 11 of 65 is on node11
Process 16 of 65 is on node16
Process 18 of 65 is on node18
Process 17 of 65 is on node17
Process 15 of 65 is on node15
Process 22 of 65 is on node22
Process 19 of 65 is on node19
Process 24 of 65 is on node24
Process 27 of 65 is on node27
Process 26 of 65 is on node26
Process 29 of 65 is on node29
Process 25 of 65 is on node25
Process 31 of 65 is on node31
Process 33 of 65 is on node33
Process 34 of 65 is on node34
Process 35 of 65 is on node35
Process 37 of 65 is on node37
Process 40 of 65 is on node40
Process 39 of 65 is on node39
Process 42 of 65 is on node42
Process 45 of 65 is on node45
Process 46 of 65 is on node46
Process 50 of 65 is on node50
Process 47 of 65 is on node47
Process 51 of 65 is on node51
Process 53 of 65 is on node53
Process 54 of 65 is on node54
Process 52 of 65 is on node52
Process 56 of 65 is on node56
Process 58 of 65 is on node58
Process 55 of 65 is on node55
Process 60 of 65 is on node60
Process 61 of 65 is on node61
Process 62 of 65 is on node62
Process 12 of 65 is on node12
Process 14 of 65 is on node14
Process 20 of 65 is on node20
Process 21 of 65 is on node21
Process 23 of 65 is on node23
Process 28 of 65 is on node28
Process 30 of 65 is on node30
Process 32 of 65 is on node32
Process 36 of 65 is on node36
Process 38 of 65 is on node38
Process 43 of 65 is on node43
Process 41 of 65 is on node41
Process 44 of 65 is on node44
Process 48 of 65 is on node48
Process 49 of 65 is on node49
Process 59 of 65 is on node59
Process 57 of 65 is on node57
Process 63 of 65 is on node63
Process 64 of 65 is on node64
Process 65 of 65 is on node65
Process 1 of 65 is on node01
pi is approximately 3.1415926544231270, Error is 0.0000000008333338
wall clock time = 0.772840
+ mpirun -np 65 -f /master/mpi_tests/machinefile python /master/mpi_tests/helloworld.py
Hello, Cluster! Python process 3 of 65 on node03.
Hello, Cluster! Python process 4 of 65 on node04.
Hello, Cluster! Python process 20 of 65 on node20.
Hello, Cluster! Python process 27 of 65 on node27.
Hello, Cluster! Python process 2 of 65 on node02.
Hello, Cluster! Python process 36 of 65 on node36.
Hello, Cluster! Python process 37 of 65 on node37.
Hello, Cluster! Python process 38 of 65 on node38.
Hello, Cluster! Python process 43 of 65 on node43.
Hello, Cluster! Python process 56 of 65 on node56.
Hello, Cluster! Python process 57 of 65 on node57.
Hello, Cluster! Python process 22 of 65 on node22.
Hello, Cluster! Python process 21 of 65 on node21.
Hello, Cluster! Python process 18 of 65 on node18.
Hello, Cluster! Python process 28 of 65 on node28.
Hello, Cluster! Python process 30 of 65 on node30.
Hello, Cluster! Python process 33 of 65 on node33.
Hello, Cluster! Python process 44 of 65 on node44.
Hello, Cluster! Python process 45 of 65 on node45.
Hello, Cluster! Python process 49 of 65 on node49.
Hello, Cluster! Python process 51 of 65 on node51.
Hello, Cluster! Python process 53 of 65 on node53.
Hello, Cluster! Python process 58 of 65 on node58.
Hello, Cluster! Python process 63 of 65 on node63.
Hello, Cluster! Python process 62 of 65 on node62.
Hello, Cluster! Python process 5 of 65 on node05.
Hello, Cluster! Python process 7 of 65 on node07.
Hello, Cluster! Python process 8 of 65 on node08.
Hello, Cluster! Python process 10 of 65 on node10.
Hello, Cluster! Python process 11 of 65 on node11.
Hello, Cluster! Python process 12 of 65 on node12.
Hello, Cluster! Python process 13 of 65 on node13.
Hello, Cluster! Python process 9 of 65 on node09.
Hello, Cluster! Python process 16 of 65 on node16.
Hello, Cluster! Python process 17 of 65 on node17.
Hello, Cluster! Python process 23 of 65 on node23.
Hello, Cluster! Python process 29 of 65 on node29.
Hello, Cluster! Python process 34 of 65 on node34.
Hello, Cluster! Python process 39 of 65 on node39.
Hello, Cluster! Python process 41 of 65 on node41.
Hello, Cluster! Python process 50 of 65 on node50.
Hello, Cluster! Python process 52 of 65 on node52.
Hello, Cluster! Python process 54 of 65 on node54.
Hello, Cluster! Python process 64 of 65 on node64.
Hello, Cluster! Python process 48 of 65 on node48.
Hello, Cluster! Python process 26 of 65 on node26.
Hello, Cluster! Python process 32 of 65 on node32.
Hello, Cluster! Python process 6 of 65 on node06.
Hello, Cluster! Python process 60 of 65 on node60.
Hello, Cluster! Python process 55 of 65 on node55.
Hello, Cluster! Python process 47 of 65 on node47.
Hello, Cluster! Python process 15 of 65 on node15.
Hello, Cluster! Python process 59 of 65 on node59.
Hello, Cluster! Python process 40 of 65 on node40.
Hello, Cluster! Python process 42 of 65 on node42.
Hello, Cluster! Python process 31 of 65 on node31.
Hello, Cluster! Python process 25 of 65 on node25.
Hello, Cluster! Python process 46 of 65 on node46.
Hello, Cluster! Python process 65 of 65 on node65.
Hello, Cluster! Python process 35 of 65 on node35.
Hello, Cluster! Python process 14 of 65 on node14.
Hello, Cluster! Python process 19 of 65 on node19.
Hello, Cluster! Python process 1 of 65 on node01.
Hello, Cluster! Python process 61 of 65 on node61.
Hello, Cluster! Python process 24 of 65 on node24.
^CCtrl-C caught... cleaning up processes

mpi@master:/master/mpi_tests$ exit
exit
Script done on Mon 16 Apr 2018 10:57:35 PM MDT
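
For reference, the "Hello, Cluster!" lines above come from /master/mpi_tests/helloworld.py. A minimal mpi4py script along the following lines would produce that output; this is a reconstruction from the log, so the real file may differ in detail:

# helloworld.py -- reconstruction of a minimal mpi4py hello-world that
# would print the "Hello, Cluster!" lines captured above.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()            # 0-based rank of this process
size = comm.Get_size()            # total number of MPI processes (65 here)
name = MPI.Get_processor_name()   # hostname, e.g. node03

print("Hello, Cluster! Python process %d of %d on %s." % (rank + 1, size, name))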

2018-04-18

I am very pleased with the cluster performance. I am also able to generate significant bandwidth for testing and studying packet-level network attacks. Moving off wireless has been a vast improvement in network performance, even though I am using cheap Netgear consumer-grade unmanaged switches.

I still need to run some benchmarks to accurately document what the performance actually is, and will do that soon.
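
A simple point-to-point bandwidth test between two nodes would be a reasonable first benchmark for what the new switches deliver. Here is a sketch using mpi4py; the file name is hypothetical and it is not in /master/mpi_tests yet:

# bw_test.py -- hypothetical sketch: ping-pong an 8 MB buffer between
# rank 0 and rank 1 and report the approximate bandwidth.
# Run with e.g.: mpirun -np 2 -f /master/mpi_tests/machinefile python bw_test.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

SIZE_MB = 8
REPS = 20
buf = bytearray(SIZE_MB * 1024 * 1024)

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(REPS):
    if rank == 0:
        comm.Send([buf, MPI.BYTE], dest=1, tag=0)
        comm.Recv([buf, MPI.BYTE], source=1, tag=1)
    elif rank == 1:
        comm.Recv([buf, MPI.BYTE], source=0, tag=0)
        comm.Send([buf, MPI.BYTE], dest=0, tag=1)
t1 = MPI.Wtime()

if rank == 0:
    mb_moved = 2.0 * REPS * SIZE_MB   # each round trip moves the buffer twice
    print("approx. bandwidth: %.1f MB/s" % (mb_moved / (t1 - t0)))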

2018-04-23

mpi@master:/master/tmp/john/magnumripper-JohnTheRipper-f110f98/run$ ./john --status=unixmd5|more
1 0g 0:09:15:43  2/3 0g/s 6.663p/s 2442c/s 2442C/s
2 0g 0:09:15:57  2/3 0g/s 6.538p/s 2413c/s 2413C/s
3 0g 0:09:15:22  2/3 0g/s 6.620p/s 2419c/s 2419C/s
4 0g 0:09:15:21  2/3 0g/s 6.574p/s 2421c/s 2421C/s
5 0g 0:09:15:57  2/3 0g/s 6.535p/s 2417c/s 2417C/s
6 0g 0:09:16:11  2/3 0g/s 6.650p/s 2415c/s 2415C/s
7 0g 0:09:14:55  2/3 0g/s 6.666p/s 2421c/s 2421C/s
8 0g 0:09:15:46  2/3 0g/s 6.618p/s 2418c/s 2418C/s
9 0g 0:09:15:54  2/3 0g/s 6.628p/s 2417c/s 2417C/s
10 0g 0:09:15:37  2/3 0g/s 6.632p/s 2420c/s 2420C/s
11 0g 0:09:16:28  2/3 0g/s 6.626p/s 2418c/s 2418C/s
12 0g 0:09:16:49  2/3 0g/s 6.645p/s 2419c/s 2419C/s
13 0g 0:09:15:36  2/3 0g/s 6.584p/s 2420c/s 2420C/s
14 0g 0:09:15:52  2/3 0g/s 6.647p/s 2418c/s 2418C/s
15 0g 0:09:14:45  2/3 0g/s 6.622p/s 2420c/s 2420C/s
16 0g 0:09:16:00  2/3 0g/s 6.585p/s 2418c/s 2418C/s
17 0g 0:09:15:49  2/3 0g/s 6.581p/s 2418c/s 2418C/s
18 0g 0:09:15:51  2/3 0g/s 6.586p/s 2417c/s 2417C/s
19 0g 0:09:15:22  2/3 0g/s 6.498p/s 2419c/s 2419C/s
20 0g 0:09:15:32  2/3 0g/s 6.534p/s 2420c/s 2420C/s
21 0g 0:09:16:01  2/3 0g/s 6.388p/s 2360c/s 2360C/s
22 0g 0:09:14:42  2/3 0g/s 6.539p/s 2420c/s 2420C/s
23 0g 0:09:14:57  2/3 0g/s 6.564p/s 2421c/s 2421C/s
24 0g 0:09:15:50  2/3 0g/s 6.480p/s 2418c/s 2418C/s
25 0g 0:09:16:51  2/3 0g/s 6.484p/s 2420c/s 2420C/s
26 0g 0:09:15:35  2/3 0g/s 6.549p/s 2419c/s 2419C/s
27 0g 0:09:16:09  2/3 0g/s 6.542p/s 2418c/s 2418C/s
28 0g 0:09:16:14  2/3 0g/s 6.364p/s 2361c/s 2361C/s
29 0g 0:09:15:02  2/3 0g/s 4.737p/s 1693c/s 1693C/s
30 0g 0:09:15:22  2/3 0g/s 6.300p/s 2332c/s 2332C/s
31 0g 0:09:16:19  2/3 0g/s 6.538p/s 2420c/s 2420C/s
32 0g 0:09:15:29  2/3 0g/s 6.550p/s 2420c/s 2420C/s
33 0g 0:09:15:41  2/3 0g/s 6.526p/s 2420c/s 2420C/s
34 0g 0:09:15:28  2/3 0g/s 6.570p/s 2411c/s 2411C/s
35 0g 0:09:15:06  2/3 0g/s 6.589p/s 2419c/s 2419C/s
36 0g 0:09:16:18  2/3 0g/s 6.530p/s 2412c/s 2412C/s
37 0g 0:09:15:04  2/3 0g/s 6.506p/s 2414c/s 2414C/s
38 0g 0:09:15:54  2/3 0g/s 6.500p/s 2409c/s 2409C/s
39 0g 0:09:15:20  2/3 0g/s 6.500p/s 2416c/s 2416C/s
40 0g 0:09:16:02  2/3 0g/s 6.484p/s 2421c/s 2421C/s
41 0g 0:09:14:50  2/3 0g/s 6.461p/s 2413c/s 2413C/s
42 0g 0:09:16:15  2/3 0g/s 6.420p/s 2418c/s 2418C/s
43 0g 0:09:15:14  2/3 0g/s 6.459p/s 2421c/s 2421C/s
44 0g 0:09:14:48  2/3 0g/s 6.455p/s 2417c/s 2417C/s
45 0g 0:09:15:08  2/3 0g/s 6.501p/s 2422c/s 2422C/s
46 0g 0:09:14:46  2/3 0g/s 6.480p/s 2421c/s 2421C/s
47 0g 0:09:15:41  2/3 0g/s 6.426p/s 2417c/s 2417C/s
48 0g 0:09:16:01  2/3 0g/s 6.489p/s 2418c/s 2418C/s
49 0g 0:09:16:13  2/3 0g/s 6.523p/s 2418c/s 2418C/s
50 0g 0:09:16:13  2/3 0g/s 6.508p/s 2417c/s 2417C/s
51 0g 0:09:16:14  2/3 0g/s 6.511p/s 2412c/s 2412C/s
52 0g 0:09:14:58  2/3 0g/s 6.574p/s 2421c/s 2421C/s
53 0g 0:09:15:15  2/3 0g/s 6.579p/s 2422c/s 2422C/s
54 0g 0:09:15:32  2/3 0g/s 6.574p/s 2420c/s 2420C/s
55 0g 0:09:16:11  2/3 0g/s 6.575p/s 2420c/s 2420C/s
56 0g 0:09:14:48  2/3 0g/s 6.581p/s 2421c/s 2421C/s
57 0g 0:09:14:50  2/3 0g/s 6.585p/s 2423c/s 2423C/s
58 0g 0:09:15:32  2/3 0g/s 6.574p/s 2421c/s 2421C/s
59 0g 0:09:15:53  2/3 0g/s 6.571p/s 2419c/s 2419C/s
60 0g 0:09:16:20  2/3 0g/s 6.569p/s 2420c/s 2420C/s
61 0g 0:09:16:13  2/3 0g/s 6.571p/s 2422c/s 2422C/s
62 0g 0:09:15:35  2/3 0g/s 6.564p/s 2420c/s 2420C/s
63 0g 0:09:15:55  2/3 0g/s 6.556p/s 2420c/s 2420C/s
64 0g 0:09:15:53  2/3 0g/s 6.551p/s 2418c/s 2418C/s
65 0g 0:09:15:00  2/3 0g/s 6.555p/s 2419c/s 2419C/s
mpi@master:/master/tmp/john/magnumripper-JohnTheRipper-f110f98/run$
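
One useful number to pull from the status dump above is the aggregate rate: each node is doing roughly 2,400 c/s, so the 65 nodes together work out to something like 157,000 c/s. A small helper to sum the c/s column exactly (a hypothetical script, fed from the same john --status command):

# sum_rates.py -- hypothetical helper: sum the per-node c/s figures
# from "john --status" output, e.g.:
#   ./john --status=unixmd5 | python sum_rates.py
import re
import sys

total = 0.0
for line in sys.stdin:
    # lines look like: "12 0g 0:09:16:49  2/3 0g/s 6.645p/s 2419c/s 2419C/s"
    m = re.search(r'([\d.]+)c/s', line)
    if m:
        total += float(m.group(1))

print("aggregate: %.0f c/s" % total)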