Pyrit in a lab

A student at the John von Neumann Faculty of Informatics at Óbuda University turned a computer lab (with his teacher's blessing) into a cluster of 19 Pyrit-running nodes. With a GeForce GTX 260 GPU in each node, he achieved a combined rate of 126,000 Pairwise Master Keys per second.
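
The post doesn't spell out how the lab was wired up, but Pyrit's stock clustering runs over XML-RPC and needs very little configuration. A minimal sketch, assuming the option names from Pyrit's default ~/.pyrit/config and a made-up IP address for the head node:

    # On the head node (the machine running the attack), in ~/.pyrit/config:
    rpc_server = true
    rpc_announce = true

    # On each of the 19 lab nodes, in ~/.pyrit/config, only if LAN
    # broadcast discovery doesn't reach them (address is hypothetical):
    rpc_knownclients = 192.168.1.10

    # Then start serving on every lab node:
    pyrit serve

Once the serving nodes are up, the head node runs its attacks exactly as it would on a single machine, and each serve instance contributes its local cores to the combined PMKs/s.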

While these GPUs are by no means up to date, it is still an impressive example: 2,064,384,000 rounds of SHA-1, or 123 gigabytes of data, crunched every second by a network of 19 low-cost PCs.
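
Those numbers follow directly from how WPA derives a PMK: PBKDF2-HMAC-SHA1 over the passphrase and SSID, 4,096 iterations, 256 bits of output. Two output blocks times 4,096 HMACs times two SHA-1 compressions per HMAC gives 16,384 SHA-1 rounds per PMK, each over a 64-byte block. A few lines of Python reproduce the figures (only the rate and the node count come from the post):

    # One PMK = PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 iterations, 256 bit).
    # 256 bit of output means two PBKDF2 blocks, each block runs 4096
    # HMAC-SHA1 iterations, and one HMAC-SHA1 (with the pads precomputed,
    # as Pyrit does) costs two SHA-1 compressions of a 64-byte block.
    pmks_per_second = 126000
    sha1_per_pmk = 2 * 4096 * 2           # blocks * iterations * compressions
    sha1_per_second = pmks_per_second * sha1_per_pmk

    print(sha1_per_second)                # 2064384000 rounds of SHA-1
    print(sha1_per_second * 64 // 2**30)  # 123 (GiB hashed per second)
    print(pmks_per_second // 19)          # 6631 PMKs/s per node

That last figure, roughly 6,600 PMKs per second per node, is in the range a single GTX 260 reaches on its own, which suggests the cluster scaled essentially linearly.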

2 Comments

  1. I would like to know how the hell he got this to work; below is what everyone else is getting when they run pyrit serve:

    Serving 1 active clients; 0 PMKs/s; 0.0 TTS
    Exception in thread Thread-119:
    Traceback (most recent call last):
      File "/usr/lib/python2.6/threading.py", line 525, in __bootstrap_inner
        self.run()
      File "/usr/local/lib/python2.6/dist-packages/cpyrit/network.py", line 50, in run
        self.server.gather(self.client.uuid, 5000)
      File "/usr/lib/python2.6/xmlrpclib.py", line 1199, in __call__
        return self.__send(self.__name, args)
      File "/usr/lib/python2.6/xmlrpclib.py", line 1489, in __request
        verbose=self.__verbose
      File "/usr/lib/python2.6/xmlrpclib.py", line 1253, in request
        return self._parse_response(h.getfile(), sock)
      File "/usr/lib/python2.6/xmlrpclib.py", line 1392, in _parse_response
        return u.close()
      File "/usr/lib/python2.6/xmlrpclib.py", line 838, in close
        raise Fault(**self._stack[0])
    Fault:

  2. I too

