I’ve just posted demonstration code for a remote-exploit against Aircrack-ng to the svn-repository. It causes aircrack-ng and airdecap-ng to crash when reading specially crafted dump-files and can also crash remote airodump-ng sessions by sending specially crafted packets over the air, kicking out those pesky folks who try to read our traffic. I am also 90% sure that this denial-of-service can be escalated to remote-code-execution by carefully introducing new stations to airodump-ng (for memory allocation) and then causing a heap corruption as demonstrated.
There is an example dump-file here. It will do no harm to your computer ;-)
From the code’s comments:
A remote-exploit against the aircrack-ng tools. Tested up to svn r1675.
The tools’ code responsible for parsing IEEE802.11-packets assumes the
self-proclaimed length of an EAPOL-packet to be correct and never to exceed
an (arbitrary) maximum size of 256 bytes for packets that are part of the
EAPOL-authentication. We can exploit this by letting the code parse packets that
a) proclaim to be larger than they really are, possibly causing the code
to read from invalid memory locations while copying the packet;
b) really do exceed the maximum size allowed and overflow data structures
allocated on the heap, overwriting libc’s allocation-related
structures. This causes heap-corruption.
Both problems lead either to a SIGSEGV or a SIGABRT, depending on the code-
path. Careful layout of the packet’s content can even possibly alter the
instruction-flow through the already well known heap-corruption paths
in libc. Playing with the proclaimed length of the EAPOL-packet and the
size and content of the packet’s padding quickly leads to various
assertion errors during calls to free(). This reveals the possibility to
gain control over $EIP.
Given that we have plenty of room for payload and that the tools are
usually executed with root-privileges, we should be able to have a
single-packet-own-everything exploit on our hands. As the attacker can
cause the various tools to do memory-allocations at will (through
faking the appearance of previously unknown clients), the resulting
exploit-code should have a high probability of success.
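The pattern described in the comments above can be sketched in a few lines of Python (a hypothetical, simplified field layout for illustration — not aircrack-ng's actual code, which is C). The point is the difference between trusting the self-proclaimed length and validating it against the real packet and buffer sizes:

```python
import struct

MAX_EAPOL_SIZE = 256  # the (arbitrary) maximum buffer size the tools assume

def parse_eapol_unsafe(packet):
    # EAPOL header: version (1 byte), type (1 byte),
    # self-proclaimed body length (2 bytes, big-endian).
    version, ptype, length = struct.unpack_from(">BBH", packet, 0)
    # BUG: trusts the proclaimed length. The C equivalent (a memcpy of
    # `length` bytes) reads past the real packet and past the fixed
    # 256-byte buffer; Python slicing merely truncates, hiding the problem.
    return packet[4:4 + length]

def parse_eapol_safe(packet):
    version, ptype, length = struct.unpack_from(">BBH", packet, 0)
    if length > len(packet) - 4:
        raise ValueError("proclaimed length exceeds actual packet size")
    if 4 + length > MAX_EAPOL_SIZE:
        raise ValueError("EAPOL frame exceeds the fixed buffer")
    return packet[4:4 + length]

# A crafted frame proclaiming 0xffff bytes of body while carrying only 8:
crafted = struct.pack(">BBH", 1, 3, 0xFFFF) + b"\x00" * 8
```

In C, the unsafe variant is exactly case a) from the comments; feeding it a body that really is larger than 256 bytes gives case b).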
Please note that the latest ATI-driver is still broken: The second GPU on multi-GPU cards (e.g. HD5970) produces random, invalid results. There is also a random-segfault bug which *should*, however, not be triggered when using Pyrit through the command-line client.
Masterzorag has posted a video on his blog about running Pyrit on a Playstation3, utilizing the Cell B.E. processor. As far as I know, this is the very first case of using the Playstation3 or the Cell B.E. for attacking WPA(2)-PSK.
Please notice that this video and the related information are provided by Masterzorag. Please ask questions on his blog; I’m just hotlinking the video here :-)
I’ve changed the way Pyrit stores handshake information internally: The previous version basically tried every possible combination, the latest svn-revision uses very efficient data-structures from the beginning. This cuts the time to parse a specially crafted file with hundreds of handshakes from almost four minutes to around two seconds.
Pyrit also now shoots down each and every example-handshake I’m currently aware of, usually with the first handshake-combination it tries.
In other news: The default workunit size can now be configured with the key ‘workunit_size‘ in Pyrit’s configuration file (usually ~/.pyrit/config). The default size has also been raised to 50,000 passwords. This increases the memory footprint while importing passwords but reduces I/O overhead later on. People with 4 GB of RAM and fast hardware might want to set this to a value of 100,000 or 150,000.
Please note that the default workunit size only comes into effect while populating a database with passwords (import_passwords / import_unique_passwords); changing the key’s value in Pyrit’s configuration file has no effect on existing databases.
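Assuming the usual key = value format of ~/.pyrit/config, raising the workunit size on a well-equipped machine would look like this (all other keys in the file left untouched):

```
workunit_size = 100000
```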
Another big ToDo solved: Handling large capture-files used to be painfully slow in Pyrit. From version 0.3.1-dev r232 on, Pyrit therefore uses libpcap (the heart of tcpdump) to parse capture-files and to bind to live capture-sources. Due to some new BPF-filter trickery, reading and parsing a capture-file is super fast from now on.
For example, reading a 23 MB file (500,000 packets) used to take about 1 minute, 30 seconds on my MacBook; the same thing is now done in 7 seconds!
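Only a handful of frame-types are of any interest to Pyrit: beacons and association-requests (for the ESSID) and EAPOL-frames (for the handshake). A tcpdump-style BPF-expression of roughly this shape (hypothetical — not necessarily the exact filter Pyrit compiles) lets libpcap drop everything else long before it reaches Python:

```
type mgt subtype beacon or type mgt subtype assoc-req or ether proto 0x888e
```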
The new libpcap-core also allows us to read packets from live devices: The option “-r” of the command stripLive can take the name of a network-device (e.g. wlan0) instead of a filename. You can therefore have Pyrit gather packets directly from the air and produce very small capture-files like this:
pyrit -r wlan0 -o wlan0.cap stripLive
Notice that Pyrit makes no attempt to put the device into monitor mode or change channels. You should use a tool like Kismet for that and let Pyrit harvest the packets.
About two weeks ago the update to Pyrit 0.3.1-svn r226 made Pyrit more picky about how it parses the information from a capture-file and reconstructs the fourway-handshake. This change solved some cases where Pyrit would combine packets from overlapping or incomplete handshakes, which made the task of finding the correct PMK impossible. Being more strict fixed some non-working capture-files but opened a whole can of worms on the other end: Pyrit would sometimes pick up packets from an incomplete handshake, look for the remaining parts and ignore other, more valuable packet-combinations.
The latest development-revision Pyrit 0.3.1-svn r231 brings relief to this problem and solves a to-do that had been marked as such since the packet-handling code was first checked in. Pyrit now has the ability to analyse, parse and work with multiple authentications and to rate their quality. This brings a huge increase to Pyrit’s ability to work with packet-capture-files.
Here is an example of how the result of analysing a capture-file may look from now on:
#1: AccessPoint 00:0b:86:c2:a4:85 (‘linksys’):
  #1: Station 00:13:ce:55:98:ef, 3 handshake(s):
    #1: Good quality (HMAC_SHA1_AES)
    #2: Good quality (HMAC_SHA1_AES)
    #3: Good quality (HMAC_SHA1_AES)
As you can see, Pyrit has detected three possible handshakes (WPA2-PSK in this case) and rated them as being of good quality. The quality of a handshake is (currently) determined like this:
- “good” handshakes include the challenge from the AccessPoint, the response from the Station and the confirmation from the AccessPoint.
- “workable” handshakes include only the response from the Station and the confirmation from the AccessPoint.
- “bad” handshakes include only the challenge from the AccessPoint and the response from the Station (but not the confirmation).
Multiple handshakes of the same quality (like in the example above) are rated by how close to each other the packets constituting the handshake are. That way, vaguely related packets that accidentally form a complete handshake are not completely ignored but given little priority.
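The rating scheme just described can be sketched like this (a simplified illustration with hypothetical names — the real code additionally tracks nonces, replay-counters and so on):

```python
# Quality tiers as described above; higher is better.
QUALITY = {"good": 3, "workable": 2, "bad": 1}

def classify(handshake):
    """handshake: dict of booleans for the three relevant frames:
    'challenge' (AP -> STA), 'response' (STA -> AP),
    'confirmation' (AP -> STA)."""
    if handshake["challenge"] and handshake["response"] and handshake["confirmation"]:
        return "good"
    if handshake["response"] and handshake["confirmation"]:
        return "workable"
    if handshake["challenge"] and handshake["response"]:
        return "bad"
    return None  # not a usable handshake at all

def spread(handshake):
    # Tie-breaker: packets that lie close together in the capture
    # are less likely to be an accidental combination.
    return max(handshake["indexes"]) - min(handshake["indexes"])

def rank(handshakes):
    usable = [h for h in handshakes if classify(h) is not None]
    # Best quality first; among equal quality, smallest spread first.
    return sorted(usable, key=lambda h: (-QUALITY[classify(h)], spread(h)))
```

Two "good" handshakes thus outrank any "workable" one, and among equals the tightest packet-group wins.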
In keeping with the original behaviour, Pyrit picks the single most valuable handshake by itself and works only with that one. The attack-modes therefore now understand a new option “--all-handshakes“. When this option is passed:
- attack_passthrough attacks all workable handshakes at the same time. This does not affect performance as the bottleneck is computing the PMK.
- attack_batch and attack_db work down the list of possible handshakes one after the other.
- attack_cowpatty attacks all workable handshakes at the same time. This impacts performance (e.g. 2 handshakes == 50% throughput).
Additionally, the behaviour of strip and stripLive has changed: Pyrit no longer writes only the (selected) packets from a single authentication to the new file but all authentication-related packets.
Please notice that Pyrit used to create an index on one of its tables in an unprofitable column-order. This caused a full table scan for operations like “eval” and “delete_essid”, which may take a very long time on large (hundreds of millions of entries) databases.
This problem is about to get fixed for 0.3.1-svn so that Pyrit creates new databases with their indexes in proper column-order. Older databases can be fixed by creating a new index like this (use the database-server’s sql-console):
CREATE INDEX myrescueidx ON results (essid_id, _key);
People using the on-disk (“file://”) storage backend are not affected.
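Why the column-order matters can be seen in a small sqlite3 session (a simplified stand-in schema, not Pyrit's actual one): the query planner can only use an index whose leftmost column matches the WHERE-clause, so an index leading with _key is useless for lookups by essid_id alone.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# Simplified stand-in for Pyrit's results-table.
cur.execute("CREATE TABLE results (essid_id INTEGER, _key TEXT, numpmk INTEGER)")
# An index leading with _key cannot serve a lookup by essid_id alone...
cur.execute("CREATE INDEX badidx ON results (_key, essid_id)")
# ...while one leading with essid_id can:
cur.execute("CREATE INDEX myrescueidx ON results (essid_id, _key)")

plan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM results WHERE essid_id = ?", (1,)
).fetchall()
# The plan shows a SEARCH using myrescueidx instead of a full table scan.
```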