Saturday, September 24, 2016

Splunk Stacking Redline and MIR host-based forensic artifacts

By Tony Lee, Max Moerles, Ian Ahl, and Kyle Champlin

Introduction

Mandiant's free forensics tool, Redline®, is well-known for its powerful ability to hunt for evil using IOCs, collect host-based artifacts, and analyze that collected data.  While this gratis capability is fantastic, it is limited to analyzing data from only one host at a time.  But imagine the power and insight that can be gained from looking at a large set of host-based data, especially when the hosts are standardized using a base build or gold disk image.  This would allow analysts to stack the data and use statistics to find outliers and anomalies within the network.  Discovered anomalies could include:

• Unique services within an organization (names, paths, service owners)
• Unique processes within an organization (names, paths, process owners)
• Unique persistent binaries (names, paths, owners)
• Drive letters/mappings that don't meet corporate standards
• Infrequent user authentication (such as forgotten or service accounts)

Any of the above anomalies could be misconfigurations or incidents--neither of which should go unnoticed or unresolved.
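As a quick taste of what stacking looks like in practice, a Splunk search along these lines surfaces the rarest services across a fleet (a minimal sketch; the sourcetype and field names are illustrative assumptions, not necessarily the app's exact schema):

sourcetype="mir:services"
| stats dc(host) AS host_count, values(path) AS paths BY name
| sort host_count
| head 20

Services that appear on only one or two hosts rise to the top, ready for an analyst to triage.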

Requirements and Prototyping

To solve the stacking problem, we had four major requirements.  We needed a platform that could:

1. Monitor a directory for incoming data
2. Easily parse XML data (since both Redline and MIR output evidence to XML)
3. Handle large files and break them into individual events
4. Apply "big data" analytics to lots of hosts and lots of data


After looking at the requirements and experimenting a bit, Splunk seemed like a good fit.  We started our prototyping by parsing a few output files and creating dashboards within our freely available side project, the Splunk Forensic Investigator App.  The architecture looks like the following:

Figure 1:  Architecture required to process Redline and MIR files within Splunk

We gave this app the ability to process just a few Redline and MIR output file types, such as system, network, and drivers.  Then we solicited feedback and were pleased with the response.
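For the curious, requirements 1 through 3 map to standard Splunk plumbing: a monitor input watching a drop directory, plus props settings that break one large XML export into individual events.  The sketch below shows the general idea; the path, sourcetype name, and line-breaking regex are illustrative assumptions, not the app's actual settings.

inputs.conf:

[monitor:///opt/mir_drop_dir]
sourcetype = mir:xml
disabled = false

props.conf:

[mir:xml]
# Break the export into one event per XML item record (illustrative regex)
LINE_BREAKER = ([\r\n]+)(?=<\w+Item)
SHOULD_LINEMERGE = false
TRUNCATE = 0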

Results

Since the prototype gained interest, we continued the development effort, and the Splunk Forensic Investigator app now handles the following 15 output types:

System
Network
Processes
Services
Ports
Tasks
Prefetch
ShimCache
DNS
User Accounts
URL History
Driver Modules
Persistence
File Listings
Event Logs

After installation and setup, the first dashboard you will see when processing MIR and Redline output is the MIR Analytics dashboard.  This provides heads-up awareness of the number of hosts processed, the number of individual events, top source types, top hosts, and much more, as shown in Figure 2.

Figure 2:  Main MIR Analytics dashboard

Additionally, every processed output type includes both visualization dashboards and analysis dashboards.  Visualization dashboards are designed to flush out anomalies using statistics such as counts, unique counts, and most and least frequent events.  An example can be seen in Figure 3.

Figure 3:  Example visualization dashboard which shows least and most common attributes
The analysis dashboards parse the XML output from Redline and MIR and display it in a human-readable and searchable format.  An example can be seen below in Figure 4.

Figure 4:  Example analysis dashboard which shows raw event data

Conclusion

If you use Redline or MIR and would like to stack data from multiple hosts, feel free to download the latest version of the Splunk Forensic Investigator App.  Follow the instructions on the Splunk download page and you should be up and running in no time.  This work could also be extended to HX, but that will most likely require a bit of pre-processing: reading the manifest.json file first to determine the contents of the randomized file names.  We hope this is useful for other FireEye/Mandiant/Splunk enthusiasts.

Head nod to the "Add-on for OpenIOC by Megan" for ideas:  https://splunkbase.splunk.com/app/1517/ 

Monday, June 6, 2016

Event acknowledgement using Splunk KV Store

By Tony Lee


Introduction

Whether you use Splunk for operations, security, or any other purpose, it can be helpful to be able to acknowledge events and add notes.  Splunk provides a few different methods to accomplish this task:  using an external database, writing to files, or using the App Key Value Store (aka KV Store).  The problem with an external database is that it requires another system to provision and protect, and it can add unwanted complexity.  Writing to files can be problematic in a distributed Splunk architecture that may use clustered or non-clustered components.  The last option, the Splunk KV Store, appears to be Splunk's current recommendation, but it can also seem complex at first--thus we will do our best to break it down in this article.

In the most basic explanation, the KV Store allows users to write information to Splunk and recall it at a later time.  Furthermore, KV Store lookups can be used to augment your event data by mapping event fields to fields assigned in your App Key Value Store collections. KV Store lookups can be invoked through REST endpoints or by using the following SPL search commands: lookup, inputlookup, and outputlookup.  REST commands can require additional permissions, so this article will look at possibilities using the search commands.

References

Before we get started, we will list some references that helped in our understanding of the Splunk KV Store:
http://docs.splunk.com/Documentation/Splunk/latest/Knowledge/ConfigureKVstorelookups
http://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Outputlookup
http://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Inputlookup
http://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Lookup

Deciding on the fields

For this example, we wanted to add a couple of fields to augment our event data:  an acknowledgement field (we will call this Ack) and a notes field (we will call this Notes).  We will match the unique event id field against a KV Store field also called id.

So, in summary, we have id, Ack, and Notes.  Splunk also uses an internal _key field, but we will not reference this directly in our efforts.

Getting started

Per our references above on configuring KV Store lookups, we will need two supporting configurations:

  1. A collections.conf file specifying our collection name
  2. A stanza in transforms.conf to specify kvstore parameters

cat collections.conf 
#
# Splunk app KV Store collection file
#
[acknotescoll]



head transforms.conf 

[acknotes]
external_type = kvstore
collection = acknotescoll
fields_list = _key, id, Ack, Notes

Interacting with KV Store using search

The reference links provide helpful examples, but they do not cover everything necessary.  Some of this was discovered through a bit of trial and error--especially the flags and their resulting behavior.  Below we list the major actions that can be taken and the search commands necessary to perform them:

Write new record:
| localop | stats count | eval id=101 | eval Ack="Y" | eval Notes="These are notes for event 101"| outputlookup acknotes append=True

Note:  Without append=True, the entire KV Store is erased and only this record will be present


Update a record (only works if the record already exists):
| inputlookup acknotes where id="100" | eval Ack="N" | eval Notes="We can choose not to ack event 100" | outputlookup acknotes append=True

Note:  Without append=True, the entire KV Store is erased and only this record will be present


Read all records:
| inputlookup acknotes


Read a record (A new search):
| inputlookup acknotes where id="$id$" | table _key, id, Ack, Notes


Read a record (combined with another search):
<search> | lookup acknotes id OUTPUT Ack, Notes | table id, Ack, Notes

Note:  Unlike inputlookup, the lookup command does not take a where clause; it matches each event's id field against the KV Store and appends the corresponding Ack and Notes fields.

Limitation and work around

Unfortunately, it does not look like Splunk has a single search command/method to update a record but create it if it does not already exist (an "upsert").  I may be mistaken about this and hope that I am missing some clever flag, so feel free to leave comments in the feedback section below.  To get around this limitation, we first created a "simple" search command to check for the existence of a record.

Determine if record exists:
| inputlookup acknotes where id="108" | appendpipe [stats count | where count==0] | eval execute=if(isnull(id),"Record Does Not Exist","Record Exists!") | table execute

Example of a record that exists


Example of record that does not exist


Conditional update:
Now that we can determine if a record exists and we know how to create a new record and update an existing record, we can combine all three to modify and/or create entries depending on their existence.

<query>| inputlookup acknotes where id="$id$" | appendpipe [stats count | where count==0] | eval execute=if(isnull(id),"| localop | stats count | eval id=$id$ | eval Ack=\"$Ack$\" | eval Notes=\"$Note$\" | outputlookup acknotes append=True","| inputlookup acknotes where id=\"$id$\" | eval Ack=\"$Ack$\" | eval Notes=\"$Note$\" | outputlookup acknotes append=True") | eval kvid=$id$ | eval kvack="$Ack$" | eval kvnote="$Note$" | eval Submit="Click me to Submit" | table kvid, kvack, kvnote, execute, Submit</query>

Results

These are just some examples of what is possible.

You could create an event acknowledgement page

Event acknowledgement page

Once the fields at the top are filled in with the event id, acknowledgement, and notes, the dashboard builds the command to either update or add a new entry in the KV Store.  Clicking the Submit hyperlink actually runs that command and modifies the KV Store.

Event acknowledgement page filled out and waiting for click to submit

Once the data is populated in the KV Store, these records can be mapped back to the original events to augment the data for analysts.

Original event data with KV Store augmentation

Conclusion

Hopefully this helps expose some of the interesting possibilities of using Splunk's KV Store to create an event acknowledgement/ticketing system using search operations.  Feel free to leave feedback below--especially if there is an easier search operation for updating a record or creating it when it does not already exist.  Thanks for reading.

Sunday, May 8, 2016

Forensic Investigator Splunk App - Version 1.1.4

By Tony Lee


Introduction

Our last release, version 1.1.3, was a pretty exciting one with new tools such as the chat program, link extractor, and various monitoring tools.  This time we focused on adding host enumeration tools that are useful when trying to discover information about a remote host.  In addition, we added a bulk search option that lets users search on a list of items such as MD5 hashes, IP addresses, or URLs.  Here is what we have in store for you in version 1.1.4, which is now available for free via the Splunk App store.

High Level

New Features in v1.1.4
 - Updated Investigator Chat 2.0!
 - Added Ping tool (Host -> Ping)
 - Added SMB Share Viewer (Host -> SMB Share Viewer)
 - Added NetBIOS Viewer (Host -> NetBIOS Viewer)
 - Added Port scanner (Host -> Port Scanner)
 - Added Banner grabber (Host -> Banner Grabber)
 - Added Bulk searching of data using any field (Toolbox -> Bulk Search - Wild)
 - Added Bulk searching of data using a specific field (Toolbox -> Bulk Search - Field)
 - Added ASCII Table cheatsheet (Toolbox -> Cheat sheets -> ASCII Table)
 - Added Ports and services cheatsheet (Toolbox -> Cheat sheets -> Ports and Services)
 - Added subnetting cheatsheet (Toolbox -> Cheat sheets -> Subnetting)

Maintenance in v1.1.4
 - Renamed the xml files to increase simplicity

Investigator Chat 2.0

The chat program received a pretty slick upgrade that makes it much more functional and easier to use.  Big thanks to Kyle for that upgrade.  It no longer suffers from the annoying 5-second refresh.



Host Tools

Secure environments often lock down command prompts and restrict access to certain tools, so it can be useful to have host enumeration tools that can be launched through Splunk to query remote hosts.

Ping Tool

This is the simplest tool to reach out and see if the host is alive.  The assumption is that ICMP is not blocked at the network or host.


SMB Share Viewer

It can also be nice to check for Windows shares.  If run from Windows, the tool uses net view and will not see "hidden" shares (those that end in a $ sign, such as C$, ADMIN$, IPC$).  If run from Linux, it uses smbclient and will see hidden shares.
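The underlying commands look roughly like the following (hypothetical target host; a sketch of the general technique, not the app's exact invocations):

C:\> net view \\10.0.0.5          (Windows: hidden $ shares are omitted)
$ smbclient -L //10.0.0.5 -N      (Linux: -L lists shares, including hidden ones; -N skips the password prompt)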


NetBIOS Viewer

It is also useful to be able to pull NetBIOS table information from a remote host to determine function, users, domain and more.



Port Scanner

Determining the open ports can also help identify the function of a host.  Unfortunately, nmap or other port scanners may not always be available... so we provided a Python-based port scanner exposed through Splunk.
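For reference, a minimal Python connect scanner of this sort can be sketched as follows (our own illustration, not the app's actual script):

import socket

def scan(host, ports, timeout=1.0):
    # Attempt a full TCP connect to each port; connect_ex() returns 0 on success
    open_ports = []
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
        finally:
            s.close()
    return open_ports

print(scan("10.0.0.5", [21, 22, 25, 80, 135, 139, 443, 445, 3389]))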



Banner Grabber

Taking it a step further, we added a Python-based banner grabber as well.  It should be able to pull most banners, but let us know if it struggles against a particular service.
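A banner grab is essentially a connect plus a short read.  A hedged sketch of the idea (again, not necessarily how the app implements it):

import socket

def grab_banner(host, port, timeout=2.0):
    # Many services (FTP, SSH, SMTP) announce themselves on connect;
    # others, like HTTP, only answer after a request is sent
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return s.recv(1024)
    except socket.error:
        return None
    finally:
        s.close()

print(grab_banner("10.0.0.5", 22))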




Bulk Searching - Wild and specific field


Often we have a large list of MD5 hashes, IP addresses, or URLs to run through Splunk.  We could search one item at a time, but that is slow.  We could create a complex boolean statement, but that takes time.  How about just copying and pasting the list into a search field?  Perfect!  The file should contain one search item per line; when the list is pasted into the Splunk search list field, the browser separates the terms with spaces.  This has been tested with Chrome and Firefox, which seem to work best.  There are two versions:  one where you specify the field to search, and one that searches all fields (wild).
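Under the hood, the field-specific version can be implemented with a subsearch that splits the pasted list back into individual values.  A sketch assuming a dashboard token named $bulk_list$ and a target field named md5 (the app's internals may differ):

index=* [| makeresults
| eval md5=split("$bulk_list$", " ")
| mvexpand md5
| table md5]

A subsearch that returns only the md5 field expands into (md5="value1") OR (md5="value2") ..., so one paste searches every item at once.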



Cheatsheets - ASCII table, Ports and services, Subnetting



Finally, everyone can use some cheatsheets:  quick references such as an ASCII table, ports and services, and subnet information.  No more wasting time searching the Internet--especially if you are on a closed network.  These references are now available locally within Splunk.


Conclusion

Hopefully you will enjoy the new features of the app.  As always, we appreciate the great feedback we are receiving.  Please send more ideas from within the app using Help --> Send Feedback.

Thursday, April 14, 2016

Forensic Investigator Splunk App - Version 1.1.3

By Tony Lee


Introduction

It has been a little while since we released new features in the Forensic Investigator Splunk App, so we are excited about the latest update.  We have received excellent feedback on the app and have also been brainstorming some ideas for new tools to include.  Here is what we have in store for you in version 1.1.3 which is now available for free via the Splunk App store.

High Level

New Features
 - Added a chat program for collaboration!  It is a first stab, but give it a try (Help -> Chat Program)
 - Added an additional whois lookup vendor - api.hackertarget.com - ex: http://api.hackertarget.com/whois/?q=splunk.com
 - Added a link extractor to rip links out of a page (URL/IP -> Link Extractor)
 - Added permalink information to VT lookup page
 - Added disk usage monitor (Help -> Disk Monitor)  (Uses REST API)
 - Added license analysis page (Help -> License Usage) (*Need to have _internal logs on indexer and role based access)

Bug Fixes
 - Fixed VT lookup script, incorrectly detecting MD5 hashes in URL - if (re.findall(r"(^[a-fA-F\d]{32})", sys.argv[1]))
 - Fixed VT Lookup script, removed leading white spaces lstrip()
 - Fixed bug in BulkWhois to provide state/province information


Chat Program

This is a first stab at a collaboration mechanism within Splunk.  It works for quick-and-dirty collaboration; the only annoyance is the refresh every 5 seconds.  I am sure it can be made fancier with some JavaScript, so if you do a little dev and want to contribute--we would appreciate it.



Additional WHOIS vendor

For a while, it appeared that bulkwhois had an ISP issue, so we added a second provider as another option.  Big thanks to hackertarget.com.


Link Extractor

This is useful if you don't want to visit a potentially malicious site, but you want to know the links on the site.  This tool will rip all of the links from the page safely and quickly.
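The general technique is to fetch the page server-side, so the analyst's browser never touches the suspicious site, and then pull out the href values.  A rough sketch in Python (not the app's actual code):

import re
import urllib2  # Python 2, which Splunk bundled at the time

def extract_links(url):
    html = urllib2.urlopen(url, timeout=10).read()
    # Quick-and-dirty href extraction with a regex rather than a full HTML parser
    return re.findall(r'href=["\'](.*?)["\']', html, re.IGNORECASE)

for link in extract_links("http://example.com"):
    print(link)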



Disk Usage

This last tool is useful for those who need to monitor how much storage is left on their indexers.  It is customizable to your server name and the volume that holds indexed data.  By default it is set to my development box, which is a simple Kali VM.
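Splunk's REST API exposes partition information that can drive a dashboard like this.  A minimal search along these lines (the endpoint is documented; the percentage math and field selection are our own sketch):

| rest /services/server/status/partitions-space
| eval pct_used = round((capacity - free) / capacity * 100, 1)
| table splunk_server, mount_point, capacity, free, pct_used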


Conclusion

Hopefully you will enjoy the new features of the app.  As always, we appreciate the great feedback we are receiving.  Please send more ideas from within the app using Help --> Send Feedback.

Monday, February 15, 2016

Processing Mandiant Redline Files Using Splunk

By Tony Lee

Introduction

Do you use Mandiant's Redline (https://www.fireeye.com/services/freeware/redline.html) for host investigation?  Do you use Splunk for centralized log collection and monitoring?  How about using these two tools together?  The team behind the Splunk Forensic Investigator app (https://splunkbase.splunk.com/app/2895/) is experimenting with ingesting Redline collections.  We have made good progress proving that it is possible to automate the ingestion of Redline collections and use Splunk to carve and display data from multiple hosts at the same time.  However, we were wondering how many people would find this capability useful enough to see the work completed.  Check out the prototyping below and let us know if you would find this useful by leaving a comment below (account not necessary).

We have example output below:

System info displayed in Redline


System info displayed in Splunk


Driver modules displayed in Redline



Driver modules displayed in Splunk


Above and beyond replication

Recreating the Redline output is all well and good; however, keep in mind that ingesting the data into Splunk allows you to filter, search, and carve across multiple systems at the same time.  Additionally, it allows you to use Splunk's big data crunching capabilities.  It is very simple to ask Splunk to apply statistical analysis to large data sets to help look for anomalies across hosts, such as:
  • Drive letters/mappings that don't meet corporate standards
  • Logged in/on users that occur infrequently (such as service accounts)
  • Forgotten operating systems that may be weak points or exploited first within a network



Or when analyzing drivers on multiple hosts, an investigator could glance at a dashboard and determine any of the following and more:
  • Number of drivers per host
  • Largest driver
  • Smallest driver
  • Most common driver file name
  • Most common driver path
  • Least common driver file name
  • Least common driver path
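A few simple searches could drive such a dashboard, for instance (hypothetical sourcetype and field names):

sourcetype="redline:drivers" | stats count AS drivers_per_host BY host
sourcetype="redline:drivers" | rare limit=10 FileName
sourcetype="redline:drivers" | top limit=10 Path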

Conclusion

These are just some examples of interesting data one might pull from analyzing many collections.  The possibilities are nearly endless.  Let us know what you think.  Thanks.



Friday, December 11, 2015

Fun with Zigbee Wireless - Part V (Active attacks)

By Tony Lee

Introduction

This time, let's explore some active attacks.  Active attacks that use packet injection require flashing the RZUSBSTICK and thus firmware upgrades will also be covered in this article.
    Friendly reminder:  As always use this information responsibly.  Make sure you own the equipment prior to experimentation and learning.  We do not condone malicious intentions, are not held responsible for your actions, and will not bail you out of jail.

    Firmware Upgrade

    The first step in a firmware upgrade is to obtain the new image.  This could either be from Atmel, Luxoft, or in this case the KillerBee firmware from github.  

    Download
    Below we show the wget command to download the firmware and the head command to show what the firmware looks like.



    root@kali:~/tools/killerbee# wget https://raw.githubusercontent.com/riverloopsec/killerbee/master/firmware/kb-rzusbstick-002.hex
    root@kali:~/tools/killerbee# head kb-rzusbstick-002.hex
    :100000000C94B4000C94D3000C94D3000C94D30043
    :100010000C94D3000C94D3000C94D3000C94D30014
    :100020000C94D3000C94D3000C94220E0C94D300A7
    :100030000C94D3000C94D3000C94D3000C94D300F4
    :100040000C943D0B0C94910B0C94FC0B0C94D30072
    :100050000C947A0B0C94D3000C94D3000C94D30022
    :100060000C94D3000C94D3000C94D3000C94D300C4
    :100070000C94D3000C94D3000C94D3000C94D300B4
    :100080000C94D3000C94D3000C94D3000C94D300A4
    :100090000C94D3000C94D300E409FC062507220835



    Connections
    The image below shows all of the connections necessary to flash the RZUSBSTICK.  The Dragon programmer connects to the laptop via a USB cable.  The 100mm female-to-female ribbon cable connects the Dragon to the 100mm-to-50mm standoff adapter, and the 50mm male-to-male connector joins the standoff adapter to the RZUSBSTICK (which is plugged into the USB stand, also plugged into the laptop).  You can either solder the 50mm connector to the RZUSBSTICK or hold the pins at an angle to make a firm connection.  Since we were flashing multiple USB sticks, we did not solder the pins.  Note that PIN 1 is closest to the LED.




    Flash command

    We typically get this command ready prior to the hardware being connected.  That way when the hardware is connected we only need to hit the enter key.



    root@kali:~/tools/killerbee# avrdude -P usb -c dragon_jtag -p usb1287 -B 10 -U flash:w:kb-rzusbstick-002.hex
    avrdude: jtagmkII_initialize(): warning: OCDEN fuse not programmed, single-byte EEPROM updates not possible
    avrdude: AVR device initialized and ready to accept instructions
    Reading | ################################################## | 100% 0.01s
    avrdude: Device signature = 0x1e9782
    avrdude: NOTE: FLASH memory has been specified, an erase cycle will be performed
             To disable this feature, specify the -D option.
    avrdude: erasing chip
    avrdude: jtagmkII_initialize(): warning: OCDEN fuse not programmed, single-byte EEPROM updates not possible
    avrdude: reading input file "kb-rzusbstick-002.hex"
    avrdude: input file kb-rzusbstick-002.hex auto detected as Intel Hex
    avrdude: writing flash (26818 bytes):
    Writing | ################################################## | 100% 2.99s
    avrdude: 26818 bytes of flash written
    avrdude: verifying flash memory against kb-rzusbstick-002.hex:
    avrdude: load data flash data from input file kb-rzusbstick-002.hex:
    avrdude: input file kb-rzusbstick-002.hex auto detected as Intel Hex
    avrdude: input file kb-rzusbstick-002.hex contains 26818 bytes
    avrdude: reading on-chip flash data:
    Reading | ################################################## | 100% 3.24s
    avrdude: verifying ...
    avrdude: 26818 bytes of flash verified
    avrdude: safemode: Fuses OK
    avrdude done.  Thank you.


    avrdude options defined:
    -P port
    -c programmer-id
    -p partno
    -B bitclock
    -U memtype:operation:filename (perform memory operation)

    Verification

    If using lsusb, the brief information does not change.  However, use lsusb -D (ex:  lsusb -D /dev/bus/usb/001/030) to see that the iProduct and iSerial values change to the following:

    • iProduct:  KILLERB001
    • iSerial:  FFFFFFFFFFFF
    When plugging into VMware, the text changes because the iProduct value above changed.  Visually, the KillerBee firmware also changes the LED on the RZUSBSTICK from blue to amber.


    If these changes are present, the firmware upgrade was successful.

    Active attack

    In the previous passive attack article, we showed the zbid tool listing available devices.  There should be a noticeable difference now.



    Before:
    root@kali:~# zbid
    Monkey-patching usb.util.get_string()
    Dev    Product String    Serial Number
    2:7    RZUSBSTICK        3FA0F6A01C25

    After:
    root@kali:~# zbid
    Monkey-patching usb.util.get_string()
    Dev    Product String    Serial Number
    1:30   KILLERB001        FFFFFFFFFFFF



    zbstumbler
    Now that we have the KillerBee firmware image loaded, we can use more interesting tools that rely on active techniques for discovery--including zbstumbler.  The name is a bit of a head nod to the old NetStumbler tool, as zbstumbler actively transmits packets to locate networks regardless of the channel.  As a bonus, we can now use the same card to transmit and receive.  Notice in the output below that we do not need to specify a channel, and the card still discovered both the hub and the outlet on channel 19.


    root@kali:# zbstumbler
    Monkey-patching usb.util.get_string()
    Warning: You are using pyUSB 1.x, support is in beta.
    zbstumbler: Transmitting and receiving on interface '1:24'
    New Network: PANID 0x2B55 Source 0x7A7C
        Ext PANID: fd:c3:43:24:23:71:f0:52
        Stack Profile: ZigBee Enterprise
        Stack Version: ZigBee 2006/2007
        Channel: 19
    New Network: PANID 0x2B55 Source 0x0000
        Ext PANID: fd:c3:43:24:23:71:f0:52
        Stack Profile: ZigBee Enterprise
        Stack Version: ZigBee 2006/2007
        Channel: 19



    zbwireshark
    zbwireshark allows users to sniff and review ZigBee traffic in real time within Wireshark.  The tool creates a pipe from which Wireshark reads data.  Technically this tool can be used with the default firmware since it is passive in nature, but we found it slightly more stable after the KillerBee firmware upgrade.  There are still some stability issues either way.


    zbwireshark being used to sniff and display packets in real-time

    Sniff and Replay Packets

    Once devices are discovered (zbstumbler) and understood (zbwireshark), it may be possible to capture traffic and then replay that traffic back to the device.  This did not work in our limited testing, but it is worth a shot.

    zbdump and zbreplay
    We already used zbdump in the previous article, but we will cover the syntax here for completeness.  The new tool here is zbreplay, which takes the pcap from zbdump and replays it using the flashed RZUSBSTICK.  -f specifies the channel, -w specifies the name of the pcap to write the captured packets, and -r specifies the name of the pcap to read the captured packets.



    root@ubuntu:# ./zbdump -f 19 -w operating.pcap
    Monkey-patching usb.util.get_string()
    Warning: You are using pyUSB 1.x, support is in beta.
    zbdump: listening on '1:34', link-type DLT_IEEE802_15_4, capture size 127 bytes
    54 packets captured

    root@ubuntu:# ./zbreplay -f 19 -r operating.pcap
    Monkey-patching usb.util.get_string()
    Warning: You are using pyUSB 1.x, support is in beta.
    zbreplay: retransmitting frames from 'operating.pcap' on interface '1:34' with a delay of 1.0 seconds.
    27 packets transmitted



    Obtain a Key

    Similar to zbdsniff discussed in the prior article, the objective here is to obtain a key to decrypt ZigBee traffic.

    zbkey
    This tool is different from zbdsniff because it is active in nature.  Instead of passively scanning a pcap, zbkey attempts to retrieve a key by sending an associate request followed by a data request after an association response is received.

    Here are a few pro-tips when trying this attack:
    • Try attacking each device separately
    • First attack the hub
    • Then attack the child device
    • Try placing them in pairing mode
    • Try changing the hardware address



    root@kali:~# zbkey -f 19 -p 2B55 -s 0.1 -a d052a8006b550001
    Monkey-patching usb.util.get_string()
    Warning: You are using pyUSB 1.x, support is in beta.
    Sending association packet...
    Sending data request packet...
    Received frame
    Length of packet received in associate_handle: 27
    0000: 63 cc d7 55 2b 01 00 55 6b 00 a8 52 d0 01 00 55  c..U+..Uk..R...U
    0010: 6b 00 a8 52 d0 02 ff ff 02 d8 eb                 k..R.......
    Association response status was not successful. Received 2.
    Received frame
    Length of packet received in associate_handle: 27
    0000: 63 cc d7 55 2b 01 00 55 6b 00 a8 52 d0 01 00 55  c..U+..Uk..R...U
    0010: 6b 00 a8 52 d0 02 ff ff 02 d8 eb                 k..R........
    --snip--
    Sorry, we didn't hear a device respond with an association response. Do you have an active target within range?



    zbkey options defined:
    -f channel
    -p PAN ID
    -s sleep
    -a ZigBee hardware address

    Denial of Service

    When all else fails, it may be interesting to check the resiliency to denial of service.  Fortunately, the KillerBee suite has a tool for this as well.

    zbassocflood
    This tool attempts to transmit a flood of associate requests to a target network.  It requires the PAN ID (-p), the channel (-c), and timing (-s).


    root@kali:~# zbassocflood -p 0x2b55 -c 19 -s 0.1
    Monkey-patching usb.util.get_string()
    Warning: You are using pyUSB 1.x, support is in beta.
    zbassocflood: Transmitting and receiving on interface '1:34'
    ++++++......++++++......++++++......++++++......++++++......++++++......++++++......++++++......++++++^C
    Sent 102 associate requests.


    In the interest of full disclosure, we were not able to obtain a key or cause a denial of service.  More hardware and testing are required to complete our research.

    Conclusion

    This article covered quite a bit of information, including flashing the RZUSBSTICK and outlining the KillerBee software that can be used for active attacks against the 2.4 GHz ZigBee frequency range.  The active attacks primarily covered sniffing and replaying, obtaining a key, and denial of service.  The following tools were covered in this article:
    • avrdude (flash)
    • zbstumbler
    • zbwireshark
    • zbdump (recap)
    • zbreplay
    • zbkey
    • zbassocflood
    We are interested in hearing feedback from others regarding their success with the tools covered in this article.  Feel free to leave feedback in the comments section below.

    While testing onsite, it may be useful to have an attack methodology flow chart to follow.  Here is one we created to help stay on track and create a repeatable process.



    Happy hacking. :)

    Tuesday, December 1, 2015

    Fun with Zigbee Wireless - Part IV (Passive attacks)

    By Tony Lee

    Introduction

    In our previous ZigBee articles, we covered ZigBee usage, history, one hardware option, and a handful of software options:
    History:  http://securitysynapse.blogspot.com/2015/11/fun-with-zigbee-wireless-part-i.html 
    Hardware: http://securitysynapse.blogspot.com/2015/11/fun-with-zigbee-wireless-part-ii.html
    Software: http://securitysynapse.blogspot.com/2015/11/fun-with-zigbee-wireless-part-iii.html

    This time, let's explore some passive attacks.  This means that we will not send any packets--we will only listen to what is already being sent.  Active attacks, which require packet injection, require flashing the RZUSBSTICK and thus will be covered in the next article.
    Friendly reminder:  As always use this information responsibly.  Make sure you own the equipment prior to experimentation and learning.  We do not condone malicious intentions, are not held responsible for your actions, and will not bail you out of jail.

    List devices

    Most of the Windows software will let you know when the RZUSBSTICK is plugged in.  However, to use some of the more flexible KillerBee tools in Linux, we first need to list the available devices.  For this, we use the zbid command.

    zbid:

    root@kali:~# zbid
    Monkey-patching usb.util.get_string()
    Dev    Product String    Serial Number
    2:7    RZUSBSTICK        3FA0F6A01C25



    This should show at least one device if it is plugged in.  If nothing shows up or an error occurs, do the following:

    • Check to make sure the USB stick is plugged in and a light is illuminated
    • Check dmesg for errors
    • Reinstall the KillerBee software per our instructions in the last article

    Discovery

    Now that we have a working RZUSBSTICK, let's discover some ZigBee devices using the existing firmware on the device.  Fortunately, ZigBee has a limited number of channels (11-26), which helps because we could not find good passive tools that hop through all of the channels using this hardware.  Pro-tip:  Try channel 19 first--it is a popular default channel.

    zbfind
    One tool seemed to have a lot of promise, but we could not get it working.  zbfind is a GUI tool with passive and active network detection features that works much like NetStumbler.  Keep in mind that active discovery mode requires the RZUSBSTICK to be flashed with the KillerBee firmware, but even this did not help the tool function properly.  The screenshot below shows its promise.



    Screenshot from:  http://www.willhackforsushi.com/

    zbopenear
    zbopenear is a very interesting tool in that it can listen (and write to pcap) on multiple channels at the same time, given enough RZUSBSTICKs.  Since there are 16 channels, it would take 16 RZUSBSTICKs to listen on all channels at once; at $42.50 per stick, that is a total of $680.  This tool did work, but it defaulted to channel 11 (the first channel).



    root@kali:~# zbopenear
    Monkey-patching usb.util.get_string()
    Found device at 1:3: 'RZUSBSTICK'  Assigning to channel 11.
    Cap1:3: Launching a capture on channel 11.
    Warning: You are using pyUSB 1.x, support is in beta.
    Capturing on '1:3' at channel 11.
    Result: zb_c11_20151012-1128.pcap

    Sniff and Analyze Packets

    Once devices are discovered, the last phase in the passive attack is to sniff and analyze packets.  Most of the Windows tools discussed in the last article have the ability to sniff and analyze packets as well.  In this section, we will focus on some of the KillerBee tools.

    zbdump
    zbdump is like tcpdump for ZigBee.  It can save packets in both pcap and Daintree format.  For our testing we will use pcap format so we can open the capture in Wireshark (which natively understands the ZigBee protocol).  The following command runs zbdump:  -f specifies the channel and -w specifies the name of the pcap to write the captured packets.



    root@kali:~# zbdump -f 19 -w test.pcap
    zbdump: listening on '002:006', link-type DLT_IEEE802_15_4, capture size 127 bytes
    66 packets captured


    After capturing some packets, we will now open the pcap in Wireshark to learn about the protocol and components.



    Good to know info
    When looking at the packet capture above there are a few things to note:

    • Source and destination fields in packet captures are assigned network IDs (think IP address)
      • Ex:  Source:  0x7a7c is the ZigBee network ID assigned when the device joined
      • Source of 0x0000 is usually a controller
    • Extended addresses are hardware addresses
      • Ex:  Extended Source:  00:0d:6f:00:04:49:7d:13
      • Instead of 48-bit (like NICs), ZigBee hardware addresses are 64-bit in length

    Obtain a Key 

    zbdsniff
    The last passive tool on the list is zbdsniff.  This tool searches pcap files for ZigBee keys.  However, we did not get any output from our file, which may indicate that there were no keys available.


    root@ubuntu:# zbdsniff operating.pcap
    Monkey-patching usb.util.get_string()
    Processing operating.pcap
    Processed 1 capture files.


    Conclusion

    This article outlined the KillerBee software that can be used for passive attacks against the 2.4 GHz ZigBee frequency range.  Passive attacks primarily covered sniffing and analyzing ZigBee packets.  The following tools were covered in this article:
    • zbfind
    • zbdump
    • zbopenear
    • zbdsniff
    Some of the more interesting attacks require packet injection capabilities.  For this feature we must upgrade the firmware on the RZUSBSTICK, which will be covered in the next article.  We are interested in hearing feedback from others regarding their success with the tools covered in this article.  Feel free to leave feedback in the comments section below.