Thursday, November 24, 2016

Installing Latest Kali Linux Rolling with All Packages on Any Android Device From Scratch With Latest LinuxDeploy Source

Before going through all of these steps, it might interest you to know that most Android devices do not ship wireless drivers patched for packet injection. That said, Kali still has many useful tools that don't require patched drivers, e.g. Metasploit. So, if you wanted Kali on Android specifically for wifi penetration testing, you should look into Offensive Security's NetHunter, which includes patched drivers but is only supported on a very limited set of Android devices. I'm currently looking into porting NetHunter to my LG G3 US990, and from what I can tell, it is possible and encouraged.
-Installing Linux on your Android device requires root, but anyone reading this probably has root already, right?
-Get a 32GB class 10 microSD card, put it in your phone, and choose to use it for Android only (format it as ext4)
-Install Android Studio for your OS
-Clone latest LinuxDeploy source (git clone https://github.com/meefik/linuxdeploy.git)
-Open and build LinuxDeploy
-Install APK (adb install linuxdeploy.apk)
-Open LinuxDeploy, Go to settings, select: lock screen, lock wifi, wake lock, network trigger, enable cli
-Hit bottom right download button, select:
distribution: kali linux
installation type: directory
username: root
password: your_password
init: enable
init system: run-parts
mounts: enable
mount points: /storage/emulated and /data
ssh: enable
-Menu -> install
-Menu -> configure
-Start
-Install VNC app, and connect to localhost:5900
-Open Terminal
-apt-get update
-apt-get install kali-linux-all # This installs like 3GB of packages, so be prepared to wait
leave the mysql password blank (the Kali default; mysql only listens on localhost by default)
set macchanger prompt to no
setuid root for kismet? -> No
set sslh to standalone
set wireshark to no
-To install only certain categories of tools (e.g. to remove the wifi tools, or install only the password tools):
apt-cache search kali-linux
apt-get remove --purge kali-linux-wifi
apt-get install kali-linux-pwtools
-Then to install openvas and tor:
apt-get install tor privoxy openvas

Monday, November 21, 2016

A Patch That Adds Multithreading Support to coWPAtty 4.6's genpmk.c and the cowpatty.c Hashfile Attack (Version Bumped to 4.7)

I have written a patch that adds multithreading (via POSIX threads) to genpmk.c and to the hash-file attack in cowpatty.c, part of the coWPAtty wireless tool suite. coWPAtty breaks the WPA/WPA2 dictionary attack into two steps. First, genpmk lets you precompute hashes for a given SSID from a given wordlist. Second, cowpatty uses the resulting hash file to perform the dictionary attack against a 4-way handshake capture file. The POSIX thread support speeds up the time it takes to calculate the hashes. You can download the source files from my github page here:
coWPAtty 4.7 multithreaded
You might be thinking that breaking the process into two steps doesn't speed it up any, and you would be correct. The benefit of this program is that it allows precomputing hashes for common ESSIDs. The WPA/WPA2 hashing algorithm runs 4096 iterations and is salted with the ESSID, so getting those calculations out of the way once speeds up all future runs against that ESSID significantly. To defend against this type of attack you would need to change your ESSID to something reasonably unique, and definitely something other than the default (if the default is not unique). Some ISPs have been trying to thwart this type of attack by shipping default ESSIDs derived from part of the device's MAC address. You might have seen Comcast's "HOME-XXXX" networks around; that, coupled with a long random alphanumeric default password (for which a full wordlist would take petabytes of space), makes the network fairly secure against dictionary attacks. The genpmk/cowpatty software is best used against ESSIDs you encounter most frequently, e.g. the default "linksys". The program can also be run from a shell script to calculate hashes for multiple ESSIDs.
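The hash described above is the WPA/WPA2 pairwise master key: PBKDF2-HMAC-SHA1 over the passphrase, with the ESSID as salt, 4096 iterations, 32-byte output. A minimal Python sketch (not part of coWPAtty; the passphrase and ESSID are just examples) shows why the ESSID salt forces a fresh precomputation per network:

```python
import hashlib

def compute_pmk(passphrase, essid):
    """WPA/WPA2 PMK: PBKDF2-HMAC-SHA1, ESSID as salt, 4096 iterations, 32 bytes."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), essid.encode(), 4096, dklen=32)

pmk = compute_pmk("password123", "linksys")
print(pmk.hex())  # 64 hex characters; a different ESSID yields a completely different PMK
```

Because each of the 4096 iterations must be done per word and per ESSID, a hash file computed once for "linksys" can be reused against every "linksys" network.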

Usage:
./genpmk -f wordlist -d output_hashfile -s linksys -n 4
The -n flag is optional; it tells genpmk how many threads to create. It defaults to the number of CPUs on the system + 1.
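The threading model can be sketched as follows. This is an illustrative Python analogue of the patch's design (splitting the wordlist across workers, thread count defaulting to CPUs + 1), not the actual C/pthreads code; function names here are made up:

```python
import hashlib
import os
from concurrent.futures import ThreadPoolExecutor

def pmk_pair(args):
    """Compute one (word, PMK) pair: PBKDF2-HMAC-SHA1, ESSID as salt, 4096 rounds."""
    passphrase, essid = args
    return passphrase, hashlib.pbkdf2_hmac("sha1", passphrase.encode(), essid.encode(), 4096, dklen=32)

def genpmk(wordlist, essid, n_threads=None):
    """Hash a wordlist for one ESSID, spreading the work across worker threads."""
    if n_threads is None:
        n_threads = (os.cpu_count() or 1) + 1  # default mirrors genpmk: CPUs + 1
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return dict(pool.map(pmk_pair, ((word, essid) for word in wordlist)))

hashes = genpmk(["password", "letmein", "dragon"], "linksys", n_threads=4)
```

In CPython a process pool may parallelize CPU-bound hashing better than threads; threads are shown here only to mirror the pthreads design of the patch.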

Lists of ESSIDs can be created easily too, with tools like crunch and seq. Here is a command that will generate all variations of the "NETGEARXX" ESSID (note the -w flag so seq zero-pads):
seq -w 00 99 | perl -pe 's/^/NETGEAR/;'
or to generate all "HOME-XXXX" networks, we could use crunch:
crunch 4 4 ABCDEF0123456789 | perl -pe 's/^/HOME-/;'
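The same lists can be generated in Python if crunch isn't handy; this sketch mirrors the two pipelines above (note that plain seq does not zero-pad unless given its -w flag, which is why the padding is explicit here):

```python
from itertools import product

# All "NETGEARXX" ESSIDs, zero-padded 00 through 99
netgear = ["NETGEAR%02d" % n for n in range(100)]

# All "HOME-XXXX" ESSIDs over the same 16-character alphabet crunch was given
home = ["HOME-" + "".join(chars) for chars in product("ABCDEF0123456789", repeat=4)]

print(len(netgear), len(home))  # 100 65536
```

Either list can then be fed to a shell loop that runs genpmk once per ESSID.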

Tuesday, November 15, 2016

Socks5 Proxy Scraping Perl Module for proxychains and DNS Leak Prevention

This is a Perl module I wrote for scraping SOCKS5 proxies off websites. It verifies that each proxy is working before adding it to the MySQL database. I will be adding support for performing OCR (with the tesseract Perl library) on images matched by the IP or port regex. I have also written a PHP frontend, plus scripts that run as cron jobs to update the MaxMind GeoIP database and re-check the proxies already in the database.


# Socks5 proxy scraping module that stores working proxies in mysql database
# © Michael Craze -- http://projectcraze.us.to
#
# Example usage:
# use Proxy_Scraper;
#
# my $db = "proxies";
# my $user = "proxy_db_user";
# my $password = "proxy_db_pass";
#
# my $geoipdb="/home/$USER/code/get_proxies/GeoLiteCity.dat";
#
# # URL to scrape
# my $url = "http://www.some_proxy_site.com/socks5-list/";
#
# # IE7 - some pages print without javascript for IE7
# my $user_agent = 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)';
#
# # Time in seconds to wait for a response from the site we are accessing via the proxy when checking
# my $time_out = 5;
#
# # Need parens on port_re, not ip_re
# my $ip_re = qr/\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}/;
# my $port_re = qr/$ip_re:(\d{1,5})/;
#
# my $ps = Proxy_Scraper->new($user,$password,$db,$geoipdb);
# $ps->scrape_url($url,$ip_re,$port_re,$user_agent,$time_out);
# $ps->close();
#
# The scrape_url() method can be called as many times as wanted on different urls before calling close

#!/usr/bin/perl

package Proxy_Scraper;

use strict;
use warnings;
use Exporter;
use LWP::Simple;
use LWP::UserAgent;
use Data::Dumper;
use Geo::IP;
use Socket;
use DBI;
use DateTime;
use Net::Whois::Raw;
use Net::Ping;
use Net::Traceroute;

use vars qw($VERSION @ISA @EXPORT);

require Exporter;

$VERSION = 1.000_001;
@ISA = qw(Exporter);
@EXPORT = (); # list functions/variables that modules exports here

my $DEBUG_LEVEL = 0;
my $GET_WHOIS = 0;

# Dump of Max Mind GeoIPCity.dat Record
#$VAR1 = \bless( {
#                   'city' => 'Mountain View',
#                   'country_code3' => 'USA',
#                   'region_name' => 'California',
#                   'country_code' => 'US',
#                   'postal_code' => '94040',
#                   'continent_code' => 'NA',
#                   'metro_code' => 807,
#                   'area_code' => 650,
#                   'country_name' => 'United States',
#                   'longitude' => '-122.0881',
#                   'region' => 'CA',
#                   'latitude' => '37.3845',
#                   'dma_code' => 807
#                 }, 'Geo::IP::Record' );

sub new{
 my $class = shift;
 my $self = {
  user => shift,
  password => shift,
  db => shift,
  geoipdb => shift,
 };

 $self->{dsn} = "DBI:mysql:$self->{db}";

 $self->{gi} = Geo::IP->open($self->{geoipdb}, GEOIP_STANDARD);

 $self->{dbh} = DBI->connect($self->{dsn}, $self->{user}, $self->{password}, {
  PrintError => 0,
  RaiseError => 1,
  AutoCommit => 1,
 });

 bless $self, $class;
 return $self;
}

sub close{
 my ($self) = @_;
 $self->{dbh}->disconnect;
}

sub trim { $_[0] =~ s/^\s+|\s+$//g; return $_[0]; }

sub is_numeric { return $_[0] =~ m/^\d+$/ ? 1 : 0; }

sub get_current_date_time{
 my $dt = DateTime->now;
 return join ' ', $dt->ymd, $dt->hms;  
}

# reverse dns lookup
sub get_dns{ return gethostbyaddr(inet_aton($_[0]), AF_INET); };

# Check that the proxy is up and working
sub check_proxy{
 my ($self, $ip, $port) = @_;
 my $ua = LWP::UserAgent->new(agent => $self->{user_agent});
 $ua->timeout($self->{time_out});
 # The socks:// scheme requires the LWP::Protocol::socks module to be installed
 $ua->proxy([qw(http https)] => "socks://$ip:$port");
 my $res = $ua->get("http://google.com");
 if($DEBUG_LEVEL){
  print "\nWhile checking proxy got: " . $res->code . " " . $res->message . "\n";
 }
 $res->code eq "200" ? return 1 : return 0;
}

sub ping{
 my $hostname = shift;
 my $p = Net::Ping->new();
 my $n = 2;
 my $time = 0;
 my $success = 0;
 if($DEBUG_LEVEL){
  print "Pinging $hostname $n times.\n";
 }
 foreach my $c (1 .. $n) {
  my ($ret, $duration, $ip) = $p->ping($hostname);
  if ($ret) {
   $success++;
   $time += $duration;
  }
 }
 if (not $success) {
  if($DEBUG_LEVEL){
   print "All $n pings failed.\n";
  }
  return 0;
 } 
 else {
  if ($success < $n) {
   my $i = ($n - $success);
   print $i . " lost packets. Packet loss ratio: " . int(100 * ($i / $n)) . "\n";
   return int(100 * ($n - $success) / $n);
  }
  if($DEBUG_LEVEL){
   print "Average round trip: " . ($time / $success) . "\n";
  }
  return ($time / $success);
 }
}

sub traceroute{
 my $host = shift;
 my $tr = Net::Traceroute->new(host => $host);
 if($tr->found) {
  my $hops = $tr->hops;
  if($hops > 1) {
   return "Router was " .
    $tr->hop_query_host($tr->hops - 1, 0) . "\n";
  }
  else{
   return "1 or less hops\n";
  }
 }
 else{
  return "No route found.\n";
 }
}

sub get_whois_str{
 $Net::Whois::Raw::CHECK_FAIL = 1;
 return whois($_[0]);
}

# Builds a hash table of proxies we have already found so we don't add them twice
sub get_proxies_from_db{
 my $self = shift;
 my $seen = $_[0]; # hashref: fill the caller's hash in place rather than copying it
 my $sql = 'SELECT ip, port FROM proxies';
 my $sth = $self->{dbh}->prepare($sql);
 my $rv = $sth->execute();
 if(!$rv){
  print STDERR $DBI::errstr;
 }
 while(my @row = $sth->fetchrow_array) {
  my $ip = $row[0];
  my $port = $row[1];
  $seen->{$ip} = $port;
 }
 if($DEBUG_LEVEL >= 2){
  print Dumper $seen;
 }
}

# Checks if a scraped proxy is already in our database
sub proxy_in_db{
 my ($ip, $port, $seen) = @_;
 return (exists $seen->{$ip} && $seen->{$ip} eq $port) ? 1 : 0;
}

sub store_proxy_and_whois{
 my $self = shift;
 my $sth = $self->{dbh}->prepare("INSERT INTO proxies (ip, port, dns, country_code, country_name, region_name, city, postal_code, area_code, latitude, longitude, whois, ping, added, last_checked) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)");
 my $rv = $sth->execute($_[0], $_[1], $_[2], $_[3], $_[4], $_[5], $_[6], $_[7], $_[8], $_[9], $_[10], $_[11], $_[12], $_[13], $_[14]);
 if(!$rv){
  print STDERR $DBI::errstr;
 }
}

sub store_proxy{
 my $self = shift;
 # No whois column here: 14 columns to match the 14 placeholders and 14 arguments
 my $sth = $self->{dbh}->prepare("INSERT INTO proxies (ip, port, dns, country_code, country_name, region_name, city, postal_code, area_code, latitude, longitude, ping, added, last_checked) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)");
 my $rv = $sth->execute($_[0], $_[1], $_[2], $_[3], $_[4], $_[5], $_[6], $_[7], $_[8], $_[9], $_[10], $_[11], $_[12], $_[13]);
 if(!$rv){
  print STDERR $DBI::errstr;
 }
}

sub print_proxy_csv{
 print join(',',@_) . "\n";
}

sub scrape_url{
 my $self = shift;
 $self->{url} = shift;
 $self->{ip_re} = shift;
 $self->{port_re} = shift;
 $self->{user_agent} = shift;
 $self->{time_out} = shift;
 my %seen = ();
 $self->get_proxies_from_db(\%seen);
 my $ua = LWP::UserAgent->new(agent => $self->{user_agent});
 $ua->timeout($self->{time_out});
 my $res = $ua->get($self->{url});
 if($res->code eq "200"){
  if($DEBUG_LEVEL >= 3){
   print $res->decoded_content;
  }
  my @ips = $res->decoded_content =~ m/($self->{ip_re})/gi;
  my @ports = $res->decoded_content =~ m/$self->{port_re}/gi;
  my $i=0;
  foreach my $ip (@ips){
   my @csv_items = ();
   my $port = $ports[$i];
   if(proxy_in_db($ip,$port,\%seen)){
    print "$ip:$port Already Seen.\n";
    next; # skip this proxy but keep scanning the rest of the page
   }
  
   # Max Mind GeoIP record
   my $r = $self->{gi}->record_by_addr($ip);
   
   if($self->check_proxy($ip,$port)){
    my $ping = ping($ip);
    my $dns = get_dns($ip);
    my $whois_data = "";
    if($GET_WHOIS){
     $whois_data = get_whois_str($dns);
    }
    push(@csv_items,$ip,$port,$dns,$r->country_code,$r->country_name,$r->region_name,$r->city,$r->postal_code,$r->area_code,$r->latitude,$r->longitude);
    print_proxy_csv(@csv_items);
    
    my $now = get_current_date_time();

     if(length $whois_data){
     $self->store_proxy_and_whois($ip, $port, $dns, $r->country_code, $r->country_name, $r->region_name, $r->city, $r->postal_code, $r->area_code, $r->latitude, $r->longitude, $whois_data, $ping, $now, $now);
    }
    else{
     $self->store_proxy($ip, $port, $dns, $r->country_code, $r->country_name, $r->region_name, $r->city, $r->postal_code, $r->area_code, $r->latitude, $r->longitude, $ping, $now, $now);
    }
   }
   else{
    print "$ip:$port is down.\n";
   }
   if($i > 0 && $i % 100 == 0){
    print "\n";
   }
   $i++;
  }
  return 0;
 }
 else{
  print STDERR "Couldn't get url ($self->{url}): " . $res->code . " " . $res->message . "\n";
  return 1;
 }
}

1;

Fixing MythTV's mythexport Web Interface on Ubuntu Server 16.04

In current versions of Ubuntu, mythexport installs to /var/www; however, Apache's default document root is now /var/www/html. mythexport still creates a symlink /var/www/mythexport which points to /usr/share/mythtv/mythexport. The MythTV Perl bindings look for the .mythtv folder in the www-data user's home directory, which according to /etc/passwd is /var/www, so make sure there is a .mythtv folder there and that .mythtv/config.xml is symlinked to /etc/mythtv/config.xml. Then move the mythexport symlink from /var/www/mythexport to /var/www/html/mythexport:
mv /var/www/mythexport /var/www/html/mythexport
Next, since mythexport hardcodes some paths in its source code, we need to find all instances of /var/www and replace them with /var/www/html. To find them, use grep:
pushd /usr/share/mythtv/mythexport
grep -E "\/var\/www" *.cgi
Looks like we need to change two files:
mythexportRSS.cgi: my $file_len = -s "/var/www/mythexport/video/$rss_file";
save_system_setup.cgi:sudo mkdir -p $location && sudo chmod 775 $location && sudo chown mythtv:mythtv $location && sudo ln -s -f $location /var/www/mythexport/video


vi mythexportRSS.cgi
Search and replace vim command: %s/\/var\/www/\/var\/www\/html/g
Then edit save_system_setup.cgi with the same vim command.
I also noticed the mythexport apache configuration file was in sites-available but not in sites-enabled, so I ran:
a2ensite mythexport
service apache2 restart
/etc/init.d/mythexport restart
You should now be able to browse to http://mythtv_ip/mythexport and set up your configuration.