Wednesday, December 28, 2011

Nice free application that I found today

I thought I would share this with anyone interested: http://www.mohtab-software.co.uk/downloads.  From my initial checks of the program, it looks decent and contains a lot of information collections: Quran, Hadith, Fiqh, and companion biographies.  Very awesome...similar to winalim, but it's free.

Wednesday, December 21, 2011

Firefox 9 upgrade failure, previous upgrade requires reboot

For quite some time now, every time I do an upgrade of Firefox, I keep getting a popup message saying that a previous upgrade requires a reboot.  Regardless of whether I rebooted or not, the message never went away.  Typically, bypassing the reboot prompt solved the problem and let me continue working.  With the new Firefox 9 install I tried today, it gave me that same popup during the installation process, but when I tried to bypass it, it just closed the installer without any error.  From the Mozilla help forums, it looks like the recommended suggestion (from the top google search results) is to wipe out the whole Mozilla Firefox folder and do a new install.  This seems a bit over the top, so I took a few seconds to look at the folder (typically c:\program files\mozilla firefox).  Scrolling down the list of files, a name jumps out at you as something obvious: there is a file called firefox.exe.moz-upgrade.  I renamed this file and tried the install again, this time with no error.  The file was recreated at the end of the upgrade, where it prompted me to reboot.  After the reboot, the file was removed during the post-install cleanup this time.  So I guess sometime in the past, one of those cleanups failed.  In any case, sometimes the answer is a bit simpler than the sledgehammer troubleshooting approach (i.e. delete a user's profile, reimage a whole computer, etc).  Hopefully the new version of Firefox doesn't hang the application repeatedly for no apparent reason.
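If you want to script the rename rather than doing it by hand, a minimal sketch would be something like this (the path is an assumption for a default 32-bit install; adjust for 64-bit or custom locations):

#Rename the leftover upgrade marker file so the installer can run
$marker = "C:\Program Files\Mozilla Firefox\firefox.exe.moz-upgrade"
if (Test-Path $marker) {
    Rename-Item -Path $marker -NewName "firefox.exe.moz-upgrade.old"
}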

Tuesday, December 13, 2011

Photos in Outlook and Sharepoint (thumbnailphoto)

After assisting with some issues related to Outlook crashing due to users uploading their own photos via software that corrupted the images, I looked around to see some of the solutions available for uploading thumbnails.  I won't go into all of the details of what the thumbnailphoto attribute is used for, but generally in newer Outlook versions it will show your user photo in the emails that people receive.  It also ties in with Sharepoint and possibly some other products that I'm not very familiar with.  The AD attribute allows for 100KB of data by default, however Outlook wants a 10KB max image, and the recommended dimensions I have read are 96x96.  Of the solutions that are available, most will just do an upload of the image and assume you sized it correctly already.  Some others come as exe files that do it all for you, some of which are programs you have to buy (even though they suck, and it's easy to build your own), and others require .NET 4.0 for something that can be done in much lower versions, etc.  I didn't much like any of them, and from what I can see of user-circulated information, one of the easiest solutions was using ldp.exe to upload the file to your own user object.  But in any case, I thought it would be nice to have a self-service method of uploading your photo using the recommended image size and data size.  So I threw together this PowerShell (v1 compatible) script to handle it.  At the moment, I didn't feel like trying to wrap a GUI around all of this, but perhaps I'll do that in the future.  The inspiration for this code goes back to the Microsoft scripting games 2011 Advanced day 8 competition, which involved stripping EXIF data and resizing graphics files.  At the time, I didn't have enough time to figure out how to use the WIA COM objects to handle this work, so for this script I have used some of the examples of other competitors as guidance for these methods.



#Add-ADPhoto   Powershell v1 compatible script for updating
#user thumbnailphoto attribute.  Resizes input photo to recommended
#dimensions and size.  Only updates for the currently logged in user.
#This is a script for user self service.

$infile = $args[0]
$aspect = $args[1]

function usage {
    write-host "Usage: Add-ADPhoto filename [aspect]"
    write-host "   Provide the name of an image file in your current directory."
    write-host "   If you wish to preserve the aspect ratio of the image, type"
    write-host "   1 after your file name.  Images are resized to the recommended"
    write-host "   96x96, converted to JPG and set to 70% quality to limit size."
    exit 

}
$imagefile = (pwd).path + "\" + $infile
$imagefileout = (pwd).path + "\adout.jpg"

##############################################################################
#Check to see if the argument for filename was provided, and that it exists###
##############################################################################
if ([string]::isnullorempty($infile) -or -not (test-path $imagefile)) {
    &usage
}


###############################
#Remove any old converted file#
###############################
if (test-path $imagefileout) {
    del -path $imagefileout -ErrorAction "silentlycontinue"
}

$Image = New-Object -ComObject Wia.ImageFile
$ImageProcessor = New-Object -ComObject Wia.ImageProcess


##########################################################
#Try loading the file, if its not an image this will fail#
##########################################################
$Image.LoadFile($ImageFile)

if (-not $?) { &usage }


#############################################################
#Create filters, set aspect ratio setting, change dimensions#
#to max 96pixels, convert to JPG and set quality            #
#############################################################
$Scale = $ImageProcessor.FilterInfos.Item("Scale").FilterId
$ImageProcessor.Filters.Add($Scale)
$Qual = $ImageProcessor.FilterInfos.Item("Convert").FilterID
$ImageProcessor.Filters.Add($qual)

if ([string]::isnullorempty($aspect) -or [string]$aspect -ne "1") {
    $ImageProcessor.Filters.Item(1).Properties.Item("PreserveAspectRatio") = $false
} else {
    $ImageProcessor.Filters.Item(1).Properties.Item("PreserveAspectRatio") = $true
}

$ImageProcessor.Filters.Item(1).Properties.Item("MaximumHeight") = 96
$ImageProcessor.Filters.Item(1).Properties.Item("MaximumWidth") = 96
$ImageProcessor.Filters.Item(2).Properties.Item("FormatID") = "{B96B3CAE-0728-11D3-9D7B-0000F81EF32E}"

####################################################################
#Drop image quality until it meets the size recommendation of 10kb #
####################################################################
$quality = 80
do {
    Remove-Item -path $imagefileout -ErrorAction "silentlycontinue"
    $ImageProcessor.Filters.Item(2).Properties.Item("Quality") = $quality
    #Apply the filters to a copy each pass so we always recompress from the
    #original image instead of stacking JPG compression on previous output
    $OutImage = $ImageProcessor.Apply($Image)
    $OutImage.SaveFile($ImageFileOut)
    [byte[]]$imagedata = get-content $imagefileout -encoding byte
    $quality -= 10
} while ($imagedata.length -gt 10kb)


#####################################################################
#Find domain, and Account distinguished name.  Open user object, add#
#thumbnailphoto data and save.
#####################################################################
$de = new-object directoryservices.directoryentry("LDAP://" + $env:logonserver.substring(2))
$ds = new-object directoryservices.directorysearcher($de)
$ds.filter = "(&(objectclass=user)(samaccountname=" + $env:username + "))"
$myaccount = $ds.findone()
$de = new-object directoryservices.directoryentry($myaccount.path)
$de.properties["Thumbnailphoto"].clear()
$de.properties["Thumbnailphoto"].add($imagedata) |out-null
$de.setinfo()
Write-Host "Photo has been uploaded"

Monday, December 12, 2011

DNS Negative caching

In the past I have frequently run into the problem of delay in seeing new DNS records over a large DNS environment.  For example, if we want to put a new server in with a HOST record, it may take an hour or more before it can be seen throughout an enterprise or for external entities and customers.  There are several factors that could cause delay.  First of all, if you have Primary/Secondary servers, there could be delays in zone transfers of the new record.  In Active Directory environments, you have AD replication delays.  In other environments, systems may specifically have negative lookup caching, or by default act this way.  In this post, I will focus on Negative Caching.  You may ask, what is that?  Simply put, if a system is doing negative caching, and it does a dns lookup for a record, but gets no result, the system will remember this failed lookup and hold on to it for a period of time.  This prevents the system from trying to lookup the record again until a timeout has occurred and the negative cached entry is flushed.  You can view the cache in windows with ipconfig /displaydns, and a negative entry looks like this:
   elwood.bobscountrybunker.com
   ----------------------------------------
   Name does not exist.

In Windows clients, there are registry settings for the DNS Client service to do this, but the Windows DNS server does not have it.  BIND servers will do it when acting as an intermediate dns system in the lookup process.  So if we have a client machine doing a lookup for elwood.bobscountrybunker.com and it is sending this lookup to 8.8.8.8 (Google DNS), this server tries to find a record at the authoritative holder of the zone bobscountrybunker.com.  If no result is found, the 8.8.8.8 server will negatively cache this failure for a specified period of time.  The amount of time the server will cache it is provided by the bobscountrybunker.com SOA record for this zone.  If you look at the last value of an SOA record (the minimum TTL), this will be used for the negative cache time period.  Lookups will have their own timeout on a per-record basis.  So if the TTL was 10 minutes, our original lookup is counting down in cache from 10 minutes.  If a few minutes from now we look up another record that fails, this new lookup will start at 10 minutes while our original lookup may be down to 7 minutes.  This factor is important when thinking of how fast you want new records to be seen, and also how long dns lookup failures will cause unavailability.  At the same time, you don't want too low of a value, which could cause increased load on your DNS infrastructure.  If you are troubleshooting these issues from a client perspective, you can use nslookup to see what the timeouts of a particular record are, so you can see the delay in some intermediate dns system.  For example:

nslookup -type=a -nosearch -d2 elwood.bobscountrybunker.com 8.8.8.8

Will do a lookup for HOST records with no dns suffix search, run in debug mode and point the dns query to 8.8.8.8.  The debug mode will show extra information on the lookup.  At the end of the output, you want to look at the authority record

Got answer (104 bytes):
    HEADER:
        opcode = QUERY, id = 2, rcode = NXDOMAIN
        header flags:  response, want recursion, recursion avail.
        questions = 1,  answers = 0,  authority records = 1,  additional = 0

    QUESTIONS:
        elwood.bobscountrybunker.com, type = A, class = IN
    AUTHORITY RECORDS:
    ->  bobscountrybunker.com
        type = SOA, class = IN, dlen = 46
        ttl = 1754 (29 mins 14 secs)
        primary name server = ns0.phase8.net
        responsible mail addr = support.phase8.net
        serial  = 2009042001
        refresh = 28800 (8 hours)
        retry   = 3600 (1 hour)
        expire  = 604800 (7 days)
        default TTL = 86400 (1 day)

where you can see the TTL status in cache on that server.  If you continue to run the command, you can watch this value decrease.  The same idea works for viewing valid records that are cached.  Individual records have their own TTL (not always following the same as the zone SOA record), which will point out how long they are to be cached.  So if you changed a record and want to know why the new data is not up to date in your dns queries, you can use the same methodology to track it down.

If you want to change the TTL of a zone in Microsoft DNS, open the zone properties and go to the SOA tab.  Here you will see two TTL values.  One is the minimum (default) TTL, which controls the cache length for your resource records.  The other is the TTL for this record, which is split into a DDDDD:HH:MM:SS input format; this is where you control negative caching of lookups into this zone.  By default, Microsoft DNS sets this to 1 hour.

On the client side, the Microsoft dnscache will cache negative results for a default of 15 minutes.  This can be adjusted with the registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\DNSCache\Parameters\MaxNegativeCacheTtl (ref).
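If you want to adjust that on a client, a minimal sketch in PowerShell (the value is in seconds; 300 here is just an example, and the DNS Client service needs a restart or a reboot to pick it up):

#Set client-side negative caching to 5 minutes (value in seconds)
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\DNSCache\Parameters" `
    -Name MaxNegativeCacheTtl -PropertyType DWord -Value 300 -Force | Out-Null
Restart-Service dnscache   #or reboot if the service refuses to restart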

Monday, October 24, 2011

Migrating a lot of zones from Microsoft DNS to BIND

In my last post, I gave some solutions to migrating zones from Microsoft DNS to BIND dns zones. If you have a lot of zones, you may be wondering if there is an easy way to run these steps on all of them to migrate the whole configuration. Well with dnscmd and some basic scripting, that is just adding a few more minutes of work.

When you want to see all of your zones:

dnscmd [dnsserver] /enumzones

will list out every zone on the server. The format needs some work though, and you will need to ignore the header and footer output of the command. The rest comes out in the format of Zone Type Partition Options, with no standardized whitespace between the columns. Since the zone name is the first field, we can easily extract it and use it in follow-on commands.

If we want to export our zones to files, we can try this in powershell:

$zones = dnscmd $dnsserver /enumzones
for ($i = 7; $i -lt ($zones.length - 3); $i++) {
    $zonename = $zones[$i].substring(1)
    $zonename = $zonename.substring(0, $zonename.indexof(" "))
    $file = $zonename + ".txt"
    dnscmd $dnsserver /exportzone $zonename $file
}

This will go through all the output, strip out the zone names and export them all to a text file named after the zone name. You could use this same method as a way to backup records if you have issues with them being deleted, or zones going missing. Here I started at the 7th line of output, which should bypass all of the headers and ignore the first zone, which will likely be the "." zone. You can check where you want to start by looking at the lines of the $zone array before doing a loop. We end 3 lines short of the end of the output to skip the footer information.

Alternatively, if you were to go with the secondary zone on BIND method, you could use dnscmd to set up the allowed zone transfer and provide the BIND server's IP. While doing this and the above example, you could even throw in extra output to a file, using the zone name to build all of the BIND config file entries for the new zones (primary or secondary).
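As a rough sketch of that idea, this variation of the loop appends a secondary zone stanza to a config fragment for each zone (the master IP and file paths are assumptions you would adjust):

$masterip = "10.1.3.10"   #IP of the Windows DNS server (placeholder)
$zones = dnscmd $dnsserver /enumzones
for ($i = 7; $i -lt ($zones.length - 3); $i++) {
    $zonename = $zones[$i].substring(1)
    $zonename = $zonename.substring(0, $zonename.indexof(" "))
    #Append a BIND secondary zone stanza for this zone
    $entry = "zone `"$zonename`" {`n    type slave;`n    file `"slaves/$zonename.db`";`n    masters { $masterip; };`n};`n"
    Add-Content -Path "named.conf.zones" -Value $entry
}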

In any case, to use dnscmd to set an IP for zone transfers:

dnscmd [dnsservername] /zoneresetsecondaries /Securelist [secondary dns server ip]

Be careful with this one: it resets the existing secondary settings on the zone, so if you don't want that applied to every zone, filter the list first rather than running it blindly across all of them.

These examples are just a few ways to do this. There may be some existing powershell cmdlets available that will accomplish some of these tasks. For obtaining a list of zones, and better filtering/handling of them, you could also take a look at doing this with: Get-wmiobject -namespace root\microsoftdns -class microsoftdns_zone -computer [remote dns server name].

Migrating Windows DNS to Linux BIND

Recently I have encountered several people who were trying to do DNS migrations between operating systems for various reasons, so I thought it would be nice to put together a good tutorial on this. If you search around you will find other answers, most of which tell you to pull the DNS text file from a Windows machine and copy it over to Linux. That works if you have a non-Active Directory integrated DNS zone and the file is already there. I wouldn't suggest converting an AD-integrated zone that is used in production to a primary non-AD integrated zone just to do a migration. There are two good ways to get a zone file that BIND can use.

1) Export the zone from windows.
dnscmd [dns server name] /exportzone [zone name] [file name]
This command will export all the zone records into a text file and put it in the %windir%\system32\dns folder.

2) On your Linux machine, create secondary zones for your Windows zones. On the Windows machine, allow zone transfer to the Linux machine. Once the transfer is done, you will have a text copy of the zone file that you can modify and reuse as a master zone.

Example Linux machine 10.1.3.2 and windows machine 10.1.3.10
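A minimal sketch of the two sides (the zone name and file paths are placeholders). On the Linux machine, named.conf gets a secondary zone pointing at the Windows server:

zone "mycorpdomain.com" {
    type slave;
    file "slaves/mycorpdomain.com.db";
    masters { 10.1.3.10; };
};

And on the Windows machine, allow transfers to the Linux machine:

dnscmd . /zoneresetsecondaries mycorpdomain.com /securelist 10.1.3.2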

In both cases, you will need to do some editing to the zone file: update the SOA information and the NS record.
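The relevant lines look something like this (server names and values here are placeholders):

@   IN  SOA  windowsserver.mycorpdomain.com. hostmaster.mycorpdomain.com. (
            2009042001 ; serial
            28800      ; refresh
            3600       ; retry
            604800     ; expire
            3600 )     ; minimum TTL
@   IN  NS   windowsserver.mycorpdomain.com.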


Change these values to the name of your BIND server. Place the zone file where BIND can read it, and update your named.conf or related include file to host the zone as a master. Reload BIND and you will be hosting DNS there.

There are always more considerations to a migration than this. You need to consider what IP addresses the clients use for nameservers. If they were pointing to the server you are migrating away from, you may want to do an IP address swap on your servers as a last step of the transfer. Besides clients, you need to be concerned with domain name registration services pointing to the appropriate servers that manage your registered domain names, as well as any DNS forwarders being used. If you are using dynamic dns and you have a lot of registrations from DHCP clients, migrating them as-is would cause their records to become static.  So you want to look at cleaning up your zone file of this type of entry prior to migration if you want to continue with dynamic dns on the BIND server.  Another big concern is for Active Directory environments. It is not recommended to move away from Microsoft DNS when using Active Directory, due to the large number of records that are required to make that function properly. Failing to keep up with all of the manual changes can greatly impact your AD environment. One method to help avoid some of the headache would be to use both, and leave the _msdcs zone on your windows system. This will require some delegations to be put in place on the BIND server.

Tuesday, October 11, 2011

Viewing McAfee Exclusions (Powershell)

The following script is an example of remote registry key reading using PowerShell with .NET classes. If you need to examine McAfee scan exclusions, you can find them in one of three subkeys. Depending on the risk level of the process, you will need to look in the Default, Low risk or High risk locations. Each exclusion entry is a value with a numerically incrementing name. Each value contains a pipe-separated triple of information which describes the type of the rule, when the rule should be applied (and whether it applies to subfolders), and the exclusion pattern. The script will return all of the exclusions for the specified process classification, in the format of a PSObject array with decoded rule information. Due to the length of the exclusion pattern value, you may need to further format the output or limit the columns returned to better view the results.


#Get-McAfeeExclusions

$server = $Args[0]
$level = $args[1]

if (($server -eq $null) -or ($Server -eq "")) {
  write-host -foregroundcolor "yellow"  "usage:  Get-McAfeeExclusions servername [level]"
  write-host -foregroundcolor "yellow"  "    Enter Server name to list Mcafee AV exclusion list.  Optionally"
  Write-Host -ForegroundColor "yellow"  "    you can enter the level to view (Default, High, Low)."
  write-host 
  exit
}

if ($level -ne $null) {
 if (-not (("Default","High","Low") -contains $level)) {
  Write-Host -ForegroundColor "yellow" "Invalid level specified, use Default | High | Low"
  write-host
  exit
 }
} else {
 $level = "Default"
}

function decode-mcafee-exclusion-code([int]$code) {
 switch ($code) {
  5 { return "Windows File Protection" }
  4 { return "Extension" }
  3 { return "FilePath" }
  2 { return "CreationDate" }
  0 { return "ModifiedDate" }
 }
}

function decode-second-vals([int]$code) {
#for some reason I see path rules with values above 10 which have the same settings for below 10 rules.  7=15, 3=11
 switch ($code) {
  1 {return ("write")}
  2 {return ("read")}
  3 {return ("read","write")}
  5 {return ("subfolder","write")}
  6 {return ("subfolder","read")}
  7 {return ("subfolder","read","write")}
  11 {return ("read","write")}
  15 { return ("subfolder","read","write")}
 }
}

$key = "Software\McAfee\VSCore\On Access Scanner\McShield\Configuration\" + $level
$type = [Microsoft.Win32.RegistryHive]::LocalMachine
$regkey = [Microsoft.win32.registrykey]::OpenRemoteBaseKey($type,$server)
$regkey = $regkey.opensubkey($key)

if (-not ($?)) {
 #error opening key, mcafee may not be installed
 Write-Error ("Unable to open mcafee registry key: " + $key)
 exit 1
}

$vals = $regkey.getvaluenames()
$results = New-Object collections.ArrayList

foreach ($val in $vals) {
 if ($val -match "ExcludedItem") {
  $entry = $regkey.getvalue($val)
  $exclusionvals = $entry.split("|")
  $ruletype = decode-mcafee-exclusion-code $exclusionvals[0]
  $settings = decode-second-vals $exclusionvals[1]
  $excludeditem = $exclusionvals[2]
  $myresult = New-Object psobject
  Add-Member -InputObject $myresult NoteProperty System $server
  Add-Member -InputObject $myresult NoteProperty RuleType $ruletype
  Add-Member -InputObject $myresult NoteProperty Settings $settings
  Add-Member -InputObject $myresult NoteProperty Exclusion $excludeditem
  $results.add($myresult) >$null
 }
}

return $results

Update: Jan 31, 2013
Now that I have come across some other versions of McAfee, it looks like the registry key structure is not standardized. If you get no values from the script, you can poke around in that same general registry area and find the appropriate key for your implementation.
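Example of running it against a server and taming the output width (names here are placeholders):

.\Get-McAfeeExclusions.ps1 myserver Low | Format-Table -AutoSize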

Friday, October 7, 2011

Website automation for monitoring (oracle access manager 10, diagnostics)

I wanted to share an example of building a script to monitor a web application by looking at its diagnostics page. To get to this page, there are one or more layers of logons, cookies that need to be collected, and forms to fill in before you get to the actual diagnostics information. To build this, I first looked at PowerShell, but there seemed to be no easy method of doing web interactions involving cookies. Apparently you need to extend the webclient class to allow cookies, which would require developing an app instead of just using built-in commands. After looking around a bit, I found Perl's LWP, which is simple and easy to use. To get going with a complicated web transaction, you can use tools like fiddler or iehttpheaders to walk yourself through the transaction manually and gather the data. This will show you where cookies are used, what information is passed in POST requests, and all of the URL's to use. Strip out all of the image download requests and similar javascript junk you don't need, then recreate the transaction step by step with the corresponding LWP methods. The results can be processed as text, or any other method you can come up with using other parsing modules or operations. In my example, I just do some text matching and counting to determine what is a successful check and what is a failure.

use LWP::UserAgent;
use HTTP::Cookies;

#put args processing in here
if ($#ARGV < 3) {
 print "usage: runaccess-diag.pl user password server domain";
 exit 1;
}

$user = $ARGV[0];
$pass = $ARGV[1];
$server = $ARGV[2];
$domain = $ARGV[3];
$debug = $ARGV[4];



#################################################################################################
###This url is where you would end up if you tried to directly access the diagnostics page without
###logging in.  This will keep us out of frames and keep it all simple.
#################################################################################

$url = "http://" . $server . '/identity/oblix/apps/admin/bin/front_page_admin.cgi?program=commonLogin&returnUrl=..%2F..%2F..%2F..%2F..%2Faccess%2Foblix%2Fapps%2Fadmin%2Fbin%2Fsysmgmt.cgi%3FloginTry%3D1%26pluginName%3Dsysmgmt%26program%3DgenDiagnostics&backUrl=..%2F..%2F..%2F..%2F..%2Faccess%2Foblix%2Fapps%2Fadmin%2Fbin%2Fsysmgmt.cgi';

#####################################################################################
#Define a few other URLS for handling webgate and logoff, as well as diagnostics page code
#####################################################################################

$starturl = "http://" . $server . "/access/oblix/apps/admin/bin/front_page_admin.cgi";

$altLoginurl = "http://" . $server . "/login/OAMlogin.htm";

$webgateurl = "http://" . $server . "/access/oblix/apps/webgate/bin/webgate.dll";

$diagurl = "http://" . $server . "/access/oblix/apps/admin/bin/sysmgmt.cgi";

$endurl = "http://" . $server . "/access/oblix/lang/en-us/logout.html";

$secondaryurl = "http://" . $server . "/access/oblix/apps/admin/bin/sysmgmt.cgi?loginTry=1&pluginName=sysmgmt&program=genDiagnostics";


$cookie_jar = HTTP::Cookies->new(file => "c:\\temp\\cookie.lwp");  #escape the backslashes so \t is not treated as a tab
$browser = LWP::UserAgent->new;
$browser->cookie_jar( $cookie_jar);

######################################################
#open up diag url initially to get logon page cookie##
######################################################

$response = $browser->get($starturl);
sleep 5;
$response = $browser->get($altLoginurl);

if (!($response->content =~ /document.loginform.password.onkeypress/)) {
 print "\n\nCould not load access page for $server\n\n\n";
 if ($debug) { print $response->content; }
 exit 2;
}

if ($debug) { print $response->content; }

##############################################
#POST back to form with logon details       ##
#NOTE: Not all access servers are the same. #
#They use different POST values and logon   #
#methods, some requiring webgate interaction#
##############################################
$response = $browser->post($url,[
 'fromloginpage' => "true",
 'comp' => "",
 'login' => $user,
 'password' => $pass,
 'LoginDomain' => $domain]);

if ($debug) { print "\n\n5\n" . $response->content; }

$response = $browser->get($secondaryurl);

if ($debug) { print "\n\n6\n" . $response->content; }

if (!($response->content =~ /Please select Access Server/)) {
 #$response = $browser->get($secondaryurl);
 $response = $browser->get($altLoginurl);

 if ($debug) { print "\n\n7\n" . $response->content; }

 $response = $browser->post($webgateurl,[
  'fromloginpage' => "true",
  'comp' => "",
  'uid' => $user,
  'password' => $pass]);

 if ($debug) { print "\n\n8\n" . $response->content; }

 $response = $browser->get($secondaryurl);

 if ($debug) { print "\n\n9\n" . $response->content; }

 if (!($response->content =~ /Please select Access Server/)) {
  print "\n\nLogon failure for $server\n\n\n";
  print "\n\n10\n" . $response->content;
  exit 2;
 }
}

##################################################
#POST back again to run the diagnostics          #
#That 'Program' var is not a typo in the script, #
#the problem is in the access code               #
##################################################
$response = $browser->post($diagurl,[
 'program' => "generateDiagnositcsReportPage",
 'allAsServers' => "true",
 'as_server' => "true"]);

if ($debug) { print "\n\n\n" . $response->content; }

################################################
#READ results of content from diagnostics page #
################################################
$results = $response->content;

if ($debug) { print "\n\n\n" . $response->content; }

############################################################
#Do UP and DOWN status match counting.  For a valid server #
#it will be UP for its overall status, UP for 3 components #
#and Down for 3 components.                                #
#This is not the best checking, but it is ok for 2 servers #
############################################################
$match = ">Up<";
$UPcount = () = $results =~ /$match/g;
$match = ">Down<";
$DOWNCount = () = $results =~ /$match/g;

if ($UPcount < ($DOWNCount + 2)) {
 print "\n\n$server: Diagnostics showing failure\n\n";
 print $results;
 my $response = $browser->get($endurl);
 exit 3;
} else {
 my $response = $browser->get($endurl);
 exit 0;
}

Thursday, September 8, 2011

Chasing duplicate SPNs

If you have problems with duplicate service principal names causing authentication problems in your domain, you can use a variety of tools to work on this. But first, let's look at why duplicate SPNs are an issue.

To understand this problem, here is a basic explanation of the Kerberos authentication flow:

1) User accesses a resource application
2) Resource application tells user to authenticate
3) User connects to domain controller looking for a Kerberos service ticket for that service
4) Domain controller searches for an account with that service principal name
    a) If there is one in the same domain, use that one
    b) If there is more than one in the same domain, results may vary
    c) If there is more than one in multiple domains, results may vary
5) User receives ticket from Domain Controller
6) User presents ticket to resource application
7) Resource application account (computer or service account) attempts to decrypt the ticket to verify it.
    a) If the ticket was encrypted to them, authentication works
    b) If the ticket was encrypted to one of the other duplicate SPN accounts, decryption will fail, and access is denied.


Detection:

All your domain controllers will be logging events when duplicate SPNs are encountered. Unfortunately, between 2008 and pre-2008 OS's the event log source data is different, so when searching event logs you will have to account for this in some way. My examples below pull all the duplicate SPN events and strip out just the conflicted SPN record.

2000/2003 Domain controller:

Get-WMIObject -Computer MyDomainController -Filter "Logfile='system' and eventcode = 11 and sourcename='KDC' and type=5" -Class Win32_NtLogEvent | Foreach-Object { $_.insertionstrings[0] }

2008 Domain controller:

Get-WMIObject -Computer MyDomainController -Filter "Logfile='system' and eventcode = 11 and sourcename='Microsoft-Windows-Kerberos-Key-Distribution-Center' and type=5" -Class Win32_NtLogEvent | Foreach-Object { $_.insertionstrings[0] }


Note: This type of query is a bit slow, but better than some methods. If you integrate a timewritten >= ########## clause, it may greatly improve the speed of the event log query against the remote machine.

Take this information and you can use something like queryspn.vbs to look at any specific SPN and see what accounts are configured to use it. After that, analyze which account really needs it, then "setspn -D" the invalid entries away.
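As a sketch of that last step (the SPN and account names here are placeholders):

setspn -L svcaccount1
setspn -L svcaccount2
setspn -D HTTP/myservice.mycorpdomain.com svcaccount2

The -L calls list what is registered on each candidate account, and -D removes the SPN from the account that shouldn't have it.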

Wednesday, September 7, 2011

Generating complex passwords in powershell

Occasionally I'm required to create a password for a user, and I have never gotten around to using a proper random generation system. I decided to check around for PowerShell examples of this. There is one good example on google that uses [System.Web.Security.Membership]::GeneratePassword(), but apparently I would need to upgrade the .NET framework everywhere to get that one to work. There are a few longer and so-so examples out there. I decided to whip together something of my own to avoid the use of certain characters, while giving some flexibility as to content and length.




#Generate password

#Requires -Version 2
Param(
 [parameter(mandatory=$false)][int]$length=8,
 [parameter(mandatory=$false)][switch][alias("u")]$upper,
 [parameter(mandatory=$false)][switch][alias("n")]$numeric,
 [parameter(mandatory=$false)][switch][alias("s")]$symbol,
 [parameter(mandatory=$false)][switch][alias("m")]$maxcomplexity
)

function get-digit {
 return (Get-Random -Minimum 0 -Maximum 10)
}

function get-letter([bool]$is_upper) {
 if ($is_upper) {
  $letter = [char](Get-Random -Minimum 65 -Maximum 91)
 } else {
  $letter = [char](get-random -minimum 97 -maximum 123)
 }
 return $letter 
}

function get-validsymbol([int]$set) {
 #get a symbol from the ASCII table, but skip using *, \, `, ", ' and ,
 switch($set) {
  1 { $symbol = Get-Random -Minimum 33 -Maximum 48}
  2 { $symbol = Get-Random -Minimum 58 -Maximum 65}
  3 { $symbol = Get-Random -Minimum 91 -Maximum 97}
  4 { $symbol = Get-Random -Minimum 123 -Maximum 127}  
 }
 while($symbol -match "34|39|42|44|92|96") {
  $symbol = get-validsymbol $set
 }
 return $symbol
}

function get-symbol {
 $setnum = Get-Random -Minimum 1 -Maximum 5
 return ([char](get-validsymbol $setnum))
}

#main

#look at input parameters and generate available charset limits
$values = @()
if ($maxcomplexity) {
 $values = (1,2,3,4)
} else {
 $values += 1
 if ($upper) {
  $values += 2
 }
 if ($numeric) {
  $values += 3
 }
 if ($symbol) {
  $values += 4
 }
}
#pick a random character class from the allowed set for each position
$password = ""
for ($i = 0; $i -lt $length; $i++) {
 $set = Get-Random -Minimum 1 -Maximum ($values.length+1)
 switch ($values[$set-1]) {
  1 { $char = get-letter $false }
  2 { $char = get-letter $true }
  3 { $char = get-digit }
  4 { $char = get-symbol }
 }
 $password += $char
}
return $password


<#
.SYNOPSIS

generate-password.  Create a random password. 

.DESCRIPTION

Generate a random password which by default contains only lowercase letters.  Additionally
you can specify length, use of uppercase letters, numeric characters, and use of symbols.

.PARAMETER length

Length of the password

.PARAMETER upper

Use upper case and lower case letters (alias u)

.PARAMETER numeric

Use numbers (alias n)

.PARAMETER symbol

Use symbols (alias s)

.PARAMETER Maxcomplexity

Use all forms of password complexity (alias m)

.EXAMPLE

generate-password

Create an 8 char password of lower case letters.

.EXAMPLE

generate-password -length 6 -m

Create a 6 char password with letters (upper/lower), numbers and symbols

#>

Wednesday, August 24, 2011

Remote server management with alternate credentials

This post is something of a throwback to the early Windows NT days, and it is still applicable to newer OS's (NT and anything Windows 2000 and above). If you are trying to manage a system remotely (not using remote desktop or similar VNC-type technology), you will frequently be using RPC-based connections. Tools like pstools, MMC's (eventvwr, compmgmt.msc, etc), regedit and many others use this type of connection. If the machine you are connecting to does not allow access with your credentials, is not a member of your forest, or is not joined to a domain, then there is one easy way to get all of your tools working. If you use the command line tool for drive mapping, you can also create an authenticated RPC session between your machines, which will be used in any access attempt you make after this.

Here is an example of connecting to a remote server using the local administrator account on that machine:

net use \\remoteserver\ipc$ /user:remoteserver\administrator *

The * at the end of the command will cause a prompt for password to come up when you run it. If the connection is successful, you have authenticated with alternate credentials. Now you can use your RPC based tools for access with no problems.

To remove these connections: net use \\remoteserver\ipc$ /delete

Tuesday, August 23, 2011

Can't connect to terminal services (RDP)

If you do a lot of remote management of servers, you may occasionally come across a machine that does not appear to be responding when you make a terminal services connection to it. This can be caused by configuration issues, or sometimes the service has just locked up on bad connections (seen with 2003). If you remotely check the services and the Terminal Services service is running, you can do some digging in the registry. Here I will point out what is normal for remote desktop in remote administration mode (2 connections + 1 console). Open regedit, use the connect to network registry option to access your remote machine, and expand down to HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Terminal Server.

Look for these keys:
Dword TSEnabled = 1
Dword TSUserEnabled = 0
Dword fDenyTSConnections = 0

Occasionally one of these may be incorrect. If you flip it to the correct value it should take effect immediately and allow you access.
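If you want to check these remotely without clicking through regedit, here is a minimal sketch using the same .NET remote registry approach as the McAfee script above (the server name is a placeholder):

#Read the three terminal services values from a remote machine
$type = [Microsoft.Win32.RegistryHive]::LocalMachine
$regkey = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey($type, "remoteserver")
$tskey = $regkey.OpenSubKey("SYSTEM\CurrentControlSet\Control\Terminal Server")
"TSEnabled","TSUserEnabled","fDenyTSConnections" | ForEach-Object {
    "{0} = {1}" -f $_, $tskey.GetValue($_)
}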

As I mentioned earlier, with 2003 servers I've noticed there are cases where RDP connections can cause problems and screw up terminal services completely. Why this happens, I'm not sure. I've seen it occur when accidentally dragging an icon so that it ended up in the RDP window at certain points of the connection. Although one of the terminal services tools allows you to reset the TCP listener (or something similar to this), it doesn't work. Rebooting is the only solution to fix this problem.

Another problem you may frequently run into is too many people connected to the system. In 2008 it gives you a list and option to boot someone off. In 2003, you may see that option when logging into the console session. You can also use these two command line tools to assist with this:

qwinsta /server [name of remote machine]
rwinsta /server [name of remote machine] [session id]

These commands query the sessions and kick the specific session, respectively. You can't kick someone logged in at the console, though there are some tools you may get to work, such as psshutdown (-o option).

This article covers only some of the problems you may come across. When terminal services (or remote desktop services role) is enabled, you may end up with other problems that have different solutions.

Thursday, July 28, 2011

Strange source of lockouts seen on the ISA server

A few times in our environment we have seen user account lockouts showing up on the ISA servers. These are configured to require authentication in order to allow proxying. In this type of case, 90% of the time the problem will be at the user's workstation. There can be several causes, and it is typically internet-enabled applications that don't support windows authentication against proxies. When they receive a request to authenticate, some applications are really stupid and think they are authenticating against their remote service's servers, sending whatever 3rd party username/password combination to your proxy server. If that user name is the same as your domain user ID, you get locked out (I have seen this with Skype). Others may allow users to provide their username and password, and later on after the domain account password is changed, users forget where they typed it in. In rare cases, there are applications that find their way into caching domain credentials, but don't always keep up to date with them. The case I will present here is the latter, and the details are incomplete.

When you see the proxy server locking out a user, when checking their machine, first look at every obvious internet enabled application. Sometimes you can obviously find something and update it or remove it. If you are still not sure, I recommend installing microsoft netmon 3.3 or higher on the workstation and running it for a while until the next bad password attempt shows up. The advantage of this network capture software is that it can provide the process name or process ID of the application that is doing the communication. Look for the HTTP requests and typically you will want to look for plain text authentication attempts as your culprit. Use this filter:

HTTP.Request.HeaderFields.ProxyAuthorization AND HTTP.Request.HeaderFields.ProxyAuthorization.Authorization.BasicAuthorization.Scheme == "Basic"

In the case I am bringing up, the process name was not provided and the ID was 4. PID 4 is system processes/system services. The packet capture showed plain text authentication using the user's previous domain password. Since it is a system process, we looked at the system services and came up with Akamai NetSession Interface service. This is something that installs as a download manager or similar software that Adobe is bundling with some of its downloads. I didn't get into a deep dive inspection of the machine to see where it manages to cache this plain text password, but this sounds like a good security project for someone to look at. If the software is grabbing domain credentials at some point, it would be nice to know the controls around it. In any case, this issue has come up several times in our environment with the same service. Disabling or removing fixes the problem. The problem may only come up in certain versions or due to some specific use case as we have only seen this a handful of times although there are over a hundred machines with the service.

I hope this information is helpful in troubleshooting this type of authentication failure and lockout source. For additional information on account lockouts, you can visit my account lockout tracking general practice page.

Wednesday, July 27, 2011

The security database on the server does not have a computer account for this workstation trust relationship

This is an error that can come up from time to time for a variety of reasons. When you get this error, usually you cannot access the system in any way: local login, terminal services, RPC connections, shared folder access, etc. When this happens, the computer account may have been deleted, or the system may have failed to update its password properly (memory problems, network problems, offline too long, etc). But what if you can access the server remotely and log in, while local logins and terminal service logons are failing? You may see this problem with newer OS's (Win7 and 2008, or Vista) if you are using a disjointed dns namespace.

First of all, what is a disjointed namespace? If your domain is mycorpdomain.com, and you have other dns zones that different sites use in that same domain, such as east.mycorpdomain.com and west.mycorpdomain.com, these are disjointed namespaces with subdomains. In the example I will provide, let us assume that the "primary dns suffix" setting of a machine is being pushed through group policy, either at an OU level or at an AD site level.

Let's explain the primary dns suffix setting a little bit. This can be set in the same place that you would set the computer name or change the domain membership, just click the "More" button on this form. The primary dns suffix is an attribute that exists on the computer account in AD, and it is also related to the machine's service principal names (used by kerberos).

When GPO's are used to update primary dns suffix, there are occasions where a machine does not properly update its machine account information. You can see this by looking at the machine details with any AD search tool or ADUC (General tab -> dns name attribute), and the setspn.exe tool.

When the machine fails to update its information, it may show up in two places. To start with you want to look in ipconfig /all from that machine. See what the primary dns suffix is for that machine:

C:\>ipconfig /all

Windows IP Configuration

Host Name . . . . . . . . . . . . : MYMACHINE
Primary Dns Suffix . . . . . . . : east.mycorpdomain.com


Here we see the machine is set to use east. We can use Joeware's adfind to read the other important attributes

C:\>adfind -b dc=mycorpdomain,dc=com -f "cn=MYMACHINE" dnshostname serviceprincipalname

AdFind V01.37.00cpp Joe Richards (joe@joeware.net) June 2007

Using server: myserver.mycorpdomain.com:389
Directory: Windows Server 2003

dn:CN=MYMACHINE,OU=Computers,OU=east,DC=mycorpdomain,DC=com
>dNSHostName: MYMACHINE.mycorpdomain.com
>servicePrincipalName: HOST/MYMACHINE.mycorpdomain.com
>servicePrincipalName: RestrictedKrbHost/MYMACHINE.mycorpdomain.com
>servicePrincipalName: TERMSRV/MYMACHINE.mycorpdomain.com
>servicePrincipalName: TERMSRV/MYMACHINE
>servicePrincipalName: RestrictedKrbHost/MYMACHINE
>servicePrincipalName: HOST/MYMACHINE


As you can see here, the dNSHostName attribute is not using the same primary dns name that my machine is using. You may also see some mixed up servicePrincipalName attributes, or some combination of the two. The important things are that dNSHostName matches, and that the RestrictedKrbHost and HOST serviceprincipalnames match what the machine says its primary dns suffix is. If they don't, the machine fails to find itself in AD and thinks it doesn't have a computer account, while all the time still acting like it can authenticate to the domain in most cases. This can typically be fixed with some manual setspn -A commands to add the valid serviceprincipalname values to the machine, then a reboot.
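For the example machine above, the manual fix would look something like this (then reboot):

setspn -A HOST/MYMACHINE.east.mycorpdomain.com MYMACHINE
setspn -A RestrictedKrbHost/MYMACHINE.east.mycorpdomain.com MYMACHINE
setspn -A TERMSRV/MYMACHINE.east.mycorpdomain.com MYMACHINE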

The problem gets caused somewhere in the delay of change for the primarydnssuffix attribute (only takes effect after reboot) and updates to the machine. I have been told that spn updates may be done as more than one transaction and its possible they were written in more than one place, causing a last writer to win situation that overwrites some of the other updates. That is why you may see some SPN's with the correct disjointed dns name, and some are missing them.

To mitigate this problem, if the computer account's dns suffix is correct (you will see this mostly on Vista machines), you can script a job that checks machine accounts for mismatches and fixes them proactively; a sketch of that follows below. For Windows 7 and higher it is more difficult, as the computer account dns suffix is wrong. Generally though, you will see this problem on newly built machines, as the problem only occurs just after group policy first applies. So spreading knowledge of the problem to the people that build machines is a useful tool for fixing the problem before users see it.
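A rough sketch of that proactive check (the suffix and search base are assumptions built from the east example above; the fix itself would be setspn commands like the ones shown earlier):

#Find computer accounts under the east OU whose dNSHostName does not carry
#the disjointed suffix, marking them as candidates for SPN repair
$suffix = "east.mycorpdomain.com"
$de = New-Object DirectoryServices.DirectoryEntry("LDAP://OU=Computers,OU=east,DC=mycorpdomain,DC=com")
$ds = New-Object DirectoryServices.DirectorySearcher($de)
$ds.filter = "(objectcategory=computer)"
$ds.pagesize = 1000
foreach ($comp in $ds.findall()) {
    if ($comp.properties["dnshostname"].count -gt 0) {
        $dnsname = [string]$comp.properties["dnshostname"][0]
        if ($dnsname -notlike ("*." + $suffix)) {
            Write-Output ("Suffix mismatch: " + $dnsname)
        }
    }
}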

Friday, July 22, 2011

Powershell sometimes less powerful than a snail

Today I was working on a simple old log file deletion script to run against remote machines. Since they are legacy boxes, they don't have powershell remoting capabilities, so I was going for basic access via admin shares. I wanted to clean up files that contained date data in the file name, so it was a pretty simple wildcard-match delete operation and a few date operations...something that should take a few seconds to throw together. Just to be on the safe side, I tried running my work through PowerGui Script Editor in debug mode and ended up with a hang on a deletion operation. That's weird, since it's only handling a few hundred files, something cmd's del command would knock out in no time. It seems that with the changes that came with powershell (.NET integration and the object-oriented nature of remove-item and get-childitem, which replaced DEL and DIR), the operations in the background became ridiculously inefficient in some cases. In my example I'm accessing a server that has a ping response time of 249ms from the machine where I'm running the debug. From another machine (my script jobs server) I have a 4ms response time to the target. Notice how this works for me:

From the 4ms server, doing a get-childitem operation on a folder with 940 files in it:
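The command was along these lines (reconstructed to match the cmd version below; the path is a placeholder):

get-date; get-childitem \\remoteserver\c$\mydir > c:\temp\somebsfile.txt; get-date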

Friday, July 22, 2011 2:04:50 AM
Friday, July 22, 2011 2:05:02 AM


12 seconds is a bit slow, but tolerable. Let's see how cmd compares:

get-date; cmd /C "dir \\remoteserver\c$\mydir >c:\temp\somebsfile.txt"; get-date

Friday, July 22, 2011 2:18:08 AM
Friday, July 22, 2011 2:18:09 AM


1 second or less. Much nicer. So, how about that debug machine 249ms away from the target?

Let's start with cmd /C DIR, because my Powershell get-childitem has been running so long already:

Friday, July 22, 2011 3:08:54 PM
Friday, July 22, 2011 3:09:01 PM


and we're waiting for powershell.....

waiting....

15 minutes gone by.....

still waiting....

is this still processing????

firing up netmon.....

yeah its still pulling data over SMB....

SMB query path info every 300ms or so...

comes out like this

Friday, July 22, 2011 3:05:05 PM
Friday, July 22, 2011 3:30:16 PM


Bottom line: powershell remoting is probably a better way if available, otherwise fall back to CMD.

Wednesday, May 18, 2011

Windows 2008R2 failing to update DDNS records

If you are running an environment with dynamic DNS, record scavenging, and 2008R2 servers, you may notice records disappearing from time to time. There is a bug in 2008 that causes systems to fail to maintain their records if you make changes to the DNS server IPs that the system uses. If you run ipconfig /registerdns, it forces the system to update and all is well. If you want a more permanent solution to this problem, Microsoft released this patch recently (kb 2520155).

IPv6 videos

Here are some useful presentations on IPv6 held at last year's DEFCON conference:

Who Cares about IPv6, Sam Browne
IPv6 No Longer Optional, (ARIN) John Curran
Implementing IPv6, (ARIN) Matt Ryanczak

Friday, April 29, 2011

Finding what AD site an IP address is in.

In environments where there are a large number of subnets defined in Active Directory, sometimes it can be difficult to find what site an IP belongs to. You can always run nltest /dsgetsite on the system, but that requires access to the machine and is not very efficient. There is a good command line tool for this called atsn produced by joeware.net, but I thought it would be nice to have something in powershell for this.

This script takes the IP address of the machine as a mandatory argument (ipv4 only). You can provide either a network mask, or a mask length. If no length is provided, then the system will search all subnets from a /32 downwards. The script will search for any less specific subnets that exist in AD and return the first site name that it finds containing this IP.



#get-site-byIP, ipv4


Param(
[Parameter(Mandatory=$true,HelpMessage="IP Address")][validatepattern('^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$')]$ip,
[Parameter(Mandatory=$false,HelpMessage="Netmask")][validatepattern('^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$')]$netmask,
[Parameter(Mandatory=$false,HelpMessage="Mask length")][validaterange(0,32)][int]$masklength
)



function check-subnetformat([string]$subnet) {

    $octetsegments = $subnet.split(".")
    #Check each octet from last to first.  If an octet does not contain 0, check to see
    #if it is a valid octet value for subnet masks.  Then check to make sure that all
    #preceding octets are 255
    $foundmostsignificant = $false
    for ($i = 3; $i -ge 0; $i--) {
        if ($octetsegments[$i] -ne 0) {
            if ($foundmostsignificant -eq $true -and $octetsegments[$i] -ne 255) {
                Write-Error "The subnet mask has an invalid value"
                return $false
            } else {
                if ((255,254,252,248,240,224,192,128) -contains $octetsegments[$i]) {
                    $foundmostsignificant = $true
                } else {
                    Write-Error "The subnet mask has an invalid value"
                    return $false
                }
            }
        }
    }
    return $true

}


function get-subnetMask-byLength ([int]$length) {

switch ($length) {
"32" { return "255.255.255.255" }
"31" { return "255.255.255.254" }
"30" { return "255.255.255.252" }
"29" { return "255.255.255.248" }
"28" { return "255.255.255.240" }
"27" { return "255.255.255.224" }
"26" { return "255.255.255.192" }
"25" { return "255.255.255.128" }
"24" { return "255.255.255.0" }
"23" { return "255.255.254.0" }
"22" { return "255.255.252.0" }
"21" { return "255.255.248.0" }
"20" { return "255.255.240.0" }
"19" { return "255.255.224.0" }
"18" { return "255.255.192.0" }
"17" { return "255.255.128.0" }
"16" { return "255.255.0.0" }
"15" { return "255.254.0.0" }
"14" { return "255.252.0.0" }
"13" { return "255.248.0.0" }
"12" { return "255.240.0.0" }
"11" { return "255.224.0.0" }
"10" { return "255.192.0.0" }
"9" { return "255.128.0.0" }
"8" { return "255.0.0.0" }
"7" { return "254.0.0.0"}
"6" { return "252.0.0.0"}
"5" { return "248.0.0.0"}
"4" { return "240.0.0.0"}
"3" { return "224.0.0.0"}
"2" { return "192.0.0.0"}
"1" { return "128.0.0.0"}
"0" { return "0.0.0.0"}

}

}

function get-MaskLength-bySubnet ([string]$subnet) {

switch ($subnet) {
"255.255.255.255" {return 32}
"255.255.255.254" {return 31}
"255.255.255.252" {return 30}
"255.255.255.248" {return 29}
"255.255.255.240" {return 28}
"255.255.255.224" {return 27}
"255.255.255.192" {return 26}
"255.255.255.128" {return 25}
"255.255.255.0" {return 24}
"255.255.254.0" {return 23}
"255.255.252.0" {return 22}
"255.255.248.0" {return 21}
"255.255.240.0" {return 20}
"255.255.224.0" {return 19}
"255.255.192.0" {return 18}
"255.255.128.0" {return 17}
"255.255.0.0" {return 16}
"255.254.0.0" {return 15}
"255.252.0.0" {return 14}
"255.248.0.0" {return 13}
"255.240.0.0" {return 12}
"255.224.0.0" {return 11}
"255.192.0.0" {return 10}
"255.128.0.0" {return 9}
"255.0.0.0" {return 8}
"254.0.0.0" {return 7}
"252.0.0.0" {return 6}
"248.0.0.0" {return 5}
"240.0.0.0" {return 4}
"224.0.0.0" {return 3}
"192.0.0.0" {return 2}
"128.0.0.0" {return 1}
"0.0.0.0" {return 0}

}

}

function get-networkID ([string]$ipaddr, [string]$subnetmask) {
    $ipoctets = $ipaddr.split(".")
    $subnetoctets = $subnetmask.split(".")
    $result = ""

    for ($i = 0; $i -lt 4; $i++) {
        $result += $ipoctets[$i] -band $subnetoctets[$i]
        $result += "."
    }
    $result = $result.substring(0, $result.length - 1)
    return $result
}

$startMaskLength = 32

#we can take network masks in both length and full octet format.  We need to use both.  LDAP searches
#use length, and network ID generation is by full octet format.

if ($netmask) {
    if (-not (&check-subnetformat $netmask)) {
        Write-Error "Subnet provided is not a valid subnet"
        exit
    } else {
        $startmasklength = &get-MaskLength-bySubnet $netmask
    }
} elseif ($PSBoundParameters.ContainsKey("masklength")) {
    #honor the -masklength parameter if it was supplied instead of a full netmask
    $startmasklength = $masklength
}



$forest = [System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest()
$mytopleveldomain = $forest.schema.name
$mytopleveldomain = $mytopleveldomain.substring($mytopleveldomain.indexof("DC="))
$mytopleveldomain = "LDAP://cn=subnets,cn=sites,cn=configuration," + $mytopleveldomain
$de = New-Object directoryservices.DirectoryEntry($mytopleveldomain)
$ds = New-Object directoryservices.DirectorySearcher($de)
$ds.propertiestoload.add("cn") > $null
$ds.propertiestoLoad.add("siteobject") > $null


for ($i = $startMaskLength; $i -ge 0; $i--) {
    #loop through netmasks from the longest mask down to /0 looking for a subnet match in AD

    $mask = &get-subnetMask-byLength $i
    $netwID = &get-networkID $ip $mask

    #ldap search for the network
    $ds.filter = "(&(objectclass=subnet)(objectcategory=subnet)(cn=" + $netwID + "/" + $i + "))"
    $fu = $ds.findone()
    if ($fu -ne $null) {
        #if a match is found, return it since it is the longest length (closest match)
        Write-Verbose "Found Subnet in AD at site:"
        return $fu.properties.siteobject
    }
    $fu = $null
}

#if we have arrived at this point, the subnet does not exist in AD

return "Subnet_NOT_Assigned"

Wednesday, March 2, 2011

Vmware module compile failure on kernel 2.6.34.x

I recently went through a distro update of OpenSuSE (11.1 to 11.3) on my home computer, and among the usual suspects that get broken in this process was VMware. Every time there is a kernel update, VMware needs to recompile its modules. Typically this is a simple process, but in this case VMware v7.0.x was failing to recompile its modules on kernel 2.6.34.7-0.7 with a missing header error. With all the appropriate packages installed, I had to go through the google method of finding wild ideas for a fix. I tried a few with no success. To avoid the userland patches to vmware, I just downloaded the latest VMware version and ran through the install. This apparently is an easy fix. So if anyone is still stuck in this mess, try an update.

If I can ever figure out the mystery of the many sound systems in Linux and why they are working, then not working at any given time, this would also make life much easier :).

Tuesday, January 25, 2011

IPv4 pool exhaustion and IPv6

I've seen some stories popping up recently showing what has been known for a long time: IPv4 is close to running out of allocatable space. You can view some reports at this site. With larger companies (like Yahoo!) and the US government pushing towards IPv6, this will be an important skill to have for IT workers. For those with limited exposure, it's time to pick up a book and do some playing around. A while back, I was doing this and ran across Hurricane Electric. They are doing a good job of helping with IPv6 technology, and are giving people a chance to play around with some free services. They have a multi-level free certification program for IPv6, which requires hands-on, real world IPv6 work. The tunnel setup provides instructions for multiple operating systems (including windows). I have been playing with this in the last few days, setting up v6 tunnels, v6 capable web and mail servers, along with DNS. They do have some free dns capability, but for the tests, you need domain names registered elsewhere. Co.cc is good for this as they give free domain registrations in their space. The only problem is they don't let you create AAAA records on their servers. You can however register a domain and delegate it to ns1.he.net (where you use HE's freedns system to create these records). Another issue that may come up for some users is the ability to allow this type of traffic through a home connection. IP protocol 41 needs to be allowed, so your home router needs to be able to do more than just TCP/UDP forwarding. If you can get your router to mirror its public IP to a backend machine, it's no issue.


IPv6 Certification Badge for nlinley

Wednesday, January 5, 2011

DES encryption, Kerberos and 2008 Server

When dealing with Windows 2008 servers as domain controllers mixed with legacy applications, you may run into a problem with encryption support. Older software and platforms may be set to use DES encryption. This has been disabled by default on newer Microsoft OS's, including 2008R2 and Windows 7. There are ways to get the support turned back on, though for security reasons this is not recommended. DES uses a weaker key than the other available methods, and most systems should support the windows standard of RC4-HMAC.

Identification of DES usage in your environment:

1) Netmon. Using the older Microsoft version of netmon, you can monitor your domain controllers and look for kerberos traffic that is using encryption other than RC4-HMAC. To do this, run a capture and create a display filter as follows:

AND
|---Kerberos: Encryption type (Etype[0]) <> 0x17
|---Protocol == KERBEROS
|---Any <--> Any

This will show all Kerberos traffic that is not using the standard. This may also find other types, such as AES, if you are using the latest and greatest. Since there are several types of DES encryption formats for Kerberos, this filter method is the simplest, but you can also create a multiple set of OR statements on that Etype value. Refer to the RFCs to get the values of each.

2) Kinit: If you are already seeing problems with kerberos authentication for certain applications, you can use kinit with debugging options to request a ticket for that service. This can show you if you are getting a bad Encryption type error. Here is one example using java and a keytab:

java -Dsun.security.krb5.debug=true -Dsun.security.krb5.krb5debug=all sun.security.krb5.internal.tools.Kinit -k -t HTTP.keytab HTTP/myservice@MYDOMAIN


-result snip-
Found unsupported keytype (23) for
HTTP/myservice@MYDOMAIN
-/result snip-


3) Searching for DES enabled AD accounts:
Ldap filter: (&(|(objectcategory=user)(objectcategory=computer))(userAccountControl:1.2.840.113556.1.4.804:=2097152))

This will find all DES enabled computer and user accounts in your search scope. You can search the serviceprincipalname attribute to see what applications may be using DES.
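A quick way to run that filter from PowerShell, using the same DirectorySearcher pattern as the other scripts on this blog (a sketch; the search scope defaults to the current domain):

$de = New-Object DirectoryServices.DirectoryEntry
$ds = New-Object DirectoryServices.DirectorySearcher($de)
$ds.filter = "(&(|(objectcategory=user)(objectcategory=computer))(userAccountControl:1.2.840.113556.1.4.804:=2097152))"
$ds.pagesize = 1000
$ds.propertiestoload.add("name") > $null
$ds.propertiestoload.add("serviceprincipalname") > $null
foreach ($account in $ds.findall()) {
    #list each DES-enabled account and any SPNs registered on it
    $account.properties["name"]
    $account.properties["serviceprincipalname"]
}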

4) Kerberos event log events on the DC's (look for KDC events complaining about unsupported or unknown encryption types)

5) Errors in application logs.  ETYPE not supported and similar.  Some logs will show types of encryption configured.  Check java versions (1.4.x and earlier don't support RC4-HMAC).  Check krb5.ini and krb5.conf files for encryption type configurations.

Fixing the problem

There are two ways to go about getting around this problem during an upgrade. First, you can try to identify all of the uses of DES and get rid of them, or you can enable DES support.

To get rid of DES usage, look for capability in the application for RC4-HMAC or another supported encryption standard. Check the application, check the versions of java it uses, etc. If it is java, anything 1.5 and higher supports RC4. Check all the krb5.conf files to ensure any supplied enctype values allow for RC4-HMAC:


[libdefaults]
default_tkt_enctypes = rc4-hmac
default_tgs_enctypes = rc4-hmac
permitted_enctypes = rc4-hmac

Do your testing and push the appropriate application owners to do what they can to get away from DES. If keytabs are in use, ensure they are recreated with the appropriate encryption.

If you have applications that cannot get rid of DES, you can look at the steps required to enable DES support on the OS. There are two parts to this. First you will need to patch your 2008 domain controllers with KB978055. This gives the DC the ability to issue DES tickets. If your clients are windows 7 or 2008R2 server themselves, they will need to have some configuration changes. This can be done by a registry fix, or pushed by group policy. Refer to this article for that. When changing the client settings, be careful that you allow all of the required encryption types. If you use a GPO to turn on DES, and don't specify anything else, your machine will only use DES.

UPDATE (2/11/2012): After having the above patch fail to install with an error that it is not applicable, it seems that the patch was rolled into Windows 2008R2 service pack 1. So if you have the service pack installed, you should be fine on the domain controller side.


UPDATE (10/2/2012):  Apparently not all versions of service pack 1 have this fix.  General distribution release versions of the SP won't have it.  You can check http://support.microsoft.com/?id=2425227 and http://support.microsoft.com/?id=2029058 to get hotfixes for updating the KDC service.  Also ensure that your group policy allowed encryption types include the Future encryption types.  Don't ask why, but for some reason in my testing, having everything but that checked causes DES to fail in some cases, while checking it causes it to suddenly work.