Wednesday, December 28, 2011

Nice free application that I found today

I thought I would share this with anyone interested: http://www.mohtab-software.co.uk/downloads.  From my initial checks of the program, it looks decent and contains a lot of reference collections: Quran, Hadith, Fiqh, and companion biographies.  Very awesome...similar to winalim, but it's free.

Wednesday, December 21, 2011

Firefox 9 upgrade failure, previous upgrade requires reboot

For quite some time now, every time I upgrade Firefox, I get a popup saying that a previous upgrade requires a reboot.  Regardless of whether I rebooted or not, the message never went away.  Typically, bypassing the reboot prompt solved the problem and let me continue working.  With the new Firefox 9 install I tried today, it gave me that same popup during the installation process, but when I tried to bypass it, the installer just closed without giving any error.  From the Mozilla help forums, it looks like the recommended suggestion (from the top Google search results) is to wipe out the whole Mozilla Firefox folder and do a fresh install.  That seems a bit over the top, so I took a few seconds to look at the folder (typically c:\program files\mozilla firefox).  Scrolling down the list of files, one name jumps out as the obvious culprit: a file called firefox.exe.moz-upgrade.  I renamed this file and tried the install again; this time there was no error.  The file was recreated at the end of the upgrade, which then prompted me to reboot.  After the reboot, the file was removed during the post-install cleanup.  So I guess sometime in the past, one of those cleanups failed.  In any case, sometimes the answer is a bit simpler than the sledgehammer troubleshooting approach (i.e. delete a user's profile, reimage a whole computer, etc.).  Hopefully the new version of Firefox doesn't hang repeatedly for no apparent reason.
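For anyone who wants to script the same check, here is a minimal sketch.  The install path is the one from this post (adjust it if Firefox lives elsewhere), and the `.bak` rename target is my own choice; this is not an official Mozilla tool.

```python
from pathlib import Path

# Default Firefox install path from the post; adjust for your system.
stale = Path(r"C:\Program Files\Mozilla Firefox\firefox.exe.moz-upgrade")

if stale.exists():
    # Rename rather than delete, so the change is easy to undo.
    stale.rename(stale.with_name(stale.name + ".bak"))
    print("Renamed stale upgrade file; re-run the Firefox installer.")
else:
    print("No stale upgrade file found.")
```

If the installer succeeds afterward, the renamed copy can simply be deleted.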

Tuesday, December 13, 2011

Photos in Outlook and Sharepoint (thumbnailphoto)

After assisting with some issues where Outlook crashed because users uploaded their own photos via software that corrupted the images, I looked around at some of the solutions available for uploading thumbnails.  I won't go into all of the details of what the thumbnailphoto attribute is used for, but generally in newer Outlook versions it will show your user photo in the emails that people receive.  It also ties in with SharePoint and possibly some other products that I'm not very familiar with.  The AD attribute allows for 100KB of data by default, however Outlook wants a 10KB max image, and the recommended dimensions I have read are 96x96.  Of the solutions that are available, most will just upload the image and assume you sized it correctly already.  Some others come as exe files that do it all for you, some of which are programs you have to buy (even though they suck, and it's easy to build your own), and others require .NET 4.0 for something that can be done in much lower versions, etc.  I didn't like many of them, and from what I can see of user-circulated information, one of the easiest solutions was using ldp.exe to upload the file to your own user object.  In any case, I thought it would be nice to have a self-service method of uploading your photo at the recommended image dimensions and data size.  So I threw together this PowerShell (v1 compatible) script to handle it.  At the moment, I didn't feel like trying to wrap a GUI around all of this, but perhaps I'll do that in the future.  The inspiration for this code goes back to the Microsoft Scripting Games 2011 Advanced day 8 competition, which involved stripping EXIF data and resizing graphics files.  At the time, I didn't have enough time to figure out how to use the WIA COM objects to handle this work, so for this script I have used some of the other competitors' examples as guidance for these methods.



#Add-ADPhoto   Powershell v1 compatible script for updating
#user thumbnailphoto attribute.  Resizes input photo to recommended
#dimensions and size.  Only updates for the currently logged in user.
#This is a script for user self service.

$infile = $args[0]
$aspect = $args[1]

function usage {
    write-host "Usage: Add-ADPhoto filename [aspect]"
    write-host "   Provide the name of an image file in your current directory."
    write-host "   If you wish to preserve the aspect ratio of the image, type"
    write-host "   1 after your file name.  Images are resized to the recommended"
    write-host "   96x96, converted to JPG, and the quality is lowered in steps"
    write-host "   to keep the file under the 10KB limit."
    exit
}
$imagefile = (pwd).path + "\" + $infile
$imagefileout = (pwd).path + "\adout.jpg"

##############################################################################
#Check to see if the argument for filename was provided, and that it exists###
##############################################################################
if ([string]::isnullorempty($infile) -or -not (test-path $imagefile)) {
    usage
}


###############################
#Remove any old converted file#
###############################
if (test-path $imagefileout) {
    Remove-Item -Path $imagefileout -ErrorAction "silentlycontinue"
}

$Image = New-Object -ComObject Wia.ImageFile
$ImageProcessor = New-Object -ComObject Wia.ImageProcess


##########################################################
#Try loading the file, if its not an image this will fail#
##########################################################
$Image.LoadFile($ImageFile)

if (-not $?) { usage }


#############################################################
#Create filters, set aspect ratio setting, change dimensions#
#to max 96pixels, convert to JPG and set quality            #
#############################################################
$Scale = $ImageProcessor.FilterInfos.Item("Scale").FilterId
$ImageProcessor.Filters.Add($Scale)
$Convert = $ImageProcessor.FilterInfos.Item("Convert").FilterID
$ImageProcessor.Filters.Add($Convert)

if ([string]$aspect -eq "1") {
    $ImageProcessor.Filters.Item(1).Properties.Item("PreserveAspectRatio") = $true
} else {
    $ImageProcessor.Filters.Item(1).Properties.Item("PreserveAspectRatio") = $false
}

$ImageProcessor.Filters.Item(1).Properties.Item("MaximumHeight") = 96
$ImageProcessor.Filters.Item(1).Properties.Item("MaximumWidth") = 96
$ImageProcessor.Filters.Item(2).Properties.Item("FormatID") = "{B96B3CAE-0728-11D3-9D7B-0000F81EF32E}"

####################################################################
#Drop image quality until it meets the size recommendation of 10kb.#
#Apply the filters to the original image on each pass; otherwise we#
#would be recompressing an already-compressed copy.                #
####################################################################
$quality = 80
do {
    Remove-Item -path $imagefileout -ErrorAction "silentlycontinue"
    $ImageProcessor.Filters.Item(2).Properties.Item("Quality") = $quality
    $Output = $ImageProcessor.Apply($Image)
    $Output.SaveFile($ImageFileOut)
    [byte[]]$imagedata = get-content $imagefileout -encoding byte
    $quality -= 10
} while ($imagedata.length -gt 10kb)


#####################################################################
#Find domain and account distinguished name.  Open the user object, #
#add thumbnailphoto data and save.                                  #
#####################################################################
$de = new-object directoryservices.directoryentry("LDAP://" + $env:logonserver.substring(2))
$ds = new-object directoryservices.directorysearcher($de)
$ds.filter = "(&(objectclass=user)(samaccountname=" + $env:username + "))"
$myaccount = $ds.findone()
$de = new-object directoryservices.directoryentry($myaccount.path)
$de.properties["Thumbnailphoto"].clear()
$de.properties["Thumbnailphoto"].add($imagedata) |out-null
$de.setinfo()
Write-Host "Photo has been uploaded"

Monday, December 12, 2011

DNS Negative caching

In the past I have frequently run into the problem of delay in seeing new DNS records over a large DNS environment.  For example, if we want to put a new server in with a HOST record, it may take an hour or more before it can be seen throughout an enterprise or for external entities and customers.  There are several factors that could cause delay.  First of all, if you have Primary/Secondary servers, there could be delays in zone transfers of the new record.  In Active Directory environments, you have AD replication delays.  In other environments, systems may specifically have negative lookup caching, or by default act this way.  In this post, I will focus on Negative Caching.  You may ask, what is that?  Simply put, if a system is doing negative caching, and it does a dns lookup for a record, but gets no result, the system will remember this failed lookup and hold on to it for a period of time.  This prevents the system from trying to lookup the record again until a timeout has occurred and the negative cached entry is flushed.  You can view the cache in windows with ipconfig /displaydns, and a negative entry looks like this:
   elwood.bobscountrybunker.com
   ----------------------------------------
   Name does not exist.

In Windows clients, there are registry settings for the DNS Client service to do this, but the Windows DNS server does not have it.  BIND servers will do it when acting as an intermediate dns system in the lookup process.  So if we have a client machine doing a lookup for elwood.bobscountrybunker.com and it is sending this lookup to 8.8.8.8 (Google DNS), this server tries to find a record at the authoritative holder of the zone bobscountrybunker.com.  If no result is found, the 8.8.8.8 server will negatively cache this failure for a specified period of time.  The amount of time the server will cache it is provided by the SOA record for the bobscountrybunker.com zone.  If you look at the last value of an SOA record (the minimum TTL), this will be used as the negative cache time period.  Lookups have their own timeout on a per record basis.  So if the TTL was 10 minutes, our original lookup is counting down in cache from 10 minutes.  If, a few minutes from now, we look up another record that fails, this new lookup will start at 10 minutes while our original lookup may be down to 7 minutes.  This factor is important when thinking of how fast you want new records to be seen, and also how long dns lookup failures will cause unavailability.  At the same time, you don't want too low a value, which could cause increased load on your DNS infrastructure.  If you are troubleshooting these issues from a client perspective, you can use nslookup to see what the timeouts of a particular record are, so you can see the delay in some intermediate dns system.  For example:

nslookup -type=a -nosearch -d2 elwood.bobscountrybunker.com 8.8.8.8

This will do a lookup for HOST records with no dns suffix search, run in debug mode, and point the dns query to 8.8.8.8.  Debug mode will show extra information on the lookup.  At the end of the output, you want to look at the authority record:

Got answer (104 bytes):
    HEADER:
        opcode = QUERY, id = 2, rcode = NXDOMAIN
        header flags:  response, want recursion, recursion avail.
        questions = 1,  answers = 0,  authority records = 1,  additional = 0

    QUESTIONS:
        elwood.bobscountrybunker.com, type = A, class = IN
    AUTHORITY RECORDS:
    ->  bobscountrybunker.com
        type = SOA, class = IN, dlen = 46
        ttl = 1754 (29 mins 14 secs)
        primary name server = ns0.phase8.net
        responsible mail addr = support.phase8.net
        serial  = 2009042001
        refresh = 28800 (8 hours)
        retry   = 3600 (1 hour)
        expire  = 604800 (7 days)
        default TTL = 86400 (1 day)

where you can see the TTL status in cache on that server.  If you continue to run the command, you can watch this value decrease.  The same idea works for viewing valid records that are cached.  Individual records have their own TTL (not always following the same as the zone SOA record), which will point out how long they are to be cached.  So if you changed a record and want to know why the new data is not up to date in your dns queries, you can use the same methodology to track it down.
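The per-record countdown described above can be illustrated with a small simulation.  This is only a sketch of the idea, not of any real resolver; the second hostname and the 10-minute TTL are my own example values.

```python
# Simulate a resolver's negative cache: each failed lookup gets its own
# countdown, all starting from the zone's minimum (negative cache) TTL.
NEG_TTL = 600  # zone SOA minimum TTL: 10 minutes, in seconds

cache = {}  # name -> time (seconds) the negative entry was created

def fail_lookup(name, now):
    """Record a failed lookup at time `now`."""
    cache[name] = now

def remaining(name, now):
    """Seconds left before the negative entry for `name` expires."""
    return max(0, NEG_TTL - (now - cache[name]))

fail_lookup("elwood.bobscountrybunker.com", now=0)    # first failed lookup
fail_lookup("jake.bobscountrybunker.com", now=180)    # another, 3 minutes later

# 7 minutes in: the first entry has 3 minutes left, the second 6 minutes.
print(remaining("elwood.bobscountrybunker.com", now=420))  # 180
print(remaining("jake.bobscountrybunker.com", now=420))    # 360
```

Each entry counts down independently, which is why a record you just created can stay invisible to one client while another client resolves it immediately.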

If you want to change the TTL of a zone in Microsoft DNS, open the zone properties and go to the SOA tab.  Here you will see two TTL values.  One is the minimum (default) TTL, which controls the cache length for your resource records.  The other is the TTL for this record, entered in DDDDD:HH:MM:SS format, and this is where you control negative caching of lookups into this zone.  By default, Microsoft DNS sets this to 1 hour.
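As a quick aid for that DDDDD:HH:MM:SS field, here is a small conversion helper.  It is my own sketch, not part of any Microsoft tooling; it just translates a TTL in seconds into the console's input format.

```python
def to_dns_ttl_format(seconds):
    """Convert a TTL in seconds to the DDDDD:HH:MM:SS form
    used by the Microsoft DNS SOA property tab."""
    days, rest = divmod(seconds, 86400)
    hours, rest = divmod(rest, 3600)
    minutes, secs = divmod(rest, 60)
    return f"{days}:{hours:02}:{minutes:02}:{secs:02}"

print(to_dns_ttl_format(3600))   # the 1 hour default -> 0:01:00:00
print(to_dns_ttl_format(86400))  # 1 day -> 1:00:00:00
```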

On the client side, the Microsoft dnscache service will cache negative results for a default of 15 minutes.  This can be adjusted with the registry value MaxNegativeCacheTtl under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\DNSCache\Parameters.