Friday, December 26, 2014

Adding workstations to an account's allowed workstation list

For anyone that uses the Logon Workstations attribute on user accounts to restrict which machines an account can log on to, you may find that updating the list is a bit tedious from the GUI, and updating it from command line tools isn't always easy either. The attribute in AD is just a string value; all of the "list" behavior comes from the fact that it is stored in CSV format, with a maximum length of 1024 characters. Adding machines therefore requires appending to the existing list, formatting it as CSV, and staying within the length limit. I put together a script for managing this, though it was targeted at accounts that already have some entries in this attribute, so it may require some extra work to target accounts that don't have any values. The script checks the length and content of each entry to ensure it meets NetBIOS name standards. It will also ignore any duplicates that you may have provided.

param (
 [Parameter(Mandatory=$true)][string]$targetaccount,
 $workstation
)

Import-Module ActiveDirectory

Try {
 #accept either an array of names or a single comma separated string
 if ($workstation -is [array]) {
  $workstation = $workstation -join ","
 }
 $UserAccount = Get-ADUser $targetaccount -Properties logonworkstations
 if ($UserAccount -eq $null) {
  Write-Host "We have NOT found the account $targetaccount"
  throw "Target account not found"
 }
 $UsrAcctArray = $UserAccount.logonworkstations.split(",")
 if ($workstation.length -gt 15 -and $workstation -notmatch ",") {
  throw "Workstations provided are not correct.  Computer names can only be 15 characters or less."
 }
 #check all provided names to ensure they meet MS netbios name standards for machines.  If they don't, ignore the name provided.
 #the trim clears out any whitespace in the user input
 $WorkstationCDL = ""
 foreach ($entry in $Workstation.split(",")) {
  $entry = $entry.trim()
  if (([regex]::match($entry,'^[0-9a-zA-Z_-]{5,15}$')).success) {
   $WorkstationCDL = $WorkstationCDL + $entry.ToUpper() + ","
  }
 }
 #if we have received no valid machines, quit
 if ([string]::IsNullOrEmpty($WorkstationCDL)) {
  throw "No valid workstation names provided.  Please provide a name that meets Microsoft standards."
 }
 #remove the trailing comma, split the new workstations into an array, merge with the old one and check the length
 $WorkstationCDL = $WorkstationCDL.substring(0,($WorkstationCDL.length - 1))
 $WKSTarray = $WorkstationCDL.split(",")
 #merge arrays and pull unique names
 $newWorkstations = (($UsrAcctArray + $WKSTarray | Select-Object -Unique) -join ",").ToUpper()
 #check length (attribute max is 1024 chars)
 if ($newWorkstations.length -gt 1024) {
  throw "The account has too many computers.  Cannot add more."
 }
 try {
  Set-ADUser -Identity $targetaccount -LogonWorkstations $newWorkstations
 } catch {
  throw "Unable to modify user object."
 }
} catch { throw $_ }

Tuesday, December 23, 2014

Journey to the hereafter series - Sh Tawfique Chowdhury

This is a very beneficial series of talks on everything that happens from now until the final abode.

All episodes on youtube

Audio can be found at Muslim central audio here. The same on itunes as a podcast.

Friday, November 14, 2014

Slow smartcard logon through remote desktop (RDP)

For anyone that has deployed smartcards, you have probably noticed at some point that smartcard logons are much slower than password logons. When they are done over remote desktop using local smartcard redirection, it can be horribly slow. In my testing, the ping response time in milliseconds, divided by 100, directly corresponded to how many minutes it took to log onto a remote machine: 200ms being 2 minutes, 800ms being 8 minutes. This can cause logons to drop while they are still being processed. If you go searching for details on this problem, there doesn't seem to be much helpful information. Some suggest driver problems can be a factor, and other results may point you to some KB articles for hotfixes such as:

A smart card logon to a terminal session stops responding on a server that is running Windows Server 2008 or Windows Server 2008 R2
A program that requires you to use a smart card stops responding in a remote desktop connection in Windows Server 2008, in Windows Vista, in Windows 7 or in Windows Server 2008 R2
You may wait for up to 30 seconds when you use a smart card to unlock a computer that is running Windows 7 or Windows Server 2008 R2
Windows 7-based or Windows Server 2008 R2-based Remote Desktop Services server freezes when you try to log on to or log off the server by using a smart card
RDP 8 upgrade

I tried all of these, but none seemed to help. We eventually ended up going through a microsoft support case for the issue, which gave us some more suggested hotfixes to apply in this specific order (none worked):

"0x80100065" error when you call the SCardConnect function in Windows 7, Windows Server 2008 R2, Windows 8, and Windows Server 2012
A computer that has smart card logon enabled stops responding after you remove and then reinsert a smart card in Windows 7, Windows Vista, Windows Server 2008 or Windows Server 2008 R2
"Interactive Logon: Smart card removal behavior" Group Policy setting doesn't work as expected in Windows 7 SP1 or Windows Server 2008 R2 SP1
PIN dialog box appears unexpectedly when you open an encrypted email message after you remove and reinsert a Base CSP smart card in Windows 7 or in Windows Server 2008 R2
The screen saver grace period does not work as expected if the period exceeds 60 seconds on a computer that is running Windows 7 or Windows Server 2008 R2
Number of incorrect PIN retry attempts is less than expected after you unblock a smart card on a computer that is running Windows Vista, Windows Server 2008, Windows 7, or Windows Server 2008 R2

After further debugging and log analysis, the technician told us that the redirection function happens more than 50 times during the logon, with each request being sent individually. Essentially, with all of this one-by-one back and forth traffic, network latency causes a big impact, as we had originally reported to them. This was just marked down as expected behaviour that needed to be accepted. Sadly we were left with this situation, however with some further digging around, I did find one obscure forum post for a registry value that I couldn't find documented anywhere on Microsoft (at that time). This value can be created in HKLM\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp. The value is a Dword with the name LogonTimeout, and its value is in seconds. You put this value on the server end, and it will allow the connection to stay open longer during the logon process, so you don't have the connection dropping while it's showing you the "welcome" screen. Otherwise you need to provide activity into the RDP connection window to reset the default timer, which I believe is 60 seconds. This activity can be mouse clicks or any other input that would go into the remote session. There is another key that you may want to adjust, which is related to timeouts in the smartcard crypto provider. More on this can be found in this article.
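As a quick sketch, the value can be created from an elevated PowerShell prompt on the server end (the 300 second timeout here is just an example; pick something comfortably longer than your worst observed logon time):

```powershell
# Path and value name as found in the forum post; the data is in seconds
$path = 'HKLM:\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp'
New-ItemProperty -Path $path -Name 'LogonTimeout' -PropertyType DWord -Value 300 -Force
```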

Thursday, October 30, 2014

Finding OUs that block GPO inheritance

import-module activedirectory
get-adobject -LDAPFilter "(&(objectclass=OrganizationalUnit)(gPOptions=1))"

Tuesday, October 21, 2014

Windows not fully booting after patching (black screen after windows logo)

I have noticed over different patch cycles that I occasionally come across a machine that will start the boot sequence, show the early windows logo and then go to the temporary black screen prior to the blue background of group policy and other OS startup items. The problem though is that it stays at the black screen without progressing. On some machines there seems to be a "normal" hang here with zero disk activity and apparently nothing happening, but in this case the hard drive will briefly tick for a moment every few seconds. So it appears that something is happening, but the machine never boots. In the few cases I have seen this, booting to safe mode usually works to resolve the problem. Since patches install over multiple stages of shutdown and preboot, it may be that something is preventing the install from finishing. Safe mode seems to get around this block and allows the OS to resolve the problem, and the next reboot is back to normal.

Powershell - Listing services with file version details

In the event that you want to inventory system services and look at version details, you can do this with a combination of WMI and get-childitem to read file version details. This can be additionally modified to try to reduce the list of services to only non-microsoft products. Company details are in the Get-ChildItem versioninfo results.

(gci  (gwmi win32_service|select -first 1 -prop *|
 select -expand pathname).replace('"','')).versioninfo|
 select *

Comments           :
CompanyName        : Adobe Systems Incorporated
FileBuildPart      : 4
FileDescription    : Adobe Acrobat Update Service
FileMajorPart      : 1
FileMinorPart      : 7
FileName           : C:\Program Files\Common Files\Adobe\ARM\1.0\armsvc.exe
FilePrivatePart    : 0
FileVersion        : 1, 7, 4, 0
InternalName       : armsvc.exe
IsDebug            : False
IsPatched          : False
IsPrivateBuild     : False
IsPreRelease       : False
IsSpecialBuild     : False
Language           : English (United States)
LegalCopyright     : Copyright © 2013 Adobe Systems Incorporated.  All rights reserved.
LegalTrademarks    :
OriginalFilename   : armsvc.exe
PrivateBuild       :
ProductBuildPart   : 4
ProductMajorPart   : 1
ProductMinorPart   : 7
ProductName        : Adobe Acrobat Update Service
ProductPrivatePart : 0
ProductVersion     : 1, 7, 4, 0
SpecialBuild       :

To collect the basic information for all services, you can run the following:

gwmi win32_service |
select name,caption,@{name="filepath";expression={$_.pathname.split("-")[0].split("/")[0].replace('"','')}} |
select name,caption,filepath,@{name="productversion";
 expression={(gci $_.filepath | select -expand versioninfo).productversion}}

Wednesday, June 18, 2014

Performing diagnostic checks on domain controllers without admin rights

Since domain controller access is pretty heavily restricted for security reasons, there are often cases in large organizations where the teams who receive alerts on domain controllers are not the team that manages them. The receiving team could be a helpdesk, server team, monitoring team, etc. With some basic tools that are part of the windows administrative tools and other microsoft provided support tools, you can perform a wide range of tests on the machine that will give you a pretty good picture of whether it is functional.

First of all, DCDIAG.exe is a good command line tool for checking any domain controller's status. Even though some of the tests will not work with a non-privileged account, you can still see some of the most important status results for the server. Non-privileged accounts can use the following to avoid the access denied failures (or they can just visually ignore those):

dcdiag /s:DCServer01 /skip:frsevent /skip:kccevent /skip:systemlog /skip:sysvolcheck /skip:netlogons /skip:replications /skip:services /skip:dfsrevent

If the advertising test passes, the domain controller is likely functioning pretty well (at least soon after a boot). Other than that, you can test LDAP responsiveness. Using portqry.exe will test the port (389 and/or 3268) and dump out the server capabilities that are advertised on connection. If you don't have portqry.exe, ldp.exe (gui tool) or powershell can be used to connect. You can check to see if the SYSVOL and netlogon shares are available with a simple: dir \\dcserver01\sysvol or dir \\dcserver01\netlogon command.
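As a rough sketch of the PowerShell option for the LDAP check (the server name is a placeholder), you can bind to the rootDSE, which works with any authenticated account and requires no admin rights:

```powershell
# Bind to the rootDSE of the suspect DC over LDAP
$rootdse = [ADSI]"LDAP://DCServer01/rootDSE"
# A responsive DC should return its DNS name and naming contexts
$rootdse.dnsHostName
$rootdse.namingContexts
```

If the bind hangs or errors, LDAP on that server isn't answering, which lines up with what portqry.exe or ldp.exe would show.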

For hardware, it depends on manufacturer. Personally my experience is with dell servers and open manage. Wherever I go, there seems to be this idea embedded in the heads of server support teams that they can't check hardware on a domain controller because they are not domain admins. OMSA allows anyone to log on with user level access and you can see the status of the components.

SCOM monitoring and other tools can provide details of active directory events and errors. Although some of these are based on single events and don't automatically close when everything is fine. They are an additional level of allowing status information to be available to lower level teams.

Thursday, May 29, 2014

Raising young scholars - Tawfique Chowdhury

Recently on the Revival Tours Umrah trip, we had a good Q&A night with Tawfique Chowdhury which was focused on one topic from one of the sisters regarding how to raise a child to be a scholar. He touched on a lot of points for general education, memorization methodologies and texts as well as environment. I felt the information could be very useful to parents out there, so I will try to present the information as best as I can.

First of all, the education environment is important. The institutionalized education systems we have now are very inefficient. He pointed out that 1 hour of one on one instruction with a good teacher is like 2 weeks worth of having that subject in a normal school. So homeschooling or similar non-standard educational methods can be very beneficial. There are limits though to how many students a single one to one teacher can handle, and this is around 3 students. Regardless of what educational system you place your child in, make sure they complete a full 12 year secondary school/high school education as this is critical for further studies.

Another key point in the environment is the role of the mother. If we look at scholars of the past, many top scholars were raised by single mothers who strove very hard to educate their children. In the example of Imam Bukhari, he wrote his first book at the age of 12, and its contents were all of the hadiths that his mother had narrated to him. After that he became blind for 3 years, all the while his mother was making du'a for him continuously. Never forget to make du'a for your child. Parents need to strive for their kids; they are your jihad. For daughters, in the past we have a rich history of female scholars of Islam, but today it has become more difficult. One thing that can be done for daughters when they are of marriageable age and want to continue studying the deen is to find a husband who will take them to places that are supportive of their education.

For memorization, one key point is to focus on starting at an early age in both language learning and memorization of texts. Children are very capable of absorbing information at an early age. Language specifically is easy to learn at younger ages, so trying to introduce arabic and as many languages as possible is a good idea. As for memorization of Quran and hadith, memorization is more important than understanding and tajweed. Understanding will come in later study with a scholar: when the hadith or verse is mentioned, the student remembers it from their earlier memorization and can add understanding to what they know. Tajweed expertise can also be gained at a later time. As for audio memorization, the reciter that was suggested was Muhammad Ayoob. The reasons for this are the simple style of recitation (not too melodic), and his recitation being the closest to Prophet Muhammad (pbuh). For a mushaf, stick to one copy of the book, not too large and not too small (it should be easy to carry and readable). DO NOT USE AN ELECTRONIC DEVICE FOR MEMORIZATION. Notes should be made in the mushaf to help distinguish between pages and aid in memorization. Knowing the meanings or understanding the language can also aid in memorization. Memorization of the quran can typically be done in an average of 1,600 hours. If we really want it, we can do it.

One specific methodology mentioned for memorization in a way that you will never forget it is one that is used in parts of the world with slates. This method covers 1/4 of a page at a time. On day 1 you recite the section 100 times while looking at the text. The next day you recite it 50 times while looking at the text. The third day you read 30 times, half with eyes open and half closed. The fourth day you recite 20 times to someone else without looking.

Kids work well with incentives, so you can come up with a program to reward them for their progress. Sheikh Tawfique mentioned he used money. A suggestion from another in the group was a contest between kids where the one that memorized the most got to select a special food to be cooked once per week, or a restaurant to go to. Also another important point that was brought up by Sheikh Fadi Kablawi in a later talk that we had, you can't just have a child memorizing. You need to instill Iman and a love of Allah in them so they know why they are studying and have the desire to do so.

For hadith, memorization of one or more per day is good. In the sunnah there are only around 10,000 hadith. The book Al-Lulu wal-Marjan (the pearls and corals) is the best starting point, as it is a collection of the hadiths that are in both Bukhari and Muslim. So we start with the most authentic of the collection. After this you continue to Ziyadat Bukhari and Ziyadat Muslim; these books contain the hadiths that are not in the other's collection, so you skip repeated hadiths while memorizing both collections. After this, the remaining 4 of the 6 sunan books each have their own ziyadat book, which contains the hadiths from that book that are not found in Bukhari and Muslim.

Another topic that was brought up was building and maintaining Iman in the family. He mentioned 4 specific books that he thought every house must have.

1) Al-Kaba'ir (the major sins) by Adh-Dhahabi. This may not be easy to find online, but I did manage to get a copy in Mecca. has several versions of it.
2) Al-Munzari's At-Targhib Wat Tarhib A book about good deeds that can be done. I found an english version of this at
3) Ibn Kathir's Seerah. I wasn't able to find a hard copy of this, but there are scanned versions available online
4) Riyadh as Saleeheen by imam Nawawi. This should be easy to find.

For the Quran recitation mentioned above, most Quran applications I have seen do not have this reciter, but you can find him online to download the mp3's. If you have linux/unix (or possibly a mac) you can do this to download all of them from a terminal window with BASH shell:

for i in {1..114}; do file=`printf "%03d\n" $i`.mp3; wget $file; done

For windows users you can use powershell (this is a default part of windows 7 and higher, otherwise you can download it). Open powershell, copy the script below, click the top left of the window, select edit, click paste, hit enter a few times and wait for the files.
$urlpref = ""
#the WebClient type is assumed here; fill in $urlpref with the download location
$w = new-object System.Net.WebClient
for ($i = 1; $i -le 114; $i++) {
 $filename = ([string]$i).padleft(3,'0') + ".mp3"
 $dlURL = $urlpref + $filename
 $w.DownloadFile($dlURL, "$pwd\$filename")
}

Windows Time Service event 46 - access denied

I recently worked on a case where a domain controller came online with its clock time several hours out of sync (virtualized DC).  In this case, when looking at the system log, during the service start up events, there was a critical error for the windows time service:

<Event xmlns="">
<System>
<Provider Name="Microsoft-Windows-Time-Service" Guid="{06EDCFEB-0FD0-4E53-ACCA-A6F8BBF81BCB}" />
<TimeCreated SystemTime="2014-02-21T07:25:24.140175500Z" />
<Correlation />
<Execution ProcessID="452" ThreadID="3648" />
<Security UserID="S-1-5-19" />
<Data Name="ErrorMessage">0x80070005: Access is denied.</Data>

Googling around came up with some details that this error can occur when the netlogon service is not started.  Going back to the log showed a Service Control Manager 7022 event: the netlogon service hung during startup.  After a few weeks back and forth with microsoft with netlogon tracing and memory dumps, it came down to the fact that there were a lot of subnets being processed.  The servers affected by the slow netlogon startup were all low spec virtualized domain controllers, so they weren't going to perform at their best anyways.  During service startup, all subnets must be read into memory, which can take a while.  There are also no registry tweaks or configuration changes to get around this...other than cleaning up subnets.  The one thing that we had thought of before the whole case was: if the time service needs netlogon running in order to function, why isn't it configured with a service dependency?  Even though the OS doesn't do this by default, some registry hacking will allow you to add netlogon to the DependOnService value on the w32time service key to ensure netlogon is started before the time service tries to start.  This can be pushed through GPO as well.  For a .REG file you can use this:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time]
"DependOnService"=hex(7):6e,00,65,00,74,00,6c,00,6f,00,67,00,6f,00,6e,00,00,00,00,00

Friday, May 16, 2014

New DC: DFSr trying to replicate with the wrong server

I recently had a problem with a newly promoted domain controller not finishing its initial synchronization of the SYSVOL partition via DFS-r.  Checking the event logs showed a series of connections, 5004 event, followed by 5014:

The DFS Replication service is stopping communication with partner FARFARAWAY for replication group Domain System Volume due to an error. The service will retry the connection periodically. 

DFSR Event ID 5014

Additional Information: 
Error: 1726 (The remote procedure call failed.) 
Connection ID: C526A9D5-6694-4A4D-AF89-EA943200461F 
Replication Group ID: 79993E0B-57C0-49CA-9BA0-FE0D62ABB93E

The server it was trying to connect to is not in the same site, nor the next closest site.  It was several sites away and poorly accessible due to bad network connectivity.  So the RPC failures were the obvious result.  Checking AD sites and services, the connector for the domain controller showed it connecting to a server in the next closest site, which is what was desired.  After a few reboots and playing around with sites and services, it still kept trying to connect to this far away domain controller.  After some digging around in the registry, I found the distant domain controller's name in:


under a key called "Src Root Domain Srv".  This is the initial domain controller that dcpromo tries to use to do the initial replication of the domain controller.  After manually editing this and restarting DFSr, it connected to the server that I wanted it to, and finished the synchronization.

For DFS-R, there is also another entry that you will find in HKLM\System\CurrentControlSet\Services\DFSR\Parameters\SysVols\Seeding Sysvols\(domain-name), called Parent Computer.  Updating this to a desired replication partner and restarting the dfs-r service will cause the machine to try to replicate with the specified machine.
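As a sketch (the domain name and partner server below are placeholders for your own), updating that value and bouncing the service looks like:

```powershell
# "Parent Computer" holds the partner DFS-R will try to seed SYSVOL from
$key = 'HKLM:\System\CurrentControlSet\Services\DFSR\Parameters\SysVols\Seeding SysVols\contoso.com'
Set-ItemProperty -Path $key -Name 'Parent Computer' -Value 'DC02.contoso.com'
Restart-Service DFSR
```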

This server had been built from an unattended file that was generated by script.  Checking that, I realized that it wasn't providing a value for ReplicationSourceDC.  So the dcpromo job was probably just grabbing a random RWDC from the list to use as a partner.  To fix it, I added a manual discovery of the nearest domain controller to provide a value to the file:

$srcdc = Get-ADDomainController -writable -ForceDiscover -discover|select -expand hostname

Wednesday, April 16, 2014

Managing test AD accounts in powershell

It is common that different application teams may require a block of test accounts to test different roles in an application.  So you may come across requests to create large numbers of accounts or modify them (or a subset of them).  Since there are many examples of account creation around the net, I don't want to repeat what is already done.  You can define what you want in your user and use the New-ADUser cmdlet to create them.  Often, users may be a standard name with a numeric identifier attached.  In this case you can do something like this:

for ($i = 0; $i -lt 300; $i++) {
 #for names of the same length
 $name = "testuser" + ([string]$i).padleft(3,'0')
 #or just by numeric
 #$name = "testuser" + $i
 new-aduser [enter options and use $name]
}

Depending on the desired name format, you can adjust as needed.  When requests come in to change a subset of these accounts, you need to find a way to easily search the correct ones.  You can do this by text matching, or if you were planning ahead, you could have put a numeric identifier in an unused attribute of the user object to help with searches.  Let's assume you want to text match the user names to make changes to TestUser051 through TestUser100.  You can pull the full list of test users and use the $matches special variable in powershell to work with the digits:

$users = get-aduser -LDAPFilter "(&(samaccountname=testuser*)(objectclass=user))" | where {$_.samaccountname -match "\d{3}$" -and ([int]$matches[0] -ge 51 -and [int]$matches[0] -le 100)}

Here we grab all users that match testuser* using get-aduser.  This pipes to the where-object commandlet which matches the last 3 digits.  These digits are stored in the $matches result.  So we pull that data, convert it to a number and ensure that it is in our range.  This leaves the $users variable full of the results we want, and we can later pipe this to foreach loops or other commands to make whatever changes we need.
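For example, once $users is populated, a bulk change might look like this (the description text is just an illustration):

```powershell
# Disable the whole selected block of test accounts in one pass
$users | Disable-ADAccount
# or stamp them so the allocation is visible in ADUC later
$users | foreach { Set-ADUser $_ -Description "Allocated to application team A" }
```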

Friday, April 4, 2014

Unattended installation of FIM CM client

I was going through the unattended install guide for FIM components at technet.  Since they put all of the components together without clearly separating the options, it is challenging to find the correct option for specifying servers in the FIM CM client's dialog box, which requests the list of FIM component servers that you connect to.  After playing with a few options, I found SITELOCK_DOMAIN is the correct choice.  You can install with this:

msiexec /i "CM Client.msi" /q ADDLOCAL=CMClient,ChangePin,AppletManagement,SelfServiceControl,ProfileUpdateControl

If you have more than one site, separate them by semi-colon and quote it:

msiexec /i "CM Client.msi" /q ADDLOCAL=CMClient,ChangePin,AppletManagement,SelfServiceControl,ProfileUpdateControl SITELOCK_DOMAIN=";;"

Monday, March 31, 2014

FIM CM Portal problems

Lately I have been working a lot with the FIM CM portal in support of end users trying to perform self service operations in a mixed environment of OS and IE versions.  Below are some problems seen, and some suggested workarounds that may help others with the same issues.  From experience, it looks like the portal has problems caused by ActiveX security settings, IE compatibility mode requirements, as well as FIM CM client architecture support issues.

Some ways to get around problems with FIM CM portal:

1) CM portal site is in trusted sites, yet user is getting repeat prompts for logon to the page.  The OS security logs on the portal server show success, yet IIS is not accepting it and ends up at access denied.

Solution to try:  Internet Explorer options -> Security tab ->  Check "Enable Protected Mode", and set security levels to Low.  Restart IE

2)  User is able to get into FIM CM portal, but whenever they click on an operation, nothing happens.  Problem with javascript in the links

Solutions to try:
a) Set compatibility mode for the site.  In newer versions of IE, you can find this in the tools menu
b) Internet Explorer options -> Security tab -> Set security level to Low.  Refresh the page

3) BaseCSP error on smart card operations

Solutions to try:
a)  For XP machines, ensure the BaseCSP hotfix is installed (KB909520)
b)  Ensure the FIM CM Client is installed (displays as "Forefront Identity Manager CM Client" in add/remove programs)
c)  Internet Explorer options -> Security tab -> Set security level to Low.  Refresh the page
d)  If the client machine is x64 bit OS
    1) Check the version of the installed FIM client (what "program files" folder is it under, x86 or the main one).  Try to run the IE version that matches the version of the FIM client
    2)  (IE11) Internet options -> Advanced tab -> Check "Enable 64-bit processes for enhanced protected mode" or "Enabled Enhanced Protected Mode" if you don't have the first option.  Do this if you have the 64 bit FIM CM Client and are running IE 64bit, yet it still fails.
    3)  If you have the 64 bit client installed, and both versions of IE fail, and step #2 isn't available, remove the 64 bit client and install the 32bit client
e)  Ensure ActiveX filtering is off.  This may show up in the address bar as an icon saying components are filtered.  Or you can look in the tools menu to see if it is checked (not all versions of IE have this)
f)  Repair or reinstall the FIM CM client

4) Slow/hanging operations.  Look at 64bit/32bit IE as mentioned in BaseCSP problems.  You may want to try the other version of IE.

Wednesday, March 12, 2014

Parsing DNS Debug logs (microsoft)

I have played around a few times with methods of parsing the ugly data lines that come with Microsoft DNS Server's DNS debug log. Due to the differences in types of queries, there is no fixed number of "columns" defined by spaces, and since the space is the delimiter, this causes issues in parsing. Besides that, the hostnames in the query values are mangled: each period is replaced with the parenthesized character count of the label that follows it. As logs can get quite large, trying to parse these with powershell can have mixed results. Sometimes it works ok; other times you watch the process grow to several GB of memory utilization while nothing happens. So, to find a better way, I thought I would dust off the old Unix Shells by Example book and use the gnuwin32 versions of grep, awk and sed to take care of this file. The raw information I care about is queries received by the server: the source IP, the type of record being searched, and the hostname being looked up. To get this I came up with the following to transform the log into csv output:

   grep.exe Rcv c:\temp\dns.log |grep " Q " | gawk -v OFS="," "{print $8,$14,$15}"| sed -n "s/([0-9]*)/./gp"|sed -n "s/\,\./,/gp"|sed -n "s/\.$//gp"

The $8,$14,$15 numbers represent text columns and you may need to adjust this based on output. Also, the number of columns may be inconsistent, as the data that shows up between the brackets is not always consistent in the log. You can use notepad++ to do a regex find/replace using \[.*\] to clear this out first. Once columns are aligned, this output can be dumped to the screen, but if you try to put a redirector on it to dump to text, it will do it, however it seems grep will give you an infinite loop of errors.  So to work around that, you can split this up into two commands.

First use grep:
   grep.exe Rcv c:\temp\dns.log |grep " Q " > temp.txt

   gawk -v OFS="," "{print $8,$14,$15}" temp.txt | sed -n "s/([0-9]*)/./gp" | sed -n "s/\,\./,/gp" | sed -n "s/\.$//gp" >output.csv

If you want to add the name of the dns server, you can put an extra sed command right before the output redirection
   sed -n "s/^/%computername%,/gp"
if you run it locally, otherwise put in text or some other defined variable there

Additionally you can play with the output, such as looking for source IP's
   awk -v FS="," "{print $1}" output.csv|sort |uniq -c
To get a list of unique client IP's and number of queries

Don't try to run this in powershell. Run in cmd or as a bat file, collect the csv and then you can import to powershell to play around with grouping or whatever you might want to do to see client behavior or records being queried.  If your file is large (I was testing with 200MB), you still won't want to try import-csv in powershell or your machine will grind to a halt.
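Once the csv is back in powershell, the grouping might look like this (the three column names are my own labels, matching the $8,$14,$15 fields printed by gawk above):

```powershell
# Read the csv produced by the gawk/sed pipeline and name the columns
$log = Import-Csv output.csv -Header SourceIP,RecordType,HostName
# Top clients by query count
$log | Group-Object SourceIP | Sort-Object Count -Descending | Select-Object -First 10 Count,Name
# Breakdown of record types being requested
$log | Group-Object RecordType | Sort-Object Count -Descending
```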

You can use powershell to try to convert your source IP addresses to hostnames with reverse dns. Copy the text, dump to a variable, split by new-line, run through a foreach loop with: [net.dns]::GetHostByAddress($_).hostname
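Following those steps, a sketch could look like this (the addresses in the here-string are examples only):

```powershell
# Paste your unique source IPs into the here-string below
$iplist = @"
8.8.8.8
1.1.1.1
"@
$iplist.split("`n") | foreach {
 $ip = $_.trim()
 if ($ip) {
  # GetHostByAddress throws if there is no PTR record, so catch and note it
  try { "$ip = " + [net.dns]::GetHostByAddress($ip).hostname }
  catch { "$ip = (no reverse record)" }
 }
}
```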

Additional reference and tools:
1) Gnuwin32 utilities, *nix tools for windows:
2) Parsing logs other DNS logs
3) Reasons why this can be important:

Monday, February 24, 2014

Finding large time changes (windows)

When you are looking at time sync problems on newer Microsoft OSes (2008+), there are several places that may show useful information. Looking in the system log, you can find various events from the source Time-Service, which tell you what server you are syncing with, whether the servers are unavailable, whether your domain controller is advertising time, and other issues. In addition to that, another source, Kernel-General, may have useful information. Event ID 1 from this source records occasional clock changes on the system, giving both the old DateTime and the new one. This shows you when large changes to the clock happen, so you can historically identify problematic servers. To collect and view this information in a more useful way, I came up with this example:

get-winevent -FilterHashtable @{logname="system"; providername="Microsoft-Windows-Kernel-General"; ID=1}|select -first 100 -Property TimeCreated,Properties,MachineName | foreach {
     $comp = $_.machinename
     $timeskew = new-timespan -start $_.Properties[0].value -end $_.Properties[1].value
     $timeskew = [int][math]::abs($timeskew.totalminutes)
     new-object PSObject -property @{
           MachineName = $comp
           EventDate = $_.TimeCreated
           TimeDiffMinutes = $timeskew
     }
}|where {$_.TimeDiffMinutes -gt 2}

Here we use Get-WinEvent with a filter hashtable to get the events we want. I'm just looking at a limited result here. Each event has two properties which contain the two DateTime values. I'm putting those into a timespan to pull the difference in minutes, removing any negative value, and printing out the machine name, the time skew in minutes, and when the change was made. You can add -ComputerName to the initial Get-WinEvent call to run this against a list of machines.
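As a sketch of the multi-machine variant, with a made-up server list ($servers is an assumption, substitute your own):

```powershell
# Hypothetical list of machines to query remotely for clock-change events
$servers = "DC01","DC02","FILE01"
foreach ($server in $servers) {
    get-winevent -ComputerName $server -MaxEvents 100 -FilterHashtable @{
        logname="system"; providername="Microsoft-Windows-Kernel-General"; ID=1
    }
}
```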

Thursday, February 20, 2014

Finding expiring smartcards (or other certificates) on the CA

Recently I was working on a method of discovering and creating alerts for expiring Smartcards.  While looking at some of the various methods to pull details from FIM certificate manager or the AD certificate services CA that issues the certs, I ended up going with certutil as the tool of choice for pulling the data.  The built-in filtering of the results helped confine the results to certain certificate types, and also to avoid anything that was revoked.  Putting this together with output processing in Powershell, it is pretty easy to pull together a list of certs and their expiration times.  Using grouping, you can avoid the problem of having multiple results per user (for those who have already renewed).  Since Smartcards use UPN as an identifier, I went with that attribute to help get the user details.  For other use cases that would need to be modified and the Get-ADUser functionality wouldn't be required.

#this line needs to be modified to target the CA and the OID
#of the cert type that you want to look at (if you are restricting it)
#Disposition of 20 means certs that are active
certutil -config "<CA server FQDN>\<CA NAME>" -view -out "user principal name,certificate expiration date" -restrict "certificatetemplate=<templateOID>,Disposition=20"  > .\certdump.txt

$results = @()
$recordbound = $false
Try { $data = get-content .\certdump.txt -erroraction stop } catch {
  #error handling
  throw "Unable to read .\certdump.txt"
}
For ($i=0; $i -lt $data.count; $i++) {
 #process the multiline record to get user UPN account name and certificate expirations
 if ($recordbound) {
  $result = new-object PSobject
  $UPN = $data[$i].substring($data[$i].indexof('"')+1).trim('"')
  $dateval = $data[$i+1].substring($data[$i+1].indexof(":")+1)
  $dateval = [datetime]$dateval
  add-member -input $result NoteProperty UPN $upn
  add-member -input $result NoteProperty Expiration $dateval
  $i = $i+2
  $recordbound = $false
  $results += $result
 }
 #start of a new multiline record
 if ($data[$i] -match "^Row ") {
  $recordbound = $true
 }
}

#User may have renewed before, so there can be more than one group by user UPN
$groupdata = $results|group UPN
$curdate = get-date

#Look at all certificates for a given user and select the one with the furthest expiration date (I.e. last one issued to this UPN)
$groupdata = $groupdata | Select Name,@{name="expiration";expression={($_.group|sort expiration |select -last 1).expiration}}

foreach ($entry in $groupdata) {
 $debugstr = "$($entry.name)  Expiration $($entry.expiration)"
 $timediff = new-timespan -start $entry.expiration -end $curdate
 $days = 0 - $timediff.days
 if ($days -lt 30) {
  $debugstr += ".  In expiration window."
  #expiration warning, need to find primary account email and send it, if they are still active
  $user = get-aduser -ldapfilter "(&(objectclass=user)(userprincipalname=$($entry.name)))" -properties enabled,manager
  if ($user.enabled) {
   $debugstr += "  Account is enabled."
   #do something with it
  } else {
   $debugstr += "  User is disabled, ignoring."
  }
 }
 write-debug $debugstr
}

Friday, February 14, 2014

New Array, not exactly empty

I have a habit of creating powershell scripts that initialize an empty array such as

 $a = @()

which I then can use later to add items to it with

$a += $something

Normally for what I do with it, that's fine, but recently I wanted to do a check at the end of the script to see if there were any results:

 if ($a.count -gt 0) {  }

expecting this to be false if nothing was added to the array.  But my script was failing unexpectedly as it seems there is a $null value put in the first position of the array.  So a better way to evaluate in my case is:

if ($a[0] -ne $null) {  }
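One way a $null can end up occupying the first position is when something added inside a loop came back empty; a quick demonstration of why the count check alone misleads:

```powershell
$a = @()
$a += $null          # e.g. a lookup in the loop returned nothing
$a.count             # 1 - the count check says we have results
$a[0] -ne $null      # False - the content check correctly says we don't
```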

AD Account Expiration, search results not giving expected values

I recently ran into a problem when trying to find accounts that were incorrectly set for an expiration date, especially searching for accounts that were set to never expire.  The attribute in ActiveDirectory is "accountExpires"; however, when dealing with AD powershell cmdlets such as Get-ADUser, it is filtered as AccountExpirationDate.  Typically someone may assume that if an account is set to never expire, it should not have a value for this attribute, as it is not mandatory, so a search for Null or Empty on that attribute should give you all of the results.  However, I found in the environment that I was working with that many accounts had the attribute set at some point, but the value was still one that shows in the GUI tools as "never expires".  In this case, the value is one second above the maximum calendar date (value in attribute: December 30, 9999 12:00:00 AM (GMT)).

   Friday, December 31, 9999 11:59:59 PM

You can construct this value by entering it adjusted for your current time zone, as in this example for US Central Time:

$forever = [datetime]"12/29/9999 6:00:00 PM"

Trying to build it from [datetime]::MaxValue with the AddSeconds(1) method will fail, since that goes beyond the maximum supported date.

Alternatively, the AD time format of the value is 9223372036854775807, so you can match on that raw value in an ldap filter.

So when you are trying to find accounts that never expire, you may want to filter in two ways:

1) Attribute is null
2) Attribute is equal to the maximum date value
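Putting both checks together, a sketch of a single Get-ADUser call that catches both cases (using the raw "never expires" sentinel value mentioned above):

```powershell
# Matches accounts where accountExpires was never set, or was set to the
# max-value sentinel that the GUI displays as "never expires".
Get-ADUser -LDAPFilter "(|(!(accountExpires=*))(accountExpires=9223372036854775807))" -Properties accountExpires
```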

Thursday, February 13, 2014

Creating your own eventlog source name

In the event that you want a script to be able to write to an eventlog, it is often useful to have your own unique Event Source Name to use.  These events can be searched for with various log gathering tools or monitored with SCOM.  If you just try writing an event with any random source name using Write-EventLog in powershell, you may see this error:

write-eventlog -logname system -source MyScript -eventID 1 -message "test" -entrytype Information
Write-EventLog : The source name "MyScript" does not exist on computer "localhost".
At line:1 char:15
+ write-eventlog <<<<  -logname system -source MyScript -eventID 1 -message "test" -entrytype Information
   + CategoryInfo          : InvalidOperation: (:) [Write-EventLog], InvalidOperationException
   + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteEventLogCommand

To get around this, you can easily register your own source name (this requires an elevated session):

[System.Diagnostics.EventLog]::CreateEventSource("MyScript", "System")

Then try the write-eventlog command again, and it will work fine.
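As a sketch, you can also make the registration safe to re-run by checking whether the source already exists first (CreateEventSource throws if the source is already registered or if you lack admin rights):

```powershell
# Register the source only once, then write a test event
if (-not [System.Diagnostics.EventLog]::SourceExists("MyScript")) {
    [System.Diagnostics.EventLog]::CreateEventSource("MyScript", "System")
}
write-eventlog -logname System -source MyScript -eventID 1 -message "test" -entrytype Information
```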