Monday, September 23, 2013

Adhoc LDAP information requests

Many times in the life of an Active Directory administrator, business people or application owners will come to you to pull a lot of information from the directory, or to add to information they already have. It's great in these instances to have some basic PowerShell skills, which can turn what looks like a lot of work into a few-minute task with some ad hoc scripting. In this case, I'll skip the Microsoft AD cmdlets and Quest AD cmdlets that many people commonly use, and instead give you a basic sample of an easy request and how one might go about working on it.


Here is an Excel file with 5000 rows containing Full Name, Logon name (helpfully included by the person making the request), email address, and a blank column to fill in (Current status: enabled/disabled).

First of all, to make things easy, we want to take the Excel file and convert it to CSV with a save-as. We want to keep column headers in the first row to show what the various attributes will be. And to help with further processing, you may want to take out the spaces and any special characters that PowerShell may not like as part of a property name. Once you have that all together, save your CSV file in the location where you want to run the script:
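For illustration (the names and column set here are made up, matching what this walkthrough assumes), the cleaned-up CSV might start like this:

```
FullName,samaccountname,mail,status
John Smith,jsmith,,
Jane Doe,jdoe,,
```

Note the status column is present but empty; Import-Csv will still create the property, which is what the script fills in.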

$entries = Import-Csv .\userlist.csv
$de = [System.DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain().GetDirectoryEntry()
$ds = New-Object DirectoryServices.DirectorySearcher($de)
$ds.PropertiesToLoad.Add("useraccountcontrol") | Out-Null

foreach ($entry in $entries) {
 $ds.filter = "(&(objectclass=user)(objectcategory=user)(|(samaccountname=$($entry.samaccountname))(mail=$($entry.mail))))"
 $DSsearchresult = $ds.FindOne()
 if ($DSsearchresult -eq $null) {
  $entry.status = "NotFound"
 } else {
  if ($[0] -band 2) {
   $entry.status = "Disabled"
  } else {
   $entry.status = "Active"
  }
 }
}

$entries | ConvertTo-Csv -NoTypeInformation | Out-File results.csv

To walk through this a bit, the first line imports the CSV file into an array of objects. Each object will have properties for all of the defined columns in the CSV file's first row...including any blanks. See why it's important to have column headers?

Secondly, we want to set up something to search the directory. Here I am just using .NET classes; the DirectoryServices.DirectorySearcher will do the main work. To keep the results small, we use the PropertiesToLoad collection to restrict the results to a single attribute. In this case we want to see if the account is enabled, so we load UserAccountControl.

Next, the all-important loop. Whenever we process large amounts of data we end up in a loop, even if it's hidden in a pipeline; the end result is basically the same, though an explicit loop may be a bit more controllable and readable. Inside, I'm defining the LDAP search filter. We want these to be as restrictive as possible to minimize results and speed up lookups. Here I'm doing an OR on samaccountname and email just in case the information given wasn't 100% perfect, which gives a better chance of finding each user. As we are referencing properties of $entry, we wrap each reference in $() so the expression inside is evaluated first. Then we do a search for a single result (samaccountname should be unique here, so we expect one result). If there is no result, we can put a status into that object stating it is not there. Otherwise we test the useraccountcontrol value against 0x2 (the disabled flag) to see if it's active or not. Notice the [0] here: when working with DirectoryServices results, the properties are collections, so treat each one like a single-value array in most cases (even if the attribute isn't a multivalued one).

Now that we are done with our loop, our whole initial import in the variable $entries will have its Status column populated. So we can just dump that back out into CSV format, bring it back into Excel, and save it in xlsx format to send back to the requestor.

As requests vary a lot, the complexity can grow, and further processing and error handling may be required. This is just a good starting point to see how to turn 5000 lookups into a one-minute script. In other cases, other cmdlets may be useful for you, or you could take the values and write out a large batch file full of dsmod/dsget-type commands. Whatever works best and seems the most efficient for you.
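As a sketch of that last idea (file names are made up, and the ds* switches are worth double-checking in your environment), the same CSV could be turned into a batch file of dsquery/dsget commands:

```powershell
# Hypothetical sketch: emit one dsquery/dsget pipeline per user into a batch file.
$entries = Import-Csv .\userlist.csv
$entries | ForEach-Object {
    "dsquery user -samid $($_.samaccountname) | dsget user -samid -disabled"
} | Out-File .\checkusers.bat -Encoding ASCII
```

Running the resulting .bat on a domain-joined machine would print the disabled state for each account, which you could then capture and merge back into the spreadsheet.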

Wednesday, September 4, 2013

IE compatibility modes


Let me start by saying I'm no expert in web applications and browsers; however, I have run into HTML compatibility issues on and off for many years.  Those who were developing sites back when the internet was just starting to be adopted in the home may remember having to write JavaScript to detect the browser version, to see which little nuances would work in Netscape and which would work in Internet Explorer.  Browser technology is now even more all over the place, with additional players in the browser field and a wide range of versions of each.  With IE specifically, you may see anything from IE7 to the latest...if not even older.  Later versions of IE (I believe starting with IE8) have various compatibility options in them.  Certain websites may fail to work correctly in one version of IE, while they work fine in another.  Newer versions of the browser have the F12 developer tools, which make it easy to test sites.  Whenever you have IE open, you can hit the F12 key and you will get a popup window with various tools for website debugging.  Along the menu bar, the last two items on the right side are Browser Mode and Document Mode.  In each, you can pick from a list of versions that you want the page displayed with.

IE F12 debugger tools

Typically your pages will load with whatever settings you had the last time you set them.  However, there are some places that can override these.

#1 Group policy.
In both User and Computer settings -> Administrative Templates -> Windows Components -> Internet Explorer -> Compatibility View, you will see a list of settings.

In this list we can see various options.  Some allow you to provide a list of domain names, and anything in the whole namespace you specify will be affected.  Looking through the details of each setting will help you decide what may be needed.  The last two options allow lists of sites to be defined.  These settings affect the Document Mode option in the compatibility functions.  Changing it to IE7 will break HTML5 and any newer technologies it does not support.

#2 Controlled by the site via META Tags
The X-UA-Compatible meta tag is something you can define in your application. In this article, you can view the allowed values. From my basic testing, using the Emulate values seems to force the browser to use that mode, while the others seem to be guidelines that may be ignored if higher-version functionality is implemented in the site. In my case, I threw a canvas tag into a basic website with !DOCTYPE html. When playing with the numbers, the page displays in IE9 mode, and with Emulate at lower versions my canvas disappeared. One interesting thing to note here is that the X-UA-Compatible tag seems to override whatever group policy you have in place. That way, if you have set one level for everyone but one outlying site needs HTML5 support, you can override it from the application itself.

example of tag:
<meta http-equiv="X-UA-Compatible" content="IE=8">

NOTE: This tag needs to be the first tag in the HEAD block

In any case, whenever working with website problems, it's always good to check different version levels to see if a site supports all the browser variations that may be in the environment, and make adjustments accordingly using whatever method is best.  Also, play around with the F12 tools, as they can be quite useful.  Similar tools exist in Chrome, and as extensions in Firefox.

Tuesday, September 3, 2013

FIM Certificate manager portal "Value does not fall within the expected range"

Recently I ran into an unusual issue with a FIM certificate manager portal installation occasionally throwing the "Value does not fall within the expected range" error when doing searches.  Some searches would work all of the time, while others would fail all of the time.

The error:

1) Exception Information
Exception Type: System.ArgumentException
Message: Value does not fall within the expected range.
ParamName: NULL
Data: System.Collections.ListDictionaryInternal
TargetSite: Int32 SecurityDescriptorToBinarySD(Microsoft.Clm.Security.Structs.VariantIDispatch, IntPtr ByRef, UInt32 ByRef, System.String, System.String, System.String, UInt32)
HelpLink: NULL
Source: Microsoft.Clm.Security.Authorization

StackTrace Information
   at Microsoft.Clm.Security.NativeMethods.SecurityDescriptorToBinarySD(VariantIDispatch vVarSecDes, IntPtr& ppSecurityDescriptor, UInt32& pdwSDLength, String pszServerName, String userName, String passWord, UInt32 dwFlags)
   at Microsoft.Clm.Security.Authorization.SecurityDescriptor.ConvertToByteArray(DirectoryEntry entry)

After digging through various components, checking AD, etc., a pattern seemed to emerge.  Whenever the search should have returned users in one specific OU, it would fail, while searches that returned only users in other OUs would work.  On checking the metadata for the OU, the ntSecurityDescriptor had changed right around the time that the FIM CM portal started throwing errors.  A large number of property-management ACEs had been added, which pushed the size of the ACL too high for the system to deal with.  According to Microsoft, the maximum size for a security descriptor is 64 KB.  My previous post shows how easy it can be to hit that limit when you get too fine-grained in your entries.  Removing the added ACEs resolved the issue.

What not to do with access control lists on Active Directory objects

Although Active Directory can give a very fine level of control over the properties of objects, it's best to do a bit of planning before making changes.  Some ACL entry changes can grant a lot of access while adding very little to an access control list, while other property-specific changes can make a huge size difference.

For those not familiar with access controls: basically, all objects in Active Directory have an attribute on them which specifies the access to the object.  This can be referred to as an Access Control List (ACL), security descriptor (SD, SDDL), or object security.  Within the ACL there are entries, which may be referred to as access rules or Access Control Entries (ACEs).  You will see different terminologies in the different tools and .NET classes that manipulate the information.  There are also different formats that the rules can be read in.  Typically everyone is familiar with the Security tab in Active Directory Users and Computers (available in advanced view).  In the advanced mode of this tab, you have a better view of the access control entries.  The ACEs themselves contain information such as:

ActiveDirectoryRights : ReadProperty, WriteProperty
InheritanceType       : None
ObjectType            : e45795b2-9455-11d1-aebd-0000f80367c1
InheritedObjectType   : 00000000-0000-0000-0000-000000000000
ObjectFlags           : ObjectAceTypePresent
AccessControlType     : Allow
IdentityReference     : S-1-5-10
IsInherited           : False
InheritanceFlags      : None
PropagationFlags      : None

You can find more about decoding these in my previous post, which provides a script for this.  Behind the scenes, though, you have uglier formats to deal with, such as the raw binary form (a long string of hex bytes) and the SDDL text string.

If you look at TechNet on security descriptors, the maximum size of a security descriptor is 64 KB, or roughly 1820 ACEs.  That's quite a few, but it's not too hard to shoot yourself in the foot with this.  For example, say you want to give someone access to almost every property of an object, but then you decide that there are one or two specific properties you don't want them to have.  So you may start with giving "Read all properties" and "Write all properties" rights to the account.  Then you go back into advanced view and uncheck a few properties.  This removes the previous few entries for read/write all, and expands them into hundreds of ACEs, one for each specific property.  We can see here how this affects the size.  I created a DirectoryServices.DirectoryEntry object pointing to a computer object:

PS> $de.psbase.objectsecurity.getsecuritydescriptorbinaryform()|measure-object
Count    : 11112

Here we see how many bytes are in the ACL.  Now if I go and do what I just described to the ACL:

PS> $de.psbase.objectsecurity.getsecuritydescriptorbinaryform()|measure-object
Count    : 45692

The size has quickly exploded to a value that is edging towards the maximum.  When we hit the max size, we may end up with various failures in different places, perhaps with some very vague errors as to what the real problem is.  The functions that manage the ACL and do conversions may be limited to a length of 64 KB, causing exceptions to be thrown when they process it.

If you really need to do something like this, what you should do instead is grant the broad level of access and then create a few separate deny entries for the few properties that they shouldn't have access to.
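As a rough sketch of that approach using dsacls.exe (the OU path, group name, and property are all made up, and the dsacls switches are worth verifying before use):

```powershell
# Broad allow: generic read/write on user objects under the OU -- only a few ACEs...
dsacls "OU=Staff,DC=example,DC=com" /I:S /G "EXAMPLE\HelpDesk:GR;;user" "EXAMPLE\HelpDesk:GW;;user"
# ...then one targeted deny on the single property they shouldn't touch.
dsacls "OU=Staff,DC=example,DC=com" /I:S /D "EXAMPLE\HelpDesk:WP;employeeID;user"
```

The end result is the same effective access as hundreds of per-property allow ACEs, but the security descriptor only grows by a handful of entries.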

Wednesday, August 21, 2013

Wireless - No networks available (a windows services story)

This morning, I was turning on the company laptop to get ready to start my working day.  Sadly, my wifi connection icon was not detecting networks at all (not even trying).  Digging around in the services, I found the WLAN AutoConfig service not running due to an Extensible Authentication Protocol dependency failure.  EAP wouldn't start either, due to a CNG Key Isolation dependency failure.  CNG wouldn't start due to:

Service Control Manager Event 7000
The CNG Key Isolation service failed to start due to the following error:
The service did not respond to the start or control request in a timely fashion.

After this it occurred to me that I may have broken my system while connected to a LAN cable the day before.  I had been trying to troubleshoot smartcard -> mstsc.exe client device redirection interactions, and part of this effort had me isolating Windows services into separate processes.  So with this, two things learned:

1) CNG Key Isolation service needs to be in a shared process and not its own process
2) EAP needs to be in its own process, and won't allow you to configure it as shared.

For those that may have no idea what I'm talking about with shared/own processes in relation to services, you can see it in Task Manager, or with tasklist /svc.  In Task Manager you see processes called svchost.exe; when you right-click one and pick "Go to services", it flips you to the Services tab and highlights some service names.  These are the services that are attached to that process.  Windows will often stuff several services into a single process ID.  In some cases this can be a problem, where one failing service can take multiple services into a stopped state if the process crashes.  But in my case, I was trying to isolate activity to individual services using Sysinternals' ProcMon tool.  When you isolate the services to their "own" processes, you can see them as individual PIDs there, and also in Netmon or other tools that do process monitoring/debugging.  If you ever find yourself wanting to play around with this, you can use:

sc.exe config [servicename] type= own
sc.exe config [servicename] type= shared

Servicename is the actual service name, not the friendly display name that most people know the service by.  I use the whole sc.exe (with extension) in case you run it from PowerShell [sc is an alias for Set-Content, and aliases have priority over exe files in the %path%].  Also, mind the space between the equals sign and the process type.  There are a few other service types which may have some use in other ways, but don't ask me what they do.
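A hedged example of going from the display name to the real service name, and then confirming the change (KeyIso is the CNG Key Isolation service name on my systems, but verify on yours):

```powershell
# Map the friendly display name to the real service name...
Get-Service | Where-Object { $_.DisplayName -like '*CNG*' } | Select-Object Name, DisplayName
# ...then check the current configuration; the TYPE line shows own vs. shared process.
sc.exe qc KeyIso
```

Run from an elevated prompt, since sc.exe config changes (and some queries) need admin rights.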

Tuesday, August 13, 2013

Kerberos SPN configuration errors for dummies

In a previous article, I wrote about the problem of duplicate Kerberos SPNs (Service Principal Names) and how to identify them.  Since then, I've noticed a recurring theme where application and database people typically don't understand authentication configurations at all.  As a result, accounts get swapped out, configuration changes are made without any thought as to what will work, and so on.  In the end, the whole application environment may have downgraded itself to NTLM or just stopped working altogether.  So I thought I would take another shot at trying to simplify Kerberos interactions for the typical application web server talking to a database server.

First of all, let's understand what Kerberos is doing for us.  Authentication is how we identify ourselves.  In the example web server -> SQL Server, it could be:

1) a service account on the webserver that is logging into the SQL server
2) The end user (at the browser) authenticating to the webserver and the webserver is set to log into the SQL server on the user's behalf (delegation)

Authentication uses protocols to ensure that the various applications and servers are all speaking the same language.  Typically this is NTLM, NTLMv2, or Kerberos v5.  Here we will focus on kerberos.

The way Kerberos works is, you have a "service" that you want to access.  This "service" has a type and a host machine that it runs on.  Example:

1) A web service on a machine.  In Kerberos SPN format:  HTTP/<host FQDN>
2) A MSSQL service on a machine.  In Kerberos SPN format:  MSSqlSvc/<host FQDN>

There are other variations that include port numbers and domain names, but to keep things simple we will stick to standard ports and Windows services here.
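To see which SPNs are registered where, setspn.exe is the usual tool.  A quick sketch (the account and hostname are placeholders):

```powershell
# List the SPNs held by a service account...
setspn -L EXAMPLE\svc-webapp
# ...and register a new one; -S (available on 2008 and later) checks for duplicates before adding.
setspn -S HTTP/ EXAMPLE\svc-webapp
```

Preferring -S over the older -A switch avoids creating the duplicate-SPN problems described below.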

So what is the SPN used for?  Let's look at it in less technical terms first:

John wants to call Amy on the phone.  Amy wants to ensure that the people who call her are really who they say they are.  To enable John to meet Amy's requirements, he calls Amy through a phone Operator.  The phone operator has a list of names (account), phone numbers (service) and passwords (secrets/keys) for everyone that calls through their system, including John and Amy.  John tells the operator his password and the number he is calling, the operator looks up the phone number and the operator gives him a temporary code to use for his conversation.  John gets through to Amy on the phone and tells her the code.  Amy uses a special program that takes her password and decrypts the temporary code that John got from the operator.  If she can decrypt the code, she knows that she is talking to John.

And now for the technical terms.  When a client connects to the service, it is told that it needs to authenticate.  The client connects to a KDC (Kerberos Key Distribution Center) to request a ticket.  In the Windows world, the KDC is a domain controller (Active Directory).  During a user's logon (or when an application starts running under a service account), the user logs into the KDC to get a Ticket Granting Ticket (TGT).  When it wants to connect to a service, the user sends a request to the KDC for a service ticket.  The KDC looks through its database to see which account holds the SPN for the service that the user wants to connect to.  If it can find one, it issues a ticket that is encrypted to both the requestor and the account holding the SPN.  The user then takes this ticket, sends it to the application they are connecting to, and the application reviews the ticket to grant or deny access.  (See the previous article for the step by step.)

The problem can come in at this point in several ways.  If the SPN was set up on the wrong account, then the ticket is encrypted to the wrong party.

Back to the non-technical example:

When John calls the operator, let us assume there was some bad information in the operator's list of names and passwords, and the operator provides a temporary code that works for Susan instead.  When John gives this code to Amy, Amy cannot decrypt the code and has to reject the phone call.

In another form of this problem, if more than one person has the same phone number (a duplicate SPN in Kerberos), the operator may look up the wrong name.

To solve these problems, it is important to know:

  1. What accounts (users or computer objects) are in use
    1. What services they run
    2. What servers they are configured on
    3. Whether they run services on non-standard ports
  2. Is there delegation from one service to connect to another service (double hop)?
  3. How does authentication flow from end to end?  (Have a diagram or documentation, as many of the support people you end up working with will not know anything about your application.)

From here you can search for the SPNs that would be in use and look for duplicates.  While searching for duplicates, you can find where the SPNs are assigned.  If the SPNs are assigned to the wrong accounts, then obviously it won't work.  Make sure you get things in the right places, and try to avoid changing things once it is set up and working.  Document, document, document, and update the documentation.  Avoid running multiple services and application environments on the same account.

Symptoms of duplicate SPNs
1) Log events on domain controllers pointing out the duplicate SPN
2) SCOM alerts from the AD management pack for the duplicate SPN events in #1
3) Application falling back to NTLM authentication when it was configured for Kerberos
4) Application working some of the time, and giving access denied at other times

Symptoms of incorrectly assigned SPN
1) Authentication fails all the time.

Symptoms of missing SPN
1) Authentication fails completely
2) Authentication is using NTLM

Tools to use:
1) Queryspn.vbs script from Microsoft
2) Setspn
3) Event Viewer on multiple machines
4) klist (to view Kerberos tickets, or the lack of one, after connecting to an application)
5) Fiddler or some other similar web debugging tool that can show authentication details in the packets, to identify the protocol type
6) Netmon, to view Kerberos KDC interactions and find any errors (SPN not found, encryption type not supported, etc.)
7) Increased debug logging in the Microsoft OS; Kerberos debug logging can be turned on for all machines involved
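For the setspn item, two switches cover the most common checks (the SPN below is a made-up example):

```powershell
# Forest-wide scan for duplicate SPNs (available on Windows Server 2008 and later)
setspn -X
# Find which account holds a specific SPN
setspn -Q MSSqlSvc/
```

If -Q shows the SPN on an account other than the one the service actually runs as, you've found the misassignment case described above.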

Friday, July 26, 2013

W32Time event 47 manually configured peer

Recently I was dealing with some SCOM events for time services on a few machines in the same domain.  When checking the machines, I came across this error:

Time Provider NtpClient: No valid response has been received from  manually configured peer after 8 attempts to contact it. This peer will be discarded as a time source and NtpClient will attempt to discover a new peer  with this DNS name.

On seeing this, I thought this domain may have been configured with manual peers and NTP as the client's provider.  When looking at the registry, though, all I was seeing was the typical NTP server setting, and the source was NT5DS.  So I was stuck for a while thinking: the source should be the domain, and this IP address that I'm seeing is not a domain controller, never was a domain controller, and isn't even pinging.  So I tried manual peer configuration with NTP as the provider on a server, but I hit the same issue with the same error.  Searching the registry for both the host name and the IP came up with nothing.  Searching gpresult output for the IP/hostname came up with nothing.  Eventually, I dug a bit further into the "gpresult /scope COMPUTER /Z" output and found an NTP server was set in there.  So apparently this type of GPO setting does not push itself to the registry, and just quietly overrides whatever is in the registry.  The reason I couldn't find the IP/hostname in the gpresult output the first time was that it comes out in gpresult as an array of byte values.

So anyways: GPO edited, gpupdate /force, w32tm /resync...and it's all back to normal.
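One way to shortcut the registry-vs-GPO confusion, at least on Vista/2008 and later, is to ask w32tm for the effective configuration, since it labels each value with where it came from:

```powershell
w32tm /query /configuration   # each setting is tagged (Policy) or (Local)
w32tm /query /source          # the peer actually being used right now
w32tm /query /status          # stratum, last successful sync time, etc.
```

Had I run /query /configuration first, the policy-supplied NTP server would have shown up immediately instead of hiding in the gpresult byte array.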

Wednesday, July 17, 2013

ADFS SCOM: Configuration Database unavailable

I'm currently in the process of helping set up the ADFS management pack for a relatively new ADFS 2.0 installation.  All of our servers have continuously been alerting:

Alert: SQL Configuration Database Unavailable
Alert Description: The AD FS configuration database that is stored in SQL Server 'false' is unavailable.

When digging through the SCOM agent's Health Service State directory, I found that all of the scripts for the MP seem to be PowerShell.  The one that does this check is FederationServerRemoteSQLServerPing.ps1.  Basically, the script pulls out the SQL connection string for the configuration database and checks a few things about its format (like whether the server is local to the machine, or has a backslash in it).  After this, it takes the server value and tries a .NET ping on the host.  The problem with this script is that its authors forgot a few things about SQL connections: if your SQL server has a port specified in the servername,portnumber format, the server name is not cleaned up by the script, and the ping attempt blows up due to the badly formatted name.

As for a fix (I'm not a SCOM guy), if the script can be edited, a simple check before the ping...

If ($script:server -match ",") {
     $script:server = $script:server.Substring(0, $script:server.IndexOf(","))
}
will solve the problem.  Otherwise, you may just need to disable the rule until a fix comes from Microsoft.

Monday, July 15, 2013

Facebook...I thought you would know me better by now

For those who view Facebook from a standard browser, I'm sure you are familiar with the right side column showing a lot of "recommendations" and sponsored sites. Sometimes these are good, but usually they just seem to be junk. In my case, I don't feel like seeing them anymore, so I wanted to play around with the site to make them go away. When you want to override websites on a permanent basis (rather than using the built-in browser developer tools to edit/delete content), you can use Greasemonkey (Firefox) or Tampermonkey (Chrome). If you are using IE...first of all, I'm sorry to hear that, but there is probably a Greasemonkey equivalent for it too. Anyways, find the appropriate add-in for your browser and install it. There should be some management console or icon for it somewhere in your browser. For Chrome, it comes up as an icon in the top-right which looks like a black square with two grey circles at the bottom. Click there, then add a new script. You can use the script below, and it should block most of these sponsored ads throughout the standard Facebook pages.

// ==UserScript==
// @name         Facebook cleaner
// @version      0.1
// @description  Remove sidebar recommendations
// @match        http*://**
// ==/UserScript==

// Hide a single element by making it invisible (it still occupies layout space).
function hideIt(myObjToHide) { = 'hidden';

// Find and hide the known sidebar/sponsored containers, by id and by class name.
function cleanup() {
    var junkContent = document.getElementById('pagelet_ego_pane_w');
    if (junkContent != null) { hideIt(junkContent); }

    var junkContent2 = document.getElementById('rightCol');
    if (junkContent2 != null) { hideIt(junkContent2); }

    var junkContent3 = document.getElementById('pagelet_ego_pane');
    if (junkContent3 != null) { hideIt(junkContent3); }

    var sponsorPopup = document.getElementsByClassName('ego_section');
    for (var i = 0; i < sponsorPopup.length; i++) {

    var sponsorPopup2 = document.getElementsByClassName('ego_column');
    for (var i = 0; i < sponsorPopup2.length; i++) {

// Re-run periodically, since Facebook loads content dynamically.
setInterval(cleanup, 800);

//end of Script

Enjoy the cleaner experience in your social networking. Do note that this works as of 7/15/2013. Facebook may change their site in the future and rename tag IDs or class names, which will cause this to break.

Wednesday, May 8, 2013

Get-wmiobject "User credentials cannot be used for local connections"

In Advanced Event 2 of the 2013 Scripting Games, I was going through scripts and thinking it was very strange that people were creating separate blocks of code for Get-WmiObject calls that involved the local machine. This is somewhat disturbing when you are writing code that hits a long list of machines that may include your own local machine: having to add constant checks to see if you are on the local machine seems like a bit of a pain. I can't think of any great workaround for this that isn't going to cause additional problems, but you could just let the call fail and pick it up in a catch block. One other option, which creates separate PowerShell processes, would be:

start-job -credential $cred -scriptblock {get-wmiobject -class yourclass -computer $args[0]} -argumentlist $computer

Then go back and Receive-Job on the output to process it.
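The full round trip might look like this sketch (the class, credential, and computer name are placeholders):

```powershell
$cred = Get-Credential
$computer = "ServerA"   # hypothetical target; could also be the local machine
$job = Start-Job -Credential $cred -ScriptBlock {
    Get-WmiObject -Class Win32_OperatingSystem -ComputerName $args[0]
} -ArgumentList $computer
Wait-Job $job | Out-Null
Receive-Job $job | Select-Object CSName, LastBootUpTime
Remove-Job $job
```

Because the child process runs as $cred rather than passing -Credential to Get-WmiObject, the "local connections" restriction never comes into play.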

Monday, April 8, 2013

Most efficient remote event log query

Recently I wanted to go through and modernize an old method I had come up with for checking last patch install dates on a remote Windows machine. In the past, I had done some OS identification to determine if the system was <=2003 or >=Vista, then performed an appropriate WMI query against the System log to find events of the correct event ID and source. This method is fine, but I started thinking about the other methodologies that are available in PowerShell, so I wanted to play around with Get-WinEvent to see how they all compared. Basically, I broke event log filtering down into 3 possibilities: 1) XPath filter, 2) filter hashtable, 3) pipe to |where. The results were interesting between methods #1 and #2; #3 was as expected. I know there are a lot of beginner examples of #3 for people who just love pipelining, but when dealing with event logs, this is not good: Get-WinEvent with no parameters will dump all events from all event logs, which is horrible even running locally, and therefore a no-no on a remote machine.

Let's look at the baseline old WMI-style query that will get our desired results:

Get-WmiObject -query "Select timewritten from win32_ntlogevent where logfile='system' and sourcename='Microsoft-Windows-WindowsUpdateClient' and eventcode=19" -computername ServerA|select -first 1 -Property timewritten
TotalMinutes      : 0.460832901666667
TotalSeconds      : 27.6499741
TotalMilliseconds : 27649.9741
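For reference, the TotalMinutes/TotalSeconds/TotalMilliseconds lines throughout this post are TimeSpan output, presumably captured by wrapping each query in Measure-Command along these lines:

```powershell
Measure-Command {
    Get-WmiObject -Query "Select timewritten from win32_ntlogevent where logfile='system' and sourcename='Microsoft-Windows-WindowsUpdateClient' and eventcode=19" -ComputerName ServerA |
        Select-Object -First 1 -Property timewritten
} | Select-Object TotalMinutes, TotalSeconds, TotalMilliseconds
```

Measure-Command swallows the pipeline output and returns only the elapsed TimeSpan, which is why the patch dates themselves don't appear in the timing samples.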

Here we have 3 criteria in the where clause, which helps with the run time. When doing WMI filtering for event logs on a remote machine, you can get quite fast results with a really tight query (especially if you can restrict the timeframe you are looking at). Our result was about 27 seconds on a machine with a 250ms response time. The one problem with this method is that it will pull all of the records for this event type, even though we are just looking for the most recent.
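As a sketch of the timeframe restriction mentioned above (the 90-day window is arbitrary), note that WQL wants the cutoff in DMTF datetime format:

```powershell
# Convert a .NET date to the DMTF format WQL expects, then add it to the where clause.
$since = [System.Management.ManagementDateTimeConverter]::ToDmtfDateTime((Get-Date).AddDays(-90))
$query = "Select timewritten from win32_ntlogevent where logfile='system' " +
         "and sourcename='Microsoft-Windows-WindowsUpdateClient' and eventcode=19 " +
         "and timewritten>='$since'"
Get-WmiObject -Query $query -ComputerName ServerA | Select-Object -First 1 -Property timewritten
```

The tighter the where clause, the less data WMI has to enumerate and ship back over DCOM.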

Next, we have our first two methods of filtering:

get-winevent -FilterHashtable @{logname="system"; providername="Microsoft-Windows-WindowsUpdateClient"; id=19} -computername ServerA -maxevents 1 |select timewritten
TotalMinutes      : 0.05561672
TotalSeconds      : 3.3370032
TotalMilliseconds : 3337.0032

get-winevent -filterxpath "*[System[Provider[@Name='Microsoft-Windows-WindowsUpdateClient']][EventID=19]]" -maxevents 1 -computer ServerA |select timewritten
TotalMinutes      : 1.56868183833333
TotalSeconds      : 94.1209103
TotalMilliseconds : 94120.9103

We can see here a clear disadvantage of XPath queries in comparison to filter hashtables. Based on this result, I would suspect FilterXPath is doing more of the work on the local side than the remote side. So let's compare the two running "locally" via Invoke-Command:

invoke-command -computername ServerA -ScriptBlock { get-winevent -filterxpath "*[System[Provider[@Name='Microsoft-Windows-WindowsUpdateClient']][EventID=19]]" -maxevents 1}
TotalMinutes      : 0.143230653333333
TotalSeconds      : 8.5938392
TotalMilliseconds : 8593.8392
invoke-command -computer ServerA -scriptblock {get-winevent -FilterHashtable @{logname="system"; providername="Microsoft-Windows-WindowsUpdateClient"; id=19} -maxevents 1}
TotalMinutes      : 0.0724750433333333
TotalSeconds      : 4.3485026
TotalMilliseconds : 4348.5026

Invoke-command narrows the gap, though we still see a big difference between the two. Also, very interestingly, the FilterHashTable query takes more time through invoke-command than it does when run directly against the remote machine.

So, on to method #3: filtering on the client side of the pipeline. I'm not even going to run that against a remote machine, so I'll just show what happens locally in two ways: first when we remember to use the -logname parameter to restrict our results before the pipeline, and then without any filtering before the pipe.

get-winevent -LogName system |where {$_.providername -eq "Microsoft-Windows-WindowsUpdateClient"}|select -first 1
TotalMinutes      : 0.4716362
TotalSeconds      : 28.298172
TotalMilliseconds : 28298.172

Here we can see it takes a pretty long time on just the local machine. How about if we run it with no filtering to start with?

get-winevent |where {$_.providername -eq "Microsoft-Windows-WindowsUpdateClient"}|select -first 1
TotalMinutes      : 23.5320746883333
TotalSeconds      : 1411.9244813
TotalMilliseconds : 1411924.4813

Really really bad idea :)

Thursday, March 21, 2013


Get-ipconfig: reading remote network configuration

This script is a really old PS v1 script that I put together for reading network configurations remotely. It uses a mix of WMI and remote registry reads (to get the primary DNS suffix). It gives roughly the same information that ipconfig /all will give:


PS C:\> get-ipconfig localhost

GUID             : {5F09EE0E-A3AD-4C19-BEE1-84E8DFF27462}
HostName         : MYWORKSTATION
NICName          : [00000011] Intel(R) Centrino(R) Ultimate-N 6300 AGN
MacAddress       : 24:77:03:DC:97:38
DHCPEnabled      : True
DHCPServer       :
IPAddress        : {, fe80::31b6:ee7:18b1:2820}
SubnetMask       : {, 64}
Gateway          : {}
DNSservers       : {}
DnsDomain        :
PrimaryDnsSuffix :
DNSSearchList    : {,}
WinsPrimary      :
WinsSecondary    :

$server = $args[0]

if ([string]::IsNullOrEmpty($server)) {
 Write-Host -ForegroundColor "yellow" "Usage:  Get-ipconfig <computername>"
 Write-Host -ForegroundColor "yellow" "   Provide the name of the remote computer to get most of the network"
 Write-Host -ForegroundColor "yellow" "   setting information provided by ipconfig /all.  This uses a wmi"
 Write-Host -ForegroundColor "yellow" "   lookup on Win32_NetworkAdapterConfiguration.  For more precise"
 Write-Host -ForegroundColor "yellow" "   details, you can run a query against that."
 Write-Host -ForegroundColor "yellow" "   The script returns a PSObject, so you can use select-object,"
 Write-Host -ForegroundColor "yellow" "   and format commands to adjust the results as needed."
 return
}

#WMI query the network adapter configuration settings for all NIC cards that are using TCP/IP
$querystr = "Select SettingID,caption,dnshostname,ipaddress,ipsubnet,dhcpenabled,DHCPServer,DnsDomain,"
$querystr += "Macaddress,Dnsserversearchorder,dnsdomainsuffixsearchorder,winsprimaryserver,winssecondaryserver,"
$querystr += "defaultipgateway From Win32_NetworkAdapterConfiguration Where IPEnabled = True"

$nicsettings = gwmi -query $querystr  -ComputerName $server

if ($nicsettings -eq $null) {
 Write-Host -ForegroundColor "red"  "WMI lookup failed"
 return $null
}
#Get primary dns suffix (from registry)#
$key = "Software\Policies\Microsoft\System\DNSClient"
$type = [Microsoft.Win32.RegistryHive]::LocalMachine
$regkey = [Microsoft.win32.registrykey]::OpenRemoteBaseKey($type,$server)
$regkey = $regkey.opensubkey($key)
$primarysuffix = $regkey.getvalue("PrimaryDnsSuffix")

#Build PSobject of results in an array to return
$results = New-Object collections.arraylist
foreach ($entry in $nicsettings) {
 $myNic = New-Object PSobject
 Add-Member -InputObject $myNic NoteProperty GUID $entry.SettingID
 Add-Member -InputObject $myNic NoteProperty HostName $entry.dnshostname
 Add-Member -InputObject $myNic Noteproperty NICName $entry.caption
 Add-Member -InputObject $myNic NoteProperty MacAddress $entry.MacAddress
 Add-Member -InputObject $myNic NoteProperty DHCPEnabled $entry.dhcpenabled
 if ($entry.dhcpenabled) {
  Add-Member -InputObject $myNic NoteProperty DHCPServer $entry.Dhcpserver
 }
 Add-Member -InputObject $myNic NoteProperty IPAddress $entry.ipaddress
 Add-Member -InputObject $myNic NoteProperty SubnetMask $entry.ipsubnet
 Add-Member -InputObject $myNic Noteproperty Gateway $entry.defaultipgateway
 Add-Member -InputObject $myNic Noteproperty DNSservers $entry.dnsserversearchorder
 Add-Member -InputObject $myNic Noteproperty DnsDomain $entry.dnsdomain
 Add-Member -InputObject $myNic Noteproperty PrimaryDnsSuffix $primarysuffix
 Add-Member -InputObject $myNic Noteproperty DNSSearchList $entry.dnsdomainsuffixsearchorder
 Add-Member -InputObject $myNic Noteproperty WinsPrimary $entry.winsprimaryserver
 Add-Member -InputObject $myNic Noteproperty WinsSecondary $entry.winssecondaryserver
 $results.add($myNic) > $null
}

return $results

Undefined subnets in Active Directory

It seems like in many environments there is always a disconnect between the people deploying networks and the people ensuring the new subnets are defined in the proper Active Directory sites. Perhaps there are some IPAM solutions or other neat tricks for finding these, but we can do the basics with powershell as well. Most Active Directory people will be familiar with the netlogon.log, and may know that connections from clients that do not map into an AD site are logged there. You will also see messages in the System event log (Netlogon source, EventID 5807) which state that a number of siteless clients connected and that you can look at the netlogon.log for further details. Depending on the level of netlogon debugging that is going on, you may have a lot of noise to deal with in there. But with powershell and some filtering and manipulation, we can strip it all down to source IPs very quickly:

$uniqueIP = get-content c:\windows\debug\netlogon.log | 
? { $_ -cmatch "NO_CLIENT_SITE" } | 
% {$_ -match "\d+\.\d+\.\d+\.\d+"|out-null; $matches[0]} | 
Group-Object | Select Count,Name| Sort Name 

Here we have in the pipeline:
1) Get-content to read the file
2) ? = where-object. -Cmatch gives case-sensitive matching of NO_CLIENT_SITE.
3) % = foreach-object. For each matching line, we look for an IPv4 matching pattern, ignore the true/false result, and emit the matched text from $matches
4) Then we group all of these IP addresses, as we will have duplicates
5) Reduce the results to just a count of occurrences and the source IP
6) Sort by the Name attribute. In this case Name = IP address, so we see our IPs in order, which helps us spot addresses that might all be in one subnet

If we don't want to get too fancy at this point, we can just look through our list visually and identify possible subnets based on how large our typical subnet blocks are allocated in our environment. We can look at the IP settings of a client remotely via WMI with my get-ipconfig script or another method. Since we may not know the actual location where the new network was deployed, sometimes we can get this from router details. If your organization has telnet open on routers, and puts location details in the banner, this is one useful way of checking; you can use my telnet script to read these banners. Otherwise, machine naming conventions or other site-specific build details can give away the site location [(nbtstat -a <computername>) or the powershell version of this netbios command]. Additionally, if you run this type of check frequently, you may end up re-checking IPs that you already defined subnets for. To get around this you can do some extra filtering in the initial powershell command that reads the log: select with the -last option can reduce your results to the last few lines, or some date matching could be done. You can also check which site an existing subnet belongs to using my powershell script for this.
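As a sketch of that extra filtering (the log path is the standard one, but the 5000-line window and the date format are assumptions to adjust for your environment):

$today = (Get-Date).ToString("MM/dd")   # netlogon.log lines begin with MM/dd HH:mm:ss
$uniqueIP = Get-Content c:\windows\debug\netlogon.log |
    Select-Object -Last 5000 |
    Where-Object { $_ -cmatch "NO_CLIENT_SITE" -and $_.StartsWith($today) } |
    ForEach-Object { $_ -match "\d+\.\d+\.\d+\.\d+" | Out-Null; $matches[0] } |
    Group-Object | Select-Object Count,Name | Sort-Object Name

This keeps repeated runs from reporting IPs you already handled on a previous day.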

If you want to try grouping the original results into potential /24 subnets:

$uniqueip | select Name,@{name="mask";expression={$_.Name.substring(0,$_.Name.lastindexof('.'))}} | group mask |
select @{name="PossibleSubnet";expression={$_.Name}},@{name="UniqueIPAddr";expression={$_.Group | select -expand Name}}

This will provide a guess at the subnet ID, along with all the related IP's that are in that range.

Tuesday, March 19, 2013

Looking for memory leaks in svchost

I'm running across an issue with multiple 2008R2 systems that have memory leaks somewhere in a service. We see svchost processes building up to hundreds of MB, or even several GB, of RAM utilization. So as a first step, I wanted to come up with a list of systems that may be presenting this problem, and pull up a list of services in the offending process. Using powershell, I came up with this interesting custom PSObject construct. This is some code that you can pipe the names of your machines into, and then deal with the output in other ways later:

new-object psobject -property @{
 Computer=$_
 Services=((gwmi win32_service -computername $_ -filter "processid = $((get-process -computername $_ svchost|sort ws -Descending|select -first 1| tee-object -variable temp).id)"| select name |convertto-csv -notypeinformation) -join ",")
 Memory="{0,-22:n2}" -f ($temp.ws/1MB)
}

For those that are not that familiar with what is happening here, we are creating a new object on the first line, and the @{} is a hashtable passed to -property to define the object's attributes. Our second line creates an attribute "Computer" holding the name you piped in.

The next line creates an attribute "Services" which uses get-wmiobject on Win32_Service to find a specific process PID value. The PID value is obtained by the Get-Process commandlet in a subexpression which goes through multiple pipeline stages: first finding all svchost processes, sorting them by working set size, then picking the largest, and finally storing the result in the variable $temp with tee-object while outputting the PID value into the -filter parameter of GWMI. All of that GWMI result is then reduced to the names of the services and converted to CSV. The -join operator is put around all of that to make it a single CSV-style line.

After that, we create an attribute called "Memory", which takes our $temp variable that we created with tee-object, and puts it through the format operator -f, to make it a 2 decimal place value. We have a subexpression here to change the working set into megabytes by using the 1MB shortcut for the calculation.

The output isn't too beautiful, but for an adhoc look at a large list of machines, it's usable. With some further manipulation or sorting we can go further with this.

Services : 

Memory   : 1,713.16
Computer :
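A hypothetical way to drive this across many machines (servers.txt is an assumed file with one machine name per line, and here I keep the memory value numeric so it sorts correctly):

get-content servers.txt | foreach {
    new-object psobject -property @{
        Computer = $_
        # Largest svchost working set on the machine, rounded to MB
        MemoryMB = [math]::Round((get-process -computername $_ svchost | sort ws -Descending | select -first 1).ws / 1MB, 2)
    }
} | sort MemoryMB -Descending | format-table -auto

The heaviest offenders float to the top, which gives a quick target list for deeper digging.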

Googling around a bit shows there have been some known issues with gpsvc and iphlpsvc, as well as some people complaining about winmgmt causing issues. We can use the sc.exe config option to set these to run in their own memory space (sc.exe \\computername config svcname type= own) to see if that isolates the problem. Stopping the various services did not clear up the problem; note that gpsvc (group policy) is normally blocked for all users except SYSTEM, so it's not stoppable. For further investigation, get-childitem with the .versioninfo.fileversion attribute on the results can get us version details of files, so we can look for version differences between a good machine and a bad machine and see if we may be missing a patch somewhere. If we see growing handle counts, we can do some fun pipelining in powershell with the handle.exe sysinternals tool to see if we can find a pattern there. Maybe something like:

.\handle.exe -a -p <pid> | where {$_ -match ": "} | % {$_.substring($_.indexof(": "))} |group-object |sort count|select count,name -last 10

                        Count Name
                        ----- ----
                           51 : ALPC Port
                           61 : Semaphore
                          106 : EtwRegistration
                          845 : File  (---)   \Device\Afd
                         1862 : Event
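Going back to the patch-level idea, here is a sketch of the version comparison (the server names and the choice of gpsvc.dll are placeholders for whatever you suspect):

# Compare the version of a suspect DLL between a known-good and a leaking machine
$files = "\\GoodServer\c$\Windows\System32\gpsvc.dll",
         "\\BadServer\c$\Windows\System32\gpsvc.dll"
$files | foreach {
    get-childitem $_ | select @{n="Path";e={$_.FullName}},
        @{n="FileVersion";e={$_.VersionInfo.FileVersion}}
}

A version mismatch here points at a missing hotfix as the first thing to rule out.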

Wednesday, March 6, 2013

Get-DellWarranty Powershell script for getting warranty details

*Update Jan 3, 2014.  The problem with scripts that scrape websites is that websites change.  Dell's site has changed, so the information below doesn't function anymore.  They now require a logon and make the warranty details more difficult to find (at least for the full list).*

I had the task recently of running an inventory on a large group of servers looking for old systems that require replacement.  One of the criteria was the hardware warranty expiration.  These are primarily Dell systems, so I looked around for scripts to check that, but wasn't having much luck getting them working.  Since the scripts hit the Dell web site looking for details, I would suspect various changes there caused the issues.  After looking at the code a bit and running some fiddler traces to see the full interaction with the site, I found I would need cookie support.  Since I hadn't seen that work well in the past, I had used PERL LWP for this, as you can find in a previous post on Oracle Access Manager diagnostic page scraping automation.  I did some searching and came upon a very useful post which shows how to get cookies working in the Posh v3 invoke-webrequest commandlet.  So, armed with that, I put together this code to pull the details.  Note that there can be more than one warranty listed for a serial number.  This script takes a service tag number, or an array of service tag numbers, and outputs something like this:

ServiceTag         : ######
Country            : United States
WarrantyExpiration : 3/23/2011
WarrantyType       : Gold or ProSupport with Mission Critical
WarrantyStarted    : 3/23/2007

ServiceTag         : ######
Country            : United States
WarrantyExpiration : 3/23/2011
WarrantyType       : 4 Hour On-Site Service
WarrantyStarted    : 3/22/2008

You can pass any additional parameter to the script that is accepted by invoke-webrequest. So if you need a proxy or authentication, you can do so.

#Requires -version 3.0

param (
 [parameter(mandatory=$true,ValueFromPipeline=$true,position=0)]
 [string[]]$svctags,
 [parameter(mandatory=$false, ValueFromRemainingArguments=$true,position=1)]
 $Remaining
)

begin {
 #process any extra arguments like proxy settings, credentials, etc (used for invoke-webrequest)
 $extras = @{}
 for ($i = 0; $i -lt $Remaining.Count; $i++) {
  if ($Remaining[$i] -match "^-(.*)") {
   $val = $Matches[1]
   if ($Remaining[$i+1] -match "^-" -or ($remaining.Count -eq $i+1)) {
    #a switch parameter with no value following it
    $extras.$val = $true
   } else {
    $extras.$val = $Remaining[$i+1]
    $i++
   }
  }
 }
 $script:posturl = ""
 $script:geturlPrefix = ""
 $script:geturlSuffix = "?s=BIZ#ui-tabs-5"
 $script:session = New-Object Microsoft.PowerShell.Commands.WebRequestSession
}

process {
 foreach ($svctag in $svctags) {
  Invoke-WebRequest @extras -Uri ($script:geturlPrefix + $svctag + $script:geturlSuffix) `
   -WebSession $script:session -ErrorAction SilentlyContinue |Out-Null
  $postresult = Invoke-WebRequest @extras -Uri $script:posturl -WebSession $script:session -ErrorAction SilentlyContinue
  if ($postresult -eq $null) {
   #webrequest failed
  } else {
   $countrytag = ($postresult.allelements|where {$_.tagname -match "DIV" -and $_.class -eq "Width100Percent" -and $_.innerHTML -match "CounrtyShipDateRight" -and $_.innerText[0] -eq "C"}).innerhtml.split("`n")[1]
   $countrytag = $countrytag.substring($countrytag.indexof(">")+1)
   $countrytag = $countrytag.substring(0,$countrytag.indexof("<")-1)
   #countrytag is now just a country name in text format with HTML removed
   $warrTbl = ($postresult.allelements|where {$_.tagname -eq "form" -and $ -eq "grid"}).innerhtml
   $warrTbl = $warrTbl.replace(" class=uif_t_altRow","").substring($warrTbl.indexof("TBODY")-1)
   $warrXML = [xml]("<root><TABLE>" + $warrTbl + "</root>")
   foreach ($xEntry in $warrXML.root.TABLE.TBODY.TR) {
    $result = New-Object PSobject
    Add-Member -InputObject $result NoteProperty ServiceTag $svctag
    Add-Member -InputObject $result NoteProperty Country $countrytag
    Add-Member -InputObject $result NoteProperty WarrantyExpiration $xEntry.TD[3]
    Add-Member -InputObject $result NoteProperty WarrantyType $xEntry.TD[0]
    Add-Member -InputObject $result NoteProperty WarrantyStarted $xEntry.TD[2]
    $result
   }
  }
 }
}
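A hypothetical invocation (the script file name, service tags, and proxy URL are all placeholders):

#Extra parameters after the tags fall through to invoke-webrequest
.\Get-DellWarranty.ps1 ABC1234,XYZ9876 -Proxy http://proxy.example.com:8080 -ProxyUseDefaultCredentials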

Wednesday, February 6, 2013

Simple Powershell TCP client for reading telnet banner

For many IT pros, you will find yourself in situations where you get an alert that a system you manage is down.  This can be from a variety of issues, but typically you will ping the system to see if it's really down, and sometimes, just based on the ICMP reply types for the ping, you can roughly tell where the problem lies.

Some of these results can roughly be translated as follows

     TTL expired in transit = routing loop.  Can be due to default routes in use and a link being down
     Destination host not reachable = the subnet is reachable or directly connected, but there is no ARP reply for ethernet to work, or firewall filtering is occurring
     Destination network not reachable = a router doesn't know how to get to that subnet
     Request timed out = a router may be trying to forward the packet but the network is not available, the system is on the same subnet and down, or filtering is occurring

You can follow your ping with a trace-route to see if the link going to the location the system is in is up.  Often, you won't get much information from the trace-route, as most hops won't have reverse DNS records.  So, if you have an environment with routers that can be accessed by telnet, and the banner information provides some details on where the router is, you can telnet to the last responding hop in your trace to see where the network connection drops.  Since telnet.exe is frequently not available on windows machines these days, yet powershell is, you can leverage the .NET framework to do the work for you (and you won't lose output printed prior to the telnet command, or have to wait for the connection to auto-disconnect you).

#requires -version 2.0
param (
    [parameter(mandatory=$true,position=0,helpmessage="Hostname or IP")]
    [string]$hostname
)

if (test-connection $hostname) {
    $conn = new-object$hostname,23)
    $str = $conn.getstream()
    $buff = new-object system.byte[] 1024
    $enc = new-object System.Text.ASCIIEncoding
    start-sleep -m 200
    $output = ""
    while ($str.DataAvailable -and $output -notmatch "username") {
        $read = $$buff,0,1024)
        $output += $enc.getstring($buff, 0, $read)
        start-sleep -m 300
    }
    $conn.close()
    $output
} else {
    Write-Error "Unable to ping or resolve host"
    exit 1
}

Tuesday, January 29, 2013

Invoke-webrequest example (Finding time for solat in Kuala Lumpur)

On and off over the last few years, I have had to script various HTML interactions to pull data from sites, navigate through them and parse results.  Previously I had done this with a variety of tools, from scripting Internet Explorer with COM, to .NET's webclient, to various PERL modules.  Each had a variety of limitations or parsing difficulties to overcome.  Recently I started playing around with Powershell's Invoke-Webrequest.  For simplicity, features and easy parsing, this looks quite promising.  Parsing is especially easy when sites are well designed and various components have unique names.  When you run the request, there is a collection of results under AllElements.

(The webrequest result)

   TypeName: Microsoft.PowerShell.Commands.HtmlWebResponseObject

Name              MemberType Definition
----              ---------- ----------
Equals            Method     bool Equals(System.Object obj)
GetHashCode       Method     int GetHashCode()
GetType           Method     type GetType()
ToString          Method     string ToString()
AllElements       Property   
BaseResponse      Property   System.Net.WebResponse BaseResponse {get;set;}
Content           Property   string Content {get;}
Forms             Property   Microsoft.PowerShell.Commands.FormObjectCollect...
Headers           Property   System.Collections.Generic.Dictionary[string,st...
Images            Property   Microsoft.PowerShell.Commands.WebCmdletElementC...
InputFields       Property   Microsoft.PowerShell.Commands.WebCmdletElementC...
Links             Property   Microsoft.PowerShell.Commands.WebCmdletElementC...
ParsedHtml        Property   mshtml.IHTMLDocument2 ParsedHtml {get;}
RawContent        Property   string RawContent {get;}
RawContentLength  Property   long RawContentLength {get;}
RawContentStream  Property   System.IO.MemoryStream RawContentStream {get;}
Scripts           Property   Microsoft.PowerShell.Commands.WebCmdletElementC...
StatusCode        Property   int StatusCode {get;}
StatusDescription Property   string StatusDescription {get;}

(The All Elements Property)

   TypeName: System.Management.Automation.PSCustomObject

Name        MemberType   Definition
----        ----------   ----------
Equals      Method       bool Equals(System.Object obj)
GetHashCode Method       int GetHashCode()
GetType     Method       type GetType()
ToString    Method       string ToString()
innerHTML   NoteProperty  innerHTML=null
innerText   NoteProperty  innerText=null
outerHTML   NoteProperty  outerHTML=null
outerText   NoteProperty  outerText=null
tagName     NoteProperty System.String tagName=!

Here we have a list of all tag elements on the page. These can be as wide as <html> and everything in that, or down to a leaf element like <img>

Going back to parsing of well-designed sites, I wanted to write something to check prayer times.  In Malaysia there are a few sites that post them.  In my experience, the first site I tried is broken frequently, and with its recent redesign, it looks too complicated to try to parse.  Bank Islam's site, on the other hand, labels the various parts of its page, so it's easy to pull the data.  In the raw HTML, we have this

<label class="SolatTime">Solat Time, KL <img src="/_layouts/AtQuest/BankIslam/Images/greyarrow3.jpg" /> Imsak 5:59 | Subuh 6:09 | Syuruk 7:28 | Zuhur 1:29 | Asar 4:51 | Maghrib 7:27 | Isyak 8:39</label><br />

We can see here they have the data in a labelled class "SolatTime".  So we can grab that and split up the results, returning a PSobject of times.

$BIsitedata = Invoke-WebRequest -Uri 
$htmldata = $BIsitedata.allelements|where {$_.tagname -eq "Label" -and $_.innerhtml -match "SOLAT"}
$result = new-object psobject
$htmldata.innertext.split("|") | where {$_ -notmatch "Solat time" } |foreach {
 $entry = $_.split()
 add-member -inputobject $result NoteProperty $entry[1] $entry[2]
}
$result

And our results:

Subuh   : 6:09
Syuruk  : 7:28
Zuhur   : 1:29
Asar    : 4:51
Maghrib : 7:27
Isyak   : 8:39

Monday, January 14, 2013

Delegating WMI security remotely with powershell

A while back, I was working on a script for checking domain controller security event logs which needed to be handed off to a team that was not part of the Domain Admins group.  While they had permissions to access the security event logs through user rights in GPO, trying to read the event log through an MMC remotely is ridiculously slow.  A good solution was to use WMI with a tight filter on event IDs and a brief time window for the specific event.  The problem was, none of this team had WMI access.  So, to go about fixing a few hundred domain controllers, I started poking around at WMI permissions.  You can edit these through the MMC->Component Services console, but doing that via RDP on such a large scale is not an option.  There are some script examples in VB, such as this one.  But, being a powershell guy, I wanted to use some existing code and wrap in the additional lines to update the security.  So, what I did was create the permissions that I wanted on one specific machine (similar to the above article), and use powershell to pull the security descriptor.  The example below delegates the common root\CimV2 namespace, which contains the event log event classes.

#Collect your security descriptor from the machine where permissions are already set correctly
$sd = gwmi -namespace "root\cimv2" -Class __SystemSecurity -ComputerName $FixedMachine
$binarySD = @($null)
$sd.PsBase.InvokeMethod("GetSD",$binarySD) > $null

#Optionally convert the captured descriptor to SDDL to verify what you grabbed
$sdhelper = new-object system.management.ManagementClass Win32_SecurityDescriptorHelper
$sdhelper.BinarySDToSDDL($binarySD[0]).SDDL

#At this point you can loop through a list of machines and push out the updated permissions.
$sdlocal = gwmi -Namespace "root\cimv2" -Class __SystemSecurity -computername $remotemachine
$sdlocal.PsBase.InvokeMethod("SetSD",$binarySD) > $null
For more details on the various types of permissions, you can reference this technet article.  For remote read-only, you can go with "remote enable", and "enable account".
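To roll this out to the whole set of domain controllers, a hypothetical loop (dclist.txt and the reference machine name REFDC01 are placeholders; the GetSD/SetSD out-parameter pattern follows the approach above):

# Read the binary SD once from the reference machine, then stamp it onto each DC
$sd = gwmi -Namespace "root\cimv2" -Class __SystemSecurity -ComputerName "REFDC01"
$binarySD = @($null)
$sd.PsBase.InvokeMethod("GetSD",$binarySD) > $null

get-content dclist.txt | foreach {
    $target = gwmi -Namespace "root\cimv2" -Class __SystemSecurity -ComputerName $_
    if ($target -ne $null) {
        $target.PsBase.InvokeMethod("SetSD",$binarySD) > $null
        "$_ updated"
    } else {
        write-warning "WMI connection failed for $_"
    }
}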