Friday, July 30, 2010

Dynamic DNS without the [DNS]

I have been poking around at Microsoft's name resolution systems, an investigation that started as a side project of some computer account security research. I was looking at whether computer systems can use the DNS protocol to register names other than their own, to see if additional records could be created for a specific host name. The finding is that hosts can register names other than their own, but if a record already exists for the name they are trying to register, and the computer account (or user account) doesn't have rights to update it, the registration fails. This is good from a security perspective, but it shows some limitations for usability and for advanced configurations driven from a level closer to the user or machine.

In the process, I took a deeper look at Active Directory integrated DNS zones to see how everything is represented. First off, DNS zones can be found in several different locations inside of Active Directory, depending on their replication scope. For those not too familiar with AD and AD DNS: briefly, the advantage of Active Directory DNS is that it is made highly available by being stored in the Active Directory database, with all changes replicated as part of standard AD replication. This allows records to be updated with small, incremental updates instead of larger zone transfers that run on a schedule. Additionally, with secure dynamic updates, updates are authenticated through Kerberos and the records are owned by the updater.

Back to replication... There are four options for DNS zone replication scopes in AD: Legacy, Domain, Forest, and application partition. Here is some information about each:

1) Legacy. This zone is stored in the standard domain partition, under CN=MicrosoftDNS,CN=System in the DC=yourdomain,DC=com partition. The zone appears as a container icon and is of the class dnsZone. The problem with this type of zone is that although it is only usable within the scope of its own domain, it is part of the domain's System container and will replicate as part of the global catalog to other domains. So you waste space and replication traffic sending information to domain controllers that cannot use it.

2) Domain. This zone is stored in a separate partition, DC=DomainDnsZones,DC=yourdomain,DC=com. This partition is only hosted on domain controllers that are running the DNS service. So replication is limited to only where the records are needed, and the data stays in that one domain.

3) Forest. This zone is stored in a separate partition, DC=ForestDnsZones,DC=yourdomain,DC=com. This partition replicates to all DNS servers in the forest; it is only hosted on domain controllers with DNS installed.

4) Application. If you create your own custom application partitions, you can use them to store DNS zones, and you can choose where the application partitions replicate. For DNS, this may be useful if you want specific zones hosted only on DCs in a certain city, but it requires more manual work to manage.
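For the application partition option, the dnscmd utility can handle the partition plumbing. A rough sketch, with placeholder server, partition, and zone names:

```powershell
# Create a custom application partition (placeholder FQDN), enlist a second
# DC in it, then move an existing zone into the new partition.
dnscmd dc01 /CreateDirectoryPartition dnspart.yourdomain.com
dnscmd dc02 /EnlistDirectoryPartition dnspart.yourdomain.com
dnscmd dc01 /ZoneChangeDirectoryPartition branchzone.yourdomain.com dnspart.yourdomain.com
```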

If you look inside ADSI Edit at the dnsZone containers in these partitions, you will see dnsNode objects for the records, named DC=<recordname>. For example, a www record in a zone will appear as DC=www. Inside each of these objects is a dnsRecord attribute. This is multivalued, so if a name had records for 3 different host machines, you will see 3 dnsRecord values. The dnsRecord itself is all in hex. The type of record (A, CNAME, SRV, etc.) is encoded within the hex, along with the values, timestamps, TTL, etc. (Hopefully I will get back on track to decoding these one of these days.)
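As a start on that decoding, here is a sketch that reads the dnsRecord values off a dnsNode and pulls out the record type and TTL. The field offsets are my reading of the MS-DNSP dnsRecord structure (verify against the spec before relying on them), and the LDAP path is a placeholder for your environment:

```powershell
# Sketch: decode the fixed header of dnsRecord values pulled from AD.
# Offsets per my reading of MS-DNSP: DataLength at 0, Type at 2,
# TTL at 12 (big-endian, unlike the other fields), record data at 24.
$node = [ADSI]"LDAP://DC=www,DC=yourdomain.com,CN=MicrosoftDNS,DC=DomainDnsZones,DC=yourdomain,DC=com"
foreach ($rec in $node.Properties["dnsRecord"]) {
    $b = [byte[]]$rec
    $type = [BitConverter]::ToUInt16($b, 2)   # 1 = A, 5 = CNAME, 33 = SRV ...
    $ttl  = ($b[12] -shl 24) -bor ($b[13] -shl 16) -bor ($b[14] -shl 8) -bor $b[15]
    $data = if ($type -eq 1) { "$($b[24]).$($b[25]).$($b[26]).$($b[27])" }
            else             { "(raw, $($b.Length) bytes)" }
    "type={0} ttl={1} data={2}" -f $type, $ttl, $data
}
```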

The interesting point to all of this is that we are looking at the records in LDAP. Typically, records are generated through the DNS service using authentication from a workstation. The DHCP Client service registers the hostnames of the machine and uses the computer account to own the record. When you look at the permissions of the dnsZone objects in LDAP, Authenticated Users has Create All Child Objects rights. This gives any user or computer the ability to create their own records outside of the whole DNS protocol and its built-in methods. So, if you need to create additional dnsRecord values on a dnsNode object that you own, you can do that. If you want to dynamically create CNAME, SRV, or other records, you can. If you want to dynamically create a DNS record that can't be scavenged, no problem.
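To illustrate creating a record straight over LDAP, here is a rough sketch that writes a dnsNode with a single A record. The blob layout is again my reading of MS-DNSP (not verified here), and the zone DN, record name, and IP are all placeholders:

```powershell
# Sketch: create an A record by writing a dnsNode object directly over LDAP,
# bypassing the DNS dynamic update protocol entirely.
$zoneDn = "DC=yourdomain.com,CN=MicrosoftDNS,DC=DomainDnsZones,DC=yourdomain,DC=com"
$name   = "myalias"                    # placeholder record name
$ip     = [byte[]](10,1,2,3)           # placeholder IPv4 address

$blob = New-Object byte[] 28           # 24-byte header + 4-byte A record data
$blob[0]  = 4                          # DataLength = 4 (IPv4 address)
$blob[2]  = 1                          # Type = 1 (A)
$blob[4]  = 5                          # Version
$blob[5]  = 0xF0                       # Rank = RANK_ZONE
$blob[14] = 2; $blob[15] = 0x58        # TTL = 600 seconds (big-endian)
[Array]::Copy($ip, 0, $blob, 24, 4)    # serial/timestamp left zeroed

$zone = [ADSI]"LDAP://$zoneDn"
$node = $zone.Create("dnsNode", "DC=$name")
$node.Put("dnsRecord", $blob)
$node.SetInfo()
```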

Someday I hope to work out a basic tool or script for some of this, but presently I'm still too lazy to finish up my nbtns PowerShell library for doing all of the NetBIOS name and WINS protocol functions through PowerShell.

The DNS tunneling possibilities here are interesting, if you have AD DNS exposed to the outside world.

Wednesday, July 14, 2010

Finding account lockout source (general practice)

A lot of escalations come my way regarding difficult account lockout issues and repeat lockout events. Over time, I took some of the methodology I used in tracking down the root source of the failed logons and put it into an auto-tracker script. This runs quite quickly using WMI's Win32_NTLogEvent class with a very tight filter on the TimeGenerated property. When you provide an upper and lower limit, searching event logs on remote machines can be done very quickly. This is important, as in most cases you may traverse 2 or 3 machines to get to the originating machine.
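The tight TimeGenerated filter looks roughly like this. WMI wants DMTF-formatted dates, so a .NET DateTime window has to be converted first; the DC name is a placeholder:

```powershell
# Sketch: query only failure audits within a narrow time window on a remote DC.
$dc    = "DC01"    # placeholder server name
$lower = [System.Management.ManagementDateTimeConverter]::ToDmtfDateTime((Get-Date).AddMinutes(-5))
$upper = [System.Management.ManagementDateTimeConverter]::ToDmtfDateTime((Get-Date))
$filter = "Logfile='Security' AND Type='Audit Failure' " +
          "AND TimeGenerated>='$lower' AND TimeGenerated<='$upper'"
gwmi Win32_NTLogEvent -ComputerName $dc -Filter $filter |
    select EventCode, TimeGenerated, InsertionStrings
```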

When starting to track down a lockout source, either manually or through automation, you first check the bad password timestamps on all domain controllers [this is an LDAP query to all writable domain controllers, so it can be slow; a much faster approach is to read the replication metadata for that attribute on the PDC only to get the time and source]. You can do this with LDAP queries to each DC, or use the Microsoft LockoutStatus tool; LDAP queries have the advantage that you can script them. The key point here is that the PDC will always have the most recent bad password time, even if the corresponding events are not located on that machine. So for automation purposes, I simplified the process by ignoring the PDC. You could always find the most recent reported event and check whether the PDC's timestamp is more recent by at least a few seconds to decide if the PDC is where you should look. It is also important to note that you are assuming the last bad password time you are looking at is related to the problem, and is not something else like the user mistyping a password. This may require several runs of an automated script to see if you get different sources.
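The per-DC timestamp sweep can be scripted along these lines, assuming the current domain and a placeholder username:

```powershell
# Sketch: pull badPasswordTime for one user from every writable DC.
# The DC with the newest value is where to look for the failure events next.
$user   = "johndoe"   # placeholder account name
$domain = [System.DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain()
foreach ($dc in $domain.DomainControllers) {
    $root = [ADSI]"LDAP://$($dc.Name)"
    $ds = New-Object System.DirectoryServices.DirectorySearcher($root)
    $ds.Filter = "(&(objectClass=user)(sAMAccountName=$user))"
    [void]$ds.PropertiesToLoad.Add("badpasswordtime")
    $r = $ds.FindOne()
    if ($r -and $r.Properties["badpasswordtime"].Count -gt 0) {
        $t = [DateTime]::FromFileTime([Int64]$r.Properties["badpasswordtime"][0])
        "{0}  {1}" -f $dc.Name, $t
    }
}
```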

From the bad password timestamp, you can go to the security log of that domain controller, filter for failure audits, and check the event text or InsertionStrings to get the source machine or source IP. Different event IDs carry different information. Some failed logons will not show up in the log, or may have a blank source. In this case it is important to have netlogon debugging enabled so you can check netlogon.log, as it may contain more details on the source. In netlogon.log, you want to look at the entry in parentheses that shows via XXXXXX. Ensure the code on that line is a failure, not a success, so you don't chase the wrong entries. For automation, it is helpful to have some code that decodes the various security event hex codes and integer values to plain text for display purposes (my code).
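A decoder for the hex codes can be as simple as a lookup table. The mappings below are a few common NTSTATUS values seen in failed logons; treat the list as a partial sketch to extend:

```powershell
# Sketch: map common failed-logon status/substatus codes to friendly text.
$codes = @{
    "0xC000006A" = "Bad password"
    "0xC0000064" = "No such user"
    "0xC0000234" = "Account locked out"
    "0xC0000072" = "Account disabled"
    "0xC0000071" = "Password expired"
    "0xC0000193" = "Account expired"
}
function Decode-Status([string]$hex) {
    if ($codes.ContainsKey($hex)) { $codes[$hex] } else { "Unknown ($hex)" }
}
Decode-Status "0xC000006A"   # -> Bad password
```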

When looking at sources in the event log, you need to carefully check both the source IP and the source workstation if both are present. Sometimes they differ, and the source workstation may be the machine you are already on; it is often best to prefer the IP as the source value. Go to the source machine and look at the security logs there for additional information. When you get down to workstations and member servers, you may get process IDs and port information that can help narrow things down to a process. Checking logon type codes can be a dead giveaway for scheduled tasks and services with bad cached passwords. On some occasions you will not see a source machine in the security log. If this is the case, you will want to enable netlogon debugging (from a command prompt: nltest /dbflag:0x2080ffff). Some logon events will show up there, and you can get further source information from them. Just search the text files in c:\windows\debug\netlogon* for the user name, and look for other machine names in any entry that doesn't have a 0x0 status.
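That netlogon.log search is easy to script; a sketch, with the DC name and account as placeholders:

```powershell
# Sketch: scan netlogon debug logs on a DC for one user's failed attempts.
# Entries with a 0x0 status are successes, so keep only non-zero statuses.
$user = "johndoe"   # placeholder account name
Get-ChildItem "\\DC01\admin$\debug\netlogon*" |
    Select-String -Pattern $user |
    Where-Object { $_.Line -match "0x[0-9A-Fa-f]+" -and $_.Line -notmatch "0x0\b" } |
    ForEach-Object { $_.Line }
```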

You can look at my high-level overview for how I wrote my tracking script. For security events, I had to identify some common ground between the event IDs I wanted to work with, and use InsertionStrings to build a standardized, generic PSObject from the available information.
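The normalization idea looks roughly like this: each event ID stores the interesting fields at different InsertionStrings indexes, so you map per-ID and emit one common object shape. The index positions below are illustrative, not verified:

```powershell
# Sketch: normalize different failure-audit event IDs into one generic object.
function ConvertTo-LogonFailure($event) {
    switch ($event.EventCode) {
        529 { $u = $event.InsertionStrings[0]; $src = $event.InsertionStrings[5] }
        644 { $u = $event.InsertionStrings[0]; $src = $event.InsertionStrings[1] }
        default { return $null }   # event ID not handled yet
    }
    New-Object PSObject -Property @{
        EventCode = $event.EventCode
        User      = $u
        Source    = $src
        Time      = $event.TimeGenerated
    }
}
```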

Some useful commands to find uses of an account on a local machine (open PowerShell as administrator, and set the target account name without the domain):

$targetaccount = "johndoe"

write-host "`n checking services"
gwmi win32_service |select name,startname | where {$_.startname -match $targetaccount}

write-host "`n checking processes"
gwmi win32_process | select name,@{name="ownerdomain";expression={$_.getowner().domain}},@{name="owneruser";expression={$_.getowner().user}} | where {$_.owneruser -match $targetaccount}

write-host "`n dumping any IIS appPools"
c:\windows\system32\inetsrv\appcmd.exe list apppool /text:* |where {$_ -match "APPPOOL\.NAME|userName"}

write-host "`n checking remote desktop connections"
quser | where {$_ -match $targetaccount}
write-host "`n checking scheduled tasks"
schtasks /query /v /fo CSV | where {$_ -match $targetaccount}

write-host "`n checking mapped network drives for currently logged on user"
dir hkcu:\network

Tuesday, July 13, 2010

How to validate client DNS settings for a few thousand systems

Over the last year or so, I have been working on correcting and verifying DNS settings on clients. Inconsistent documentation between build and support teams, or mistyped entries, can become a large-scale problem as well as an outage waiting to happen. Creating a centralized store of DNS client settings is always a challenge, especially when not everyone is in close communication and you want automation involved.

Being an Active Directory guy, I thought of ways to involve AD in this. There are always GPO settings; however, these are not permanent, and GPOs can't differentiate between NICs that should and shouldn't be touched. To get around this, I wanted something that was location specific and smart enough to tell what a valid internal IP address was. Having SCOM and SCCM as the tools to help implement this, I came up with a set of scripts for various OSes to parse reports, set client settings, and monitor client settings for any deviation from the published standards.

The trick to this was stashing DNS client setting information in AD. Since the settings were site specific, tying the information to the site object seemed like a natural choice. From here you can either extend the schema to include an attribute for this, or use one that is available but not in use. The best path seemed to be using an existing attribute, and what appears to be perfect is the location attribute. Since it is a text field, and is editable in the AD Sites MMC, it is easy to manage. Once that is in place, VBScripts are easy to produce that check the client machine's site, pull the domain-specific information out of the location field, and compare the approved settings against all "valid IP" NICs for any problems. Different issues return different results to the scripts for the responsible teams to resolve.

There are always some exceptions to the rule that are hard to code for, so not getting too overzealous in the configuration scripts, and providing an exception capability in the monitoring scripts, is a must.
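The core check can be sketched in a few lines of PowerShell (the production version was VBScript for older clients). This assumes the site's location attribute holds a comma-separated list of approved DNS servers, and crudely treats "has a default gateway" as the test for a valid NIC:

```powershell
# Sketch: compare each routable NIC's DNS servers against the approved list
# stashed in the AD site's location attribute (assumed comma-separated).
$site     = [System.DirectoryServices.ActiveDirectory.ActiveDirectorySite]::GetComputerSite()
$siteObj  = $site.GetDirectoryEntry()
$approved = "$($siteObj.Properties['location'].Value)" -split ","
$nics = gwmi Win32_NetworkAdapterConfiguration -Filter "IPEnabled=TRUE" |
        where { $_.DefaultIPGateway }     # crude "valid NIC" test
foreach ($nic in $nics) {
    $diff = Compare-Object $approved $nic.DNSServerSearchOrder
    if ($diff) { "{0}: DNS settings deviate from site standard" -f $nic.Description }
}
```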