Sunday, November 12, 2017

Linux Mint 18.2 and Windows 10 dual boot

I recently bought a new desktop machine that came with Windows 10 preinstalled.  I had asked the company doing the build to create a 500 GB partition for Windows on one of the two hard drives.  This would leave a significant amount of space on both drives for some cross-disk partitioning for Linux and other uses.  It's been a while since I had to select a distro, and I ended up picking Linux Mint.

Starting the install process from the bootable CD was fine.  I created swap and /tmp partitions on drive #1 (the same disk as Windows) and another for / on drive #2.  All was fine until the bootloader was about to be written and grub-install failed.  I tried selecting a few other locations for the loader, but either it didn't like those or the install process hung at some point.

I double-checked the BIOS to ensure fast boot and secure boot were off (both were from the start).  Devices were set to UEFI boot mode in the BIOS as well.  After some further checking, I found the Windows partitions were built in MBR/BIOS boot mode, and that was screwing up grub.  A quick Google search turned up an article that made the conversion to UEFI pretty straightforward.

I went back to Windows, booted into recovery, and tried the steps, but the validate came back with an error for the disk.  Looking around some more, it seemed the Linux partitions created during the Mint installation attempt might be the problem.  So I booted back into the live Mint/install environment and used the partition tools to wipe out all the Linux partitions.  I booted back into Windows, then recovery, tried the steps again, and everything worked fine this time.
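I didn't keep the link to the article, but the validate/convert steps match Microsoft's mbr2gpt tool (ships with Windows 10 1703 and later).  A rough sketch, run from the recovery environment's command prompt; the disk number is an assumption for your layout:

```shell
# Windows-only commands, shown for illustration.
# Check that the disk can be converted -- this is the validate step
# that failed for me until the leftover Linux partitions were removed:
mbr2gpt /validate /disk:0

# Convert the partition table from MBR to GPT without data loss:
mbr2gpt /convert /disk:0
```

From a running Windows session (rather than WinPE/recovery), the same commands need the additional /allowFullOS switch.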

One more reboot, back to the Mint installer, set up the partitions again, and everything was smooth and successful.

Friday, October 13, 2017

openSUSE 42.2 to 42.3 upgrade boots into emergency mode

A few days ago I ran the usual "zypper up" on my 42.2 system and received a lovely update from NVIDIA for their G03 driver on the 4.4.27-2 kernel.  After this, I noticed VLC stopped working due to some plugin.  I rebooted my machine and X no longer started; the error was something about the NVIDIA driver not being able to load.  Many hours of trying to fix that, and trying alternatives like Nouveau, only got me to a graphical interface that couldn't seem to do more than 800x600 resolution.  Since there was a new distro update with a different NVIDIA driver and kernel, I gave that a try.

The upgrade went OK and, as usual (for the past 10 distro updates this machine has gone through), I expected some problems.  Usually it's my bootloader pointing to the wrong drive and the machine not being able to start up, but not this time.  For the first time, my machine booted into emergency mode with no obvious errors in the output on the screen.  All file systems (root, /tmp, swap, Windows partitions) were mounted RW.  I had no networking, but could otherwise do pretty much anything from the command line except start a graphical interface.  The output of "journalctl -xb" showed two errors for systemd targets: one for local-fs.target and one I can't remember at the moment, but I think it was USB related or something else that looked file system related.  After googling around, I couldn't find anything specific to this problem, though a few posts mentioned looking at fstab for partition issues.  I found one line in there which seemed suspicious given the errors in systemd:

usbfs               /proc/bus/usb       usbfs       auto,devmode=0666     0 0

I commented this out, rebooted, and the boot was good again.  Later, installing the NVIDIA driver and downgrading the kernel to the version the driver was built for resolved my original problem.  I spent some more time recreating my desktop environment for the new KDE version.  Altogether it was about a 12-hour recovery process.  So thanks NVIDIA, you guys are awesome.
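The fix itself is a one-line change.  Here is a sketch of it with sed, demonstrated on a sample file rather than the real /etc/fstab (the UUID line is made up for the demo):

```shell
# Demo on a sample fstab; for the real change, run the sed against
# /etc/fstab as root, and keep a backup first.
cat > fstab.sample <<'EOF'
UUID=abcd-1234      /                   ext4        defaults              0 1
usbfs               /proc/bus/usb       usbfs       auto,devmode=0666     0 0
EOF

# Prefix the usbfs line with '#' so systemd no longer tries to mount it
sed -i 's|^usbfs[[:space:]]|#&|' fstab.sample

grep '^#usbfs' fstab.sample
```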

Thursday, October 5, 2017

Master list of Domain Join errors

This article is a collection of error messages from the domain join process, the Windows event viewer, and general observations.  All of these were tested with a Windows Server 2012 R2 member joining a single 2012 R2 domain controller over a simulated router.  The domain is testforest.local and the domain controller's IP is 10.1.1.50.  Various ports were blocked for each test and the results are recorded below.



Main Error Message on client: "An Active Directory Domain Controller (AD DC) for the domain 'test.local' could not be contacted.  Ensure that the domain name is typed correctly"



Situation: No functional DNS.  That means the client has no DNS IPs configured, they are not valid DNS server IPs, they are not accessible to this client, etc.

Sub Error Message when Details are expanded:

Note: This information is intended for a network administrator.  If you are not your network's administrator, notify the administrator that you received this information, which has been recorded in the file C:\Windows\debug\dcdiag.txt.

The following error occurred when DNS was queried for the service location (SRV) resource record used to locate an Active Directory Domain Controller (AD DC) for domain "testforest.local":

The error was: "This operation returned because the timeout period expired."
(error code 0x000005B4 ERROR_TIMEOUT)

The query was for the SRV record for _ldap._tcp.dc._msdcs.testforest.local

The DNS servers used by this computer for name resolution are not responding. This computer is configured to use DNS servers with the following IP addresses:

10.1.1.50

Verify that this computer is connected to the network, that these are the correct DNS server IP addresses, and that at least one of the DNS servers is running.

Steps to perform: Ensure the client is pointing to a valid DNS server that can resolve this Active Directory domain.  Using nslookup as a troubleshooting tool, or nltest /dnsgetdc:, will help test connectivity.
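A sketch of those two checks from the client (Windows commands, using this article's test domain; substitute your own):

```shell
# Ask DNS directly for the SRV record the domain join process looks up:
nslookup -type=SRV _ldap._tcp.dc._msdcs.testforest.local

# Ask the Netlogon service to locate a DC, which exercises DNS
# resolution plus actual connectivity to the returned DC:
nltest /dnsgetdc:testforest.local
```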



Situation:  An RODC is accessible, but a writable (RW) domain controller is not.  Your machine may be at a branch office with a local RODC that is handling DNS queries, while the link back to a writable domain controller is down.  This error can also come up if the client has a functioning DNS server that does provide answers, but due to some connectivity problem the machine can't reach a domain controller.

Sub Error Message when Details are expanded:

DNS was successfully queried for the service location (SRV) resource record used to locate a domain controller for domain "testforest.local":

The query was for the SRV record _ldap._tcp.dc._msdcs.testforest.local

The following domain controllers were identified by the query:
forest1dc1.testforest.local

However no domain controllers could be contacted.



Situation: Functional DNS server, but the server doesn't cover this zone.  This means the DNS server is accessible and is providing answers, but it cannot resolve anything in this Active Directory zone.  It does not host the zone, it does not forward to another server that can answer, nor does it do any recursion to find the answer.


Sub Error Message when Details are expanded:
Note: This information is intended for a network administrator.  If you are not your network's administrator, notify the administrator that you received this information, which has been recorded in the file C:\Windows\debug\dcdiag.txt.

The following error occurred when DNS was queried for the service location (SRV) resource record used to locate an Active Directory Domain Controller (AD DC) for domain "testforest2.local":

The error was: "DNS server failure."
(error code 0x0000232A RCODE_SERVER_FAILURE)

The query was for the SRV record for _ldap._tcp.dc._msdcs.testforest2.local

Common causes of this error include the following:

- The DNS servers used by this computer contain incorrect root hints. This computer is configured to use DNS servers with the following IP addresses:

10.1.1.50

- One or more of the following zones contains incorrect delegation:

testforest2.local
local
. (the root zone)

Steps to Perform: 1) Ensure that the domain name typed on the client is correct; 2) check the DNS infrastructure to find a server that is capable of resolving the Active Directory domain's DNS zone.



Situation: Port 389 (LDAP, UDP/TCP) blocked

Sub Error Message when Details are expanded:

Note: This information is intended for a network administrator.  If you are not your network's administrator, notify the administrator that you received this information, which has been recorded in the file C:\Windows\debug\dcdiag.txt.

DNS was successfully queried for the service location (SRV) resource record used to locate a domain controller for domain "testforest.local":

The query was for the SRV record for _ldap._tcp.dc._msdcs.testforest.local

The following domain controllers were identified by the query:
forest1dc1.testforest.local




## This ends the above section, where the primary error message is that a domain controller could not be contacted.  In all three of these cases, there will be no prompt for credentials.


Error:  The RPC server is unavailable

Situation: Port 135 blocked.

What is seen:  The user is prompted for credentials.  The domain join is slow but eventually succeeds with the welcome-to-the-domain message.  After the success, it may pop up: Changing the primary domain DNS name of this computer to "" failed.  The name will remain "testforest.local".




Error:  Extremely slow domain join and everything else (boot up, logon, etc)


Situation: Kerberos blocked (port 88 dropped by firewall)

What is seen: The domain join still works but is much slower; boot up is very slow, logons are very slow, and GP update is very slow.

Causes errors in system log
-lsasrv 6038  Microsoft Windows Server has detected NTLM authentication is presently being used between clients and this server....

-GroupPolicy 1055  Windows could not resolve the computer name

-TerminalServices-RemoteConnectionManager  1067   The RD Session Host server cannot register 'TERMSRV' Service Principal Name to be used for server authentication.  The following error occurred: The system cannot contact a domain controller to service the authentication request.

-DNS Client Events 8019.  The system failed to register host (A or AAAA) resource records for network adapter with settings:...

In the application log
-Winlogon 6006 GPClient errors


Situation: Kerberos blocked with ICMP reject (port unreachable); same slowness


Error:  none

Situation: port 137 is blocked

What is seen:  Prompts for credentials; no problem with the domain join, it works quickly, no issues.



Situation: port 445 blocked

What is seen: The domain join works quickly, boot speed is fine, and logon speed is fine.  Gpupdate seems to work over ports 137/139 (further blocking these ports breaks group policy with event ID 1096 in the system log).  TCP 139 is the primary fallback for 445, though the other ports may be required to get the connection started.


Situation: port 3268  (AD global catalog) blocked

What is seen: No problem, fast join, no obvious problems after join



Situation: All ICMP traffic is blocked

What is seen: Join is fast, boot is fine, logon is fine.  Nothing significant seen here; the firewall didn't log any packet drops.



Situation: Clock time of machine doesn't match domain controller (large skew >5min)

What is seen:  No problem with the domain join.  System reboot and logons are all fine.  The clock syncs after the post-join reboot.

Error: "An Active Directory Domain Controller (AD DC) for the domain 'test.local' could not be contacted.  Ensure that the domain name is typed correctly"


 Sub error message in Details:

Note: This information is intended for a network administrator.  If you are not your network's administrator, notify the administrator that you received this information, which has been recorded in the file C:\Windows\debug\dcdiag.txt.

The following error occurred when DNS was queried for the service location (SRV) resource record used to locate an Active Directory Domain Controller (AD DC) for domain "testforest.local":

The error was: "This operation returned because the timeout period expired."
(error code 0x000005B4 ERROR_TIMEOUT)

The query was for the SRV record for _ldap._tcp.dc._msdcs.testforest.local

The DNS servers used by this computer for name resolution are not responding. This computer is configured to use DNS servers with the following IP addresses:

10.1.1.50

Verify that this computer is connected to the network, that these are the correct DNS server IP addresses, and that at least one of the DNS servers is running.

Situation:  all dynamic ports above 1023 dropped in both directions.

Causes: DNS traffic dropped on the return path.  If return DNS traffic is working, the domain join is fine, but boot is slow and logons are slow.

System log:

Group policy 1053.  The processing of Group Policy failed.  Windows could not resolve the user name.  This could be caused by ...

Group policy 1055.  The processing of Group policy failed.  Windows could not resolve the computer name.  This could be caused by ...

TerminalServices-RemoteConnectionManager 1067   The RD Session Host server cannot register 'TERMSRV' Service Principal Name to be used for server authentication.  The following error occurred: The RPC server is unavailable.

Service control manager 7022  The Network Location Awareness service hung on starting.

Windows Remote Management 10154

The WinRM service failed to create the following SPNs: WSMAN/Slave1.testforest.local; WSMAN/Slave1.

Additional Data
 The error received was 1722: %%1722.

User Action
 The SPNs can be created by an administrator using setspn.exe utility.

Application Log - winlogon 6006  GPClient taking a long time





Thursday, September 28, 2017

Hajj package (USA) review

This is a long-overdue write-up, but I felt it was important to share my experience and opinions for the benefit of others out there.  In 2016, I went online looking for a US hajj package provider that could cater to American citizens living outside the country.  The previous year I had tried the same, but of all the companies I found on Google, none were responsive to emails.  When I tried again, I attempted to contact at least 8 different companies.  The only one that actually responded to an email was Hilal Hajj.  From the online description, the package looked good, and the fact that they had a sheikh I had actually heard of before (Omar Suleiman) made me think it would be a good group to go with.  The short summary: yes, they are an outstanding tour provider and I was happy with my selection.

To summarize a few pros:

1) They accept people who live overseas.  Some companies only want to fly everyone out of one specific location as one big group.  While I can see the ease of logistics for them, some people don't want to fly around the world twice when they can just fly a few hours to Saudi Arabia.  Hilal was great with this, and I wasn't the only expat in the group.  I flew into Saudi Arabia via a layover in the UAE, where at least 12 others from the group had connected as well.  So we arrived in Medina together and were able to organize and find our tour operator.  Not only were they good at coordinating the ticketing; if you fly from a location where the flight is cheaper, they will reduce the price of your package accordingly.

2) Most importantly, they respond to emails.  Many times over the whole process of registration, visas, and everything else, they were quick to respond.  Some of the responses were not always clear, but something is better than nothing.  I would never want to put a lot of trust in a tour operator that only works over the phone; if they email, at least you sort of have something in writing.  Besides, for those of us overseas, it's a pain to stay up late at night playing phone tag with an operator in the US.  Every other company I contacted via their listed email or website contact form did not respond at all, so no business for you.

3) Knowledgeable operator.  Hilal has been in the business for a long time and they seem to know the best times to move and get things done during the hajj days.  Everything went smoothly and we were where we needed to be, well within the specified times for the rituals.  On Arafat, we arrived hours before Zuhr, so we could be well rested and prepared for the day.  In Muzdalifah, they managed to find an area that was almost empty, so we weren't packed into a really tight spot.

4) Good spiritual leadership.  Sheikh Omar gave guidance throughout the tour and specific guidance before every ritual we performed.  There were a few other sheikhs in the group as part of the other packages, as well as a few Madinah university students Hilal had brought along, so there were always people to answer questions.  Additionally, in Mina this tour operator links up with a few groups from other Western countries for a mini conference with various other Al Maghrib instructors (Abu Eesa, Yahya Ibrahim, etc.).

Cons

1) Jeddah departure.  This isn't really a problem with the tour operator specifically.  The terminal is a complete mess with no real information dissemination, and everyone you ask gives a different answer.  The reason I bring this up in a review of the operator is that, with the wide variety of departure flights across the different people and packages, you get sent there in separate groups with no one from Hilal present while you try to figure out your flight.  So it can be a bit difficult to get where you need to be, especially if no one in your group speaks Arabic.

The Omar package is a Medina-first package, which is something I would recommend in any package.  Going to Medina first helps you acclimatize to the weather and get yourself ready spiritually.  Medina is so much more peaceful than Mecca, so you have a chance to relax, spend time at the masjid, and get shopping out of the way.  We had several sessions of talks in Medina and were also able to have a session with Sheikh Tahir Wyatt.  Sheikh Tahir also gives talks at the masjid, so you can pick up some knowledge and guidance before the trip to Mecca.  The people that go with this group are typically married couples in their 30's; it's a friendly and diverse group.  In Mecca, the accommodations for all packages before and during the hajj days are in Aziziyah.  Hilal leases a building there, split up into 4-man rooms.  It's decent and comfortable.  The food is mostly Pakistani and seemed fine to me.  During the days of Mina, we shifted back and forth between the tents and their building, with usually half the night spent in Mina.

Hilal is a family-run business, with the father and mother doing most of the administrative and logistics work.  You will interact with them in the communications before the trip.  The Omar package group leaders are their son and his wife.  They make everyone feel welcome.

In summary, Hilal was a great experience overall.  Others in our group who had been with many companies said their experience with this company was the best.  I'd definitely go with them again if I go in the future.

Tuesday, September 26, 2017

Sharing files to Hyper-V guest when network is restricted

In some Hyper-V setups, most likely on desktop machines, you may end up in a situation where networking between the host machine and your guest isn't working.  VPN clients, IP address space overlap, or other software may interfere.  There is an easy, though more time-consuming, way to transfer files back and forth.  This method isn't very useful for heavily used machines, but if you have a standard set of files that you want to make available to the guest, it will work.

On the host machine, go to Disk Management and right-click the Disk Management tool icon.  There are two options here for creating a VHD and attaching a VHD.  Create a disk of the appropriate size and type.  In the list of disks in the display, right-click the device's details (left side, not the graphical space/partition area).  Initialize the disk, then create a simple volume and format it.  Now the drive will appear as a disk on your system.  Copy what you need onto the disk, detach it, and then make it available to your VM by adding it as a disk.

If you add the hard drive as a SCSI device, you can add it while the VM is running.  Once it's added, open Disk Management in the VM and look for the new drive, which will be in an offline state.  Bring it online and it will be accessible.  Do what you need to do with it, and then you can take the disk offline again.  Before you can reattach it to the host, you will need to remove the hard drive configuration from the VM's settings.
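The create/attach/detach portion of this can also be scripted with diskpart instead of the Disk Management GUI.  A sketch; the file path, size, and script name are hypothetical:

```shell
# Windows-only, from an elevated prompt; diskpart reads commands from a script file.
# Contents of create-share-vhd.txt:
#   create vdisk file="C:\temp\guestshare.vhdx" maximum=10240 type=expandable
#   attach vdisk
# After copying files in, detach before handing the disk to the VM:
#   select vdisk file="C:\temp\guestshare.vhdx"
#   detach vdisk
diskpart /s create-share-vhd.txt
```

The initialize/format step still has to happen once after the first attach, just as in the GUI walkthrough above.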

Thursday, September 21, 2017

Reverse CNAME lookup with dns cmdlets

In case you ever get the request to find any alias that points to a server (or list of servers), you can use the DNS cmdlets to build a list of results on a zone-by-zone basis to dig through further.  This command will give you a rough list with 3 attributes:

Hostname = name of the dns record
ShortAlias = non-fqdn of the DNS record data (where the CNAME points to)
Alias = full DNS record data

I put the short name in there just in case the information provided to you is a short server name.

$zone = "contoso.com"
$recs = get-DnsServerResourceRecord -zonename $zone -rrtype cname |
    select @{name="shortalias"; expr={
        $_.recorddata.hostnamealias -replace "\..*",""}}, @{name="alias";
        expr={$_.recorddata.hostnamealias}},hostname

This will give you the full list of CNAME data for the zone in an array of objects.  If what you are searching for is an array, just run it through a loop in one of two ways (example: matching short names against an array of names to search for):

foreach ($name in $list) {  $recs | where {$_.shortalias -match $name} }

or

foreach ($entry in $recs) { if ($list -contains $entry.shortalias) { $entry } }

It's not super clean, but it will display the records.  You can modify the loops to collect the data in an array.  You could even run an extra outer loop to hit multiple zones.  The $list can just be a copy-and-paste into PowerShell from Excel or wherever the list comes from:

$list = "
".split("`n")

Make sure when you paste, you don't end up with the " on a new line at the end like it shows above.  If you do, the first loop example will dump out the whole $recs array on the last entry in $list.

If you don't have access to the Dns cmdlets, but you have rights to pull the zone with dnscmd, you can do something like this:

dnscmd /zoneprint $zone | where {$_ -match "CNAME"} |
  % {$resline = $_ -split "\s+"; ($resline[0], $resline[3]) }

You'll have to do something with the two values at the end, which are record name and record data.

Monday, August 28, 2017

The VI editor for Windows admins and its benefits for preformatting data

As someone with a Linux background, I ended up using vi/vim as my go-to editor for any Linux text editing.  While it's a powerful, and some may say complicated, piece of software, if you learn a few of the basic commands and tricks it will be a very beneficial tool.  For Windows, there are versions available, such as gvim.  I often find myself going back to it as a nice text-editing tool when I receive text that needs some transformation work prior to using it in a script or some other data tool.

Let me give a few examples.

Example #1: you need a CSV list of entries, but you have received a list with one item per line.


  1. Copy the text from the original source
  2. Open gvim
  3. Hit the Insert key to enter insert mode
  4. Paste from the clipboard
  5. Hit Esc to enter command mode
  6. Type (including the colon):     :1,$s/\n/\,/
  7. Go to the very end of that line and remove the extra comma: ensure it's highlighted, then type dl
  8. Go to the Edit menu, select all, and copy
  9. Paste it into whatever app needed the CSV format

Let me explain a bit from the example above.  Most people think of text editors as always being in a mode where you are editing the text, and any special command is a menu item or a keyboard shortcut combining a special key (Ctrl, Alt, etc.) with a letter or number.  VI uses different modes of operation; the two highlighted here are command mode (what you start in when vi opens) and insert mode (one of the text-editing modes).  While in command mode, there are keyboard shortcuts for moving and editing text based on the cursor position, as in step 7.  The d, followed by a movement key (lowercase L in this example), says delete in the direction specified; since l moves one character to the right, dl deletes the character under the cursor.  These same movement keys can be used on their own to move the cursor around, and numbers can be put in front of them to move farther.  Another useful operation combined with delete is the w key, which specifies a word.  Combined with delete (typing dw), it deletes from the cursor position to the end of the current word.  You can again combine that with numbers, such as d3w, to delete three words to the right from the current position.

Now let me explain step 6 and the strange code in there.  In command mode, not all commands are executed by directly typing letters and numbers; sometimes the command needs to be entered from a special prompt, which is brought up by typing the colon.  So at the beginning of that text, the colon brings up this prompt, and the next part of the command is a range: 1,$ means from line 1 to the end of the document.  The s/// command is substitution with regex support.  s/ starts the substitute command, the text between the first pair of forward slashes is a regex of what you want to substitute, and the text between the next pair is what you want to replace it with.  Since it's regex, you may need lots of backslashes to escape text.  In this example, /\n/\,/ means match a newline (\n) and replace it with a comma.  Hit Enter after that and it will execute.  If you are doing multiple matches on a single line, putting a g at the end, as in s///g, switches to global match mode.  This command combined with regexes is very powerful and can be used to rearrange text by subset matching.  For cleaning up text, substitute is one of the main go-to functions, so doing some research on it and on regexes will make vi incredibly powerful for you.
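Outside vi, the same newline-to-comma transformation can be sketched with standard shell tools (sample file names are made up):

```shell
# Sample input: one item per line, as in the example above
printf 'alpha\nbeta\ngamma\n' > items.txt

# Replace newlines with commas, then drop the trailing comma
# (the comma that step 7 removes by hand in vi)
tr '\n' ',' < items.txt | sed 's/,$//' > items.csv

cat items.csv
```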

One more thing I want to highlight in case you are using vim without a GUI menu: saving a file is :w in command mode, and exiting vim is :q.  As with other commands, these can be combined, as in :wq to save and quit.

Insert mode is much more like your standard text-editing experience, where you move around and edit text.  When working in Linux/Unix shells, sometimes the arrow keys or backspace key may not work exactly as expected and instead throw some weird codes into your text.  If you end up in that situation, learn a bit about the movement, delete, and replace commands in command mode.

In the future I may add some additional examples as I come across them.

Monday, July 31, 2017

Testing connectivity to your domain controller

In the distant past there was a useful client-side tool for checking connectivity between clients and domain controllers (netdiag.exe).  According to Microsoft's command-line reference guide, it is available in Windows 8 and Server 2012, but in reality the command does not exist on any Windows machine I have checked beyond 2003, and running an older version won't work either due to incompatibility.  So alternatives are required.  One thing you would typically want to check between a client and a domain controller is port connectivity.  Below is a simple script that tests most of the ports.  Some may not be open in your environment (like 636/3269 for LDAPS).  Some ports are dynamic, so I haven't included checks for those.

To begin with, you should know what domain controller your workstation has logged into.  This machine logon establishes the "secure channel" between your machine and the domain.  You can use an old tool that is still around called nltest. 

C:\Windows>nltest /sc_query:contoso.com
Flags: 30 HAS_IP  HAS_TIMESERV
Trusted DC Name \\DC1.contoso.com
Trusted DC Connection Status Status = 0 0x0 NERR_Success
The command completed successfully

This output shows the status of your secure channel and the name of the domain controller you are querying.  You will need to provide the name of the domain you are connected to; either the FQDN or the NETBIOS domain name should work fine.

This script provides two functions: one port checker and one function to test your connection.  Run Test-DomainControllerPorts with your domain name (or leave it blank for auto-detect).  The script returns the name of the DC that you are connected to, along with two arrays of ports: those that are open and those that aren't responding.

function tcpt ([string]$serv, [string]$p) {
 $result = $false
 try {
  $conn = new-object system.net.sockets.tcpclient($serv,$p)
  if ($conn.connected) { $result = $true } else { $result = $false }
  $conn.close()
 } catch {
  $result =  $false
 }
 $conn = $null
 return $result
}
function test-DomainControllerPorts {
 param (
  $domainname = (gwmi win32_computersystem).domain
 )
 $secureChannelDC = (nltest /sc_query:$domainname |
  where {$_ -match "Trusted DC Name"}).split("\\") |
  where {$_ -match $domainname}
 $secureChanneldc = $securechanneldc.trim()
 $functionalports = @()
 $nonFunctionalPorts = @()
 $portsToCheck = ("53", "88", "135", "137", "139", "389", "445", "464", "3268", "636", "3269")
 foreach ($port in $portsToCheck) {
  $portstat = tcpt $secureChannelDC $port
  if ($portstat) {
   $functionalports += $port
  } else {
   $nonfunctionalPorts += $port
  }
 }
 $result = new-object PSObject
 add-member -inp $result NoteProperty DomainController $secureChannelDC
 add-member -inp $result NoteProperty OpenPorts $functionalports
 add-member -inp $result NoteProperty UnOpenPorts $nonfunctionalports
 out-default -inp $result
}


Update for later OSes (newer than Windows Server 2008): some of the ports above are legacy and wouldn't be open on many domain controllers (such as 137 and 139).

Sunday, July 2, 2017

AD: Simple way to remove all members of a group

No loops required: use the -Clear parameter of Set-ADGroup.

Set-ADGroup -Identity "name of group" -Clear member

The time required to execute will vary depending on the number of members in the group.

Saturday, May 6, 2017

Some of my coding on Github

1) From a Columbia AI course assignment on search algorithms: an n-puzzle solver in Python 3.  My first attempt at Python programming.
2) From a Columbia AI course assignment on CSPs: a sudoku solver using AC-3 and backtracking, written in Python 3.
3) Arabic typing website code.  (Live site at: https://alexa.islamicpartnership.com/arabic-typing/index.html)

Wednesday, April 19, 2017

Download all enterprise CA CRLs from Active Directory

This script will look for all published CRLs in the configuration partition, download them, and write them to binary files.  To examine the files further, you can open them in Windows (standard certificate-viewing tools) or use the PSPKI module to dig into the data.


$debase = new-object directoryservices.directoryentry("LDAP://RootDSE")
$configpartition = $debase.configurationNamingContext[0]
$de = new-object directoryservices.directoryentry( `
  "LDAP://CN=CDP,CN=Public Key Services,CN=Services," + $configpartition)
$ds = new-object directoryservices.directorysearcher($de)
$ds.filter = "(objectclass=cRLDistributionPoint)"
$ds.propertiestoload.add("certificaterevocationlist")|out-null
$crls = $ds.findall()
foreach ($crl in $crls) {
 # The CA name is the first RDN of the CDP object's distinguished name
 $CAcert = $crl.path.replace("LDAP://CN=","")
 $CAcert = $CAcert.substring(0,$CAcert.indexof(","))
 $file = $CAcert + ".crl"
 # -encoding Byte is Windows PowerShell syntax; PowerShell 7+ uses -AsByteStream
 set-content $file ([byte[]]($crl.properties.certificaterevocationlist[0])) -encoding Byte
}
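To spot-check one of the downloaded files without any extra modules, the built-in certutil tool can dump a CRL (the file name below is a hypothetical output of the script above):

```
# Dump a downloaded CRL: shows the issuer, thisUpdate/nextUpdate times,
# and the list of revoked serial numbers
certutil -dump .\MyCA.crl
```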

Download all files from IIS web directory listing (non-recursive)

This simple code should be able to dig out all file names from inside the A HREF tags, where the file name consists of letters, numbers, a few special characters, spaces, forward slashes, and periods, and ends with an extension of 2-4 characters.  Each entry will be downloaded; however, note that the A HREF data will contain a relative path to the item, including the directory structure.  The WebClient DownloadFile method's second parameter wants a destination path, including the file name.  If the full path doesn't exist, the file may just end up in the current directory.

$wc = new-object net.webclient

$sitename = "http://somesite/somedirectory"

$weblisting = $wc.downloadstring($sitename)

# The hyphen must sit at the end of the character class; in the middle,
# "_-(" is parsed as a reverse-order range and the regex fails to compile
$items = select-string '"[a-zA-Z0-9/._() -]*\.[a-zA-Z0-9]{2,4}"' `
   -input $weblisting -allmatches|
   foreach {$_.matches.value.replace('"','')}

foreach ($item in $items) {

 # Create the relative directory structure first, so DownloadFile has a
 # valid destination instead of falling back to the current directory
 $dir = split-path (".\" + $item)
 if ($dir) { new-item -itemtype directory -force $dir | out-null }

 # Depending on whether the href values are relative or root-absolute,
 # you may need to adjust how $sitename and $item are joined
 $wc.downloadfile($sitename + $item, ".\" + $item)

}
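As an alternative to hand-rolled regex, PowerShell 3 and later can parse the anchor tags for you via Invoke-WebRequest. A rough sketch using the same hypothetical site:

```
# Invoke-WebRequest exposes the parsed anchor tags through the .Links property
$sitename = "http://somesite/somedirectory"
$links = (Invoke-WebRequest -Uri $sitename).Links.href

# Keep only entries that look like files (end in a 2-4 character extension)
$files = $links | where { $_ -match '\.[a-zA-Z0-9]{2,4}$' }
```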

Wednesday, March 29, 2017

Password change failed: Configuration information could not be read from the domain controller, either because the machine is unavailable, or access has been denied

This error was recently brought to my attention when a user was trying to change their password after the expiration notice at logon. This was the first time I had seen it, so I thought it was a bit odd. Based on the text of the error message alone, you would expect that the domain controller can't be contacted at all, that there is some secure channel or trust issue, or some weird problem on the domain controller itself. Typically when domain connectivity problems occur, you will get messages like "domain controller unavailable" or trust-relationship-type errors. Searching around Google comes up with some answers that don't seem relevant, such as unjoining/rejoining the machine. One thing to look at is multi-domain environments. Is the machine they are accessing on a different domain than the domain where the account exists? Is the trust one-way or two-way? Is connectivity restricted between the two domains (DMZs)? In my particular case, the password change was being attempted on a trusting domain (one-way trust) with limited access. Check this article to help determine possible connectivity requirements.
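To rule out a port-level block between the workstation and a DC in the account's domain, you can probe the usual suspects from the affected machine. A sketch (the DC name below is hypothetical; the exact port list depends on the connectivity article's requirements):

```
# Kerberos (88), RPC endpoint mapper (135), LDAP (389), SMB (445);
# dc01.accounts.example is a hypothetical DC in the account's (trusted) domain
88, 135, 389, 445 | foreach {
  Test-NetConnection -ComputerName dc01.accounts.example -Port $_ |
    select ComputerName, RemotePort, TcpTestSucceeded
}
```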

Monday, March 20, 2017

OpenSuse upgrade to leap 42.2 - missing nvidia module

Last week, my old desktop was running leap 42.1 and I decided to run a normal zypper update to get the latest packages. Even though all my repos were pointing to 42.1, it gave me "opensuse" as a package to update, along with the 3.5+GB of files that needed to be downloaded for a distro update. This machine has been through every version of opensuse from 10.3 to leap 42.1 with continual distro updates. Each time there is some small problem, usually grub pointing to invalid boot locations, audio not working, or nvidia module issues. In this case, the updates ran fine; reboot led to a failed graphic environment and a command prompt logon.
After the update, all the repos were still pointing to 42.1, so I updated everything, including nvidia, to point to 42.2 locations. I ran another update and had to download a few hundred more files. Reboot again and it failed to load the GUI. Running startx gave the missing nvidia module error. I tried playing around with different versions of the drivers, changing from G03 to 02 and 04, but no luck there. I downloaded the driver compatible with my card (340) from nvidia's site. That said the driver was already installed, and my attempt to continue installing anyway came up with some compile failures and a module that failed to build. So I went back to yast and messed around with the nvidia packages some more. In the install process, I noticed in the nvidia installer file name (visible in a progress bar) that after the 340.102 driver version there was a k4.4.27_2, which looks like a kernel version. Checking what I was currently running showed an old 4.1.15-8 version. This was the latest grub was showing, despite at least 6 newer kernels being installed on the system. The latest kernel on the machine, 4.4.49-16, didn't work with nvidia either, so I worked on grub to get the 4.4.27-2 kernel loading as default, and everything was working after that.
So it looks like the current nvidia packages only support kernels up to that version, so the system needs to stay a bit behind the latest.

Tuesday, January 17, 2017

powershell: filtering for unique lines of csv

Scenario: You have a large csv file (several hundred MB or more) representing username-to-computer-name mappings.  The data contains a lot of duplicates, as it represents activity over a period of time.  The data is already sorted by time, so how do you get the most recent activity per computer while ignoring the rest?

Pipeline method with commandlets:

import-csv .\data.csv | select -Unique computername | ConvertTo-Csv -NoTypeInformation |
 out-file .\filtered-data.csv

This ran for hours, hit several hundred MB of RAM usage, and eventually had to be cancelled as it was taking too long.  Unfortunately, for the unique filtering on select, it had to do CSV conversions on every row just to get at the attribute I wanted to filter on.


Hackish method with hash table:

$ht = new-object hashtable

function selective-add {
 [CmdletBinding()]
   param ( [Parameter(Mandatory=$True,ValueFromPipeline=$True)]$line )
   begin {}
   process {
     $data = $line.split(',')
     # Keep only the first occurrence seen for each computer name
     if (-not $ht.Contains($data[1])) {
       $ht.Add($data[1],$data[0])
     }
   }
}

get-content .\data.csv | selective-add
$ht.keys | % { add-content -path filtered-data.csv -value $("{0},{1}" -f $_, $ht.Item($_)) }

This only took about 15 minutes; however, it used up twice as much RAM as the previous method in a very short period of time.
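A lower-memory variant of the same idea (a sketch, assuming the same two-column username,computername layout with no embedded commas) streams the file with .NET instead of buffering pipeline objects:

```
$seen = @{}
$writer = [System.IO.StreamWriter]::new("$pwd\filtered-data.csv")
# ReadLines streams the file one line at a time instead of loading it all
foreach ($line in [System.IO.File]::ReadLines("$pwd\data.csv")) {
  $computer = $line.Split(',')[1]
  if (-not $seen.ContainsKey($computer)) {
    $seen[$computer] = $true
    # Keep the first (most recent) row per computer, written as we go
    $writer.WriteLine($line)
  }
}
$writer.Close()
```

Writing rows as they are found also avoids the second pass over the hash table at the end.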