Tuesday, September 28, 2021

Task Scheduler repeated popup "Task Scheduler service is not available. Verify the service is running."

Recently I noticed a few different errors in Task Scheduler on some machines that had gone through in-place operating system upgrades from Windows 2008 to Windows 2016. After the upgrade, opening the Task Scheduler GUI on some machines produced repeated popup messages saying the service was not available, despite the service being started. Clicking through many of these, it would eventually show some tasks in the list, but not all of them. Meanwhile, schtasks.exe worked fine and showed all the tasks. On other machines, the GUI just gave snap-in errors and nothing would load at all.



The issue turned out to be caused by some legacy jobs. In older versions of Windows, simple scheduled tasks could be created with the at.exe command. Any of those jobs that existed on the machine prior to the upgrade are still in the Windows directory and still show up in schtasks.exe, but you can't edit or delete them with the schtasks command-line tool, and they won't appear at all in the GUI after you click through the errors. If you try to use the at.exe command, it says it's no longer supported and won't run. The fix is to delete the leftover job files from the c:\windows\tasks folder; after that, the errors in the GUI tool go away.
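If you want to check what's there before deleting, something like this from an elevated PowerShell prompt will list and then clear the leftover job files:

#legacy at.exe jobs are stored as .job files under the old tasks folder
Get-ChildItem C:\Windows\Tasks -Filter *.job

#after confirming they're the stale legacy entries, delete them
Get-ChildItem C:\Windows\Tasks -Filter *.job | Remove-Item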

Capturing unique simple bind or unsigned LDAP queries from a domain controller

Using get-winevent in PowerShell with an XML filter, you can grab the 2889 events from the Directory Service log. These contain the username and source IP of each simple bind or unsigned connection. With some calculated properties in select-object, plus an array to track what's already been seen, you can filter this down to unique connections.
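One note: domain controllers only write 2889 events when LDAP interface diagnostic logging is turned up. If the log comes back empty, setting the diagnostics level to 2 on the DC should start generating them:

#enable LDAP interface events so 2889 gets logged
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics' `
    -Name '16 LDAP Interface Events' -Value 2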

$query = @"

<QueryList>

  <Query Id="0" Path="Directory Service">

    <Select Path="Directory Service">*[System[(EventID=2889)]]</Select>

  </Query>

</QueryList>

"@


$somelistofdomaincontrollers | %{

$serv = $_

$hashes = @();

get-winevent -filterxml $query | select @{n="dc";e={$_.machinename}},
@{n="source";e={($_.properties.value[0].split(":"))[0]}},
@{n="user";e={$_.properties.value[1]}},
@{n='connhash';e={$str = ($_.machinename + 
    $_.properties.value[0].split(":"))[0] +
    $_.properties.value[1]; $str.gethashcode()}} | %{

     if ($hashes.contains($_.connhash)) {} else {$hashes += $_.connhash; $_|
        select dc,source,user}

}

Thursday, August 5, 2021

Quick way to find all OUs in a domain that block GPO inheritance

Use a bitwise AND (the 1.2.840.113556.1.4.803 matching rule) on the gpOptions attribute of organizational unit objects. This runs in seconds, compared to attempting to use higher-level functions like get-adorganizationalunit in combination with get-gpinheritance.

get-adobject -ldapfilter "(&(objectclass=organizationalunit)(gpoptions:1.2.840.113556.1.4.803:=1))"
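If you want friendlier output than the raw objects, piping to select works as usual:

get-adobject -ldapfilter "(&(objectclass=organizationalunit)(gpoptions:1.2.840.113556.1.4.803:=1))" |
    select name, distinguishedname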

Friday, November 6, 2020

AWS script to launch EC2 instances in various regions for short-term command execution

 I'm in the process of doing some service provider evaluation that requires network tests to be run from various locations around the world.  VPNs might provide a way to do this, but I've been playing around with AWS lately, so I thought I would give EC2 scripting a try.  The code below was created for PowerShell with the AWS modules on a Linux machine.  The basic concept is to create a single small Amazon Linux 2 instance in a specified region, connect to it via SSH, and run whatever commands you want.  Using the Linux ssh client, you can provide a command to run on the remote machine, or multiple commands separated by semicolons.  The client connects, output appears on your local terminal, and the connection ends after the commands run.

This script isn't designed for much error handling, but it does look for a local keypair file if you already have one that you want to use.  In that case, you will need to provide the file name of the .pem file and the AWS EC2 name of the keypair.  If you don't provide one, the script creates a new one for you.  It doesn't clean up the keypair or the security group that it creates, but it will terminate the new EC2 instance at the end of the function call.  Since the code is wrapped in a function, you can run through a list of regions, or a list of regions plus keypair info, to execute commands across several regions.


Note: long lines of code are wrapped with PowerShell line continuations (a trailing backtick, pipe, or operator) so they should still execute as written. You will need to have an AWS profile and credentials set up on your machine already.


function run-test {
    param(
        [string][parameter(mandatory=$true)]$region,
        [string][parameter(mandatory=$true)]$command,
        $keypairFileName, $kpname)

    if ([string]::isnullorempty($keypairfilename) -or 
    	(-not (test-path $keypairfilename))) {
        #no keypair file provided or found, create a temp one

        $kp = new-ec2keypair -keyname $($region + "-tempkp") -region $region
        $keypairFileName = $region + "-tempkp.pem" 
        $kpname = $region + "-tempkp"
        $kp.KeyMaterial | Out-File -Encoding ascii $keypairfilename
        chmod 600 $keypairfilename  #for linux
    }

    #get most recent Amazon Linux 2 image
    $id = Get-EC2Image -Owner amazon -Region $region |
        where {$_.name -like "amzn2*" -and
            $_.architecture -eq "x86_64" -and
            $_.platformDetails -match "Linux" -and
            $_.rootdevicetype -eq "ebs" -and
            $_.virtualizationType -eq 'hvm'} |
        sort-object creationdate |
        select -last 1 |
        select -expand imageid

    #create a security group for the launch with SSH ingress allowed
    $secgroup = get-ec2securitygroup -region $region | 
    	where {$_.description -eq "SSH-only"} |select -exp groupid
    if ($secgroup -eq $null) {
        New-EC2SecurityGroup -region $region -GroupName "SSH-only" -description "SSH-only"

        $secgroup = get-ec2securitygroup -region $region | 
        	where {$_.description -eq "SSH-only"} |select -exp groupid

        $cidrBlocks = New-Object 'collections.generic.list[string]'
        $cidrBlocks.add("0.0.0.0/0")
        $ipPermissions = New-Object Amazon.EC2.Model.IpPermission
        $ipPermissions.IpProtocol = "tcp"
        $ipPermissions.FromPort = 22
        $ipPermissions.ToPort = 22
        $ipPermissions.IpRanges = $cidrBlocks

        Grant-EC2SecurityGroupIngress -Groupid $secgroup `
            -IpPermissions $ipPermissions -region $region
    } 

    $instance = New-EC2Instance -Region $region -ImageId $id -Instancetype t3.nano `
        -KeyName $kpname -AssociatePublicIp $true -securitygroupid $secgroup

    $publicname = $null
    while ($publicname -eq $null) {
        sleep 5
        $publicname = (get-ec2instance $instance.Instances.instanceid `
            -region $region).instances.publicdnsname
    }
    
    sleep 30
    ssh -i $keypairfilename ec2-user@$publicname $command

    Remove-EC2Instance -Instanceid $instance.Instances.instanceid -force -region $region
}

#sample execution
$command = 'ping -c 10 server1.test.com ; ping -c 10 server2.test.com ; 
sudo yum install traceroute -y; sudo yum install bind-utils -y ; 
sudo traceroute -n server1.test.com; sudo traceroute -n server2.test.com ; 
nslookup -type=TXT test.com ; nslookup -type=SOA test.com; 
nslookup -type=txt test.com 8.8.8.8'

run-test -region ap-southeast-1 -command $command
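Since the whole thing is wrapped in a function, running the same tests from several regions is just a loop (the region list below is arbitrary):

#run the same command set from multiple regions
"us-east-1","eu-west-2","ap-southeast-1" | %{ run-test -region $_ -command $command }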

Wednesday, October 21, 2020

AWS storage gateway quick and easy lab

 I've been studying some of the AWS SysOps exam areas.  Storage Gateway seemed like a very interesting and useful tool that many organizations could quickly place in their environment for file storage provisioning or storage capacity expansion.  I didn't go too in depth on this; however, I was able to deploy a storage gateway, a Windows AD domain, an NFS share off the gateway, a domain join of the gateway to AD, and then an SMB share, all within about 2 hours (first time touching this tech).


My environment for the lab:  

    Host OS - desktop Linux Mint 19.3

    Virtualization - VMware Workstation 16 Player

    Storage gateway - VMware appliance VM downloaded from AWS during storage gateway setup.  NIC set to bridged

    Windows domain - VMware Win 2019 eval server.  NIC set to bridged


I started with the simple setup of the gateway with an NFS share.  After downloading the AWS SG VM, it boots to a logon screen, where you can use the default account: admin, password.  In there, you have some limited capabilities through a menu-driven interface, mostly focused on network settings, firewall, and routing.  For me, it was pretty well configured already for this step.  In the console dialogs you will need to provide the IP of the storage gateway, and your host OS browser needs to be able to connect to it in order to configure the VM.  With bridged networking, this was no problem at all.  You will need to add an additional virtual disk to the VM to act as the caching location; this will be detected during the configuration in the console.



Back in the console, create an S3 bucket to use with the gateway.  You can use the same bucket for multiple shares, but it looks like you can't use the same prefix folder for more than one share.  Once you create an NFS share (the options are pretty straightforward), you can mount it on your host OS and start putting files in there.  The files act like normal Linux/Unix files with owner/group and permissions set.  These replicate up to the S3 bucket, with the ownership and permissions stored in the object's metadata.
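Mounting on the Linux host uses the standard NFS mount syntax shown on the share's info page; the gateway IP, bucket name, and mount point below are just examples:

#mount options suggested by the storage gateway console
sudo mount -t nfs -o nolock,hard 192.168.1.50:/my-sg-bucket /home/nathan/aws-sg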

testuser@nathan-X299-UD4:/home/nathan/aws-sg$ ls -l
total 2
-rw-r--r-- 1 nobody   nogroup  32 Oct 20 12:12 test1
-rw------- 1 nathan   nathan   17 Oct 20 12:12 test2.txt
-rw-rw-r-- 1 testuser testuser 16 Oct 20 12:17 test3.txt



The files created on the NFS share were very quickly available in S3.  I tried replacing one of them in S3 with new data, and it didn't seem to come down to the share level.  I rechecked the settings on the share, and there's a cache refresh option which seems to help with this; it's got a minimum value of 5 minutes though.  So it's best to make changes on the share side of the storage device if you want quick read consistency.

On to the SMB share.  I created the 2019 install and did a DC promo with some basic options.  The server was on a bridged network with DHCP.  The local guest NIC had the first DNS server set to 127.0.0.1 and a DNS forwarder set to an external DNS provider for internet resolution.  You can join the gateway to AD, but this requires the gateway to be able to connect to the AD domain and resolve the servers.  There aren't too many mandatory parameters, just domain name, user, and password.  You connect to AD at the storage gateway config -> actions -> edit SMB settings.  For this to work, I had to edit the network settings on the gateway: Network configuration -> edit DNS configuration.  I turned off DHCP and specified my 2019 AD server as the primary DNS server IP.  This allowed the gateway to connect to the domain, and it joined with no problem; it only took about a minute or less.

Once you're joined to the domain, you can create an SMB share using AD authentication and user details.  Files in the share have owners, ACLs, and the usual stuff you would expect on any Windows machine.  These replicate up to the S3 bucket in the object's metadata, similar to the way NFS works, but the data isn't easy to read: the ACL is encoded, and the owner doesn't map to a SID, so I wouldn't expect you could search the metadata to get much usable information out from the S3 side.


Overall, I would say it's an impressive offering, and it was quite easy to work with.  Some of the errors I ran into didn't have much documentation, but I managed to get past them.  What I tried and failed with on the SMB share: 1) creating the share to access the same prefix and bucket as my NFS share, which doesn't work; 2) creating the share with the default share name, which was a duplicate of the NFS share; after renaming it, that wasn't an issue.  Other than that, I just had some domain join issues while my AD VM was using the NAT NIC config and the DNS setting on the storage gateway wasn't right.  All pretty easily resolved.


Documentation ref: https://docs.amazonaws.cn/en_us/storagegateway/latest/userguide/storagegateway-ug.pdf

Convert p7b file to CER/PEM/CRT with Microsoft GUI tools

 Double-click the p7b file to open it and expand all the folders.  In the list of certificates, you might see multiple certificates, since p7b files can be a collection of certificates, often including the full chain up to the root.  When converting to a CER, PEM, or CRT file, we are making a file with one certificate in it, so you need to select the specific cert you want to create a file for.  In this example, I'm using Fiddler's certificate.


Double-click the cert you want


Click the install certificate button and use the user store.  Change the option to place the certificate in a specific folder, and select Personal from the popup window




Once the certificate is installed, click Start -> Run -> certmgr.msc


Expand the Personal -> Certificates folder.  Right-click the certificate that you just imported and select All Tasks -> Export

Use the do not export the private key option



Select the DER or Base-64 option.  This will depend on what system you are using and what it supports.  Base-64 is probably the safest option.

Select an export folder and filename for the converted certificate file.  
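If you'd rather skip the GUI steps entirely, the same extraction can be scripted in PowerShell; the file paths here are hypothetical:

#load every certificate bundled in the p7b
$certs = New-Object Security.Cryptography.X509Certificates.X509Certificate2Collection
$certs.Import("C:\temp\bundle.p7b")

#write each one out as a DER-encoded .cer file
$i = 0
foreach ($cert in $certs) {
    $bytes = $cert.Export([Security.Cryptography.X509Certificates.X509ContentType]::Cert)
    [IO.File]::WriteAllBytes("C:\temp\cert$i.cer", $bytes)
    $i++
}

For Base-64 (PEM) output, you would instead wrap the same bytes with [Convert]::ToBase64String and the BEGIN/END CERTIFICATE header lines.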


Wednesday, March 4, 2020

End to end network latency

When it comes to testing connectivity and latency, I've noticed that many IT technicians don't have any tools in their skill set beyond a ping. While that works in many situations, there are plenty of cases where ICMP traffic (including ping) is blocked. At that point, connectivity testing often falls back to a telnet command against the port to see if it's open, instead of already available tools like the PowerShell test-netconnection cmdlet. Unfortunately, that cmdlet and telnet only show that you can connect to a port; they don't tell you how long it takes to get there. If you're on a Windows machine, you can use the Test-PortLatency function that I've written below, which gives a rough idea of the time in milliseconds to connect to a remote TCP port. If you're on a Linux machine, the nmap suite of tools has several programs that give latency information, like nping, or just nmap itself. There are other options as well; most systems have a programming language like Python or Perl available, which can be used to script up the same information.
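For instance, on a Linux box with the nmap suite installed, nping can time TCP connections to a port without needing ICMP (the host below is hypothetical):

#three TCP connect probes against port 443; reports per-probe and average rtt
nping --tcp-connect -p 443 -c 3 server1.test.com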

While these tools provide latency from one source point to another, you may find that you need to run tests from multiple points. Connections can get complicated, with multiple layers and applications providing the full end-to-end experience through multiple servers and protocols. In these cases, you will need to work out what the various points in the connection are and which of them you need to test from. For example, you may be doing remote desktop through a jump server (bastion host), where your workstation doesn't have direct access to the final remote desktop server. Testing connectivity from the jump server to the final destination only gives you part of the total round trip. You will need to test from the workstation to the jump server, then add the latency from the jump server to the final destination, to get a rough idea of your end-to-end latency. If you can run pings at the different layers, that will help give an idea of packet loss as well.

function Test-PortLatency {
    param(
        [parameter(mandatory=$true)][string]$Computer,
        [parameter(mandatory=$true)][int]$Port,
        [parameter(helpmessage="Timeout in milliseconds")]$Timeout = 10000
    )

    $starttime = get-date
    $Testconn = New-Object Net.Sockets.TcpClient
    $Testconn.BeginConnect($Computer, $Port, $null, $null) | Out-Null
    $maxTimeout = (Get-Date).AddMilliseconds($Timeout)

    #poll until the socket connects or the timeout expires
    while (-not $Testconn.Connected -and (Get-Date) -lt $maxTimeout) {
        sleep -Milliseconds 10
    }

    $endtime = get-date
    $result = new-object psobject
    add-member -input $result NoteProperty Connected $($testconn.connected)
    add-member -input $result NoteProperty Milliseconds $(($endtime - $starttime).totalmilliseconds)

    #no remote endpoint means the connection attempt was rejected outright
    if ($null -eq $testconn.client.remoteendpoint) {
        $resultstr = "Connection_Refused"
    } elseif ($result.milliseconds -gt $Timeout) {
        $resultstr = "Connection_TimedOut"
    } elseif ($result.connected) {
        $resultstr = "Successful_connection"
    } else {
        $resultstr = "status_unknown"
    }

    add-member -input $result NoteProperty Result $resultstr
    $Testconn.Close()
    $result
}
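
For example, a quick check of the TCP connect time to a (hypothetical) web server:

#rough connect latency to a remote https port
Test-PortLatency -Computer server1.test.com -Port 443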