Friday, November 6, 2020

AWS script to launch EC2 instances in various regions for short-term command execution

 I'm in the process of doing some service provider evaluation that requires network tests to be run from various locations around the world.  Using VPNs might provide a way to do this, but I've been playing around with AWS lately, so I thought I would give EC2 scripting a try.  The code below was created for PowerShell with the AWS modules on a Linux machine.  The basic concept is to create a single small Amazon Linux 2 instance in a specified region, connect to it via SSH, and run whatever commands you want.  Using the Linux SSH client, you can provide a command to run on the remote machine, or multiple commands separated with semicolons.  The client connects, output appears on your local terminal, and the connection ends after the commands run.

This script isn't designed for much error handling, but it does look for a local keypair file if you already have one that you want to use.  In that scenario, you will need to provide the file name of the .pem file and the name of the keypair (its AWS EC2 name).  If you don't provide one, the script creates a new one for you.  It doesn't clean up the keypair or the security group that it creates, but it will terminate the new EC2 instance at the end of the function call.  Since the code is wrapped in a function, you can run through a list of regions, or a list of regions plus keypair info, to execute commands across several regions.


Note: You will need to have an AWS profile and credentials set up on your machine already.


function run-test {
    param(
        [string][parameter(mandatory=$true)]$region,
        [string][parameter(mandatory=$true)]$command,
        $keypairFileName, $kpname)

    if ([string]::isnullorempty($keypairfilename) -or 
    	(-not (test-path $keypairfilename))) {
        #no keypair file provided or found, create a temp one

        $kp = new-ec2keypair -keyname $($region + "-tempkp") -region $region
        $keypairFileName = $region + "-tempkp.pem" 
        $kpname = $region + "-tempkp"
        $kp.KeyMaterial | Out-File -Encoding ascii $keypairfilename
        chmod 600 $keypairfilename  #for linux
    }

    #get most recent Amazon Linux 2 image
    $id = Get-EC2Image -Owner amazon -Region $region |
        where {$_.name -match "amzn2*" -and $_.architecture -eq "x86_64" 
        	-and $_.platformDetails -match "Linux"
          	-and $_.rootdevicetype -eq "ebs" 
        	-and $_.virtualizationType -eq 'hvm'} |
        sort-object creationdate |
        select -last 1 |
        select -expand imageid

    #create a security group for the launch with SSH ingress allowed
    $secgroup = get-ec2securitygroup -region $region | 
    	where {$_.description -eq "SSH-only"} |select -exp groupid
    if ($null -eq $secgroup) {
        #New-EC2SecurityGroup returns the id of the new group
        $secgroup = New-EC2SecurityGroup -region $region -GroupName "SSH-only" -description "SSH-only"

        $cidrBlocks = New-Object 'collections.generic.list[string]'
        $cidrBlocks.add("0.0.0.0/0")
        $ipPermissions = New-Object Amazon.EC2.Model.IpPermission
        $ipPermissions.IpProtocol = "tcp"
        $ipPermissions.FromPort = 22
        $ipPermissions.ToPort = 22
        $ipPermissions.IpRanges = $cidrBlocks

        Grant-EC2SecurityGroupIngress -GroupId $secgroup -IpPermissions $ipPermissions -region $region
    } 

    $instance = New-EC2Instance -Region $region -ImageId $id -InstanceType t3.nano -KeyName $kpname -AssociatePublicIp $true -SecurityGroupId $secgroup

    $publicname = $null
    while ([string]::IsNullOrEmpty($publicname)) {
        sleep 5
        $publicname = (get-ec2instance $instance.Instances.instanceid -region $region).instances.publicdnsname
    }
    
    sleep 30   #give sshd on the new instance time to come up
    #accept-new avoids the interactive host key prompt (OpenSSH 7.6+)
    ssh -o StrictHostKeyChecking=accept-new -i $keypairfilename ec2-user@$publicname $command

    Remove-EC2Instance -Instanceid $instance.Instances.instanceid -force -region $region
}

#sample execution
$command = 'ping -c 10 server1.test.com ; ping -c 10 server2.test.com ; 
sudo yum install traceroute -y; sudo yum install bind-utils -y ; 
sudo traceroute -n server1.test.com; sudo traceroute -n server2.test.com ; 
nslookup -type=TXT test.com ; nslookup -type=SOA test.com; 
nslookup -type=txt test.com 8.8.8.8'

run-test -region ap-southeast-1 -command $command
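
A couple of other invocation patterns work as well; the region list and keypair names here are hypothetical:

#reuse an existing keypair instead of creating a temporary one
run-test -region eu-west-1 -command $command -keypairFileName ./eu-west-1-kp.pem -kpname eu-west-1-kp

#fan the same tests out across a list of regions
'us-east-1','eu-west-1','ap-southeast-1' | foreach-object { run-test -region $_ -command $command }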

Wednesday, October 21, 2020

AWS storage gateway quick and easy lab

 I've been doing some studying for the AWS SysOps exam areas.  Storage Gateway seemed like a very interesting and useful tool that many organizations could quickly place in their environment for file storage provisioning or storage capacity expansion.  I didn't go too in depth on this; however, I was able to deploy a storage gateway, a Windows AD domain, an NFS share off the gateway, a domain join of the gateway to AD, and then an SMB share, all within about 2 hours (first time touching this tech).


My environment for the lab:  

    Host OS - desktop Linux Mint 19.3

    Virtualization - VMware Workstation 16 Player

    Storage gateway - the VMware appliance VM downloaded from AWS during Storage Gateway setup.  NIC set to bridged

    Windows domain - VMware Windows Server 2019 eval.  NIC set to bridged


I started with the simple setup of the gateway with an NFS share.  After downloading the AWS SG VM, it boots up to a logon screen, where you can use the default credentials (admin / password).  In there, you have some limited capabilities through a menu-driven interface, mostly focused on network settings, firewall, and routing.  For me, it was pretty well configured already for this step.  In the console dialogs you will need to provide the IP of the storage gateway, and your host OS browser needs to be able to connect to it in order to configure the VM.  With bridged networking, this was no problem at all.  You will need to add an additional storage disk on the VM to act as the caching location; this will be detected during the configuration in the console.



Back in the console, create an S3 bucket for you to use with the gateway.  You can use the same bucket for multiple shares, but it looks like you can't use the same prefix folder for more than one share.  Once you create an NFS share (the options are pretty straightforward), you can mount it on your host OS and start putting files in there.  The files act like normal Linux/Unix files with owner, group, and permissions set.  These replicate up to the S3 bucket, with the ownership and permission details stored in the object's metadata:

testuser@nathan-X299-UD4:/home/nathan/aws-sg$ ls -l
total 2
-rw-r--r-- 1 nobody   nogroup  32 Oct 20 12:12 test1
-rw------- 1 nathan   nathan   17 Oct 20 12:12 test2.txt
-rw-rw-r-- 1 testuser testuser 16 Oct 20 12:17 test3.txt
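
As an aside, the S3 bucket itself can also be created with the AWS PowerShell modules rather than the console; a one-line sketch with a hypothetical bucket name:

New-S3Bucket -BucketName my-sg-lab-bucket -Region us-east-1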



The files created on the NFS share were very quickly available in S3.  I tried replacing one of them in S3 with new data, and it didn't seem to come down to the share level.  I rechecked the settings on the share and there's a cache refresh option which seems to help with this.  It's got a minimum value of 5 minutes though, so it's best to make changes on the share side of the storage device if you want quick read consistency.
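
If waiting isn't an option, the gateway's RefreshCache operation is also exposed in the AWS PowerShell modules, so you can trigger a refresh on demand.  A sketch, with a placeholder file share ARN:

#ask the gateway to re-scan the bucket for changes made on the S3 side
Invoke-SGCacheRefresh -FileShareARN "arn:aws:storagegateway:us-east-1:123456789012:share/share-XXXXXXXX"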

On to the SMB share.  I created the 2019 install and did a DC promo with some basic options.  The server was on a bridged network with DHCP.  The local guest NIC had its first DNS server set to 127.0.0.1 and a DNS forwarder set to an external DNS provider for internet resolution.  You can join the gateway to AD, but this requires the gateway to be able to connect to the AD domain and resolve its servers.  There aren't many mandatory parameters, just domain name, user, and password.  You connect to AD at the storage gateway config -> actions -> edit SMB settings.  For this to work, I had to edit the network settings on the gateway:  Network configuration -> edit DNS configuration.  I turned off DHCP and specified my 2019 AD server as the primary DNS server IP.  This allowed the gateway to connect to the domain, and it joined with no problem.  It only took about a minute or less.
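
The domain join can also be scripted through the JoinDomain API if you ever need to redo it.  A hedged sketch using the AWS PowerShell modules; the gateway ARN, domain name, and credentials are all placeholders:

#join the gateway to the AD domain
Join-SGDomain -GatewayARN "arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-XXXXXXXX" -DomainName "lab.local" -UserName "administrator" -Password "P@ssw0rd"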

Once you're joined to the domain, you can create an SMB share using AD authentication and user details.  Files in the share have owners, ACLs, and the usual stuff you would expect on any Windows machine.  These replicate up to the S3 bucket in the object's metadata similar to the way NFS works, but the data isn't easy to read: the ACL is encoded, and the owner doesn't map to a SID, so I wouldn't expect you can search the metadata to get much usable information out from the S3 side.


Overall, I would say it's an impressive offering and it was quite easy to work with.  Some of the errors I ran into didn't have much documentation, but I managed to get past them.  What I tried, and failed with, on the SMB share: 1) creating the share to access the same prefix and bucket as my NFS share; this doesn't work.  2) creating the share with the default share name, which was a duplicate of the NFS share; after renaming it, this wasn't an issue.  Other than that, I just had some domain join issues while my AD VM was using the NAT NIC config and the DNS setting on the storage gateway wasn't right.  All pretty easily resolved.


Documentation ref: https://docs.amazonaws.cn/en_us/storagegateway/latest/userguide/storagegateway-ug.pdf

Convert p7b file to CER/PEM/CRT with Microsoft GUI tools

 Double-click the p7b file to open it, and expand all the folders.  In the list you might see multiple certificates, as p7b files can be a collection of certificates, often including the full chain up to the root.  When converting to a CER, PEM, or CRT file, we are making a file with one certificate in it, so you need to select the specific cert you want to create a file for.  In this example, I'm using Fiddler's certificate.


Double-click the cert you want


Click the Install Certificate button and use the user store.  Change the option to place the certificate in a specific store, and select Personal from the popup window




Once the certificate is installed, click start->run-> certmgr.msc


Expand the Personal -> Certificates folder.  Right-click the certificate that you just imported and select All Tasks -> Export

Use the do not export the private key option



Select the DER or Base-64 option.  This will depend on what system you are using and what it supports.  Base-64 is probably the safest option.

Select an export folder and filename for the converted certificate file.  
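
If you need to do this conversion often, the same thing can be scripted in PowerShell without touching the certificate store.  A minimal sketch, assuming the p7b sits at C:\temp\certs.p7b (paths and the cert index are yours to adjust):

#load the p7b; it can hold a whole chain, so list what's inside
$col = New-Object Security.Cryptography.X509Certificates.X509Certificate2Collection
$col.Import("C:\temp\certs.p7b")
$col | Format-Table Subject, Thumbprint

#export the cert you want (index 0 here) as Base-64/PEM
$b64 = [Convert]::ToBase64String($col[0].Export('Cert'), 'InsertLineBreaks')
"-----BEGIN CERTIFICATE-----`r`n$b64`r`n-----END CERTIFICATE-----" | Out-File -Encoding ascii C:\temp\converted.cer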






Wednesday, March 4, 2020

End to end network latency

When it comes to testing connectivity and latency, I've noticed that many IT technicians don't seem to have any tools in their skill set that go beyond a ping. While that works in many situations, ICMP traffic (including ping) is often blocked. At that point, the connectivity testing skill set often falls back to a telnet command to the port to see if it's open, instead of already available tools like the PowerShell Test-NetConnection cmdlet. Unfortunately, telnet and that cmdlet only show that you can connect to a port; they don't tell you how long it takes to get to it. If you're on a Windows machine, you can use the Test-PortLatency function that I've written below, which gives a rough idea of the time in milliseconds to connect to a remote TCP port. If you're on a Linux machine, the nmap suite of tools has several programs that give latency information, like nping, or just nmap itself. There are other options as well; most systems have a scripting language like Python or Perl available, which can provide this information with a simple script.

While these tools provide latency from one source point to another, you may find that you need to run tests from multiple points. Connections can get complicated, with multiple layers and applications making up the full end to end experience through multiple servers and protocols. In these cases, you will need to work out what the various points in the connection are, and which points you need to test from. For example, you may be doing remote desktop through a jump server (bastion host), where your workstation doesn't have direct access to the final remote desktop server. Testing connectivity from the jump server to the final destination only gives you part of the total round trip. You will need to test from the workstation to the jump server, then add the latency from the jump server to the final destination: if the first leg is 40 ms and the second is 25 ms, your rough end to end figure is 65 ms. If you can run pings at the different layers, that helps give an idea of packet loss as well.

function Test-PortLatency {
    param (
        [parameter(mandatory=$true)][string]$Computer,
        [parameter(mandatory=$true)][int]$Port,
        [parameter(helpmessage="Timeout in milliseconds")]$Timeout = 10000
    )

    $starttime = Get-Date
    $Testconn = New-Object Net.Sockets.TcpClient
    $Testconn.BeginConnect($Computer, $Port, $null, $null) | Out-Null
    $maxTimeout = (Get-Date).AddMilliseconds($Timeout)

    #poll until the socket reports connected or the timeout passes
    while (-not $Testconn.Connected -and (Get-Date) -lt $maxTimeout) {
        Start-Sleep -Milliseconds 10
    }

    $endtime = Get-Date
    $result = New-Object psobject
    Add-Member -InputObject $result NoteProperty Connected $Testconn.Connected
    Add-Member -InputObject $result NoteProperty Milliseconds (($endtime - $starttime).TotalMilliseconds)

    if ($null -eq $Testconn.Client.RemoteEndPoint) {
        $resultstr = "Connection_Refused"
    } elseif ($result.Milliseconds -gt $Timeout) {
        $resultstr = "Connection_TimedOut"
    } elseif ($result.Connected) {
        $resultstr = "Successful_connection"
    } else {
        $resultstr = "status_unknown"
    }

    Add-Member -InputObject $result NoteProperty Result $resultstr
    $Testconn.Close()
    $result
}
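
A quick usage sketch (host names are hypothetical). For the jump server scenario above, run the first test from your workstation and the second from the jump server itself, then add the two Milliseconds values for a rough end to end figure:

#workstation -> jump server leg
Test-PortLatency -Computer jumpserver.test.com -Port 3389

#jump server -> final destination leg (run this on the jump server)
Test-PortLatency -Computer finalserver.test.com -Port 3389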

Monday, November 4, 2019

FIM CM / MIM CM Certificate Management service account certificate renewals

references:


Internally, FIM/MIM Certificate Management has 5 service accounts.  3 of these accounts have certificates stored within their personal certificate store on your application server.  Each certificate uses a unique template that was created during the installation of the application.  As with all certificates, they do eventually expire (based on the settings in your template).

The 3 accounts that have certificates are the Key Recovery Agent, the Enrollment Agent, and the CLM Agent accounts.  If you are unsure which accounts are which, go to the \Program Files\Microsoft Forefront Identity Manager\2010\Certificate Management\web folder on your CM server and open the web.config file.  Look for the section labelled "CLM users" and find the entries with Clm.RecoveryAgent.Username, Clm.EnrollAgent.Username, and Clm.Agent.Username.  Keep this file open, as we need to make changes to it later.

Once you have the accounts identified, ensure you have the correct password for each account.  You can test them using ldp.exe.  If you don't have a password, first go through the password reset process.

With each of the accounts, you will need to open MMC using runas, one for each of the three accounts.  Add the Certificates snap-in with the Current User option.  Expand it, expand Personal, and click on Certificates.  Unless you have gone through several certificates already, there should only be one in there.
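
For example, to open MMC as one of the service accounts (domain and account name hypothetical):

runas /user:CONTOSO\clmEnrollAgent mmc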

Identify the key that you want to replace, and do an export of each one.  Select the PKCS #12 format with the "include all certificates in the certification path if possible" and "export all extended properties" options.  Set a password and export to a file.  This gives you a backup of the key just in case you need it again.

If you read the second article linked above, you will see that the CLM Agent key needs to be renewed with the same key, otherwise it will break previously issued smartcards.  So do a renewal of the existing certificate by right-clicking the certificate -> All Tasks -> Advanced Operations -> Renew This Certificate with the Same Key, then click Next/Enroll/Finish.  You can do this for each of the 3 certificates.  Once you have the new certificate (you will see an updated expiration date), open each certificate, go to the Details tab, find the Thumbprint value, and make a copy of each new certificate's thumbprint.  

Note: when copying the thumbprint value, you can end up with an invisible Unicode character at the beginning of the string.  Paste the thumbprint into Notepad, go to the start of the string, and hit Del once; this should get rid of it.  Also remove all spaces between the hex values.  To validate that the special character has been removed, copy and paste the whole string into a command prompt and look for any box-shaped character.  If there are none, the string is properly cleaned up.
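
If you'd rather script the cleanup than eyeball it, a PowerShell one-liner that strips everything that isn't a hex digit takes care of both the spaces and the invisible character (paste your value in place of the placeholder):

$thumb = ('<pasted thumbprint>' -replace '[^0-9a-fA-F]', '').ToUpper()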

Once you have all certificates renewed, and your thumbprints gathered, go to the web.config file for the CM application.

Look for Clm.SigningCertificate.Hash.  Replace the current value with the new thumbprint of the ClmAgent certificate.

Look for Clm.ValidSigningCertificate.Hashes.  Add the new thumbprint of the ClmAgent certificate to this comma-separated list.

Look for Clm.SmartCard.ExchangeCertificate.Hash.  Replace this with the ClmAgent certificate hash.

Search for Clm.EnrollAgent.Certificate.Hash.  Replace this with the EnrollAgent certificate hash.

Go to your certificate authority server.  Open the Certification Authority utility, right-click the CA name, and open properties.  Look for the Policy Module tab and click Properties.  Go to the Signature Certificates tab.  Add a new hash and enter the ClmAgent thumbprint here.  Restart certificate services.

On your CM server, run IISRESET.

If you use recovery agents, follow the additional steps mentioned in the first link above.

Service Account password resets for FIM CM / MIM CM service accounts

Microsoft's Identity Manager Certificate Management product has several different service accounts associated with its internal functions, as well as an IIS application pool account.  As a best practice, it is always good to periodically change service account passwords.  For this product, the account passwords are not configured on Windows services or other easily identifiable locations, so automated password management tools won't be very helpful.

To start, you want to identify all of your service accounts and the roles they perform.  If you are unsure, log on to your CM server and open the \Program Files\Microsoft Forefront Identity Manager\2010\Certificate Management\web folder.  Open the web.config file and look for the section labelled "CLM USERS".  Under this you will find keys with the usernames for each component.

Open a command prompt and go to the CM folder path, then the BIN subfolder.  In this folder there is a tool called clmutil, which will be used to enter the new account passwords.  The account name values in this tool don't perfectly line up with what is in the web.config.  They are:

web.config         ->    clmutil
AuthzAgent               authAgent
Agent                    agent
CAManager                caMngr
RecoveryAgent            krAgent
EnrollAgent              enrollagent

Start by going to Active Directory and creating a new strong password for each account.  Make note of the passwords for each username, and ensure you match up your usernames to the roles above.

For each of the accounts, run clmutil, e.g.:

clmutil -setacctpwd authAgent  "mynewPassword"

Once you have entered the new account passwords matching each of the service account roles, open IIS administration.  Look in the application pools for clmAppPool and check the identity of the pool.  Do a password reset in Active Directory for the service account associated with it.  Then open the advanced settings for the application pool, click the ... button on the Identity value, click Set, enter the username and the new password, and click OK through the dialogs.
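
If you prefer the command line over the IIS GUI for that last step, appcmd can set the pool identity as well; a sketch where the domain, account, and password are hypothetical:

C:\Windows\System32\inetsrv\appcmd.exe set apppool "clmAppPool" /processModel.userName:CONTOSO\clmWebPool /processModel.password:"mynewPassword"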

Now that you have reset all of the passwords, run an iisreset on the server.  Ensure everything is working after that.  If you have additional FIM/MIM CM server nodes, you will need to enter the passwords on each one.

How to pull a full list of users to certificates and card mappings from FIM CM / MIM CM

If you want to collect a report that maps usernames to certificate serial numbers, linked to the card serial number, along with the date the card was issued and its status, you can use this query on your FIM CM database:

SELECT
    u.unc_user_nt4_name,
    c.cert_issued_serial_number,
    s.sc_serial_number,
    s.sc_manufacturer_id,
    q.req_submitted_dt,
    REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(s.sc_status,
        '1', 'Assigned'), '2', 'Active'), '3', 'Disabled'),
        '4', 'Suspended'), '5', 'Retired') AS Status

FROM dbo.Certificates AS c
LEFT JOIN dbo.ProfileCertificates AS p
    ON c.cert_id = p.pc_cert_id

RIGHT OUTER JOIN dbo.Profiles AS r
    ON p.pc_profile_uuid = r.profile_uuid

INNER JOIN dbo.UserNameCache AS u
    ON r.pr_assigned_user_uuid = u.unc_user_uuid

RIGHT OUTER JOIN dbo.Smartcards AS s
    ON r.pr_sc_uuid = s.sc_uuid

RIGHT OUTER JOIN (SELECT * FROM dbo.Requests WHERE req_type = 1) AS q
    ON q.req_sc_uuid = s.sc_uuid

ORDER BY unc_user_nt4_name, sc_serial_number, req_submitted_dt
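
If you'd rather not run this from SQL Server Management Studio, the SqlServer PowerShell module can dump it straight to CSV.  A sketch, assuming the query is saved as cm-report.sql and the CM database uses the default name (FIMCertificateManagement; swap in your own server and database):

Invoke-Sqlcmd -ServerInstance "sqlserver01" -Database "FIMCertificateManagement" -InputFile .\cm-report.sql |
    Export-Csv -NoTypeInformation .\cm-report.csv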