
Break sequence on a Cisco 1921 ISR


A colleague made me aware of a potentially serious problem on Cisco 1921 and other ISR G2 routers. According to Field Note 63355, these devices shipped with a buggy version of ROMMON, the software that controls the boot process of Cisco routers. Here’s how Cisco describe the problem:

Routers with ROMMON version 15.0(1r)M1 fail to respond to the break sequence command received from a device connected to the console port. This failure prevents normal password recovery of the device.

If you have a 1941 you can simply pull the CF card to enter ROMMON. But what if you have a 1921 and need to perform password recovery? The Cisco 1921 doesn’t have a CF card and, according to Cisco, has no user-replaceable flash. You’re essentially locked out of your device forever.

Thankfully, there’s a workaround. If you pop open the cover of a Cisco 1921 using a Torx 10 screwdriver, you’ll see a small daughter-board secured with a single screw: this daughter-board is the flash on the 1921. Remove the single screw and carefully lift out the board.

Cisco 1921 - Flash

Turn on your router with a serial cable connected and you’ll enter ROMMON, where you can perform the usual reset procedure (confreg). Entering ROMMON should look like the following:

System Bootstrap, Version 15.0(1r)M1, RELEASE SOFTWARE (fc1)
Technical Support: http://www.cisco.com/techsupport
Copyright (c) 2011 by cisco Systems, Inc.

Total memory size = 512 MB
Field Upgradeable ROMMON Integrity test
_______________________________________
ROM: Digitally Signed Release Software
CISCO1921/K9 platform with 524288 Kbytes of main memory
Main memory is configured to 64 bit mode with ECC disabled


Upgrade ROMMON initialized
rommon 1 > confreg 0x2142
rommon 2 > reset

Once you’ve reset the device you can reseat and secure the flash, then put the case back on.
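
Once back in ROMMON-free IOS, the rest of the recovery follows the usual Cisco procedure. The following is a sketch of the remaining steps; the enable secret shown is a placeholder:

```
Router> enable
Router# copy startup-config running-config
Router# configure terminal
Router(config)# enable secret MyNewPassword
Router(config)# config-register 0x2102
Router(config)# end
Router# copy running-config startup-config
```

Setting the config register back to 0x2102 ensures the router loads its startup configuration on the next boot.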

The post Break sequence on a Cisco 1921 ISR appeared first on Blog of Dave Hope.


DCDiag error after upgrading to DFS-R


After switching a domain to use DFS-R rather than FRS for SYSVOL replication, you may see the following error when running dcdiag.exe:

      Starting test: VerifyReferences
         Some objects relating to the DC LONDON have problems:
            [1] Problem: Missing Expected Value
             Base Object:
            CN=LONDON,OU=UK,OU=Domain Controllers,DC=nwtraders,DC=msft
             Base Object Description: "DC Account Object"
             Value Object Attribute Name: frsComputerReferenceBL
             Value Object Description: "SYSVOL FRS Member Object"
             Recommended Action: See Knowledge Base Article: Q312862
 
         ......................... LONDON failed test VerifyReferences

DFS-R replication of the SYSVOL replication group otherwise looks healthy.

This error is caused by some poor logic in dcdiag.exe, triggered when domain controllers have been moved out of the default “Domain Controllers” OU. If you move the domain controllers back to the default “Domain Controllers” OU the error will disappear. However, leaving them where they are is unlikely to cause any problems beyond the dcdiag.exe error itself.
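
If you want to confirm the failure is limited to this one check, dcdiag lets you run the VerifyReferences test on its own; LONDON here is the DC name from the output above:

```
dcdiag /s:LONDON /test:VerifyReferences
```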

Microsoft plan to fix this in Windows Server 2012.


Locating PST files on a network


In order to size any mail archiving solution it is important to understand the amount of archive data currently in use. For many companies this is in the form of Outlook Data Files (PSTs). Unfortunately, the only resource Microsoft provide is a VBScript dating back to 2005 on the TechNet Script Center.

I decided to have a go at implementing two methods to locate PST files on the network using PowerShell. The two options for locating files I came up with are:

  • Enumerate the Outlook settings on client computers to determine PST files loaded on client computers;
  • Use WMI to call the search APIs on remote computers to locate the files;

In testing, using the search APIs via WMI located twice as many files as relying on the information in the Windows registry alone. Both scripts also read the first 11 bytes of each PST file to determine the file format: whether it’s an ANSI or Unicode PST file.

Using the registry

The advantage of using the data in the Windows registry is that it’s quick. We can rapidly enumerate user profiles and identify PST files loaded in Outlook. Once we have that information, the file details can be checked over SMB. This does, however, require the Remote Registry service to be enabled and TCP port 139 to be open on client computers.

This script uses PowerShell background jobs, checking 5 computers at a time. To check more computers at once, increase the $MaxThreads variable.

cls
#
# PST Scanning Utility (Registry)
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Retrieves a list of computers (recursively) from an OU in Active
# Directory then uses Remote Registry calls to locate PST files for each
# user profile.
#
# The advantage to querying the registry is speed, however accuracy is 
# potentially reduced since PST files may not be added to Outlook.
#
# Changelog
# ~~~~~~~~~
# 2012.03.28	Dave Hope		Initial version.
# 2012.03.30	Dave Hope		Added try/catch for Get-Item $tmpPath to
#								handle missing files.
#
# ======================================================================
# SETTINGS
# ======================================================================
$cfgOU = "LDAP://DC=nwtraders,DC=msft"
$cfgInterval = -30			# Difference to lastLogonTimestamp (days)
$cfgOutpath = "H:\Registry.CSV"
$MaxThreads = 5			# Maximum number of checks to run at once.
$SleepTimer = 500			# Wait between checks.

# ======================================================================
# STOP CHANGING HERE.
# ======================================================================


#
# Uses CIFS/SMB and the remote registry APIs to determine remote PST
# files and their sizes for a given computername.
# ======================================================================
$GetPSTInfo = {
	Param(
		[string]$ComputerName = $(throw "ComputerName required.")
		)
	$ReturnArray = @()

	# Test connection (ICMP) first rather than relying on the slow
	# Get-WMiObject call to fail.
	if( (Test-Connection -ComputerName $ComputerName -Count 1 -Quiet) -eq $false )
	{
		Write-Host "Failed communicating with $ComputerName - ICMP Unreachable"
		return;
	}

	# Connect to remote system.
	try
	{
		$RegHive = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey( "Users", $ComputerName )
	}
	catch {
		Write-Host "Failed communicating with $ComputerName - OpenRemoteBaseKey failed"
		return;
	}

	# Get the list of user profiles.
	$RegUsers = $RegHive.getSubKeyNames()

	# Iterate over user profiles on the computer
	foreach( $RegUser in $RegUsers )
	{
		# Get list of Outlook datafiles in use for this profile.
		$RegPath = "$RegUser\Software\Microsoft\Office\12.0\Outlook\Catalog"
		$Catalogs = $RegHive.OpenSubKey( $RegPath );
		if( $Catalogs -eq $null )
		{
			continue;
		}
	
		# Iterate over the data files, if the file name ends with
		# something other than .pst, or resides somewhere other than
		# C: skip it.
		$Archives = $Catalogs.GetValueNames()
		foreach( $Archive in $Archives )
		{
			if( $Archive.ToLower().EndsWith(".pst") -And $Archive.ToLower().StartsWith("c:") )
			{
				# Replace the local path with that of a remote path.
				$tmpPath = $Archive -Replace "C:\\", "\\$ComputerName\c$\"
				try
				{
					$FileInfo = Get-Item $tmpPath -ErrorAction Stop
				}
				Catch
				{
					# PST file doesn't exist, or we can't reach it,
					# continue on with the next one.
					continue;
				}
				
				# Add file info to array.
				$FileReturn = "" | Select Computer, Owner, Path, Size, Modified, Version
				$FileReturn.Computer = $ComputerName
				$FileReturn.Owner = (Get-Acl $tmpPath | select Owner).Owner
				$FileReturn.Path = $Archive
				$FileReturn.Size = $FileInfo.Length
				$FileReturn.Modified = $FileInfo.LastWriteTime


				#
				# PST Version.
				[system.io.stream]$fileStream = [System.IO.File]::Open( $tmpPath, 'Open', 'Read', 'ReadWrite' )

				try
				{
					[byte[]]$fileBytes = New-Object byte[] 11 # Length we need.
					[void]$fileStream.Read( $fileBytes, 0, 11);
					if ($fileBytes[10] -eq 23 )
					{
						$FileReturn.Version = "2003";
					}
					elseif ( ($fileBytes[10] -eq 14) -or ($fileBytes[10] -eq 15) )
					{
						$FileReturn.Version = "1997";
					}
					else
					{
						$FileReturn.Version = "Unknown";
					}
				}
				catch
				{
					$FileReturn.Version = "Error";
				}
				$fileStream.Close();

				$ReturnArray += $FileReturn
			}
		}
	}
	return $ReturnArray;
}


#
# Gets a list of object names from AD recursively
# ======================================================================
Function GetAdObjects
{
	Param(
		[string]$Path = $(throw "Path required."),
		[string]$desiredObjectClass = $(throw "DesiredObjectClass required.")
		)
	$ReturnArray = $null

	# Bind to AD using the provided path.
	$objADSI = [ADSI]$Path

	# Iterate over each object and add its name to the array.
	foreach( $obj in $objADSI.Children )
	{
		$thisItem = $obj | select objectClass,distinguishedName,name
		if (
			$thisItem.objectClass.Count -gt 0 -And
			$thisItem.objectClass.Contains( $desiredObjectClass)
			)
		{
			$ReturnArray += $thisItem.distinguishedName
		}
		elseif(
			$thisItem.objectClass.Count -gt 0 -And
			$thisItem.objectClass.Contains("organizationalUnit")
			)
		{
			# Init to null rather than @() so we dont add empty
			# values.
			$RecurseItems = $null
			$RecurseItems += GetAdObjects "LDAP://$($thisItem.distinguishedName.ToString())" $desiredObjectClass
			if( $RecurseItems.Count -gt 0 )
			{
				$ReturnArray += $RecurseItems
			}
		}
	}

	# Make sure we have items to return, otherwise we'll push
	# empty items to the array.
	if( $ReturnArray.Count -gt 0)
	{
		return $ReturnArray;
	}
}


#
# Converts a COM object to a LargeInteger
# ======================================================================
function Convert-IADSLargeInteger([object]$LargeInteger)
{
	$type = $LargeInteger.GetType()
	$highPart = $type.InvokeMember("HighPart","GetProperty",$null,$LargeInteger,$null)
	$lowPart = $type.InvokeMember("LowPart","GetProperty",$null,$LargeInteger,$null)
	$bytes = [System.BitConverter]::GetBytes($highPart)
	$tmp = New-Object System.Byte[] 8
	[Array]::Copy($bytes,0,$tmp,4,4)
	$highPart = [System.BitConverter]::ToInt64($tmp,0)
	$bytes = [System.BitConverter]::GetBytes($lowPart)
	$lowPart = [System.BitConverter]::ToUInt32($bytes,0)
	$lowPart + $highPart
}

#
# Evaluate the lastLogonTimestamp attribute for accounts and pull ones 
# from the last 30 days only.
# ======================================================================
Function GetObjectsLoggedIntoSince
{
	Param(
		[Array] $Computers = $(throw "Computers required"),
		[int] $LoginDays = $(throw "LoginDays required")
		)

	$earliestAllowedLogon = [DateTime]::Today.AddDays($LoginDays)

	foreach( $Computer in $Computers )
	{
		$objADSI = [ADSI]"LDAP://$Computer"
		if( $objADSI.Properties.Contains("lastLogonTimeStamp") -eq $false )
		{
			continue;
		}

		$lastLogon = [DateTime]::FromFileTime(
			[Int64]::Parse(
				$(Convert-IADSLargeInteger $objADSI.lastlogontimestamp.value)
				)
			)
		if( [DateTime]::Compare( $earliestAllowedLogon , $lastLogon) -eq -1 )
		{
			$objADSI.name
		}
		continue;
	}
}

#
# Get computer accounts from Active Directory.
$OutArray = @()
$Computers = GetAdObjects "$cfgOU" "computer"
$Computers = GetObjectsLoggedIntoSince $Computers $cfgInterval

#
# Remove any previous jobs.
$jobsTotal = $(Get-Job).Count
$i = 0
if( $jobsTotal -gt 0)
{
	foreach( $job in Get-Job)
	{
		Write-Progress -Activity "Locating PST files" -Status "Removing existing jobs" -CurrentOperation "$i of $jobsTotal" -PercentComplete ($i / $jobsTotal * 100)
		$job | Remove-Job -Force
		$i++
	}
}

#
# If we have no computers to check, just exit.
if( $Computers.Count -le 0 )
{
	return;
}

#
# Create all the jobs.
$i = 0
ForEach ($Computer in $Computers)
{
	#
	# We're currently running at $MaxThreads, wait for one to close.
	While ((Get-Job -state running).count -ge $MaxThreads)
	{
		$statTotal = $computers.count
		$statComplete = $((Get-Job -state completed).count)
		$statInProgress = $((Get-Job -state running).count)
		Write-Progress -Activity "Locating PST files" -Status "Waiting for a scan to finish before starting another" -CurrentOperation "Total: $statTotal , Complete: $statComplete , In Progress: $statInProgress" -PercentComplete ($i / $Computers.count * 100)
		Start-Sleep -Milliseconds $SleepTimer
		$JobsRunning = (Get-Job -state running).count
	}

	#
	# Start job.
	$i++
	Start-Job -ScriptBlock $GetPSTInfo -ArgumentList $Computer -Name $Computer | out-null
	$statTotal = $computers.count
	$statComplete = $((Get-Job -state completed).count)
	$statInProgress = $((Get-Job -state running).count)
	Write-Progress -Activity "Locating PST files" -Status "Starting a scan" -CurrentOperation "Total: $statTotal , Complete: $statComplete , In Progress: $statInProgress" -PercentComplete ($i / $Computers.count * 100)
}


#
# Finishhed creating all jobs, waiting for remaining running jobs to
# complete.
While (@(Get-Job -State Running).count -gt 0)
{
	$statTotal = @(Get-Job).count
	$statComplete = $((Get-Job -state completed).count)
	$statInProgress = $((Get-Job -state running).count)
	Write-Progress -Activity "Locating PST files" -Status "Waiting on final scans to complete" -CurrentOperation "Total: $statTotal , Complete: $statComplete , In Progress: $statInProgress" -PercentComplete ($statComplete / $statTotal * 100)
	Start-Sleep -Milliseconds $SleepTimer
}

#
# Handle completed jobs
ForEach($Job in Get-Job)
{
	$retVal = (Receive-Job $Job)
	if( $retVal -ne $null)
	{
		$OutArray += $retVal
	}
}
$OutArray | Export-Csv "$cfgOutpath" -NoClobber -NoTypeInformation

Using WMI

Using WMI is slow compared to relying on the registry, but it will locate files that are not open in Outlook. The “Windows Firewall: Allow remote administration exception” group policy setting should be enabled so that WMI can be accessed remotely.

Unfortunately I couldn’t get the PowerShell job functionality to work well with Get-WmiObject, so systems are checked one by one, which also slows things down.

#
# PST Scanning Utility (WMI)
# ~~~~~~~~~~~~~~~~~~~~~~~~~~
# Retrieves a list of computers (recursively) from an OU in Active
# Directory then uses WMI to search for PST files on the remote 'C:'
# drive, saving the results to CSV format with location, size and owner
# details.
#
# The advantage to performing a search is that PST files in non-default
# locations will be found, enumerating the registry only shows files in
# use by Outlook.
#
# This script doesn't make use of threading (Jobs) due to hangs/locks
# experienced when they were implemented.
#
# Changelog
# ~~~~~~~~~
# 2012.03.28	Dave Hope		Initial version.
# 2012.04.24	Dave Hope		Added PST file version information.
# 2012.04.26	Dave Hope		Added try/catch around file owner check.
#
# ======================================================================
# SETTINGS
# ======================================================================
$cfgOU = "LDAP://DC=nwtraders,DC=msft"
$cfgInterval = -30
$cfgOutpath = "H:\WMI.CSV"

# ======================================================================
# STOP CHANGING HERE.
# ======================================================================

#
# Scans the specified hostname for PST files, returning an array of data.
# Most of this is inline due to the nature of job functionality in PS.
# ======================================================================
Function GetPSTInfo
{
	Param( [string]$ComputerName = $(throw "ComputerName required.") )
	$ReturnArray = @()
	
	# Test connection first rather than relying on the slow
	# Get-WMiObject call to fail.
	if( (Test-Connection -ComputerName $ComputerName -Count 1 -Quiet) -eq $false )
	{
#		Write-Host "Failed communicating with $ComputerName - ICMP Unreachable"
		return;
	}

	# Connect and execute query.
	try
	{
		#Path,FileSize,LastModified,LastAccessed,Extension,Drive
		$PstFiles = Get-Wmiobject -namespace "root\CIMV2" -computername $computerName -ErrorAction Stop -Query "SELECT * FROM CIM_DataFile WHERE Extension = 'pst' AND Drive = 'c:'"
	}
	Catch
	{
#		Write-Host "Failed communicating with $ComputerName - Get-WMIObject failed"
		return;
	}
	# Iterate over the found PST files.
	foreach ($file in $PstFiles)
	{ 
		if($File.FileName)
		{ 
			$FileReturn = "" | select Computer,Owner,Path,FileSize,LastModified,LastAccessed,Version
			$filepath = $file.description 
						
			#
			# Try and find the owner of the file.
			$Owner = "Unknown";
			try
			{
				$query = "ASSOCIATORS OF {Win32_LogicalFileSecuritySetting=`'$filepath`'} WHERE AssocClass=Win32_LogicalFileOwner ResultRole=Owner" 
				$Owner = @(Get-Wmiobject -namespace "root\CIMV2" -computername $computerName -Query $query) 
				$Owner = "$($Owner[0].ReferencedDomainName)\$($Owner[0].AccountName)" 
			}
			catch
			{
#				Write-Host "Unable to determine the owner of a PST File on $ComputerName"
			}
			
			$FileReturn.Computer = $computerName
			$FileReturn.Path = $filepath 
			$FileReturn.FileSize = $file.FileSize/1KB 
			$FileReturn.Owner = $Owner
			$FileReturn.LastModified = [System.Management.ManagementDateTimeConverter]::ToDateTime($($file.LastModified))
			$FileReturn.LastAccessed = [System.Management.ManagementDateTimeConverter]::ToDateTime($($file.LastAccessed))

			#
			# Here, we're examining part of the PST file header.
			# We only need wVer (2bytes), so we seek to that position in
			# the file.
			$tmpPath = $filepath  -Replace "C:\\", "\\$ComputerName\c$\"
			[system.io.stream]$fileStream = [System.IO.File]::Open( $tmpPath, 'Open', 'Read', 'ReadWrite' )
			try
			{
				[byte[]]$fileBytes = New-Object byte[] 11 # Length we need.
				[void]$fileStream.Read( $fileBytes, 0, 11);
				if ($fileBytes[10] -eq 23 )
				{
					$FileReturn.Version = "2003";
				}
				elseif ( ($fileBytes[10] -eq 14) -or ($fileBytes[10] -eq 15) )
				{
					$FileReturn.Version = "1997";
				}
				else
				{
					$FileReturn.Version = "Unknown";
				}
			}
			catch
			{
				$FileReturn.Version = "Error";
			}
			$fileStream.Close();

			$ReturnArray += $FileReturn
		} 
	}
	return $ReturnArray;
}


#
# Gets a list of object names from AD recursively
# ======================================================================
Function GetAdObjects
{
	Param(
		[string]$Path = $(throw "Path required."),
		[string]$desiredObjectClass = $(throw "DesiredObjectClass required.")
		)
	$ReturnArray = $null

	# Bind to AD using the provided path.
	$objADSI = [ADSI]$Path

	# Iterate over each object and add its name to the array.
	foreach( $obj in $objADSI.Children )
	{
		$thisItem = $obj | select objectClass,distinguishedName,name
		if (
			$thisItem.objectClass.Count -gt 0 -And
			$thisItem.objectClass.Contains( $desiredObjectClass)
			)
		{
			$ReturnArray += $thisItem.distinguishedName
		}
		elseif(
			$thisItem.objectClass.Count -gt 0 -And
			$thisItem.objectClass.Contains("organizationalUnit")
			)
		{
			# Init to null rather than @() so we dont add empty
			# values.
			$RecurseItems = $null
			$RecurseItems += GetAdObjects "LDAP://$($thisItem.distinguishedName.ToString())" $desiredObjectClass
			if( $RecurseItems.Count -gt 0 )
			{
				$ReturnArray += $RecurseItems
			}
		}
	}

	# Make sure we have items to return, otherwise we'll push
	# empty items to the array.
	if( $ReturnArray.Count -gt 0)
	{
		return $ReturnArray;
	}
}


#
# Converts a COM object to a LargeInteger
# ======================================================================
function Convert-IADSLargeInteger([object]$LargeInteger)
{
	$type = $LargeInteger.GetType()  
	$highPart = $type.InvokeMember("HighPart","GetProperty",$null,$LargeInteger,$null)  
	$lowPart = $type.InvokeMember("LowPart","GetProperty",$null,$LargeInteger,$null)  
	$bytes = [System.BitConverter]::GetBytes($highPart)  
	$tmp = New-Object System.Byte[] 8  
	[Array]::Copy($bytes,0,$tmp,4,4)  
	$highPart = [System.BitConverter]::ToInt64($tmp,0)  
	$bytes = [System.BitConverter]::GetBytes($lowPart)  
	$lowPart = [System.BitConverter]::ToUInt32($bytes,0)  
	$lowPart + $highPart  
} 

#
# Evaluate the lastLogonTimestamp attribute for accounts and pull ones 
# from the last 30 days only.
# ======================================================================
Function GetObjectsLoggedIntoSince
{
	Param(
		[Array] $Computers = $(throw "Computers required"),
		[int] $LoginDays = $(throw "LoginDays required")
		)

	$earliestAllowedLogon = [DateTime]::Today.AddDays($LoginDays)

	foreach( $Computer in $Computers )
	{
		$objADSI = [ADSI]"LDAP://$Computer"
		if( $objADSI.Properties.Contains("lastLogonTimeStamp") -eq $false )
		{
			continue;
		}

		$lastLogon = [DateTime]::FromFileTime(
			[Int64]::Parse(
				$(Convert-IADSLargeInteger $objADSI.lastlogontimestamp.value)
				)
			)
		if( [DateTime]::Compare( $earliestAllowedLogon , $lastLogon) -eq -1 )
		{
			$objADSI.name
		}
		continue;
	}
}

#
# Get computer accounts from Active Directory.
$OutArray = @()
$Computers = GetAdObjects "$cfgOU" "computer"
$Computers = GetObjectsLoggedIntoSince $Computers $cfgInterval

#
# If we have no computers to check, just exit.
if( $Computers.Count -le 0 )
{
	return;
}

#
# Create all the jobs.
$statTotal = $computers.count
$statComplete = 0
ForEach ($Computer in $Computers)
{
	Write-Progress -Activity "Locating PST files" -Status "Waiting for a scan to finish before starting another" -CurrentOperation "Total: $statTotal , Complete: $statComplete" -PercentComplete ($statComplete/$statTotal * 100)
	$RetVal = GetPSTInfo $Computer
	if( $RetVal -ne $null)
	{
		$OutArray += $retVal
	}
	$statComplete++
}

$OutArray | Export-Csv "$cfgOutpath" -NoClobber -NoTypeInformation

Which method is best?

In the environments I’ve tested these scripts in the WMI method returned significantly more PST files due to the simple fact that it runs a search on each client computer. If anyone has feedback on running these in their environments I’d love to hear it.


Dell KACE Admin Template


By default, when the Dell KACE agent is deployed, client computers will display a splash screen. In a Windows Vista or later environment this can be conveniently managed by pushing out some registry values using Group Policy Preferences. If, however, you still have some Windows XP client computers without the Group Policy Preferences client installed, it’s a little more tricky.

Without further ado, here’s a simple Group Policy Template that you can import to disable the Dell KACE Splash screens.

;
; Dell KACE K1000 Agent Settings
; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
;
; Provides group-policy based customisation of the Dell K1000 agent
; bootup splash screens. Useful for networks with XP clients that do not
; have the group policy preferences client installed.
;
; Changelog
; ~~~~~~~~~
; 2012.08.20 Dave Hope Initial version
;
; ======================================================================
CLASS MACHINE

CATEGORY "Dell"
CATEGORY "KACE"

KEYNAME "SOFTWARE\Kace\CustomBootupSplash"
POLICY !!bootup_policy
EXPLAIN !!bootup_explain
VALUENAME "DisableBootupSplash"
VALUEON NUMERIC 1
VALUEOFF DELETE
END POLICY

POLICY !!login_policy
EXPLAIN !!login_explain
VALUENAME "DisableLoginSplash"
VALUEON NUMERIC 1
VALUEOFF DELETE
END POLICY

POLICY !!boottask_policy
EXPLAIN !!boottask_explain
VALUENAME "DisableWaitForBootupTasks"
VALUEON NUMERIC 1
VALUEOFF DELETE
END POLICY

POLICY !!logintask_policy
EXPLAIN !!logintask_explain
VALUENAME "DisableWaitForLoginTasks"
VALUEON NUMERIC 1
VALUEOFF DELETE
END POLICY

END CATEGORY
END CATEGORY

[strings]
bootup_policy="Disable bootup Splash"
login_policy="Disable login Splash"
boottask_policy="Disable wait for bootup tasks"
logintask_policy="Disable wait for login tasks"
bootup_explain="Controls whether the KACE splash screen is displayed at startup.\n\nEnabling this policy prevents the splash screen appearing.\n\nIf this setting is not configured the splash screen is displayed."
login_explain="Controls whether the KACE splash screen is displayed at login.\n\nEnabling this policy prevents the splash screen appearing.\n\nIf this setting is not configured the splash screen is displayed."
boottask_explain="Controls whether the system should wait for KACE bootup tasks.\n\nEnabling this policy prevents the system waiting for bootup tasks.\n\nIf this setting is not configured the system will wait for bootup tasks."
logintask_explain="Controls whether the system should wait for KACE login tasks.\n\nEnabling this policy prevents the system waiting for login tasks.\n\nIf this setting is not configured the system will wait for login tasks."

Setting the policy elements to “Enabled” will disable the corresponding feature. For example, enabling the “Disable bootup Splash” policy will prevent the splash screen from appearing. For more information on the registry keys involved, visit the Dell KACE knowledge base.
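
For the Vista-and-later clients managed with Group Policy Preferences instead, the equivalent registry values (key and value names taken from the template above) would look like this as a .reg file:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Kace\CustomBootupSplash]
"DisableBootupSplash"=dword:00000001
"DisableLoginSplash"=dword:00000001
"DisableWaitForBootupTasks"=dword:00000001
"DisableWaitForLoginTasks"=dword:00000001
```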


How To – Query Cisco routers using SNMP


Earlier today I released a new application I’ve been working on for the past month or two, Cisco Device Info. It allows you to quickly query a Cisco device (providing you have SNMP access) and view information such as interface status, throughput, IPSec status etc. It came about because we were looking for something at work for our helpdesk staff to use and couldn’t find anything suitable.

Enabling SNMP access on routers is simple; just follow the instructions below.

Enabling SNMP Version 2

Enabling version 2 is possible with a single command, and access can optionally be restricted further with a standard access list. To enable SNMP access with a read-only community string named “CiscoRocks”, run the following command in global configuration mode (conf t):

snmp-server community CiscoRocks RO
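
To take advantage of the optional access-list restriction mentioned above, define a standard ACL and reference it from the community string; the subnet below is an example:

```
access-list 10 permit 192.0.2.0 0.0.0.255
snmp-server community CiscoRocks RO 10
```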

Enabling SNMP Version 3

Enabling version 3 requires four commands to be run in the global configuration mode (conf t).

snmp-server group <Group Name> v3 auth read <Read-Only View Name> write <Read-Write View Name>
snmp-server view <Read-Only View Name> iso included
snmp-server view <Read-Write View Name> iso included
snmp-server user cdi <Group Name> v3 auth <Auth Type> <Auth Passphrase> priv <Priv Type> <Priv Passphrase>

For example, if it was decided that SNMPv3 using MD5 and AES-128 should be enabled, the commands may look like the following:

snmp-server group V3GROUP v3 auth read V3READ write V3WRITE
snmp-server view V3READ iso included
snmp-server view V3WRITE iso included
snmp-server user cdi V3GROUP v3 auth md5 CiscoRocks priv aes 128 CiscoRocks

If you’d like to find out more, you can read about Cisco Device Info here.


How To – Enable Wake On LAN using a Cisco router


Wake On LAN is an Ethernet standard that allows a device to be powered on when it receives a specially crafted “magic packet”. The “magic packet” is a broadcast frame consisting of 6 bytes of 255 (FF FF FF FF FF FF) followed by sixteen repetitions of the target’s 48-bit MAC address. Powered-off computers receiving the broadcast don’t actually process the message up the protocol stack; the network card simply watches for a matching 102-byte string.

Because of this we can use a UDP datagram to remotely wake up a computer from somewhere else on a routed network. Here’s how you can achieve this using Cisco IOS.

The first thing we need to do is setup a static NAT entry for the UDP port we wish to use (usually 7 or 9), so that our “magic packet” is forwarded from our external interface to the host we want to power on. In the below example the IP address of the system we’re going to wake up is 192.0.2.1 and the external interface is ATM0.1:

ip nat inside source static udp 192.0.2.1 7 int ATM0.1 7

Next up we need to create an access-list that will contain the IP addresses of systems we’ll be sending the Wake On LAN message from:

access-list 10 permit host 198.51.100.1

Finally, we need to enable “directed broadcasts”. This allows packets addressed to the subnet broadcast address to be forwarded from hosts that are not part of that subnet. Again, in the below example our external interface is ATM0.1:

int ATM0.1
ip directed-broadcast 10

You’ll then need to enable Wake on LAN on the device itself. Once that’s done you can use online services or free applications to wake your device.
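
If you’d rather not rely on an online service, the sending side is easy to script yourself. Here’s a Python sketch that builds the 102-byte payload described above and sends it as a UDP broadcast; the MAC address, broadcast address and port below are placeholders:

```python
import socket

def build_magic_packet(mac):
    """Return the 102-byte Wake On LAN payload: 6 bytes of 0xFF
    followed by sixteen copies of the 48-bit MAC address."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_magic_packet(mac, broadcast="255.255.255.255", port=9):
    """Send the magic packet as a UDP broadcast datagram."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), (broadcast, port))
```

Calling `send_magic_packet("01:23:45:67:89:ab")` from a host behind the router above would broadcast the packet on UDP port 9, matching the static NAT entry.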


DBCC TRACEON in Microsoft Excel


I recently spent some time migrating data to a new Microsoft SQL (MSSQL) Server, from SQL 2000 to SQL 2008. One thing we missed, and that broke after migration, was Excel documents containing pivot tables. Users reported they’d stopped working and were now displaying the following error:

"[Microsoft][SQL Native Client][SQL Server]User 'DOMAIN\username' does not have permission to run DBCC TRACEON."

DBCC TRACEON is intended to enable certain trace flags and requires membership of the sysadmin role, which is not something I wanted to give out. I spent a few hours investigating, but none of the Excel functionality was executing DBCC TRACEON. So what was going on?

It appears that SQL 2005 and later check whether the client is sending an application identifier of “Microsoft® Query” and, if it is, try to turn quoted identifiers on (SET QUOTED_IDENTIFIER ON). This can be mitigated by adjusting the connection string in the Excel documents so that it no longer contains “APP=Microsoft® Query”.

You can change “APP=Microsoft® Query” either inside Excel, or by editing the file in Notepad and replacing the word “Query” with something else of the same length. SQL then doesn’t try to enable quoted identifiers and all is fine once more.
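
For example, a connection string along these lines (the DSN and database names here are invented):

```
Before: DSN=Sales;APP=Microsoft® Query;DATABASE=SalesDB
After:  DSN=Sales;APP=Microsoft® Xuery;DATABASE=SalesDB
```

Any replacement word works, so long as it keeps the string the same length when editing the file directly in Notepad.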


Cisco Device Info 1.3.0 Released


Cisco Device Info (CDI) is a Windows application to retrieve runtime information from Cisco equipment such as routers and switches. This is achieved using the SNMP protocol. Late last week I released version 1.3.0; some important enhancements are as follows:

  1. Enhanced the way additional columns can be shown (Right-Click table headers). Any column can now be removed from display and Interface index and descriptions can be made visible (not shown by default);
  2. When hovering over a line on the interface usage chart a tool-tip appears showing the interface name. This is useful for devices with dozens of interfaces;
  3. Where no data is available for a tab on that device, the tab will be hidden. For example the “Switch Stack” tab will not be shown for ISR routers;
  4. Fixed a bug where the user interface could be left in an inconsistent state after a router failed to respond;
  5. Fixed a bug where interfaces with speeds greater than 4GB/s were misreported;
  6. Improved the error handling of the interface usage chart following SNMP timeouts caused by slow-responding or no-longer-available devices;

For more information, or to download Cisco Device Info (free to use in a home lab) head over to the download page for Cisco Device Info.



How-To – Check disk space using PowerShell


Sometimes it’s useful to quickly get an idea of certain usage statistics from a number of servers. Without monitoring and capacity management systems in place, it may be difficult to capture information such as disk usage, fragmentation, etc.

Rather than logging into each server and manually gathering the information, why not automate the task using WMI and PowerShell? The script below will connect to each server in the list ($compList) and determine the free disk space and volume fragmentation.

$compList = "LONDON", "BRISBANE"
$DiskResults = @()
foreach ($computerName in $compList)
{
  Write-Host $computerName
  # DriveType 3 = local fixed disks only
  $objDisks = Get-WmiObject -ComputerName $computerName -Class Win32_LogicalDisk | Where-Object { $_.DriveType -eq 3 }
  ForEach( $disk in $objDisks )
  {
    $diskFragmentation = "Unknown"
    try
    {
      # DefragAnalysis can fail (access denied, dismounted volume).
      # -ErrorAction Stop turns WMI errors into exceptions so the catch fires.
      $objDisk = Get-WmiObject -ComputerName $computerName -Class Win32_Volume -Filter "DriveLetter='$($disk.DeviceID)'" -ErrorAction Stop
      $objDefrag = $objDisk.DefragAnalysis()
      $objDefragAnalysis = $objDefrag.DefragAnalysis
      $diskFragmentation = $objDefragAnalysis.TotalPercentFragmentation
    }
    catch{}
    # Build one result row per volume, sizes in GB
    $ThisVolume = "" | select ServerName,Volume,Capacity,FreeSpace,Fragmentation
    $ThisVolume.ServerName = $computerName
    $ThisVolume.Volume = $disk.DeviceID
    $ThisVolume.Capacity = $([Math]::Round($disk.Size/1GB,2))
    $ThisVolume.FreeSpace = $([Math]::Round($disk.FreeSpace/1GB,2))
    $ThisVolume.Fragmentation = $diskFragmentation
    $DiskResults += $ThisVolume
  }
}
$DiskResults | ft

This could easily be extended to capture Windows reliability information using Win32_ReliabilityRecords, to determine if servers have crashed recently, or other information. Check back for a future post about that.

The post How-To – Check diskspace using PowerShell appeared first on Blog of Dave Hope.

How To – Check crashed servers using PowerShell


Following on from my previous post about how to check disk space and volume fragmentation of servers, it may be useful to also determine if a system has crashed in a given timeframe. From Windows Server 2008 R2 onwards a new WMI class was introduced, Win32_ReliabilityRecords. This class contains EventLog information relating to Windows Reliability.

The PowerShell script below will connect to each server in a list ($compList) and check for reboots.

$compList = "LONDON", "BRISBANE"
# WMI filters need dates in DMTF format; count crashes since this date
$date = [System.Management.ManagementDateTimeConverter]::ToDMTFDateTime((Get-Date "01/11/2012"))
foreach ($computerName in $compList)
{
	$compCrashes = 0
	try
	{
		# EventID 6008 = "The previous system shutdown was unexpected"
		$compCrashes = @(Get-WmiObject -ComputerName $computerName -Class Win32_ReliabilityRecords -Filter "SourceName='EventLog' AND EventIdentifier='6008' AND TimeGenerated >= '$date'" -ErrorAction Stop).Count
	}
	catch{}
	Write-Host $computerName $compCrashes
}

This could be expanded to store the results in a table and send the results using Send-MailMessage, but I’ll leave that to the reader.

The post How To – Check crashed servers using PowerShell appeared first on Blog of Dave Hope.

Turn an old Google Appliance into an ESX server


So, you’ve got an old Google Appliance kicking around? Maybe from an expired Google Enterprise Partner Program (GEP) agreement? Why not turn it into an ESX/vSphere server.

Once your license has expired, check with Google and make sure they don’t want the hardware back (they never do, but it’s best to check). It’s safe to say that doing this is going to void your warranty.

The first thing you’re going to need to do is reset he password on the BIOS so that you can change the boot order. The easiest way to do this is to open the chassis and remove the jumper labelled PWRD_EN. The jumper is located just behind the memory towards the rear of the server. The next time you boot you can hit F2 to get into the BIOS.

With the BIOS now open, set the option to boot from the front USB ports. You’re going to want to flash the BIOS with a newer, non-branded one. A Google appliance is just a Dell PowerEdge 2950 with a yellow coat of paint and a snazzy front bezel. Head over to the Dell site and download the latest BIOS (I used 2.6.1). Once downloaded, run the utility to create a BIOS update floppy disk.

With the floppy disk in hand, connect an external USB floppy drive to one of the front USB ports and boot from your BIOS update disk. The update will give you an error message saying that a Dell PowerEdge 2950 BIOS cannot be applied to a Google Enterprise Search Appliance. Fear not, when the update exits it’ll leave you at a DOS prompt. Run the following command:

020601 /forcetype

It should complete successfully and reboot. Remove the floppy drive, enable virtualization support in the BIOS and then install ESX.

The post Turn an old Google Appliance into an ESX server appeared first on Blog of Dave Hope.

Automatically backup Netscreen firewall


I spent some time a while ago automating the backups of network device configuration to a restricted network share and thought I’d share a simple batch script to backup the configuration from multiple Juniper Netscreen (ScreenOS) firewalls.

@echo off
REM ================================================================
REM CONFIGURATION INFO
REM ================================================================
set USERNAME=backupAccount
set PASSWORD=superSecretPassword
set CFGFILE=BackupList.txt
set DESTDIR=C:\Backups\

REM ================================================================
REM STOP CHANGING HERE OR YOU'LL BREAK SOMETHING
REM ================================================================
SET TIMESTAMP=%date:~-4,4%.%date:~-7,2%.%date:~-10,2%
for /F "tokens=1,2 delims=," %%A in (%CFGFILE%) do (
	IF NOT EXIST "%DESTDIR%%TIMESTAMP%" mkdir "%DESTDIR%%TIMESTAMP%"
	pscp -q -scp -pw %PASSWORD% %USERNAME%@%%B:ns_sys_config "%DESTDIR%%TIMESTAMP%\%%A.cfg"
)


The above will read the details of the firewalls from a CSV file (BackupList.txt) in hostname,ip address format. PSCP is then used to SCP the configuration from the firewall to the location specified in DESTDIR.
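A minimal sketch of creating the list file (the hostnames and addresses here are invented for illustration):

```shell
# Create a sample BackupList.txt in hostname,ip-address format.
# The names and addresses below are illustrative only.
cat > BackupList.txt <<'EOF'
LON-FW01,192.168.1.254
NY-FW01,192.168.2.254
EOF
```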

You’ll need to make sure SSH + SCP is enabled on your firewalls and that pscp is in the same directory as the script. Click here to download the latest version of Putty and PSCP.

The post Automatically backup Netscreen firewall appeared first on Blog of Dave Hope.

Automatically backup IPOffice


Some time ago I investigated automating backups of network devices such as switches, access points and other devices. I knocked up a quick batch script to backup Avaya IPOffice phone systems (it’d probably work on the older Lucent ArgentOffice too). Save the following as Backup.bat:

@echo off
REM ================================================================
REM CONFIGURATION INFO
REM ================================================================
set CFGFILE=BackupList.txt
set DESTDIR=C:\Backups\

REM ================================================================
REM STOP CHANGING HERE OR YOU'LL BREAK SOMETHING
REM ================================================================
SET TIMESTAMP=%date:~-4,4%.%date:~-7,2%.%date:~-10,2%
for /F "tokens=1,2 delims=," %%A in (%CFGFILE%) do (
	IF NOT EXIST "%DESTDIR%%TIMESTAMP%" mkdir "%DESTDIR%%TIMESTAMP%" > NUL
	echo %%B
	tftp -i %%B GET config "%DESTDIR%%TIMESTAMP%\%%A.cfg" > NUL
)


In the same directory create a TXT file named BackupList.txt. Add phone systems to the file that should be backed up in Name,ip address format. A sample BackupList.txt file might look like:

London Phone System,192.168.1.1
New York Phone System,192.168.2.1

You’ll also need to download the following free TFTP client from Tandom Systems Ltd. Place it in the same directory as the other two files and run Backup.bat to begin the backup.

The post Automatically backup IPOffice appeared first on Blog of Dave Hope.

Extract bandwidth information from lighttpd log files


I was looking for a simple way to get some bandwidth statistics for websites that I host. Interested in historical data, my only option was to look back over my webserver log files.

My webserver of choice on Linux systems is currently lighttpd. Here’s a quick Bash script to get the bandwidth statistics out of the default lighttpd log files:

#!/bin/bash
# Sum response body bytes (field $10) per month from the default lighttpd access log
awk '{
month = substr($4, 5, 3)
year  = substr($4, 9, 4)
timestamp = year " " month
bytes[timestamp] += $10
} END {
for (date in bytes)
printf("%s %20d MB\n", date, bytes[date]/(1024*1024))
}' access.log | sort -k1n -k2M


That will give you a table containing the stats based on the bytes sent for the body of the pages:

2011 Jan                  662 MB
2011 Feb                12090 MB
2011 Mar                13645 MB
2011 Apr                12274 MB
2011 May                12279 MB
2011 Jun                 9551 MB

This could easily be adapted to work out the number of hits or extract other information contained in the log files.
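For example, counting requests per month instead of summing bytes is a small change to the awk program. A sketch, using an inline sample log so it can be run as-is (point the awk at your real access.log instead):

```shell
# For demonstration, write a two-line sample log in common log format.
printf '%s\n' \
  '127.0.0.1 - - [01/Jan/2011:00:00:01 +0000] "GET / HTTP/1.1" 200 123' \
  '127.0.0.1 - - [02/Jan/2011:00:00:02 +0000] "GET / HTTP/1.1" 200 456' > sample.log

# Count requests per month; $4 is the [dd/Mon/yyyy:... timestamp field
awk '{
month = substr($4, 5, 3)
year  = substr($4, 9, 4)
hits[year " " month]++
} END {
for (date in hits)
printf("%s %10d hits\n", date, hits[date])
}' sample.log | sort -k1n -k2M
```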

The post Extract bandwidth information from lighttpd log files appeared first on Blog of Dave Hope.

Automatically backup Netgear WNDAP


As part of an ongoing battle to backup all network devices, I’ve cobbled together a batch script to backup Netgear’s ProSafe range of access points. Save the following as Backup.bat:

@echo off

REM ================================================================
REM CONFIGURATION INFO
REM ================================================================
set CFGFILE=BackupList.txt
set DESTDIR=C:\Backups\
set NET_USERNAME=admin
set NET_PASSWORD=netgear

REM ================================================================
REM STOP CHANGING HERE OR YOU'LL BREAK SOMETHING
REM ================================================================
SET TIMESTAMP=%date:~-4,4%.%date:~-7,2%.%date:~-10,2%
for /F "tokens=1,2 delims=," %%A in (%CFGFILE%) do (
	IF NOT EXIST "%DESTDIR%%TIMESTAMP%" mkdir "%DESTDIR%%TIMESTAMP%" > NUL
	echo %%B

	curl -s -c "%%A.cookie.txt" "http://%%B/login.php?username=%NET_USERNAME%&password=%NET_PASSWORD%"
	curl -s -b "%%A.cookie.txt" "http://%%B/downloadFile.php?file=config" -o "%DESTDIR%%TIMESTAMP%\%%A.cfg"

	IF EXIST %%A.cookie.txt del %%A.cookie.txt
)


In the same directory create a TXT file named BackupList.txt. Add access points to the file that should be backed up in Name,ip address format. A sample BackupList.txt file might look like:

LON-CORE-WAP01,192.168.1.1
NY-CORE-WAP01,192.168.2.1

You’ll also need to download the windows version of cURL, a list of mirrors can be found here. Place it in the same directory as the other two files and run Backup.bat to backup your ProSafe access points.

This method has been tested on both WNDAP330 and WNDAP350 access points.

The post Automatically backup Netgear WNDAP appeared first on Blog of Dave Hope.


Using powershell to list users accessing OWA


Sometimes it’s useful to identify how many users are using Outlook Web Access, particularly for capacity management. Just because an account has OWA enabled, doesn’t mean it’s being used.

The PowerShell script below will enumerate the IIS logs looking for OWA access; when found, it’ll add the user account to a list of accounts accessing OWA.

$path = "C:\WINDOWS\system32\LogFiles\W3SVC1\ex*"

# Create new DataTable to hold log entries
$tblLog = New-Object System.Data.DataTable "Log"
$arrUsers = New-Object System.Collections.ArrayList($null)
$bFirstRun = $TRUE;

foreach ($file in Get-ChildItem $path)
{
	# Get the contents of the file, excluding comment lines (but keeping the #Fields header).
	$fileContents = Get-Content $file.FullName | where {$_ -notLike "#[D,S-V]*" }

	# Create DataTable columns. No handling for different columns in
	# each log file.
	if( $bFirstRun )
	{
		$columns = (($fileContents[0].TrimEnd()) -replace "#Fields: ", "" -replace "-","" -replace "\(","" -replace "\)","").Split(" ")
		$colCount = $columns.Length

		# Create a DataColumn from the column string and add to our DataTable.
		foreach ($column in $columns)
		{
			$colNew = New-Object System.Data.DataColumn $column, ([string])
			$tblLog.Columns.Add( $colNew )
		}
		$bFirstRun = $FALSE;
		Write-Host "Columns complete"
	}
	
	# Get the row contents from the file, filtering what I want to retrieve.
	$rows = $fileContents | where {$_ -like "*/owa/Default.aspx*"}

	# Loop through rows in the log file.
	foreach ($row in $rows)
	{
		if(!$row)
		{
			continue
		}

		$rowContents = $row.Split(" ")
		$newRow = $tblLog.newrow()
		for($i=0;$i -lt $colCount; $i++)
		{
			$columnName = $columns[$i]
			$newRow.$columnName = $rowContents[$i]
		}
		$tblLog.Rows.Add( $newRow )
	}
	Write-Host $file.Name "Done"
}
$tblLog | foreach {  if(! $arrUsers.Contains( $_.csusername ) ) { [void]$arrUsers.Add( $_.csusername ) } }
$arrUsers


Save to a file with the “.ps1” extension and run. You’ll then be presented with a list of users, for example:

NWTRADERS\Administrator
NWTRADERS\Steve
NWTRADERS\Dave
NWTRADERS\James
NWTRADERS\Albert

You’ll need to alter the path to the IIS logs if you’ve stored them in a non-default location. To filter only logs for the current year, you can use a wildcard in the $path variable. For example “ex13*” for 2013 only.

The post Using powershell to list users accessing OWA appeared first on Blog of Dave Hope.

Microsoft Licensing Options


In this post I’m going to talk about Microsoft licensing. I’ll try and cover the various options and why one model may suit you more than another. The information provided by Microsoft licensing “specialists” can be confusing and contradictory. That’s not to say mine wont be, so make sure you seek advise from a trusted independent source.

Why do I need to license?

Because the EULA says so, but also because all (well, most) software is protected by copyright. A software license grants a user the right to use the software in accordance with the EULA. Without a license to use the software, you’re likely breaking copyright law and could be subject to financial penalties or worse.

Software vendors employ auditing companies (e.g. the BSA) to visit customers and ensure they’re correctly licensed. This process is expensive and will see you self-auditing all your software. If it’s shown you’re not correctly licensed (which is almost always the case, even for diligent customers) you have no option but to purchase the licenses – and pay a premium for it too.

What options do I have?

Each Microsoft product has its own licensing intricacies. Rather than attempt to cover every Microsoft product, I’ll be talking about the ways in which you can buy licenses. For smaller companies this will be from retail outlets, whilst larger SMB and enterprise organisations will likely buy through ongoing agreements.

Original Equipment Manufacturer (OEM)

OEM licensing is hassle free, when you purchase a computer from a retail outlet it usually comes with its Windows operating system already installed and licensed. If you examine the side of the case you’ll likely find a Certificate of Authenticity (COA), a small sticker identifying that the computer has a license. Together with your proof of purchase, you’ll have a licensed operating system.

The software will be pre-installed on the computer and is non-transferable to other devices. If you buy a new computer, you will not be allowed to transfer the license from the old one.

Full Packaged Product (FPP)

FPP (Retail) is new, shrink-wrapped software purchased from a retail outlet. FPP licenses usually come in a box with media and a manual, they’re aimed towards home users or low-volume purchases. Typically purchasing a FPP includes only a single license.

FPP licenses are usually transferable, so if you remove it from one computer you’re entitled to install it on another.

Volume Licensing Programs

This is where it gets more complicated. Microsoft has numerous volume licensing programs which allow for multiple installations under a single purchase.

Volume license programs are usually just the license and a digital download of the software. You receive no physical media, manual or other accompanying bits and pieces. By entering into a volume license agreement purchasing is simplified, can reduce costs and offers increased flexibility.

Unless you’re a Microsoft Partner, when a few additional options become available, Microsoft offer six volume licensing programs. For companies with more than 5 but fewer than 250 computers you can choose from the following volume licensing programs:

                        Open Value           Open Value Subscription   Open License
# Computers             5 to 250             5 to 250                  5 to 250
Term                    3 Years              3 Years                   2 Years
Payment Options         Up front or Annual   Annual                    Up front
Software Assurance      Included             Included                  Optional
Can increase licenses   Yes                  Yes                       No
Can decrease licenses   No                   Yes                       No
Perpetual licenses      Yes                  Buyout Option             Yes
Physical Media          Yes                  Yes                       No
Downgrade rights        Yes                  Yes                       Yes
Re-Imaging rights       Yes                  Yes                       Yes
Cross-language rights   Yes                  Yes                       Yes
Secondary use rights    Academic Only        Academic Only             Academic Only
Training licenses       Academic Only        Academic Only             Academic Only

For companies with more than 250 computers you can choose from the following licensing programs:

                        Select Plus     Enterprise Agreement   EA Subscription
# Computers             250+            250+                   250+
Term                    Never Expires   3 Years                3 Years
Payment Options         Annual          Annual                 Annual
Software Assurance      Optional        Included               Included
Can increase licenses   Yes             Yes                    Yes
Can decrease licenses   No              No                     Yes
Perpetual licenses      Yes             Yes                    No
Physical Media          Optional        Optional               Optional
Downgrade rights        Yes             Yes                    Yes
Re-Imaging rights       Yes             Yes                    Yes
Cross-language rights   Yes             Yes                    Yes
Secondary use rights    Yes             Yes                    Yes
Training licenses       20              20                     20

This should help you to make an informed decision about which volume licensing program suits you best, taking into account whether you prefer capital or revenue expenditure. The comparison of the various Microsoft licensing programs in the above two tables is based on reading through various literature provided by Microsoft rather than third-party sources.

In the next instalment I’ll talk about what happens if you get audited by an organisation such as the BSA, and how you can keep track of your licensing position using Software Asset Management (SAM) solutions.

The post Microsoft Licensing Options appeared first on Blog of Dave Hope.

BSA Licensing Audits


Following on from my post about Microsoft Licensing Options, I thought it prudent to cover what may happen if your licensing isn’t in order and you end up getting audited. The BSA (Business Software Alliance) represents many vendors, not just Microsoft, so is the most likely one to be involved in an audit.

Regardless of how up to date your licensing is, it’s possible for the BSA to audit you if they suspect you’re not compliant. This can be down to a number of reasons but most commonly boils down to an ex-employee informing them that your software licensing doesn’t add up. They do require credible information though, so an angry ex-employee with unfounded accusations probably isn’t going to get very far.

Can I refuse a licensing audit?

As with the rest of this article, I should preface this by saying IANAL; however, refusing to deal with the BSA is likely not a good idea. By refusing to work with them you risk ending up in a lawsuit over intellectual property licensing, which almost always costs a seven-figure sum.

As soon as you hear from the BSA you should speak to senior management, if you haven’t already, and get company lawyers involved. They are likely going to advise you that the last thing you want is to end up in court, unless you’re completely confident that all your software is correctly licensed.

Can I just un-install the infringing software?

One of the first things you’ll hear from the BSA is that you shouldn’t remove installed software. The first piece of communication you receive will be a legal document; as of the date on that document, you’re tied into what you’re using. During the process it’s very unlikely they’ll consider allowing removal of software as a way to become compliant.

Frustratingly they’ll also tell you that you’re not authorised to purchase any more software from the vendor that you’re being audited for. Your legal team should be able to have the sanction preventing you purchasing software lifted fairly early into the process, allowing you to purchase licenses for any new installations.

How will I be audited?

This surprised me when I first became aware of the BSA and how they function: you will probably never meet a representative. You will be required to audit your own license usage and accurately report it to them. What you report will form a legal document, so understating your used licenses will be considered perjury and will not work in your favour should the case go to court.

As part of the report you’ll need to produce information on what license counts you have deployed and what licenses you own. When determining the licenses you have, you must be completely convinced they’re valid; you may be required to provide evidence if the case goes to court.

How can I self audit?

There are various software asset management (SAM) solutions on the market, with support for various vendors. The solution you choose is likely going to depend on why you’re being audited; if it’s mainly Microsoft software you can look towards a free Microsoft application known as the MAP Toolkit. The MAP Toolkit provides agentless inventory and reporting of your environment. Whilst it doesn’t compare to the various commercial solutions, if you just want something to quickly provide installation counts for Microsoft software, it’ll do the job.

How much is it going to cost?

This is a tricky question as it’ll depend on the negotiation process between your legal team and the BSA. Once you’ve self-audited and understand the number of missing licenses, I’d advise working out what the retail pricing would be. The initial settlement offer from the BSA is going to be a few times higher than this, but during the negotiation process it’s probably reasonable to work down to 2x or even 1.5x retail price.

Can I find out who reported my company to the BSA?

In short, probably not. The only way it’s possible is through what’s known as “Discovery”, which is (as far as I know) only possible if you take the case to court. This is likely to reduce your negotiation options when it comes to price, so unless there’s a definite requirement to know who it’s probably best to just focus on correcting the licensing situation.

The post BSA Licensing Audits appeared first on Blog of Dave Hope.

Cisco 887VA VDSL2 throughput


I recently migrated ISP from an ADSL2+ to a VDSL2 connection, providing “up to” 80Mbps downstream and 20Mbps upstream. Under ADSL2+ (Annex A) with interleaving on and fastpath off, results from various “speed test” websites would show around 15Mbps down, 1Mbps up. The pre-migration controller information is as follows:

                  DS Channel1     DS Channel0   US Channel1       US Channel0
Speed (kbps):             0            19228             0              1235

As I started looking into the capabilities of my 887VA, I discovered that the routing performance is listed as 25.6Mbps (According to the Cisco routing performance product sheet). I had seen blogs discuss the negotiated max attainable rate, but not the throughput people have been getting.

With the new FTTC connection (VDSL2 – Profile 17a) I’m seeing a great “attainable rate” and, should Profile 30a be enabled by my ISP (pretty unlikely), I’m theoretically capable of 130Mbps:

Attainable Rate:        135036 kbits/s           42711 kbits/s
...
                  DS Channel1     DS Channel0   US Channel1       US Channel0
Speed (kbps):             0            79987             0             20000

I expected the bottleneck not to be on the VDSL2/ADSL chipset side of things, but on the Cisco 887VA’s routing engine itself. After checking some line information and making sure I had a 1500 byte MTU, I headed off to test the throughput. I decided to test by downloading an Ubuntu ISO over HTTP, resulting in transfer speeds of 9MBps (~72Mbps), with various speedtest websites also showing 72Mbps with under 25ms latency.
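Note the units when comparing figures: download tools report megabytes per second, while line rates are quoted in megabits per second; the conversion is a factor of eight:

```shell
# 9 MB/s observed by the HTTP client, times 8 bits per byte,
# gives the approximate line throughput in Mbps (close to the
# 80Mbps sync rate minus protocol overhead).
mbytes_per_sec=9
echo "$(( mbytes_per_sec * 8 )) Mbps"
```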

So there you have it: the Cisco 887VA is certainly capable of running an 80Mbps FTTC connection.

The post Cisco 887VA VDSL2 throughput appeared first on Blog of Dave Hope.

SAM Software


In two recent blog posts I’ve covered available licensing programs and what happens if you get audited by the Business Software Alliance. In this blog post I’m going to talk about keeping track of your software licensing through Software Asset Management, with a basic overview of how the process works.

Software Asset Management (SAM) is the process of optimising the purchasing, management and recognition of software licenses. Software tools are available to assist with the process, which can be broken down into five or six categories:

  1. Inventory tools to identify what software is deployed across a network
  2. Management tools to manage license agreements and reconcile usage rights against data returned from inventory tools
  3. Metering tools to identify how frequently software is used, to allow you to remove software from computers where it is not used
  4. Policy tools to restrict what software can be installed, or who can use it
  5. Deployment and Patch management are usually part of the same system, and provide facilities to deploy new software packages and update already deployed software

From that list, items 1, 3 and 5 are usually provided by the same application. For many organisations that is Microsoft SCCM, Dell KACE or Symantec Altiris. My preferred system is KACE, providing a good cost/functionality point compared to SCCM. Whilst these tools may have some limited SAM functionality out of the box, they rarely meet requirements when it comes to license management. KACE and SCCM are both very weak in this area, but excellent at inventory, deployment and patching.

Organisations look to SAM software vendors to complement their existing systems; the main vendors in this area include Snow Software, Asset Labs and License Dashboard.

I can’t vouch for SNOW or Asset Labs, but I’m familiar with License Dashboard which covers my requirements pretty well and has proven to be flexible and extremely usable.

License Dashboard Notifications

Once armed with your SAM software you’ll want to quickly understand your effective license position (ELP). This is a three-step process:

  1. Create an inventory of your licenses, with supporting evidence
  2. Create an inventory of software usage data, with counts of how many systems have the software
  3. Reconcile the two, creating your Effective License Position (ELP)

1. Create an inventory of your licenses

Importing licenses and contracts is a slow process; for a large environment this could become a full-time job for a few weeks as you manually enter contract details. Any good SAM software will allow you to import an MVLS statement to relieve some of the hassle of manually entering the details. The specification of the MVLS file format is open, so with any luck other vendors will begin to support it.

To make your life easier when entering license details, here’s some functionality you’ll want to look out for:

  • Software licenses can be tricky; with downgrade rights, entitlements to upgrades in the first 12 months, renewals and limitations on what versions can be upgraded from. Ensure the SAM software you’re looking at can accommodate various license models;
  • Attaching evidence to the record is useful, it’ll make proving the license on a group of devices much easier if you’re audited. To be compliant you must have evidence for each license;

2. Create an inventory of software usage

Importing software usage data into SAM software should be straightforward if you have a systems management product such as Dell KACE or Microsoft SCCM. Any good SAM product will provide support for importing from them.

Two useful bits of functionality to make sure you have are:

  • The ability to understand the configuration of the physical systems you assign licenses to, whether they’re virtual, how many processors they have etc;
  • The capability to filter out non-licensable software such as drivers or other free software that doesn’t require a license. I can guarantee this functionality will save you a significant amount of time;

3. Reconcile the two, creating your ELP

Once your licenses and usage data are imported, the real functionality of SAM software comes into play – reconciling them.

License Dashboard

At first this sounds like a straightforward process, but there’s a lot to consider. Whatever software you choose needs to be able to cater for the following:

  • Upgrades that you’re entitled to due to Software Assurance;
  • Upgrade versions that allow you to upgrade from some previous versions, but not others;
  • Downgrade rights are available for certain license schemes but not others;
  • With so much data in one application, the ability to remind you of license renewals etc is very useful;

Once reconciled you’ll be left with your ELP, giving you an idea of what capacity you have for additional deployments, or how many more licenses you need to buy.

If you’re interested in License Manager you can find more information on Twitter, LinkedIn or the License Dashboard website. License Dashboard have also published a video, Create an ELP in Three Simple Steps which takes you through the process using their SAM software.

The post SAM Software appeared first on Blog of Dave Hope.
