Make Fortinet SysLogs Human Readable with PowerShell

If you’ve ever looked at the syslogs generated by a Fortinet firewall, you know they are difficult to read. I was unable to find an easy way to make them human readable, so I decided to do it myself with PowerShell and a little help from AI for the regular expressions (regex) needed to extract each key-value pair from the data.

Here’s an example to show what I mean.

Your FortiGate syslog data looks like this:

time=22:59:23 devname="FGT40F-A" devid="FGT100FXXX" eventtime=1751259563122277579 tz="-0600" logid="0000000013" type="traffic" subtype="forward" level="notice" vd="root" srcip=188.117.57.162 srcport=44498 srcintf="wan" srcintfrole="wan" dstip=1.2.3.4 dstport=443 dstintf="lan" dstintfrole="lan" srccountry="United States" dstcountry="Canada" sessionid=36305090 proto=6 action="deny" policyid=17 policytype="policy" poluuid="528bd556-f7ad-51ef-dc1f-395084d39886" policyname="Block1" service="HTTPS" trandisp="dnat" tranip=192.168.1.49 tranport=443 duration=0 sentbyte=0 rcvdbyte=0 sentpkt=0 rcvdpkt=0 appcat="unscanned" crscore=30 craction=131072 crlevel="high"
time=23:07:56 devname="FGT40F-A" devid="FGT100FXXX" eventtime=1751260076522569039 tz="-0600" logid="0000000013" type="traffic" subtype="forward" level="notice" vd="root" srcip=201.163.2.188 srcport=38294 srcintf="wan" srcintfrole="wan" dstip=1.2.3.4 dstport=443 dstintf="lan" dstintfrole="lan" srccountry="United States" dstcountry="Canada" sessionid=36307786 proto=6 action="deny" policyid=17 policytype="policy" poluuid="528bd556-f7ad-51ef-dc1f-395084d39886" policyname="Block2" service="HTTPS" trandisp="dnat" tranip=192.168.1.249 tranport=443 duration=0 sentbyte=0 rcvdbyte=0 sentpkt=0 rcvdpkt=0 appcat="unscanned" crscore=30 craction=131072 crlevel="high"
time=23:11:09 devname="FGT40F-A" devid="FGT100FXXX" eventtime=1751260269071401579 tz="-0600" logid="0000000013" type="traffic" subtype="forward" level="notice" vd="root" srcip=133.118.195.68 srcport=11089 srcintf="wan" srcintfrole="wan" dstip=1.2.3.4 dstport=80 dstintf="VLAN2" dstintfrole="lan" srccountry="Brazil" dstcountry="Canada" sessionid=36308833 proto=6 action="deny" policyid=17 policytype="policy" poluuid="528bd556-f7ad-51ef-dc1f-395084d39886" policyname="Block2" service="tcp/82" trandisp="dnat" tranip=172.16.1.32 tranport=80 duration=0 sentbyte=0 rcvdbyte=0 sentpkt=0 rcvdpkt=0 appcat="unscanned" crscore=30 craction=131072 crlevel="high"

Unreadable, right? To fix that, run your syslog data through the script below and every key-value pair becomes its own column in a sortable grid view.

So much easier to read, right? Not to mention you can now filter and sort to your heart’s content.

# Example logs - Use Get-Content or extract from a Syslog database or whatever you have to do to get the raw syslog data
$Logs = @'
time=22:59:23 devname="FGT100F-A" devid="FGT100FXXX" eventtime=1751259563122277579 tz="-0600" logid="0000000013" type="traffic" subtype="forward" level="notice" vd="root" srcip=188.117.57.162 srcport=44498 srcintf="wan" srcintfrole="wan" dstip=1.2.3.4 dstport=443 dstintf="lan" dstintfrole="lan" srccountry="United States" dstcountry="Canada" sessionid=36305090 proto=6 action="deny" policyid=17 policytype="policy" poluuid="528bd556-f7ad-51ef-dc1f-395084d39886" policyname="Block1" service="HTTPS" trandisp="dnat" tranip=192.168.1.49 tranport=443 duration=0 sentbyte=0 rcvdbyte=0 sentpkt=0 rcvdpkt=0 appcat="unscanned" crscore=30 craction=131072 crlevel="high"
time=23:07:56 devname="FGT100F-A" devid="FGT100FXXXL" eventtime=1751260076522569039 tz="-0600" logid="0000000013" type="traffic" subtype="forward" level="notice" vd="root" srcip=201.163.2.188 srcport=38294 srcintf="wan" srcintfrole="wan" dstip=1.2.3.4 dstport=443 dstintf="lan" dstintfrole="lan" srccountry="United States" dstcountry="Canada" sessionid=36307786 proto=6 action="deny" policyid=17 policytype="policy" poluuid="528bd556-f7ad-51ef-dc1f-395084d39886" policyname="Block2" service="HTTPS" trandisp="dnat" tranip=192.168.1.249 tranport=443 duration=0 sentbyte=0 rcvdbyte=0 sentpkt=0 rcvdpkt=0 appcat="unscanned" crscore=30 craction=131072 crlevel="high"
time=23:11:09 devname="FGT100F-A" devid="FGT100FXXX" eventtime=1751260269071401579 tz="-0600" logid="0000000013" type="traffic" subtype="forward" level="notice" vd="root" srcip=133.118.195.68 srcport=11089 srcintf="wan" srcintfrole="wan" dstip=1.2.3.4 dstport=80 dstintf="VLAN2" dstintfrole="lan" srccountry="Brazil" dstcountry="Canada" sessionid=36308833 proto=6 action="deny" policyid=17 policytype="policy" poluuid="528bd556-f7ad-51ef-dc1f-395084d39886" policyname="Block2" service="tcp/82" trandisp="dnat" tranip=172.16.1.32 tranport=80 duration=0 sentbyte=0 rcvdbyte=0 sentpkt=0 rcvdpkt=0 appcat="unscanned" crscore=30 craction=131072 crlevel="high"
'@ -split "`r`n"

Function Convert-UnixTimeToDateTime($inputUnixTime){
    # Convert nanoseconds to ticks (1 tick = 100 ns). Use [decimal] so the
    # 19-digit nanosecond value doesn't lose precision as a [double]
    $Ticks = [long][math]::Floor([decimal]$inputUnixTime / 100)

    # Unix epoch as DateTime (in UTC)
    $Epoch = Get-Date -Date "1970-01-01 00:00:00Z"

    # Add ticks to epoch
    $ReadableTime = $Epoch.AddTicks($Ticks)

    # Show result in local time
    $ReadableTime.ToLocalTime()
}

Function Parse-FortiGateSyslogMsg {
    param (
        [string]$msg
    )

    $result = @{}

    # Pattern: key="value with spaces" OR key=value (no quotes)
    $pattern = '\b(?<key>\w+)=(".*?"|\S+)'

<#

 Regex Character | Purpose
-----------------|----------------------------------------------------------
 \b              | Asserts a word boundary.                                 
 (               | Opens a capturing group.                                 
 ?               | Marks the following group as a named capturing group.    
 <key>           | Names the capturing group as "key".                      
 \w              | Matches any word character (alphanumeric and underscore).
 +               | Matches one or more of the preceding character or group. 
 )               | Closes the named capturing group.                        
 =               | Matches the literal equals sign character.               
 (               | Opens a capturing group (for the value alternatives).    
 "               | Matches a literal double quote.                          
 .               | Matches any character (except newline by default).       
 *               | Matches zero or more of the preceding character or group.
 ?               | Makes the preceding quantifier lazy (matches as few as possible).
 "               | Matches a literal double quote.                          
 |               | Acts as an OR operator, allowing either the left or right pattern to match.
 \S              | Matches any non-whitespace character.                    
 +               | Matches one or more of the preceding character or group. 
 )               | Closes the capturing group for the value alternatives.  

#>


    foreach ($match in [regex]::Matches($msg, $pattern)) {
        $key = $match.Groups['key'].Value
        $value = $match.Value -replace "^\w+=", ''

        # Strip quotes if present
        if ($value.StartsWith('"') -and $value.EndsWith('"')) {
            $value = $value.Substring(1, $value.Length - 2)
        }

        # Special case for Fortinet's epoch-nanosecond eventtime values
        if ($key -ieq 'eventtime' -and $value -match '^\d+$') {
            try {
                $value = Convert-UnixTimeToDateTime $value
            } catch {
                # If parsing fails, keep original
            }
        }

        $result[$key] = $value
    }

    return $result
}

$Results = @()
foreach ($Entry in $logs) {
    $parsed = Parse-FortiGateSyslogMsg -msg $Entry
    $Results += [pscustomobject]$parsed
}

$Results | Out-GridView
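
Because each line is now a proper object, you can also slice the data before (or instead of) the grid view. Here’s a minimal sketch, assuming you only care about denied traffic and want a CSV copy for later (the output path is just an example):

# Keep only denied traffic and save a copy for later analysis
$Denied = $Results | Where-Object { $_.action -eq 'deny' }
$Denied | Sort-Object srcip | Export-Csv -Path 'C:\Temp\FortiGate-Denied.csv' -NoTypeInformation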

I’ve found this incredibly helpful and have used it a lot since I wrote it, so I figured I’d share.

Identify cause of performance issue with Hyper-V Cluster and Shared Storage

Do you have a Hyper-V cluster with multiple nodes where all of your VMs live on shared storage?  Do you often have users complain that the virtual servers are “slow”?

When you check task manager inside the VM, you don’t see obvious issues.  You then open Resource Monitor and go to the disk tab and you see that the “Active Time” is showing as 100%.  The VM feels slow even though task manager otherwise says it’s idle.  You check your shared storage console and the latency and cache hit rate and other metrics all look normal – nothing obvious to indicate any performance issues.

How do you determine exactly what is causing the performance woes?  I found myself in this exact situation and I found a solution and figured if it helped me, it can help you.

The Solution:

Run the PowerShell script below on one of the Hyper-V hosts in your cluster.  Any host will do, so long as it’s part of the cluster.

The script will connect to each Hyper-V host and extract the performance counters for Read Bytes/sec and Write Bytes/sec.  It will then combine the data for all VMs on all the hosts and create a third “TotalBytes/s” column.  Finally, it will display a grid view of every individual VHDX file and its real-time read/write bytes/s.  If one or more VMs are exceedingly high for a long period of time, those VMs are almost certainly sucking up all of your disk IO and starving the other VMs, resulting in the user complaints of slowness.

Crucially, at the end the script will output the sum of TotalBytes/s used across all VMs.  This becomes your one-stop value for determining whether high read/write throughput on your shared storage is the cause of your performance issues.  In our environment, anything up to 200 MB/s is generally OK, but above 300 MB/s people will start to complain, and above 400 MB/s VMs nearly grind to a halt when used interactively.


Identify what files on a specific VM are using all the disk IO

Now that you know one or more VMs are using all the disk IO on your SAN, how do you identify what specific files are being read inside those VMs?  You could log into the VMs, open Resource Monitor and check the disk tab, but that’s not always a readily available option.  An alternative solution is below.

This leverages the amazing Nirsoft “FileActivityWatch” tool (https://nirsoft.net/file_activity_watch.html) to output a list of the most accessed files inside the VM.  The script for that is below.  This version is intended to be used from an RMM tool, so modify it for your use case as needed.

Get-ClusterDiskPerVMUsageMetrics.ps1

$HyperVHosts = (Get-ClusterNode).Name

if (-not $HyperVHosts) { Write-Warning "Clustered Hyper-V hosts not found. Is this script running on a clustered Hyper-V host?"; break }

$allCounterData = @()

foreach ($HyperVHost in $HyperVHosts) {
    $counters = Get-Counter -ComputerName $HyperVHost -Counter @(
        "\Hyper-V Virtual Storage Device(*)\Read Bytes/sec",
        "\Hyper-V Virtual Storage Device(*)\Write Bytes/sec"
    )

    ForEach ($sample in $counters.CounterSamples) {
        $allCounterData += [PSCustomObject]@{
            HyperVHost   = $HyperVHost
            InstanceName = $sample.InstanceName
            Path         = $sample.Path
            CounterName  = if ($sample.Path -like '*Read Bytes/sec') { 'Read' } else { 'Write' }
            BytesPerSec  = $sample.CookedValue
        }
    }
}

$results = $allCounterData | Group-Object HyperVHost, InstanceName | ForEach-Object {
    
    $group = $_.Group
    $hyperVHost = $group[0].HyperVHost
    $instanceName = $group[0].InstanceName

    if($instancename -match 'vhdx')
    {
        # Use [int64] so high-throughput VMs (> 2 GB/s aggregate) don't overflow
        [int64]$readBytes = [math]::Round((($group | Where-Object { $_.CounterName -eq 'Read' }).BytesPerSec | Measure-Object -Sum).Sum, 0)
        [int64]$writeBytes = [math]::Round((($group | Where-Object { $_.CounterName -eq 'Write' }).BytesPerSec | Measure-Object -Sum).Sum, 0)
        [int64]$totalBytes = $readBytes + $writeBytes

        [PSCustomObject]@{
            HyperVHost  = $hyperVHost
            VMInstance  = $instanceName
            'ReadBytes/s'   = $ReadBytes
            'WriteBytes/s'  = $writebytes
            'TotalBytes/s'  = $totalbytes
        }
    }
}

$results | Sort-Object 'TotalBytes/s' -Descending | Out-GridView

$Total = [math]::Round((($Results | Measure-Object 'TotalBytes/s' -Sum).Sum / 1MB), 0)

$HyperVHostsString = $HyperVHosts -join ";"

$ts = [string](get-date)
write-host "Data Collection Date: $ts" -ForegroundColor Green
write-host "[$HyperVHostsString] processed $Total MB/s" -ForegroundColor yellow

Get-VMPerFileUsage.ps1

# Do what you need to do to ensure the FileActivityWatch.exe program from Nirsoft is in c:\windows\temp before running this script
# https://www.nirsoft.net/utils/file_activity_watch.html

$LogFile = "c:\windows\temp\fileactivity.csv"
if (Test-Path $logFile) { Remove-Item $LogFile }

c:\windows\temp\FileActivityWatch.exe /scomma $LogFile /capturetime 10000

Start-sleep -Seconds 12

Import-Csv $LogFile | ForEach-Object {

    $ReadBytes = [int64]$_.'Read Bytes'
    $WriteBytes = [int64]$_.'Write Bytes'
    $TotalBytes = $ReadBytes + $WriteBytes

    [PSCustomObject]@{
        FileName       = $_.'Filename'
        ProcessName    = $_.'Process Name'
        ProcessID      = $_.'Process ID'
        'ReadBytes/s'  = ("{0:n0}" -f $ReadBytes) + " bytes/s"
        'WriteBytes/s' = ("{0:n0}" -f $WriteBytes) + " bytes/s"
        'TotalBytes/s' = ("{0:n0}" -f $TotalBytes) + " bytes/s"
        TotalBytesRaw  = $TotalBytes
    }
} | Sort-Object TotalBytesRaw -Descending |
    Select-Object -First 25 -Property * -ExcludeProperty TotalBytesRaw |
    Format-List


The output looks as shown below.  In this case, several ISOs were being copied on the C: drive during the 10-second window the command collected data for.

You are now armed with the information you need to quiet the VM disk utilization and stop your users from complaining that the VMs are “slow”.

Migrate Azure VM to OnPrem Hyper-V Host

I recently had to move a test VM from Azure to an on-prem Hyper-V host.  I wasn’t able to find a clear guide for migrating a simple single VM, so I figured I’d share how I achieved it:

1) In the Azure portal, browse to the VHD disk for the VM you wish to migrate, choose Create Snapshot on the overview page and choose Full

2) Once the snapshot is generated, open it and click the Snapshot Export button on the left pane

3) Click Generate URL to create a SAS URL to download the VHD

4) Use azcopy.exe to download the file.
Note: Do not use a web browser, as the VHD files are typically too large and the downloads will eventually fail.  Azcopy.exe can be found here:

https://aka.ms/downloadazcopy-v10-windows

The command to download the file is:

azcopy.exe copy "[SASURL]" "C:\location\vmname.vhd"

5) Convert the VHD to VHDX using this command:

Convert-VHD -Path C:\location\vmname.vhd -DestinationPath C:\location\vmname.vhdx -VHDType Dynamic

6) In Hyper-V Manager, create a new Gen 2 VM, attach this VHDX to it and start it.  It should boot normally.
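
If you’d rather script step 6 than click through Hyper-V Manager, a minimal PowerShell sketch looks like this (the VM name, memory size and switch name are examples for illustration):

# Create a Gen 2 VM attached to the converted disk, then start it
New-VM -Name 'MigratedVM' -Generation 2 -MemoryStartupBytes 4GB -VHDPath 'C:\location\vmname.vhdx' -SwitchName 'LAN'
Start-VM -Name 'MigratedVM'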

The only issue I had was the license warning below, which I haven’t yet been able to resolve.  Otherwise the test VM is working, and since it will be deleted shortly I’ve just left it as is.


I hope this helps if you ever need to perform a quick and dirty reverse cloud migration.

HOWTO: Move EFI System Partition so you can increase space on System Drive

You’ve likely found this page because you have a Windows virtual machine that is low on space on the C: drive. You added more space to the VM using your hypervisor administration tool, but when you log into the VM to try to extend the C: drive, you find yourself confronted with the problem below:

The EFI System partition is sitting between your C: drive and your free space meaning that the “Extend” option on the C: drive is greyed out. Well crap, now what?

I spent a lot of time Googling this but I never found a satisfactory answer. Most of what I did find involved dealing with the recovery partition being to the right of the C: drive and doing various things to remove it.

The problem is we can’t remove the EFI System partition as it’s required for Windows to boot. We somehow need to move the EFI system partition from the right side of the C: drive to the left. But how?

Below are the steps I’ve found that have worked for me. Of course, this information is provided as-is. Ensure you have complete backups of your drive before proceeding.

Continue reading

Explain “You do not have permission” error as a Domain Admin

I recently had a customer express frustration that they could no longer manage file permissions on their Windows Server with a newly created domain admin account. They would receive a “You do not have permission to access” error when trying to open folders in Windows Explorer, even though they were confirmed to have full permissions to those folders.

I’ve run into this problem before but admittedly never fully understood why. I do now though and wanted to share my learnings for the benefit of others.
Here’s the scenario:

  • Use the COMPANY\administrator account to perform NTFS file share permission changes by RDPing directly into SERVER1, opening the folder in question using its UNC path and modifying the NTFS permissions. This works as expected
  • COMPANY\administrator is a member of Domain Admins which in turn is a member of the local administrators group on all servers
  • The COMPANY\administrator account is then permanently disabled and a replacement account called COMPANY\newadmin is created. This account is assigned to the Domain Admins group and thus should have permissions identical to those of COMPANY\administrator
  • RDP into SERVER1 as COMPANY\newadmin and try to open the same folder using the same path that previously worked as COMPANY\administrator and receive this error:

  • Next try to open the folder directly in Windows Explorer (rather than the UNC share) and get this message:
  • Pressing Continue grants access to the folder, but it does so by granting the logged-in user “Full Control” in the ACL of every individual file and folder selected. This can take a long time, and it isn’t intuitive why it’s even necessary, as COMPANY\newadmin is already a member of groups that have access to this folder
  • Using the “Effective Access” verification tool in Windows confirms that COMPANY\newadmin is supposed to have “Full Control” of the folders, and yet that is not the behavior we see
  • As you might guess, the root cause is ultimately User Account Control (UAC), but it’s a little more nuanced than I would have expected. UAC strips the admin token from the non-elevated session of COMPANY\newadmin at login, which means that when Explorer.exe is started, it runs in a non-elevated session.
  • This ends up being a problem as documented by Microsoft here:

https://docs.microsoft.com/en-us/troubleshoot/windows-server/windows-security/dont-have-permission-access-folder

“This behavior is by design. But because the typical pattern with UAC elevation is to run an instance of the elevated program with administrative rights, users may expect that by selecting Continue, which will generate an elevated instance of Windows Explorer, and not make permanent changes to file system permissions. However, this expectation isn’t possible, as Windows Explorer’s design doesn’t support the running of multiple process instances in different security contexts in an interactive user session.”

That’s the issue. Explorer is running non-elevated, and because Windows Explorer’s codebase long predates User Account Control as a concept, it doesn’t support switching from a standard to an elevated session. Microsoft created the workaround we are familiar with, whereby Explorer launches a different process that updates the ACLs of each individual file, but that’s a kludgy workaround at best.

This behavior can be confirmed by accessing the same folder using any other tool, such as PowerShell. In the example below, I’m logged in as COMPANY\newadmin and am trying to open a folder. On the left, I try to do so through Windows Explorer and I’m denied. On the right, in the exact same session, I do so in an elevated PowerShell session and it works fine:
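
If you want to reproduce the test yourself, even a simple directory listing shows the difference between the two contexts. A sketch with a hypothetical path:

# In an elevated PowerShell session as COMPANY\newadmin this succeeds,
# while the same path is access-denied in the non-elevated Explorer window
Get-ChildItem -Path 'D:\Shares\Finance'
Get-Acl -Path 'D:\Shares\Finance' | Format-List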

  • If we check Task Manager, we can see that explorer.exe is in fact running without elevated rights by design when UAC is enabled.
    Due to how explorer.exe is architected, a second elevated instance cannot be started in the same user session:

This is great but still leaves an unanswered question. Why was the customer able to modify these exact same permissions with the COMPANY\administrator account on the same server? I would think that account should be subject to the same security, but it clearly wasn’t. I speculated that a previous administrator in years past may have created some kind of exception for the account. It took some digging, but I believe I found that setting:

Under Computer Configuration –> Windows Settings –> Security Settings –> Local Policies –> Security Options:

You can see above that “User Account Control: Admin Approval Mode for the Built-in Administrator account” is set to Disabled, which means everything for that account, including Explorer, runs elevated.
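
If you’d rather verify this from PowerShell than dig through Group Policy, that policy is backed by the FilterAdministratorToken registry value. A quick sketch (0 or a missing value means the policy is disabled, 1 means enabled):

# Check Admin Approval Mode for the built-in Administrator account
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System' |
    Select-Object FilterAdministratorToken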

Now that we understand that UAC is causing this issue, how can we work around it?

Microsoft recommends the best practice of performing all server administration remotely, as opposed to logging into individual servers directly via RDP. I think this is why this issue has never been fully addressed: if administrators followed this best practice, they would never encounter it. And even though you get an access denied error when accessing the shares locally on the server, file access works fine when accessed remotely through UNC.

This brings us to the workaround which is remarkably simple:

  1. Connect to SERVER1 remotely from a different server using the COMPANY\newadmin credentials and browse to the desired file share
  2. Modify your permissions from there as desired. You will no longer receive the security prompts described above (a scripted example follows below)
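
As a concrete example, here is what a scripted permission change over the UNC path might look like from that second server (the share and group names are hypothetical):

# Run from a different server while logged in as COMPANY\newadmin
$acl = Get-Acl -Path '\\SERVER1\Finance'
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule(
    'COMPANY\FinanceUsers', 'Modify', 'ContainerInherit,ObjectInherit', 'None', 'Allow')
$acl.AddAccessRule($rule)
Set-Acl -Path '\\SERVER1\Finance' -AclObject $acl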

Every Billboard Hot 100 #1 from 1958 to 2020

This is a weird one.

A thought popped in my head this evening I couldn’t shake. What would a music critic from the first year the Billboard Hot 100 launched think of the progression of pop music since? We live in the age where all the information you could want is at your finger tips so I decided to try and find out. Here’s what I did:

  • I went to Billboard’s website and got a list of the #1 song from each year since its founding in 1958
  • I loaded up Audacity and configured it to capture audio from my computer
  • I searched YouTube for every single song on the list in chronological order and played a random short sample from each song, recording directly to an MP3
  • I saved that file and uploaded it here, and it is now available for your listening enjoyment (62 years of music in 16 minutes)

If you’re going to consume this silly experiment the way it was intended, I recommend putting on a pair of headphones, closing your eyes and pretending you are a music critic from the late 1950s. You’ve been told you will hear #1 hits from the future and you must attempt to find the throughline of themes and melodies for the next 60 years of music. Listening to this all at once proved to be an enlightening experience as the individual threads that make up the progress of modern pop music became ever more visible.

A few notes of trivia:

  • From 1958 to 2020, only one artist has had a #1 hit more than once. Can you guess who? Yep. The Beatles
  • Elton John gets an honorable mention though as he had a second bonus number one in 1997 for his tribute to Princess Diana
  • A few of the song clips go a little longer than others. That wasn’t intentional; rather, I realized I was vibing to the music and opted to leave those slightly longer clips in
Clips from every Billboard Hot 100 #1 from 1958 to 2020

The full list of songs is below:

Continue reading

HOWTO: Find NAV user running long SQL queries

I was recently tasked with determining why a NAV 2018 installation was occasionally performing poorly.  I suspected that one or more users were running particular queries or functions but I needed to find a way to prove that.  The catch was I am not a SQL DBA and I know even less about NAV.  But that’s how it is sometimes.  It falls on you when no one else can figure it out.

I did a lot of Googling but the consensus seems to be that it’s not possible to find the specific NAV user running any given SQL query because from the perspective of SQL every query is run as the NAV service account.  I found posts that explain how to enable SQL/NAV debugging to try and capture the user name in real time but this puts a lot of extra load on an already poorly performing system and we didn’t know exactly when the issue would occur.

I decided to build a NAV 2018 lab environment with multiple users to see if I could find a way to determine which user was running which queries.  I came up with something that seems to work and wanted to share it in case it benefits others.

In the screenshot below, I have 2 users called ADMIN-RV and JSMITH.  I used the NAV Client to perform various NAV functions with each user.  The report below shows how long each query took to execute, the full SQL query (not limited to just 4,000 characters) and most importantly of all, the actual NAV user account that executed the query.  The rows that do not include a username are internal system queries and are not associated with any end user.  The report below shows all queries for testing but in production we would limit it to queries that ran for longer durations.


Continue reading

HOWTO: Deploy Dynamics NAV Contact Insights Outlook Addin End to End

This HOWTO explains how to configure a completely fresh on-premises environment with Dynamics NAV 2018 and the Contact Insights Dynamics NAV Outlook addin while using Azure AD for authentication.

The reason this HOWTO was created is that a customer wanted to use the Contact Insights NAV plugin for Outlook. It was determined that this plugin does not support the “Windows” based authentication NAV uses by default and instead must use either “NavUserPassword” or Azure AD authentication. The latter provides more of a single sign-on experience, and since the customer already uses Office 365, it was decided to implement the addin using Azure AD.

Unfortunately, the documentation Microsoft provides is lacking in implementation details, so there was considerable banging of my head against the wall. Now that I’ve gotten it working, I wanted to document my steps for the benefit of both others and future me.

This HOWTO is partially based on the official Microsoft guides for configuring Azure AD and the Outlook addin, which are available here:

https://docs.microsoft.com/en-us/dynamics-nav/authenticating-users-with-azure-active-directory
https://docs.microsoft.com/en-us/dynamics-nav/setting-up-office-add-ins-outlook-inbox

In order to proceed, you will need the NAV 2018 installation media. It can be downloaded at the link below; at the time of this writing, the newest version available is Cumulative Update 20.

Note: This free download can be used to install the full application and includes a demo license and database that will be sufficient for testing

https://www.microsoft.com/en-us/download/details.aspx?id=58503&WT.mc_id=rss_alldownloads_all

Here is what our lab environment looks like. For your purposes, please replace any reference to “company” with the name of your Office 365 tenant or domain name as appropriate

Continue reading

2019 Okanagan Half Marathon Route Map

UPDATE:  Since I made the waypoints anyway, I thought it might be fun to make a video flythrough of the entire 21.1 km of the course.  It’s quick and dirty and more than a little silly, but it does serve to demonstrate that this is not going to be easy.  Check out the video here:


I’m as surprised as anyone but I have officially registered and paid to run a 21.1km half marathon this October.  Specifically I have entered the 2019 Okanagan Half Marathon which takes place in Kelowna, BC on October 20th.  For those keeping score at home, that’s just 6 months from the time of this writing.

I wanted to know what the route looked like so I could better visualize and mentally prepare during my training.  Unfortunately while the official website (available at https://www.okanaganmarathon.ca/route-maps-p183040) includes a “Route Map” for the “21K”, it actually only includes written directions.  That’s not terribly useful.

So I decided to manually map out all of the waypoints of the course in Google Earth Pro.  I figured I’d post this here in case it’s useful for anyone participating in the same race, or anyone more generally interested in what 21 km actually looks like.

To start us off, here is what the course looks like when taken in as a whole.  The segments highlighted in yellow are those that have to be completed twice (once in each direction).


Continue reading

Celebrate International Women’s Day with 24 Radio Hours of Music by Women

I heard tonight that a local radio station will celebrate International Women’s Day by playing songs exclusively by women for 24 straight hours.  This got me thinking — could I fill an entire day’s worth of music sung only by women and do so using only songs that I actually know and like?

I realized I have almost 1,000 songs in my MP3 collection, acquired over two decades.  I figure if I have it, it’s a safe bet that I like the song, so I wondered how many hours all those songs would add up to.

First I needed to set a couple of ground rules.  Since the objective is to fill “24 hours” of radio airplay, I have to take into account commercials and DJ banter.  Some googling suggests that a typical radio station plays music for about two-thirds of every hour, which sounds about right.  That works out to 40 minutes each hour, or 16 total hours of music over a 24-hour period.

Finally, to be eligible for this list, the song must be sung exclusively by a woman or women.  Duets or guest spots make the song ineligible.

To figure this out, I needed to scan all of my MP3s and dump the list into Excel along with each song’s duration.
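
One way to pull the durations without opening each file (a sketch of the general approach, not necessarily what I used; the folder path is an example, and the “Length” column index can vary between Windows versions):

# Read the Length attribute Windows Explorer exposes for each MP3
$shell = New-Object -ComObject Shell.Application
$folder = $shell.Namespace('C:\Music')
$folder.Items() | Where-Object { $_.Name -like '*.mp3' } | ForEach-Object {
    [PSCustomObject]@{
        Name     = $_.Name
        Duration = $folder.GetDetailsOf($_, 27)   # 27 is 'Length' on most modern Windows builds
    }
} | Export-Csv -Path 'C:\Music\durations.csv' -NoTypeInformation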

Continue reading