Unreleased Set-Top Box Project

At the beginning of August 2019, I transitioned from my previous contract placement to another with You.i TV. The focus of this project was updating the UI and UX of several established set-top box brands to match modern implementations such as Netflix, Disney+, and Amazon Prime. These set-top boxes ran on different custom distributions of Linux with vastly different hardware specifications, so the work required flexible, portable code that would operate across a multitude of feature sets. During the contract, I implemented features such as:

  • email integration
  • the main landing screen and dynamic configuration processing
  • VOD playback (for certain boxes)
  • SD and HD subtitles using EIA-608, EIA-708 and DVB captioning standards
  • server-sent event (SSE) management and dispatch

and worked on a variety of bugs relating to managing recordings, enabling playback, and minor optimizations for lower-end boxes. I worked with the technical lead on several occasions to help diagnose engine-level issues that prevented the application from running on the lower-end hardware, and communicated on occasion with the engine team to provide fixes. I also created several testing tools using ExpressJS to mock SSE for various pieces of functionality ahead of delivery from the client and co-developers.
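
To give a sense of what those mock tools looked like, here is a minimal ExpressJS sketch of an SSE endpoint; the event name and payload are hypothetical stand-ins for the real contract:

    const express = require('express');
    const app = express();

    // Hypothetical event name and payload; the real ones followed the client's spec.
    app.get('/events', (req, res) => {
      res.set({
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
        'Connection': 'keep-alive',
      });
      res.flushHeaders();

      // Push a fake recording-state change every few seconds.
      const timer = setInterval(() => {
        res.write('event: recordingUpdate\n');
        res.write(`data: ${JSON.stringify({ recordingId: 42, state: 'SCHEDULED' })}\n\n`);
      }, 3000);

      req.on('close', () => clearInterval(timer));
    });

    app.listen(3000);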

During the latter part of 2019, SimWave wanted me back in the office one day a week, so I began working through the logistics of making that a reality. The set-top box signal comes from a coax feed hardwired through the You.i office, so getting the feed out of the office in any capacity other than video capture was not really an option. In November, during the You.i Hack Day, I worked with a coworker on a remote solution that facilitated communication with the boxes via a virtual remote and streamed their video feed using WebRTC. The virtual remote was hosted using ExpressJS and featured a photo of a real STB remote; button presses sent virtual keycodes to a backend server. The server would send those keycodes over a serial connection to an Arduino board, which converted them into their NEC IR protocol equivalents and blasted them at the STB. We had captured the real IR output from a remote and hardcoded the resulting codes into the Arduino to mock the remote’s functionality.
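
A rough sketch of that relay, assuming serialport’s v10+ API; the button names, byte codes, and serial path here are hypothetical (the real values were captured from the physical remote):

    const express = require('express');
    const { SerialPort } = require('serialport'); // serialport v10+ API

    // Hypothetical table; the real byte values were captured from the physical remote.
    const KEYCODES = { power: 0x01, up: 0x02, down: 0x03, ok: 0x04 };

    const arduino = new SerialPort({ path: '/dev/ttyACM0', baudRate: 9600 });
    const app = express();

    // Each button press on the virtual remote posts its name; we forward a one-byte
    // code over serial, and the Arduino replays the matching NEC IR burst at the STB.
    app.post('/key/:button', (req, res) => {
      const code = KEYCODES[req.params.button];
      if (code === undefined) return res.sendStatus(404);
      arduino.write(Buffer.from([code]));
      res.sendStatus(204);
    });

    app.listen(8080);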

The project was about 70% complete by the time the November Hack Day finished, and was revisited and completed during the following Hack Day in March 2020. The day afterward, the first confirmed case of COVID-19 in Kanata was announced, which prompted the company to shift to working from home. Over the following weeks, I worked with the technical and team leads for our project to scale this solution up to the entire team, which involved Elgato HDMI capture cards, HDMI multiviewers, and several laptops for redundancy. I trained several members of the team in how to maintain this solution, which was still in place as I transitioned back to working at SimWave proper.

Role: Developer
Time on Project: Aug 2019 – Dec 2020
Technologies used: You.i Engine, NodeJS (ExpressJS, serialport), Arduino
Languages used: C/C++, Bash, JavaScript

Simutech Training System

The 3D, hands-on Electrical Troubleshooting Simulations provide an immersive environment where your professionals will learn to diagnose and repair electrical faults in complex production machinery safely, accurately, and efficiently.

During the late fall of 2018, I was brought on to assist Simutech Multimedia and their team with adapting and reimagining their 2D electrical troubleshooting software in 3D. They had specifically asked to use Unity, as their team had some experience with the engine, so we set about discussing requirements and determining what would need to be done for a successful port.

The project changed rather significantly during my time working on it; initially envisioned as a desktop application, it grew into a web app with a connected services backend to help support the lower-end hardware their clients had. I built the primary interaction framework that made objects throughout the scene accessible, so that they could be inspected and, if deemed damaged, replaced. Along with that, I was responsible for building the application’s navigation using feedback from Simutech and their clients, and for facilitating communication with Simutech’s electrical solver software to properly evaluate the state of the circuit at all times.

Later, I investigated WebGL as a potential platform, and once Simutech was satisfied with that direction, we explored how best to implement the user interface for much of the learning content they had. Their design team sought to provide the learning content as HTML/CSS, so I developed a solution that allowed the same content to be presented both in-editor and in builds, and to integrate successfully into several popular LMSs. During this time, the team was also exploring Kubernetes as a potential avenue for managing the deployment of the application, and shortly thereafter my time on the contract was complete.

Role: Lead Developer
Time on Project: Oct 2018 – Jan 2020
Technologies used: Unity, Microsoft Azure, HTML5
Languages used: C#, JavaScript

Inside the Mind and Body – Central Multimedia Experience

A large multimedia experience brings visitors inside the mind and body of extreme athletes to explore their experiences and motivations.

In January 2018, SimWave was approached by Science North to produce three experiences for their Beyond Human Limits exhibit, in collaboration with the Ontario Science Centre. The Central Multimedia Experience was envisioned as a hybrid quiz game/motivational experience for visitors who had explored the other zones within the exhibit. The area was designed as a hexagon and segmented into three opposing walls, each with a projector playing video that either presented educational content or posed a hypothetical question.

During the quiz section of the experience, visitors would be asked a question about the most beneficial approach to learning a new skill and given two scenarios. All three projected walls would play a video presenting the question, then two would show the possible scenarios for achieving the desired goal. Visitors would then have to move to the wall showing the scenario they believed to be the correct answer. After a brief time had elapsed, cameras suspended from the ceiling would determine how many individuals were at each wall and select the more popular answer. The selected answer would play on all three walls, followed by the correct answer and a brief segment following the life and struggles of an extreme athlete.

TouchDesigner was chosen as the development platform largely because of its built-in video processing capabilities, and because Science North had licenses remaining from a previous jointly developed project. During development, we initially aimed to use RGB cameras to keep costs down, since we already owned licenses for software that could perform person detection on a live video feed and report the results back to us via a RESTful API. We were provided all three videos in a single video file, as one computer driving three projectors would run the experience. I developed a JSON configuration file that contained the start/end timestamps for each video segment, the correct and incorrect answers, and the overall flow of the application, so that staff could reconfigure it at their discretion.
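
The structure would have looked something like this sketch (the field names and timings are hypothetical; the real file followed the client’s content):

    {
      "video": "beyond_human_limits_master.mp4",
      "questions": [
        {
          "prompt":  { "start": 0.0,  "end": 41.2 },
          "options": [
            { "wall": "left",  "segment": { "start": 41.2, "end": 63.8 } },
            { "wall": "right", "segment": { "start": 63.8, "end": 86.4 } }
          ],
          "correct": "left",
          "reveal":  { "start": 86.4, "end": 151.0 }
        }
      ]
    }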

Once I arrived on-site to begin installation, I noticed that the lighting in the area was very dim, and found that the software could no longer distinguish people accurately. Moreover, the software had issues with individuals wearing shirts with printed faces, and with individuals wearing backpacks. At that point, the RGB cameras were replaced with RealSense depth cameras, and the camera handling was rewritten to remove the dependence on the person-tracking software and instead use blob detection via background removal. As time on-site was very limited, background removal was done by photographing the area with no participants present and subtracting the live feed at runtime. As I completed my time on-site, the client asked that children be weighted more heavily than adults, so, using a height map, I applied a valuation curve to the height data and calculated the average pixel value to determine which area would be considered the selected answer.
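
A simplified sketch of the idea, in illustrative JavaScript (the shipped logic lived inside TouchDesigner; the threshold and weighting curve here are hypothetical):

    // `depth` and `background` are per-pixel depth readings (in mm) from a
    // ceiling-mounted camera; `background` was captured with the area empty.
    function pickWall(depth, background, width) {
      const THRESHOLD = 100; // mm of change needed to count a pixel as a person (hypothetical)
      const scores = { left: 0, right: 0 };

      for (let i = 0; i < depth.length; i++) {
        const height = background[i] - depth[i]; // how far above the background this pixel sits
        if (height < THRESHOLD) continue;

        // Valuation curve: weight shorter participants (children) more heavily.
        const weight = height < 1400 ? 2.0 : 1.0; // crude child/adult split, hypothetical
        const x = i % width;
        scores[x < width / 2 ? 'left' : 'right'] += weight;
      }
      return scores.left >= scores.right ? 'left' : 'right';
    }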

After installation was complete and the client was happy, I provided developer support until the contract expired at the end of the year.

Role: Sole Developer/Support
Time on Project: Jan 2018 – Dec 2018
Technologies used: TouchDesigner, Intel RealSense cameras
Languages used: Python, C++

ClickFunnel Stats

The official ClickFunnels app for iOS and Android; the app allows users to check on the growth and status of their funnels while on the go.

I worked with the ClickFunnels team to realize their intended UI design and performed countless hours of UX testing to provide the best possible experience. Both apps were developed natively to remove any barriers to optimal performance, leveraging a combination of ClickFunnels’ mobile website and Turbolinks to enhance the experience within the app. Recently, Face ID and Touch ID support were added to the iOS app to provide users with an additional means to log in, should they have multiple accounts.

Role: Lead Developer/Support
Time on Project: Feb 2018 – Aug 2018
Technologies used: Xcode, Android Studio
Languages used: Objective-C, Swift, Java, JavaScript

Forensic Psychology Training

We have developed virtual reality experiences that allow medical staff to train for violent situations, manage critical incidents, and become accustomed to procedural tasks from their field of work. We have developed a suite of scenarios for St. Joseph’s Hospital & McMaster University that nurses, doctors, and other medical professionals can train with to prepare for complex and potentially dangerous in-field situations. Through the use of virtual reality, we are able to immerse the trainee in an environment that simulates real life, giving the training the greatest possible impact on in-field experiences. More specifically, we have recreated scenarios that occurred in the past, although rather than having a static ending, the medical professionals can make their own decisions along the way. Their decisions impact the overall experience and outcome.

Role: Lead Developer
Time on Project: May 2017 – Jan 2018 (Contraband experience); Apr 2018 – May 2018 (Prison Seclusion experience)
Technology used: Unreal Engine 4
Languages used: C++, Blueprints

Being Me – Insider Tours

Video: https://www.youtube.com/watch?v=58ElIRAMF8Y

For Insider Tours, I acted as Lead Developer throughout the project, overseeing its development through several staffing changes.

During the primary development cycle, I worked on:

  • almost all of the sound design/implementation
  • analyzing the amplitude of audio files to provide the automatic mouth movements of Zoey the Robot (see the sketch after this list)
  • developing the HMD-related features of the experience, such as detecting when the HMD was taken off to reset for the next user
  • bug fixes
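
As a rough illustration of that amplitude analysis (not the shipped UE4 code; the window size and gain are guesses):

    // Map per-window RMS amplitude of mono PCM samples to a 0..1 mouth-open
    // value that can be keyed onto Zoey's jaw over time.
    function mouthCurve(samples, sampleRate, fps = 30) {
      const window = Math.floor(sampleRate / fps); // one analysis window per animation frame
      const curve = [];
      for (let start = 0; start + window <= samples.length; start += window) {
        let sum = 0;
        for (let i = start; i < start + window; i++) sum += samples[i] * samples[i];
        const rms = Math.sqrt(sum / window);
        curve.push(Math.min(1, rms * 8)); // gain of 8 chosen by eye, hypothetical
      }
      return curve;
    }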

Once primary development was done, I was flown down to Charlotte, NC to do the installation and acted as the Support contact to Discovery Place as issues arose in the wild. I set up several scripts to launch the experience on startup and hide any extraneous launchers (Oculus/Steam) that would potentially push their way into the foreground. On top of that, I worked to help source parts as they broke, provided manuals for changing out hardware and maintaining the machines we had provided them, and flew back on a few occasions to help with hardware migration.

The experience was initially developed for the Oculus Rift, but after several months and numerous broken HMDs, Discovery Place designed a custom fitting to mount an HTC Vive in for better durability and longevity. I reworked small parts of the experience to ensure they worked correctly with the Vive and continued maintaining the machines until the end of the Support contract.

Role: Lead Developer/Support
Time on Project: Sept 2017 – Apr 2018
Technology used: Unreal Engine 4
Languages used: C++, Blueprints, Bash

Backcountry Showdown

A virtual reality experience showcased at Science North in Sudbury, Ontario, Canada. Participants assume the role of a skier or snowboarder making their way down the side of a mountain and can turn and ‘see’ one another throughout their journey.

Participants stand at one of two parallel stations at the exhibit and pull down a Gear VR housed in a custom apparatus that allows them to adjust for their height and view the experience in 360 degrees. Runtime is approximately 3 minutes.

Role: Lead Developer/Support
Time on Project: Jan 2018 – Mar 2018
Technologies used: Unity, Gear VR
Language used: C#

Image licensed from Jeffrey Brandjes via Unsplash.
Original image can be found here: https://unsplash.com/photos/hZin3PFaXfc
For information on Unsplash’s licensing, visit: https://unsplash.com/license

Match Your Moves

Match Your Moves is an interactive exhibit that educates users about the different poses and maneuvers extreme athletes use to perform various activities.
The exhibit uses a Kinect Sensor V2 connected to a Windows PC to analyze the pose and orientation of up to two participants, allowing them to go head-to-head to see who can match the pose shown on screen the fastest.
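
A toy sketch of that matching idea, in illustrative JavaScript (the shipped version was built in TouchDesigner; the joint names, target format, and tolerance are all hypothetical):

    // A pose "matches" when every tracked joint angle is within tolerance of the target.
    function matchesPose(skeleton, target, toleranceDeg = 15) {
      const angleAt = (a, b, c) => { // angle at joint b between limbs b->a and b->c
        const v1 = { x: a.x - b.x, y: a.y - b.y };
        const v2 = { x: c.x - b.x, y: c.y - b.y };
        const cos = (v1.x * v2.x + v1.y * v2.y) /
                    (Math.hypot(v1.x, v1.y) * Math.hypot(v2.x, v2.y));
        return (Math.acos(Math.max(-1, Math.min(1, cos))) * 180) / Math.PI;
      };
      return target.every(({ joints: [a, b, c], degrees }) =>
        Math.abs(angleAt(skeleton[a], skeleton[b], skeleton[c]) - degrees) <= toleranceDeg);
    }

    // e.g. matchesPose(kinectJoints, [{ joints: ['shoulderL', 'elbowL', 'wristL'], degrees: 90 }])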

I was initially responsible for overseeing development while working on the two other experiences showcased in the Beyond Human Limits exhibit. After primary development was completed in February 2018, I spent the next month working with my team to analyze performance bottlenecks on the intended hardware. Between February and March, I traveled to the museum on several occasions to troubleshoot issues and benchmark the application with the client.

Role: Lead Developer
Time on Project: Jan 2018 – Mar 2018
Technology used: TouchDesigner
Language used: Python

VISA’s Virtual Reality Bobsled Experience

Throughout the month of November, we were tasked with developing a virtual reality bobsled experience for Visa. We were given only three weeks of development time, but with a lot of hard work we were able to put together something visually compelling and fun. We created a 3D environment of the Olympic bobsled track for the 2018 Winter Olympics in PyeongChang, South Korea. The experience allows a user to become an Olympic bobsledder for Team Visa and race down the track, feeling what it would be like for a true Olympic athlete.

Role: Lead Developer
Time on Project: Nov 2017 – Dec 2017
Technologies used: Unreal Engine 4, HTC Vive
Languages used: C++, Blueprints

Exercise Athéna – First Responder Scene Assessment Application

A Windows tablet-based training application developed in association with International Safety Research Inc. for Transport Canada’s Exercise Athéna.

Exercise Athéna was a training simulation intended to put the Transport Canada Emergency Response Plan into practice, to see whether certain deficiencies could be corrected, and most importantly to allow the various response teams to anticipate possible dangers and difficulties. Many organizations participated in Exercise Athéna, including the Defence Research and Development Canada (DRDC) Safety Science Center, the CP and CN railway companies, the Association of Fire Chiefs of Quebec, and first responders, as well as industrial members such as Suncor.

Role: Lead Developer
Time on Project: Jan 2017 – Apr 2017
Technologies used: Unity, Google Maps API
Language used: C#