
Mark Buckler

PhD Student


I’m currently a PhD student in the Electrical and Computer Engineering department here at Cornell University. I’m advised by Prof. Adrian Sampson.

My graduate research focuses on hardware for embedded computer vision. In my work I’ve found that abstractions can simplify the design process, but breaking down boundaries through hardware-software co-design can produce superior results. For this reason I see myself as a system creator rather than a hardware or software developer.

As an entrepreneurial engineer I quickly gravitated to both industrial and academic research. While finishing my M.S. at UMass Amherst I formed Firebrand Innovations as a way of monetizing intellectual property I developed while in high school. These days I can be found in Cornell’s Computer Systems Laboratory plugging away at code for powerful new computer vision systems.


  • Computer Architecture
  • Embedded Systems
  • Computer Vision
  • Machine Learning
  • VLSI


  • PhD in Computer Engineering

    Cornell University

  • M.S. in Electrical and Computer Engineering, 2014

    University of Massachusetts, Amherst

  • B.S. in Electrical Engineering, 2012

    Rensselaer Polytechnic Institute

Selected Publications

  • Reconfiguring the Imaging Pipeline for Computer Vision, in the International Conference on Computer Vision.

  • Dynamic synchronizer flip-flop performance in FinFET technologies, in the IEEE Symposium on Networks-on-Chip.

  • Predictive synchronization for DVFS-enabled multi-processor systems, in the IEEE Symposium on Quality Electronic Design.

  • Low-power networks-on-chip: Progress and remaining challenges, in the IEEE Symposium on Low Power Electronics and Design.

Selected Patents

  • Continuous Frequency Measurement for Predictive Periodic Synchronization

  • Methods and Systems of Synchronizer Selection

  • Predictive Periodic Synchronization Using Phase-Locked Loop Digital Ratio Updates

  • Synchronizer Circuits With Failure-Condition Detection and Correction

  • Video Conferencing

Recent Posts

While building the downloading and decoding scripts for the YouTube BoundingBoxes dataset I needed to accurately cut videos into smaller clips. Some of the annotated videos were quite long and the annotations rarely covered the full video, so to save space my scripts cut out and save only the annotated sections. If done incorrectly, this video cutting can cause subtle frame-timing issues that I didn’t fully understand when I started writing these scripts.
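At its core, the timing problem is converting annotation frame indices into cut points. A minimal Python sketch of that arithmetic, assuming a constant frame rate (the function names are hypothetical, and the real dataset's annotations give timestamps in milliseconds, while variable-frame-rate videos would need per-frame timestamps instead):

```python
# Hypothetical helpers for frame-accurate clip cutting, assuming a
# constant frame rate throughout the video.

def frame_to_seconds(frame_index, fps):
    """Presentation time of a frame in a constant-frame-rate video."""
    return frame_index / fps

def clip_bounds(first_frame, last_frame, fps):
    """Cut window covering frames first_frame..last_frame inclusive.

    The end time is the *end* of the last frame (hence last_frame + 1),
    so the final annotated frame is not silently truncated by the cut.
    """
    start = frame_to_seconds(first_frame, fps)
    end = frame_to_seconds(last_frame + 1, fps)
    return start, end

# Frames 30 through 149 of a 30 fps video span seconds 1.0 to 5.0.
print(clip_bounds(30, 149, 30.0))  # (1.0, 5.0)
```

The off-by-one in `clip_bounds` is exactly the kind of subtlety that bites here: cutting at the *start* time of the last annotated frame drops that frame from the clip.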


Back in 2010 I started a blog dedicated to electronic music called Quoth the Raver (a play on Quoth the Raven). It was a huge amount of fun sharing the music I found with everyone on the site, and it inspired me to get even more familiar with the various genres, artists, and labels. These days electronic music (also known as EDM) has exploded in popularity, so the novelty of sharing new artists has worn off to a certain extent.


As an academic I am often writing LaTeX code for publications or general documentation. I used to use ShareLaTeX and Overleaf because I have a secret love of GUIs (Lord forgive me!), but recently my co-authors have preferred to work in LaTeX repos shared on GitHub. For this reason my most recent paper was actually written entirely in Vim! This post isn’t meant to be a complete description of the best way to edit LaTeX in Vim; instead I want to share some of the tools, tricks, and tips that I’ve found useful when writing LaTeX in Vim.


Most of us academic and industrial ECE/CS researchers want to make our results as reproducible as possible. It’s good for the integrity of our field, and it also helps future researchers and developers build on our prior work. GitHub is great for distributing software, but code can rarely be compiled and run on its own, as nearly all modern software relies heavily on packages and dependencies. Installing these dependencies can be incredibly frustrating, or possibly even impossible, depending on how long ago the code was written.


Anyone who’s set up a new Linux-based, GPU-enabled deep learning system knows the horror that is driver installation. While it is technically possible to install NVIDIA drivers and CUDA from your package manager, the most up-to-date versions aren’t available, and in the worst case you might even break graphics on your machine. After a great deal of difficulty installing and reinstalling, I finally found a viable strategy: installing both CUDA and the proprietary Linux drivers from an NVIDIA run file without the OpenGL libraries.
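The strategy boils down to a couple of runfile invocations. A hedged sketch, where the filenames and version numbers are examples only and the exact flags should be checked against NVIDIA's documentation for your installer version:

```shell
# Sketch only: filenames/versions below are examples, not prescriptions.
# Stop the display manager first so the driver can install cleanly.
sudo service lightdm stop

# Install the proprietary driver without its OpenGL files, which are what
# tend to break desktop graphics when they overwrite distro-provided ones.
sudo sh NVIDIA-Linux-x86_64-384.81.run --no-opengl-files

# Install the CUDA toolkit from the runfile, again skipping OpenGL libraries.
sudo sh cuda_9.0.176_384.81_linux.run --silent --toolkit --no-opengl-libs
```

The key idea in both commands is the same: let NVIDIA's installers provide the driver and toolkit, but leave the system's existing OpenGL stack alone.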



Approximate Vision Pipeline

Using a reversible imaging pipeline to optimize sensor and ISP design for computer vision

YouTube BoundingBox Dataset Downloader

Helpful scripts I wrote for downloading and parsing Google’s huge video dataset

Reversible Imaging Pipeline

A Halide implementation of a forward and reverse computational photography pipeline

Neural Network Accelerator with Logarithmic Number System

Hardware accelerator for neural network computation using the LNS

Configurable Imaging Sensor

Tapeout of a configurable and energy-proportional image sensor

Network-on-Chip Synchronization

My M.S. thesis on synchronization circuits and systems for multi-clock domain Networks-on-Chip


What started as a Science Fair project became Firebrand Innovation’s first product


  • Rhodes Hall, Room 471C, Cornell University, Ithaca, New York 14850, USA