Rune Julsbøl Henriksen

Infrastructure, Development, DevOps


My primary drive at work is making technology and processes better and more efficient. I automate and simplify existing tasks to build a better future.
I work best in a free workflow with room to iteratively identify and improve while working towards my goals.
I'm passionate about security, GNU/Linux, and libre software.


Systems Engineer, October 2020 to Present, Copenhagen

Udviklings- og Forenklingsstyrelsen (UFST)

IT Specialist, August 2017 to September 2020, Copenhagen

I worked closely with a small group of Linux specialists to build, maintain, and improve UFST's Linux infrastructure. Additionally, I collaborated with the DevOps department to implement new systems.


  • Transitioned the Linux deployment process from PXE and manual configuration to a fully automated Ansible provisioning process, providing a unified platform for our teams to depend on across VMware, Azure, and physical hardware.
  • Led the project to provide a platform for containerized systems, installing, configuring, and upgrading OpenShift. I prepared it for production, tackling issues of stability and security and establishing the processes required to make it successful and compliant.
  • Worked on the development of Sheplog, an application for creating and controlling a robust, redundant, and secure central logging service used across the organization. It was designed to simplify management of both the infrastructure and compliance around log access and storage.

Lighthouse Technology

CTO & IT Consultant, August 2016 to July 2017, Copenhagen

I worked with clients in Denmark, Great Britain, and the United States, supporting their day-to-day needs, helping improve their IT security, and delivering larger projects as needed. Furthermore, I was responsible for the management and development of tools and systems used internally in the company.


  • I worked in a team to recover and continue development of a business-essential C# application. The application had been abandoned and was largely undocumented across its 213,000 lines of code.
  • Recovered the data of a large, business-essential bug tracking system after a major data loss, recreating all of the lost bugs. This was accomplished by aggregating information from various sources, processing it, and mapping it into the system's data structure.
  • I built our internal tracking and client management system, which simplified timesheet reporting, expense management, and invoicing.


Network Engineer, August 2015 to August 2016, San Francisco

I worked with clients in San Francisco and the greater Bay Area, where I identified their needs and implemented solutions that strengthened their IT stability and security.
Additionally, I was part of daily operations for the clients, acting as their direct contact when technical challenges arose.


  • I maintained, updated, and supported over 130 servers and 1,500 client computers during daily operations.
  • I was responsible for internal IT at Cobaltix, handling administration and support of internal servers, cloud, and VoIP services.
  • I migrated a client company from their old VoIP system to a cloud-hosted provider. I was responsible for contact with the provider, handling phone number transfers, configuring the system, and installing the IP phones at each client office.

Pandi Web

Developer, March 2014 to May 2015, Copenhagen

As a backend developer I built websites, APIs, and integrations, working primarily in PHP and JavaScript with MySQL and MSSQL. Additionally, I set up, updated, and configured the Ubuntu servers used for hosting projects.


  • I created plugins, APIs, and integrations for coordinating data between different webshops and their e-commerce and accounting systems.
  • I developed the backend for the website of one of the Ministry of Culture's institutions.
  • I installed, configured, and updated Ubuntu on servers used for website hosting.


Development and Operations, September 2013 to March 2014, Copenhagen

I worked on the implementation of algorithms, general backend development, and database management. I also managed our virtual machine infrastructure on Amazon Web Services.


  • I was responsible for our Ubuntu and RHEL virtual machines hosting websites, databases, and algorithms central to the operation of the business.
  • I implemented our primary probability algorithms for use in our conversion tool.
  • I extracted and combined data from various datasets to improve our probability algorithms.


Zabbix Certified Professional (ZCP)

Zabbix 4 - real-time monitoring, January 2020

Teaches the knowledge and skills necessary to configure and use Zabbix proxies and distributed monitoring for network and application monitoring, and covers advanced Zabbix topics.

Zabbix Certified Specialist (ZCS)

Zabbix 4 - real-time monitoring, January 2020

Teaches Zabbix concepts and structure to IT professionals who need to run Zabbix efficiently and provide support to other Zabbix users.


Elasticsearch Engineer I

Elastic training, May 2019

The first foundation class required for the Elastic Certified Engineer certification.
Teaches deployment and management of Elasticsearch servers, troubleshooting, and how to work with queries, analyzers, mappings, and aggregations.

Extended PostgreSQL course

Redpill Linpro training, January 2018

An extended PostgreSQL course with PostgreSQL core team member and developer Magnus Hagander.
The course covered a number of topics ranging from architecture to low-level details about concurrency design and caching functionality.

Keywords of the course include:
architecture, installation, data design, data types, large objects, MVCC, security, logging, monitoring, maintenance, indexing and full-text indexing, partitioning, backup and recovery, performance tuning, high availability, warm standby and Slony, SQL optimization


IT University of Copenhagen

No Degree, Software Engineering, 2013-2015

Completed courses:

  • Algorithms and Data Structures
  • First-year Project - Map of Denmark: Visualization, Navigation, Searching, and Route Planning
  • Foundations of Computing - Discrete Mathematics
  • Functional Programming
  • Introduction to Database Design
  • Introductory Programming with Project
  • Mobile and Distributed Systems MSc
  • Project Work and Communication
  • Second Year Project - Software Development in Large Teams with International Collaboration
  • System Development and Project Organisation
  • Systematic Design of User Interfaces



Linux, Kubernetes, Docker


Ansible, Chef Infra, Gitlab CI, Jenkins

HAProxy, Nginx, Apache

Prometheus, Grafana, Zabbix

Data management

Elasticsearch, Kibana, Logstash

Kafka, Minio, ZFS, GlusterFS

PostgreSQL, MySQL, Redis


Ruby, Python, Golang, JavaScript, PHP, Java, C#, F#

Ruby on Rails, Laravel




Volunteer work


Duplicati

Contributor, Member, September 2017 - December 2019

Inspired by the Duplicati project, I decided to volunteer my time and skills to help improve it.
I contributed features and bug fixes, helped manage issues, and tested and reviewed pull requests from other contributors.
I developed a third-party client for interacting with the Duplicati API from the command line.
Additionally, I participated in the community on the forum and on GitHub to help support the project and its users.

ITU Innovators

Vice-chairman, June 2014 - May 2015

The purpose of ITU Innovators is to be a catalyst for new ideas, innovation, and creativity at the IT University of Copenhagen.
As Vice-chairman of ITU Innovators, I worked closely with the community volunteers and the board to achieve the goals of our organisation.

ITU Innovators

Member, September 2013 - June 2014

As a member of ITU Innovators, I organized events and helped out where I could.


CVE-2019-10225

Coauthor, August 2019

During penetration testing of our OpenShift environments, we discovered and disclosed a vulnerability in the way OpenShift uses GlusterFS through the Heketi API.
This vulnerability allowed any authenticated user to gain admin access to the Heketi API, granting them the ability to, for example, delete all persistent volume claims in the cluster.