<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Posts on Jacob Colvin</title><link>https://jacobcolvin.com/posts/</link><description>Recent content in Posts on Jacob Colvin</description><generator>Hugo -- gohugo.io</generator><language>en</language><copyright>&lt;a href="https://creativecommons.org/licenses/by-nc/4.0/" target="_blank" rel="noopener">CC BY-NC 4.0&lt;/a></copyright><lastBuildDate>Sun, 13 Jul 2025 00:00:00 +0000</lastBuildDate><atom:link href="https://jacobcolvin.com/posts/index.xml" rel="self" type="application/rss+xml"/><item><title>Introducing kat: A TUI and rule-based rendering engine for Kubernetes manifests</title><link>https://jacobcolvin.com/posts/2025/07/introducing-kat-a-tui-and-rule-based-rendering-engine-for-kubernetes-manifests/</link><pubDate>Sun, 13 Jul 2025 00:00:00 +0000</pubDate><guid>https://jacobcolvin.com/posts/2025/07/introducing-kat-a-tui-and-rule-based-rendering-engine-for-kubernetes-manifests/</guid><description>I don&amp;rsquo;t know about you, but one of my favorite tools in the Kubernetes ecosystem is k9s. It&amp;rsquo;s a terminal UI (TUI) for interacting with your Kubernetes clusters, and at work I have it open pretty much all of the time. After I started using it, I felt like my productivity skyrocketed, since anything you could want is just a few keystrokes away.
However, when it comes to rendering and validating manifests locally, I found myself frustrated with the existing tools (or lack thereof).</description><content type="html"><![CDATA[<p>I don&rsquo;t know about you, but one of my favorite tools in the Kubernetes ecosystem is <a href="https://k9scli.io/"><code>k9s</code></a>. It&rsquo;s a terminal UI (TUI) for interacting with your Kubernetes clusters, and at work I have it open pretty much all of the time. After I started using it, I felt like my productivity skyrocketed, since anything you could want is just a few keystrokes away.</p>
<p>However, when it comes to rendering and validating manifests locally, I found myself frustrated with the existing tools (or lack thereof). Working with manifest generators like <code>helm</code> or <code>kustomize</code> often involved a repetitive cycle:</p>
<ol>
<li>Run <code>helm template</code>, <code>kustomize build</code>, or similar commands</li>
<li>Search through many pages of output looking for specific resources</li>
<li>Find some issue and make a change to the source files</li>
<li>Re-run the rendering commands</li>
<li>Re-run whatever search I originally did</li>
<li>Find another issue and make a change to the source files</li>
<li>Repeat ad nauseam</li>
</ol>
<p>So, I set out to build something that would make this process easier and more efficient. After a few months of work, I&rsquo;m excited to introduce you to <code>kat</code>!</p>
<h2 id="what-is-kat">What is <code>kat</code>?</h2>
<p><code>kat</code> automatically invokes manifest generators like <code>helm</code> or <code>kustomize</code>, and provides a persistent, navigable view of rendered resources, with support for live reloading, integrated validation, and more.</p>
<p>It is made of two main components, which can be used together or independently:</p>
<ol>
<li>A <strong>rule-based engine</strong> for automatically rendering and validating manifests</li>
<li>A <strong>terminal UI</strong> for browsing and debugging rendered Kubernetes manifests</li>
</ol>
<p>Together, these deliver a seamless development experience that maintains context and focus while iterating on Helm charts, Kustomize overlays, and other manifest generators.</p>
<p><img alt="demo" src="https://github.com/MacroPower/kat/raw/main/docs/assets/demo.gif"></p>
<blockquote>
<p>If you&rsquo;re interested in giving <code>kat</code> a try, there are installation and usage instructions available in the repo&rsquo;s README: <a href="https://github.com/macropower/kat">github.com/macropower/kat</a></p>
</blockquote>
<h3 id="features">Features</h3>
<p><strong>Manifest Browsing</strong>: Rather than outputting a single long stream of YAML, <code>kat</code> organizes the output into a browsable list structure. Navigate through any number of rendered resources using their group/kind/ns/name metadata.</p>
<p><strong>Live Reload</strong>: Just use the <code>-w</code> flag to automatically re-render when you modify source files, without losing your current position or context when the output changes.</p>
<p><strong>Integrated Validation</strong>: Run tools like <code>kubeconform</code>, <code>kyverno</code>, or custom validators automatically on rendered output through configurable hooks. Additionally, you can define custom &ldquo;plugins&rdquo;, which function the same way as k9s plugins (i.e. commands invoked with a keybind).</p>
<p><strong>Flexible Configuration</strong>: <code>kat</code> allows you to define profiles for different manifest generators (like Helm, Kustomize, etc.). Profiles can be automatically selected based on the output of CEL expressions, allowing <code>kat</code> to adapt to your project structure.</p>
<p><strong>Customization</strong>: <code>kat</code> can be configured with your own keybindings, as well as custom themes!</p>
<p><img alt="Themes" src="https://github.com/MacroPower/kat/raw/main/docs/assets/themes.gif"></p>
<h2 id="how-do-i-use-it">How do I use it?</h2>
<p>Let&rsquo;s use a simple example with <code>helm</code> to illustrate how <code>kat</code> works.</p>
<blockquote>
<p>Note that configuration for <code>helm</code> is included in kat&rsquo;s <a href="https://github.com/MacroPower/kat/blob/main/pkg/config/config.yaml">default configuration</a>.</p>
</blockquote>
<p>First, we need to define a profile for <code>helm</code>. This profile will specify how to render manifests using <code>helm template</code>, as well as:</p>
<ul>
<li>Any init, preRender, and/or postRender hooks we want to apply</li>
<li>Any plugins that should be available in this context</li>
<li>Any UI settings that should differ from the global config</li>
</ul>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#75715e"># yaml-language-server: $schema=./config.v1beta1.json</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">apiVersion</span>: <span style="color:#ae81ff">kat.jacobcolvin.com/v1beta1</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">kind</span>: <span style="color:#ae81ff">Configuration</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">profiles</span>:
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">helm</span>:
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">command</span>: <span style="color:#ae81ff">helm</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">args</span>: [<span style="color:#ae81ff">template, ., --generate-name]</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">env</span>: []
</span></span><span style="display:flex;"><span>    <span style="color:#75715e"># Reload on edits to YAML and template files.</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">source</span>: &gt;-<span style="color:#e6db74">
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">      files.filter(f, pathExt(f) in [&#34;.yaml&#34;, &#34;.yml&#34;, &#34;.tpl&#34;])</span>      
</span></span><span style="display:flex;"><span>    <span style="color:#75715e"># Inherit helm environment variables from the caller process.</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">envFrom</span>: <span style="color:#75715e">&amp;helmEnvFrom</span>
</span></span><span style="display:flex;"><span>      - <span style="color:#f92672">callerRef</span>:
</span></span><span style="display:flex;"><span>          <span style="color:#f92672">pattern</span>: <span style="color:#ae81ff">^HELM_.+</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">hooks</span>:
</span></span><span style="display:flex;"><span>      <span style="color:#75715e"># Ensure that helm is installed.</span>
</span></span><span style="display:flex;"><span>      <span style="color:#f92672">init</span>:
</span></span><span style="display:flex;"><span>        - <span style="color:#f92672">command</span>: <span style="color:#ae81ff">helm</span>
</span></span><span style="display:flex;"><span>          <span style="color:#f92672">args</span>: [<span style="color:#ae81ff">version]</span>
</span></span><span style="display:flex;"><span>      <span style="color:#75715e"># Build any helm dependencies before rendering.</span>
</span></span><span style="display:flex;"><span>      <span style="color:#f92672">preRender</span>:
</span></span><span style="display:flex;"><span>        - <span style="color:#f92672">command</span>: <span style="color:#ae81ff">helm</span>
</span></span><span style="display:flex;"><span>          <span style="color:#f92672">args</span>: [<span style="color:#ae81ff">dependency, build]</span>
</span></span><span style="display:flex;"><span>          <span style="color:#f92672">envFrom</span>: <span style="color:#75715e">*helmEnvFrom</span>
</span></span><span style="display:flex;"><span>      <span style="color:#75715e"># Validate rendered resources with kubeconform.</span>
</span></span><span style="display:flex;"><span>      <span style="color:#f92672">postRender</span>:
</span></span><span style="display:flex;"><span>        - <span style="color:#f92672">command</span>: <span style="color:#ae81ff">kubeconform</span>
</span></span><span style="display:flex;"><span>          <span style="color:#f92672">args</span>: [<span style="color:#e6db74">&#34;-strict&#34;</span>, <span style="color:#e6db74">&#34;-summary&#34;</span>]
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">ui</span>:
</span></span><span style="display:flex;"><span>      <span style="color:#75715e"># Use a custom theme for the helm profile.</span>
</span></span><span style="display:flex;"><span>      <span style="color:#f92672">theme</span>: <span style="color:#e6db74">&#34;dracula&#34;</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">plugins</span>:
</span></span><span style="display:flex;"><span>      <span style="color:#75715e"># Add a plugin to invoke helm dry-run when `H` is pressed.</span>
</span></span><span style="display:flex;"><span>      <span style="color:#f92672">dry-run</span>:
</span></span><span style="display:flex;"><span>        <span style="color:#f92672">description</span>: <span style="color:#ae81ff">invoke helm dry-run</span>
</span></span><span style="display:flex;"><span>        <span style="color:#f92672">keys</span>:
</span></span><span style="display:flex;"><span>          - <span style="color:#f92672">code</span>: <span style="color:#ae81ff">H</span>
</span></span><span style="display:flex;"><span>        <span style="color:#f92672">command</span>: <span style="color:#ae81ff">helm</span>
</span></span><span style="display:flex;"><span>        <span style="color:#f92672">args</span>: [<span style="color:#ae81ff">install, ., -g, --dry-run]</span>
</span></span><span style="display:flex;"><span>        <span style="color:#f92672">envFrom</span>: <span style="color:#75715e">*helmEnvFrom</span>
</span></span></code></pre></div><p>Now, we can use this profile to render manifests. For example, if we have a Helm chart in the current directory, we can run:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>kat . helm
</span></span></code></pre></div><p>Next, we can define a <code>rule</code>, so that we automatically select the <code>helm</code> profile when we run <code>kat</code> in a directory containing a helm chart:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#75715e"># yaml-language-server: $schema=./config.v1beta1.json</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">apiVersion</span>: <span style="color:#ae81ff">kat.jacobcolvin.com/v1beta1</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">kind</span>: <span style="color:#ae81ff">Configuration</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">profiles</span>:
</span></span><span style="display:flex;"><span>  <span style="color:#75715e"># ...</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">rules</span>:
</span></span><span style="display:flex;"><span>  <span style="color:#75715e"># If there is a chart file in the current directory, and</span>
</span></span><span style="display:flex;"><span>  <span style="color:#75715e"># it defines `apiVersion: v2`, use the `helm` profile.</span>
</span></span><span style="display:flex;"><span>  - <span style="color:#f92672">match</span>: &gt;-<span style="color:#e6db74">
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">      files.exists(f,
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">        pathBase(f) in [&#34;Chart.yaml&#34;, &#34;Chart.yml&#34;] &amp;&amp;
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">        yamlPath(f, &#34;$.apiVersion&#34;) == &#34;v2&#34;
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">      )</span>      
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">profile</span>: <span style="color:#ae81ff">helm</span>
</span></span></code></pre></div><p>Now, if we have a Helm chart in the current directory, we can simply run:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>kat
</span></span></code></pre></div><p>And <code>kat</code> will automatically select the <code>helm</code> profile, render, and validate the helm chart for us.</p>
<p>You can continue to define additional profiles and rules to handle other manifest generators like Kustomize, Jsonnet, CUE, KCL, and more. You can also express more complex rules using CEL expressions to match specific project structures or configurations (such as using <a href="https://github.com/allenporter/flux-local">flux-local</a> if your project contains some Fluxtomizations).</p>
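<p>As an illustrative sketch, a minimal profile and rule for Kustomize might look like the following (the <code>args</code> and matched file names are assumptions for illustration, not taken from kat&rsquo;s default configuration):</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"># yaml-language-server: $schema=./config.v1beta1.json
apiVersion: kat.jacobcolvin.com/v1beta1
kind: Configuration
profiles:
  kustomize:
    command: kustomize
    args: [build, .]
rules:
  # If there is a kustomization file in the current directory,
  # use the `kustomize` profile.
  - match: &gt;-
      files.exists(f, pathBase(f) in [&#34;kustomization.yaml&#34;, &#34;kustomization.yml&#34;])
    profile: kustomize
</code></pre></div>
<p>With a rule like this in place, running <code>kat</code> in a directory containing a <code>kustomization.yaml</code> would select the <code>kustomize</code> profile automatically.</p>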
<p>To learn more, check out the <a href="https://github.com/MacroPower/kat">README</a> and kat&rsquo;s <a href="https://github.com/MacroPower/kat/blob/main/docs/CEL.md">CEL Expression Guide</a>.</p>
<h2 id="conclusion">Conclusion</h2>
<p><code>kat</code> solved my specific workflow problems when working with Kubernetes manifests locally. And while it may not be a perfect fit for everyone, I hope it can help others who find themselves in a similar situation.</p>
<p>If you&rsquo;re interested in giving it a try, check out the repo here:</p>
<p><a href="https://github.com/macropower/kat">github.com/macropower/kat</a> (please ⭐ if you like it!)</p>
<p>Also, a huge thanks to the authors of the following projects (that provided inspiration and/or code):</p>
<ul>
<li><a href="https://github.com/derailed/k9s">k9s</a> - <em>A terminal UI to interact with your Kubernetes clusters.</em></li>
<li><a href="https://github.com/sharkdp/bat">bat</a> - <em>A <code>cat(1)</code> clone with wings.</em></li>
<li><a href="https://github.com/go-task/task">task</a> - <em>A task runner / simpler Make alternative, written in Go.</em></li>
<li><a href="https://github.com/charmbracelet/glow">glow</a> - <em>Render markdown on the CLI, with pizzazz!</em></li>
<li><a href="https://github.com/charmbracelet/soft-serve">soft-serve</a> - <em>The mighty, self-hostable Git server for the command line.</em></li>
<li><a href="https://github.com/charmbracelet/wishlist">wishlist</a> - <em>The SSH directory.</em></li>
<li><a href="https://github.com/sachaos/viddy">viddy</a> - <em>A modern <code>watch</code> command.</em></li>
<li><a href="https://github.com/charmbracelet/bubbletea">charmbracelet/bubbletea</a> - <em>A powerful TUI framework for Go.</em>
<ul>
<li>&hellip;plus many other fantastic libraries from <a href="https://github.com/charmbracelet"><em>charm</em></a></li>
</ul>
</li>
<li><a href="https://github.com/alecthomas/chroma">alecthomas/chroma</a> - <em>A general-purpose syntax highlighter in pure Go.</em></li>
<li><a href="https://github.com/google/cel-go">google/cel-go</a> - <em>A fast, portable, and safe expression evaluation engine.</em></li>
<li><a href="https://github.com/goccy/go-yaml">goccy/go-yaml</a> - <em>YAML support for Go.</em></li>
<li><a href="https://github.com/fsnotify/fsnotify">fsnotify</a> - <em>Cross-platform filesystem notifications.</em></li>
<li><a href="https://github.com/invopop/jsonschema">invopop/jsonschema</a> - <em>JSON Schema generation.</em></li>
<li><a href="https://github.com/santhosh-tekuri/jsonschema">santhosh-tekuri/jsonschema</a> - <em>JSON Schema validation.</em></li>
</ul>
]]></content></item><item><title>Linkerd Multi-cluster Without a Public IP Address</title><link>https://jacobcolvin.com/posts/2023/04/linkerd-multi-cluster-without-a-public-ip-address/</link><pubDate>Mon, 03 Apr 2023 00:00:00 +0000</pubDate><guid>https://jacobcolvin.com/posts/2023/04/linkerd-multi-cluster-without-a-public-ip-address/</guid><description>Recently, I&amp;rsquo;ve set up Linkerd in my homelab, and one of the features I was really interested in was multi-cluster communication. This allows you to mirror services between clusters. Meaning, apps in one cluster can communicate with services in another cluster, as if they were in the same cluster.
Setting up multi-cluster communication with Linkerd is straightforward under ideal conditions. However, it can be more challenging if one of the clusters cannot create services of type LoadBalancer (with a public, or otherwise routable IP address).</description><content type="html"><![CDATA[<p>Recently, I&rsquo;ve set up <a href="https://github.com/linkerd/linkerd2">Linkerd</a> in my homelab, and one of
the features I was really interested in was <a href="https://linkerd.io/2.12/tasks/multicluster/">multi-cluster communication</a>.
This allows you to mirror services between clusters, meaning apps in one
cluster can communicate with services in another cluster as if they were in the
same cluster.</p>
<p>Setting up multi-cluster communication with Linkerd is straightforward under
ideal conditions. However, it can be more challenging if one of the clusters
cannot create services of type <code>LoadBalancer</code> (with a public, or otherwise
routable IP address). This is the case for me, as I have clusters both at home
and in Hetzner, with my home Kubernetes cluster running behind NAT.</p>
<p>Fortunately, there are ways to work around this! In this post, I&rsquo;ll walk you
through my design and implementation process, discuss the challenges I faced,
and share the workarounds I came up with to make everything run smoothly.
Additionally, I&rsquo;ll explore some alternative options for multi-cluster
communication and potential improvements to my current setup.</p>
<blockquote>
<p>My exact and up-to-date implementation of everything in this article can be
found in my homelab repo! <a href="https://github.com/MacroPower/homelab">https://github.com/MacroPower/homelab</a></p>
</blockquote>
<h2 id="the-basics">The Basics</h2>
<p>One thing that was not immediately clear to me, when I was following the
<a href="https://linkerd.io/2.12/tasks/multicluster/">multi-cluster setup docs</a> for the first time, was how services
are mirrored bi-directionally between clusters. The docs give examples of how to
link a theoretical &ldquo;east&rdquo; cluster to a &ldquo;west&rdquo; cluster, which can be done via
this command:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>linkerd --context<span style="color:#f92672">=</span>east multicluster link --cluster-name east |
</span></span><span style="display:flex;"><span>  kubectl --context<span style="color:#f92672">=</span>west apply -f -
</span></span></code></pre></div><p>However, this only allows you to mirror services from the &ldquo;east&rdquo; cluster to the
&ldquo;west&rdquo; cluster. If you want to mirror services from the &ldquo;west&rdquo; cluster to the
&ldquo;east&rdquo; cluster, you need to run the command a second time, with the contexts
reversed:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>linkerd --context<span style="color:#f92672">=</span>west multicluster link --cluster-name west |
</span></span><span style="display:flex;"><span>  kubectl --context<span style="color:#f92672">=</span>east apply -f -
</span></span></code></pre></div><p>This means that if you want bi-directional mirroring between two clusters, each
cluster needs to have the ability to create services of type <code>LoadBalancer</code>,
with an IP address that can be reached by the other cluster.</p>
<p>To say it differently, any clusters acting as a source for service mirrors must
be routable from any destination clusters. If the source isn&rsquo;t routable, you
have to find a way to make it so, regardless of the networking situation on the
other cluster.</p>
<h2 id="the-recommended-solution">The Recommended Solution</h2>
<p>Linkerd&rsquo;s <a href="https://linkerd.io/2.12/tasks/multicluster/">multi-cluster docs</a> recommend looking into
<a href="https://blog.alexellis.io/ingress-for-your-local-kubernetes-cluster/">inlets</a>. The concept is very cool and also pretty simple.</p>
<p>Basically, in your home / non-routable cluster, you can have a client acting as
a sort of proxy to any local services. The client establishes a tunnel to a
server running somewhere accessible from the cloud / routable cluster. This
means that you should be able to just point Linkerd at the inlets server; from
there, traffic will be routed to the client, and then on to the previously
non-routable Linkerd gateway!</p>
<p><img alt="diagram" src="https://raw.githubusercontent.com/cubed-it/inlets/master/docs/inlets.png"></p>
<p>However, inlets is no longer the open-source project it once was. The author
stopped maintaining the open-source version a while ago, before eventually
deleting all of the source code. Now, it&rsquo;s available for purchase as a monthly
subscription, with personal licenses starting at $20/month. For me, this is a
completely infeasible price to pay. There are still parts of the project that
are open-source, notably <a href="https://github.com/inlets/inlets-operator">inlets-operator</a>, but it spins up an
entire VPS for the tunnel (another thing to pay for) when we already have a
perfectly good K8s cluster we could host it on.</p>
<p>Luckily, a fork of inlets was created, <a href="https://github.com/cubed-it/inlets">cubed-it/inlets</a>. This will
allow us to manually create a client in our home / non-routable cluster, and a
server in our cloud / routable cluster. Better yet, the fork adds support for
tunneling raw TCP ports (as opposed to only HTTP), which we will need for both
Linkerd and the K8s API server.</p>
<h2 id="my-implementation">My Implementation</h2>
<p>Conceptually, what I wanted to do was this:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-text" data-lang="text"><span style="display:flex;"><span>┌────────────────────────────────────────┐     ┌─────────────────────────────────────────┐
</span></span><span style="display:flex;"><span>│             Cloud Cluster              │  │  │              Home Cluster               │
</span></span><span style="display:flex;"><span>│                                        │     │                                         │
</span></span><span style="display:flex;"><span>│ ┌───────────────┐     ┌─────────────┐  │  │  │ ┌─────────────┐      ┌─────────────┐    │
</span></span><span style="display:flex;"><span>│ │ Inlets Server │◀────│   Ingress   │◀─┼─────┼─│Inlets Client│──┬──▶│   Linkerd   │──┐ │
</span></span><span style="display:flex;"><span>│ └───────────────┘     └─────────────┘  │  │  │ └─────────────┘  │   │   Gateway   │  │ │
</span></span><span style="display:flex;"><span>│         ▲                              │     │                  │   └─────────────┘  │ │
</span></span><span style="display:flex;"><span>│         ├───────────────────┐          │  │  │                  │   ┌─────────────┐  │ │
</span></span><span style="display:flex;"><span>│         │                   │          │     │                  └──▶│   K8s API   │  │ │
</span></span><span style="display:flex;"><span>│         │                   │          │  │  │                      └─────────────┘  │ │
</span></span><span style="display:flex;"><span>│ ┌───────────────┐ ╔══════════════════╗ │     │                      ╔═════════════╗  │ │
</span></span><span style="display:flex;"><span>│ │Linkerd Service│ ║      Foobar      ║ │  │  │                      ║   Foobar    ║  │ │
</span></span><span style="display:flex;"><span>│ │    Mirror     │ ║      Mirror      ║─│─ ─ ─│─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─▶║   Service   ║◀─┘ │
</span></span><span style="display:flex;"><span>│ └───────────────┘ ╚══════════════════╝ │  │  │                      ╚═════════════╝    │
</span></span><span style="display:flex;"><span>│         │                   ▲          │     │                                         │
</span></span><span style="display:flex;"><span>│         └────────Creates────┘          │  │  │                                         │
</span></span><span style="display:flex;"><span>└────────────────────────────────────────┘     └─────────────────────────────────────────┘
</span></span></code></pre></div><p>In this implementation, the inlets client and server are entirely contained
within Kubernetes. This is great because we don&rsquo;t have to pay anything extra,
and it&rsquo;s also good from a security perspective, because the only thing we need
to expose outside the cluster is a single endpoint for the tunnel, which can be
done via our normal ingress.</p>
<p>In this section, I will walk you through my implementation for setting up a
secure tunnel between my home and cloud Kubernetes clusters using inlets. We
will deploy the inlets server and client using Helm charts I created, which
work with <a href="https://github.com/cubed-it/inlets">cubed-it/inlets</a>, and then address some of the
challenges encountered with Linkerd along the way.</p>
<p>First, add a new namespace in every cluster:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>kubectl create namespace inlets
</span></span></code></pre></div><p>And create a secret in every cluster with the same token:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>token<span style="color:#f92672">=</span><span style="color:#66d9ef">$(</span> head -c <span style="color:#ae81ff">16</span> /dev/urandom | shasum | cut -d<span style="color:#e6db74">&#34; &#34;</span> -f1 <span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>kubectl create secret generic linkerd-tunnel-token --namespace inlets <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span>  --from-literal<span style="color:#f92672">=</span>token<span style="color:#f92672">=</span><span style="color:#e6db74">${</span>token<span style="color:#e6db74">}</span>
</span></span></code></pre></div><p>In our cloud cluster, we can use the <code>inlets-server</code> chart:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#f92672">apiVersion</span>: <span style="color:#ae81ff">kustomize.config.k8s.io/v1beta1</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">kind</span>: <span style="color:#ae81ff">Kustomization</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">helmCharts</span>:
</span></span><span style="display:flex;"><span>  - <span style="color:#f92672">name</span>: <span style="color:#ae81ff">inlets-server</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">repo</span>: <span style="color:#ae81ff">https://jacobcolvin.com/helm-charts/</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">version</span>: <span style="color:#e6db74">&#34;0.1.1&#34;</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">releaseName</span>: <span style="color:#ae81ff">linkerd-tunnel</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">namespace</span>: <span style="color:#ae81ff">inlets</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">valuesInline</span>:
</span></span><span style="display:flex;"><span>      <span style="color:#f92672">inlets</span>:
</span></span><span style="display:flex;"><span>        <span style="color:#75715e"># Port is the main port that serves any HTTP traffic.</span>
</span></span><span style="display:flex;"><span>        <span style="color:#75715e"># Other TCP ports are assigned on the client.</span>
</span></span><span style="display:flex;"><span>        <span style="color:#f92672">port</span>: <span style="color:#ae81ff">4191</span>
</span></span><span style="display:flex;"><span>        <span style="color:#f92672">disableTransportWrapping</span>: <span style="color:#66d9ef">true</span>
</span></span><span style="display:flex;"><span>        <span style="color:#f92672">tokenSecretName</span>: <span style="color:#ae81ff">linkerd-tunnel-token</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>      <span style="color:#f92672">service</span>:
</span></span><span style="display:flex;"><span>        <span style="color:#f92672">data-plane</span>:
</span></span><span style="display:flex;"><span>          <span style="color:#f92672">type</span>: <span style="color:#ae81ff">ClusterIP</span>
</span></span><span style="display:flex;"><span>          <span style="color:#f92672">ports</span>:
</span></span><span style="display:flex;"><span>            <span style="color:#f92672">kube</span>:
</span></span><span style="display:flex;"><span>              <span style="color:#f92672">port</span>: <span style="color:#ae81ff">6443</span>
</span></span><span style="display:flex;"><span>              <span style="color:#f92672">protocol</span>: <span style="color:#ae81ff">TCP</span>
</span></span><span style="display:flex;"><span>            <span style="color:#f92672">proxy</span>:
</span></span><span style="display:flex;"><span>              <span style="color:#f92672">port</span>: <span style="color:#ae81ff">4143</span>
</span></span><span style="display:flex;"><span>              <span style="color:#f92672">protocol</span>: <span style="color:#ae81ff">TCP</span>
</span></span><span style="display:flex;"><span>            <span style="color:#f92672">admin</span>:
</span></span><span style="display:flex;"><span>              <span style="color:#f92672">port</span>: <span style="color:#ae81ff">4191</span>
</span></span><span style="display:flex;"><span>              <span style="color:#f92672">protocol</span>: <span style="color:#ae81ff">TCP</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>      <span style="color:#f92672">ingress</span>: {}
</span></span><span style="display:flex;"><span>      <span style="color:#75715e">#  main:</span>
</span></span><span style="display:flex;"><span>      <span style="color:#75715e">#    enabled: true</span>
</span></span><span style="display:flex;"><span>      <span style="color:#75715e">#    hosts:</span>
</span></span><span style="display:flex;"><span>      <span style="color:#75715e">#      - host: linkerd-tunnel.example.com</span>
</span></span><span style="display:flex;"><span>      <span style="color:#75715e">#        paths:</span>
</span></span><span style="display:flex;"><span>      <span style="color:#75715e">#          - path: /</span>
</span></span><span style="display:flex;"><span>      <span style="color:#75715e">#            pathType: Prefix</span>
</span></span><span style="display:flex;"><span>      <span style="color:#75715e">#    tls:</span>
</span></span><span style="display:flex;"><span>      <span style="color:#75715e">#      - hosts: [linkerd-tunnel.example.com]</span>
</span></span><span style="display:flex;"><span>      <span style="color:#75715e">#    annotations: {}</span>
</span></span></code></pre></div><p>In our home cluster, we can use the <code>inlets-client</code> chart:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#f92672">apiVersion</span>: <span style="color:#ae81ff">kustomize.config.k8s.io/v1beta1</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">kind</span>: <span style="color:#ae81ff">Kustomization</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">helmCharts</span>:
</span></span><span style="display:flex;"><span>  - <span style="color:#f92672">name</span>: <span style="color:#ae81ff">inlets-client</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">repo</span>: <span style="color:#ae81ff">https://jacobcolvin.com/helm-charts/</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">version</span>: <span style="color:#e6db74">&#34;0.1.2&#34;</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">releaseName</span>: <span style="color:#ae81ff">linkerd-tunnel</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">namespace</span>: <span style="color:#ae81ff">inlets</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">valuesInline</span>:
</span></span><span style="display:flex;"><span>      <span style="color:#f92672">inlets</span>:
</span></span><span style="display:flex;"><span>        <span style="color:#75715e"># The url points to the ingress of the other cluster.</span>
</span></span><span style="display:flex;"><span>        <span style="color:#f92672">url</span>: <span style="color:#ae81ff">wss://linkerd-tunnel.example.com</span>
</span></span><span style="display:flex;"><span>        <span style="color:#75715e"># Since we don&#39;t want to restrict the hostnames the other cluster can</span>
</span></span><span style="display:flex;"><span>        <span style="color:#75715e"># use, strictForwarding should be false.</span>
</span></span><span style="display:flex;"><span>        <span style="color:#f92672">strictForwarding</span>: <span style="color:#66d9ef">false</span>
</span></span><span style="display:flex;"><span>        <span style="color:#f92672">tokenSecretName</span>: <span style="color:#ae81ff">linkerd-tunnel-token</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>        <span style="color:#75715e"># Configure upstreams for Linkerd. Any traffic coming to `match` will</span>
</span></span><span style="display:flex;"><span>        <span style="color:#75715e"># be forwarded to `target`. If the `match` value is `tcp:PORT`, the</span>
</span></span><span style="display:flex;"><span>        <span style="color:#75715e"># server will automatically create a server listening on that port.</span>
</span></span><span style="display:flex;"><span>        <span style="color:#f92672">upstreams</span>:
</span></span><span style="display:flex;"><span>          - <span style="color:#75715e"># Accessible on inlets.port / 4191.</span>
</span></span><span style="display:flex;"><span>            <span style="color:#f92672">target</span>: <span style="color:#ae81ff">http://linkerd-gateway.linkerd-multicluster.svc.cluster.local:4191</span>
</span></span><span style="display:flex;"><span>          - <span style="color:#f92672">match</span>: <span style="color:#ae81ff">tcp:6443</span>
</span></span><span style="display:flex;"><span>            <span style="color:#f92672">target</span>: <span style="color:#ae81ff">kubernetes.default.svc.cluster.local:443</span>
</span></span><span style="display:flex;"><span>          - <span style="color:#f92672">match</span>: <span style="color:#ae81ff">tcp:4143</span>
</span></span><span style="display:flex;"><span>            <span style="color:#f92672">target</span>: <span style="color:#ae81ff">linkerd-gateway.linkerd-multicluster.svc.cluster.local:4143</span>
</span></span></code></pre></div><p>In <em>theory</em>, you would then expect to be able to link the clusters simply
by overriding the defaults for the gateway and API server addresses:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>linkerd --context<span style="color:#f92672">=</span>home multicluster link --cluster-name home <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span>    --gateway-addresses <span style="color:#e6db74">&#34;linkerd-tunnel-data-plane.inlets.svc.cluster.local&#34;</span> --gateway-port <span style="color:#ae81ff">4143</span> <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span>    --api-server-address <span style="color:#e6db74">&#34;https://linkerd-tunnel-data-plane.inlets.svc.cluster.local:6443&#34;</span> |
</span></span><span style="display:flex;"><span>  kubectl --context<span style="color:#f92672">=</span>cloud apply -f -
</span></span></code></pre></div><p>But this is not the case. There are multiple interactions between the normal
output of <code>linkerd multicluster link</code> and the inlets tunnel that need to be
accounted for.</p>
<h2 id="fixing-the-probe-gateway-service">Fixing the probe-gateway service</h2>
<p>First, the <code>linkerd multicluster link</code> command creates a <code>probe-gateway</code>
service that points to the gateway&rsquo;s health endpoint. In this setup, however,
that health endpoint is itself another Kubernetes service. I&rsquo;m not sure exactly
why, but this is not a configuration that works in Kubernetes: the
<code>probe-gateway</code> service will time out every time, even though the endpoint it
points to works just fine. To work around this, we need to change the
<code>probe-gateway</code> service to type <code>ExternalName</code>, with an <code>externalName</code>
pointing at the inlets server service.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#f92672">apiVersion</span>: <span style="color:#ae81ff">v1</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">kind</span>: <span style="color:#ae81ff">Service</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">metadata</span>:
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">name</span>: <span style="color:#ae81ff">probe-gateway-home</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">namespace</span>: <span style="color:#ae81ff">linkerd-multicluster</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">labels</span>:
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">mirror.linkerd.io/mirrored-gateway</span>: <span style="color:#e6db74">&#34;true&#34;</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">mirror.linkerd.io/cluster-name</span>: <span style="color:#ae81ff">home</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">spec</span>:
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">type</span>: <span style="color:#ae81ff">ExternalName</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">externalName</span>: <span style="color:#ae81ff">linkerd-tunnel-data-plane.inlets.svc.cluster.local</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">ports</span>:
</span></span><span style="display:flex;"><span>  - <span style="color:#f92672">name</span>: <span style="color:#ae81ff">mc-probe</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">port</span>: <span style="color:#ae81ff">4191</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">protocol</span>: <span style="color:#ae81ff">TCP</span>
</span></span></code></pre></div><p>Once this is done, the <code>linkerd multicluster link</code> command will work, and
<code>linkerd multicluster check</code> will now succeed as well. However, if you then
create a service mirror, it will not work.</p>
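<p>Before moving on, you can sanity-check the link at this stage (assuming the kubectl contexts are named <code>home</code> and <code>cloud</code>, as in the earlier examples):</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"># Run the multicluster health checks from the cloud cluster.
linkerd --context=cloud multicluster check

# List linked gateways; the probe for the home cluster should show as alive.
linkerd --context=cloud multicluster gateways
</code></pre></div><p>If the probe shows as alive here, the <code>ExternalName</code> workaround is doing its job.</p>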
<h2 id="fixing-the-service-mirror">Fixing the service mirror</h2>
<p>The service mirror suffers from the same issue as the probe gateway: it
creates a service that points to the gateway&rsquo;s proxy endpoint, which is again
another Kubernetes service. Unfortunately, we can&rsquo;t fix this as easily, since
the mirror service is created dynamically by Linkerd. Instead, we can work
around it by changing how we expose the gateway&rsquo;s proxy endpoint: rather than
a normal service, we create a <code>LoadBalancer</code> service. This is annoying, but I
couldn&rsquo;t find a better workaround. Note that this service does not need to be
exposed outside the cluster, so there&rsquo;s no need to add a firewall exception or
anything like that.</p>
<p>This introduces an additional issue. If we already have a <code>LoadBalancer</code> in our
cluster for the gateway (for services being mirrored in the other direction),
we can&rsquo;t reuse the same port.</p>
<p>You can implement all of this by first making a slight modification to the
upstreams declared in the <code>inlets-client</code> chart, to remap the ports:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#f92672">upstreams</span>:
</span></span><span style="display:flex;"><span>  - <span style="color:#f92672">target</span>: <span style="color:#ae81ff">http://linkerd-gateway.linkerd-multicluster.svc.cluster.local:4191</span>
</span></span><span style="display:flex;"><span>  - <span style="color:#f92672">match</span>: <span style="color:#ae81ff">tcp:6443</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">target</span>: <span style="color:#ae81ff">kubernetes.default.svc.cluster.local:443</span>
</span></span><span style="display:flex;"><span>  - <span style="color:#f92672">match</span>: <span style="color:#ae81ff">tcp:6143</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">target</span>: <span style="color:#ae81ff">linkerd-gateway.linkerd-multicluster.svc.cluster.local:4143</span>
</span></span></code></pre></div><p>And also changing the services declared in the <code>inlets-server</code> chart:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#f92672">inlets</span>:
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">port</span>: <span style="color:#ae81ff">6191</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">service</span>:
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">data-plane</span>:
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">type</span>: <span style="color:#ae81ff">ClusterIP</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">ports</span>:
</span></span><span style="display:flex;"><span>      <span style="color:#f92672">kube</span>:
</span></span><span style="display:flex;"><span>        <span style="color:#f92672">port</span>: <span style="color:#ae81ff">6443</span>
</span></span><span style="display:flex;"><span>        <span style="color:#f92672">protocol</span>: <span style="color:#ae81ff">TCP</span>
</span></span><span style="display:flex;"><span>      <span style="color:#f92672">admin</span>:
</span></span><span style="display:flex;"><span>        <span style="color:#f92672">port</span>: <span style="color:#ae81ff">6191</span>
</span></span><span style="display:flex;"><span>        <span style="color:#f92672">protocol</span>: <span style="color:#ae81ff">TCP</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">data-plane-lb</span>:
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">type</span>: <span style="color:#ae81ff">LoadBalancer</span>
</span></span><span style="display:flex;"><span>    <span style="color:#75715e"># Add any annotations you need, e.g. for MetalLB.</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">annotations</span>: {}
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">ports</span>:
</span></span><span style="display:flex;"><span>      <span style="color:#f92672">proxy</span>:
</span></span><span style="display:flex;"><span>        <span style="color:#f92672">port</span>: <span style="color:#ae81ff">6143</span>
</span></span><span style="display:flex;"><span>        <span style="color:#f92672">protocol</span>: <span style="color:#ae81ff">TCP</span>
</span></span></code></pre></div><p>Finally, the <code>probe-gateway</code> service we changed earlier needs to be updated
again for the new <code>mc-probe</code> port:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#f92672">apiVersion</span>: <span style="color:#ae81ff">v1</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">kind</span>: <span style="color:#ae81ff">Service</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">metadata</span>:
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">name</span>: <span style="color:#ae81ff">probe-gateway-home</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">namespace</span>: <span style="color:#ae81ff">linkerd-multicluster</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">labels</span>:
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">mirror.linkerd.io/mirrored-gateway</span>: <span style="color:#e6db74">&#34;true&#34;</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">mirror.linkerd.io/cluster-name</span>: <span style="color:#ae81ff">home</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">spec</span>:
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">type</span>: <span style="color:#ae81ff">ExternalName</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">externalName</span>: <span style="color:#ae81ff">linkerd-tunnel-data-plane.inlets.svc.cluster.local</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">ports</span>:
</span></span><span style="display:flex;"><span>  - <span style="color:#f92672">name</span>: <span style="color:#ae81ff">mc-probe</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">port</span>: <span style="color:#ae81ff">6191</span> <span style="color:#75715e"># &lt;-- The new port.</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">protocol</span>: <span style="color:#ae81ff">TCP</span>
</span></span></code></pre></div><h2 id="meshing-the-inlets-pods">Meshing the inlets pods</h2>
<p>As a bonus, using ports that don&rsquo;t conflict with <code>linkerd-proxy</code> means we can
mesh the inlets pods themselves. You can do this by adding <code>linkerd.io/inject: enabled</code> to the namespace annotations:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#f92672">apiVersion</span>: <span style="color:#ae81ff">v1</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">kind</span>: <span style="color:#ae81ff">Namespace</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">metadata</span>:
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">name</span>: <span style="color:#ae81ff">inlets</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">annotations</span>:
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">linkerd.io/inject</span>: <span style="color:#ae81ff">enabled</span>
</span></span></code></pre></div><p>The only additional change is on the client, where you will need to skip the
Linkerd outbound ports. You can do this by adding the following:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#f92672">podAnnotations</span>:
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">config.linkerd.io/skip-outbound-ports</span>: <span style="color:#e6db74">&#34;4143,4191&#34;</span>
</span></span></code></pre></div><h2 id="linking-the-clusters">Linking the clusters</h2>
<p>With all these workarounds in place, the architecture looks like this:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-text" data-lang="text"><span style="display:flex;"><span>┌──────────────────────────────────────────────────────────┐     ┌─────────────────────────────────────┐
</span></span><span style="display:flex;"><span>│                      Cloud Cluster                       │  │  │            Home Cluster             │
</span></span><span style="display:flex;"><span>│                                                          │     │                                     │
</span></span><span style="display:flex;"><span>│ ┌───────────────┐   ┌───────────────┐   ┌─────────────┐  │  │  │ ┌───────────────┐   ┌─────────────┐ │
</span></span><span style="display:flex;"><span>│ │ Inlets Server │◀──│ Service: 8123 │◀──│   Ingress   │◀─┼─────┼─│ Inlets Client │──▶│   K8s API   │ │
</span></span><span style="display:flex;"><span>│ └───────────────┘   └───────────────┘   └─────────────┘  │  │  │ └───────────────┘   └─────────────┘ │
</span></span><span style="display:flex;"><span>│         ▲                                                │     │         │                           │
</span></span><span style="display:flex;"><span>│         ├─────────────────┬───────────────────┐          │  │  │         │                           │
</span></span><span style="display:flex;"><span>│         │                 │                   │          │     │         │                           │
</span></span><span style="display:flex;"><span>│ ┌───────────────┐ ┌───────────────┐ ┌──────────────────┐ │  │  │         │                           │
</span></span><span style="display:flex;"><span>│ │    K8s API    │ │Gateway Health │ │  Gateway Proxy   │ │     │         └──────────────────┐        │
</span></span><span style="display:flex;"><span>│ │ Service: 6443 │ │ Service: 6191 │ │ Private LB: 6143 │ │  │  │                            │        │
</span></span><span style="display:flex;"><span>│ └───────────────┘ └───────────────┘ └──────────────────┘ │     │                            │        │
</span></span><span style="display:flex;"><span>│         ▲                 ▲                   ▲          │  │  │                            │        │
</span></span><span style="display:flex;"><span>│         │                 │                   │          │     │                            ▼        │
</span></span><span style="display:flex;"><span>│ ┌───────────────┐  ┌─────────────┐  ╔══════════════════╗ │  │  │  ╔═════════════╗    ┌─────────────┐ │
</span></span><span style="display:flex;"><span>│ │Linkerd Service│  │probe-gateway│  ║      Foobar      ║ │     │  ║   Foobar    ║    │   Linkerd   │ │
</span></span><span style="display:flex;"><span>│ │    Mirror     │─▶│ExternalName │  ║      Mirror      ║─│─ ┼ ─│─▶║   Service   ║◀───│   Gateway   │ │
</span></span><span style="display:flex;"><span>│ └───────────────┘  └─────────────┘  ╚══════════════════╝ │     │  ╚═════════════╝    └─────────────┘ │
</span></span><span style="display:flex;"><span>│                                                          │  │  │                                     │
</span></span><span style="display:flex;"><span>└──────────────────────────────────────────────────────────┘     └─────────────────────────────────────┘
</span></span></code></pre></div><p>Now we can link the two clusters (but for real this time):</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>linkerd --context<span style="color:#f92672">=</span>home multicluster link --cluster-name home <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span>    --gateway-addresses <span style="color:#e6db74">&#34;&lt;The address of your LoadBalancer&gt;&#34;</span> --gateway-port <span style="color:#ae81ff">6143</span> <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span>    --api-server-address <span style="color:#e6db74">&#34;https://linkerd-tunnel-data-plane.inlets.svc.cluster.local:6443&#34;</span> |
</span></span><span style="display:flex;"><span>  kubectl --context<span style="color:#f92672">=</span>cloud apply -f -
</span></span></code></pre></div><h2 id="other-options">Other Options</h2>
<p>There are plenty of other avenues that could be explored here. I went down
this particular path because it was recommended in the Linkerd docs, but
basically any solution that allows our cloud cluster to talk directly to our
homelab would have worked. For example, setting up something like WireGuard
would probably also have been a very reasonable solution.</p>
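<p>As a purely illustrative sketch of that alternative (the keys, addresses, and CIDRs below are placeholders, not from my setup), a WireGuard peer config on the home side might look something like:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-text" data-lang="text">[Interface]
# Placeholder key; generate your own with `wg genkey`.
PrivateKey = &lt;home-private-key&gt;
Address = 10.100.0.2/32

[Peer]
PublicKey = &lt;cloud-public-key&gt;
Endpoint = cloud.example.com:51820
# Route the cloud cluster's service CIDR through the tunnel.
AllowedIPs = 10.96.0.0/12
PersistentKeepalive = 25
</code></pre></div>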
<p>There are also other options for multi-cluster communication, besides Linkerd.
However, I believe they will all have the same or worse networking requirements.
For example, some solutions require that all individual nodes be routable across
all clusters.</p>
<h2 id="future-work">Future Work</h2>
<ul>
<li>
<p>I don&rsquo;t think there&rsquo;s any reason why a K8s provisioner couldn&rsquo;t be added to
<a href="https://github.com/inlets/inlets-operator">inlets-operator</a>, which would mean we wouldn&rsquo;t have to use the
forked version of inlets.</p>
</li>
<li>
<p>There may also be a way to modify the service mirrors created by
<a href="https://github.com/linkerd/linkerd2">Linkerd</a>, such that they can be directly routed through the tunnel,
instead of having to hit a LoadBalancer first, e.g. by creating them as type
<code>ExternalName</code>.</p>
</li>
</ul>
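<p>For the second idea, a hypothetical mirrored service patched to route through the tunnel directly might look something like this (untested; the <code>foobar-home</code> name and labels follow the mirror naming used in the diagram above):</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml">apiVersion: v1
kind: Service
metadata:
  name: foobar-home
  namespace: default
  labels:
    mirror.linkerd.io/cluster-name: home
spec:
  # Instead of pointing at the LoadBalancer, resolve straight to the
  # inlets tunnel service, like we did for probe-gateway.
  type: ExternalName
  externalName: linkerd-tunnel-data-plane.inlets.svc.cluster.local
</code></pre></div>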
<h2 id="conclusion">Conclusion</h2>
<p>I hope this was helpful. To learn more, you can check out the following links:</p>
<ul>
<li>Linkerd: <a href="https://github.com/linkerd/linkerd2">https://github.com/linkerd/linkerd2</a></li>
<li>Linkerd Multi-Cluster: <a href="https://linkerd.io/2.12/tasks/multicluster/">https://linkerd.io/2.12/tasks/multicluster/</a></li>
<li>Inlets: <a href="https://blog.alexellis.io/ingress-for-your-local-kubernetes-cluster/">https://blog.alexellis.io/ingress-for-your-local-kubernetes-cluster/</a></li>
<li>Inlets Operator: <a href="https://github.com/inlets/inlets-operator">https://github.com/inlets/inlets-operator</a></li>
<li>Inlets Fork by cubed-it: <a href="https://github.com/cubed-it/inlets">https://github.com/cubed-it/inlets</a></li>
</ul>
<p>And finally, again, if you would like to see my exact and up-to-date
implementation of everything above, check out my homelab repo:</p>
<ul>
<li><a href="https://github.com/MacroPower/homelab">https://github.com/MacroPower/homelab</a></li>
</ul>
]]></content></item><item><title>Building a Twin-ITX Cluster</title><link>https://jacobcolvin.com/posts/2023/04/building-a-twin-itx-cluster/</link><pubDate>Sun, 02 Apr 2023 00:00:00 +0000</pubDate><guid>https://jacobcolvin.com/posts/2023/04/building-a-twin-itx-cluster/</guid><description>I am building a cluster that includes a Turing Pi (a mini-ITX cluster board), alongside a normal mini-ITX system. I wanted to put both of these in a single chassis to save space, which is possible with a &amp;ldquo;dual mini-itx&amp;rdquo;, or &amp;ldquo;twin-itx&amp;rdquo; chassis. This article covers most of the chassis I discovered.
As of writing, I have made a purchase (I went with the MyElectronics chassis) and am waiting for it to arrive.</description><content type="html"><![CDATA[<p>I am building a cluster that includes a <a href="https://turingpi.com/">Turing Pi</a> (a
mini-ITX cluster board), alongside a normal mini-ITX system. I wanted to put
both of these in a single chassis to save space, which is possible with a &ldquo;dual
mini-itx&rdquo;, or &ldquo;twin-itx&rdquo; chassis. This article covers most of the chassis I
discovered.</p>
<p>As of writing, I have made a purchase (I went with the MyElectronics chassis)
and am waiting for it to arrive. I will eventually be writing another article
covering my exact setup.</p>
<h2 id="1u">1U</h2>
<p>A 1U chassis will not fit a Turing Pi, so I did not evaluate these.</p>
<h2 id="15u">1.5U</h2>
<p>I gave up on this idea for a couple reasons.</p>
<p>One, we don&rsquo;t yet know whether newer compute modules will fit. CM4s should, but
there&rsquo;s no telling how much clearance will be needed for RK1, etc.</p>
<p>Two, the second system, which I want to be a more normal x86 system, is not
optimized for 1.5U. Next to no fans are going to fit in 1.5U, and the ones that
do are all optimized for 1U, i.e. those loud and awful fans.</p>
<p>For these reasons, the 2U list is likely more complete.</p>
<h3 id="onlogic-mk150">OnLogic MK150</h3>
<p><strong>Items</strong>:</p>
<ul>
<li><a href="https://www.onlogic.com/mk150/">MK150</a></li>
<li><a href="https://www.onlogic.com/akdb-mk15x/">AKDB-MK15X</a></li>
</ul>
<p><strong>Notes:</strong> Normal mini-itx chassis that can be adapted into a dual-mainboard
chassis.</p>
<p><strong>Cost:</strong> $202 + $15</p>
<h3 id="dv-industrial-computer-ds12">DV Industrial Computer DS12</h3>
<p><strong>Items</strong>:</p>
<ul>
<li><a href="http://inpc.com.ua/data/ds12.html">DS12 TWIN</a></li>
</ul>
<p><strong>Notes:</strong> Deep but with no hotswap. Thermals make no sense to me.</p>
<p><strong>Cost:</strong> I have no idea what it costs.</p>
<h2 id="2u">2U</h2>
<h3 id="myelectronics-dual-mini-itx">MyElectronics Dual Mini-ITX</h3>
<p><strong>Items</strong>:</p>
<ul>
<li><a href="https://www.myelectronics.nl/us/19-inch-2u-mini-itx-case-for-dual-mini-itx-short-d.html">MyElectronics 6875</a></li>
</ul>
<p><strong>Notes:</strong> No hotswap, but is short depth. Looks nice, makes sense from a
thermals perspective. Is built specifically for PicoPSU, and a normal PSU will
not fit. Allows you to have front-io even without headers, via adapters that
plug into the back of your system.</p>
<p><strong>Cost:</strong> ~$295</p>
<h3 id="s208">S208</h3>
<p><strong>Items</strong>:</p>
<ul>
<li><a href="https://www.genesysgroup.com.tw/s208b-twinitx.htm">GeneSys S208B-TWIN-ITX</a></li>
<li><a href="http://www.plinkusa.net/webTWIN-ITX-S2082.htm">PlinkUSA TWIN-ITX-S2082</a></li>
</ul>
<p><strong>Notes:</strong> Hotswap. Nice layout. I reached out via email but could not get in
contact with them.</p>
<p><strong>Cost:</strong> $290</p>
<p><strong>Threads:</strong></p>
<ul>
<li><a href="https://forums.servethehome.com/index.php?threads/2u-dual-itx-case.9386/">https://forums.servethehome.com/index.php?threads/2u-dual-itx-case.9386/</a></li>
</ul>
<h3 id="travla--tawa-series">Travla / TAWA Series</h3>
<p><strong>Items</strong>:</p>
<ul>
<li><a href="https://www.kiwatek.com/corp/index.php?route=product/product&path=75_78&product_id=62">TAWA-T2240</a></li>
<li><a href="https://www.kiwatek.com/corp/index.php?route=product/product&path=75_78&product_id=63">TAWA-T2241</a></li>
<li><a href="https://www.kiwatek.com/corp/index.php?route=product/product&path=75_78&product_id=229">TAWA-T2242</a></li>
<li><a href="https://www.kiwatek.com/corp/index.php?route=product/product&path=75_78&product_id=230">TAWA-T2280</a></li>
<li><a href="https://www.kiwatek.com/corp/index.php?route=product/product&path=75_78&product_id=256">TAWA-T2900</a></li>
</ul>
<p>All these are also available under the &ldquo;Travla&rdquo; brand <a href="https://www.mini-itx.com/store/?c=63">here</a>.</p>
<p><strong>Notes:</strong> A number of different options. They tend to have really odd
internal layouts. The TAWA-T2900 has all front I/O, which is unique.</p>
<p><strong>Cost:</strong> I have no idea what it costs.</p>
<p><strong>Threads:</strong></p>
<ul>
<li><a href="https://forums.servethehome.com/index.php?threads/travla-2u-t2280-t2281.10288/">https://forums.servethehome.com/index.php?threads/travla-2u-t2280-t2281.10288/</a></li>
<li><a href="https://forums.servethehome.com/index.php?threads/dual-itx-cases.8230/">https://forums.servethehome.com/index.php?threads/dual-itx-cases.8230/</a></li>
<li><a href="https://www.reddit.com/r/sffpc/comments/ejek9k/travla_t2241_2u_dual_miniitx_case_19l_with_2x/">https://www.reddit.com/r/sffpc/comments/ejek9k/travla_t2241_2u_dual_miniitx_case_19l_with_2x/</a></li>
</ul>
<h3 id="cablematic-ck018">Cablematic CK018</h3>
<p><strong>Items</strong>:</p>
<ul>
<li><a href="https://cablematic.com/en/products/server-case-rackmount-chassis-19-inch-ipc-mini-itx-2u-4x35-inch-depth-360mm-CK01800/">Cablematic CK01800</a></li>
</ul>
<p><strong>Notes:</strong> Deep but with no hotswap.</p>
<p><strong>Cost:</strong> ~$125</p>
<h3 id="rm-2270">RM-2270</h3>
<p><strong>Items</strong>:</p>
<ul>
<li><a href="https://www.circotech.com/rm-2270-2u-rackmount-case-for-dual-mini-itx-motherboard-system-14-deep.html">Circotech RM-2270</a></li>
<li><a href="https://www.amazon.com/KRI-Rackmount-Chassis-RM-2270-Mini-ITX/dp/B08JNFV99V">KRI RM-2270</a></li>
</ul>
<p><strong>Notes:</strong> Deep but with no hotswap.</p>
<p><strong>Cost:</strong> $279</p>
<h3 id="istarusa-d-218m2-itx">iStarUSA D-218M2-ITX</h3>
<p><strong>Items</strong>:</p>
<ul>
<li><a href="http://www.istarusa.com/en/istarusa/products.php?model=D-218M2-ITX">iStarUSA D-218M2-ITX</a></li>
</ul>
<p><strong>Notes:</strong> Deep but with no hotswap. There are some packages you can get that
include PSUs, but they were out of stock when I looked.</p>
<p><strong>Cost:</strong> ~$130</p>
<p><strong>Threads:</strong></p>
<ul>
<li><a href="https://forums.servethehome.com/index.php?threads/dual-mitx-rackmount-2u-istarusa-d-218m2-itx.1303/">https://forums.servethehome.com/index.php?threads/dual-mitx-rackmount-2u-istarusa-d-218m2-itx.1303/</a></li>
</ul>
<h2 id="misc-threads-and-other-articles">Misc Threads and other Articles</h2>
<ul>
<li><a href="https://www.reddit.com/r/homelab/comments/onfh15/2u4u_dual_itx_case/">https://www.reddit.com/r/homelab/comments/onfh15/2u4u_dual_itx_case/</a></li>
<li><a href="https://www.reddit.com/r/homelab/comments/u7fecx/looking_to_build_a_dual_mini_itx_server_rack/">https://www.reddit.com/r/homelab/comments/u7fecx/looking_to_build_a_dual_mini_itx_server_rack/</a></li>
<li><a href="https://www.reddit.com/r/homelab/comments/l7ifrb/dual_mini_itx_rackmount_servers/">https://www.reddit.com/r/homelab/comments/l7ifrb/dual_mini_itx_rackmount_servers/</a></li>
<li><a href="https://www.reddit.com/r/homelab/comments/vskcrw/looking_for_unique_case_looking_for_a_rack_mount/">https://www.reddit.com/r/homelab/comments/vskcrw/looking_for_unique_case_looking_for_a_rack_mount/</a></li>
<li><a href="https://forums.servethehome.com/index.php?threads/looking-for-dual-mini-itx-cases.26835/">https://forums.servethehome.com/index.php?threads/looking-for-dual-mini-itx-cases.26835/</a></li>
<li><a href="https://forums.servethehome.com/index.php?threads/4-mini-itx-boards-in-1u-chassi.10453/">https://forums.servethehome.com/index.php?threads/4-mini-itx-boards-in-1u-chassi.10453/</a></li>
<li><a href="https://www.reddit.com/r/homelab/comments/4evfn6/dual_miniitx_rackmount_chassis/">https://www.reddit.com/r/homelab/comments/4evfn6/dual_miniitx_rackmount_chassis/</a></li>
<li><a href="https://www.reddit.com/r/sffpc/comments/aeldtb/has_anyone_ever_made_a_smallish_itx_dual_system/">https://www.reddit.com/r/sffpc/comments/aeldtb/has_anyone_ever_made_a_smallish_itx_dual_system/</a></li>
<li><a href="https://www.reddit.com/r/sffpc/comments/hm4sr4/are_there_any_dual_itx_cases/">https://www.reddit.com/r/sffpc/comments/hm4sr4/are_there_any_dual_itx_cases/</a></li>
<li><a href="https://www.reddit.com/r/sffpc/comments/wjsm4n/any_dual_itx_cases/">https://www.reddit.com/r/sffpc/comments/wjsm4n/any_dual_itx_cases/</a></li>
<li><a href="https://www.reddit.com/r/homelab/comments/en84r2/mitx_multi_node_case/">https://www.reddit.com/r/homelab/comments/en84r2/mitx_multi_node_case/</a></li>
<li><a href="https://www.reddit.com/r/homelab/comments/u6062c/dual_server_chassis/">https://www.reddit.com/r/homelab/comments/u6062c/dual_server_chassis/</a></li>
</ul>
]]></content></item><item><title>Backups for K8s and Beyond</title><link>https://jacobcolvin.com/posts/2023/01/backups-for-k8s-and-beyond/</link><pubDate>Thu, 05 Jan 2023 00:00:00 +0000</pubDate><guid>https://jacobcolvin.com/posts/2023/01/backups-for-k8s-and-beyond/</guid><description>Intro Recently I have been moving my homelab to Kubernetes. This has presented the need for a backup solution for any persistent data I might have there. For quite some time, I have been using Duplicati for my backups, but I haven&amp;rsquo;t been completely content with its performance, and have heard many horror stories of restores not working properly. So, I wanted to take this opportunity to find a backup solution that worked well for my personal computers (I have Windows, Linux, and Darwin hosts), my storage servers (UnRaid and FreeNAS), as well as Kubernetes.</description><content type="html"><![CDATA[<h2 id="intro">Intro</h2>
<p>Recently I have been moving my <a href="https://github.com/MacroPower/homelab">homelab</a> to Kubernetes. This has
presented the need for a backup solution for any persistent data I might have
there. For quite some time, I have been using <a href="https://github.com/duplicati/duplicati">Duplicati</a> for my
backups, but I haven&rsquo;t been completely content with its performance, and have
heard many horror stories of restores not working properly. So, I wanted to take
this opportunity to find a backup solution that worked well for my personal
computers (I have Windows, Linux, and Darwin hosts), my storage servers (UnRaid
and FreeNAS), as well as Kubernetes. Assuming that such a solution exists, of
course!</p>
<h2 id="choosing-a-tool">Choosing a Tool</h2>
<p>There were a few solutions that I had heard mentioned a lot on /r/homelab, and I
took a look at all of them: <a href="https://duplicacy.com/">Duplicacy</a>, <a href="https://www.borgbackup.org/">Borg</a>,
and <a href="https://github.com/restic/restic">Restic</a>.</p>
<p><a href="https://duplicacy.com/">Duplicacy</a> seems like a good solution for some people, and the
interface looked very nice. However, you do need to purchase a license to use
all of its features, so I chose to avoid it unless I couldn&rsquo;t find anything else
that worked. I also wasn&rsquo;t sure how I&rsquo;d use it for any of my Kubernetes needs;
it didn&rsquo;t seem like a very popular use-case for the tool.</p>
<p><a href="https://www.borgbackup.org/">Borg</a> and <a href="https://github.com/restic/restic">Restic</a> both seemed like great tools. Ultimately, I
decided to go with Restic purely because of its ecosystem, but again, they both
seem like very nice solutions. Borg also probably has a better ecosystem for the
majority of users, but I believe Restic&rsquo;s is better in my particular case.</p>
<p>Restic didn&rsquo;t have any particularly nice client GUIs that I could find, but for
someone like me who likes to use version control as much as they can, there&rsquo;s
<a href="https://github.com/creativeprojects/resticprofile">resticprofile</a>, which is a fantastic tool that makes managing
Restic on client machines very easy, and it works very well on my Windows,
Linux, and Darwin hosts. I also found that <a href="https://github.com/emuell/restic-browser">Restic Browser</a>
could serve as a very usable GUI for doing restores. It&rsquo;s still very bare-bones,
but it does the job. Restic also has several solutions for interacting with K8s
that looked very promising. Furthermore, Restic and all of these other tools are
written in Go, which I very much prefer to Python, the language Borg is written
in. I assume this is one of the main reasons the Kubernetes ecosystem around
Restic is so much more developed.</p>
<h2 id="integrating-with-kubernetes">Integrating with Kubernetes</h2>
<p>There are several tools out there that exist to make backing up persistent
storage on Kubernetes with Restic much easier. Typically, they are operators
that allow you to define things like a backup schedule and what PVCs you want to be
in which Restic repo. Again, I took a look at three relatively popular options.</p>
<p>The first product that I found was <a href="https://github.com/stashed/stash">Stash</a>. Stash is interesting because
it has CRDs for a lot of different things you might want to back up or restore. I
reached out to the sales team to see what an Enterprise License would cost
(enterprise is needed for the most useful features), but they did not reply to
me, I assume because I only have a few Kubernetes nodes to my name. From there,
I was going to see if I could just build from source with license checks
disabled, but it&rsquo;s clear to me that at least some enterprise functionality isn&rsquo;t
present in the normal public repo, so that&rsquo;s off the table as well.</p>
<p>Another very popular choice is <a href="https://github.com/vmware-tanzu/velero">Velero</a>. However, I was immediately very
apprehensive about it because it was made by Heptio, who sold out to VMware some
time ago. This has led to a good amount of abandonware. It does look like
Velero is still being supported, but it&rsquo;s still important to realize that this
acquisition altered the goals of the project, and I would have to pray that
VMware does not alter them further. Additionally, and very annoyingly, despite
heavily using Restic, Velero does not support the Restic REST server backend,
meaning I would be hugely limited in my potential storage options.</p>
<p>Ultimately, I ended up going with <a href="https://github.com/k8up-io/k8up">K8up</a>. In stark contrast to the other
solutions I outlined, K8up is an active CNCF sandbox project, which makes me
much more comfortable with using it. I really didn&rsquo;t see any downsides to it for
me personally, as it included most (if not all) of the enterprise features from
Stash (such as support for backing up databases), and it also supports using the
Restic REST server as a backend, which Velero was missing.</p>
<h2 id="my-implementation">My Implementation</h2>
<p>Below is a minimized account of how I implemented everything. For all the exact,
ugly details, feel free to take a look at my <a href="https://github.com/MacroPower/homelab">homelab GitHub repo</a>.</p>
<p>First, I created a <code>backup</code> namespace for everything centralized:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#f92672">apiVersion</span>: <span style="color:#ae81ff">v1</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">kind</span>: <span style="color:#ae81ff">Namespace</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">metadata</span>:
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">name</span>: <span style="color:#ae81ff">backup</span>
</span></span></code></pre></div><p>Next, I set up my Restic REST server. I used Rclone to do this, which basically
allows you to use anything that Rclone supports as storage for Restic. I ended
up creating a <a href="https://github.com/MacroPower/helm-charts/tree/main/charts/rclone">new helm chart for Rclone</a>, just because I couldn&rsquo;t
find any existing ones that I liked very much. Unlike many others, it just runs
<code>rclone rcd</code>, so you can use this chart for basically anything, and just send
commands to serve/copy/sync/etc as needed.</p>
<p>Basically, the only extra values I supplied were to set my config file and add
an extra port for Restic. This is my Kustomization:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#f92672">helmCharts</span>:
</span></span><span style="display:flex;"><span>  - <span style="color:#f92672">name</span>: <span style="color:#ae81ff">rclone</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">repo</span>: <span style="color:#ae81ff">https://jacobcolvin.com/helm-charts/</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">version</span>: <span style="color:#e6db74">&#39;0.3.0&#39;</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">releaseName</span>: <span style="color:#ae81ff">rclone</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">namespace</span>: <span style="color:#ae81ff">backup</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">valuesInline</span>:
</span></span><span style="display:flex;"><span>      <span style="color:#f92672">image</span>:
</span></span><span style="display:flex;"><span>        <span style="color:#f92672">repository</span>: <span style="color:#ae81ff">rclone/rclone</span>
</span></span><span style="display:flex;"><span>        <span style="color:#f92672">tag</span>: <span style="color:#e6db74">&#39;1.60.1&#39;</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>      <span style="color:#f92672">configSecretName</span>: <span style="color:#ae81ff">rclone-config</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>      <span style="color:#f92672">extraPorts</span>:
</span></span><span style="display:flex;"><span>        - <span style="color:#f92672">name</span>: <span style="color:#ae81ff">restic</span>
</span></span><span style="display:flex;"><span>          <span style="color:#f92672">containerPort</span>: <span style="color:#ae81ff">50001</span>
</span></span><span style="display:flex;"><span>          <span style="color:#f92672">protocol</span>: <span style="color:#ae81ff">TCP</span>
</span></span></code></pre></div><p>I SSHed into the container and set up my remote, <code>ResticRemote</code>. Then I saved
the resulting config to my secret provider as the <code>rclone-config</code> secret, so it won&rsquo;t be lost.</p>
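<p>For illustration, the remote in <code>rclone.conf</code> might look something like the
following. This is only a sketch assuming a hypothetical SFTP backend; the host,
user, and key path are placeholders, and only the <code>ResticRemote</code> name comes from
my actual setup:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-ini" data-lang="ini"># Hypothetical example; replace with whatever backend you actually use
[ResticRemote]
type = sftp
host = storage.example.com
user = backup
key_file = /config/rclone/id_rsa
</code></pre></div>
<p>Any backend Rclone supports (S3, B2, SFTP, etc.) would work the same way here,
since Restic itself only ever talks to the REST server.</p>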
<p>I then created a Job to run the following command after the sync completes to
start the Restic server:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>curl -v -X POST -H <span style="color:#e6db74">&#39;Content-Type: application/json&#39;</span> -d <span style="color:#e6db74">&#39;{
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">  &#34;_async&#34;: true,
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">  &#34;_group&#34;: &#34;job/restic&#34;,
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">  &#34;command&#34;: &#34;serve&#34;,
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">  &#34;arg&#34;: [&#34;restic&#34;, &#34;ResticRemote:/&#34;],
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">  &#34;opt&#34;: {
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    &#34;addr&#34;: &#34;:50001&#34;
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">  }
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">}&#39;</span> http://rclone.backup.svc.cluster.local:5572/core/command
</span></span></code></pre></div><p>There are probably a lot of different ways to handle this, and I&rsquo;m sure it&rsquo;s
mostly down to preference. So I won&rsquo;t go into further detail on exactly how my
Job is set up and such, but if you&rsquo;re curious, it&rsquo;s all public on my GitHub repo.</p>
<p>For the machines I wanted to back up, I used Traefik as an ingress. This
is where I added things like authentication, certs, and such. Normally I use
Cloudflare to proxy traffic, but in this case I thought it&rsquo;d be better to not do
that, as I am potentially sending quite a lot of data back and forth and don&rsquo;t
want to have to deal with any potential complications there. I also use both
external-dns and cert-manager, so this was as simple as adding/replacing a few
annotations, to disable proxying and switch to my Let&rsquo;s Encrypt issuer:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#f92672">&#39;external-dns.alpha.kubernetes.io/cloudflare-proxied&#39;</span>: <span style="color:#e6db74">&#39;false&#39;</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">&#39;cert-manager.io/issuer&#39;</span>: <span style="color:#e6db74">&#39;letsencrypt-prod&#39;</span>
</span></span></code></pre></div><p>From there I was able to start using Restic on my personal machines. I used
resticprofile to do the vast majority of the heavy lifting here. If you would
like to see examples of the profiles I configured, you can check out my
<a href="https://github.com/MacroPower/dotfiles">dotfiles repo</a>.</p>
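<p>To give a rough idea of what that looks like, here is a minimal resticprofile
profile sketch pointed at a setup like this one. The repository URL, key file,
and source paths are all placeholders for illustration:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"># Hypothetical example profile; adjust the repository and paths for your setup
default:
  repository: &#39;rest:https://restic.example.com/macropower/laptop&#39;
  password-file: &#39;restic-key.txt&#39;

  backup:
    source:
      - <span>~</span>/Documents
      - <span>~</span>/Projects

  retention:
    after-backup: true
    keep-daily: 7
</code></pre></div>
<p>With a profile like this, <code>resticprofile backup</code> runs the backup and applies the
retention policy automatically afterwards.</p>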
<p>Now we can move on to using this infrastructure to actually back up PVCs and
such that are also hosted by Kubernetes. First, I installed the K8up Backup
Operator. Note that the <code>resources</code> part is required, because they don&rsquo;t include
CRDs in the helm repo. You can also download the CRD and point to the file.
Also, the <code>BACKUP_GLOBAL_OPERATOR_NAMESPACE</code> environment variable is important.
It tells any Jobs in other namespaces that they should use the operator from the
<code>backup</code> namespace. Obviously, you&rsquo;d want to configure this differently if there
were lots of people using one cluster.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#f92672">helmCharts</span>:
</span></span><span style="display:flex;"><span>  - <span style="color:#f92672">name</span>: <span style="color:#ae81ff">k8up</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">repo</span>: <span style="color:#ae81ff">https://k8up-io.github.io/k8up</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">version</span>: <span style="color:#e6db74">&#39;4.0.1&#39;</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">releaseName</span>: <span style="color:#ae81ff">k8up</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">namespace</span>: <span style="color:#ae81ff">backup</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">valuesInline</span>:
</span></span><span style="display:flex;"><span>      <span style="color:#f92672">k8up</span>:
</span></span><span style="display:flex;"><span>        <span style="color:#f92672">envVars</span>:
</span></span><span style="display:flex;"><span>          - <span style="color:#f92672">name</span>: <span style="color:#ae81ff">BACKUP_GLOBAL_OPERATOR_NAMESPACE</span>
</span></span><span style="display:flex;"><span>            <span style="color:#f92672">value</span>: <span style="color:#ae81ff">backup</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">resources</span>:
</span></span><span style="display:flex;"><span>  - <span style="color:#ae81ff">https://github.com/k8up-io/k8up/releases/download/k8up-4.0.1/k8up-crd.yaml</span>
</span></span></code></pre></div><p>With that installed, we can now use the <code>Schedule</code> CR to start backing things
up. While the operator is centralized in this configuration, the <code>Schedule</code> is
not. There should be one CR in each namespace containing things you want to back
up. Here&rsquo;s an example <code>Schedule</code> for a <code>foobar</code> namespace.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#f92672">apiVersion</span>: <span style="color:#ae81ff">k8up.io/v1</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">kind</span>: <span style="color:#ae81ff">Schedule</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">metadata</span>:
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">name</span>: <span style="color:#ae81ff">foobar-schedule</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">namespace</span>: <span style="color:#ae81ff">foobar</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">spec</span>:
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">backend</span>:
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">rest</span>:
</span></span><span style="display:flex;"><span>      <span style="color:#f92672">url</span>: <span style="color:#ae81ff">http://rclone-restic.backup.svc.cluster.local:50001/macropower/foobar</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">repoPasswordSecretRef</span>:
</span></span><span style="display:flex;"><span>      <span style="color:#f92672">name</span>: <span style="color:#ae81ff">restic-credentials</span>
</span></span><span style="display:flex;"><span>      <span style="color:#f92672">key</span>: <span style="color:#ae81ff">repo-key</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">backup</span>:
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">schedule</span>: <span style="color:#e6db74">&#39;0 4 * * *&#39;</span> <span style="color:#75715e"># 04:00</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">failedJobsHistoryLimit</span>: <span style="color:#ae81ff">2</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">successfulJobsHistoryLimit</span>: <span style="color:#ae81ff">2</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">check</span>:
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">schedule</span>: <span style="color:#e6db74">&#39;0 1 * * 1&#39;</span> <span style="color:#75715e"># 01:00 on Monday</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">failedJobsHistoryLimit</span>: <span style="color:#ae81ff">2</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">successfulJobsHistoryLimit</span>: <span style="color:#ae81ff">2</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">prune</span>:
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">schedule</span>: <span style="color:#e6db74">&#39;0 1 * * 0&#39;</span> <span style="color:#75715e"># 00:00 on Monday</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">failedJobsHistoryLimit</span>: <span style="color:#ae81ff">2</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">successfulJobsHistoryLimit</span>: <span style="color:#ae81ff">2</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">retention</span>:
</span></span><span style="display:flex;"><span>      <span style="color:#f92672">keepLast</span>: <span style="color:#ae81ff">3</span>
</span></span><span style="display:flex;"><span>      <span style="color:#f92672">keepDaily</span>: <span style="color:#ae81ff">7</span>
</span></span><span style="display:flex;"><span>      <span style="color:#f92672">keepWeekly</span>: <span style="color:#ae81ff">5</span>
</span></span><span style="display:flex;"><span>      <span style="color:#f92672">keepMonthly</span>: <span style="color:#ae81ff">12</span>
</span></span></code></pre></div><p>Note that this <code>Schedule</code> has its very own repo to use at <code>macropower/foobar</code>,
and also its own encryption key in the <code>restic-credentials</code> secret. A different
namespace with its own <code>Schedule</code> could have its own repo, credentials, or any
other attributes that aren&rsquo;t configured on the operator.</p>
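<p>For completeness, the <code>restic-credentials</code> secret referenced by the <code>Schedule</code> is
just an ordinary Kubernetes Secret. A sketch might look like the following; the
key value is obviously a placeholder, and in practice you&rsquo;d want to inject it
from your secret provider rather than committing it anywhere:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"># Hypothetical example; the repo-key value is a placeholder
apiVersion: v1
kind: Secret
metadata:
  name: restic-credentials
  namespace: foobar
type: Opaque
stringData:
  repo-key: changeme-restic-encryption-key
</code></pre></div>
<p>Losing this key means losing access to the repo, so make sure it&rsquo;s also backed
up somewhere outside the cluster.</p>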
<p>Once the <code>Schedule</code> is created, anything inside the namespace it lives in
(<code>foobar</code> in this example) can be backed up via an annotation. For example,
below I have two PVCs, one for <code>music</code> and one for <code>anime</code>:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span>---
</span></span><span style="display:flex;"><span><span style="color:#f92672">apiVersion</span>: <span style="color:#ae81ff">v1</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">kind</span>: <span style="color:#ae81ff">PersistentVolumeClaim</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">metadata</span>:
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">name</span>: <span style="color:#ae81ff">music</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">namespace</span>: <span style="color:#ae81ff">foobar</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">annotations</span>:
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">&#39;k8up.io/backup&#39;</span>: <span style="color:#e6db74">&#39;true&#39;</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">spec</span>:
</span></span><span style="display:flex;"><span>  <span style="color:#75715e"># ...</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>---
</span></span><span style="display:flex;"><span><span style="color:#f92672">apiVersion</span>: <span style="color:#ae81ff">v1</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">kind</span>: <span style="color:#ae81ff">PersistentVolumeClaim</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">metadata</span>:
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">name</span>: <span style="color:#ae81ff">anime</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">namespace</span>: <span style="color:#ae81ff">foobar</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">spec</span>:
</span></span><span style="display:flex;"><span>  <span style="color:#75715e"># ...</span>
</span></span></code></pre></div><p><code>music</code> has the backup annotation, so it will be backed up every day per our
<code>Schedule</code>. However, <code>anime</code> does not have this annotation, so it will not be
included in backups.</p>
<p>Here&rsquo;s a diagram showing how everything works together:</p>
<p><img alt="k8up diagram" src="/k8up.drawio.svg"></p>
<h2 id="databases-and-other-edge-cases">Databases and Other Edge Cases</h2>
<p>Lastly, to deal with databases, you of course can&rsquo;t simply back up their PVCs.
Thankfully, K8up has a really simple way of addressing databases and basically
any other edge case. You can add annotations to the Pod itself, and K8up can
run commands inside your containers to collect and back up data. Personally, I
use TimescaleDB, which is backed up in almost exactly the same way as Postgres. I
was able to just add the following annotations:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#f92672">podAnnotations</span>:
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">&#39;k8up.io/backup&#39;</span>: <span style="color:#e6db74">&#39;true&#39;</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">&#39;k8up.io/backupcommand&#39;</span>: <span style="color:#ae81ff">sh -c &#39;PGUSER=&#34;postgres&#34; PGPASSWORD=&#34;$PATRONI_SUPERUSER_PASSWORD&#34; pg_dumpall --clean&#39;</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">&#39;k8up.io/file-extension&#39;</span>: <span style="color:#ae81ff">.sql</span>
</span></span></code></pre></div><p>This just creates a snapshot of the <code>db.sql</code> file resulting from the <code>pg_dumpall</code>
command. I am not sure how, or even if, I could automate restores with Timescale,
because they do require <a href="https://docs.timescale.com/timescaledb/latest/how-to-guides/backup-and-restore/pg-dump-and-restore/#restoring-an-entire-database-from-backup">a bit of extra work</a> compared to
vanilla Postgres. But, hopefully this isn&rsquo;t something I&rsquo;ll have to do very
often.</p>
<h2 id="conclusion">Conclusion</h2>
<p>I hope someone found this article helpful. If you would like to see my
exact and up-to-date implementation of everything above, please check out my homelab repo:</p>
<ul>
<li><a href="https://github.com/MacroPower/homelab">https://github.com/MacroPower/homelab</a></li>
</ul>
<p>And of course a huge thanks to the authors of the following projects:</p>
<ul>
<li><a href="https://github.com/restic/restic">Restic</a></li>
<li><a href="https://github.com/k8up-io/k8up">K8up</a></li>
<li><a href="https://github.com/creativeprojects/resticprofile">resticprofile</a></li>
<li><a href="https://github.com/emuell/restic-browser">Restic Browser</a></li>
</ul>
]]></content></item><item><title>How to use Grafana and Prometheus to Rickroll your friends (or enemies)</title><link>https://jacobcolvin.com/posts/2021/07/how-to-use-grafana-and-prometheus-to-rickroll-your-friends-or-enemies/</link><pubDate>Fri, 30 Jul 2021 00:00:00 +0000</pubDate><guid>https://jacobcolvin.com/posts/2021/07/how-to-use-grafana-and-prometheus-to-rickroll-your-friends-or-enemies/</guid><description>Source code: https://github.com/MacroPower/prometheus_video_renderer
Read the blog post: https://grafana.com/blog/2021/07/30/how-to-use-grafana-and-prometheus-to-rickroll-your-friends-or-enemies/</description><content type="html"><![CDATA[<p>Source code: <a href="https://github.com/MacroPower/prometheus_video_renderer">https://github.com/MacroPower/prometheus_video_renderer</a></p>
<p>Read the blog post: <a href="https://grafana.com/blog/2021/07/30/how-to-use-grafana-and-prometheus-to-rickroll-your-friends-or-enemies/">https://grafana.com/blog/2021/07/30/how-to-use-grafana-and-prometheus-to-rickroll-your-friends-or-enemies/</a></p>

<div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
  <iframe src="https://www.youtube.com/embed/aLvh0oId3Go?autoplay=1" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" allowfullscreen title="YouTube Video"></iframe>
</div>

]]></content></item><item><title>HTB Challenge - Reversing - Baby RE</title><link>https://jacobcolvin.com/posts/2020/10/htb-challenge-reversing-baby-re/</link><pubDate>Sat, 24 Oct 2020 00:00:00 +0000</pubDate><guid>https://jacobcolvin.com/posts/2020/10/htb-challenge-reversing-baby-re/</guid><description>We can start off by running ltrace, which runs a command and intercepts dynamic library &amp;amp; sys calls.
ltrace -i -C ./baby We&amp;rsquo;re prompted and can start off by inserting a random value, e.g. asd.
A call to strcmp is intercepted by ltrace.
strcmp(&amp;#34;asd\n&amp;#34;, &amp;#34;ab[REDACTED]13\n&amp;#34;) Let&amp;rsquo;s start by trying to pass ab[REDACTED]13.
In this case it was just that simple, and we receive the flag.
HTB{B[REDACTED]Z} There were probably many other ways to solve this one, and I think I got lucky by starting in this direction.</description><content type="html"><![CDATA[<p>We can start off by running <a href="https://man7.org/linux/man-pages/man1/ltrace.1.html">ltrace</a>, which runs a command and intercepts dynamic library &amp; system calls.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>ltrace -i -C ./baby
</span></span></code></pre></div><p>We&rsquo;re prompted for input, and we can start by entering a random value, e.g. <code>asd</code>.</p>
<p>A call to <code>strcmp</code> is intercepted by <code>ltrace</code>.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-c++" data-lang="c++"><span style="display:flex;"><span>strcmp(<span style="color:#e6db74">&#34;asd</span><span style="color:#ae81ff">\n</span><span style="color:#e6db74">&#34;</span>, <span style="color:#e6db74">&#34;ab[REDACTED]13</span><span style="color:#ae81ff">\n</span><span style="color:#e6db74">&#34;</span>)
</span></span></code></pre></div><p>Let&rsquo;s try passing <code>ab[REDACTED]13</code> as our input.</p>
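<p>For intuition, the binary&rsquo;s check is most likely just a direct comparison of our raw input line (newline included, since <code>fgets</code>-style reads keep the trailing <code>\n</code>) against a hard-coded key, which is exactly why the plaintext key leaks through <code>ltrace</code>: <code>strcmp</code> lives in libc, a dynamically linked library, so the call and both of its arguments are visible to the tracer. Here is a minimal, hypothetical C sketch of that check; the <code>SECRET</code> value is a placeholder matching the redacted <code>ltrace</code> output, not the real key:</p>

```c
#include <string.h>

/* Hypothetical reconstruction of the binary's key check. The real key is
 * redacted; "ab[REDACTED]13\n" is a stand-in matching the ltrace output.
 * Note the trailing '\n': the binary compares the raw line, newline and
 * all, so a stripped input would not match. */
static const char SECRET[] = "ab[REDACTED]13\n";

/* Returns 1 if the supplied line matches the hard-coded key. Because
 * strcmp resolves to libc at runtime, ltrace can intercept this exact
 * call and print both arguments in plaintext. */
int check_key(const char *line) {
    return strcmp(line, SECRET) == 0;
}
```

<p>A check like this is defeated trivially because the secret is stored, and compared, in plaintext; anything that observes libc calls (ltrace, an <code>LD_PRELOAD</code> shim, or a debugger breakpoint on <code>strcmp</code>) recovers it in one run.</p>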
<p>In this case it was just that simple, and we receive the flag.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-text" data-lang="text"><span style="display:flex;"><span>HTB{B[REDACTED]Z}
</span></span></code></pre></div><p>There were probably many other ways to solve this one, and I think I got lucky by starting in this direction.</p>
]]></content></item></channel></rss>