{
    "version": "https://jsonfeed.org/version/1.1",
    "title": "Samuel Forestier",
    "home_page_url": "https://samuel.forestier.app",
    "feed_url": "https://samuel.forestier.app/feed.json",
    "author": {
        "name": "Samuel FORESTIER",
        "url": "https://samuel.forestier.app",
        "avatar": "https://samuel.forestier.app/img/portrait.jpg"
    },
    "authors": [
        {
            "name": "Samuel FORESTIER",
            "url": "https://samuel.forestier.app",
            "avatar": "https://samuel.forestier.app/img/portrait.jpg"
        }
    ],
    "icon": "https://samuel.forestier.app/img/portrait.jpg",
    "favicon": "https://samuel.forestier.app/img/favicon/favicon.ico",
    "language": "en",
    "items": [{
            "title": "How to link a device to an Insular-ed Garmin Connect on Android",
            "date_published": "2026-01-15T19:31:00+01:00",
            "date_modified": "2026-01-15T19:31:00+01:00",
            "id": "https://samuel.forestier.app/blog/tutorials/how-to-link-a-device-to-an-insular-ed-garmin-connect-on-android",
            "url": "https://samuel.forestier.app/blog/tutorials/how-to-link-a-device-to-an-insular-ed-garmin-connect-on-android",
            "image": "https://samuel.forestier.app/img/blog/how-to-link-a-device-to-an-insular-ed-garmin-connect-on-android_1.png",
            "author": {
                "name": "Samuel FORESTIER"
            },
            "content_html": "<p><a href=\"/img/blog/how-to-link-a-device-to-an-insular-ed-garmin-connect-on-android_1.png\"><img src=\"/img/blog/how-to-link-a-device-to-an-insular-ed-garmin-connect-on-android_1.png\" alt=\"A missing blog post image\" /></a></p>\n\n<h3 id=\"introduction\">Introduction</h3>\n\n<p>Unlike most of Android users (but like some members the infosec community), I use <a href=\"https://gitlab.com/secure-system/Insular\">Insular</a> to isolate “untrustworthy” (your definition of “trust” may vary here) Android applications.</p>\n\n<p>Insular is an alternative to <a href=\"https://github.com/PeterCxy/Shelter\">Shelter</a> and a fork of <a href=\"https://github.com/oasisfeng/island\">Island</a>, which <em>weaponizes</em> the Android “Work profile” (that relies on Linux namespaces under the hood) to offer two independent Android execution contexts on the same device.<br />\n“Work profile” is ordinarily used by enterprises or organizations to dedicate a second SIM to a separate context or to even allow a MDM (Mobile Device Management) solution to be installed and granted root-like (Administrator) privileges to it, without accessing device owner’s “personal” data (e.g. 
messages, contacts, calendar and so on, present in the other context).</p>\n\n<p>This is a very nice approach to “defense in depth” for Android systems, as it allows one not to trust Android permissions as the only security mechanism preventing an application from accessing your personal data (a mechanism which has already been <a href=\"https://www.sstic.org/2023/presentation/leveraging_android_permissions\">trivially bypassed</a> in the past).</p>\n\n<p>Technically, Insular manages two contexts called “Mainland” and “Island”, respectively used as the “personal” (trustworthy) and “work” (untrustworthy) contexts.</p>\n\n<p>Insular is of course <a href=\"https://f-droid.org/packages/com.oasisfeng.island.fdroid/\">available on F-Droid</a>.</p>\n\n<h3 id=\"typical-setup\">Typical setup</h3>\n\n<p>A typical setup is to install F-Droid in the Personal/Mainland context, clone it to Work/Island, and then use this clone to install a third-party application store (e.g. Aurora Store) to install “untrusted” applications. The third-party store, as well as all the applications installed with it, will only be able to access data in the Work/Island context (which amounts to more or less nothing).</p>\n\n<p><a href=\"/img/blog/how-to-link-a-device-to-an-insular-ed-garmin-connect-on-android_2.png\"><img src=\"/img/blog/how-to-link-a-device-to-an-insular-ed-garmin-connect-on-android_2.png\" alt=\"A missing blog post image\" /></a></p>\n\n<p>For day-to-day usability, Android supports transferring media from/to the Work/Island context, modulo additional confirmation dialogs (and thus, requiring <strong>user approval</strong>).</p>\n\n<h3 id=\"here-comes-the-garmin-issue\">Here comes the Garmin issue</h3>\n\n<p>So let’s say you want to link a Garmin device to your Android phone through the official <a href=\"https://play.google.com/store/apps/details?id=com.garmin.android.apps.connectmobile\">Garmin Connect application</a>.
One would follow the instructions, grant the <code class=\"language-plaintext highlighter-rouge\">Nearby devices</code> Android permission and run the easy setup process, and that’s it.</p>\n\n<p>But if Garmin Connect is run from the Work/Island context, it looks like a bug prevents the (custom ?) Bluetooth pairing flow from finding and connecting to your device. The setup process actually starts, and then… nothing. Manually opening Bluetooth settings and trying to find the device didn’t work either.</p>\n\n<p>It looked like something was silently failing (pretty frustrating when you work in IT and lack an explicit error) :</p>\n\n<p><a href=\"/img/blog/how-to-link-a-device-to-an-insular-ed-garmin-connect-on-android_3.png\"><img src=\"/img/blog/how-to-link-a-device-to-an-insular-ed-garmin-connect-on-android_3.png\" alt=\"A missing blog post image\" /></a></p>\n\n<p>The only similar issue I’ve found so far is <a href=\"https://forums.garmin.com/outdoor-recreation/outdoor-recreation/f/fenix-6-series/241569/am-having-trouble-pairing-fenix-6x---pro-solar-edition-i-have-tried-every-possible-combinations-trying-garmin-connect-in-ios-and-android-device-and-still-failing\">this one</a>, where the user actually had a locale divergence between their Garmin account and Android settings from the application point of view.
The Work profile is also known to prevent access to (or fake) some system information, so it’s not unlikely that something related is at play here.</p>\n\n<h3 id=\"the-dirty-workaround\">The (dirty) workaround</h3>\n\n<p>Anyway, Insular allows applications to be cloned from the Personal/Mainland to the Work/Island context (as we’ve seen with F-Droid for instance), but the other way around is also possible !</p>\n\n<p><a href=\"/img/blog/how-to-link-a-device-to-an-insular-ed-garmin-connect-on-android_4.png\"><img src=\"/img/blog/how-to-link-a-device-to-an-insular-ed-garmin-connect-on-android_4.png\" alt=\"A missing blog post image\" /></a></p>\n\n<p>So I’ve cloned Garmin Connect <strong>from Island to Mainland</strong>, opened it, logged in with my account, and then started the linking process as before. Bluetooth system dialogs popped up, and everything went smoothly.</p>\n\n<p><a href=\"/img/blog/how-to-link-a-device-to-an-insular-ed-garmin-connect-on-android_5.png\"><img src=\"/img/blog/how-to-link-a-device-to-an-insular-ed-garmin-connect-on-android_5.png\" alt=\"A missing blog post image\" /></a></p>\n\n<p>Now that the device is known to Android, and as Bluetooth devices are actually shared across contexts (because they are part of the system), I only had to uninstall the cloned Garmin Connect from Mainland, and start using the original one installed in Island, which can now communicate with the device !</p>\n\n<h3 id=\"conclusion\">Conclusion</h3>\n\n<p>It wasn’t very clean, but by solely and temporarily granting access to <code class=\"language-plaintext highlighter-rouge\">Nearby devices</code> in the Personal/Mainland context, it sufficed to go through the Bluetooth pairing process, before uninstalling the proprietary piece of software like nothing happened.</p>\n",
            "summary": "Pairing a device through an application running in Android Work profile may be a PITA",
            "tags": ["Tutorials"]
        },{
            "title": "How to switch to Pico TTS for Orca on Debian GNOME",
            "date_published": "2025-12-21T19:15:00+01:00",
            "date_modified": "2025-12-21T19:15:00+01:00",
            "id": "https://samuel.forestier.app/blog/tutorials/how-to-switch-to-pico-tts-for-orca-on-debian-gnome",
            "url": "https://samuel.forestier.app/blog/tutorials/how-to-switch-to-pico-tts-for-orca-on-debian-gnome",
            "image": "https://samuel.forestier.app/img/blog/how-to-switch-to-pico-tts-for-orca-on-debian-gnome_1.jpg",
            "author": {
                "name": "Samuel FORESTIER"
            },
            "content_html": "<p><a href=\"/img/blog/how-to-switch-to-pico-tts-for-orca-on-debian-gnome_1.jpg\"><img src=\"/img/blog/how-to-switch-to-pico-tts-for-orca-on-debian-gnome_1.jpg\" alt=\"A missing blog post image\" /></a></p>\n\n<h3 id=\"introduction\">Introduction</h3>\n\n<p>Microsoft wants us to throw away working computers for <a href=\"https://techcommunity.microsoft.com/blog/windows-itpro-blog/tpm-2-0-%E2%80%93-a-necessity-for-a-secure-and-future-proof-windows-11/4339066\">(opinionated) security benefits</a>. Most of people only want to browse the Web, read their e-mails, but more importantly, leverage the last computer they bought while it’s still working : it’s good for the planet and their expenses. Software shouldn’t be the limitation here.<br />\nTheir risk assessment (which actually doesn’t exist, because they have only few assets) couldn’t care less about whether or not their motherboard or CPU offers a TPM 2.0. They only want a secure operating system to be protected from trivial malware, a secure Web browser to execute megabytes of third-party JavaScript, and a secure e-mail client to handle all the spams they receive.</p>\n\n<p>So here is yet another good time for most of humans to part away from <a href=\"https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguish\">EEE</a> USA-giants, and opt for open (and free) alternatives (as GNU/Linux).</p>\n\n<h3 id=\"gnome-orca-default-voice\">GNOME Orca default voice</h3>\n\n<p>Among all humans, some are part of my family (no kidding !), for which I usually play the “IT technician” role :nerd_face:</p>\n\n<p>My grand-father is highly visually impaired, as <a href=\"https://geek-mexicain-archive.pages.dev/android-malvoyance-comment-vraiment-adapter-son-mobile\">I previously wrote about when documenting how he used Android</a>, and has been really keen to try what screen reading feature Linux could offer, following Windows replacement.<br />\nWindows… an OS he couldn’t even use for the last decade 
due to successive UX breaking changes (XP, 7, 10) he couldn’t adapt to, coupled with poor accessibility tools.</p>\n\n<p>The <a href=\"https://orca.gnome.org/\">Orca</a> screen reader is integrated into the GNOME desktop environment by default, and uses <a href=\"https://github.com/brailcom/speechd\">speech-dispatcher</a> (speechd) as an interface between screen readers and TTS (text-to-speech) engines for audio transcription.</p>\n\n<p>On a default Debian 13 (Trixie) install, the screen reading software flow looks like this :</p>\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>Screen text content --&gt; Orca --&gt; speechd --&gt; eSpeak NG --&gt; Audio output\n</code></pre></div></div>\n\n<p>Unfortunately, it turns out the default (and open-source) TTS engine <a href=\"https://github.com/espeak-ng/espeak-ng\">eSpeak NG</a> output voices were absolutely horrendous, and we couldn’t make out what they actually said. So we decided to give the SVOX Pico TTS engine a try, as it is (proprietary but) publicly appreciated by the French community.</p>\n\n<p>As we couldn’t find any up-to-date, working documentation, <em>here</em> it is !</p>\n\n<h3 id=\"switching-to-pico-tts-engine\">Switching to Pico TTS engine</h3>\n\n<p>First you need to add the <code class=\"language-plaintext highlighter-rouge\">contrib</code> and <code class=\"language-plaintext highlighter-rouge\">non-free</code> components to your apt sources list, for instance :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-config\" data-lang=\"config\"><span class=\"n\">deb</span> <span class=\"n\">http</span>://<span class=\"n\">deb</span>.<span class=\"n\">debian</span>.<span class=\"n\">org</span>/<span class=\"n\">debian</span> <span class=\"n\">trixie</span> <span class=\"n\">main</span> <span class=\"n\">contrib</span> <span class=\"n\">non</span>-<span class=\"n\">free</span>\n<span class=\"n\">deb</span> <span
class=\"n\">http</span>://<span class=\"n\">deb</span>.<span class=\"n\">debian</span>.<span class=\"n\">org</span>/<span class=\"n\">debian</span>-<span class=\"n\">security</span> <span class=\"n\">trixie</span>-<span class=\"n\">security</span> <span class=\"n\">main</span> <span class=\"n\">contrib</span> <span class=\"n\">non</span>-<span class=\"n\">free</span>\n<span class=\"n\">deb</span> <span class=\"n\">http</span>://<span class=\"n\">deb</span>.<span class=\"n\">debian</span>.<span class=\"n\">org</span>/<span class=\"n\">debian</span> <span class=\"n\">trixie</span>-<span class=\"n\">updates</span> <span class=\"n\">main</span> <span class=\"n\">contrib</span> <span class=\"n\">non</span>-<span class=\"n\">free</span></code></pre></figure>\n\n<p>Then you have to install Pico utilities as well as speech-dispatcher Pico module :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">apt-get update\napt-get <span class=\"nb\">install</span> <span class=\"nt\">-y</span> libttspico-utils speech-dispatcher-pico</code></pre></figure>\n\n<p>Eventually, we edited <code class=\"language-plaintext highlighter-rouge\">/etc/speech-dispatcher/speechd.conf</code> to enable Pico TTS by default :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-config\" data-lang=\"config\"><span class=\"c\"># ...\n</span><span class=\"n\">DefaultModule</span> <span class=\"n\">pico</span>\n<span class=\"c\"># ...\n</span><span class=\"n\">AddModule</span> <span class=\"s2\">\"pico\"</span> <span class=\"s2\">\"sd_pico\"</span> <span class=\"s2\">\"pico.conf\"</span>\n<span class=\"c\"># ...\n# Note : you can replace below \"fr\" by \"de\", \"en\", \"es\" or even \"it\" for Italian !\n</span><span class=\"n\">LanguageDefaultModule</span> <span class=\"s2\">\"fr\"</span> <span class=\"s2\">\"pico\"</span></code></pre></figure>\n\n<p>Now you <em>only</em> have to restart GNOME to make Orca properly use new speech-dispatcher configuration 
(restarting <code class=\"language-plaintext highlighter-rouge\">speech-dispatcherd.service</code> didn’t seem to be sufficient).</p>\n\n<h3 id=\"conclusion\">Conclusion</h3>\n\n<p>SVOX Pico only offers a single (female) voice, but it is clearer and, more importantly… understandable.</p>\n\n<p>And don’t forget : an up-to-date and <a href=\"http://arstechnica.com/security/2025/12/microsoft-will-finally-kill-obsolete-cipher-that-has-wreaked-decades-of-havoc/\">secure</a> Windows is a Windows you’ve finally got rid of ! :wastebasket:</p>\n",
            "summary": "Improving Orca output voice on Debian GNOME wasn't quite straightforward",
            "tags": ["Tutorials"]
        },{
            "title": "Welcome to dystopIA",
            "date_published": "2025-08-09T17:32:00+02:00",
            "date_modified": "2025-08-23T22:33:00+02:00",
            "id": "https://samuel.forestier.app/blog/articles/welcome-to-dystopia",
            "url": "https://samuel.forestier.app/blog/articles/welcome-to-dystopia",
            "image": "https://samuel.forestier.app/img/blog/welcome-to-dystopia_1.jpg",
            "author": {
                "name": "Samuel FORESTIER"
            },
            "content_html": "<p><a href=\"https://screenrant.com/terminator-2-movie-miles-dyson-skynet-most-sympathetic-character/\"><img src=\"/img/blog/welcome-to-dystopia_1.jpg\" alt=\"A missing blog post image\" /></a></p>\n\n<h3 id=\"introduction-or-what-has-convinced-me-to-write-on-the-matter\">Introduction (or “What has convinced me to write on the matter”)</h3>\n\n<p>Somewhere in France, two months ago : a ten year old boy plays with his mother’s (Android) smartphone and ends up on Gemini which reads something like this :</p>\n\n<blockquote>\n  <p>“Hello $NAME, here is what you can do near $LOCATION […]”<br />\n- How do you know we are in $LOCATION ?<br />\n- [Gemini’s irrelevant and pointless redacted answer]<br />\n- NO !!! :angry: HOW. DO. YOU. KNOW. WE. ARE. IN. $LOCATION ???</p>\n</blockquote>\n\n<p>Welcome to dystopIA. Youngsters simple (thus naive) approach of life against huge tech giants steamrollers…</p>\n\n<h3 id=\"politics\">Politics</h3>\n\n<p>As a previous Intel CEO (Andy Grove) said in early 2000s :</p>\n\n<blockquote>\n  <p>“High tech runs three-times faster than normal businesses. And the government runs three-times slower than normal businesses. 
So we have a nine-times gap… And so what you want to do is you want to make sure that the government does not get in the way and slow things down.”</p>\n</blockquote>\n\n<p>… we cannot expect and wait for slowly-evolving legislation to enforce regulation for the sake of citizens (and eventually humanity).</p>\n\n<p>My only hope (for the top-down approach) is that even on the right side of the economic-political spectrum, liberals begin to understand that this new “technological revolution” won’t <em>just</em> cause massive layoffs and hence unemployment, but also totally destroy the “labour value” so important to them.</p>\n\n<p>How to “emancipate through work” (if you happen to believe in it) if one is enslaved to a black box prompt, copying/pasting someone else’s work that has been mixed up with others’ ?</p>\n\n<p>How to “benefit from the social elevator” if most of your studies consist in giving course slides and exercises to a machine only good at respectively summing them up and solving them for you ?</p>\n\n<p>How to “innovate” (to save life on Earth, as techno-solutionists evangelize) if your only tool is a huge regression equation, which is only good at dwelling on what <strong>has already been</strong> discovered ?</p>\n\n<p>You know the situation is completely f*cked up when the ones <a href=\"https://x.com/sama/status/1953893841381273969\">actually driving on the highway</a> are simultaneously <a href=\"https://www.bbc.com/news/world-us-canada-65616866\">calling for speed regulation</a> (for their public image) as well as being <a href=\"https://www.indiatoday.in/technology/news/story/sam-altmans-says-gpt-5-ai-is-so-smart-it-made-him-feel-useless-ahead-of-imminent-launch-2763081-2025-07-29\">fearful about their own behavior</a>.</p>\n\n<h3 id=\"energy-or-houston-we-have-a-problem\">Energy (or “Houston, we have a problem”)</h3>\n\n<h4 id=\"infrastructure\">Infrastructure</h4>\n\n<p>We have piled up millions of lines of code, terabytes of binaries, and
with this new paradigm we keep going : we need <a href=\"https://www.theregister.com/2025/07/09/anubis_fighting_the_llm_hordes/\">defenses against crawling bots</a> to keep our websites up, some even <a href=\"https://zadzmo.org/code/nepenthes/\">feed those bots trash</a> to “slow” them down by wasting computational resources, and we build <a href=\"https://www.schneier.com/blog/archives/2025/04/applying-security-engineering-to-prompt-injection-security.html\">concurrent models</a> to “fix” inherent security issues… The cat-and-mouse game continues, but the planet loses.</p>\n\n<p>As network backbones had to scale up to deal with modern usages (e.g. a whole city streaming different 4K VoDs at the same hour every day), our power grids and energy mixes now have to go down the same path. And don’t fool yourself : it’s our usages that shape infrastructures, not the other way around. Companies and engineers won’t spend money (or time) on unworthy projects. I keenly invite you to read about <a href=\"https://en.wikipedia.org/wiki/Path_dependence\">“path dependence”</a>, and <a href=\"https://www.terrestres.org/2025/06/28/chatgpt-cest-juste-un-outil/#:~:text=La%20d%C3%A9pendance%20au%20sentier\">here (French)</a> how it applies to AI.</p>\n\n<p>We have reached a point where a datacenter built in a neighborhood<a href=\"#footnotes\">¹</a> almost <a href=\"https://www.lechiffon.fr/la-courneuve-plongee-dans-le-monde-tres-materiel-du-plus-grand-data-center-de-france/\">consumes as much as the residents do</a>.
Imagine.</p>\n\n<p>In the end, the situation is so surreal that we’re encouraged to <del>pee in the shower</del> <a href=\"https://futurism.com/altman-please-thanks-chatgpt\">skip greetings</a>…</p>\n\n<h4 id=\"brain\">Brain</h4>\n\n<p>I’d like to mention Luc Julia and more particularly his take on <a href=\"https://www.youtube.com/watch?v=yuDBSbng_8o\">why AI doesn’t exist</a>.<br />\nThe key takeaway is : our brain consumes pretty much nothing compared to what neural networks require, and it usually does better.</p>\n\n<p>So you will tell me human beings cannot possibly work 24/7, need medical insurance and (what a shame !) happen to fight for their rights. Indeed, we aren’t machines.</p>\n\n<p>But let’s be honest, a “performant AI” is a “well-trained AI”. And <em>training</em>, at the end of the day, comes from humans !<br />\n<a href=\"https://pulitzercenter.org/stories/madagascar-training-ground-ai-french\">What actually happens</a> is : we’re paying some people on the other side of the planet a few cents per hour (for their low brain energy consumption) to “train” (very energy-intensive) machines on ours, so as not to pay (salaried) human beings decently. More on the “hidden (social) conditions” behind AI <a href=\"https://www.terrestres.org/2025/06/28/chatgpt-cest-juste-un-outil/#:~:text=L%E2%80%99occultation%20des%20conditions%20de%20possibilit%C3%A9%20de%20l%E2%80%99IAg\">here (French)</a>.</p>\n\n<p>Earth loses. Humanity loses. Tech parties “win”.</p>\n\n<h3 id=\"economies\">Economies</h3>\n\n<p>It has been some years since we last experienced the explosion of a speculative bubble… We had the Internet around 2000, real estate now beginning to collapse in China, cryptocurrencies that happen to more or less follow “real-economy” trends between two insider trading infringements, …<br />\nInvestors needed a new horse to bet their extra money on, and AI appeared.
Once they understand that “garbage in” usually and eventually gives “garbage out”, be ready and brace for impact…</p>\n\n<p>The same, rather old patterns apply : competition, predation, economic headlong rush (not to spoil the property owners) : our Western deindustrialized (and thus massively unemployed) societies have come to an end (and are slowly declining).<br />\nOwners and renters can still put some money on the table, wait for the donuts to grow (because others follow), and pocket the difference (often positive for them).</p>\n\n<p>Trivially, we could say that “owners” have an interest in such a “technological revolution” happening, in order to maximize the expansion of what they already have. On the other hand, “workers” will be compelled to embrace yet another enslaving companion (for “productivity”).</p>\n\n<h3 id=\"relationship-or-what-were-actually-dealing-with\">Relationship (or “What we’re actually dealing with”)</h3>\n\n<p>Would you value one’s speech if you knew the next word of each of their sentences was computed using probability ? But you do if you use generative AI.<br />\nWould you seriously value a discussion with a delusional/mythomaniac friend of yours ? But you do if you use generative AI.<br />\nWould you share some personal/confidential data with someone publicly known to use it for their own benefit (and who may make it available to the rest of the world afterwards) ? But you do if you use (a public) generative AI.</p>\n\n<p>Do you know <a href=\"https://www.lesswrong.com/w/crockers-rules\">Crocker’s Rules</a> ?
If not, you really should.<br />\nWhat would be the point of feeding bullet points to an LLM<a href=\"#footnotes\">²</a> for it to generate a well-written e-mail, if on the other end, your recipient also uses an LLM to sum your block of text back down to… bullet points ?<br />\nSo you’d have expanded-then-compressed some data, which at best is useless, and at worst loses the essence of your key points in the process.</p>\n\n<p>Here’s a situation : you have a problem to solve, and a quick query using your favorite Web search engine and relevant keywords gave no results. Don’t bother asking a generative AI : it will hallucinate and you will just lose time (and consume more energy for nothing).</p>\n\n<h3 id=\"philosophy\">Philosophy</h3>\n\n<p>A neural network acts more or less like a linear regression. It will try to minimize what we mathematically call a <a href=\"https://en.wikipedia.org/wiki/Loss_function\">cost function</a>. So it’ll be a “performant” machine, for a given set of tasks. But it won’t be able to <em>create</em> new things.</p>\n\n<p>If it’s the end of innovation, it’ll be the end of Societies (at least in the forms in which we know them today).</p>\n\n<p>No more artists. Thus, all artists ? <del>Creating</del> Generating content relying on NFTs open to speculation, on a daily basis ?</p>\n\n<p>A choice needs to be made : do we want to keep a “social usefulness” ? Or at least, the <em>possibility</em> to have one ?</p>\n\n<p>Is a machine that only produces syntactically-correct answers sufficient ? Try to e-mail a well-written but factually wrong text to your boss and you’ll eventually see it isn’t…</p>\n\n<p>Is “Chat” a GPT (Global Persistent Threat) to humanity ?
Yes, because, paraphrasing and translating Emmanuel Dockès<a href=\"#footnotes\">³</a> :</p>\n\n<blockquote>\n  <p>“If you only free those who have the strength to refuse thralldom, you only free the strong ones”.</p>\n</blockquote>\n\n<p>If you think users should be blamed for blindly following definitely wrong instructions, try to analyze who operates the service, and why.</p>\n\n<h3 id=\"tool-dependency\">“Tool” dependency</h3>\n\n<p>What will you do when <a href=\"https://www.reddit.com/r/LocalLLaMA/comments/1kg9mjs/how_long_before_we_start_seeing_ads_intentionally/\">advertisements are directly included in prompt responses</a> ?<br />\nWill you pay for the service your work (life ?) depends on ? As you already paid for your music, VoD, games, Internet access and other entertainment-thus-optional services, <a href=\"https://www.theguardian.com/environment/2025/apr/10/farmers-mental-health-crisis-trump\">while the people feeding you daily kill themselves</a> ?</p>\n\n<p>Does a plumber use free tools ? Would a plumber even rent them ?<br />\nWe shouldn’t start relying on tools we do not master, trust, or that we cannot replace (I’m looking at you, Office 365 :eyes:).</p>\n\n<p>Giant tech companies offering “smart” assistants for free are currently performing a massive “technological dumping” that is never discussed.<br />\nAs it started before with “clouds” and e-mail providers, we are giving away all of our personal data to them, and they will eventually be able to make us pay for their services.</p>\n\n<h3 id=\"you-dont-care-about-privacy--but-the-ones-you-care-about-may-\">You don’t care about privacy ?
But the ones you care about may !</h3>\n\n<p>I’d like to remind you of a previous popular take on e-mails : <a href=\"https://mako.cc/copyrighteous/google-has-most-of-my-email-because-it-has-all-of-yours\">Google Has Most of My Email Because It Has All of Yours</a></p>\n\n<p>Unless you’re called Robinson and live on an island, your (in)actions have an impact on society (particularly on your close circles). If you let predatory companies enter your life, they also enter the lives of the ones you meet, spend time and discuss with, make love to, …\nIn the end, our ideas, our voices, our emotions, even our intentions go to them. And that’s scary.</p>\n\n<p>Don’t forget that everything you type on a platform that belongs to a third party, 1) doesn’t belong to you (those were the terms of service you didn’t read when signing up) and 2) will, sooner or later, <a href=\"https://www.404media.co/more-than-130-000-claude-grok-chatgpt-and-other-llm-chats-readable-on-archive-org/\">end up</a> being publicly <a href=\"https://thehackernews.com/2025/01/deepseek-ai-database-exposed-over-1.html\">available on the Internet</a>.</p>\n\n<h3 id=\"conclusion\">Conclusion</h3>\n\n<p>We were arguing whether <a href=\"https://history.stackexchange.com/questions/5597/is-history-always-written-by-the-victors\">history is written by the victors</a>, and maybe we’ll soon debate whether <a href=\"https://www.lemonde.fr/pixels/article/2025/05/28/le-gouvernement-retire-une-video-generee-par-ia-sur-la-resistance-a-la-suite-d-une-erreur-historique_6609012_4408996.html\">past facts existed at all (French)</a>.</p>\n\n<p>I don’t want a world where the only particularity of human beings would be their <a href=\"https://carta.anthropogeny.org/moca/topics/thumb-opposability\">thumb opposability</a>, which will serve machines (somewhat already close to warehouse worker conditions, for you to receive your packages on short notice…).</p>\n\n<p>Nor do I want a world looking like something
between <a href=\"https://www.imdb.com/title/tt0387808/\">Idiocracy (2006)</a> and <a href=\"https://www.imdb.com/title/tt11286314/\">Don’t look up (2021)</a> (<a href=\"https://www.cbc.ca/news/world/ai-lawsuit-teen-suicide-1.7540986\">already crossed</a> with <a href=\"https://www.imdb.com/title/tt1798709\">Her (2013)</a>), where <a href=\"https://www.microsoft.com/en-us/research/publication/the-impact-of-generative-ai-on-critical-thinking-self-reported-reductions-in-cognitive-effort-and-confidence-effects-from-a-survey-of-knowledge-workers/\">average IQ went drastically down</a> due to massive assistance, or billionaires deciding how we should live (or die). That is, if it doesn’t end up as in <a href=\"https://www.imdb.com/title/tt0181852/\">Terminator 3 (2003)</a>, once we’ve connected the first AI to military infrastructure to fight… itself.</p>\n\n<p>Will we be able to live without a personal assistant ? <em>Optarim verius, quam sperarim</em>. In the meantime, this post has been written without any.</p>\n\n<p>#StepAIside, #WalkAwAI, #OptForBrAIn</p>\n\n<hr />\n\n<h3 id=\"footnotes\">Footnotes</h3>\n\n<p><a href=\"#infrastructure\">1</a> : La Courneuve, France<br />\n<a href=\"#relationship-or-what-were-actually-dealing-with\">2</a> : Large Language Model<br />\n<a href=\"#philosophy\">3</a> : “Si vous ne libérez que ceux qui ont la force de refuser la servitude, vous ne libérez que les forts”. p. 302, Emmanuel Dockès (2017), <em>Voyage en misarchie</em>. Éditions du Détour</p>\n",
            "summary": "AI-backed personal assistants are turning us children again and killing our youngsters own childhood, we need to act now",
            "tags": ["Articles"]
        },{
            "title": "How to migrate Magnetico SQLite database dump to PostgreSQL",
            "date_published": "2025-01-03T21:26:00+01:00",
            "date_modified": "2026-01-07T21:55:00+01:00",
            "id": "https://samuel.forestier.app/blog/tutorials/how-to-migrate-magnetico-sqlite-database-dump-to-postgresql",
            "url": "https://samuel.forestier.app/blog/tutorials/how-to-migrate-magnetico-sqlite-database-dump-to-postgresql",
            "image": "https://samuel.forestier.app/img/blog/how-to-migrate-magnetico-sqlite-database-dump-to-postgresql_1.png",
            "author": {
                "name": "Samuel FORESTIER"
            },
            "content_html": "<p><a href=\"/img/blog/how-to-migrate-magnetico-sqlite-database-dump-to-postgresql_1.png\"><img src=\"/img/blog/how-to-migrate-magnetico-sqlite-database-dump-to-postgresql_1.png\" alt=\"A missing blog post image\" /></a></p>\n\n<h3 id=\"introduction\">Introduction</h3>\n\n<p><a href=\"https://tgragnato.it/magnetico/\">Magnetico</a> is <del>a</del> the self-hosted <a href=\"https://en.wikipedia.org/wiki/Mainline_DHT\">BitTorrent DHT</a> crawler, with built-in searching capabilities.</p>\n\n<p>Usually, to avoid bootstrapping database and crawl Internet <a href=\"https://en.wikipedia.org/wiki/From_Zero\">from zero</a> on your own, you can download a community dump (like @anindyamaiti’s <a href=\"https://tnt.maiti.info/dhtd/\">here</a>), and let Magnetico start from here.</p>\n\n<p>But these dumps are usually proposed as SQLite databases (see this <a href=\"https://github.com/boramalper/magnetico/issues/218\">thread</a>), because original Magnetico implementation <a href=\"https://github.com/boramalper/magnetico/issues/280\">didn’t support any other source</a>.</p>\n\n<p>Some years ago, there was indeed <a href=\"https://pkg.go.dev/gitlab.com/skobkin/magnetico-go-migrator\">magnetico-go-migrator</a> that had been developed for this very purpose, but <del>it’s written in Go</del> the project doesn’t seem to exist anymore… There is also <a href=\"https://framagit.org/Glandos/magnetico_merge\">magnetico_merge</a>, but I wanted a stable and efficient solution (i.e. 
without a hard-coded SQL schema and not written in Python either).</p>\n\n<p>In this blog post, I’ll explain how I migrated such an SQLite database dump to PostgreSQL, which Magnetico now uses as its regular database, using <a href=\"https://pgloader.io/\">PGLoader</a> (because such a migration <a href=\"https://stackoverflow.com/questions/4581727/how-to-convert-sqlite-sql-dump-file-to-postgresql\">isn’t a straightforward process</a> at all).</p>\n\n<h3 id=\"initial-setup\">Initial setup</h3>\n\n<p>First, we need to install some packages :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">apt <span class=\"nb\">install</span> <span class=\"nt\">-y</span> icu-devtools pgloader sqlite3</code></pre></figure>\n\n<p>I assume the Magnetico service directory tree looks something like this :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-tree\" data-lang=\"tree\">.\n├── data\n│   ├── database.sqlite3\n│   ├── database.sqlite3-shm\n│   └── database.sqlite3-wal\n└── docker-compose.yml</code></pre></figure>\n\n<h3 id=\"pgloader-is-broken-when-it-comes-to-sqlite\">PGLoader is broken (when it comes to SQLite)</h3>\n\n<p>Despite being pretty actively maintained, PGLoader still has an incredible number of <a href=\"https://github.com/dimitri/pgloader/issues?q=sort%3Aupdated-desc+is%3Aissue+is%3Aopen\">open issues</a>. 
One that I’ve been struggling with is <a href=\"https://github.com/dimitri/pgloader/issues/1256\">#1256</a>, which causes PGLoader to fail to correctly “transpile” SQLite <code class=\"language-plaintext highlighter-rouge\">REFERENCES</code> constraints :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-log\" data-lang=\"log\">ERROR PostgreSQL Database error 42601: syntax error at or near \")\"\nQUERY: ALTER TABLE \"files\" ADD FOREIGN KEY() REFERENCES \"torrents\"() ON UPDATE RESTRICT ON DELETE CASCADE</code></pre></figure>\n\n<p>… which, as you can see, comes from Magnetico schema :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">sqlite3 <span class=\"nt\">-readonly</span> data/database.sqlite3 <span class=\"s2\">\".schema files\"</span></code></pre></figure>\n\n<figure class=\"highlight\"><pre><code class=\"language-sql\" data-lang=\"sql\"><span class=\"k\">CREATE</span> <span class=\"k\">TABLE</span> <span class=\"n\">files</span> <span class=\"p\">(</span>\n\t\t\t<span class=\"n\">id</span>          <span class=\"nb\">INTEGER</span> <span class=\"k\">PRIMARY</span> <span class=\"k\">KEY</span><span class=\"p\">,</span>\n<span class=\"hll\">\t\t\t<span class=\"n\">torrent_id</span>  <span class=\"nb\">INTEGER</span> <span class=\"k\">REFERENCES</span> <span class=\"n\">torrents</span> <span class=\"k\">ON</span> <span class=\"k\">DELETE</span> <span class=\"k\">CASCADE</span> <span class=\"k\">ON</span> <span class=\"k\">UPDATE</span> <span class=\"k\">RESTRICT</span><span class=\"p\">,</span>\n</span>\t\t\t<span class=\"k\">size</span>        <span class=\"nb\">INTEGER</span> <span class=\"k\">NOT</span> <span class=\"k\">NULL</span><span class=\"p\">,</span>\n\t\t\t<span class=\"n\">path</span>        <span class=\"nb\">TEXT</span> <span class=\"k\">NOT</span> <span class=\"k\">NULL</span>\n\t\t<span class=\"p\">,</span> <span class=\"n\">is_readme</span> <span class=\"nb\">INTEGER</span> <span 
class=\"k\">CHECK</span> <span class=\"p\">(</span><span class=\"n\">is_readme</span> <span class=\"k\">IS</span> <span class=\"k\">NULL</span> <span class=\"k\">OR</span> <span class=\"n\">is_readme</span><span class=\"o\">=</span><span class=\"mi\">1</span><span class=\"p\">)</span> <span class=\"k\">DEFAULT</span> <span class=\"k\">NULL</span><span class=\"p\">,</span> <span class=\"n\">content</span>   <span class=\"nb\">TEXT</span>    <span class=\"k\">CHECK</span> <span class=\"p\">((</span><span class=\"n\">content</span> <span class=\"k\">IS</span> <span class=\"k\">NULL</span> <span class=\"k\">AND</span> <span class=\"n\">is_readme</span> <span class=\"k\">IS</span> <span class=\"k\">NULL</span><span class=\"p\">)</span> <span class=\"k\">OR</span> <span class=\"p\">(</span><span class=\"n\">content</span> <span class=\"k\">IS</span> <span class=\"k\">NOT</span> <span class=\"k\">NULL</span> <span class=\"k\">AND</span> <span class=\"n\">is_readme</span><span class=\"o\">=</span><span class=\"mi\">1</span><span class=\"p\">))</span> <span class=\"k\">DEFAULT</span> <span class=\"k\">NULL</span><span class=\"p\">);</span>\n<span class=\"k\">CREATE</span> <span class=\"k\">UNIQUE</span> <span class=\"k\">INDEX</span> <span class=\"n\">readme_index</span> <span class=\"k\">ON</span> <span class=\"n\">files</span> <span class=\"p\">(</span><span class=\"n\">torrent_id</span><span class=\"p\">,</span> <span class=\"n\">is_readme</span><span class=\"p\">);</span></code></pre></figure>\n\n<p>So if we wrap this up, a naive approach (or rather “an approach that I’ve tried for you” :upside_down_face:) would be to execute a PGLoader command derived from <a href=\"https://pgloader.readthedocs.io/en/stable/ref/sqlite.html#using-advanced-options-and-a-load-command-file\">upstream documentation</a> :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-sql\" data-lang=\"sql\"><span class=\"k\">LOAD</span> <span class=\"k\">database</span>\n     <span 
class=\"k\">from</span> <span class=\"n\">sqlite</span><span class=\"p\">:</span><span class=\"o\">//</span><span class=\"k\">data</span><span class=\"o\">/</span><span class=\"k\">database</span><span class=\"p\">.</span><span class=\"n\">sqlite3</span>\n     <span class=\"k\">into</span> <span class=\"n\">postgres</span><span class=\"p\">:</span><span class=\"o\">//</span><span class=\"n\">magnetico</span><span class=\"o\">@</span><span class=\"n\">unix</span><span class=\"p\">:</span><span class=\"n\">run</span><span class=\"o\">/</span><span class=\"n\">postgresql</span><span class=\"p\">:</span><span class=\"mi\">5432</span><span class=\"o\">/</span><span class=\"n\">magnetico</span>\n\n<span class=\"k\">WITH</span> <span class=\"n\">include</span> <span class=\"k\">drop</span><span class=\"p\">,</span> <span class=\"k\">create</span> <span class=\"n\">tables</span><span class=\"p\">,</span> <span class=\"k\">create</span> <span class=\"n\">indexes</span><span class=\"p\">,</span> <span class=\"k\">reset</span> <span class=\"n\">sequences</span><span class=\"p\">,</span>\n     <span class=\"n\">quote</span> <span class=\"n\">identifiers</span><span class=\"p\">,</span>\n     <span class=\"k\">on</span> <span class=\"n\">error</span> <span class=\"n\">resume</span> <span class=\"k\">next</span><span class=\"p\">,</span>\n     <span class=\"n\">prefetch</span> <span class=\"k\">rows</span> <span class=\"o\">=</span> <span class=\"mi\">10000</span>\n\n<span class=\"k\">SET</span> <span class=\"n\">work_mem</span> <span class=\"k\">to</span> <span class=\"s1\">'16 MB'</span><span class=\"p\">,</span>\n    <span class=\"n\">maintenance_work_mem</span> <span class=\"k\">to</span> <span class=\"s1\">'512 MB'</span>\n\n<span class=\"k\">AFTER</span> <span class=\"k\">LOAD</span> <span class=\"k\">DO</span>\n    <span class=\"err\">$$</span> <span class=\"k\">CREATE</span> <span class=\"n\">EXTENSION</span> <span class=\"n\">IF</span> <span class=\"k\">NOT</span> <span 
class=\"k\">EXISTS</span> <span class=\"n\">pg_trgm</span><span class=\"p\">;</span> <span class=\"err\">$$</span><span class=\"p\">,</span>\n    <span class=\"err\">$$</span> <span class=\"k\">ALTER</span> <span class=\"k\">TABLE</span> <span class=\"n\">files</span> <span class=\"k\">ADD</span> <span class=\"k\">FOREIGN</span> <span class=\"k\">KEY</span><span class=\"p\">(</span><span class=\"n\">torrent_id</span><span class=\"p\">)</span> <span class=\"k\">REFERENCES</span> <span class=\"n\">torrents</span> <span class=\"k\">ON</span> <span class=\"k\">UPDATE</span> <span class=\"k\">RESTRICT</span> <span class=\"k\">ON</span> <span class=\"k\">DELETE</span> <span class=\"k\">CASCADE</span><span class=\"p\">;</span> <span class=\"err\">$$</span><span class=\"p\">;</span></code></pre></figure>\n\n<p>You should have noticed <code class=\"language-plaintext highlighter-rouge\">prefetch rows</code> <a href=\"https://pgloader.readthedocs.io/en/latest/command.html?highlight=prefetch%20rows#batch-behaviour-options\">option</a> (which defaults to <code class=\"language-plaintext highlighter-rouge\">100000</code>) that I had to lower as, if you don’t have a whole datacenter at your disposal either, it leads to heap memory exhaustion.</p>\n\n<blockquote>\n  <p>Later, I’ve also hit another PGLoader “transpilation” <a href=\"https://github.com/dimitri/pgloader/issues/1547\">issue</a>, this time related to SQLite <code class=\"language-plaintext highlighter-rouge\">PRIMARY KEY</code>… Long story short : this is an SQL nightmare.</p>\n</blockquote>\n\n<p>But anyway, this PGLoader command fails pretty hard and quick, because of…</p>\n\n<h3 id=\"encoding-issues-as-always\">Encoding issues… <a href=\"https://www.depesz.com/2010/03/07/error-invalid-byte-sequence-for-encoding/\">as always</a></h3>\n\n<p>BitTorrent DHT actually contains lots of garbage, including invalid UTF-8 character sequences.\nPostgreSQL enforces <a 
href=\"https://github.com/jackc/pgx/discussions/1554#discussioncomment-5353546\">strict character encoding checks</a>, which prevent us from directly importing <code class=\"language-plaintext highlighter-rouge\">TEXT</code> columns (<code class=\"language-plaintext highlighter-rouge\">torrents.name</code> and <code class=\"language-plaintext highlighter-rouge\">files.path</code>) from SQLite. Moreover, it stops processing the current table when it encounters an error (including an encoding one), and <a href=\"https://github.com/dimitri/pgloader/issues/1250\">PGLoader doesn’t continue with the remaining rows afterward</a>.</p>\n\n<p>So I decided to adapt @Crystalix007’s <a href=\"https://m.ichael.dk/post/converting-sqlite3-to-postgres/\">solution</a> for <em>cleaning</em> the SQLite dump of invalid character sequences, without the major drawback of duplicating it on disk multiple times (which by the way additionally led to database corruption in my case…).</p>\n\n<h3 id=\"the-final-recipe\">The final recipe</h3>\n\n<p>So the idea here is to walk away from PGLoader’s SQLite connector and let Magnetico create a clean database schema. Then we dump the SQLite database tables as CSV, streamed through <code class=\"language-plaintext highlighter-rouge\">uconv</code> (in order to skip invalid UTF-8 character sequences, which spares us some useless gigabytes by the way :wink:) down to the PostgreSQL database thanks to PGLoader :</p>\n\n<p><a href=\"/img/blog/how-to-migrate-magnetico-sqlite-database-dump-to-postgresql_2.png\"><img src=\"/img/blog/how-to-migrate-magnetico-sqlite-database-dump-to-postgresql_2.png\" alt=\"A missing blog post image\" /></a></p>\n\n<p>As CSV is a text format, special care must be taken when importing <code class=\"language-plaintext highlighter-rouge\">torrents.info_hash</code>, which is an SQLite <code class=\"language-plaintext highlighter-rouge\">BLOB</code> column (containing raw bytes). 
For this we leverage PostgreSQL’s <code class=\"language-plaintext highlighter-rouge\">bytea</code> <a href=\"https://www.postgresql.org/docs/current/datatype-binary.html#DATATYPE-BINARY-BYTEA-HEX-FORMAT\">hex format</a> and encode those bytes on-the-fly as hexadecimal.</p>\n\n<h3 id=\"the-final-steps\">The final steps</h3>\n\n<h4 id=\"bootstrap-magnetico\">Bootstrap Magnetico</h4>\n\n<p>Extend Magnetico Compose stack with a PostgreSQL service, as below :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"na\">version</span><span class=\"pi\">:</span> <span class=\"s2\">\"</span><span class=\"s\">2\"</span>\n\n<span class=\"na\">services</span><span class=\"pi\">:</span>\n  <span class=\"na\">magnetico</span><span class=\"pi\">:</span>\n    <span class=\"na\">image</span><span class=\"pi\">:</span> <span class=\"s\">ghcr.io/tgragnato/magnetico:latest</span>\n    <span class=\"na\">restart</span><span class=\"pi\">:</span> <span class=\"s\">always</span>\n    <span class=\"na\">ports</span><span class=\"pi\">:</span>\n      <span class=\"pi\">-</span> <span class=\"s2\">\"</span><span class=\"s\">127.0.0.1:8080:8080\"</span>\n    <span class=\"na\">command</span><span class=\"pi\">:</span>\n      <span class=\"pi\">-</span> <span class=\"s2\">\"</span><span class=\"s\">--database=postgres://magnetico:password@postgres:5432/magnetico?sslmode=disable\"</span>\n      <span class=\"pi\">-</span> <span class=\"s2\">\"</span><span class=\"s\">--max-rps=500\"</span>\n      <span class=\"pi\">-</span> <span class=\"s2\">\"</span><span class=\"s\">--addr=0.0.0.0:8080\"</span>\n    <span class=\"na\">depends_on</span><span class=\"pi\">:</span>\n      <span class=\"pi\">-</span> <span class=\"s\">postgres</span>\n\n  <span class=\"na\">postgres</span><span class=\"pi\">:</span>\n    <span class=\"na\">image</span><span class=\"pi\">:</span> <span class=\"s\">docker.io/postgres:18-alpine</span>\n    <span 
class=\"na\">restart</span><span class=\"pi\">:</span> <span class=\"s\">always</span>\n    <span class=\"na\">shm_size</span><span class=\"pi\">:</span> <span class=\"s\">128mb</span>\n    <span class=\"na\">environment</span><span class=\"pi\">:</span>\n      <span class=\"na\">POSTGRES_USER</span><span class=\"pi\">:</span> <span class=\"s2\">\"</span><span class=\"s\">magnetico\"</span>\n      <span class=\"na\">POSTGRES_PASSWORD</span><span class=\"pi\">:</span> <span class=\"s2\">\"</span><span class=\"s\">password\"</span>\n      <span class=\"na\">POSTGRES_DB</span><span class=\"pi\">:</span> <span class=\"s2\">\"</span><span class=\"s\">magnetico\"</span>\n    <span class=\"na\">volumes</span><span class=\"pi\">:</span>\n      <span class=\"pi\">-</span> <span class=\"s\">./data/postgres:/var/lib/postgresql</span>\n      <span class=\"c1\"># required at first PostgreSQL start, to enable 'pg_trm' extension</span>\n      <span class=\"pi\">-</span> <span class=\"s\">./load_trm.sql:/docker-entrypoint-initdb.d/load_trm.sql:ro</span>\n      <span class=\"c1\"># allow PGLoader to connect to PostgreSQL using UNIX domain socket, for maximum performance</span>\n      <span class=\"pi\">-</span> <span class=\"s\">./run:/var/run</span></code></pre></figure>\n\n<p>… and then run the following commands :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\"><span class=\"nb\">mkdir</span> <span class=\"nt\">-p</span> data/postgres/ run/\n<span class=\"nb\">echo</span> <span class=\"s2\">\"CREATE EXTENSION IF NOT EXISTS pg_trgm;\"</span> <span class=\"o\">&gt;</span> load_trm.sql\n\npodman-compose up <span class=\"nt\">-d</span> <span class=\"o\">&amp;&amp;</span> podman-compose logs <span class=\"nt\">-f</span> magnetico\n<span class=\"c\"># press CTRL+C  when you see \"magnetico is ready to serve on [...]!\" log message !</span>\n\n<span class=\"c\"># stop Magnetico service (only)</span>\npodman-compose stop 
magnetico</code></pre></figure>\n\n<h4 id=\"run-migration\">Run migration</h4>\n\n<p>You’ll have to prepare a PGLoader command file (<code class=\"language-plaintext highlighter-rouge\">load_table.pgl</code>), which imports CSV content read from stdin into PostgreSQL <code class=\"language-plaintext highlighter-rouge\">TARGET_TABLE</code> :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-sql\" data-lang=\"sql\"><span class=\"k\">LOAD</span> <span class=\"n\">CSV</span>\n     <span class=\"k\">FROM</span> <span class=\"k\">stdin</span>\n     <span class=\"k\">INTO</span> <span class=\"n\">postgres</span><span class=\"p\">:</span><span class=\"o\">//</span><span class=\"n\">magnetico</span><span class=\"o\">@</span><span class=\"n\">unix</span><span class=\"p\">:</span><span class=\"n\">run</span><span class=\"o\">/</span><span class=\"n\">postgresql</span><span class=\"p\">:</span><span class=\"mi\">5432</span><span class=\"o\">/</span><span class=\"n\">magnetico</span>\n          <span class=\"n\">TARGET</span> <span class=\"k\">TABLE</span> <span class=\"s1\">'{{ TARGET_TABLE }}'</span>\n\n<span class=\"k\">WITH</span> <span class=\"n\">disable</span> <span class=\"n\">triggers</span><span class=\"p\">,</span>\n     <span class=\"n\">fields</span> <span class=\"n\">optionally</span> <span class=\"n\">enclosed</span> <span class=\"k\">by</span> <span class=\"s1\">'\"'</span><span class=\"p\">,</span>\n     <span class=\"n\">fields</span> <span class=\"n\">escaped</span> <span class=\"k\">by</span> <span class=\"nb\">double</span><span class=\"o\">-</span><span class=\"n\">quote</span><span class=\"p\">,</span>\n     <span class=\"n\">fields</span> <span class=\"n\">terminated</span> <span class=\"k\">by</span> <span class=\"s1\">','</span>\n\n<span class=\"k\">SET</span> <span class=\"n\">work_mem</span> <span class=\"k\">to</span> <span class=\"s1\">'32 MB'</span><span class=\"p\">,</span>\n    <span class=\"n\">maintenance_work_mem</span> <span 
class=\"k\">to</span> <span class=\"s1\">'512 MB'</span>\n\n<span class=\"k\">BEFORE</span> <span class=\"k\">LOAD</span> <span class=\"k\">DO</span>\n    <span class=\"err\">$$</span> <span class=\"k\">TRUNCATE</span> <span class=\"k\">TABLE</span> <span class=\"nv\">\"{{ TARGET_TABLE }}\"</span> <span class=\"k\">CASCADE</span><span class=\"p\">;</span> <span class=\"err\">$$</span><span class=\"p\">,</span>\n    <span class=\"err\">$$</span> <span class=\"k\">UPDATE</span> <span class=\"n\">pg_index</span> <span class=\"k\">SET</span> <span class=\"n\">indisready</span><span class=\"o\">=</span><span class=\"k\">false</span> <span class=\"k\">WHERE</span> <span class=\"n\">indrelid</span> <span class=\"o\">=</span> <span class=\"p\">(</span><span class=\"k\">SELECT</span> <span class=\"n\">oid</span> <span class=\"k\">FROM</span> <span class=\"n\">pg_class</span> <span class=\"k\">WHERE</span> <span class=\"n\">relname</span> <span class=\"o\">=</span> <span class=\"s1\">'{{ TARGET_TABLE }}'</span><span class=\"p\">);</span> <span class=\"err\">$$</span>\n\n<span class=\"k\">AFTER</span> <span class=\"k\">LOAD</span> <span class=\"k\">DO</span>\n    <span class=\"err\">$$</span> <span class=\"k\">UPDATE</span> <span class=\"n\">pg_index</span> <span class=\"k\">SET</span> <span class=\"n\">indisready</span><span class=\"o\">=</span><span class=\"k\">true</span> <span class=\"k\">WHERE</span> <span class=\"n\">indrelid</span> <span class=\"o\">=</span> <span class=\"p\">(</span><span class=\"k\">SELECT</span> <span class=\"n\">oid</span> <span class=\"k\">FROM</span> <span class=\"n\">pg_class</span> <span class=\"k\">WHERE</span> <span class=\"n\">relname</span> <span class=\"o\">=</span> <span class=\"s1\">'{{ TARGET_TABLE }}'</span><span class=\"p\">);</span> <span class=\"err\">$$</span><span class=\"p\">,</span>\n    <span class=\"err\">$$</span> <span class=\"k\">SELECT</span> <span class=\"n\">setval</span><span class=\"p\">(</span><span 
class=\"s1\">'seq_{{ TARGET_TABLE }}_id'</span><span class=\"p\">,</span> <span class=\"p\">(</span><span class=\"k\">SELECT</span> <span class=\"k\">MAX</span><span class=\"p\">(</span><span class=\"n\">id</span><span class=\"p\">)</span> <span class=\"k\">FROM</span> <span class=\"nv\">\"{{ TARGET_TABLE }}\"</span><span class=\"p\">));</span> <span class=\"err\">$$</span><span class=\"p\">;</span></code></pre></figure>\n\n<p>It turns out PGLoader doesn’t specify <code class=\"language-plaintext highlighter-rouge\">CASCADE</code> when truncating the target table, so we must do it ourselves as an import prelude.</p>\n\n<p>For performance purposes, we disable all <code class=\"language-plaintext highlighter-rouge\">TARGET_TABLE</code> indexes during import (following @fle’s <a href=\"https://fle.github.io/temporarily-disable-all-indexes-of-a-postgresql-table.html\">trick</a>) and trigger a whole <a href=\"https://www.postgresql.org/docs/current/sql-reindex.html#:~:text=Recreate%20all%20indexes%20within%20the%20current%20database,%20except%20system%20catalogs\">database re-indexation</a> afterwards (see below).<br />\nThis is a(nother) workaround as PGLoader isn’t able to <code class=\"language-plaintext highlighter-rouge\">drop indexes</code> (before reconstructing them at the end) when some SQL constraints depend on them.</p>\n\n<p>You can then execute this Bash script (<code class=\"language-plaintext highlighter-rouge\">import_db.sh</code>) :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\"><span class=\"c\">#!/usr/bin/env bash</span>\n\n<span class=\"nb\">set</span> <span class=\"nt\">-euo</span> pipefail\n\n<span class=\"c\"># 1. 
Import 'torrents' table as CSV (with hex-encoded 'info_hash' column and without invalid UTF-8 characters)</span>\nsqlite3 <span class=\"nt\">-readonly</span> data/database.sqlite3 <span class=\"nt\">-csv</span> <span class=\"se\">\\</span>\n    <span class=\"s2\">\"SELECT id, '</span><span class=\"se\">\\x</span><span class=\"s2\">'||hex(info_hash), name, total_size, discovered_on, updated_on, n_seeders, n_leechers, modified_on FROM torrents;\"</span> <span class=\"se\">\\</span>\n        | uconv <span class=\"nt\">--callback</span> skip <span class=\"nt\">-t</span> utf8 <span class=\"se\">\\</span>\n        | <span class=\"nv\">TARGET_TABLE</span><span class=\"o\">=</span><span class=\"s2\">\"torrents\"</span> pgloader <span class=\"nt\">-v</span> load_table.pgl\n\n<span class=\"c\"># 2. Import 'files' table as CSV (without invalid UTF-8 characters)</span>\nsqlite3 <span class=\"nt\">-readonly</span> data/database.sqlite3 <span class=\"nt\">-csv</span> <span class=\"se\">\\</span>\n    <span class=\"s2\">\"SELECT * FROM files;\"</span> <span class=\"se\">\\</span>\n        | uconv <span class=\"nt\">--callback</span> skip <span class=\"nt\">-t</span> utf8 <span class=\"se\">\\</span>\n        | <span class=\"nv\">TARGET_TABLE</span><span class=\"o\">=</span><span class=\"s2\">\"files\"</span> pgloader <span class=\"nt\">-v</span> load_table.pgl\n\n<span class=\"c\"># 3. 
Trigger database complete re-indexation</span>\npodman-compose <span class=\"nb\">exec </span>postgres <span class=\"se\">\\</span>\n    psql <span class=\"nt\">-U</span> magnetico <span class=\"nt\">-c</span> <span class=\"s1\">'REINDEX DATABASE magnetico;'</span></code></pre></figure>\n\n<h4 id=\"restart-magnetico\">Restart Magnetico</h4>\n\n<p>Once database import is done, you may simplify your <code class=\"language-plaintext highlighter-rouge\">postgres</code> Compose service definition and restart services :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-diff\" data-lang=\"diff\">   postgres:\n     image: docker.io/postgres:18-alpine\n     restart: always\n     shm_size: 128mb\n     environment:\n       POSTGRES_USER: \"magnetico\"\n       POSTGRES_PASSWORD: \"password\"\n       POSTGRES_DB: \"magnetico\"\n      volumes:\n        - ./data/postgres:/var/lib/postgresql\n<span class=\"gd\">-      - ./load_trm.sql:/docker-entrypoint-initdb.d/load_trm.sql:ro\n-      - ./run:/var/run</span></code></pre></figure>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">podman-compose up <span class=\"nt\">-d</span></code></pre></figure>\n\n<p>If you’re satisfied with the imported dataset you can clean all of our mess (as well as old SQLite database dump) :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\"><span class=\"nb\">rm</span> <span class=\"nt\">-rf</span> <span class=\"se\">\\</span>\n    data/database.sqlite3<span class=\"o\">{</span>,-shm,-wal<span class=\"o\">}</span> <span class=\"se\">\\</span>\n    run/ <span class=\"se\">\\</span>\n    load_table.pgl <span class=\"se\">\\</span>\n    import_db.sh <span class=\"se\">\\</span>\n    /tmp/pgloader\n\napt autoremove <span class=\"nt\">--purge</span> icu-devtools pgloader sqlite3</code></pre></figure>\n\n<h3 id=\"conclusion\">Conclusion</h3>\n\n<p>Migration took around ~12 hours on my setup (~30M torrents with ~950M files), 
but it took me more than a week to completely figure out the process and tidy it up :clown_face:</p>\n\n<p>‘hope it helped and that one day most Magnetico users will eventually <a href=\"https://www.hendrik-erz.de/post/why-you-shouldnt-use-sqlite\">drop SQLite</a>, preferring PostgreSQL (<a href=\"https://www.postgresql.org/docs/current/app-pgdump.html#:~:text=Output%20a%20custom-format%20archive%20suitable%20for%20input%20into%20pg_restore\">custom archive</a>) dumps.<br />\nDon’t forget to share with the world… tracker-free :earth_africa:</p>\n",
            "summary": "Follow the journey of a BitTorrent DHT SQLite database dump",
            "tags": ["Tutorials"]
        },{
            "title": "MTA-STS for ProtonMail custom domain, the \"zero maintenance\" way",
            "date_published": "2024-11-24T20:01:00+01:00",
            "date_modified": "2024-11-24T20:01:00+01:00",
            "id": "https://samuel.forestier.app/blog/security/mta-sts-for-protonmail-custom-domain-the-zero-maintenance-way",
            "url": "https://samuel.forestier.app/blog/security/mta-sts-for-protonmail-custom-domain-the-zero-maintenance-way",
            "image": "https://samuel.forestier.app/img/blog/mta-sts-for-protonmail-custom-domain-the-zero-maintenance-way.png",
            "author": {
                "name": "Samuel FORESTIER"
            },
            "content_html": "<p><a href=\"/img/blog/mta-sts-for-protonmail-custom-domain-the-zero-maintenance-way.png\"><img src=\"/img/blog/mta-sts-for-protonmail-custom-domain-the-zero-maintenance-way.png\" alt=\"A missing blog post image\" /></a></p>\n\n<h2 id=\"introduction\">Introduction</h2>\n\n<p>This blog post isn’t about explaining <a href=\"https://datatracker.ietf.org/doc/html/rfc8461\">MTA-STS</a> (there are already plenty of resources on the Internet to read about this), but more about its integration with <a href=\"https://proton.me/support/custom-domain\">custom domains</a> when e-mail is handled by ProtonMail.</p>\n\n<p>Some years ago, @Wonderfall wrote a very interesting <a href=\"https://wonderfall.dev/mta-sts/\">blog post</a> detailing how ProtonMail’s lack of MTA-STS support could be circumvented using two DNS entries and a statically served Web page.</p>\n\n<p>However, a month and a half ago, someone on Reddit came up with an <a href=\"https://old.reddit.com/r/ProtonMail/comments/y6q6g8/mtasts_for_custom_domains/#lq1d1hs\">elegant solution</a>, presented as “zero maintenance”, leveraging a CloudFlare worker (<a href=\"https://pastebin.com/95fY5KrM\">source</a>, inspired by CloudFlare’s own <a href=\"https://developers.cloudflare.com/email-routing/setup/mta-sts/\">documentation</a>) :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-javascript\" data-lang=\"javascript\"><span class=\"k\">export</span> <span class=\"k\">default</span> <span class=\"p\">{</span>\n    <span class=\"k\">async</span> <span class=\"nf\">fetch</span><span class=\"p\">(</span><span class=\"nx\">request</span><span class=\"p\">,</span> <span class=\"nx\">env</span><span class=\"p\">,</span> <span class=\"nx\">ctx</span><span class=\"p\">)</span> <span class=\"p\">{</span>\n        <span class=\"k\">return</span> <span class=\"nf\">fetch</span><span class=\"p\">(</span><span class=\"dl\">'</span><span 
class=\"s1\">https://mta-sts.protonmail.ch/.well-known/mta-sts.txt</span><span class=\"dl\">'</span><span class=\"p\">);</span>\n    <span class=\"p\">},</span>\n<span class=\"p\">};</span></code></pre></figure>\n\n<p>So we will properly detail how to implement this solution, but more importantly how we can self-host a CloudFlare worker alternative, so as not to depend on (another) third-party service provider.</p>\n\n<h2 id=\"how-to-setup-mta-sts\">How to setup MTA-STS</h2>\n\n<blockquote>\n  <p>The configuration and code snippets below use the fake <code class=\"language-plaintext highlighter-rouge\">example.org</code> domain name.</p>\n</blockquote>\n\n<h3 id=\"create-a-new-mta-sts-subdomain\">Create a new <code class=\"language-plaintext highlighter-rouge\">mta-sts</code> subdomain</h3>\n\n<p>Before anything else, we have to create an <code class=\"language-plaintext highlighter-rouge\">mta-sts.example.org</code> subdomain. To do so, create an <code class=\"language-plaintext highlighter-rouge\">A</code> or <code class=\"language-plaintext highlighter-rouge\">AAAA</code> record which targets a Web server you administer. 
<code class=\"language-plaintext highlighter-rouge\">A</code> record type is currently safer, just in case the <a href=\"https://en.wikipedia.org/wiki/Message_transfer_agent\">MTA</a> about to connect to ProtonMail servers lacks IPv6 support.</p>\n\n<h3 id=\"create-a-new-apache-site--tls-reverse-proxy-and-cache\">Create a new Apache site : TLS, reverse proxy and cache</h3>\n\n<p>Once that’s done and DNS propagation has taken place, you have to set up TLS for this domain on the Web server, as you would do for any other site.</p>\n\n<p>As MTA-STS policies aren’t really known to change over time, we enable caching to prevent systematic connections to ProtonMail servers.</p>\n\n<p>For instance, your Apache VHOST configurations could look like this (<code class=\"language-plaintext highlighter-rouge\">/etc/apache2/sites-available/mta-sts.example.org.conf</code>) :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-apache\" data-lang=\"apache\"><span class=\"p\">&lt;</span><span class=\"nl\">VirtualHost</span><span class=\"sr\"> *:443</span><span class=\"p\">&gt;\n</span>\t<span class=\"nc\">ServerName</span> mta-sts.example.org\n\t<span class=\"nc\">ServerAdmin</span> webmaster@example.org\n\n\t<span class=\"c\"># Allow proxy engine to connect to an HTTPS server</span>\n\t<span class=\"nc\">SSLProxyEngine</span> <span class=\"ss\">On</span>\n\n\t<span class=\"c\"># Setup caching to make it honor ACL and cache the response for 1 day</span>\n\t<span class=\"nc\">CacheQuickHandler</span> <span class=\"ss\">Off</span>\n\t<span class=\"nc\">CacheSocache</span> shmcb\n\t<span class=\"nc\">CacheSocacheMaxTime</span> 86400\n\n\t<span class=\"c\"># Ignore ProtonMail \"Cache-Control\" and \"Expires\" headers</span>\n\t<span class=\"nc\">Header</span> <span class=\"ss\">unset</span> Cache-Control\n\t<span class=\"nc\">Header</span> <span class=\"ss\">unset</span> Expires\n\t<span class=\"c\"># Ignore lack of \"Last Modified\" header in ProtonMail 
response</span>\n\t<span class=\"nc\">CacheIgnoreNoLastMod</span> <span class=\"ss\">On</span>\n\n\t<span class=\"c\"># Transparently serve Proton's MTA-STS policy</span>\n\t<span class=\"p\">&lt;</span><span class=\"nl\">Location</span><span class=\"sr\"> /.well-known/mta-sts.txt</span><span class=\"p\">&gt;\n</span>\t\t<span class=\"nc\">CacheEnable</span> socache\n\n\t\t<span class=\"c\"># Result will be cached, always close TCP session afterwards</span>\n\t\t<span class=\"nc\">ProxyPass</span> https://mta-sts.protonmail.ch/.well-known/mta-sts.txt disablereuse=on\n\t\t<span class=\"nc\">ProxyPassReverse</span> https://mta-sts.protonmail.ch/.well-known/mta-sts.txt\n\t<span class=\"p\">&lt;/</span><span class=\"nl\">Location</span><span class=\"p\">&gt;\n</span>\n\t<span class=\"nc\">ErrorLog</span> ${APACHE_LOG_DIR}/mta-sts.example.org_error.log\n\t<span class=\"nc\">CustomLog</span> ${APACHE_LOG_DIR}/mta-sts.example.org_access.log combined\n\n\t<span class=\"nc\">SSLEngine</span> <span class=\"ss\">On</span>\n\t<span class=\"nc\">SSLCertificateFile</span> /path/to/mta-sts.example.org/fullchain.pem\n\t<span class=\"nc\">SSLCertificateKeyFile</span> /path/to/mta-sts.example.org/privkey.pem\n\n\t<span class=\"nc\">Header</span> <span class=\"ss\">always</span> <span class=\"ss\">set</span> Strict-Transport-Security: \"max-age=63072000\"\n<span class=\"p\">&lt;/</span><span class=\"nl\">VirtualHost</span><span class=\"p\">&gt;\n</span>\n<span class=\"p\">&lt;</span><span class=\"nl\">VirtualHost</span><span class=\"sr\"> *:80</span><span class=\"p\">&gt;\n</span>\t<span class=\"nc\">ServerName</span> mta-sts.example.org\n\t<span class=\"nc\">ServerAdmin</span> webmaster@example.org\n\n\t<span class=\"nc\">RewriteEngine</span> <span class=\"ss\">On</span>\n\t<span class=\"nc\">RewriteRule</span> ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]\n<span class=\"p\">&lt;/</span><span class=\"nl\">VirtualHost</span><span 
class=\"p\">&gt;</span></code></pre></figure>\n\n<p>You now only have to enable some Apache modules as well as above new site, and reload Apache configuration (Debian procedure) :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">a2enmod cache_socache proxy ssl\na2ensite mta-sts.example.org.conf\nsystemctl reload apache2.service\n\n<span class=\"c\"># If you opt for disk-based caching (mod_cache_disk), don't forget to enable Apache2 cleaning systemd service</span>\nsystemctl <span class=\"nb\">enable</span> <span class=\"nt\">--now</span> apache-htclean.service</code></pre></figure>\n\n<p>You can now check MTA-STS policy is properly enforced for your domain by downloading it :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">curl https://mta-sts.example.org/.well-known/mta-sts.txt\n\nversion: STSv1\nmode: enforce\nmx: mail.protonmail.ch\nmx: mailsec.protonmail.ch\nmax_age: 604800</code></pre></figure>\n\n<h3 id=\"setup-mta-sts-discovery-through-_mta-sts-subdomain\">Setup MTA-STS discovery through <code class=\"language-plaintext highlighter-rouge\">_mta-sts</code> subdomain</h3>\n\n<p>Now your MTA-STS policy is available to the world, you have to allow its “discovery”, through an additional <code class=\"language-plaintext highlighter-rouge\">TXT</code> DNS record.</p>\n\n<p>So the idea here, instead of copying ProtonMail own <code class=\"language-plaintext highlighter-rouge\">TXT</code> record value (<code class=\"language-plaintext highlighter-rouge\">dig +noall +answer TXT _mta-sts.protonmail.ch</code>) and adding it ourselves as <code class=\"language-plaintext highlighter-rouge\">_mta-sts.example.org</code>, is rather to simply create a <code class=\"language-plaintext highlighter-rouge\">CNAME</code> for <code class=\"language-plaintext highlighter-rouge\">_mta-sts.example.org</code>, pointing to <code class=\"language-plaintext 
highlighter-rouge\">_mta-sts.protonmail.ch</code>.</p>\n\n<p>This way, when MTAs probe your SMTP server for MTA-STS support, they will hit ProtonMail’s own beacon :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">dig +noall +answer TXT _mta-sts.example.org\n\n_mta-sts.example.org.\t1799\tIN\tCNAME\t_mta-sts.protonmail.ch.\n_mta-sts.protonmail.ch.\t1200\tIN\tTXT\t\t<span class=\"s2\">\"v=STSv1; id=190906205100Z;\"</span></code></pre></figure>\n\n<p>That’s it ! The MTA-STS policy should now be enforced for your domain :innocent:</p>\n\n<h2 id=\"how-to-setup-tls-rpt\">How to set up TLS-RPT</h2>\n\n<p>Wait ! Don’t close this tab yet ! While we still have access to our DNS administration console, let’s take the opportunity to also set up <a href=\"https://datatracker.ietf.org/doc/html/rfc8460\">TLS-RPT</a>.</p>\n\n<p>TLS-RPT (“TLS Reporting”) allows you to receive reports about e-mail delivery issues related to TLS connections. As we have just enforced a strict TLS policy, it is important to at least be notified of errors.</p>\n\n<p>So first, let’s query ProtonMail’s TLS-RPT configuration :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">dig +noall +answer TXT _smtp._tls.protonmail.ch\n\n_smtp._tls.protonmail.ch. 
1200\tIN\tTXT\t<span class=\"s2\">\"v=TLSRPTv1; rua=https://reports.proton.me/reports/smtptls\"</span></code></pre></figure>\n\n<p>So ProtonMail asks MTAs to <code class=\"language-plaintext highlighter-rouge\">POST</code> TLS-RPT reports to their own API endpoint.</p>\n\n<p>However, TLS-RPT allows us to specify <a href=\"https://datatracker.ietf.org/doc/html/rfc8460#section-3\">more than one RUA</a> (which stands for “Reporting URI for Aggregated reports”) :</p>\n\n<blockquote>\n  <p>The record supports the ability to declare more than one rua, and if there exists more than one, the reporter MAY attempt to deliver to each of the supported rua destinations.</p>\n</blockquote>\n\n<p>Let’s leverage this specification detail to send reports to ourselves as well as to ProtonMail (hoping they actually do something with them).</p>\n\n<p>You only have to create a new <code class=\"language-plaintext highlighter-rouge\">TXT</code> record for <code class=\"language-plaintext highlighter-rouge\">_smtp._tls.example.org</code> which contains the value <code class=\"language-plaintext highlighter-rouge\">v=TLSRPTv1; rua=mailto:security@example.org,https://reports.proton.me/reports/smtptls</code>. Don’t forget to adapt the e-mail address that should receive those reports !</p>\n\n<h2 id=\"check-your-security-policy\">Check your security policy</h2>\n\n<p>If you’re happy with your setup, you can then test it using (for instance, and I’ve absolutely nothing to do with them) <a href=\"https://easydmarc.com/tools/mta-sts-check\">EasyDMARC</a>. A <a href=\"https://easydmarc.com/tools/tls-rpt-check\">TLS-RPT checker</a> is also available.</p>\n\n<h2 id=\"conclusion\">Conclusion</h2>\n\n<p>I hope this post has helped you ! As always, credits go to their respective owners and comments are appreciated :pray:</p>\n\n<hr />\n\n<p><em>&gt; Post header image was generated using <a href=\"https://chatgpt.com/\">ChatGPT</a> (please, never again)</em></p>\n",
            "summary": "MTA-STS and TLS-RPT transparent setup for ProtonMail custom domain",
            "tags": ["Security"]
        },{
            "title": "Projects ask for contributors but are reluctant to contributions",
            "date_published": "2024-07-06T19:33:00+02:00",
            "date_modified": "2024-07-06T19:33:00+02:00",
            "id": "https://samuel.forestier.app/blog/articles/projects-ask-for-contributors-but-are-reluctant-to-contributions",
            "url": "https://samuel.forestier.app/blog/articles/projects-ask-for-contributors-but-are-reluctant-to-contributions",
            "image": "https://samuel.forestier.app/img/blog/projects-ask-for-contributors-but-are-reluctant-to-contributions_1.png",
            "author": {
                "name": "Samuel FORESTIER"
            },
            "content_html": "<p><a href=\"/img/blog/projects-ask-for-contributors-but-are-reluctant-to-contributions_1.png\"><img src=\"/img/blog/projects-ask-for-contributors-but-are-reluctant-to-contributions_1.png\" alt=\"A missing blog post image\" /></a></p>\n\n<h3 id=\"introduction\">Introduction</h3>\n\n<p>Over the past years, I contributed to various projects on GitHub. It turned out two recent contributions (<a href=\"https://github.com/isso-comments/isso/pull/976\">1</a> &amp; <a href=\"https://github.com/rustdesk/rustdesk/pull/7983\">2</a>) turned into <em>fiascos</em>. <em>Fiascos</em> I’ll try to comment here, as they retrospectively appeared to me as “weak signals” of FOSS ecosystem deep symptoms related to contributions acceptance.</p>\n\n<blockquote>\n  <p>In this post I’ll be referring a lot to “FOSS”, standing for Free and Open-Source Software.</p>\n</blockquote>\n\n<h3 id=\"responsibility-vs-accountability\">Responsibility vs. Accountability</h3>\n\n<p>It’s true maintainers are accountable for the software they work on. Although, as an external contributor I am definitely responsible for the (broken ?) 
patch I push.</p>\n\n<p>If you fear accountability, despite the “zero guarantee” OSS license that should already protect you (strictly speaking from a legal point of view), feel free to reject patches early (or you can always rework them yourself, if you prefer when “it’s written your way”).</p>\n\n<h3 id=\"dont-be-picky-about-testing-if-your-project-isnt-yet-\">Don’t be picky about testing if your project isn’t (yet ?)</h3>\n\n<p>Don’t put the burden of missing integration/non-regression tests on contributors ; they already invested a lot of time to :</p>\n\n<ul>\n  <li>\n    <p>check out your code and set up a development environment ;</p>\n  </li>\n  <li>\n    <p>read your documentation (if any) ;</p>\n  </li>\n  <li>\n    <p>nail down the bug/implement a new feature ;</p>\n  </li>\n  <li>\n    <p>read your contributing guidelines (if any) ;</p>\n  </li>\n  <li>\n    <p>follow your coding style ;</p>\n  </li>\n  <li>\n    <p>clean up the patch before proposing it to the world ;</p>\n  </li>\n  <li>\n    <p>deal with your forge’s technical specificities to actually submit changes.</p>\n  </li>\n</ul>\n\n<p>Optionally, they could also figure out <a href=\"https://github.com/rust-cli/confy/pull/94\">the issue required to be addressed in a third-party dependency</a>, or even, worst case : <a href=\"https://github.com/rustdesk-org/confy/pull/2\">a fork you made of it</a> (!).</p>\n\n<p>Improving “tests” could be handled afterwards, as a totally separate concern (testing a currently untested code base is fully independent from addressing the actual issue).</p>\n\n<p>Let’s be clear : improving an untested piece of code is still <strong>improving</strong> a piece of code. 
End-users (including third-party developers) absolutely do not care about your current code coverage status.</p>\n\n<p>If your entry point (<code class=\"language-plaintext highlighter-rouge\">main</code>) isn’t tested (only integration tests could cover it), please exclude it from code coverage results, allowing it to be modified (read “improved”) without bringing “bad statistics” to your CI.</p>\n\n<h3 id=\"foss-workload\">FOSS workload</h3>\n\n<p>This is not new : there are only a few people <em>maintaining</em> way too much software.<br />\nOne of the consequences of these low <a href=\"https://en.wikipedia.org/wiki/Bus_factor\">bus factors</a> (also called “circus factors” ; consider <a href=\"https://github.com/dylanaraps/dylanaraps/commit/811599cc564418e242f23a11082299323e7f62f8\">the recent decision of Neofetch’s creator and maintainer to simply quit</a> after more than two years of vacancy) : it’s hard to expect fully-rational and situational reviews from maintainers (i.e. reviews based on the actual <em>diff</em> instead of whether or not the CI pipeline is green, commits have been “signed-off” or overall code coverage has increased).</p>\n\n<p>Code bases are huge and usually rather complex (we wouldn’t propose changes if they weren’t), thus they often require the maintainer’s complete focus to be properly reviewed.</p>\n\n<h3 id=\"opinionated-foss-philosophy\">(opinionated) FOSS philosophy</h3>\n\n<p>As a maintainer, you’re committed to the project roadmap. 
But if you adhere to the Open-Source Software philosophy, that’s only one of your missions.</p>\n\n<p>Let’s name a few others :</p>\n\n<ul>\n  <li>\n    <p>make it work in most (supported) environments, especially when a contributor already did the job and implemented support for another platform or environment ;</p>\n  </li>\n  <li>\n    <p>make it as secure as possible because, despite the “zero guarantee” license, you don’t know what your software will be used for, and it may end up in a critical system (that’s why <a href=\"https://therecord.media/eu-to-fund-bug-bounty-programs-for-libreoffice-mastodon-three-others\">the EU funds OSS bug bounties</a> or <a href=\"https://openssf.org/blog/2022/09/27/the-united-states-securing-open-source-software-act-what-you-need-to-know/\">the USA now considers OSS security as critical</a>) ;</p>\n  </li>\n  <li>\n    <p>provide help and support for users (at least by pointing them to documentation or third-party resources so they can properly work on their side) ;</p>\n  </li>\n  <li>\n    <p>explore any possibility of “empowerment” : if there is a need, or when the developer opted for an opinionated default behavior, there should be an option for it.</p>\n  </li>\n</ul>\n\n<p>Hot take : if you don’t think so, but are still enthusiastic about publishing some lines of code (because you like sharing stuff with the world, or think code is a form of art), please push them to a read-only mirror or disable direct contributions on your forge (this will spare developers precious time and make them opt for an alternative solution to address their issue).</p>\n\n<h3 id=\"about-repetition-bias\">About “repetition bias”</h3>\n\n<p>Just because plenty of idiots send you harsh or dumb messages does not mean a new contributor will necessarily be harsh or dumb. 
You don’t have to repeat the same thing four times : we usually have a brain behind our screen too.<br />\nIf you are going to refuse the proposed changes eventually, don’t ask three times for a new version (believe it or not, it actually consumes time on the other end) ; reject them quickly (not after 6 months) and explain <em>why</em>.</p>\n\n<h3 id=\"about-upstream-testing\">About upstream testing</h3>\n\n<p>Do not ask me to test third-party APIs, especially standard library ones. They are usually already tested upstream, and if not (shame on their maintainers !), they are <em>de facto</em> tested (due to the number of programs using them for several years and the fact they still exist in their current form).</p>\n\n<p>Instead, you should add linting jobs to your CI, which will check for their proper use (number of arguments, possibly their types, return type, …).</p>\n\n<h3 id=\"proofs-of-working-patch-must-not-come-from-contributors-themselves\">Proofs of “working patch” must not come from contributors themselves</h3>\n\n<p>Please do not ask me to “record videos” of my screen showing that a submitted patch “works as expected”. This has absolutely no value, and it consumes time.</p>\n\n<p>You can’t possibly have any trust in my machine, the code I am (showing you) running, the configuration I dumped somewhere on disk, or the runtime environment it uses. It isn’t reproducible at all, and gives a false feeling of safety.</p>\n\n<p>You want to make sure proposed changes “work” and you don’t have any non-regression test ? Please kindly check out my branch and run your own tests locally. 
You’re likely set up to do so, and if you think your setup is representative enough, you’re free to accept changes and merge them.</p>\n\n<p>In cryptography, we don’t trust a peer handing over a public key without a signature from its private part.<br />\nMore generally in science, we don’t consider something true until it has been demonstrated.</p>\n\n<h3 id=\"be-concise-avoid-low-value-comments\">Be concise, avoid low-value comments</h3>\n\n<p>… including moral judgment and/or proselytism.</p>\n\n<p>Serializing information (from brain to a text box) and deserializing it (from an HTML page to brain) cost us time and energy. Please don’t use the issue tracker or review comments to share your rants like :</p>\n\n<blockquote>\n  <p>[…] You need to [understand] test, “I think” is the worst [word] for testing.</p>\n</blockquote>\n\n<p>… <a href=\"https://github.com/rustdesk/rustdesk/pull/7983#issuecomment-2104604813\">which was quickly edited</a> afterwards to :</p>\n\n<blockquote>\n  <p>[…] You need to [understand] test, testing is for fixing your “I think”.</p>\n</blockquote>\n\n<p>Yes, thanks, I understand tests quite well. I <a href=\"https://github.com/HorlogeSkynet/archey4/pull/24\">implemented unit and integration tests</a> from scratch for an existing code base, and <a href=\"https://github.com/HorlogeSkynet/AppArmor/commit/4496e1af8eb74cd8bc1de090c25d7bb27ac68188#diff-a6641da3acd747c3ab0588c29224697f6c3081bd464365fba4c109b58f26f6f1\">always include proper jobs in new projects</a>. 
All my contributions <a href=\"https://github.com/lxc/lxcfs/pull/639\">address documentation and tests</a>, when it’s possible (read “when your project already has some”).</p>\n\n<p>(While we’re at it : <a href=\"https://github.com/rustdesk/rustdesk/pull/7983#issuecomment-2104596542\">my “I think”</a> wasn’t about <em>testing</em> itself, but precisely the fact that “testing videos” recorded on third-party machines cannot act as tests themselves.)</p>\n\n<h3 id=\"youre-right-but-since-the-xz-backdoor-incident-we-must-be-careful\">“You’re right, but since the <a href=\"https://www.wired.com/story/xz-backdoor-everything-you-need-to-know/\">xz backdoor incident</a> we must be careful”</h3>\n\n<p>Contrary to way too many members of the FOSS community, I don’t consider the “xz backdoor attempt” as a failed one. Even if it didn’t reach Debian <code class=\"language-plaintext highlighter-rouge\">stable</code> repositories (or any other Linux distribution “stable” release) nor each and every running OpenSSH server on the planet, it actually ended up running on systems connected to the Internet, hence machines that we can consider “compromised”.<br />\nOne may always say that running production systems on “unstable” or “rolling-release” distributions is a terrible idea (and I would agree here), but this changes absolutely nothing : malicious code has been executed <em>somewhere</em> without the CPU owner’s consent, and that is definitely a “success” from the perpetrator(s)’ point of view.</p>\n\n<p>Moreover, if the supply-chain attack actually targeted an organization running even <strong>one</strong> public-facing SSH server using a “rolling-release” distribution, that organization could be considered compromised.<br />\nIf we take Debian for instance, almost one month passed between the first merge of malicious content (<a href=\"https://research.swtch.com/xz-script\">it appeared “broken”</a>, from the perpetrator(s)’ point of view) and the revert to a previous (supposedly) “clean” version.</p>\n\n<p>One, 
month. Think about it.</p>\n\n<p><a href=\"/img/blog/projects-ask-for-contributors-but-are-reluctant-to-contributions_2.png\"><img src=\"/img/blog/projects-ask-for-contributors-but-are-reluctant-to-contributions_2.png\" alt=\"A missing blog post image\" /></a></p>\n\n<p>Indeed, we must be careful about what we merge. That’s why integration pipelines must be as relevant as possible and security tests should also be included. But automation won’t solve any problem (as always), and the brain needs to be switched on to give a contextualized review.</p>\n\n<h3 id=\"conclusion\">Conclusion</h3>\n\n<p>Maintaining software means accepting that we (maintainers) are not the only ones to use it, nor to decide what it should look like. It also means others’ needs have to be treated like our own. The direct consequence of this is a matter of time (it’s always about that, don’t you think ?).<br />\nBy publishing software on a public forge, you (implicitly) signed up for dedicating time to others. And that’s what constitutes the power of FOSS. Use it well.</p>\n\n<hr />\n\n<p><em>&gt; Post header image was generated using <a href=\"https://firefly.adobe.com/\">Adobe Firefly</a></em></p>\n\n",
            "summary": "Retrospective analysis of two rather recent contributions to Open-Source Software",
            "tags": ["Articles"]
        },{
            "title": "How I finally got rid of Docker",
            "date_published": "2023-11-11T21:56:00+01:00",
            "date_modified": "2024-01-07T15:33:00+01:00",
            "id": "https://samuel.forestier.app/blog/security/how-i-finally-got-rid-of-docker",
            "url": "https://samuel.forestier.app/blog/security/how-i-finally-got-rid-of-docker",
            "image": "https://samuel.forestier.app/img/blog/how-i-finally-got-rid-of-docker_1.png",
            "author": {
                "name": "Samuel FORESTIER"
            },
            "content_html": "<p><a href=\"/img/blog/how-i-finally-got-rid-of-docker_1.png\"><img src=\"/img/blog/how-i-finally-got-rid-of-docker_1.png\" alt=\"A missing blog post image\" /></a></p>\n\n<h3 id=\"introduction\">Introduction</h3>\n\n<p><a href=\"https://github.com/docker/compose\">Docker Compose</a> has been a regular way of distributing/running container stacks to deploy services for many years now. What is actually interesting is the <a href=\"https://github.com/compose-spec/compose-spec/blob/master/spec.md\">“Compose spec”</a>, which mainly defines static representation(s) (it used to be multiple versions of it) of these stacks (i.e. how different services share resources [or not], and “work” together).</p>\n\n<p>For this second post of this “Podman series” (also see “<a href=\"/blog/security/podman-rootless-in-podman-rootless-the-debian-way\">Podman rootless in Podman rootless, the Debian way</a>”), I will explain how I replaced Docker-Compose (and hence Docker itself) by Podman-Compose for some personal services.</p>\n\n<h3 id=\"from-docker-compose-to-docker-compose\">From Docker-Compose to Docker Compose</h3>\n\n<p>Historically, <em>Composing</em> with Docker was achieved by using <a href=\"https://github.com/docker/compose/tree/v1#docker-compose\">Docker-Compose (v1)</a>, a binary which was actually a Python program built using <a href=\"https://pyinstaller.org/\">PyInstaller</a>, and invoked by <code class=\"language-plaintext highlighter-rouge\">docker-compose</code>. 
On Debian, <a href=\"https://packages.debian.org/stable/docker-compose\">it is still packaged in main repositories</a>.</p>\n\n<p>Since 2021 (and <a href=\"https://github.com/docker/compose/pull/9625\">mostly since last year</a>), <em>Composing</em> is now achieved using Docker Compose (v2), a Go plugin for the <code class=\"language-plaintext highlighter-rouge\">docker</code> (client) CLI.<br />\nOn Debian, once the Docker repository has been added, setup is pretty straightforward : <code class=\"language-plaintext highlighter-rouge\">apt install docker-ce docker-compose-plugin</code>.<br />\nInvocation is then done with <code class=\"language-plaintext highlighter-rouge\">docker compose</code> (please note the space instead of the dash).</p>\n\n<p>In either case, a <code class=\"language-plaintext highlighter-rouge\">docker-compose.yml</code> file is usually downloaded according to vendor recommendations (or carefully handcrafted when missing :stuck_out_tongue:), and <em>modulo</em> a <code class=\"language-plaintext highlighter-rouge\">docker[-]compose up -d</code> invocation, you’re all set and running.</p>\n\n<h3 id=\"composing-with-podman\"><em>Composing</em> with Podman</h3>\n\n<p>Whereas Podman has been built as a “drop-in replacement” for Docker (despite internals being fundamentally different), <em>Composing</em> was initially out-of-scope (or only very partially supported). But it’s 2023, and <a href=\"https://github.com/containers/podman-compose\">Podman-Compose</a> is now a robust alternative.</p>\n\n<p>Within this container ecosystem where (almost) everything is written in Go, please welcome Python again (and mind the comeback of the dash !).</p>\n\n<p>Podman-Compose implements the Compose spec, while still being oriented “rootless” (after all, that’s what we expect from Podman, isn’t it ?) 
and “daemon-less”.</p>\n\n<h3 id=\"podman-compose-setup-for-debian-12-bookworm\">Podman-Compose setup for Debian 12 (Bookworm)</h3>\n\n<p>First we create a system user dedicated to Podman execution :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">adduser <span class=\"nt\">--system</span> namdop <span class=\"nt\">--group</span> <span class=\"nt\">--home</span> /opt/namdop</code></pre></figure>\n\n<p>Unlike Docker, there is no such thing as a (privileged) daemon running and waiting for us to give commands.<br />\nSo we must enable <a href=\"https://www.freedesktop.org/software/systemd/man/latest/loginctl.html#enable-linger%20USER%E2%80%A6\">systemd-logind “lingering”</a> for our system user to make systemd start (in user mode) on machine boot, and thus provide a proper runtime for OOTB Podman execution :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">loginctl enable-linger namdop</code></pre></figure>\n\n<p>Now we can install Podman and deal with Debian (undesired) pulled dependencies :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">apt-get <span class=\"nb\">install</span> <span class=\"nt\">-y</span> <span class=\"nt\">--no-install-recommends</span> <span class=\"se\">\\</span>\n    podman <span class=\"se\">\\</span>\n    slirp4netns <span class=\"se\">\\</span>\n    uidmap <span class=\"se\">\\</span>\n    golang-github-containernetworking-plugin-dnsname\n\nsystemctl <span class=\"nt\">--global</span> disable <span class=\"nt\">--now</span> <span class=\"se\">\\</span>\n    dirmngr.socket <span class=\"se\">\\</span>\n    gpg-agent.socket <span class=\"se\">\\</span>\n    gpg-agent-browser.socket <span class=\"se\">\\</span>\n    gpg-agent-extra.socket <span class=\"se\">\\</span>\n    gpg-agent-ssh.socket</code></pre></figure>\n\n<p>We then define our ranges of UIDs/GIDs dedicated to our system user :</p>\n\n<figure 
class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\"><span class=\"nb\">echo</span> <span class=\"s2\">\"namdop:100000:65536\"</span> <span class=\"o\">&gt;</span> /etc/subuid\n<span class=\"nb\">echo</span> <span class=\"s2\">\"namdop:100000:65536\"</span> <span class=\"o\">&gt;</span> /etc/subgid</code></pre></figure>\n\n<blockquote>\n  <p>:warning: <a href=\"https://packages.debian.org/stable/podman-compose\">Podman-Compose is packaged in Debian main repositories</a>, but (already) pretty old so you’ll understand why <a href=\"https://pypi.org/project/podman-compose/\">we go through PyPI to retrieve an up-to-date version</a>.</p>\n</blockquote>\n\n<p>As we like doing clean things and we actually don’t wanna <a href=\"https://peps.python.org/pep-0668/#motivation\">break our system</a>, we will install <code class=\"language-plaintext highlighter-rouge\">podman-compose</code> in a Python virtual environment :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">apt-get <span class=\"nb\">install</span> <span class=\"nt\">-y</span> python3-venv</code></pre></figure>\n\n<p>Now we can finalize the setup as our regular system user :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">su - namdop <span class=\"nt\">-s</span> /bin/bash\n\npython3 <span class=\"nt\">-m</span> venv venv <span class=\"o\">&amp;&amp;</span> <span class=\"nb\">source </span>venv/bin/activate\npip3 <span class=\"nb\">install</span> <span class=\"nt\">-U</span> pip wheel\n\npip3 <span class=\"nb\">install </span>podman-compose\n\n<span class=\"nb\">mkdir </span>my_service <span class=\"o\">&amp;&amp;</span> <span class=\"nb\">cd </span>my_service/\n\n<span class=\"c\"># Here you can setup your docker-compose.yml and required configuration/data, as you would do with Docker !</span>\n<span class=\"c\"># ...</span>\n\npodman-compose up <span class=\"nt\">-d</span></code></pre></figure>\n\n<h3 
id=\"the-restart-policy-pitfall\">The <code class=\"language-plaintext highlighter-rouge\">restart</code> policy pitfall</h3>\n\n<p>Sooo, as you may already know, containers do not start on boot <strong>by default</strong>. This behavior depends on the <code class=\"language-plaintext highlighter-rouge\">--restart={always,no,on-failure[:max-retries],unless-stopped}</code> flag, that you <del>can</del> should specify when creating a container (or through the <a href=\"https://github.com/compose-spec/compose-spec/blob/master/spec.md#restart\"><code class=\"language-plaintext highlighter-rouge\">restart</code> key</a> in a Compose file).</p>\n\n<p>Unlike for “standalone” Podman where we could run <code class=\"language-plaintext highlighter-rouge\">podman generate systemd &lt;container&gt;</code> to make a container actually managed by systemd as a regular “service” (and thus enjoy the <a href=\"https://www.freedesktop.org/software/systemd/man/latest/systemd.service.html#Restart=\"><code class=\"language-plaintext highlighter-rouge\">Restart</code></a> policy feature), <a href=\"https://github.com/containers/podman-compose/issues/254\">there is no such thing (yet ?)</a>.</p>\n\n<p>As a workaround, we can rely on the (global) systemd service unit (called <code class=\"language-plaintext highlighter-rouge\">podman-restart.service</code>), responsible for restarting containers on boot, that we must enable in our (user mode) systemd runtime :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\"><span class=\"nv\">XDG_RUNTIME_DIR</span><span class=\"o\">=</span><span class=\"s2\">\"/run/user/</span><span class=\"si\">$(</span><span class=\"nb\">id</span> <span class=\"nt\">-u</span><span class=\"si\">)</span><span class=\"s2\">\"</span> systemctl <span class=\"nt\">--user</span> <span class=\"nb\">enable</span> <span class=\"nt\">--now</span> podman-restart.service</code></pre></figure>\n\n<p>Whereas <code class=\"language-plaintext 
highlighter-rouge\">unless-stopped</code> is supposed to be <a href=\"https://docs.podman.io/en/stable/markdown/podman-run.1.html#restart-policy\">identical to <code class=\"language-plaintext highlighter-rouge\">always</code></a> with Podman, packaging hits us hard and that’s unfortunate.<br />\n<code class=\"language-plaintext highlighter-rouge\">podman-restart.service</code> unit specifies <code class=\"language-plaintext highlighter-rouge\">--filter=always</code> to only (re)start containers that should be.</p>\n\n<p>So you <strong>must</strong> add :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"na\">services</span><span class=\"pi\">:</span>\n  <span class=\"na\">service</span><span class=\"pi\">:</span>\n    <span class=\"c1\"># ...</span>\n    <span class=\"na\">restart</span><span class=\"pi\">:</span> <span class=\"s\">always</span>\n    <span class=\"c1\"># ...</span></code></pre></figure>\n\n<p>… to your Compose stacks to enjoy containers auto-restart, or you will have to tweak the upstream systemd unit :clown_face:</p>\n\n<p>This issue <a href=\"https://github.com/containers/podman/issues/10539#issuecomment-900992521\">has already been mentioned</a> two years ago by @plevart, but it has not been fixed yet.</p>\n\n<h3 id=\"conclusion\">Conclusion</h3>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">apt-get autoremove <span class=\"nt\">--purge</span> docker-ce docker-compose-plugin\n\n<span class=\"nb\">rm</span> <span class=\"nt\">-f</span> /etc/apt/keyrings/docker.gpg\n<span class=\"nb\">rm</span> <span class=\"nt\">-f</span> /etc/apt/sources.list.d/docker.list\napt-get update</code></pre></figure>\n\n<p><a href=\"/img/blog/how-i-finally-got-rid-of-docker_2.png\"><img src=\"/img/blog/how-i-finally-got-rid-of-docker_2.png\" alt=\"A missing blog post image\" /></a></p>\n",
            "summary": "Podman-Compose setup walkthrough on Debian 12",
            "tags": ["Security"]
        },{
            "title": "Certbot : unprivileged Debian setup walkthrough",
            "date_published": "2023-10-24T21:30:00+02:00",
            "date_modified": "2025-10-24T19:43:00+02:00",
            "id": "https://samuel.forestier.app/blog/security/certbot-unprivileged-debian-setup-walkthrough",
            "url": "https://samuel.forestier.app/blog/security/certbot-unprivileged-debian-setup-walkthrough",
            "image": "https://samuel.forestier.app/img/blog/certbot-unprivileged-debian-setup-walkthrough_1.png",
            "author": {
                "name": "Samuel FORESTIER"
            },
            "content_html": "<p><a href=\"/img/blog/certbot-unprivileged-debian-setup-walkthrough_1.png\"><img src=\"/img/blog/certbot-unprivileged-debian-setup-walkthrough_1.png\" alt=\"A missing blog post image\" /></a></p>\n\n<h2 id=\"introduction\">Introduction</h2>\n\n<p>Some weeks ago, while I was upgrading my server operating system, I had to reboot the machine to load the new kernel. Unfortunately, the machine actually wasn’t feeling well and decided <em>not</em> to reboot. After contacting the data center support, it appeared the chassis had a critical hardware failure which prevented it from booting again.</p>\n\n<p>This event taught me several things :</p>\n\n<ul>\n  <li>\n    <p>reboot is not an innocuous operation (in my very case : IPMI fails very quickly after power on, and serial console was not reading anything interesting) ;</p>\n  </li>\n  <li>\n    <p>hardware issues may have nothing to do with <em>current</em> runtime (no kernel warning popped out recently) ;</p>\n  </li>\n  <li>\n    <p>backups are primordial (you can’t imagine how the last backup I’ve run some days before the upgrade reassured me during this period) ;</p>\n  </li>\n  <li>\n    <p>SLA means <em>something</em>, including for IaaS (in my very case : probably due to a <a href=\"https://en.wikipedia.org/wiki/Blade_server\">blade containing multiple servers</a>, data center operators couldn’t access the chassis to get the disk out without impacting other clients. So they did not, according to the SLA).</p>\n  </li>\n</ul>\n\n<p>On our “new” machine, I had to go through re-setup Web server(s) and, among other things, TLS certificates.</p>\n\n<p>Do I really need to introduce you to <a href=\"https://eff.org/\">EFF</a>’s <a href=\"https://certbot.eff.org/\">Certbot</a>, which you are very likely already using to obtained HTTPS certificates from <a href=\"https://letsencrypt.org/\">Let’s Encrypt</a> ? 
I guess not, ’cause you wouldn’t be reading this blog post if you knew nothing about it.</p>\n\n<blockquote>\n  <p>So what is the link between your server crash and Certbot ?</p>\n</blockquote>\n\n<p>Well, this time I decided not to grant administrator privileges to this piece of software, and we’ll see how we can achieve that.</p>\n\n<h2 id=\"installation\">Installation</h2>\n\n<p>The official setup procedure <a href=\"https://certbot.eff.org/instructions?ws=apache&amp;os=debiantesting\">recommends</a> going through Canonical’s <a href=\"https://snapcraft.io/snapd\">snapd</a> software to deploy Certbot, but I tend to reject these approaches, especially when it comes to running interpreted code (as opposed to compiled C/C++ programs, which can require several libraries loaded at runtime, which can “justify” [please note the quotes] shipping tons of BLOBs to <em>ease</em> deployment among heterogeneous systems).</p>\n\n<p><a href=\"/img/blog/certbot-unprivileged-debian-setup-walkthrough_2.png\"><img src=\"/img/blog/certbot-unprivileged-debian-setup-walkthrough_2.png\" alt=\"A missing blog post image\" /></a></p>\n\n<p>As Certbot is still distributed through <a href=\"https://pypi.org/project/certbot/\">PyPI</a>, we’ll go this way.</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">apt <span class=\"nb\">install</span> <span class=\"nt\">-y</span> python3-venv\nadduser <span class=\"nt\">--system</span> certbot <span class=\"nt\">--home</span> /opt/certbot\nsu - certbot <span class=\"nt\">-s</span> /bin/bash\n\n<span class=\"c\"># As certbot user :</span>\npython3 <span class=\"nt\">-m</span> venv venv <span class=\"o\">&amp;&amp;</span> <span class=\"nb\">source </span>venv/bin/activate\npip3 <span class=\"nb\">install</span> <span class=\"nt\">-U</span> pip wheel\npip3 <span class=\"nb\">install </span>certbot</code></pre></figure>\n\n<h2 id=\"asking-for-a-certificate\">Asking for a certificate</h2>\n\n<p>So there are two ways to ask for an 
HTTPS certificate : either Certbot spawns an HTTP Web server and directly responds to the CA’s http-01 challenge, or it can <em>write</em> to an already “served” HTTP Web root.</p>\n\n<h3 id=\"the-standalone-http-01-server-way\">The standalone http-01 server way</h3>\n\n<p>The idea here is to make Certbot bind a local port &gt; 1024, redirect new HTTP traffic to this port and let it directly respond to the http-01 challenge (as if it were the actual Web server behind your domain name/IP address).</p>\n\n<p>For a certificate requested for the first time this way (as the <code class=\"language-plaintext highlighter-rouge\">certbot</code> user) :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">venv/bin/certbot <span class=\"se\">\\</span>\n    <span class=\"nt\">--work-dir</span><span class=\"o\">=</span>/opt/certbot <span class=\"se\">\\</span>\n    <span class=\"nt\">--logs-dir</span><span class=\"o\">=</span>/opt/certbot/logs <span class=\"se\">\\</span>\n    <span class=\"nt\">--config-dir</span><span class=\"o\">=</span>/opt/certbot/config <span class=\"se\">\\</span>\n    certonly <span class=\"se\">\\</span>\n        <span class=\"nt\">--standalone</span> <span class=\"se\">\\</span>\n        <span class=\"nt\">--http-01-address</span> 127.0.0.1 <span class=\"se\">\\</span>\n        <span class=\"nt\">--http-01-port</span> 8080 <span class=\"se\">\\</span>\n        <span class=\"nt\">-d</span> <span class=\"s2\">\"your.domain.name\"</span></code></pre></figure>\n\n<p>… I propose the Bash renewal script below (mind the detailed steps, which you should adapt to your setup) :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\"><span class=\"c\">#!/usr/bin/env bash</span>\n\n<span class=\"nb\">set</span> <span class=\"nt\">-euo</span> pipefail\n\n<span class=\"nv\">DOMAIN</span><span class=\"o\">=</span><span class=\"s2\">\"your.domain.name\"</span>\n<span class=\"nv\">WAN_IP_ADDRESS</span><span 
class=\"o\">=</span><span class=\"s2\">\"your.wan.ip.address\"</span>\n<span class=\"nv\">WAN_NETWORK_INTERFACE</span><span class=\"o\">=</span><span class=\"s2\">\"eth0\"</span>\n<span class=\"nv\">HTTP_01_LOCAL_ADDR</span><span class=\"o\">=</span><span class=\"s2\">\"127.0.0.1\"</span>\n<span class=\"nv\">HTTP_01_LOCAL_PORT</span><span class=\"o\">=</span>8080\n\n<span class=\"c\"># 1. Some firewall rules to DNAT and ACCEPT (new) HTTP traffic to local http-01 port</span>\n<span class=\"nv\">dnat_rule_handle</span><span class=\"o\">=</span><span class=\"s2\">\"</span><span class=\"si\">$(</span>nft <span class=\"nt\">-ej</span> insert rule ip nat prerouting ip daddr <span class=\"s2\">\"</span><span class=\"nv\">$WAN_IP_ADDRESS</span><span class=\"s2\">\"</span> tcp dport http ct state new dnat to <span class=\"s2\">\"</span><span class=\"k\">${</span><span class=\"nv\">HTTP_01_LOCAL_ADDR</span><span class=\"k\">}</span><span class=\"s2\">:</span><span class=\"k\">${</span><span class=\"nv\">HTTP_01_LOCAL_PORT</span><span class=\"k\">}</span><span class=\"s2\">\"</span> | <span class=\"nb\">grep</span> <span class=\"nt\">-vE</span> <span class=\"s1\">'^#'</span> | jq <span class=\"nt\">-r</span> .nftables[0].insert.rule.handle<span class=\"si\">)</span><span class=\"s2\">\"</span>\n<span class=\"nv\">filter_rule_handle</span><span class=\"o\">=</span><span class=\"s2\">\"</span><span class=\"si\">$(</span>nft <span class=\"nt\">-ej</span> insert rule inet filter input ip daddr <span class=\"s2\">\"</span><span class=\"nv\">$HTTP_01_LOCAL_ADDR</span><span class=\"s2\">\"</span> tcp dport <span class=\"s2\">\"</span><span class=\"nv\">$HTTP_01_LOCAL_PORT</span><span class=\"s2\">\"</span> ct state new accept | <span class=\"nb\">grep</span> <span class=\"nt\">-vE</span> <span class=\"s1\">'^#'</span> | jq <span class=\"nt\">-r</span> .nftables[0].insert.rule.handle<span class=\"si\">)</span><span class=\"s2\">\"</span>\n\n<span class=\"c\"># 2. 
Allow DNAT to loopback</span>\nsysctl <span class=\"nt\">-q</span> <span class=\"nt\">-w</span> <span class=\"s2\">\"net.ipv4.conf.</span><span class=\"k\">${</span><span class=\"nv\">WAN_NETWORK_INTERFACE</span><span class=\"k\">}</span><span class=\"s2\">.route_localnet=1\"</span>\n\n<span class=\"c\"># Renew the certificate using Certbot (`|| true` is required to allow it to fail, \"for reasons\")</span>\nsu - certbot <span class=\"nt\">-s</span> /bin/bash <span class=\"nt\">-c</span> <span class=\"se\">\\</span>\n    <span class=\"s2\">\"/opt/certbot/venv/bin/certbot --work-dir=/opt/certbot --logs-dir=/opt/certbot/logs --config-dir=/opt/certbot/config renew -q\"</span> <span class=\"se\">\\</span>\n    <span class=\"o\">||</span> <span class=\"nb\">true</span>\n\n<span class=\"c\"># 3. Disallow DNAT to loopback</span>\nsysctl <span class=\"nt\">-q</span> <span class=\"nt\">-w</span> <span class=\"s2\">\"net.ipv4.conf.</span><span class=\"k\">${</span><span class=\"nv\">WAN_NETWORK_INTERFACE</span><span class=\"k\">}</span><span class=\"s2\">.route_localnet=0\"</span>\n\n<span class=\"c\"># 4. (situational) Install cryptographic materials where they need to be</span>\n<span class=\"nb\">cp</span> <span class=\"s2\">\"/opt/certbot/config/live/</span><span class=\"k\">${</span><span class=\"nv\">DOMAIN</span><span class=\"k\">}</span><span class=\"s2\">/fullchain.pem\"</span> /path/to/fullchain.pem\n<span class=\"nb\">cp</span> <span class=\"s2\">\"/opt/certbot/config/live/</span><span class=\"k\">${</span><span class=\"nv\">DOMAIN</span><span class=\"k\">}</span><span class=\"s2\">/privkey.pem\"</span> /path/to/privkey.pem\n\n<span class=\"c\"># 5. (situational) Restart the service(s) to load the new certificate(s)</span>\nsystemctl restart apache2.service\n\n<span class=\"c\"># 6. 
Delete our temporary firewall rules</span>\nnft delete rule inet filter input handle <span class=\"s2\">\"</span><span class=\"nv\">$filter_rule_handle</span><span class=\"s2\">\"</span> <span class=\"o\">||</span> <span class=\"nb\">true\n</span>nft delete rule ip nat prerouting handle <span class=\"s2\">\"</span><span class=\"nv\">$dnat_rule_handle</span><span class=\"s2\">\"</span> <span class=\"o\">||</span> <span class=\"nb\">true</span></code></pre></figure>\n\n<p>For this script to work, I assume :</p>\n\n<ul>\n  <li>\n    <p><code class=\"language-plaintext highlighter-rouge\">jq</code> is available on the system (used to parse <code class=\"language-plaintext highlighter-rouge\">nft</code> JSON output) ;</p>\n  </li>\n  <li>\n    <p>the firewall is managed through nftables ;</p>\n  </li>\n  <li>\n    <p>(nftables) tables <code class=\"language-plaintext highlighter-rouge\">ip nat</code> and <code class=\"language-plaintext highlighter-rouge\">inet filter</code> exist ;</p>\n  </li>\n  <li>\n    <p>(nftables) chains <code class=\"language-plaintext highlighter-rouge\">prerouting</code> (<code class=\"language-plaintext highlighter-rouge\">ip nat</code>) and <code class=\"language-plaintext highlighter-rouge\">input</code> (<code class=\"language-plaintext highlighter-rouge\">inet filter</code>) exist.</p>\n  </li>\n</ul>\n\n<p>Note : the http-01 server configuration is stored by Certbot, so we don’t have to specify the <code class=\"language-plaintext highlighter-rouge\">--http-01-*</code> arguments during renewal.</p>\n\n<h3 id=\"the-already-served-http-web-root\">The already “served” HTTP Web root</h3>\n\n<p>This is the method I’d prefer, as we don’t have to play with the firewall.</p>\n\n<p>First, you will have to tweak your Web server configuration (i.e. 
the default VHOST) to :</p>\n\n<ol>\n  <li>\n    <p>Disable HTTPS redirection for <code class=\"language-plaintext highlighter-rouge\">.well-known</code> URIs (if any) ;</p>\n  </li>\n  <li>\n    <p>Allow access to the <code class=\"language-plaintext highlighter-rouge\">.well-known/acme-challenge</code> Web root (if restricted).</p>\n  </li>\n</ol>\n\n<p>Below is, for instance, an Apache httpd configuration :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-apache\" data-lang=\"apache\"><span class=\"p\">&lt;</span><span class=\"nl\">VirtualHost</span><span class=\"sr\"> *:80</span><span class=\"p\">&gt;\n</span>\t<span class=\"nc\">ServerName</span> your.domain.name\n\n\t<span class=\"c\"># Certbot</span>\n\t<span class=\"nc\">DocumentRoot</span> /var/www\n\t<span class=\"p\">&lt;</span><span class=\"nl\">Directory</span><span class=\"sr\"> /var/www/.well-known/acme-challenge</span><span class=\"p\">&gt;\n</span>\t\t<span class=\"nc\">Require</span> <span class=\"ss\">all</span> granted\n\t<span class=\"p\">&lt;/</span><span class=\"nl\">Directory</span><span class=\"p\">&gt;\n</span>\n\t<span class=\"nc\">RewriteEngine</span> <span class=\"ss\">on</span>\n\t<span class=\"nc\">RewriteCond</span> %{REQUEST_URI} !^/\\.well-known\n\t<span class=\"nc\">RewriteRule</span> ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]\n<span class=\"p\">&lt;/</span><span class=\"nl\">VirtualHost</span><span class=\"p\">&gt;</span></code></pre></figure>\n\n<p>Now, you can run the following commands :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\"><span class=\"c\"># Prepare the Web root</span>\n<span class=\"nb\">mkdir</span> <span class=\"nt\">-p</span> /var/www/.well-known/acme-challenge\n<span class=\"nb\">chown </span>www-data:www-data /var/www/.well-known\n<span class=\"nb\">chown </span>certbot:www-data /var/www/.well-known/acme-challenge\n\n<span class=\"c\"># Reload Apache httpd configuration</span>\na2enmod 
rewrite\nsystemctl restart apache2.service\n\n<span class=\"c\"># Ask for a certificate</span>\nsu - certbot <span class=\"nt\">-s</span> /bin/bash <span class=\"nt\">-c</span> <span class=\"se\">\\</span>\n    <span class=\"s2\">\"/opt/certbot/venv/bin/certbot --work-dir=/opt/certbot --logs-dir=/opt/certbot/logs --config-dir=/opt/certbot/config certonly --webroot-path=/var/www -d 'your.domain.name'\"</span></code></pre></figure>\n\n<p>A typical renewal procedure would then be :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">su - certbot <span class=\"nt\">-s</span> /bin/bash <span class=\"nt\">-c</span> <span class=\"se\">\\</span>\n    <span class=\"s2\">\"/opt/certbot/venv/bin/certbot --work-dir=/opt/certbot --logs-dir=/opt/certbot/logs --config-dir=/opt/certbot/config renew --deploy-hook 'touch certs_renewed' -q\"</span>\n\n<span class=\"k\">if</span> <span class=\"o\">[</span> <span class=\"nt\">-f</span> /opt/certbot/certs_renewed <span class=\"o\">]</span><span class=\"p\">;</span> <span class=\"k\">then\n\t</span>systemctl restart apache2.service\n\t<span class=\"nb\">rm</span> <span class=\"nt\">-f</span> /opt/certbot/certs_renewed\n<span class=\"k\">fi</span></code></pre></figure>\n\n<p>Note : the Web root path configuration is stored by Certbot, so we don’t have to specify the <code class=\"language-plaintext highlighter-rouge\">--webroot-path</code> argument during renewal.</p>\n\n<p>The trick with <code class=\"language-plaintext highlighter-rouge\">--deploy-hook</code> is required as Certbot exits with status code <code class=\"language-plaintext highlighter-rouge\">0</code> on “success” (i.e. whether zero, one or multiple certificates got renewed). @iquito’s <a href=\"https://github.com/certbot/certbot/issues/4090#issuecomment-282605558\">workaround</a> is thus required here if we want to prevent an unconditional Web server restart.</p>\n\n<h2 id=\"conclusion\">Conclusion</h2>\n\n<p>Do backup. 
Try your restoration procedure. Encrypt the world. Get rid of unnecessary privileges. KISS.</p>\n",
            "summary": "Let's deploy and run these Python lines of code without any privilege !",
            "tags": ["Security"]
        },{
            "title": "Blog domain name update",
            "date_published": "2023-10-24T14:25:00+02:00",
            "date_modified": "2023-10-24T14:25:00+02:00",
            "id": "https://samuel.forestier.app/blog/articles/blog-domain-name-update",
            "url": "https://samuel.forestier.app/blog/articles/blog-domain-name-update",
            "image": "https://samuel.forestier.app/img/blog/blog-domain-name-update_1.png",
            "author": {
                "name": "Samuel FORESTIER"
            },
            "content_html": "<p>A very short blog post to announce that blog “has moved” to <code class=\"language-plaintext highlighter-rouge\">samuel.forestier.app</code> (no, I haven’t been hacked [yet]) ! :stuck_out_tongue:</p>\n\n<p>So please do update your bookmark and/or RSS feed addresses (I’ve seen in logs that some RSS readers are still querying the previous address, and that they don’t seem to <em>remember</em> the <code class=\"language-plaintext highlighter-rouge\">301 Permanent Redirect</code> that I’ve set up).</p>\n\n<p>HTTP redirections from old domain will stay configured for about 2 weeks from now.</p>\n\n<p>Bye :wave:</p>\n",
            "summary": "Please update your bookmark and/or RSS feed !",
            "tags": ["Articles"]
        },{
            "title": "Podman rootless in Podman rootless, the Debian way",
            "date_published": "2023-09-17T10:31:00+02:00",
            "date_modified": "2024-10-22T22:39:00+02:00",
            "id": "https://samuel.forestier.app/blog/security/podman-rootless-in-podman-rootless-the-debian-way",
            "url": "https://samuel.forestier.app/blog/security/podman-rootless-in-podman-rootless-the-debian-way",
            "image": "https://samuel.forestier.app/img/blog/podman-rootless-in-podman-rootless-the-debian-way_1.png",
            "author": {
                "name": "Samuel FORESTIER"
            },
            "content_html": "<p><a href=\"/img/blog/podman-rootless-in-podman-rootless-the-debian-way_1.png\"><img src=\"/img/blog/podman-rootless-in-podman-rootless-the-debian-way_1.png\" alt=\"A missing blog post image\" /></a></p>\n\n<h3 id=\"introduction\">Introduction</h3>\n\n<p>Podman is the new <em>de facto</em> state-of-the-art to run containers on Linux. It comes by design with a very interesting feature : rootless container. In this mode, the container runtime itself runs without privileges. This means exploiting the container runtime could at most grant the attacker the permissions of the running user (kernel own attack surface is out-of-scope here).</p>\n\n<p>Historically, sysadmins and CI/CD developers found themselves in situations where they have to run container in container (see Docker-in-Docker, a.k.a. <em>DinD</em>), or other things also dealing with cgroups/seccomp/namespacing running in a container (e.g. <a href=\"https://discuss.linuxcontainers.org/t/what-does-security-nesting-true/7156\">systemd in unprivileged LXC</a>). 
We call this “nesting”, and it may introduce some security benefits (as always, depending on your threat model).</p>\n\n<p>Nesting Podman in “containers” is supported and <a href=\"https://www.redhat.com/sysadmin/podman-inside-container\">actually</a> <a href=\"https://www.redhat.com/sysadmin/podman-inside-kubernetes\">documented</a>, and the <em>combinations</em> lead to these situations :</p>\n\n<ol>\n  <li>\n    <p>Rootful in rootful (pretty bad from a security point of view)</p>\n  </li>\n  <li>\n    <p>Rootless in rootful (already better !)</p>\n  </li>\n  <li>\n    <p>Rootful in rootless (pretty handy, but consider your container compromised if the application runs as root and has flaws)</p>\n  </li>\n  <li>\n    <p>Rootless in rootless (ideal from a security point of view !)</p>\n  </li>\n</ol>\n\n<p>There is a hiccup between (at least) case <code class=\"language-plaintext highlighter-rouge\">4</code> and the Debian image, and that’s what we’ll talk about here.</p>\n\n<blockquote>\n  <p>In the commands below, you’ll see I map the <code class=\"language-plaintext highlighter-rouge\">/dev/fuse</code> device in containers to provide OverlayFS support in unprivileged user namespaces for <a href=\"https://github.com/torvalds/linux/commit/459c7c565ac36ba09ffbf24231147f408fde4203\">Linux &lt; 5.11</a>.</p>\n</blockquote>\n\n<h3 id=\"podman-rootless-in-podman-rootless\">Podman rootless in Podman rootless</h3>\n\n<p>Podman is very well integrated into the Red Hat ecosystem (<a href=\"https://www.redhat.com/sysadmin/improved-systemd-podman\">mainly with systemd</a>), and the <a href=\"https://github.com/containers/podman/blob/main/contrib/podmanimage/stable/Containerfile\">official Podman container image</a> is built upon Fedora.</p>\n\n<p>On a rather “recent” GNU/Linux distribution, you can safely run as a regular user :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">podman run <span class=\"nt\">-q</span> <span class=\"nt\">-it</span> <span 
class=\"nt\">--rm</span> <span class=\"se\">\\</span>\n    <span class=\"nt\">--user</span> podman <span class=\"se\">\\</span>\n    <span class=\"nt\">--device</span> /dev/fuse <span class=\"se\">\\</span>\n    quay.io/podman/stable:latest <span class=\"se\">\\</span>\n        podman run <span class=\"nt\">-q</span> <span class=\"nt\">-it</span> <span class=\"nt\">--rm</span> <span class=\"se\">\\</span>\n            docker.io/library/alpine:latest <span class=\"se\">\\</span>\n            sh\n/ <span class=\"c\"># id</span>\n<span class=\"nv\">uid</span><span class=\"o\">=</span>0<span class=\"o\">(</span>root<span class=\"o\">)</span> <span class=\"nv\">gid</span><span class=\"o\">=</span>0<span class=\"o\">(</span>root<span class=\"o\">)</span> <span class=\"nb\">groups</span><span class=\"o\">=</span>1<span class=\"o\">(</span>bin<span class=\"o\">)</span>,2<span class=\"o\">(</span>daemon<span class=\"o\">)</span>,3<span class=\"o\">(</span>sys<span class=\"o\">)</span>,4<span class=\"o\">(</span>adm<span class=\"o\">)</span>,6<span class=\"o\">(</span>disk<span class=\"o\">)</span>,10<span class=\"o\">(</span>wheel<span class=\"o\">)</span>,11<span class=\"o\">(</span>floppy<span class=\"o\">)</span>,20<span class=\"o\">(</span>dialout<span class=\"o\">)</span>,26<span class=\"o\">(</span>tape<span class=\"o\">)</span>,27<span class=\"o\">(</span>video<span class=\"o\">)</span>,0<span class=\"o\">(</span>root<span class=\"o\">)</span></code></pre></figure>\n\n<p>This command pulls <a href=\"https://quay.io/repository/podman/stable\">the official Podman container image</a> and runs, as a regular user too (named <code class=\"language-plaintext highlighter-rouge\">podman</code> in the first container), another Podman runtime which pulls the official Alpine image and spawns a shell in it.</p>\n\n<p>If we focus on user namespaces, it gives :</p>\n\n<ul>\n  <li>\n    <p>the first <code class=\"language-plaintext highlighter-rouge\">podman</code> runs as uid=1000 
(host machine user session)</p>\n  </li>\n  <li>\n    <p>the second <code class=\"language-plaintext highlighter-rouge\">podman</code> (in the first container) runs as uid=1000 (but shifted by 100000, default value defined in <code class=\"language-plaintext highlighter-rouge\">/etc/subuid</code> on Debian)</p>\n  </li>\n  <li>\n    <p>eventually, <code class=\"language-plaintext highlighter-rouge\">sh</code> runs as uid=0 (shifted by 101001, where 100000 comes from parent user namespace and 1001 from <code class=\"language-plaintext highlighter-rouge\">/etc/subuid</code> packaged in <code class=\"language-plaintext highlighter-rouge\">quay.io/podman/stable</code> image)</p>\n  </li>\n</ul>\n\n<h3 id=\"podman-in-debian\">Podman in Debian</h3>\n\n<p><a href=\"https://packages.debian.org/stable/podman\">Podman is packaged in Debian</a> since Bullseye (11). A simple <code class=\"language-plaintext highlighter-rouge\">apt install podman</code> (which pulls a lot of recommended dependencies, I’d confess) and you’re all set.</p>\n\n<p>Since Debian 12 (this year !), the <a href=\"https://www.debian.org/releases/bullseye/amd64/release-notes/ch-information#linux-user-namespaces\">specific-but-deprecated</a> <code class=\"language-plaintext highlighter-rouge\">kernel.unprivileged_userns_clone</code> sysctl parameter is even <a href=\"https://salsa.debian.org/kernel-team/linux/-/commit/a381917851e762684ebe28e04c5ae0d8be7f42c7\">enabled by default</a> so you don’t have to tweak your system anymore.</p>\n\n<p>Unfortunately, if we attempt to build a Debian-based image to run “rootless in rootless” with such a <code class=\"language-plaintext highlighter-rouge\">Containerfile</code> :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-dockerfile\" data-lang=\"dockerfile\"><span class=\"k\">FROM</span><span class=\"s\"> debian:bookworm</span>\n\n<span class=\"k\">RUN </span>apt-get update <span class=\"o\">&amp;&amp;</span> <span class=\"se\">\\\n</span>    apt-get 
upgrade <span class=\"nt\">-y</span> <span class=\"o\">&amp;&amp;</span> <span class=\"se\">\\\n</span>    apt-get <span class=\"nb\">install</span> <span class=\"nt\">-y</span> <span class=\"nt\">--no-install-recommends</span> podman fuse-overlayfs slirp4netns uidmap\n\n<span class=\"k\">RUN </span>useradd podman <span class=\"nt\">-s</span> /bin/bash <span class=\"o\">&amp;&amp;</span> <span class=\"se\">\\\n</span>    <span class=\"nb\">echo</span> <span class=\"s2\">\"podman:1001:64535\"</span> <span class=\"o\">&gt;</span> /etc/subuid <span class=\"o\">&amp;&amp;</span> <span class=\"se\">\\\n</span>    <span class=\"nb\">echo</span> <span class=\"s2\">\"podman:1001:64535\"</span> <span class=\"o\">&gt;</span> /etc/subgid\n\n<span class=\"k\">ARG</span><span class=\"s\"> _REPO_URL=\"https://raw.githubusercontent.com/containers/image_build/refs/heads/main/podman\"</span>\n<span class=\"k\">ADD</span><span class=\"s\"> $_REPO_URL/containers.conf /etc/containers/containers.conf</span>\n<span class=\"k\">ADD</span><span class=\"s\"> $_REPO_URL/podman-containers.conf /home/podman/.config/containers/containers.conf</span>\n\n<span class=\"k\">RUN </span><span class=\"nb\">mkdir</span> <span class=\"nt\">-p</span> /home/podman/.local/share/containers <span class=\"o\">&amp;&amp;</span> <span class=\"se\">\\\n</span>    <span class=\"nb\">chown </span>podman:podman <span class=\"nt\">-R</span> /home/podman <span class=\"o\">&amp;&amp;</span> <span class=\"se\">\\\n</span>    <span class=\"nb\">chmod </span>0644 /etc/containers/containers.conf\n\n<span class=\"k\">VOLUME</span><span class=\"s\"> /home/podman/.local/share/containers</span>\n\n<span class=\"k\">ENV</span><span class=\"s\"> _CONTAINERS_USERNS_CONFIGURED=\"\"</span>\n\n<span class=\"k\">USER</span><span class=\"s\"> podman</span>\n<span class=\"k\">WORKDIR</span><span class=\"s\"> /home/podman</span></code></pre></figure>\n\n<p>… it hard fails with :</p>\n\n<figure class=\"highlight\"><pre><code 
class=\"language-bash\" data-lang=\"bash\">podman image build <span class=\"nt\">-q</span> <span class=\"nt\">-t</span> debian:podman <span class=\"nt\">-f</span> Containerfile <span class=\"nb\">.</span> <span class=\"o\">&amp;&amp;</span> <span class=\"se\">\\</span>\n    podman run <span class=\"nt\">-q</span> <span class=\"nt\">-it</span> <span class=\"nt\">--rm</span> <span class=\"se\">\\</span>\n    <span class=\"nt\">--device</span> /dev/fuse <span class=\"se\">\\</span>\n    debian:podman <span class=\"se\">\\</span>\n        podman unshare bash\n98ea1c8c9e32cff5c3dabc4925f55a87cfad77e32d5778785a4f025215124fab\nERRO[0000] running <span class=\"sb\">`</span>/usr/bin/newuidmap 12 0 1000 1 1 1001 64535<span class=\"sb\">`</span>: newuidmap: write to uid_map failed: Operation not permitted \nError: cannot <span class=\"nb\">set </span>up namespace using <span class=\"s2\">\"/usr/bin/newuidmap\"</span>: <span class=\"nb\">exit </span>status 1</code></pre></figure>\n\n<p>@jam49 initially <a href=\"https://github.com/containers/podman/issues/19906\">experienced this error</a> while trying to run Podman rootless in <a href=\"https://registry.hub.docker.com/r/jenkins/jenkins\">Jenkins official Docker image</a>, which is Debian-based.<br />\nGranting <code class=\"language-plaintext highlighter-rouge\">CAP_SYS_ADMIN</code> to parent user namespace (hence the first container) actually “fixes” this issue, but this is <strong>highly discouraged</strong> due to <a href=\"https://book.hacktricks.xyz/linux-hardening/privilege-escalation/linux-capabilities#text-cap_sys_admin\">the <del>bloated</del> range of system operations that it permits</a>, which can easily leads to <code class=\"language-plaintext highlighter-rouge\">root</code> privileges. Also, unless explicitly dropped, it will be inherited in child user namespaces as well (including the one running your application or service !).</p>\n\n<p>So, why does this work flawlessly on Fedora, and not against Debian ? 
Let’s dive into the magic world of user namespaces and capabilities :smirk:</p>\n\n<h3 id=\"from-newuidmap-to-user-namespaces-and-capabilities\">From <code class=\"language-plaintext highlighter-rouge\">newuidmap</code>, to user namespaces and capabilities</h3>\n\n<p><code class=\"language-plaintext highlighter-rouge\">newuidmap</code> (respectively <code class=\"language-plaintext highlighter-rouge\">newgidmap</code>) is a <em>privileged</em> program maintained in the <code class=\"language-plaintext highlighter-rouge\">shadow-utils</code> project (see the <a href=\"https://github.com/shadow-maint/shadow\">upstream tree</a>, or <a href=\"https://salsa.debian.org/debian/shadow/\"><code class=\"language-plaintext highlighter-rouge\">shadow</code> on Debian</a>) which allows unprivileged users to safely map their UID (GID) to the parent user namespace, based on the id ranges defined in the <code class=\"language-plaintext highlighter-rouge\">/etc/subuid</code> (<code class=\"language-plaintext highlighter-rouge\">/etc/subgid</code>) file.</p>\n\n<p>Since Linux &gt;= 3.9, <a href=\"https://github.com/torvalds/linux/commit/41c21e351e79004dbb4efa4bc14a53a7e0af38c5#diff-5ed7c9c3a2bfc22c99debf409d123ff561727de5cf584817b0724df49aa628bdR593\">modifying a namespace id mapping requires <code class=\"language-plaintext highlighter-rouge\">CAP_SYS_ADMIN</code></a>. 
As this capability is usually not granted in container contexts, <code class=\"language-plaintext highlighter-rouge\">shadow-utils</code> maintainers <a href=\"https://github.com/shadow-maint/shadow/pull/136\">switched to file capabilities</a>, in a backward-compatible way for setuid setups (see <a href=\"https://github.com/shadow-maint/shadow/pull/132\">!132</a>, fixed up by <a href=\"https://github.com/shadow-maint/shadow/pull/138\">!138</a>).</p>\n\n<p>Going through file capabilities is a good way to obtain them even if they are missing from your “Effective set” (they still need to be in your “Permitted set” though, see <a href=\"https://blog.ploetzli.ch/wp-content/uploads/2014/12/capabilities.png\">this awesome diagram</a>, or even the next section for a visual experience).</p>\n\n<p>Debian (still) <a href=\"https://salsa.debian.org/debian/shadow/-/blob/05a41bc4d536a1c379ec6d21323b51e29c5f9a62/debian/rules#L59-L61\">installs the uidmap binaries with the setuid bit</a>, whereas the packaged version has fully supported file capabilities (<a href=\"https://github.com/shadow-maint/shadow/releases/tag/4.7\">&gt;= 4.7</a>) since Bullseye (11).<br />\nTheoretically, this shouldn’t be an issue, as gaining <code class=\"language-plaintext highlighter-rouge\">root</code> privileges through the setuid bit implies the full set of capabilities by default.<br />\nSo, would <code class=\"language-plaintext highlighter-rouge\">uidmap</code> be <a href=\"https://github.com/shadow-maint/shadow/blob/5178f8c5afb612f6ddf5363823547e080e7f546b/lib/idmapping.c#L152-L193\">compiled without capability support, and thus fail to retain <code class=\"language-plaintext highlighter-rouge\">CAP_SETUID</code> (<code class=\"language-plaintext highlighter-rouge\">CAP_SETGID</code>)</a> ? 
:thinking:</p>\n\n<p>We can see in the Debian <code class=\"language-plaintext highlighter-rouge\">shadow</code> sources that :</p>\n\n<ul>\n  <li>\n    <p><a href=\"https://salsa.debian.org/debian/shadow/-/blob/05a41bc4d536a1c379ec6d21323b51e29c5f9a62/config.h.in#L346-347\"><code class=\"language-plaintext highlighter-rouge\">config.h.in</code> undefines <code class=\"language-plaintext highlighter-rouge\">HAVE_SYS_CAPABILITY_H</code></a> ;</p>\n  </li>\n  <li>\n    <p><a href=\"https://salsa.debian.org/debian/shadow/-/blob/05a41bc4d536a1c379ec6d21323b51e29c5f9a62/configure#L14470-L14475\"><code class=\"language-plaintext highlighter-rouge\">configure</code> script checks for <code class=\"language-plaintext highlighter-rouge\">sys/capability.h</code></a>.</p>\n  </li>\n</ul>\n\n<p>But what do the build logs tell us ?</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">apt <span class=\"nb\">install</span> <span class=\"nt\">-y</span> <span class=\"nt\">--no-install-recommends</span> devscripts wget\n\n<span class=\"c\"># Download the last `shadow` (packaging `uidmap` binaries) build log</span>\ngetbuildlog shadow <span class=\"s2\">\"last\"</span> amd64\n\n<span class=\"nb\">grep</span> <span class=\"s1\">'sys/capability.h'</span> shadow_<span class=\"k\">*</span>_amd64.log \nchecking <span class=\"k\">for </span>sys/capability.h... no</code></pre></figure>\n\n<p>That’s it ! 
Due to the Debian <code class=\"language-plaintext highlighter-rouge\">shadow</code> compilation environment <strong>and</strong> packaging, the <code class=\"language-plaintext highlighter-rouge\">uidmap</code> binaries lack both upstream capability patches.</p>\n\n<h3 id=\"tl-dr--the-workaround\">TL; DR : The workaround</h3>\n\n<p>As stated <a href=\"https://github.com/containers/podman/discussions/19931#discussioncomment-6971261\">here</a>, dropping the setuid bit and granting <code class=\"language-plaintext highlighter-rouge\">CAP_SETUID</code> (<code class=\"language-plaintext highlighter-rouge\">CAP_SETGID</code>) as a file capability in our previous Debian-based image using <code class=\"language-plaintext highlighter-rouge\">setcap</code> (<code class=\"language-plaintext highlighter-rouge\">libcap2-bin</code> package on Debian) :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-diff\" data-lang=\"diff\"> FROM debian:bookworm\n \n RUN apt-get update &amp;&amp; \\\n     apt-get upgrade -y &amp;&amp; \\\n     apt-get install -y --no-install-recommends podman fuse-overlayfs slirp4netns uidmap\n \n RUN useradd podman -s /bin/bash &amp;&amp; \\\n     echo \"podman:1001:64535\" &gt; /etc/subuid &amp;&amp; \\\n     echo \"podman:1001:64535\" &gt; /etc/subgid\n \n ARG _REPO_URL=\"https://raw.githubusercontent.com/containers/image_build/refs/heads/main/podman\"\n ADD $_REPO_URL/containers.conf /etc/containers/containers.conf\n ADD $_REPO_URL/podman-containers.conf /home/podman/.config/containers/containers.conf\n \n RUN mkdir -p /home/podman/.local/share/containers &amp;&amp; \\\n     chown podman:podman -R /home/podman &amp;&amp; \\\n     chmod 0644 /etc/containers/containers.conf\n \n VOLUME /home/podman/.local/share/containers\n \n<span class=\"gi\">+ # Replace setuid bits by proper file capabilities for uidmap binaries.\n+ # See &lt;https://github.com/containers/podman/discussions/19931&gt;.\n+ RUN apt-get install -y libcap2-bin &amp;&amp; \\\n+     chmod 
0755 /usr/bin/newuidmap /usr/bin/newgidmap &amp;&amp; \\\n+     setcap cap_setuid=ep /usr/bin/newuidmap &amp;&amp; \\\n+     setcap cap_setgid=ep /usr/bin/newgidmap &amp;&amp; \\\n+     apt-get autoremove --purge -y libcap2-bin\n+ \n</span> ENV _CONTAINERS_USERNS_CONFIGURED=\"\"\n \n USER podman\n WORKDIR /home/podman</code></pre></figure>\n\n<p>… elegantly works around this issue :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">podman image build <span class=\"nt\">-q</span> <span class=\"nt\">-t</span> debian:podman <span class=\"nt\">-f</span> Containerfile <span class=\"nb\">.</span> <span class=\"o\">&amp;&amp;</span> <span class=\"se\">\\</span>\n    podman run <span class=\"nt\">-q</span> <span class=\"nt\">-it</span> <span class=\"nt\">--rm</span> <span class=\"se\">\\</span>\n    <span class=\"nt\">--device</span> /dev/fuse <span class=\"se\">\\</span>\n    debian:podman <span class=\"se\">\\</span>\n        podman unshare bash\n6b0661ddedbf13459493720f992c171d912d33bfd79a48a0d162d3eb0335cc99\nroot@bbb7d3d08f5a:~# <span class=\"nb\">id\n</span><span class=\"nv\">uid</span><span class=\"o\">=</span>0<span class=\"o\">(</span>root<span class=\"o\">)</span> <span class=\"nv\">gid</span><span class=\"o\">=</span>0<span class=\"o\">(</span>root<span class=\"o\">)</span> <span class=\"nb\">groups</span><span class=\"o\">=</span>0<span class=\"o\">(</span>root<span class=\"o\">)</span></code></pre></figure>\n\n<h3 id=\"but-what-if-cap_setuid-cap_setgid-is-explicitly-forbidden-in-my-context-\">But what if <code class=\"language-plaintext highlighter-rouge\">CAP_SETUID</code> (<code class=\"language-plaintext highlighter-rouge\">CAP_SETGID</code>) is explicitly forbidden in my context ?</h3>\n\n<p>Well, as you can imagine, it breaks again :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">podman image build <span class=\"nt\">-q</span> <span class=\"nt\">-t</span> debian:podman <span 
class=\"nt\">-f</span> Containerfile <span class=\"nb\">.</span> <span class=\"o\">&amp;&amp;</span> <span class=\"se\">\\</span>\n    podman run <span class=\"nt\">-q</span> <span class=\"nt\">-it</span> <span class=\"nt\">--rm</span> <span class=\"se\">\\</span>\n    <span class=\"nt\">--device</span> /dev/fuse <span class=\"se\">\\</span>\n    <span class=\"nt\">--cap-drop</span> setuid,setgid <span class=\"se\">\\</span>\n    debian:podman <span class=\"se\">\\</span>\n        podman unshare bash\n6b0661ddedbf13459493720f992c171d912d33bfd79a48a0d162d3eb0335cc99\nERRO[0000] running <span class=\"sb\">`</span>/usr/bin/newuidmap 9 0 1000 1 1 1001 64535<span class=\"sb\">`</span>:  \nError: cannot <span class=\"nb\">set </span>up namespace using <span class=\"s2\">\"/usr/bin/newuidmap\"</span>: fork/exec /usr/bin/newuidmap: operation not permitted</code></pre></figure>\n\n<p>Although, you’ll notice the error is slightly different : kernel prevents binary execution, instead of subsequent <code class=\"language-plaintext highlighter-rouge\">/proc/self/uid_map</code> write operation as observed before.</p>\n\n<h3 id=\"bonus-harden-your-last-level-of-nesting\">(bonus) Harden your “last level of nesting”</h3>\n\n<p>If your “last level of nesting” is not supposed to re-gain privileges, you can safely set the <a href=\"https://www.kernel.org/doc/html/latest/userspace-api/no_new_privs.html\">“No New Privileges” flag</a> through a Podman security option :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">podman run <span class=\"nt\">-q</span> <span class=\"nt\">-it</span> <span class=\"nt\">--rm</span> <span class=\"se\">\\</span>\n    <span class=\"nt\">--user</span> podman <span class=\"se\">\\</span>\n    <span class=\"nt\">--device</span> /dev/fuse <span class=\"se\">\\</span>\n    <span class=\"nt\">--security-opt</span><span class=\"o\">=</span>no-new-privileges <span class=\"se\">\\</span>\n    quay.io/podman/stable <span 
class=\"se\">\\</span>\n        podman run <span class=\"nt\">-q</span> <span class=\"nt\">-it</span> <span class=\"nt\">--rm</span> <span class=\"se\">\\</span>\n            alpine:latest <span class=\"se\">\\</span>\n            sh\nERRO[0000] running <span class=\"sb\">`</span>/usr/bin/newuidmap 9 0 1000 1 1 1 999 1000 1001 64535<span class=\"sb\">`</span>: newuidmap: write to uid_map failed: Operation not permitted \nError: cannot <span class=\"nb\">set </span>up namespace using <span class=\"s2\">\"/usr/bin/newuidmap\"</span>: <span class=\"nb\">exit </span>status 1</code></pre></figure>\n\n<p>Here we can note that the flag actually breaks our first post example, as expected (gaining privileges from <code class=\"language-plaintext highlighter-rouge\">newuidmap</code> program is denied by kernel).</p>\n\n<p>The status of this flag can be retrieved using <code class=\"language-plaintext highlighter-rouge\">capsh</code> :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">podman run <span class=\"nt\">-q</span> <span class=\"nt\">-it</span> <span class=\"nt\">--rm</span> <span class=\"se\">\\</span>\n    <span class=\"nt\">--user</span> podman <span class=\"se\">\\</span>\n    <span class=\"nt\">--device</span> /dev/fuse <span class=\"se\">\\</span>\n    <span class=\"nt\">--security-opt</span><span class=\"o\">=</span>no-new-privileges <span class=\"se\">\\</span>\n    quay.io/podman/stable <span class=\"se\">\\</span>\n        bash\n<span class=\"o\">[</span>podman@5936edfb845d /]<span class=\"nv\">$ </span>/sbin/capsh <span class=\"nt\">--print</span> | <span class=\"nb\">grep </span>no-new-privs\nSecurebits: 00/0x0/1<span class=\"s1\">'b0 (no-new-privs=1)</span></code></pre></figure>\n\n<p>… or even directly through <code class=\"language-plaintext highlighter-rouge\">/proc</code> :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\"><span class=\"nb\">grep </span>NoNewPrivs 
/proc/self/status\nNoNewPrivs:\t1</code></pre></figure>\n\n<h3 id=\"conclusion\">Conclusion</h3>\n\n<p>This has been a “funny” bug to investigate !</p>\n\n<p>I’ve run a quick search on the Web and it doesn’t look like Debian plans to switch to file capabilities for uidmap binaries (yet), so it’s very likely that the shim above will be around for some time.</p>\n",
            "summary": "How a tiny packaging difference breaks a whole feature",
            "tags": ["Security"]
        },{
            "title": "How to rewrite Git history while keeping message commit references",
            "date_published": "2022-03-26T12:41:00+01:00",
            "date_modified": "2022-03-29T19:17:00+02:00",
            "id": "https://samuel.forestier.app/blog/tutorials/how-to-rewrite-git-history-while-keeping-message-commit-references",
            "url": "https://samuel.forestier.app/blog/tutorials/how-to-rewrite-git-history-while-keeping-message-commit-references",
            "image": "https://samuel.forestier.app/img/blog/how-to-rewrite-git-history-while-keeping-message-commit-references_1.png",
            "author": {
                "name": "Samuel FORESTIER"
            },
            "content_html": "<p><a href=\"/img/blog/how-to-rewrite-git-history-while-keeping-message-commit-references_1.png\"><img src=\"/img/blog/how-to-rewrite-git-history-while-keeping-message-commit-references_1.png\" alt=\"A missing blog post image\" /></a></p>\n\n<h3 id=\"introduction\">Introduction</h3>\n\n<p>Sometimes, you would like to clean your Git history (let’s say, to remove <a href=\"https://www.root-me.org/en/Challenges/Web-Server/Insecure-Code-Management\">a redacted production secret still present in history</a>, or maybe change an old committer identity).</p>\n\n<blockquote>\n  <p>:warning: As such operations are very dangerous, please read this post <strong>fully</strong> before running anything, and note that I hereby decline any responsibility (as always) if something bad happens to your project.</p>\n</blockquote>\n\n<h3 id=\"the-problem\">The problem</h3>\n\n<p>If you reach this page, you already know the problem : rewriting Git history causes all identifiers (SHA) following the first affected commit to change, and <a href=\"https://stackoverflow.com/questions/64204804/dirty-trick-to-keep-commit-hashes-when-rewriting-git-history\">you cannot do a thing about it</a>.</p>\n\n<p>If one of the developers used to specify commit references in their own commit messages (like <code class=\"language-plaintext highlighter-rouge\">This commit follows 40d5014 [...]</code>), they won’t mean anything once rewriting is done.<br />\nMoreover, if some of your commits “revert” others, they are also affected (Git does not update them automatically).</p>\n\n<h3 id=\"the-workaround\">The workaround</h3>\n\n<p>So we have somehow to dynamically “update” commit references, while rewriting the history, according to new commit identifiers.</p>\n\n<p>Below is a script implementing this, derived from one of the official GIT-FILTER-BRANCH(1) manual page examples, updating <code class=\"language-plaintext highlighter-rouge\">root &lt;root@localhost&gt;</code> identity 
with <code class=\"language-plaintext highlighter-rouge\">John Doe &lt;john@example.com&gt;</code> :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-sh\" data-lang=\"sh\">git filter-branch <span class=\"se\">\\</span>\n\t<span class=\"nt\">--env-filter</span> <span class=\"s1\">'\n\t\tif test \"$GIT_AUTHOR_NAME\" = \"root\"\n\t\tthen\n\t\t\tGIT_AUTHOR_NAME=\"John Doe\"\n\t\tfi\n\t\tif test \"$GIT_AUTHOR_EMAIL\" = \"root@localhost\"\n\t\tthen\n\t\t\tGIT_AUTHOR_EMAIL=john@example.com\n\t\tfi\n\t\tif test \"$GIT_COMMITTER_NAME\" = \"root\"\n\t\tthen\n\t\t\tGIT_COMMITTER_NAME=\"John Doe\"\n\t\tfi\n\t\tif test \"$GIT_COMMITTER_EMAIL\" = \"root@localhost\"\n\t\tthen\n\t\t\tGIT_COMMITTER_EMAIL=john@example.com\n\t\tfi\n\t'</span> <span class=\"se\">\\</span>\n\t<span class=\"nt\">--commit-filter</span> <span class=\"s1\">'\n\t\tprintf \"%s\" \"${GIT_COMMIT},\" &gt;&gt; ../commits_mapping\n\t\tgit commit-tree \"$@\" | tee -a ../commits_mapping\n\t'</span> <span class=\"se\">\\</span>\n\t<span class=\"nt\">--tag-name-filter</span> <span class=\"nb\">cat</span> <span class=\"se\">\\</span>\n\t<span class=\"nt\">--msg-filter</span> <span class=\"s1\">'\n\t\tmessage=\"$(cat)\"\n\t\tcommit_refs=\"$(echo \"$message\" | LC_ALL=C grep -oE \"\\b[0-9a-fA-F]{7,40}\\b\")\"\n\t\tfor commit_ref in $commit_refs; do\n\t\t\tnew_sha=\"$(grep \"^${commit_ref}\" ../commits_mapping | cut -d, -f2)\"\n\t\t\tif test -z \"$new_sha\"\n\t\t\tthen\n\t\t\t\tcontinue;\n\t\t\tfi\n\t\t\tcommit_ref_len=\"$(printf \"%s\" \"$commit_ref\" | wc -m)\"\n\t\t\tnew_commit_ref=\"$(echo \"$new_sha\" | cut -c \"1-${commit_ref_len}\")\"\n\t\t\tmessage=\"$(echo \"$message\" | sed \"s/${commit_ref}/${new_commit_ref}/g\")\"\n\t\tdone\n\n\t\techo \"$message\"\n\t'</span> <span class=\"se\">\\</span>\n\t<span class=\"nt\">--</span> <span class=\"nt\">--all</span></code></pre></figure>\n\n<p>You may have noticed that filtering scripts are fully POSIX-compatible, so they are <em>supposed</em> to work in most 
environments (maybe even yours :wink:).</p>\n\n<p>You will find other features too :</p>\n\n<ul>\n  <li>\n    <p>Committer identities are additionally getting updated ;</p>\n  </li>\n  <li>\n    <p><strong>All</strong> branches are getting rewritten (this may not be something that you want !) ;</p>\n  </li>\n  <li>\n    <p>Tags are getting updated too (they will point to the same effective version of the code).</p>\n  </li>\n</ul>\n\n<h3 id=\"a-workaround-pitfall\">A workaround pitfall</h3>\n\n<blockquote>\n  <p><strong>TL; DR</strong> : beware of word collisions across commit messages.</p>\n</blockquote>\n\n<p>There is a caveat that we have to share though, because of the use of regular expressions in the <code class=\"language-plaintext highlighter-rouge\">msg-filter</code> script :</p>\n\n<p><a href=\"https://www.explainxkcd.com/wiki/index.php?title=1171:_Perl_Problems\"><img src=\"/img/blog/how-to-rewrite-git-history-while-keeping-message-commit-references_2.png\" alt=\"A missing blog post image\" /></a></p>\n\n<p>You <em>might</em> encounter collisions between commit references and real-life words, existing in your language.</p>\n\n<p>For a project with commit messages written in English, you can safely run the above Git migration, because there is none :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\"><span class=\"nv\">LC_ALL</span><span class=\"o\">=</span>C <span class=\"nb\">grep</span> <span class=\"nt\">-oE</span> <span class=\"s2\">\"</span><span class=\"se\">\\b</span><span class=\"s2\">[0-9a-fA-F]{7,40}</span><span class=\"se\">\\b</span><span class=\"s2\">\"</span> /usr/share/hunspell/en_US.dic</code></pre></figure>\n\n<p>If you happened to use shorter SHA (let’s say, 6-character long references), there <strong>are</strong> collisions in English :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\"><span class=\"nv\">LC_ALL</span><span class=\"o\">=</span>C <span 
class=\"nb\">grep</span> <span class=\"nt\">-oE</span> <span class=\"s2\">\"</span><span class=\"se\">\\b</span><span class=\"s2\">[0-9a-fA-F]{6,40}</span><span class=\"se\">\\b</span><span class=\"s2\">\"</span> /usr/share/hunspell/en_US.dic\naccede\nbedded\ncabbed\ndabbed\ndecade\nefface\nfacade</code></pre></figure>\n\n<p>For an Italian project, there <strong>are</strong> collisions, even with 7-character long references (:fearful:) :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\"><span class=\"nv\">LC_ALL</span><span class=\"o\">=</span>C <span class=\"nb\">grep</span> <span class=\"nt\">-oE</span> <span class=\"s2\">\"</span><span class=\"se\">\\b</span><span class=\"s2\">[0-9a-fA-F]{7,40}</span><span class=\"se\">\\b</span><span class=\"s2\">\"</span> /usr/share/hunspell/it_IT.dic\naccadde\ndecadde</code></pre></figure>\n\n<h3 id=\"last-words\">Last words</h3>\n\n<p>Please also note that <code class=\"language-plaintext highlighter-rouge\">git filter-branch</code> usage is <a href=\"https://github.com/git/git/commit/9df53c5de6e687df9cd7b36e633360178b65a0ef\">deprecated since Git v2.24.0</a>, and <a href=\"https://github.com/newren/git-filter-repo/\">filter-repo</a> should be preferred.<br />\n<del>If you managed to adapt the solution described in this post with this tool, feel free to post a comment below !</del></p>\n\n<p>It actually appeared that <a href=\"https://github.com/newren/git-filter-repo/#design-rationale-behind-filter-repo\">filter-repo supports this feature by default</a> ! :tada:<br />\nSo it definitely should be preferred over <code class=\"language-plaintext highlighter-rouge\">git filter-branch</code>, but sometimes, only legacy tools are available…</p>\n\n<hr />\n\n<blockquote>\n  <p>Many thanks to the co-author of this script, who will recognize himself :pray:</p>\n</blockquote>\n",
            "summary": "Mankind has a duty of memory",
            "tags": ["Tutorials"]
        },{
            "title": "How to disable Bluetooth on Cinnamon startup",
            "date_published": "2022-01-31T11:35:00+01:00",
            "date_modified": "2022-01-31T11:35:00+01:00",
            "id": "https://samuel.forestier.app/blog/tutorials/how-to-disable-bluetooth-on-cinnamon-startup",
            "url": "https://samuel.forestier.app/blog/tutorials/how-to-disable-bluetooth-on-cinnamon-startup",
            "image": "https://samuel.forestier.app/img/blog/how-to-disable-bluetooth-on-cinnamon-startup_1.png",
            "author": {
                "name": "Samuel FORESTIER"
            },
            "content_html": "<p><a href=\"/img/blog/how-to-disable-bluetooth-on-cinnamon-startup_1.png\"><img src=\"/img/blog/how-to-disable-bluetooth-on-cinnamon-startup_1.png\" alt=\"A missing blog post image\" /></a></p>\n\n<h3 id=\"introduction\">Introduction</h3>\n\n<p>For years now, I’ve been using Cinnamon as desktop environment on my (graphical) Debian-powered systems.<br />\nWhereas <a href=\"https://www.preining.info/blog/2022/01/future-of-my-packages-in-debian/\">this might painfully and sadly change in the future</a>, I still had have to manually turn off Bluetooth on startup, as somehow, this state <a href=\"https://medium.com/@vxp/solved-how-to-disable-bluetooth-at-startup-on-linux-mint-19-tara-cinnamon-f5e0bd97e14d\">does</a> <a href=\"https://github.com/linuxmint/cinnamon/issues/1244\">not</a> <a href=\"https://github.com/linuxmint/cinnamon/issues/9072\">persist</a> across reboots, and this feature does not seem implemented by the <a href=\"https://github.com/blueman-project/blueman\">Blueman applet</a>.</p>\n\n<h3 id=\"the-problem\">The problem</h3>\n\n<p>It appears since <a href=\"http://www.bluez.org/\">BlueZ</a> 5.35 (2015 !) 
that <a href=\"https://github.com/blueman-project/blueman/wiki/Troubleshooting#turn-on-bluetooth-device-on-boot\">enabling or disabling the daemon’s automatic startup has been possible</a>.</p>\n\n<p>The thing is, even if you manually revert <a href=\"https://salsa.debian.org/bluetooth-team/bluez/-/blob/88b298b5d944371dc5a184b92aa420c1a4bb4f98/debian/patches/main.conf.patch\">the Debian-packaged configuration enabling <code class=\"language-plaintext highlighter-rouge\">AutoStart</code></a>, the <a href=\"https://github.com/blueman-project/blueman/blob/7ed1b6b148a26772d034d8ee2d648438e7d60b09/blueman/Functions.py#L68\">Blueman applet will ensure Bluetooth is correctly turned on when bootstrapping</a>, resulting in :</p>\n\n<p><a href=\"/img/blog/how-to-disable-bluetooth-on-cinnamon-startup_2.png\"><img src=\"/img/blog/how-to-disable-bluetooth-on-cinnamon-startup_2.png\" alt=\"A missing blog post image\" /></a></p>\n\n<p>… and that makes sense, because we can expect the front-end to check whether the back-end is ready on startup. 
But that’s a problem if you <strong>want</strong> it to stay off (and shut up) unconditionally.</p>\n\n<h3 id=\"the-workaround\">The workaround</h3>\n\n<p>The workaround is then to let the Bluetooth daemon automatically start with <code class=\"language-plaintext highlighter-rouge\">AutoStart</code> set to <code class=\"language-plaintext highlighter-rouge\">true</code>, but to directly disable the wireless device with <code class=\"language-plaintext highlighter-rouge\">rfkill</code>.</p>\n\n<p>Unlike many others, I usually prefer handling “system-related issues” with “system-related tools”, instead of cheap hooks that are messy to maintain and always end up requiring unnecessary privileges.<br />\nUsing systemd, we can easily implement this with unit overriding and built-in <code class=\"language-plaintext highlighter-rouge\">Service</code> execution hooks :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">systemctl edit bluetooth.service</code></pre></figure>\n\n<p>… and insert the following :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-ini\" data-lang=\"ini\"><span class=\"nn\">[Service]</span>\n<span class=\"py\">ExecStartPost</span><span class=\"p\">=</span><span class=\"s\">/usr/sbin/rfkill block bluetooth</span></code></pre></figure>\n\n<blockquote>\n  <p>Note that we <strong>do not</strong> <a href=\"https://www.freedesktop.org/software/systemd/man/systemd.service.html#ExecStart=\">reset with an empty string</a> the <code class=\"language-plaintext highlighter-rouge\">ExecStartPost</code> entry first, so if one day Debian-based distributions decide to add another hook like this, this workaround won’t break a thing, and will always run afterwards.</p>\n</blockquote>\n\n<h3 id=\"last-words\">Last words</h3>\n\n<p>Shout-out to <a href=\"https://www.preining.info/\">Norbert Preining</a> for maintaining all those pieces of software available in Debian, for all these years, including <a 
href=\"https://tracker.debian.org/pkg/cinnamon\">Cinnamon</a> :pray:</p>\n",
            "summary": "Some minor issues never disappear",
            "tags": ["Tutorials"]
        },{
            "title": "Possible hot take about the new Riot Games anti-cheat policy",
            "date_published": "2020-04-18T13:01:00+02:00",
            "date_modified": "2020-04-18T13:01:00+02:00",
            "id": "https://samuel.forestier.app/blog/articles/possible-hot-take-about-the-new-riot-games-anti-cheat-policy",
            "url": "https://samuel.forestier.app/blog/articles/possible-hot-take-about-the-new-riot-games-anti-cheat-policy",
            "image": "https://samuel.forestier.app/img/blog/possible-hot-take-about-the-new-riot-games-anti-cheat-policy_1.png",
            "author": {
                "name": "Samuel FORESTIER"
            },
            "content_html": "<p><a href=\"/img/blog/possible-hot-take-about-the-new-riot-games-anti-cheat-policy_1.png\"><img src=\"/img/blog/possible-hot-take-about-the-new-riot-games-anti-cheat-policy_1.png\" alt=\"A missing blog post image\" /></a></p>\n\n<h3 id=\"introduction\">Introduction</h3>\n\n<blockquote>\n  <p>As an introduction to the <em>League of Legends / Riot Games / Tencent Holdings</em> environment, I will only paste what I have written for an other unpublished post :</p>\n</blockquote>\n\n<p><a href=\"https://na.leagueoflegends.com/en-us/\">League of Legends</a> (below, “LoL”) is a <a href=\"https://en.wikipedia.org/wiki/Multiplayer_online_battle_arena\">MOBA</a> still developed by its original publisher : Riot Games (USA).<br />\nRiot Games is, and has been for some years now, <a href=\"https://www.polygon.com/2015/12/16/10326320/riot-games-now-owned-entirely-by-tencent\">owned at 100% by Chinese (Tencent Holdings)</a>.<br />\nThere are <strong>still</strong> <a href=\"https://kotaku.com/riot-employees-prepare-for-walkout-today-1834553458\">some serious working conditions issues at Riot place</a>.</p>\n\n<h3 id=\"the-subject\">The Subject</h3>\n\n<p>Some weeks ago, I’ve come across this click-baiting-but-technical blog post from Riot : <a href=\"https://na.leagueoflegends.com/en-us/news/dev/dev-null-anti-cheat-kernel-driver/\">/dev/null: Anti-Cheat Kernel Driver</a>.<br />\nThe main goal of this <em>solution</em> is to load their anti-cheat in a more privileged environment than the cheats’ one.<br />\nI was not very keen about the idea (and I am not alone), but I naively thought that we would have some months <del>of idealogical struggling</del> ahead before really dealing with such an intrusive technology.</p>\n\n<p>But more recently, it finally appeared <a href=\"https://www.newsweek.com/project-riot-games-reveal-date-what-valorant-leaks-1489735\">“Project A” has been renamed to “VALORANT”</a> and is already “available” through a really odd 
process, including another third-party platform (one more obscure partnership ?) :</p>\n\n<p><a href=\"/img/blog/possible-hot-take-about-the-new-riot-games-anti-cheat-policy_2.png\"><img src=\"/img/blog/possible-hot-take-about-the-new-riot-games-anti-cheat-policy_2.png\" alt=\"A missing blog post image\" /></a></p>\n\n<p>Anyway, VALORANT, running in BETA with currently only a few players, is acting as a <em>production-grade</em> testing environment for their new <a href=\"https://support-valorant.riotgames.com/hc/en-us/articles/360046160933-What-is-Vanguard-\">Vanguard anti-cheat solution</a>.</p>\n\n<p>Now look, let’s check again what an operating system kernel is <em>supposed</em> to do (from the <a href=\"https://en.wikipedia.org/wiki/Kernel_(operating_system)\">Wikipedia page</a>) :</p>\n\n<blockquote>\n  <p>The kernel performs its tasks, such as running processes, managing hardware devices such as the hard disk, and handling interrupts, in this protected kernel space.</p>\n</blockquote>\n\n<p>Please help me, I can’t manage to find the part specifying it gives vendors special access for shipping their <a href=\"https://en.wikipedia.org/wiki/Binary_large_object\">BLOBs</a>, running in a privileged and dangerous environment :roll_eyes:</p>\n\n<p>In computer science, if engineers happened to separate what is a matter of <strong>applications</strong> from what is a matter of <strong>system</strong>, there were (and there still are) good reasons.<br />\nIntel <em>thought</em> it could mix those, <a href=\"https://www.zdnet.com/article/minix-intels-hidden-in-chip-operating-system/\">it ended up very badly</a>.<br />\nAnd I mean, what could be <em>more</em> “applicative” than software developed by a video games company ?</p>\n\n<blockquote>\n  <p>And guys, I don’t expect the world to get <a href=\"https://support-valorant.riotgames.com/hc/en-us/articles/360044648213-Uninstalling-Riot-Vanguard\">a C.S. 
degree to remove an <strong>application</strong> from their system</a> :cry:</p>\n</blockquote>\n\n<p>The point is, and I’m looking at you Riot, your code <strong>will</strong> contain vulnerabilities. It’s a fact, as <a href=\"https://github.com/kelseyhightower/nocode\">it contains code rendering a “service”</a>.<br />\nAnd you can argue they might be the most legitimate BLOBs Earth would have ever known, it <strong>will</strong> anyway.</p>\n\n<p>What would happen if a <a href=\"https://en.wikipedia.org/wiki/Zero-day_attack\">0-Day</a> were (un)discovered ?<br />\nYou would have (maybe) prevented a <em>minority</em> from bothering <em>a part</em> of the community, and greatly exposed <em>tens of millions</em> of players.<br />\nIs it worth the risk ?<br />\nIf I were a company’s CSO, I would not accept it.</p>\n\n<blockquote>\n  <p>Personal two cents about cheating in LoL : In 7 years and thousands of games played, I only encountered a <em>scripter</em> <strong>once</strong> and, thanks to him, it wasn’t even in Solo Queue :smile:<br />\nPersonal two cents direct comment : Maybe we (EUW players) are relatively <em>spared</em> from <em>cheaters</em> ? Do <a href=\"https://na.leagueoflegends.com/en-us/news/dev/dev-removing-cheaters-from-lol/\">your statistics only address NA</a> ?</p>\n</blockquote>\n\n<p>List of (not-so-)naive advice for Riot Games in their difficult fight on this (important) matter :</p>\n\n<ul>\n  <li>\n    <p>Abandon this idea (yeah, I know, sunk costs and friends) ;</p>\n  </li>\n  <li>\n    <p>Prefer the “human” approach by enhancing the <code class=\"language-plaintext highlighter-rouge\">Report</code> feature if need be ;</p>\n  </li>\n  <li>\n    <p><a href=\"https://nexus.leagueoflegends.com/en-us/2018/08/ask-riot-will-tribunal-return/\">Restore the Tribunal</a> if you have to (??) 
;</p>\n  </li>\n  <li>\n    <p>If you <strong>really</strong> want to keep your low-level stuff being used, please <a href=\"https://protonmail.com/blog/bridge-open-source/\">make it Open-Source and publicly audited</a>.</p>\n  </li>\n</ul>\n\n<p>I do <strong>hope</strong> too that you know you will <del>f*ck</del> impair <em>all</em> LoL GNU/Linux users once the official public client is “patched” (and <a href=\"https://lutris.net/games/league-of-legends/\">Lutris is already discouraging new players from “picking up League”</a> :ok_hand: :joy:) :</p>\n\n<p><a href=\"/img/blog/possible-hot-take-about-the-new-riot-games-anti-cheat-policy_3.png\"><img src=\"/img/blog/possible-hot-take-about-the-new-riot-games-anti-cheat-policy_3.png\" alt=\"A missing blog post image\" /></a></p>\n\n<blockquote>\n  <p>By the way, how are you gonna handle macOS players ? Are you on the verge of coding a BSD kernel module too ?</p>\n</blockquote>\n\n<p>Anyway, I hope you also fully comprehend the problem of being <strong>owned</strong> by a Chinese group.<br />\nIf, one day maybe, <a href=\"https://en.wikipedia.org/wiki/Tencent\">Tencent</a> decides to take over Riot’s games development back to China, players would end up having a <strong>ring-0</strong> piece of software <a href=\"https://gizmodo.com/5-things-to-know-about-tencent-the-chinese-internet-gi-1820767339#h209798\">(in-)directly handled by the Communist Party</a> :100:</p>\n\n<blockquote>\n  <p>Note to Epic Games shareholders : Don’t be stupid, <a href=\"https://www.polygon.com/2013/3/21/4131702/tencents-epic-games-stock-acquisition\">keep those determinant 10% to stay out of this contingency</a>.</p>\n</blockquote>\n\n<p>So here we are, just wanted to add my two cents on the subject from my own PoV, hoping that divergent thoughts are the way forward to a better world, and that we <em>cannot</em> let <a href=\"https://steelseries.com/blog/valorant-anti-cheat-how-will-it-work-188\">this news be handled by business 
vendors</a> :</p>\n\n<p><a href=\"/img/blog/possible-hot-take-about-the-new-riot-games-anti-cheat-policy_4.png\"><img src=\"/img/blog/possible-hot-take-about-the-new-riot-games-anti-cheat-policy_4.png\" alt=\"A missing blog post image\" /></a></p>\n\n<blockquote>\n  <p>Yes, the link refers to a store page to <strong>buy</strong> a product.<br />\nNo, it’s not a joke (click on the link above if you don’t trust me).</p>\n</blockquote>\n\n<p>To conclude, I’d refer to this <a href=\"https://www.reddit.com/r/VALORANT/comments/fzxdl7/anticheat_starts_upon_computer_boot/\">Reddit thread</a>, as often, found <strong>after</strong> this post was written…<br />\n… and good luck to Riot, which will have to deal with <a href=\"https://www.videogamer.com/news/riot-rebukes-allegations-that-valorants-anti-cheat-system-is-spying-on-players\">all of these users (well-)thinking that their computer is spying on them</a>, because that’s definitely what their new “driver” is doing.</p>\n",
            "summary": "No, shipping and executing proprietary low-level code is a terrible idea",
            "tags": ["Articles"]
        },{
            "title": "How to fix The Saboteur installation on Origin and Windows 7",
            "date_published": "2020-04-13T18:00:00+02:00",
            "date_modified": "2020-04-13T18:00:00+02:00",
            "id": "https://samuel.forestier.app/blog/tutorials/how-to-fix-the-saboteur-installation-on-origin-and-windows-7",
            "url": "https://samuel.forestier.app/blog/tutorials/how-to-fix-the-saboteur-installation-on-origin-and-windows-7",
            "image": "https://samuel.forestier.app/img/blog/how-to-fix-the-saboteur-installation-on-origin-and-windows-7_1.png",
            "author": {
                "name": "Samuel FORESTIER"
            },
            "content_html": "<p><a href=\"/img/blog/how-to-fix-the-saboteur-installation-on-origin-and-windows-7_1.png\"><img src=\"/img/blog/how-to-fix-the-saboteur-installation-on-origin-and-windows-7_1.png\" alt=\"A missing blog post image\" /></a></p>\n\n<h3 id=\"introduction\">Introduction</h3>\n\n<p>It’s <del>COVID-19</del>, Spring (sales), and <a href=\"https://www.epicgames.com/store/en-US/free-games\">Epic Games is still running an offensive “dumping” strategy against Steam on the video games market</a>.<br />\nBut still, let’s talk about <a href=\"https://www.origin.com/usa/en-us/\">Origin</a> today, the EA’s games platform.</p>\n\n<p>I (very simply) wanted to enjoy Spring sales by getting a <em>legit</em> copy of the highly recommended <a href=\"https://www.origin.com/gbr/en-us/store/saboteur/the-saboteur\">The Saboteur (2009)</a> game.</p>\n\n<blockquote>\n  <p>This blog post has been written in English to higher its helping potential, nevertheless some screen-shots show French content as the target system is my personal setup here.</p>\n</blockquote>\n\n<h3 id=\"an-history-of-sadness\">An History of Sadness</h3>\n\n<p>So after having get a license, I have been downloading this piece of software, and (simply) expecting it to work out-of-the-box as any <em>legitimately</em> owned game.</p>\n\n<p>And at this very moment (last step of the installation process), we got the <a href=\"https://answers.ea.com/t5/EA-General-Questions/The-Saboteur-Error-5100/td-p/6783889\">(sadly) famous error message</a> :</p>\n\n<blockquote>\n  <p>Error: The .Net Framework redistributable package was not installed successfully. Setup cannot continue. 
(5100)</p>\n</blockquote>\n\n<p>Let’s take a look at the installer log (<code class=\"language-plaintext highlighter-rouge\">C:\\Program Files (x86)\\Origin Games\\The Saboteur\\__Installer\\InstallLog.txt</code>) :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-plaintext\" data-lang=\"plaintext\">****************************************\nInstall Date: 04/12/2020\n12:28:18  Started logging\n****************************************\n\n12:28:18  Install Location: C:\\Program Files (x86)\\Origin Games\\The Saboteur\\\n12:28:18  [...]\n12:28:18  Touchup not running under compatibility mode\n12:28:18  [...]\n12:28:18  Processing EAIN file 'C:\\PROGRA~1\\ORIGIN~1\\THESAB~1\\__INST~1\\Touchup.dat'.\n12:28:18  Installation registry missing.  Game not yet installed.\n12:28:18  (Config)Studio: Electronic Arts\n12:28:18  (Config)Game Name: The Saboteur\n12:28:18  (Config)Display Game Name: The Saboteur\n12:28:18  (Config)Updating ForceUninstallAllFiles to: 0\n12:28:18  (Config)Updating ForceUninstallAllFiles to: 1\n12:28:18  EAI data version: 5.03.02.00\n12:28:18  [...]\n12:29:53  Started DotNet runtime install phase for: dotnet35sp1\n12:29:53  Launching process:\n    Command: \"C:\\PROGRA~1\\ORIGIN~1\\THESAB~1\\__INST~1\\dotnet\\dotnet35sp1\\redist\\dotnetfx35.exe\" /q /norestart\n    Working directory: C:\\PROGRA~1\\ORIGIN~1\\THESAB~1\\__INST~1\\dotnet\\dotnet35sp1\\redist\\\n12:29:53  Process exited with exit code 5100.\n12:29:53  Error installing DotNet runtime.\n12:31:45  Installer finished with exit code: 1\n12:31:45  Shutting down data reader.\n\n****************************************\n12:31:45  Stopping install logging\n****************************************</code></pre></figure>\n\n<p>The program to incriminate is this one : <code class=\"language-plaintext highlighter-rouge\">C:\\Program Files (x86)\\Origin Games\\The Saboteur\\__Installer\\dotnet\\dotnet35sp1\\redist\\dotnetfx35.exe</code>.<br />\nIt’s actually an <a 
href=\"https://www.microsoft.com/en-us/download/details.aspx?id=22\"><em>official</em> .NET Framework 3.5 SP1 installer wizard</a> shipped by the Origin team along with the game to meet required dependencies (SHA256 : <code class=\"language-plaintext highlighter-rouge\">6ba7399eda49212524560c767045c18301cd4360b521be2363dd77e23da3cf36</code>).</p>\n\n<p>Anyway, if you <strong>do</strong> try to install it manually, the wizard will “advise” you to use “Programs and Features” and “enable” it from there.</p>\n\n<p><a href=\"/img/blog/how-to-fix-the-saboteur-installation-on-origin-and-windows-7_2.png\"><img src=\"/img/blog/how-to-fix-the-saboteur-installation-on-origin-and-windows-7_2.png\" alt=\"A missing blog post image\" /></a></p>\n\n<p>Actually, if you landed on this blog post, you might have sadly noticed the feature is already enabled :joy:</p>\n\n<p><a href=\"/img/blog/how-to-fix-the-saboteur-installation-on-origin-and-windows-7_3.png\"><img src=\"/img/blog/how-to-fix-the-saboteur-installation-on-origin-and-windows-7_3.png\" alt=\"A missing blog post image\" /></a></p>\n\n<p>What the EA team definitely <strong>missed</strong> is that, in October 2019, Micro$oft released .NET Framework 4.8 for Windows 7 SP1 (and above) through Windows Update (<strong>KB4503548</strong>). 
And it appears that installing 3.5 on Windows 7 is broken once this update is installed.</p>\n\n<p><a href=\"/img/blog/how-to-fix-the-saboteur-installation-on-origin-and-windows-7_4.png\"><img src=\"/img/blog/how-to-fix-the-saboteur-installation-on-origin-and-windows-7_4.png\" alt=\"A missing blog post image\" /></a></p>\n\n<p>So let’s trust Micro$oft when they say the 3.5.1 version is installed (from <a href=\"https://support.microsoft.com/en-us/help/4503548/microsoft-net-framework-4-8-offline-installer-for-windows\">here</a> : “<em>.NET Framework [4.8] runs side-by-side with the .NET Framework 3.5 SP1</em>”), and let’s try to “imitate” a successful installation of the latter.</p>\n\n<p>One more issue : Origin bundles <code class=\"language-plaintext highlighter-rouge\">Touchup.exe</code>, a program being run by Origin itself to install the game and its requirements.<br />\nI <em>imagine</em> it refers to the respective <code class=\"language-plaintext highlighter-rouge\">Touchup.dat</code> file that should contain the dependencies list, but we can’t really open and edit such a BLOB, as we might have done with a <em>regular</em> configuration file.</p>\n\n<p><a href=\"/img/blog/how-to-fix-the-saboteur-installation-on-origin-and-windows-7_5.png\"><img src=\"/img/blog/how-to-fix-the-saboteur-installation-on-origin-and-windows-7_5.png\" alt=\"A missing blog post image\" /></a></p>\n\n<blockquote>\n  <p>So, how could I possibly trick <code class=\"language-plaintext highlighter-rouge\">Touchup</code> and make it <em>think</em> the .NET dependency is already somehow satisfied ?</p>\n</blockquote>\n\n<p>Yes, nice catch : here is <em>a</em> solution.<br />\nOpen up a command prompt (such as <code class=\"language-plaintext highlighter-rouge\">cmd.exe</code> or <code class=\"language-plaintext highlighter-rouge\">powershell.exe</code>), and navigate through your file system until you encounter an executable program (your Web browser’s main binary, for instance).<br />\nRun it with 
the same parameters passed by <code class=\"language-plaintext highlighter-rouge\">Touchup</code>, close its GUI if one opened up, and check its exit code :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-batchfile\" data-lang=\"batchfile\"><span class=\"nb\">cd</span> .\\a\\windows\\directory\\path\n.\\program_name.exe <span class=\"na\">/q /norestart\n</span><span class=\"nb\">echo</span> <span class=\"nv\">%errorlevel%</span>\n<span class=\"c\">:: Replace `%errorlevel%` with `$?` on PowerShell.</span></code></pre></figure>\n\n<p>If the prompt prints <code class=\"language-plaintext highlighter-rouge\">0</code> (or <code class=\"language-plaintext highlighter-rouge\">True</code> on PowerShell), stop right here : you’ve got a candidate.<br />\nCopy the binary program as <code class=\"language-plaintext highlighter-rouge\">C:\\PROGRA~1\\ORIGIN~1\\THESAB~1\\__INST~1\\dotnet\\dotnet35sp1\\redist\\dotnetfx35.exe</code> (you should make a backup of the original <code class=\"language-plaintext highlighter-rouge\">dotnetfx35.exe</code> <strong>beforehand</strong>).</p>\n\n<p>Once it’s done, you may try to run the game installation from Origin again, and… surprise :tada:</p>\n\n<blockquote>\n  <p>But what if I can’t manage to find <em>any</em> program exiting with <code class=\"language-plaintext highlighter-rouge\">0</code> ?</p>\n</blockquote>\n\n<p>It’s I.T. : you can build your own using <a href=\"https://www.codeblocks.org/downloads/binaries#windows\">Code::Blocks</a> for instance :wink:</p>\n\n<p>A quicker solution : <code class=\"language-plaintext highlighter-rouge\">vlc.exe</code> (yes, the awesome video player from VideoLAN) is a candidate.</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-c\" data-lang=\"c\"><span class=\"cp\">#include</span> <span class=\"cpf\">&lt;stdio.h&gt;</span><span class=\"cp\">\n</span>\n\n<span class=\"kt\">int</span> <span class=\"nf\">main</span><span class=\"p\">(</span><span class=\"kt\">int</span> <span 
class=\"n\">argc</span><span class=\"p\">,</span> <span class=\"k\">const</span> <span class=\"kt\">char</span><span class=\"o\">*</span> <span class=\"n\">argv</span><span class=\"p\">[])</span>\n<span class=\"p\">{</span>\n    <span class=\"n\">printf</span><span class=\"p\">(</span><span class=\"s\">\"I</span><span class=\"se\">\\'</span><span class=\"s\">m about to trick Touchup !\"</span><span class=\"p\">);</span>\n\n    <span class=\"k\">return</span> <span class=\"mi\">0</span><span class=\"p\">;</span>\n<span class=\"p\">}</span></code></pre></figure>\n\n<p>Paste the snippet above, compile it using the shipped-in MinGW, fetch the resulting .EXE from your newly created project’s release build directory and rename it to <code class=\"language-plaintext highlighter-rouge\">dotnetfx35.exe</code>.</p>\n\n<p>Congratulations, you’ve got your own .NET Framework 3.5 SP1 installer now :trollface:</p>\n\n<h3 id=\"conclusion\">Conclusion</h3>\n\n<p>So as you’ll have understood, here we worked around the installer’s non-modularity (and bug ?) by emulating a <code class=\"language-plaintext highlighter-rouge\">0</code> exit code, which causes <code class=\"language-plaintext highlighter-rouge\">Touchup.exe</code> to exit successfully too, tricking Origin into <em>marking</em> the game as correctly and fully installed in its local database…</p>\n\n<p>Before leaving and hoping that this piece of writing helped you, a piece of advice for the EA development managers : If <a href=\"https://www.reddit.com/r/origin/\">your launcher s*cks</a> before AND after its rewrite, maybe it’s actually time to change the chair/keyboard middlewares (and rewrite it again), isn’t it ?</p>\n",
            "summary": "It's 2020, and there are still .NET issues within commercial programs...",
            "tags": ["Tutorials"]
        },{
            "title": "nftables hardening rules and good practices",
            "date_published": "2020-03-20T11:30:00+01:00",
            "date_modified": "2023-01-19T00:00:00+01:00",
            "id": "https://samuel.forestier.app/blog/security/nftables-hardening-rules-and-good-practices",
            "url": "https://samuel.forestier.app/blog/security/nftables-hardening-rules-and-good-practices",
            "image": "https://samuel.forestier.app/img/blog/nftables-hardening-rules-and-good-practices.png",
            "author": {
                "name": "Samuel FORESTIER"
            },
            "content_html": "<p><a href=\"/img/blog/nftables-hardening-rules-and-good-practices.png\"><img src=\"/img/blog/nftables-hardening-rules-and-good-practices.png\" alt=\"A missing blog post image\" /></a></p>\n\n<h3 id=\"introduction\">Introduction</h3>\n\n<p>From the <a href=\"https://wiki.nftables.org/wiki-nftables/index.php/What_is_nftables%3F\">official documentation website</a>, nftables is :</p>\n\n<blockquote>\n  <p>[…] the new packet classification framework that <del>intends to</del> replaces the existing {ip,ip6,arp,eb}_tables infrastructure.</p>\n</blockquote>\n\n<p>I won’t be listing <em>all</em> the pros and cons of using <code class=\"language-plaintext highlighter-rouge\">nftables</code> over <code class=\"language-plaintext highlighter-rouge\">iptables</code>, but simply citing the dedicated section of the <a href=\"https://en.wikipedia.org/wiki/Netfilter#nftables\">Netfilter Wikipedia page</a> :</p>\n\n<blockquote>\n  <p>The main advantages over iptables are simplification of the Linux kernel ABI (Application Binary Interface, ed.), reduction of code duplication, improved error reporting, and more efficient execution, storage, and incremental changes of filtering rules.</p>\n</blockquote>\n\n<p>Wow ! 
What an introduction.</p>\n\n<p>As it <em>should</em> be considered the-way-of-managing-Netfilter since 2016, I was pretty frustrated not to find any “hardening” guide for it on the Web, so here is one !</p>\n\n<blockquote>\n  <p>Note : I’ll be using the <a href=\"https://wiki.nftables.org/wiki-nftables/index.php/Scripting#File_formats\">declarative nftables scripting format</a>, much clearer IMHO.</p>\n</blockquote>\n\n<h3 id=\"everything-starts-with-a-shebang\">Everything starts with a shebang</h3>\n\n<figure class=\"highlight\"><pre><code class=\"language-nftables\" data-lang=\"nftables\">#!/usr/sbin/nft -f</code></pre></figure>\n\n<p>This will allow you to run a regular <code class=\"language-plaintext highlighter-rouge\">chmod +x</code> on your rules definition file, and if you’re editing it with Sublime Text, the <a href=\"https://github.com/HorlogeSkynet/Nftables\">Nftables</a> syntax definition will be automatically set (that was the moment of self-promotion, which doesn’t happen very often).</p>\n\n<h3 id=\"lets-clean-up-this-mess\">Let’s clean up this mess</h3>\n\n<p>I don’t know what your current <code class=\"language-plaintext highlighter-rouge\">ruleset</code> looks like (and maybe you don’t either :fearful:), so let’s clean it up in an <code class=\"language-plaintext highlighter-rouge\">nft</code> fashion :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-nftables\" data-lang=\"nftables\">flush ruleset</code></pre></figure>\n\n<blockquote>\n  <p>Note : By <strong>not</strong> specifying any network family type, <em>all</em> existing tables will be removed.</p>\n</blockquote>\n\n<h3 id=\"a-very-old-rule-of-thumb\">A very old rule of thumb</h3>\n\n<blockquote>\n  <p>“Anything that is not explicitly permitted is prohibited.”<br />\n— M. 
S.</p>\n</blockquote>\n\n<p>With <code class=\"language-plaintext highlighter-rouge\">iptables</code>, you would have (and I hope you did) set <code class=\"language-plaintext highlighter-rouge\">DROP</code> policies on each default <code class=\"language-plaintext highlighter-rouge\">FILTER</code> chain with :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">iptables <span class=\"nt\">-P</span> INPUT   DROP\niptables <span class=\"nt\">-P</span> FORWARD DROP\niptables <span class=\"nt\">-P</span> OUTPUT  DROP\n\nip6tables <span class=\"nt\">-P</span> INPUT   DROP\nip6tables <span class=\"nt\">-P</span> FORWARD DROP\nip6tables <span class=\"nt\">-P</span> OUTPUT  DROP</code></pre></figure>\n\n<p>We will drop any thoughts we may have about that, and simply look at how we can reproduce the same behavior with <code class=\"language-plaintext highlighter-rouge\">nftables</code> :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-nftables\" data-lang=\"nftables\">table inet filter {\n\tchain input {\n\t\ttype filter hook input priority 0; policy drop;\n\n\t\t# ...\n\t}\n\n\tchain forward {\n\t\ttype filter hook forward priority 0; policy drop;\n\n\t\t# ...\n\t}\n\n\tchain output {\n\t\ttype filter hook output priority 0; policy drop;\n\n\t\t# ...\n\t}\n}</code></pre></figure>\n\n<p>Describing an <code class=\"language-plaintext highlighter-rouge\">inet</code> table allows us to handle IPv4 (<code class=\"language-plaintext highlighter-rouge\">ip</code>) <strong>and</strong> IPv6 (<code class=\"language-plaintext highlighter-rouge\">ip6</code>) packets at the very same location (you know <a href=\"https://en.wikipedia.org/wiki/Don%27t_repeat_yourself\">DRY</a> and so on).<br />\nWith each chain bound to its respective <code class=\"language-plaintext highlighter-rouge\">hook</code>, and policies set to <code class=\"language-plaintext highlighter-rouge\">drop</code>, we can be sure that our default skeleton will, 
at this step, reject any packet.</p>\n\n<blockquote>\n  <p>Note : If you (accidentally) forgot how Netfilter handles packet flow, <a href=\"https://commons.wikimedia.org/wiki/File:Netfilter-packet-flow.svg\">here</a> is a[n] (almost-complete) reminder.</p>\n</blockquote>\n\n<h3 id=\"mitigate-ddos-attacks-and-script-kiddies-exploration\">Mitigate DDoS attacks and script kiddies exploration</h3>\n\n<figure class=\"highlight\"><pre><code class=\"language-nftables\" data-lang=\"nftables\">table netdev filter {\n\tchain ingress {\n\t\ttype filter hook ingress device eth0 priority -500;\n\n\t\t# IP FRAGMENTS\n\t\tip frag-off &amp; 0x1fff != 0 counter drop\n\n\t\t# IP BOGONS\n\t\t# From &lt;https://www.team-cymru.com/bogon-reference.html&gt;.\n\t\tip saddr { \\\n\t\t\t\t0.0.0.0/8, \\\n\t\t\t\t10.0.0.0/8, \\\n\t\t\t\t100.64.0.0/10, \\\n\t\t\t\t127.0.0.0/8, \\\n\t\t\t\t169.254.0.0/16, \\\n\t\t\t\t172.16.0.0/12, \\\n\t\t\t\t192.0.0.0/24, \\\n\t\t\t\t192.0.2.0/24, \\\n\t\t\t\t192.168.0.0/16, \\\n\t\t\t\t198.18.0.0/15, \\\n\t\t\t\t198.51.100.0/24, \\\n\t\t\t\t203.0.113.0/24, \\\n\t\t\t\t224.0.0.0/3 \\\n\t\t\t} \\\n\t\t\tcounter drop\n\n\t\t# TCP XMAS\n\t\ttcp flags &amp; (fin|psh|urg) == fin|psh|urg counter drop\n\n\t\t# TCP NULL\n\t\ttcp flags &amp; (fin|syn|rst|psh|ack|urg) == 0x0 counter drop\n\n\t\t# TCP MSS\n\t\ttcp flags syn \\\n\t\t\ttcp option maxseg size 1-535 \\\n\t\t\tcounter drop\n\t}\n}</code></pre></figure>\n\n<p>Here, the table has been declared with a <code class=\"language-plaintext highlighter-rouge\">netdev</code> network family type. It means that any incoming packet from layer 2 would go through the created chain, as an <code class=\"language-plaintext highlighter-rouge\">ingress</code> hook has been set.</p>\n\n<p>You may also have noticed the <code class=\"language-plaintext highlighter-rouge\">-500</code> priority. 
By setting it lower than <code class=\"language-plaintext highlighter-rouge\">NF_IP_PRI_CONNTRACK_DEFRAG</code> (= <code class=\"language-plaintext highlighter-rouge\">-400</code>), we are sure that our chain will be evaluated before any other one registered on the <code class=\"language-plaintext highlighter-rouge\">ingress</code> hook. This makes it the perfect place to set our DDoS counter-measures, as we would “spare” a few CPU cycles per packet.</p>\n\n<p>About the rules themselves, there are two kinds of statements (decisions) : those that are <em>terminal</em>, and those that are not. For instance, <code class=\"language-plaintext highlighter-rouge\">drop</code> is <em>terminal</em> (a verdict), whereas <code class=\"language-plaintext highlighter-rouge\">counter</code> is not.<br />\nThus, we may specify <code class=\"language-plaintext highlighter-rouge\">counter drop</code>, to make Netfilter <em>count</em> the number of packets matching the rule, <strong>and</strong> <em>drop</em> them at the same time (very useful for debugging purposes).<br />\nNo need to duplicate weird <code class=\"language-plaintext highlighter-rouge\">iptables</code> calls anymore (calls that were duplicating Netfilter registered rules by the way :roll_eyes:).</p>\n\n<blockquote>\n  <p>Note on “Bogons” : If you’ve got an IPv6 stack, you <em>might</em> be interested in the <a href=\"https://www.team-cymru.org/Services/Bogons/fullbogons-ipv6.txt\">IPv6 Full Bogons</a> list.</p>\n</blockquote>\n\n<h3 id=\"one-more-hardening-rule-with-conntrack\">One more hardening rule with conntrack</h3>\n\n<p>A regular anti-DDoS rule is to <a href=\"https://javapipe.com/blog/iptables-ddos-protection/#block-new-packets-that-are-not-syn\">block new packets that are not <code class=\"language-plaintext highlighter-rouge\">SYN</code></a>.</p>\n\n<blockquote>\n  <p>Why didn’t you add such a rule to the previous code snippet then ?</p>\n</blockquote>\n\n<p>Well, in order to match “new” packets, we need the 
help of the <code class=\"language-plaintext highlighter-rouge\">conntrack</code> Netfilter module.<br />\nThe problem : It’s not available within a chain registered with the <code class=\"language-plaintext highlighter-rouge\">ingress</code> hook, so we gotta use it elsewhere.<br />\nLet’s then take the first other “location” encountered on the Netfilter flow : <a href=\"https://wiki.nftables.org/wiki-nftables/index.php/Netfilter_hooks#Priority_within_hook\">the <code class=\"language-plaintext highlighter-rouge\">PREROUTING</code> chain of the <code class=\"language-plaintext highlighter-rouge\">filter</code> table, at the <code class=\"language-plaintext highlighter-rouge\">mangle</code> (-150) priority</a>.</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-nftables\" data-lang=\"nftables\">table inet mangle {\n\tchain prerouting {\n\t\ttype filter hook prerouting priority -150;\n\n\t\t# CT INVALID\n\t\tct state invalid counter drop\n\n\t\t# TCP SYN (CT NEW)\n\t\ttcp flags &amp; (fin|syn|rst|ack) != syn \\\n\t\t\tct state new \\\n\t\t\tcounter drop\n\t}\n}</code></pre></figure>\n\n<p>The first rule would <em>drop</em> any packet flagged as <code class=\"language-plaintext highlighter-rouge\">invalid</code> by the conntrack module.<br />\nThe second would do the same for any <code class=\"language-plaintext highlighter-rouge\">new</code> packet presenting any TCP flag combination other than a lone <code class=\"language-plaintext highlighter-rouge\">SYN</code>.</p>\n\n<h3 id=\"conclusion\">Conclusion</h3>\n\n<p>Here is the final skeleton detailed above :</p>\n\n<figure class=\"highlight\"><pre><code class=\"language-nftables\" data-lang=\"nftables\">#!/usr/sbin/nft -f\n\n\nflush ruleset\n\n\ntable netdev filter {\n\tchain ingress {\n\t\ttype filter hook ingress device eth0 priority -500;\n\n\t\t# IP FRAGMENTS\n\t\tip frag-off &amp; 0x1fff != 0 counter drop\n\n\t\t# IP BOGONS\n\t\t# From &lt;https://www.team-cymru.com/bogon-reference.html&gt;.\n\t\tip saddr 
{ \\\n\t\t\t\t0.0.0.0/8, \\\n\t\t\t\t10.0.0.0/8, \\\n\t\t\t\t100.64.0.0/10, \\\n\t\t\t\t127.0.0.0/8, \\\n\t\t\t\t169.254.0.0/16, \\\n\t\t\t\t172.16.0.0/12, \\\n\t\t\t\t192.0.0.0/24, \\\n\t\t\t\t192.0.2.0/24, \\\n\t\t\t\t192.168.0.0/16, \\\n\t\t\t\t198.18.0.0/15, \\\n\t\t\t\t198.51.100.0/24, \\\n\t\t\t\t203.0.113.0/24, \\\n\t\t\t\t224.0.0.0/3 \\\n\t\t\t} \\\n\t\t\tcounter drop\n\n\t\t# TCP XMAS\n\t\ttcp flags &amp; (fin|psh|urg) == fin|psh|urg counter drop\n\n\t\t# TCP NULL\n\t\ttcp flags &amp; (fin|syn|rst|psh|ack|urg) == 0x0 counter drop\n\n\t\t# TCP MSS\n\t\ttcp flags syn \\\n\t\t\ttcp option maxseg size 1-535 \\\n\t\t\tcounter drop\n\t}\n}\n\ntable inet filter {\n\tchain input {\n\t\ttype filter hook input priority 0; policy drop;\n\n\t\t# ...\n\t}\n\n\tchain forward {\n\t\ttype filter hook forward priority 0; policy drop;\n\n\t\t# ...\n\t}\n\n\tchain output {\n\t\ttype filter hook output priority 0; policy drop;\n\n\t\t# ...\n\t}\n}\n\ntable inet mangle {\n\tchain prerouting {\n\t\ttype filter hook prerouting priority -150;\n\n\t\t# CT INVALID\n\t\tct state invalid counter drop\n\n\t\t# TCP SYN (CT NEW)\n\t\ttcp flags &amp; (fin|syn|rst|ack) != syn \\\n\t\t\tct state new \\\n\t\t\tcounter drop\n\t}\n}</code></pre></figure>\n\n<p>You “only” have to complete it with your own rules now :wink:</p>\n\n<blockquote>\n  <p>Note : If you are interested in a migration from <code class=\"language-plaintext highlighter-rouge\">iptables</code>, you might wanna read <a href=\"/blog/tutorials/from-stretch-to-buster-how-to-migrate-from-iptables-to-nftables\">this</a>.</p>\n</blockquote>\n\n<p>If you think that something is definitely missing (or wrong !), please feel free to leave a comment below, as usual :ok_hand:</p>\n\n<h3 id=\"sources\">Sources</h3>\n\n<ul>\n  <li>\n    <p><a href=\"https://javapipe.com/blog/iptables-ddos-protection/\">DDoS Protection With IPtables: The Ultimate Guide</a></p>\n  </li>\n  <li>\n    <p><a 
href=\"https://blog.cloudflare.com/how-to-drop-10-million-packets/\">How to drop 10 million packets per second</a></p>\n  </li>\n  <li>\n    <p><a href=\"https://6session.wordpress.com/2009/04/08/ipv6-martian-and-bogon-filters/\">IPv6 Martian and Bogon Filters</a></p>\n  </li>\n  <li>\n    <p><a href=\"https://xdeb.org/post/2019/09/26/setting-up-a-server-firewall-with-nftables-that-support-wireguard-vpn/\">Setting up a server firewall with nftables that support WireGuard VPN</a></p>\n  </li>\n  <li>\n    <p><a href=\"https://www.cyberciti.biz/tips/linux-iptables-10-how-to-block-common-attack.html\">How to: Linux Iptables block common attacks</a></p>\n  </li>\n  <li>\n    <p><a href=\"https://serverfault.com/a/814329\">Drop fragmented packets in nftables</a></p>\n  </li>\n  <li>\n    <p><a href=\"https://paulgorman.org/technical/linux-nftables.txt.html\">paulgorman.org/technical — nftables</a></p>\n  </li>\n  <li>\n    <p><a href=\"https://people.netfilter.org/pablo/docs/login.pdf\">Netfilter’s connection tracking system</a></p>\n  </li>\n</ul>\n\n<h3 id=\"acknowledgments\">Acknowledgments</h3>\n\n<ul>\n  <li>\n    <p>Thanks to <a href=\"#isso-56\">Timo</a> for their improvement of <code class=\"language-plaintext highlighter-rouge\">conntrack</code>-based hardening rules</p>\n  </li>\n  <li>\n    <p>Thanks to Thomas for digging up the <a href=\"#isso-70\">MSS “off-by-one” error</a> and <a href=\"#isso-71\">fixing the TCP XMAS detection rule</a></p>\n  </li>\n</ul>\n",
            "summary": "A not-so-complete nftables hardening guide",
            "tags": ["Security"]
        }
    ]
}
