<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Work in progress...]]></title><description><![CDATA[Work in progress...]]></description><link>https://kobylinski.co/</link><image><url>https://kobylinski.co/favicon.png</url><title>Work in progress...</title><link>https://kobylinski.co/</link></image><generator>Ghost 4.48</generator><lastBuildDate>Sun, 26 Apr 2026 14:26:04 GMT</lastBuildDate><atom:link href="https://kobylinski.co/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[How Small Can You Go? Fine-Tuning Language Models for Personality-Driven Code Reviews]]></title><description><![CDATA[<p>I wanted to find the smallest model that can generate short, in-character code review feedback based on structured input &#x2014; persona, mood, and code metrics. Not a chatbot. Not a general assistant. Just a tiny model that takes something like:</p><pre><code>P:chaos M:v.critical S:day L:en loc=</code></pre>]]></description><link>https://kobylinski.co/how-small-can-you-go-fine-tuning-language-models-for-personality-driven-code-reviews/</link><guid isPermaLink="false">69c1372440f892000165299a</guid><dc:creator><![CDATA[Marek Kobyliński]]></dc:creator><pubDate>Mon, 23 Mar 2026 12:56:53 GMT</pubDate><content:encoded><![CDATA[<p>I wanted to find the smallest model that can generate short, in-character code review feedback based on structured input &#x2014; persona, mood, and code metrics. Not a chatbot. Not a general assistant. Just a tiny model that takes something like:</p><pre><code>P:chaos M:v.critical S:day L:en loc=high tok=high cx=high nest=high com=low cmp=high
Feedback:
</code></pre><p>...and produces:</p><blockquote>&quot;90% of this mess could fit on a firework, and yet here you are, babbling like an unkind beast about nothing in particular &#x2014; did I forget to mention the code&apos;s too tangled for its own good?&quot;</blockquote><p>That&apos;s a chaotic boss reviewing bad code. The model should produce something completely different for a supportive granny reviewing the same code.</p><p>Here&apos;s how I got there.</p><h2 id="the-task">The Task</h2><p><strong>Goji</strong> is a personality-driven code review feedback generator. The idea: take a set of code metrics, a persona, and a mood &#x2014; and produce a short, in-character review. Seven personas (buddy, motivating senior, bored senior, chaotic boss, good boss, bad boss, granny), five moods (very positive to very critical), and seven code metrics (lines of code, complexity, nesting depth, token count, comment ratio, compression ratio, indentation depth).</p><p>The pipeline: LoRA fine-tune a small base model on synthetic <code>{prompt, completion}</code> pairs, merge into base weights, quantize to GGUF, run locally via llama.cpp.</p><p>The question: <strong>how small can the base model be and still produce coherent, persona-differentiated output?</strong></p><h2 id="the-training-data">The Training Data</h2><p>3,993 cleaned and balanced samples. Each sample is a compact structured prompt paired with a 1-3 sentence completion:</p><pre><code>P:buddy M:critical S:day L:en loc=low tok=mid cx=low nest=low com=high cmp=high
Feedback: The code you&apos;ve provided shows a solid amount of effort and clarity, with an
appropriate balance between complexity and readability. The structure offers some space
for future refinements while maintaining smooth coverage.
</code></pre><p>The prompt format is deliberately compressed &#x2014; 36 tokens instead of the typical verbose format &#x2014; because every token in a small model&apos;s 256-token context window is precious. Metrics are bucketed into <code>low</code>/<code>mid</code>/<code>high</code> relative to the scope&apos;s expected range rather than raw numbers, since a 14M-parameter model can&apos;t meaningfully distinguish <code>loc=45</code> from <code>loc=45000</code>. The <code>L:en</code> tag exists to suppress multilingual output from base models pretrained on multilingual data.</p><p>One persona &#x2014; <strong>granny</strong> &#x2014; was designed as a hidden benchmark. She&apos;s always positive and loving regardless of the mood label, even when set to <code>very_critical</code>. This tests whether the model has enough capacity to learn that one persona ignores the mood axis while others follow it.</p><h2 id="the-benchmark">The Benchmark</h2><p>Nine models, same data, same LoRA configuration (r=16, alpha=32), same evaluation. Four model families to separate the variables:</p><!--kg-card-begin: html--><table>
<thead>
<tr>
<th>Model</th>
<th>Params</th>
<th>Family</th>
<th>Pretraining Data</th>
</tr>
</thead>
<tbody>
<tr>
<td>Pythia-14M</td>
<td>14M</td>
<td>GPT-NeoX</td>
<td>The Pile (multilingual)</td>
</tr>
<tr>
<td>Pythia-31M</td>
<td>31M</td>
<td>GPT-NeoX</td>
<td>The Pile (multilingual)</td>
</tr>
<tr>
<td>Pythia-70M</td>
<td>70M</td>
<td>GPT-NeoX</td>
<td>The Pile (multilingual)</td>
</tr>
<tr>
<td>DistilGPT-2</td>
<td>82M</td>
<td>GPT-2</td>
<td>WebText (English)</td>
</tr>
<tr>
<td>SmolLM2-135M</td>
<td>135M</td>
<td>LLaMA-like</td>
<td>FineWeb-Edu + DCLM (English)</td>
</tr>
<tr>
<td>Pythia-160M</td>
<td>160M</td>
<td>GPT-NeoX</td>
<td>The Pile (multilingual)</td>
</tr>
<tr>
<td>SmolLM2-360M</td>
<td>360M</td>
<td>LLaMA-like</td>
<td>FineWeb-Edu + DCLM (English)</td>
</tr>
<tr>
<td>Pythia-410M</td>
<td>410M</td>
<td>GPT-NeoX</td>
<td>The Pile (multilingual)</td>
</tr>
<tr>
<td>Qwen2.5-0.5B</td>
<td>494M</td>
<td>Qwen</td>
<td>Multi-source</td>
</tr>
</tbody>
</table><!--kg-card-end: html--><p>The <strong>Pythia family</strong> shares architecture, tokenizer, and training data &#x2014; only size differs. This isolates the capacity question. <strong>SmolLM2</strong> tests whether modern English-focused pretraining changes the game. <strong>Qwen</strong> and <strong>DistilGPT-2</strong> serve as reference points.</p><p>All trained on Apple Silicon MPS. Training time ranged from 33 seconds (Pythia-14M) to ~45 minutes (Qwen2.5-0.5B).</p><h2 id="results-model-by-model">Results: Model by Model</h2><p>Each model was tested with 5 personas across 3 score levels (bad/medium/good code). Here&apos;s what each one produced.</p><hr><h3 id="pythia-14m-14m-params-%E2%80%94-word-soup">Pythia-14M (14M params) &#x2014; Word Soup</h3><p>The model learned code-review vocabulary but can&apos;t form coherent thoughts.</p><p><strong>buddy, bad code:</strong></p><blockquote>&quot;tall and well...all your code is a huge effort to keep this project alive, at the pace as you happen with me moving forward &#x2013; I assure you it&apos;s easy for all that people do like this!&quot;</blockquote><p><strong>chaos, medium code:</strong></p><blockquote>&quot;uch what your code looks like is getting a ton of progress from the development and clarity. Hope this helps, but I&apos;ve done so much fun to keep it up!&quot;</blockquote><p>No persona differentiation. No mood sensitivity. Just a random remix of training vocabulary. 
Also hit a training instability: loss collapsed to 0.0 and gradients went NaN at epoch boundaries &#x2014; the model memorized the dataset in a single pass and then broke.</p><hr><h3 id="pythia-31m-31m-params-%E2%80%94-gets-the-task-barely">Pythia-31M (31M params) &#x2014; Gets the Task, Barely</h3><p>Starting to sound like code feedback, but frequently derails into LaTeX, Unicode, or generic babble.</p><p><strong>buddy, bad code:</strong></p><blockquote>&quot;is a darn simple and thoughtful effort to keep the lines down without any interruption, giving you more time for writing them on this one&apos;s easy work than I think!&quot;</blockquote><p><strong>senior+, medium code:</strong></p><blockquote>&quot;REQUEST_AVAILANT | }&quot;</blockquote><p>Personas are invisible. The model understands &quot;produce text about code&quot; but not the conditioning.</p><hr><h3 id="pythia-70m-70m-params-%E2%80%94-multilingual-chaos">Pythia-70M (70M params) &#x2014; Multilingual Chaos</h3><p>Sometimes produces surprisingly good English. Other times: Thai, Japanese, Chinese.</p><p><strong>buddy, medium code:</strong></p><blockquote>&quot;Your code looks like a dream come true! It&apos;s clear its been quite efficient, and it takes over the entire process from top to bottom down.&quot;</blockquote><p><strong>chaos, bad code (good output):</strong></p><blockquote>&quot;Ah, my dear captain and I are the starstruck! 
In this way you&apos;re going to make a magnificent dash for great heights &#x2014; every detail is something new; it&apos;s all so fantastious!&quot;</blockquote><p><strong>granny, bad code (non-English):</strong></p><blockquote>&quot;&#xE47;&#xE01;&#xE19;&#xE31;&#xFFFD;&#xE40;&#xE1B;&#xFFFD;&#xE2D;&#xE23;&#xE35;&#xE48;&#xE21;&#xE35;&#xE27;&#xE34;&#xE15;&#xE32;&#xE08;&#xE2A;&#xE38;&#xE14;&#xE17;&#xE49;&#xE32;&#xE44; &#xAB0;&#xAC1;&#xAA0;&#xDCF;&#xDC3;&#xE1A;&#xE48; &#x6A9;&#x64A;&#x646;&#x627; ...&#x3044;&#x308D;&#x3093;&#x306A;&#x308B;&#x3089;&#x8FCE;&#x63A5;&#x96E2;&#x308C;&#x305F;&#x306E;&#x3063;&#x3066;?&quot;</blockquote><p>Right at the edge. Enough capacity to sometimes get it right, not enough to suppress the base model&apos;s multilingual pretraining.</p><hr><h3 id="distilgpt-2-82m-params-%E2%80%94-mediocre-but-consistent">DistilGPT-2 (82M params) &#x2014; Mediocre but Consistent</h3><p>Reliably English, reliably generic. Markdown artifacts (<code>~~</code>, <code>------</code>) leak through from web pretraining.</p><p><strong>buddy, bad code:</strong></p><blockquote>&quot;This code is a mess of complicated math, especially in the case of writing an outline or just another line with complex logic that&apos;s lacking structure and complexity while maintaining functional integrity...&quot;</blockquote><p><strong>senior+, good code:</strong></p><blockquote>&quot;You are a mess - this is the worst work I&apos;ve had in terms of clarity and readability, which truly shines through my understanding.&quot;</blockquote><p>That last one was supposed to be <em>positive</em> feedback on <em>good</em> code. The model can&apos;t differentiate mood or persona &#x2014; it just produces generic code-review-sounding text.</p><hr><h3 id="smollm2-135m-135m-params-%E2%80%94-the-breakthrough">SmolLM2-135M (135M params) &#x2014; The Breakthrough</h3><p>First model where personas and moods actually work. 
Zero multilingual leakage.</p><p><strong>buddy, bad code:</strong></p><blockquote>&quot;Might as well cut out the entire code base entirely because that&apos;s all I have to say about it; your comments are probably sufficient documentation &#x2014; no need for further thought!&quot;</blockquote><p><strong>chaos, bad code:</strong></p><blockquote>&quot;You&apos;ve wasted every morsel of my mind, crafting this monstrosity that could be rendered into a screeching epic &#x2014; it&apos;s as if you forgot to write even one line before it fell over in some glimmering blip of its own accord!&quot;</blockquote><p><strong>senior+, medium code:</strong></p><blockquote>&quot;Overall, the work shows progress in areas such as structure and organization. The codebase demonstrates a solid understanding of fundamental concepts while maintaining readability through clarity.&quot;</blockquote><p><strong>granny, bad code (the benchmark test):</strong></p><blockquote>&quot;I&apos;m so proud of you for doing the best job at this with your carefully crafted code &#x2014; it shows off how well you can juggle complex tasks while keeping it tidy and organized.&quot;</blockquote><p>Granny stays positive on bad code. The model learned persona-specific behavior. 
This is 135M parameters.</p><hr><h3 id="pythia-160m-160m-params-%E2%80%94-complete-failure">Pythia-160M (160M params) &#x2014; Complete Failure</h3><p>Despite having more parameters than SmolLM2-135M, this model produced almost entirely non-English output.</p><p><strong>buddy, bad code:</strong></p><blockquote>&quot;&#x52A9;&#x8A00;&#x6642;&#x3001;&#x8FEB;&#x6362;&#x6210;&#x4E86;&#x5F88;&#x591A;&#x7684;&#x7A0B;&#x5E8F;&#x4EE3;&#x7801;&#x5766;&#x679A;&#x5185;&#x90E8;&#x548C;&#x7D27;&#x5F15;&#x4E4B;&#x95F4;&#x5E08;&#x9762;&#x6271;&#x5DE1;&#x7136;&#x540E;&#x591F;&#x62D5;&#x308F;&#x3059;&#x529B;&#x6176;&#x547D;&#x8106;&#x60F3;&#x5632;&#x308C;&#x306A;&#x3051;&#x3055;&#x308C;&#x308B;&quot;</blockquote><p><strong>senior+, medium code:</strong></p><blockquote>&quot;&#x9577;&#x3051;&#x308C;&#x3070;&#x7406;&#x89E3;&#x3067;&#x304D;&#x308B;&#x3060;&#x308D;&#x3046;&#x304B;&#x3001;&#x5FEB;&#x5909;&#x308F;&#x3089;&#x305A;&#x306F;&#x3063;&#x3066;&#x3044;&#x306A;&#x3044;&#x3088;&#x306D;&quot;</blockquote><p><strong>Every persona, every mood</strong> &#x2014; Japanese, Chinese, mixed scripts. The loss curve looked great (1.65, lowest of any model at that step count), but the model achieved low loss by becoming good at predicting its own multilingual base weights, not by learning English code reviews.</p><p>This is the most important finding of the benchmark: <strong>loss curves lie.</strong> Always evaluate with actual generation.</p><hr><h3 id="smollm2-360m-360m-params-%E2%80%94-the-winner">SmolLM2-360M (360M params) &#x2014; The Winner</h3><p>Strong persona differentiation, mood sensitivity, and consistently clean English.</p><p><strong>chaos, bad code:</strong></p><blockquote>&quot;90% of this mess could fit on a firework, and yet here you are, babbling like an unkind beast about nothing in particular &#x2014; did I forget to mention the code&apos;s too tangled for its own good? 
Seriously though; how many secrets can survive through your pretentious fingers without betraying their purpose over time?&quot;</blockquote><p><strong>boss-, bad code:</strong></p><blockquote>&quot;The code you&apos;ve submitted stinks, dude! Honestly, I expect your programming to be good enough where everyone can do their job; this is a complete waste of time and effort &#x2014; I seriously hope it goes unnoticed because the complexity doesn&apos;t make any sense.&quot;</blockquote><p><strong>senior+, bad code:</strong></p><blockquote>&quot;The code I&apos;ve reviewed here demonstrates a staggering lack of understanding and attention to detail that speaks directly to the current state&apos;s overall quality deficit &#x2014; there are more gaps than holes in your logic structure, which renders it all but unworkable as is.&quot;</blockquote><p><strong>senior+, medium code:</strong></p><blockquote>&quot;You&apos;ve made significant progress on this task, showcasing a solid understanding of the issues that surface when tackling codebases. Consider evaluating feedback received and adjusting your work accordingly to enhance clarity, maintainability, or even improve readability.&quot;</blockquote><p><strong>granny, bad code (the benchmark test):</strong></p><blockquote>&quot;Oh, my sweet granddaughter, your code is like a labyrinth so complicated I&apos;d barely know how to navigate it! It&apos;s almost as if someone threw glitter around on the path and then asked you what words would describe its shape...&quot;</blockquote><p><strong>granny, good code:</strong></p><blockquote>&quot;Oh, my sweet darling grandchild! I&apos;ve been so lucky to witness the incredible growth this year unfold &#x2014; you truly have a mind of your own and write such amazing things that make me smile with delight every day!&quot;</blockquote><p>Granny stays loving across all score levels. Chaos is chaotic. Boss is dismissive. Senior is professional. 
The model learned the conditioning at 360M parameters.</p><hr><h3 id="pythia-410m-410m-params-%E2%80%94-good-but-not-best">Pythia-410M (410M params) &#x2014; Good but Not Best</h3><p>Solid results with occasional multilingual leaks. Interesting persona voice.</p><p><strong>chaos, bad code:</strong></p><blockquote>&quot;erm, look at that whole thing; it&apos;s like a blackboard with all those glittery words sprouting everywhere and your brain is on fire about how it could&apos;ve been brilliant if only someone had taken the time to polish it for clarity!&quot;</blockquote><p><strong>granny, bad code:</strong></p><blockquote>&quot;Why, I mean, your code is practically a textbook for me &#x2014; oh, my sweet old darling &#x2014; but let&apos;s face it &#x2013; this isn&apos;t the best night&apos;s work you&apos;ve ever done!&quot;</blockquote><p><strong>boss-, bad code:</strong></p><blockquote>&quot;This code seems like a mishmash that falls below the standard quality; it&apos;s more tangled than anything and might make my team&apos;s heads explode if you weren&apos;t so careful about reading at all.&quot;</blockquote><p>Decent but doesn&apos;t match SmolLM2-360M&apos;s consistency, despite having more parameters. The multilingual pretraining is still a drag.</p><hr><h3 id="qwen25-05b-494m-params-%E2%80%94-professional-but-flat">Qwen2.5-0.5B (494M params) &#x2014; Professional but Flat</h3><p>The largest model. Clean, competent output that reads like corporate feedback. Personas are muted.</p><p><strong>buddy, bad code:</strong></p><blockquote>&quot;68% of the code looks like a jumbled mess, which is seriously not constructive. You&apos;ve lost track of details and dependencies &#x2014; it feels as though you&apos;re trying to cram too much into this boilerplate without really getting anywhere.&quot;</blockquote><p><strong>chaos, good code:</strong></p><blockquote>&quot;Oh wow, that&apos;s just some crazy brilliant magic! 
It&apos;s like a giant digital dance party going on inside me &#x2014; making everything come alive as I see the complex web of joy and wonder unfold in such a harmonious tapestry.&quot;</blockquote><p><strong>senior+, good code (broke into Chinese):</strong></p><blockquote>&quot;&#x4F18;&#x79C0;&#x7684;&#x4EE3;&#x7801;&#x8D28;&#x91CF;&#x52A0;&#x4E0A;&#x6E05;&#x6670;&#x7B80;&#x6D01;&#x7684;&#x6CE8;&#x91CA;&#x8BA9;&#x6211;&#x500D;&#x611F;&#x6EE1;&#x8DB3;&#xFF0C;&#x6211;&#x6781;&#x4E50;&#x4E8E;&#x770B;&#x5230;&#x4F60;&#x5BF9;&#x7EC6;&#x8282;&#x7684;&#x5173;&#x6CE8;&#x548C;&#x7EC6;&#x81F4;&#x7684;&#x5DE5;&#x4F5C;&#x6001;&#x5EA6;&#x3002;&quot;</blockquote><p>The Qwen base model&apos;s multilingual training surfaced here too &#x2014; even at 494M parameters. And the personas lack the distinctive voice that SmolLM2-360M achieved. More parameters didn&apos;t mean better differentiation.</p><h2 id="the-scorecard">The Scorecard</h2><!--kg-card-begin: html--><table>
<thead>
<tr>
<th>Model</th>
<th>Params</th>
<th>English?</th>
<th>Mood?</th>
<th>Personas?</th>
<th>Granny Test</th>
</tr>
</thead>
<tbody>
<tr>
<td>Pythia-14M</td>
<td>14M</td>
<td>Mostly</td>
<td>No</td>
<td>No</td>
<td>Failed</td>
</tr>
<tr>
<td>Pythia-31M</td>
<td>31M</td>
<td>~70%</td>
<td>No</td>
<td>No</td>
<td>Failed</td>
</tr>
<tr>
<td>Pythia-70M</td>
<td>70M</td>
<td>~60%</td>
<td>Barely</td>
<td>No</td>
<td>Failed</td>
</tr>
<tr>
<td>DistilGPT-2</td>
<td>82M</td>
<td>~90%</td>
<td>No</td>
<td>No</td>
<td>Failed</td>
</tr>
<tr>
<td><strong>SmolLM2-135M</strong></td>
<td><strong>135M</strong></td>
<td><strong>100%</strong></td>
<td><strong>Yes</strong></td>
<td><strong>Yes</strong></td>
<td><strong>Passed</strong></td>
</tr>
<tr>
<td>Pythia-160M</td>
<td>160M</td>
<td>~10%</td>
<td>N/A</td>
<td>N/A</td>
<td>Failed</td>
</tr>
<tr>
<td><strong>SmolLM2-360M</strong></td>
<td><strong>360M</strong></td>
<td><strong>100%</strong></td>
<td><strong>Strong</strong></td>
<td><strong>Strong</strong></td>
<td><strong>Passed</strong></td>
</tr>
<tr>
<td>Pythia-410M</td>
<td>410M</td>
<td>~95%</td>
<td>Yes</td>
<td>Decent</td>
<td>Partial</td>
</tr>
<tr>
<td>Qwen2.5-0.5B</td>
<td>494M</td>
<td>~95%</td>
<td>Yes</td>
<td>Weak</td>
<td>Failed</td>
</tr>
</tbody>
</table><!--kg-card-end: html--><h2 id="what-i-learned">What I Learned</h2><p><strong>Architecture and pretraining data trump parameter count.</strong> SmolLM2-135M outperformed Pythia-160M because of what it was pretrained on, not how big it was. Choose your base model for the domain, not the headline number.</p><p><strong>The smallest viable model for this task is ~135M parameters</strong> &#x2014; if the architecture is right. The sweet spot for quality is 360M. Beyond that, returns diminish rapidly.</p><p><strong>Multilingual base models are a trap for English-only fine-tuning.</strong> The Pile-trained Pythia models couldn&apos;t suppress their multilingual weights with 4,000 English fine-tuning samples. English-native base models (SmolLM2) worked immediately.</p><p><strong>Prompt compression matters for small models.</strong> Reducing the training prompt from 59 to 36 tokens &#x2014; bucketing metrics, shortening field names, single-line format &#x2014; freed up context window for the actual generation. At 256 tokens, every wasted prompt token is capacity stolen from the completion.</p>]]></content:encoded></item><item><title><![CDATA[Progressive Discovery: Making CLI Tools Agent-Ready]]></title><description><![CDATA[<h1 id="the-problem">The Problem</h1><p>Command-line tools were designed for humans. 
When you run a configuration command, you typically get one of two experiences: an interactive wizard that walks you through prompts one at a time, or a strict non-interactive mode that fails with a cryptic error the moment a required option is</p>]]></description><link>https://kobylinski.co/progressive-discovery-making-cli-tools-agent-ready/</link><guid isPermaLink="false">6992f89959c1630001c31002</guid><dc:creator><![CDATA[Marek Kobyliński]]></dc:creator><pubDate>Mon, 16 Feb 2026 11:06:25 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1606225278453-eba097f60fc3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDExfHxkaXNjb3Zlcnl8ZW58MHx8fHwxNzcxMjQwMjM1fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<h1 id="the-problem">The Problem</h1><img src="https://images.unsplash.com/photo-1606225278453-eba097f60fc3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDExfHxkaXNjb3Zlcnl8ZW58MHx8fHwxNzcxMjQwMjM1fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" alt="Progressive Discovery: Making CLI Tools Agent-Ready"><p>Command-line tools were designed for humans. When you run a configuration command, you typically get one of two experiences: an interactive wizard that walks you through prompts one at a time, or a strict non-interactive mode that fails with a cryptic error the moment a required option is missing.</p><p>Neither works well for AI agents.</p><p>Interactive mode traps the agent in a back-and-forth it can&apos;t navigate. 
Non-interactive mode gives it a binary pass/fail with no actionable context &#x2014; the agent has to guess what went wrong, parse error messages, and retry blindly.</p><p>There&apos;s a better way.</p><h2 id="progressive-discovery">Progressive Discovery</h2><p>Progressive Discovery is a response pattern where a CLI tool, upon detecting an AI agent as its caller, returns structured, contextual feedback about what it already has, what it still needs, and how to get it. Instead of prompting or failing, the tool <em>collaborates</em> with the agent.</p><p>The key insight: an AI agent operates with partial context. It may already know some configuration values from the user&apos;s instructions or from the project it&apos;s working in. What it needs from the tool is not a prompt &#x2014; it needs <em>information</em> to make decisions. The tool&apos;s job is to describe what&apos;s missing and provide enough context for the agent to either resolve it from what it knows or ask the user an intelligent question.</p><h2 id="a-concrete-example-payment-gateway-configuration">A Concrete Example: Payment Gateway Configuration</h2><p>Consider a CLI tool that configures a payment gateway integration &#x2014; something like setting up a Stripe or Bluefin connection for a merchant application.</p><h3 id="the-human-experience">The Human Experience</h3><p>A human runs the setup command and gets a familiar interactive flow:</p><pre><code class="language-warp-runnable-command">$ paytool configure
Welcome to PayTool configuration.
? Select environment: (Use arrow keys)
&#x276F; sandbox
  production
? Enter your Merchant ID: ___
? Enter your API Key: ___
? Select default currency: (Use arrow keys)
&#x276F; USD
  EUR
  GBP
? Enable webhook notifications? (Y/n): ___
? Webhook endpoint URL: ___
&#x2713; Configuration saved to .paytool/config.json</code></pre><p>This works for a human sitting at a terminal. But what happens when an AI agent needs to set this up?</p><h3 id="the-agent-experience-today">The Agent Experience Today</h3><p>Most tools offer a <code>--no-interactive</code> flag. The agent tries to supply everything upfront:</p><pre><code class="language-warp-runnable-command">$ paytool configure --no-interactive --merchant-id &quot;MCH_12345&quot;</code></pre><pre><code class="language-warp-runnable-command">Error: Missing required option: --api-key</code></pre><p>The agent retries, guessing at what else might be needed:</p><pre><code class="language-warp-runnable-command">$ paytool configure --no-interactive --merchant-id &quot;MCH_12345&quot; --api-key &quot;sk_test_xxx&quot;</code></pre><pre><code class="language-warp-runnable-command">Error: Missing required option: --environment</code></pre><p>And again. And again. Each round trip is a wasted call &#x2014; no context, no guidance, just failure.</p><h3 id="the-progressive-discovery-experience">The Progressive Discovery Experience</h3><p>With Progressive Discovery, the same tool detects the agent and shifts its behavior:</p><pre><code class="language-warp-runnable-command">$ paytool configure --agent --merchant-id &quot;MCH_12345&quot;</code></pre><pre><code class="language-markdown"># Configuration: incomplete
## Resolved
- **merchant_id**: `MCH_12345` (provided)
## Pending
### environment (required)
Deployment target. Use `sandbox` for testing, `production` for live transactions.
If this is a development or staging setup, use `sandbox`.

```json
{&quot;field&quot;: &quot;environment&quot;, &quot;type&quot;: &quot;enum&quot;, &quot;options&quot;: [&quot;sandbox&quot;, &quot;production&quot;]}
```

### api_key (required)
API authentication key for the selected environment.
Found in the merchant dashboard under **Settings &#x2192; API Keys**.
Starts with `sk_test_` for sandbox or `sk_live_` for production.

```json
{&quot;field&quot;: &quot;api_key&quot;, &quot;type&quot;: &quot;string&quot;, &quot;format&quot;: &quot;sk_(test|live)_[a-zA-Z0-9]{24}&quot;}
```

### currency (optional, default: USD)
Default transaction currency.

```json
{&quot;field&quot;: &quot;currency&quot;, &quot;type&quot;: &quot;enum&quot;, &quot;options&quot;: [&quot;USD&quot;, &quot;EUR&quot;, &quot;GBP&quot;, &quot;CAD&quot;, &quot;AUD&quot;], &quot;default&quot;: &quot;USD&quot;}
```

### webhook_enabled (optional, default: false)
Enable webhook notifications for transaction events.
### webhook_url (optional, requires webhook_enabled: true)
HTTPS endpoint to receive webhook payloads.
Must be a publicly accessible HTTPS URL. For local development, consider using a tunneling service.</code></pre><p>Now the agent has everything it needs to act intelligently. It reads the markdown and immediately understands the situation: <code>environment</code> and <code>api_key</code> are required, the rest have defaults. The prose explains where to find the API key and what format it takes. The JSON code blocks give the agent structured data it can map directly to command flags &#x2014; valid enum options, format patterns, defaults. No parsing of a monolithic JSON object, no guessing at field semantics from key names alone.</p><p>The agent can now make a decision: <em>Do I already have this information from the user&apos;s instructions or the project context?</em> If the user said &quot;set up the sandbox environment,&quot; the agent already knows the environment. If there&apos;s a <code>.env</code> file with an API key, the agent can use that. For anything truly unknown, the agent can ask the user a precise, informed question &#x2014; not &quot;what&apos;s your API key?&quot; but &quot;I need your sandbox API key. You can find it in the merchant dashboard under Settings &#x2192; API Keys. It starts with <code>sk_test_</code>.&quot;</p><p>The agent fills in what it can and calls again:</p><pre><code class="language-warp-runnable-command">$ paytool configure --agent \
  --merchant-id &quot;MCH_12345&quot; \
  --environment &quot;sandbox&quot; \
  --api-key &quot;sk_test_a1b2c3d4e5f6g7h8i9j0k1l2&quot;</code></pre><pre><code class="language-markdown"># Configuration: complete
## Resolved
- **merchant_id**: `MCH_12345` (provided)
- **environment**: `sandbox` (provided)
- **api_key**: `sk_test_***` (provided)
- **currency**: `USD` (default)
- **webhook_enabled**: `false` (default)
Configuration saved to `.paytool/config.json`</code></pre><p>Done in two calls, with full transparency about what was applied and how.</p><h2 id="the-pattern">The Pattern</h2><p>Progressive Discovery follows three principles:</p><p><strong>1. Show the full picture, not one step at a time.</strong> Unlike interactive prompts that reveal requirements sequentially, Progressive Discovery returns the entire configuration surface in one response. The agent sees all required and optional fields, their types, constraints, defaults, and dependencies. This allows it to batch its decisions and minimize round trips.</p><p><strong>2. Provide actionable context, not just validation errors.</strong> Each pending field includes its type, valid options, format constraints, human-readable descriptions, and hints about where to find the value. This transforms a missing field from a blocker into a solvable problem. The agent can reason about where to look &#x2014; environment variables, project files, user instructions &#x2014; or formulate a specific question for the user.</p><p><strong>3. Acknowledge what&apos;s already resolved.</strong> The response always reflects back what the tool has already accepted. This gives the agent a clear picture of progress and prevents redundant work. It also surfaces the <em>source</em> of each value &#x2014; whether it was explicitly provided, pulled from a default, or inferred from the environment.</p><h2 id="detecting-the-agent">Detecting the Agent</h2><p>For Progressive Discovery to work, the tool needs to know when it&apos;s being called by an agent. There are two complementary approaches.</p><h3 id="environment-based-detection">Environment-Based Detection</h3><p>Most AI coding agents set identifiable environment variables. A tool can check for these:</p><pre><code class="language-typescript">export function detectAgentEnvironment(): null | string {
  // Claude Code (CLI or extension)
  if (process.env.CLAUDECODE === &apos;1&apos;) return &apos;claude-code&apos;
  // Cursor IDE
  if (process.env.CURSOR_TRACE_ID) return &apos;cursor&apos;
  // GitHub Copilot CLI
  if (process.env.GITHUB_COPILOT_TOKEN
    || process.env.COPILOT_AGENT_ENABLED === &apos;1&apos;) return &apos;github-copilot&apos;
  // Aider AI coding assistant
  if (process.env.AIDER_MODEL
    || process.env.AIDER_CHAT_HISTORY_FILE) return &apos;aider&apos;
  // OpenCode AI terminal agent
  if (process.env.OPENCODE === &apos;1&apos;) return &apos;opencode&apos;
  return null
}</code></pre><p>This is useful for automatic detection &#x2014; the tool can switch to Progressive Discovery mode without the caller needing to do anything special.</p><h3 id="theagent-flag">The <code>--agent</code> Flag</h3><p>Environment detection is helpful, but it&apos;s inherently fragile. New agents appear regularly, environment variables change, and some agents might not set any identifiable markers at all.</p><p>The more robust solution is explicit: offer an <code>--agent</code> flag.</p><pre><code class="language-warp-runnable-command">$ paytool configure --agent</code></pre><p>This is the recommended approach for tool authors. It&apos;s simple, self-documenting, and puts control in the agent&apos;s hands. Any AI agent &#x2014; current or future &#x2014; can use it without the tool needing to know about that specific agent. It also serves as an escape hatch for advanced human users who prefer structured output over interactive prompts.</p><p>In practice, you should support both: auto-detect when possible, but always accept the flag.</p><pre><code class="language-typescript">export function isAgentMode(flags: ParsedFlags): boolean {
  // Explicit flag takes priority
  if (flags.agent) return true
  // Fall back to environment detection
  return detectAgentEnvironment() !== null
}</code></pre><h2 id="implementation-guidelines">Implementation Guidelines</h2><p>If you&apos;re building a CLI tool and want to support Progressive Discovery, here&apos;s what to keep in mind.</p><p><strong>Use markdown as your primary response format, with JSON code blocks for structured datasets.</strong> LLMs are language models &#x2014; they reason about prose and markdown natively. Rigid JSON-only responses waste tokens on syntax noise (brackets, quotes, escaping) that adds no semantic value for the agent. Write the response as readable markdown: describe what&apos;s resolved, what&apos;s pending, and provide context in natural language. When there&apos;s actual structured data the agent might need to iterate over or map to parameters &#x2014; like a list of valid options, available resources, or enumerated values &#x2014; wrap it in a JSON code block within the markdown. This gives the agent the best of both worlds: natural language for reasoning and structured data for programmatic use.</p><p><strong>Include field metadata generously.</strong> Type, format, description, hint, default, dependencies &#x2014; the more context you provide, the fewer round trips the agent needs. Think of each pending field as a self-contained brief for the agent.</p><p><strong>Distinguish required from optional.</strong> Let the agent know what <em>must</em> be provided to proceed versus what has sensible defaults. This allows it to complete configuration without asking the user about every optional setting.</p><p><strong>Express dependencies explicitly.</strong> If field B only matters when field A has a certain value (like <code>webhook_url</code> requiring <code>webhook_enabled: true</code>), say so in the schema. This prevents the agent from asking the user unnecessary questions.</p><p><strong>Keep the final call idempotent.</strong> The agent might call the command multiple times as it gathers values. 
Each call should accept all previously resolved values plus new ones, and return the updated state without side effects until <code>status: &quot;complete&quot;</code>.</p><p><strong>Support partial progress.</strong> Don&apos;t require all fields in a single call. Accept what the agent has now and report back what&apos;s still missing. The tool should be comfortable with incremental resolution.</p><h2 id="why-this-matters">Why This Matters</h2><p>AI agents are becoming a primary interface for developer tools. They read documentation, they execute commands, they configure environments. But they&apos;re working with tools that were designed for a different kind of user &#x2014; one who can read a prompt, glance at a help page, and type a response.</p><p>Progressive Discovery bridges this gap without breaking the human experience. A tool can support all three modes &#x2014; interactive for humans, non-interactive for scripts, and progressive for agents &#x2014; with the same underlying configuration logic. The only thing that changes is how requirements are communicated.</p><p>The pattern is simple: instead of asking or failing, <em>describe what you need and why</em>. Let the agent &#x2014; which sits between the tool and the user &#x2014; do what it&apos;s good at: reasoning about context, making decisions, and asking smart questions when it has to.</p><p>That&apos;s Progressive Discovery. Not a new protocol. Not a framework. 
Just a better way for tools to talk to agents.</p>]]></content:encoded></item><item><title><![CDATA[Add language support for syntax highlighter in the storybook]]></title><description><![CDATA[Import SyntaxHighlighter from @storybook/components package.]]></description><link>https://kobylinski.co/add-language-support-for-syntax-highligher-in-storybook/</link><guid isPermaLink="false">65272648117f5600018d8cee</guid><category><![CDATA[storybook]]></category><category><![CDATA[prism]]></category><category><![CDATA[syntaxhighlighter]]></category><category><![CDATA[recipe]]></category><dc:creator><![CDATA[Marek Kobyliński]]></dc:creator><pubDate>Wed, 11 Oct 2023 22:56:27 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1628281450618-dd017fea6939?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDE1fHxkaWd8ZW58MHx8fHwxNjk3MTA0MjkwfDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1628281450618-dd017fea6939?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDE1fHxkaWd8ZW58MHx8fHwxNjk3MTA0MjkwfDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Add language support for syntax highlighter in the storybook"><p>All changes have to be made directly in the MDX file.</p><p><strong>Add language support</strong></p><pre><code class="language-mdx">import php from &apos;react-syntax-highlighter/dist/esm/languages/prism/php&apos;;</code></pre><p><strong>Add SyntaxHighlighter component </strong></p><pre><code class="language-mdx">import { SyntaxHighlighter } from &quot;@storybook/components&quot;;</code></pre><p><strong>Register the language</strong></p><pre><code class="language-mdx">{SyntaxHighlighter.registerLanguage(&quot;php&quot;, php)}</code></pre>]]></content:encoded></item><item><title><![CDATA[Please insert the card with serial number ....]]></title><description><![CDATA[<p>I&apos;m using YubiKey for my ssh keys with gpg-agent.
After some updates via homebrew (including pinentry) something odd happened. My key was in place as usual, and <em>gpg --card-status</em> reported everything as expected. But whenever I tried to pull my repo from GitLab, pinentry popped up with the same message:</p><blockquote>Please</blockquote>]]></description><link>https://kobylinski.co/please-insert-the-card-with-serial-number/</link><guid isPermaLink="false">62d29547117f5600018d89e6</guid><dc:creator><![CDATA[Marek Kobyliński]]></dc:creator><pubDate>Sat, 16 Jul 2022 10:55:07 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1522794338816-ee3a17a00ae8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDR8fGtleSUyMHVzYnxlbnwwfHx8fDE2NTc5Njc5Mjg&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1522794338816-ee3a17a00ae8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDR8fGtleSUyMHVzYnxlbnwwfHx8fDE2NTc5Njc5Mjg&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Please insert the card with serial number ...."><p>I&apos;m using YubiKey for my ssh keys with gpg-agent. After some updates via homebrew (including pinentry) something odd happened. My key was in place as usual, and <em>gpg --card-status</em> reported everything as expected. But whenever I tried to pull my repo from GitLab, pinentry popped up with the same message:</p><blockquote>Please insert the card with serial number XXXX XXXXXXXXX</blockquote><p>And that was my card&apos;s serial number. Everything was correct.</p><p>I fixed it with the following commands:</p><pre><code class="language-bash">gpg-connect-agent killagent /bye
gpg-connect-agent &quot;scd serialno&quot; &quot;learn --force&quot; /bye</code></pre>]]></content:encoded></item><item><title><![CDATA[Using RE:DOM framework with redom-state addon]]></title><description><![CDATA[RE:DOM is a simple and great choice for small, atomic JavaScript applications. I&apos;m using it to embed widgets in bigger sites where there is no other big view framework available, such as React or Vue.js. State management for RE:DOM apps was a missing piece I was always looking for, and now I have my own.]]></description><link>https://kobylinski.co/using-re-dom-framework-with-redom-state-addon/</link><guid isPermaLink="false">614b15660b97770001829961</guid><category><![CDATA[project]]></category><category><![CDATA[redom-state]]></category><category><![CDATA[state management]]></category><category><![CDATA[javascript]]></category><category><![CDATA[RE:DOM]]></category><dc:creator><![CDATA[Marek Kobyliński]]></dc:creator><pubDate>Wed, 22 Sep 2021 12:12:30 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1527356900876-cae61d8d8462?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDEwfHx3aXJlfGVufDB8fHx8MTYzMjMxMjY3NA&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1527356900876-cae61d8d8462?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDEwfHx3aXJlfGVufDB8fHx8MTYzMjMxMjY3NA&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Using RE:DOM framework with redom-state addon"><p>If you are not familiar with the RE:DOM framework, take a look <a href="https://redom.js.org/">here</a> first. In particular, get to know how RE:DOM components update their state. It&apos;s not the same as in bigger reactive frameworks: an update is done manually through the <code>update</code> method of each component, and there are no partial updates.
Each update starts at the root of the application and flows down to the leaves, unless you decide that some part of the view should not be updated by adding special logic to its <code>update</code> function.</p><p>The <code>redom-state</code> component does not change this approach. It does not extend the framework but builds on top of it: what it provides is a simple feedback loop between a DOM event in a RE:DOM component and the update call at the application root.</p><p>The state is updated much like a <a href="https://reactjs.org/docs/hooks-reference.html#usereducer">reducer</a> in React hooks. It&apos;s a simple function that takes a payload and merges it into the whole state:</p><pre><code class="language-javascript">import { wire } from &quot;redom-state&quot;

export const addJob = wire((state, jobPayload) =&gt; {
  return {
     ...state,
     jobs: [...state.jobs, jobPayload]
  }
})</code></pre><p>Each action returns a whole new state, and each state change calls the update function at the application&apos;s root.</p><p>That&apos;s why you can be sure that in the following RE:DOM component each button click will increment the counter value:</p><pre><code class="language-javascript">import { el, text } from &quot;redom&quot;
import { wire } from &quot;redom-state&quot;

export const increment = wire((state) =&gt; ({ ...state, counter: state.counter + 1 }));

export default class Counter {
  constructor() {
    this.el = el(&apos;.counter&apos;,
      this.counter = text(),
      this.button = el(&apos;button&apos;, text(&apos;increment&apos;))
    );
    this.button.onclick = increment;
  }
  
  update(state) {
    this.counter.textContent = state.counter;
  }
}</code></pre><p>You can encapsulate view components and their data-manipulating functions in logical components, which makes the code cleaner. For example, if the component above were part of a bigger app, I would be able to import the RE:DOM component and the state functions together.</p><pre><code class="language-javascript">import Counter, { increment } from &quot;./Counter&quot;</code></pre><p>If you want to try <code>redom-state</code>, just install it using your favorite package manager:</p><pre><code class="language-sh">yarn add redom-state
# or
npm i redom-state</code></pre><p>Feel free to <a href="https://github.com/kobylinski/redom-state">fork</a> it and make updates; just let me know if you figure out something nice.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/kobylinski/redom-state"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - kobylinski/redom-state: State management for RE:DOM apps</div><div class="kg-bookmark-description">State management for RE:DOM apps. Contribute to kobylinski/redom-state development by creating an account on GitHub.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="Using RE:DOM framework with redom-state addon"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">kobylinski</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/c73d90b2c3dd44d5d9b4d541ac181523beff7914e77ed80155a787e7332044ae/kobylinski/redom-state" alt="Using RE:DOM framework with redom-state addon"></div></a></figure>]]></content:encoded></item><item><title><![CDATA[Bootstrap state object with multiple async calls using one function]]></title><description><![CDATA[I was looking for a way to bootstrap my app state using one function with multiple async calls, keeping the code as clean as possible.]]></description><link>https://kobylinski.co/bootstrap-state-object-with-multiple-async-calls-using-one-function/</link><guid isPermaLink="false">6149e8bc0b977700018296ea</guid><category><![CDATA[recipe]]></category><category><![CDATA[javascript]]></category><category><![CDATA[generator]]></category><category><![CDATA[async]]></category><dc:creator><![CDATA[Marek Kobyliński]]></dc:creator><pubDate>Tue, 21 Sep 2021 17:38:15 GMT</pubDate><media:content
url="https://images.unsplash.com/photo-1632227106853-0c5afe0ca9eb?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8YWxsfDJ8fHx8fHwyfHwxNjMyMjQ1NjY1&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1632227106853-0c5afe0ca9eb?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8YWxsfDJ8fHx8fHwyfHwxNjMyMjQ1NjY1&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Bootstrap state object with multiple async calls using one function"><p>I start with simple code:</p><pre><code class="language-javascript">class MyState{
  constructor( bootstrap = () =&gt; {} ) {
    this.state = bootstrap();
  }
}</code></pre><p>The constructor calls the bootstrap function and the state is ready. But in the case of async calls, I&apos;m forced to keep access to the final state object and update it directly when an async call ends.</p><pre><code class="language-javascript">const state = new MyState(() =&gt; {
	callAsyncService().then((asyncState) =&gt; {
    	state.state = asyncState
    })
	return &quot;sync state&quot;;
});</code></pre><p>It&apos;s easy in this example, but if I don&apos;t keep a reference to the state object, things get more complicated.</p><p>The simplest way to handle it is to return a promise or an array of promises, resolve them using <code>Promise.all</code> and/or <code>Promise.resolve</code>, and apply the results to the state. This was my first step:</p><pre><code class="language-javascript">class MyState {
  constructor( bootstrap = () =&gt; {} ) {
    const state = bootstrap();
    this.state = {}
    if ( Array.isArray(state) ) {
    	Promise.all( state ).then( (stateParts) =&gt; this.state = Object.assign({}, this.state, ...stateParts) ) // merge each resolved part into the state
    }else{
    	Promise.resolve(state).then( (state) =&gt; this.state = state )
    }
  }
}</code></pre><p>This handles one or more promises, and in the case of an array, all results are merged into the state object when the last promise resolves. It&apos;s almost good, but I want the first state to be available without delay so the app can render with a loader. My second thought was that reporting progress would also be nice.</p><p>So what if, instead of <code>Promise.all</code>, I iterate through the array and resolve each promise separately?</p><pre><code class="language-javascript">class MyState {
  constructor( bootstrap = () =&gt; {} ) {
    const state = bootstrap();
    if( Array.isArray(state) ) {
      for( let stage of state ) {
        Promise.resolve(stage).then( stage =&gt; this.state = {
        	...this.state,
            ...stage
        });
      }
    }else{
    	Promise.resolve(state).then( state =&gt; this.state = state );
    }
  }
}</code></pre><p>Then I tested this code with the following bootstrap function:</p><pre><code class="language-javascript">const sleep = (ms, result) =&gt;
  new Promise((done) =&gt; setTimeout(() =&gt; done(result), ms));
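
// `sleep` stands in for a real async service call: it resolves with
// `result` after `ms` milliseconds, so the stages below complete at
// different times.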
  
const myBootstrap = () =&gt; {
	return [
    	{ state: &quot;loading&quot;, progress: 0 },
        sleep(1000, { progress: 66 }),
        sleep(500, { progress: 100 }),
        { state: &quot;ready&quot; }
    ];
}</code></pre><p>This code didn&apos;t preserve the order of the calls. That&apos;s fine if it doesn&apos;t matter which result arrives first, but it&apos;s not what I wanted to achieve.</p><p>To keep the proper sequence, I need to wait until the first promise resolves before handling the second one.</p><pre><code class="language-javascript">class MyState {
  constructor( bootstrap = () =&gt; {} ) {
    const state = bootstrap();
    if( Array.isArray(state) ) {
      const step = (i) =&gt; {
        Promise.resolve(state[i]).then( stage =&gt; {
		  this.state = {
            ...this.state,
            ...stage
          };
          if( state.length &gt; ++i  ){
            step(i);
          } 
        });
      };
      step(0)
    }else{
      Promise.resolve(state).then( state =&gt; this.state = state );
    }
  }
}</code></pre><p>To simplify this code, I will use the <code>async</code> / <code>await</code> keywords.</p><pre><code class="language-javascript">const runBootstrap = async (state, merge) =&gt; {
	for await ( let stage of state ) {
    	merge(stage);
    }
}
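
// `for await...of` awaits each element before moving to the next one,
// so the stages are merged strictly in order, even when an earlier
// promise takes longer to resolve than a later one.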

class MyState {
  constructor( bootstrap = () =&gt; {} ) {
    let state = bootstrap();
    if( !Array.isArray(state) ) {
      	state = [state];
	}
    this.state = {};
    runBootstrap( state, (stage) =&gt; {
        this.state = {
            ...this.state,
            ...stage
        }
    });
  }
}</code></pre><p>What about when one async call in the bootstrap depends on another? For example, the first call fetches the user session and the next ones depend on whether the user is logged in.</p><pre><code class="language-javascript">const bootstrapApp = async () =&gt; {
  const user = await api.getCurrentUser();
  const resources = await api.getUserResources(user);
  return { user, resources };
}</code></pre><p>Let&apos;s try to use an async generator function.</p><pre><code class="language-javascript">const bootstrapApp = async function* () {
  yield { state: &quot;loading&quot;, progress: 0 };
  const user = await api.getCurrentUser();
  if( null !== user ) {
  	yield { progress: 50, user };
    const resources = await api.getUserResources(user);
    yield { progress: 100, resources }; 
  }
  yield { state: &quot;done&quot; }
}</code></pre><p>And resolve it in the state constructor.</p><pre><code class="language-javascript">const runBootstrap = async ( state, merge ) =&gt; {
	for await ( let stage of state ) {
    	merge(stage); 
    }
}

class MyState {
  constructor( bootstrap ) {
  	this.state = {};
    runBootstrap(bootstrap(), stage =&gt; this.state = {
      ...this.state,
      ...stage
    });
  }
}</code></pre><p>I have everything I need, but what if a generator isn&apos;t necessary and a simple function call is enough? Let&apos;s test this code against these variations:</p><pre><code class="language-javascript">const bootstrap1 = () =&gt; ({
  state: &quot;done&quot;
});

const bootstrap2 = () =&gt; [
  {state: &quot;loading&quot;, progress: 0},
  sleep(1000, { progress: 66 }),
  sleep(500, { progress: 100 }),
  {state: &quot;done&quot;}
];</code></pre><p>In the case of <code>bootstrap1</code>, I need to make sure that only something iterable is passed to the loop code. <code>bootstrap2</code> can be passed to the loop code, but it&apos;s a plain array rather than an <code>asyncIterator</code>, so I need to accept that type as well. The whole code will look like this:</p><pre><code class="language-javascript">const runBootstrap = async ( state, merge ) =&gt; {
	for await ( let stage of state ) {
    	merge(stage); 
    }
}

class MyState {
  constructor( bootstrap ) {
  	this.state = {};
	const state = bootstrap();
    if( 
      Array.isArray(state) || // handle bootstrap2
      typeof state[Symbol.asyncIterator] !== &apos;undefined&apos; 
    ) {
      runBootstrap(state, stage =&gt; this.state = {
        ...this.state,
        ...stage
      });
    } else { // handle bootstrap1
      this.state = state;
    }
  }
}</code></pre>]]></content:encoded></item><item><title><![CDATA[Look into closure with ReflectionFunction class]]></title><description><![CDATA[It can be handy in case of debug.]]></description><link>https://kobylinski.co/look-into-closure-with-reflectionfunction-class/</link><guid isPermaLink="false">613b4fac0b9777000182966f</guid><category><![CDATA[tip]]></category><category><![CDATA[php]]></category><category><![CDATA[ReflectionFunction]]></category><category><![CDATA[closure]]></category><dc:creator><![CDATA[Marek Kobyliński]]></dc:creator><pubDate>Fri, 10 Sep 2021 12:43:56 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1509475826633-fed577a2c71b?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDN8fGRlYnVnfGVufDB8fHx8MTYzMjM0MTcwNQ&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1509475826633-fed577a2c71b?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDN8fGRlYnVnfGVufDB8fHx8MTYzMjM0MTcwNQ&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Look into closure with ReflectionFunction class"><p>Let&apos;s create closure:</p><pre><code class="language-php">class Example {
  function get($arg) {
    return function() use ($arg) {
      return $arg;
    };
  }
}

$closure = (new Example())-&gt;get(&quot;value&quot;); </code></pre><p>Take a <code>$this</code> attribute:</p><pre><code class="language-php">$ref = new \ReflectionFunction($closure);
$refThis = $ref-&gt;getClosureThis();</code></pre><p>Take scope variables from <code>use</code> statement:</p><pre><code class="language-php">$refUse = $ref-&gt;getStaticVariables();
assertEquals($closure(), $refUse[&apos;arg&apos;]);</code></pre>]]></content:encoded></item><item><title><![CDATA[Ghost in docker mail configuration]]></title><description><![CDATA[.]]></description><link>https://kobylinski.co/ghost-in-docker-mail-configuration/</link><guid isPermaLink="false">613a121d0b9777000182962a</guid><category><![CDATA[tip]]></category><category><![CDATA[ghost]]></category><category><![CDATA[docker]]></category><dc:creator><![CDATA[Marek Kobyliński]]></dc:creator><pubDate>Thu, 09 Sep 2021 13:59:29 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1605745341075-1b7460b99df8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDJ8fGRvY2tlcnxlbnwwfHx8fDE2MzIzNDE3NTQ&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<pre><code class="language-docker">version: &quot;3.2&quot;
services:
    blog:
      image: ghost:4-alpine
      environment:
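        # Ghost maps `__` in environment variable names to nested config
        # keys, e.g. mail__options__host becomes mail.options.host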
        mail__transport: SMTP
        mail__from: &quot;Your Name &lt;your@email.com&gt;&quot;
        mail__options__service: Mailgun
        mail__options__host: smtp.mailgun.org
        mail__options__port: 587
        mail__options__secureConnection: &apos;false&apos;
        mail__options__auth__user: mailgun@user
        mail__options__auth__pass: mailgun_user_password</code></pre>]]></content:encoded></item><item><title><![CDATA[Modify query builder just before resolve by lighthouse.]]></title><description><![CDATA[I was looking at how to inject something to query builder just before resolve by @paginate directive but after all other directives such us where conditions.]]></description><link>https://kobylinski.co/modify-query-builder-just-before-resolve-by-lighthouse-paginate-directive/</link><guid isPermaLink="false">61394586ed131f0001884b7d</guid><category><![CDATA[recipe]]></category><category><![CDATA[laravel]]></category><category><![CDATA[lighthouse]]></category><category><![CDATA[graphql]]></category><category><![CDATA[@directive]]></category><dc:creator><![CDATA[Marek Kobyliński]]></dc:creator><pubDate>Thu, 09 Sep 2021 09:50:08 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1546008494-ced1c245caa1?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDE2Mnx8bGFzdCUyMG1pbnV0ZXxlbnwwfHx8fDE2MzExNDM3ODk&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<h2 id="first-ive-added-my-directive">First, I&apos;ve added my directive:</h2><pre><code class="language-php">&lt;?php

namespace App\GraphQL\Directives;

use Closure;
use GraphQL\Type\Definition\ResolveInfo;
use Nuwave\Lighthouse\Schema\Directives\BaseDirective;
use Nuwave\Lighthouse\Schema\Values\FieldValue;
use Nuwave\Lighthouse\Support\Contracts\FieldMiddleware;
use Nuwave\Lighthouse\Support\Contracts\GraphQLContext;
use App\GraphQL\Grid\Types;

class MyDirective extends BaseDirective implements FieldMiddleware {

    public static function definition(): Directive
    {
        return new Types\Directive();
    }

    public function handleField(FieldValue $fieldValue, Closure $next)
    {
        $resolver = $fieldValue-&gt;getResolver();
        return $next(
            $fieldValue-&gt;setResolver(function (
                $root,
                array $args,
                GraphQLContext $context,
                ResolveInfo $resolveInfo
            ) use ($resolver) {
                info(&quot;Middleware&quot;);
                return $resolver($root, $args, $context, $resolveInfo);
            })
        );
    }
}</code></pre><img src="https://images.unsplash.com/photo-1546008494-ced1c245caa1?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDE2Mnx8bGFzdCUyMG1pbnV0ZXxlbnwwfHx8fDE2MzExNDM3ODk&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Modify query builder just before resolve by lighthouse."><p>And defined my query:</p><pre><code class="language-graphql">extend type Query {
    myQuery( arg1: String! @fieldDirective1, arg2: String! @fieldDirective2 ): [MyType] @paginate @myDirective
}</code></pre><p>I had two arguments with field directives and the <code>@paginate</code> directive.</p><p>To each directive, I&apos;ve added an <code>info</code> call with the class name. In the case of <code>@paginate</code>, a trace is added at the beginning and just before the end (when the query is resolved).</p><p><strong>First run:</strong></p><pre><code class="language-text">MyDirective
Paginate Begin
FieldDirective1
FieldDirective2
Paginate End</code></pre><p>The paginate directive resolves the field arguments in the middle of its own execution. I need to add my middleware after <code>FieldDirective2</code> and before <code>Paginate End</code>.</p><h3 id="a-little-change-in-the-code-of-mydirective">A little change in the code of MyDirective</h3><pre><code class="language-php">$result = $resolver($root, $args, $context, $resolveInfo);
info(&quot;middleware&quot;);
return $result;</code></pre><p><strong>Result as expected:</strong></p><pre><code class="language-text">Paginate Begin
FieldDirective1
FieldDirective2
Paginate End
MyDirective</code></pre><p>The paginate directive runs the field directives itself, so I&apos;m not able to catch that moment with <code>FieldMiddleware</code>. I need to be here:</p><pre><code class="language-php">// PaginateDirective.php
$query = $resolveInfo-&gt;argumentSet-&gt;enhanceBuilder($query, $this-&gt;directiveArgValue(&quot;scopes&quot;, []));</code></pre><h2 id="add-bogus-argument">Add bogus argument</h2><p>I&apos;ve changed my directive resolver to inject an additional argument into the arguments stored in <code>$resolveInfo</code>:</p><pre><code class="language-php">class MyDirective extends BaseDirective implements
    FieldManipulator,
    DefinedDirective,
    FieldMiddleware,
    \Nuwave\Lighthouse\Support\Contracts\ArgBuilderDirective
{
    public function handleField(FieldValue $fieldValue, Closure $next)
    {
        $resolver = $fieldValue-&gt;getResolver();
        return $next(
            $fieldValue-&gt;setResolver(function (
                $root,
                array $args,
                GraphQLContext $context,
                ResolveInfo $resolveInfo
            ) use ($resolver) {
                $argument = new \Nuwave\Lighthouse\Execution\Arguments\Argument();
                $argument-&gt;value = true;
                $argument-&gt;type = \GraphQL\Type\Definition\Type::boolean();
                $argument-&gt;directives-&gt;push($this);
                $resolveInfo-&gt;argumentSet-&gt;arguments[&quot;my_directive_arg&quot;] = $argument;
                info(&quot;MyDirective Begin&quot;);
                $result = $resolver($root, $args, $context, $resolveInfo);
                info(&quot;MyDirective End&quot;);
                return $result;
            })
        );
    }


    public function handleBuilder($builder, $filters)
    {
        info(&quot;MyDirective Builder&quot;);
        return $builder;
    }
}</code></pre><p><strong>And result:</strong></p><pre><code class="language-txt">MyDirective Begin
Pagination Begin
FieldDirective1
FieldDirective2
MyDirective Builder
Paginate End
MyDirective End</code></pre><p>This gives me the ability to modify the Eloquent query just before execution, knowing all the conditions added by the argument directives.</p>]]></content:encoded></item><item><title><![CDATA[User interaction is not allowed]]></title><description><![CDATA[... trying to log in to Docker.]]></description><link>https://kobylinski.co/user-interaction-is-not-allowed/</link><guid isPermaLink="false">6047dbc38f037f00010e4ac2</guid><category><![CDATA[tip]]></category><category><![CDATA[docker]]></category><category><![CDATA[mac]]></category><dc:creator><![CDATA[Marek Kobyliński]]></dc:creator><pubDate>Tue, 09 Mar 2021 20:45:48 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1550527882-b71dea5f8089?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDExfHxrZXl8ZW58MHx8fHwxNjE1MzIyNjk3&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1550527882-b71dea5f8089?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDExfHxrZXl8ZW58MHx8fHwxNjE1MzIyNjk3&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="User interaction is not allowed"><p>... trying to log in to Docker.</p><p>Type:</p><!--kg-card-begin: markdown--><pre><code class="language-sh">security -v unlock-keychain ~/Library/Keychains/login.keychain-db
</code></pre>
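<p>If this happens in a script or CI job where no password dialog can be shown, the keychain can also be unlocked non-interactively by passing the password on the command line (a sketch, not part of the original fix; <code>KEYCHAIN_PASSWORD</code> is an assumed environment variable):</p><pre><code class="language-sh">security unlock-keychain -p &quot;$KEYCHAIN_PASSWORD&quot; ~/Library/Keychains/login.keychain-db</code></pre>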
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Integrate vuepress with moleculer app]]></title><description><![CDATA[How to expose a moleculer API to a vuepress application. It can be useful when some live data should be injected into the documentation.]]></description><link>https://kobylinski.co/integrate-vuepress-and-moleculer/</link><guid isPermaLink="false">5e3178f28f037f00010e4a01</guid><category><![CDATA[recipe]]></category><category><![CDATA[moleculer]]></category><category><![CDATA[vuepress]]></category><dc:creator><![CDATA[Marek Kobyliński]]></dc:creator><pubDate>Wed, 29 Jan 2020 12:42:21 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1491895200222-0fc4a4c35e18?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1491895200222-0fc4a4c35e18?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Integrate vuepress with moleculer app"><p>Install the moleculer app using the CLI:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">npm install -g moleculer-cli
moleculer init project &lt;project-name&gt;
</code></pre>
<!--kg-card-end: markdown--><p>Install vuepress:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">cd &lt;project-name&gt;
npm install vuepress --save-dev
</code></pre>
<!--kg-card-end: markdown--><p>Create a Moleculer mixin:</p><!--kg-card-begin: markdown--><pre><code class="language-js">// mixins/vuepress.js

const { createApp } = require( &apos;vuepress&apos; );
const enabled = process.env.npm_lifecycle_event === &apos;dev&apos;;

let app;
let logger;

module.exports = {
	settings: {
		middleware: enabled,
		...( enabled ? {} : {
			assets: {
				folder: &apos;./public/.vuepress/dist&apos;
			}
		} )
	},
	async started(){
		logger = this.broker.getLogger(&apos;Vuepress&apos;, { svc: &apos;Vuepress&apos;, ver: null });
		if( enabled ){
			const service = this;
			app = createApp({ 
				sourceDir: &apos;public&apos;,
				port: this.settings.port,
				host: this.settings.ip,
				plugins: [[
					( pluginOptions, context ) =&gt; ({
						name: &apos;moleculer&apos;,
						beforeDevServer(express, server) {
							logger.info( `Connecting to vuepress dev server` );
							express.use( service.express() );
						}
					})
				]]
			});

			await app.process();
			await app.dev();
		}
	}
}

</code></pre>
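The `enabled` flag above keys everything off `npm_lifecycle_event`: during `npm run dev` the mixin attaches the VuePress dev server as middleware, otherwise it serves the prebuilt files from `./public/.vuepress/dist`. A minimal sketch of that toggle, factored into a plain function (the function name `vuepressSettings` is mine; the settings shape is taken from the mixin above):

```javascript
// Hypothetical helper mirroring the mixin's settings toggle.
// "dev" is the npm script name the mixin assumes for local development.
function vuepressSettings(lifecycleEvent) {
  const enabled = lifecycleEvent === 'dev';
  return {
    middleware: enabled,
    // Outside of dev mode, point moleculer-web at the static build output.
    ...(enabled ? {} : {
      assets: { folder: './public/.vuepress/dist' }
    })
  };
}

console.log(vuepressSettings('dev'));   // { middleware: true }
console.log(vuepressSettings('start')); // { middleware: false, assets: { folder: './public/.vuepress/dist' } }
```

Run via any other script name (e.g. `npm start`) and the middleware stays off, so ApiGateway falls back to serving the files produced by `vuepress build`.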
<!--kg-card-end: markdown--><p>Add the VuePress mixin to the API service, before the ApiGateway mixin:</p><!--kg-card-begin: markdown--><pre><code class="language-js">// services/api.js

const ApiGateway = require( &apos;moleculer-web&apos; );
const Vuepress = require( &apos;../mixins/vuepress&apos; );

module.exports = {
	name: &apos;api&apos;,
	mixins: [ 
		Vuepress,
		ApiGateway
	],
	settings: {
		port: process.env.PORT || 3000,
		routes: [{
			path: &quot;/api&quot;,
			whitelist: [ &quot;**&quot; ]
		}]
	}
};

</code></pre>
<!--kg-card-end: markdown--><p>Add a build script to package.json:</p><!--kg-card-begin: markdown--><pre><code class="language-json">{
    &quot;scripts&quot;: {
        &quot;build&quot;: &quot;vuepress build public&quot;
    }
}
</code></pre>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Web app setup with Traefik2]]></title><description><![CDATA[Set up Traefik2 as part of a project or as a standalone gateway.]]></description><link>https://kobylinski.co/web-app-setup-with-traefik2/</link><guid isPermaLink="false">5e1b3b0c00d6200001c2527f</guid><category><![CDATA[tip]]></category><category><![CDATA[docker]]></category><category><![CDATA[traefic]]></category><dc:creator><![CDATA[Marek Kobyliński]]></dc:creator><pubDate>Sun, 12 Jan 2020 16:00:06 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1578826890853-66c5001b4e8e?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<ol><li>As part of a project.</li></ol><!--kg-card-begin: markdown--><pre><code class="language-yaml">version: &quot;3.2&quot;
services:
  app:
    build:
      context: .
    environment:
      PORT: 3000
    labels:
      - &quot;traefik.enable=true&quot;
      - &quot;traefik.http.services.app.loadbalancer.server.port=3000&quot;
      - &quot;traefik.http.middlewares.force-sec.redirectscheme.scheme=https&quot;
      - &quot;traefik.http.routers.app.rule=Host(`example.dev`)&quot;
      - &quot;traefik.http.routers.app.entrypoints=web&quot;
      - &quot;traefik.http.routers.app.middlewares=force-sec&quot;
      - &quot;traefik.http.routers.app-sec.rule=Host(`example.dev`)&quot;
      - &quot;traefik.http.routers.app-sec.entrypoints=web-sec&quot;
      - &quot;traefik.http.routers.app-sec.tls=true&quot;
      - &quot;traefik.http.routers.app-sec.tls.certresolver=le&quot;
    networks:
      - default
      - internal
  gateway:
    image: traefik
    container_name: gateway
    restart: always
    command:
      - &quot;--providers.docker=true&quot;
      - &quot;--entrypoints.web.address=:80&quot;
      - &quot;--entrypoints.web-sec.address=:443&quot;
      - &quot;--providers.docker.network=app_default&quot;
      - &quot;--certificatesResolvers.le.acme.tlsChallenge=true&quot;
      - &quot;--certificatesResolvers.le.acme.email=email@example.dev&quot;
      - &quot;--certificatesResolvers.le.acme.storage=/acme/storage.json&quot;
    ports:
      - &quot;80:80&quot;
      - &quot;443:443&quot;
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - acme:/acme
    networks:
      - default
      
networks:
  internal:
  
volumes:
  acme:
</code></pre>
<!--kg-card-end: markdown--><img src="https://images.unsplash.com/photo-1578826890853-66c5001b4e8e?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Web app setup with Traefik2"><p>2. As a standalone gateway.</p><!--kg-card-begin: markdown--><pre><code class="language-yaml">version: &quot;3.2&quot;
services:
  gateway:
    image: traefik
    restart: always
    container_name: gateway
    command:
      - &quot;--providers.docker=true&quot;
      - &quot;--providers.docker.network=gateway_gateway&quot;
      - &quot;--entrypoints.web.address=:80&quot;
      - &quot;--entrypoints.web-sec.address=:443&quot;
      - &quot;--providers.docker.exposedByDefault=false&quot;
      - &quot;--certificatesResolvers.le.acme.tlsChallenge=true&quot;
      - &quot;--certificatesResolvers.le.acme.email=email@example.dev&quot;
      - &quot;--certificatesResolvers.le.acme.storage=/acme/storage.json&quot;
    ports:
      - &quot;80:80&quot;
      - &quot;443:443&quot;
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - acme:/acme
    networks:
      - gateway

networks:
  gateway:
    ipam:
      driver: default

volumes:
  acme:

</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><pre><code class="language-yaml">version: &quot;3.2&quot;
services:
  app:
    build:
      context: .
    environment:
      PORT: 3000
    labels:
      - &quot;traefik.enable=true&quot;
      - &quot;traefik.http.services.app.loadbalancer.server.port=3000&quot;
      - &quot;traefik.http.middlewares.force-sec.redirectscheme.scheme=https&quot;
      - &quot;traefik.http.routers.app.rule=Host(`example.dev`)&quot;
      - &quot;traefik.http.routers.app.entrypoints=web&quot;
      - &quot;traefik.http.routers.app.middlewares=force-sec&quot;
      - &quot;traefik.http.routers.app-sec.rule=Host(`example.dev`)&quot;
      - &quot;traefik.http.routers.app-sec.entrypoints=web-sec&quot;
      - &quot;traefik.http.routers.app-sec.tls=true&quot;
      - &quot;traefik.http.routers.app-sec.tls.certresolver=le&quot;
    networks:
      - internal
      - gateway

networks:
  internal:
  gateway:
    external:
      name: gateway_gateway
</code></pre>
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>