<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Alan Varghese]]></title><description><![CDATA[A mix of short (opinionated) takes, deep dives, and technical breakdowns based on my experiences as a senior engineer who ships, breaks, and learns.]]></description><link>https://blog.alanvarghese.me</link><generator>RSS for Node</generator><lastBuildDate>Thu, 16 Apr 2026 11:59:15 GMT</lastBuildDate><atom:link href="https://blog.alanvarghese.me/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Unlearn faster]]></title><description><![CDATA[When I started engineering seriously, I carried a lot of "correct" patterns in my head.
Microservices were the gold standard, so the real-world project I led the development on had to be microservices]]></description><link>https://blog.alanvarghese.me/unlearn</link><guid isPermaLink="true">https://blog.alanvarghese.me/unlearn</guid><category><![CDATA[engineering]]></category><category><![CDATA[General Advice]]></category><dc:creator><![CDATA[Alan Varghese]]></dc:creator><pubDate>Tue, 03 Mar 2026 17:48:38 GMT</pubDate><content:encoded><![CDATA[<p>When I started engineering seriously, I carried a lot of "correct" patterns in my head.</p>
<p>Microservices were the gold standard, so the real-world project I led had to be microservices. Almost a year in, when the cracks started forming, I made the call to merge everything back into a modular monolith, which improved DX for my small team by nearly 2x.</p>
<blockquote>
<p>If all you have is a hammer, everything looks like a nail.</p>
</blockquote>
<p>I was a REST purist. Then I shipped enough user-facing products to see that strict RESTful API design is a bottleneck when your domain is actions, not resources. Sometimes you just need <code>POST /approve</code>.</p>
<p>The same pattern repeated in many decisions that I had to backtrack on: MongoDB to PostgreSQL, Polyrepos to Monorepos, JWTs to plain old sessions.</p>
<p>That's not to say that there aren't uses for the things I've discarded. Microservices make sense when your teams need independent deployment cycles. JWTs have a clear place in stateless, distributed auth across services.</p>
<p>The unlearning wasn't about those tools being wrong but rather about me being wrong about when to reach for them. The engineering instinct is to learn more. I've found more leverage in <strong>questioning what I already think I know</strong>.</p>
]]></content:encoded></item><item><title><![CDATA[The real unfair advantage is focus]]></title><description><![CDATA["unfair" advantages are often associated with things that are external. Better network, funding, timing. But in practice, the biggest one I've seen is focus.
I recently changed my working hours from 9]]></description><link>https://blog.alanvarghese.me/the-real-unfair-advantage-is-focus</link><guid isPermaLink="true">https://blog.alanvarghese.me/the-real-unfair-advantage-is-focus</guid><dc:creator><![CDATA[Alan Varghese]]></dc:creator><pubDate>Fri, 20 Feb 2026 16:47:29 GMT</pubDate><content:encoded><![CDATA[<p><strong>"unfair" advantages</strong> are often associated with things that are external. Better network, funding, timing. But in practice, the biggest one I've seen is <strong>focus</strong>.</p>
<p>I recently changed my working hours from 9am-5pm to 2pm-10pm. That gave me early mornings completely to myself without all the noise. I still get enough overlap with my team in the evening, but the first half of my day is protected for personal time.</p>
<p>Before making the change, I kept a counter for all the messages, quick doubts, "two-minute" clarifications, and spontaneous discussions throughout a normal working day. I was averaging <strong>16</strong>. These were <strong>individually harmless</strong> interruptions but <strong>collectively destructive</strong>.</p>
<p>That change practically cut the number in half. In return, I got two extended blocks of "focus time": one from early morning till noon for my personal work, and another from noon till night (half of it after everyone has left the office) for my work as a technical lead. These turned out to be the most focused, and in turn most productive, hours I've had in a long time.</p>
<p><strong>Focus is leverage</strong> because it multiplies everything you already have. Skills compound, judgement improves, decisions get cleaner, all because your cognitive surface area is not being constantly fractured.</p>
<p>If you can consistently create long uninterrupted stretches of work in an otherwise distracted environment, your output will look disproportionate.</p>
<p><strong>Related:</strong></p>
<ul>
<li><p><a href="https://www.fastcompany.com/91322048/workers-are-interrupted-up-to-275-times-a-day">Workers are interrupted up to 275 times a day - Fast Company</a></p>
</li>
<li><p><a href="https://news.ycombinator.com/item?id=35459333">Programmer interrupted: The cost of interruption and context switching (2022) | Hacker News</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Vibe-engineering an OpenAPI compatible API authoring tool]]></title><description><![CDATA[As an engineering lead, I spent most of my time architecting systems, writing issues, and translating business requirements into clean technical requirements which involve some form of API documentation and authoring DTOs.
Defining the API as a contr...]]></description><link>https://blog.alanvarghese.me/vibe-engineering-an-openapi-compatible-api-authoring-tool</link><guid isPermaLink="true">https://blog.alanvarghese.me/vibe-engineering-an-openapi-compatible-api-authoring-tool</guid><category><![CDATA[OpenApi]]></category><category><![CDATA[engineering]]></category><category><![CDATA[api]]></category><category><![CDATA[documentation]]></category><category><![CDATA[swagger]]></category><category><![CDATA[Spec-Driven-Development]]></category><dc:creator><![CDATA[Alan Varghese]]></dc:creator><pubDate>Sun, 08 Feb 2026 10:49:30 GMT</pubDate><content:encoded><![CDATA[<p>As an engineering lead, I spent most of my time architecting systems, writing issues, and translating business requirements into clean technical requirements which involve some form of API documentation and authoring DTOs.</p>
<p>Defining the API as a contract before implementing it lets the frontend and backend teams start work in parallel, instead of one waiting on the other for types and API flows.</p>
<p>The <a target="_blank" href="https://www.openapis.org/">OpenAPI Initiative</a> maintains a specification that is very powerful for documenting APIs, allowing developers to write clear, enforceable API contracts that act as a single source of truth for backend engineers, frontend engineers, QA, technical leadership, and now even AI agents.</p>
<p><strong>There are primarily two ways you could create OpenAPI-compliant API documentation</strong>:<br />you either hand-author YAML files or rely on code-first generation with the backend framework of your choice. Both are valid approaches that I’ve seen teams use, but they suffer from very different problems.</p>
<p><strong>Editing YAML files by hand</strong> is, frankly, not an intuitive way to design what APIs should accept and return. As the API surface gets bigger, so does the cognitive cost of keeping up with an increasingly large YAML file. Despite being designed as a more “human-friendly” alternative to JSON, YAML is still hostile to humans and troublesome to write and maintain.</p>
<p>This “barrier” of authoring YAML files is problematic enough that I’ve seen teams avoid writing OpenAPI altogether. Oftentimes, I’ve preferred writing markdown files with embedded TypeScript code blocks for defining API contracts and request/response schemas, simply because it was much easier to do.</p>
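<p>For illustration, an embedded block in one of those markdown specs was usually nothing more than plain request/response types. The endpoint and fields below are entirely hypothetical:</p>

```typescript
// Hypothetical contract for POST /projects, as it would appear in a markdown spec.
interface CreateProjectRequest {
  name: string;                      // 1-100 characters
  visibility: "private" | "public";
}

interface CreateProjectResponse {
  id: string;                        // server-generated UUID
  name: string;
  createdAt: string;                 // ISO 8601 timestamp
}

// A response object that satisfies the contract:
const example: CreateProjectResponse = {
  id: "42",
  name: "demo",
  createdAt: "2026-01-01T00:00:00Z",
};
console.log(example.name);
```

<p>Easy to write, but it lives outside any tooling: nothing validates the backend against it, which is exactly the gap OpenAPI is meant to fill.</p>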
<p>A much more reasonable alternative approach some teams use is <strong>code-first generation</strong>. Frameworks like FastAPI have first-class support for automated OpenAPI generation as you write routes and request/response models. Other popular frameworks like Gin and NestJS have companion packages that do the job insanely well.</p>
<p>But this approach has its own set of problems:</p>
<ul>
<li><p>Your backend engineers are now in charge of the API contract. It can, and will, deviate from what you initially proposed.</p>
</li>
<li><p>On several occasions, you will have to fight the framework to ensure that you get the auto-generated documentation right. This can sometimes be in the form of messy boilerplate, or ugly hacks to show a particular type in the spec and remain compatible with code-generation.</p>
</li>
</ul>
<p>This may not be such a dealbreaker for small teams that move fast and do not consider contract-led development important. In fact, if you can deal with these two issues, the upsides are great: auto-generated specs are an excellent way to guarantee end-to-end type safety between your API and its clients.</p>
<p>Still, the fact remains that code-first generators see API contracts and documentation as an afterthought. A side-effect of well-written backend code.</p>
<h2 id="heading-existing-solutions">Existing Solutions</h2>
<p><em>Are you saying that this is a problem that people haven’t tried solving?</em> Of course not. There are dozens of tools available online that you could use. Even SmartBear (the folks behind Swagger) have created the official <a target="_blank" href="https://editor.swagger.io">Swagger Editor</a>, which is YAML-first and not very intuitive until you get comfortable with the editor. There is <a target="_blank" href="https://stoplight.io">Stoplight</a>, which is heavy, enterprise-first, and has a pricing page! Sure, you could use <a target="_blank" href="https://www.postman.com/">Postman</a>, which works but very clearly isn’t designed for OpenAPI documentation despite supporting it.</p>
<p>I just want a super-simple, preferably open-source, free-to-use API documentation tool. I don’t need accounts, cloud sync, onboarding, pricing, and <a target="_blank" href="https://tonsky.me/blog/needy-programs">all that crap!</a> Is that too much to ask?</p>
<p>The project closest to what I was looking for was a little tool I found on GitHub called <a target="_blank" href="https://github.com/Mermade/openapi-gui">openapi-gui</a>, which is quite nice and supported the functionality I was after. However, it hasn’t been maintained in over three years and hasn’t evolved with the latest additions to the OpenAPI specification.</p>
<p>Unless…</p>
<h2 id="heading-i-could-probably-build-that-every-engineer-ever">“I could probably build that” - every engineer ever</h2>
<p>Having worked with OpenAPI specifications extensively for the past couple of years, and with the power of agentic engineering, I figured I should probably give it a shot and build something that solves the pain points I’ve highlighted: an intuitive, clean, and simple GUI tool that lets me author API contracts, with support for serializing them into OAS YAML files.</p>
<p>I took the assistance of <a target="_blank" href="https://v0.dev/">v0.dev</a> and <a target="_blank" href="https://github.com/features/copilot">GitHub Copilot</a> to scaffold a new project. I didn’t want to “vibe-code” it because OpenAPI itself is a complex spec, and being an engineer, it irks me to fire off a dozen mindless proompts and hit save on some AI slop. No, no, not on my watch you don’t.</p>
<p>I was going to <a target="_blank" href="https://simonwillison.net/2025/Oct/7/vibe-engineering/">vibe-engineer</a> this project. Starting off with a detailed plan and architecture, fully human-in-the-loop workflow. The agent was still writing 90% of the code but I would occasionally go in and make changes to minor things that I could do much faster than the agent because a <em>prompt → think → execute → review → accept</em> cycle to change the color of a button is how you slowly develop AI-induced brain rot.</p>
<h2 id="heading-the-plan">The plan</h2>
<p><strong>What I did not intend to build:</strong> An OpenAPI YAML editor or a Swagger UI clone.</p>
<p>I didn’t want this tool to be YAML-first but instead have its own internal canonical model. This felt like a good-enough abstraction that allows the tool to be independent of OAS and potentially work with any such specification in the future. It also opens room for extensibility beyond the scope of the spec.</p>
<p>Another important consideration was <strong>speed</strong> and <strong>offline-mode</strong>. Being a sucker for performance, I wanted it to be <strong><em>blazingly fast</em></strong>. The OpenAPI YAML files I often work with are easily 10k+ LOC. I didn’t want to build a tool that would crawl to process such huge files.</p>
<p>Instead of editing YAML directly, the spec is represented as a <strong>typed, in-memory graph</strong>. Schemas, routes, parameters, and responses are all nodes with stable IDs. References between them are <strong>typed edges</strong>, not <code>$ref</code> strings.</p>
<p>This means:</p>
<ul>
<li><p>You can’t create broken references. Edges must point to real nodes.</p>
</li>
<li><p><code>O(1)</code> to find “where is this schema used?”</p>
</li>
<li><p>Circular references and invalid relationships are detectable in real time</p>
</li>
</ul>
<p>Internally, the editor stores the entire API surface as interconnected <code>Map&lt;NodeId, Node&gt;</code> collections. Editing a schema or route is just a direct mutation of this graph.</p>
<p>This confines YAML to the boundaries (import/export) as a pure serialization/deserialization layer. The approach offers better performance on large files, off-spec QOL features (grouping, colors, validators) without polluting the output, and simple undo/redo via graph snapshots.</p>
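<p>The real data model lives in the repo, but a minimal sketch of the idea (all type and method names here are hypothetical) could look like this:</p>

```typescript
// Hypothetical sketch of the canonical model: nodes stored in Maps keyed by
// stable IDs, with references as typed edges instead of "$ref" strings.
type NodeId = string;

interface SchemaNode { id: NodeId; name: string; kind: "object" | "array" | "string" }
interface RouteNode { id: NodeId; method: "get" | "post"; path: string; responseSchema: NodeId }

class ApiGraph {
  schemas = new Map<NodeId, SchemaNode>();
  routes = new Map<NodeId, RouteNode>();
  // Reverse index (schema id -> referencing routes) for O(1) "where is this used?"
  private usedBy = new Map<NodeId, Set<NodeId>>();

  addSchema(node: SchemaNode): void {
    this.schemas.set(node.id, node);
  }

  addRoute(node: RouteNode): void {
    // An edge must point at a real node, so broken references can't exist.
    if (!this.schemas.has(node.responseSchema)) {
      throw new Error(`unknown schema: ${node.responseSchema}`);
    }
    this.routes.set(node.id, node);
    const refs = this.usedBy.get(node.responseSchema) ?? new Set<NodeId>();
    refs.add(node.id);
    this.usedBy.set(node.responseSchema, refs);
  }

  whereUsed(schemaId: NodeId): NodeId[] {
    return [...(this.usedBy.get(schemaId) ?? [])];
  }
}
```

<p>Export to YAML is then a single walk over these maps, and undo/redo is a snapshot of them.</p>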
<p>And here is the final product! My recent obsession with neo-brutalist interfaces did make its way into this little tool as well 😅</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770545461213/eca10ed7-c140-4850-8ec9-32516298f8ca.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-a-future-beyond-openapi">A future beyond OpenAPI?</h2>
<p>While I tried to support the majority of features outlined in the OpenAPI spec, there are still some things left out, such as callbacks/webhooks. But I’m very satisfied with what a few hours of agentic engineering could achieve, and I feel like adding that support will be trivial. As I was developing this editor, though, I had some very interesting ideas beyond just a GUI-based OpenAPI editor.</p>
<p>Because OAS allows someone to author a very detailed outline of an API surface, it is an excellent source for deterministic code-generation (into validators like <code>zod</code>/<code>class-validator</code>, models like <code>pydantic</code>, etc.) as well as a context-rich documentation for LLMs. A step in this direction will take this from an API authoring tool to a really powerful high-level architecting space for people doing spec-driven development using AI agents. There’s a lot of potential there!</p>
<hr />
<p>If you’ve read this far, <a target="_blank" href="https://openapi-editor.vercel.app">try it out for yourself</a>! You can also find the repository <a target="_blank" href="https://github.com/waterrmalann/openapi-editor">here</a>.</p>
<p>If you’re doing hobby projects, or your entire pipeline is pure code-gen and you’re doing fine, you’re better off (and probably faster) sticking with that approach. But if you’re a tech lead or a backend engineer who cares about contract-led development, I hope this tool might actually be useful!</p>
]]></content:encoded></item><item><title><![CDATA[You aren't qualified until you've lived the role]]></title><description><![CDATA[We overestimate preparation.
There are parts of a role which you simply cannot simulate. Intelligence and effort alone will never make a fresher "qualified" for a job on day one. You can learn syntax,]]></description><link>https://blog.alanvarghese.me/unqualified</link><guid isPermaLink="true">https://blog.alanvarghese.me/unqualified</guid><category><![CDATA[Experience ]]></category><dc:creator><![CDATA[Alan Varghese]]></dc:creator><pubDate>Thu, 15 Jan 2026 05:00:00 GMT</pubDate><content:encoded><![CDATA[<p>We overestimate preparation.</p>
<p>There are parts of a role which you simply cannot simulate. Intelligence and effort alone will never make a fresher "qualified" for a job on day one. You can learn syntax, solve DSA, read a kazillion architecture blogs, but <strong>you cannot pre-experience responsibility</strong>.</p>
<p>Before my first full-time role, I had been coding for nearly four years. I knew frontend frameworks, backend systems, deployment flows. I could explain tradeoffs clearly, and I could solve DSA like someone interviewing for MAANG. I impressed the interviewer.</p>
<p>The title was "Technical Lead".</p>
<p>I was not a tech lead.</p>
<p>I had no real ownership experience or exposure to managing expectations, or absorbing pressure from above while protecting the team below, or accountability for deadlines that affect revenue, or any of the other endless list of responsibilities I had signed up for.</p>
<p>For the first few months, I was both excited and quietly terrified.</p>
<p>Only after leading a team for several months, making mistakes and then fixing them, did the role begin to feel natural. Decisions that were guesses at first became clearer, and the weight became more manageable. I was finally comfortable calling myself a team lead. I realized that the discomfort I felt in the first few months wasn't a signal that I was wrong for the role. It was the role being learned.</p>
<p>This pattern repeats in a lot of things we do where responsibility is the actual skill being developed — entrepreneurship, parenting, management. You can study all three endlessly and still be unprepared until you're inside them.</p>
<p><strong>There is a minimum exposure time required before identity catches up with title.</strong></p>
<p>If you have the fundamentals and the willingness to adapt under pressure, waiting until you feel fully qualified is a trap.</p>
]]></content:encoded></item><item><title><![CDATA[Race conditions in async code]]></title><description><![CDATA[What do you think is wrong with this particular block of code?
async function createEntity(name: string) {
  const existing = await this.userRepo.findOne({ where: { name } });
  if (existing) throw new ConflictException('Name already exists');
  awai...]]></description><link>https://blog.alanvarghese.me/race-conditions-in-async-code</link><guid isPermaLink="true">https://blog.alanvarghese.me/race-conditions-in-async-code</guid><category><![CDATA[race-condition]]></category><category><![CDATA[asynchronous JavaScript]]></category><category><![CDATA[Bugs and Errors]]></category><category><![CDATA[General Programming]]></category><dc:creator><![CDATA[Alan Varghese]]></dc:creator><pubDate>Wed, 16 Jul 2025 18:30:00 GMT</pubDate><content:encoded><![CDATA[<p>What do you think is wrong with this particular block of code?</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// excerpt from a service class</span>
<span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-title">createEntity</span>(<span class="hljs-params">name: <span class="hljs-built_in">string</span></span>) </span>{
  <span class="hljs-keyword">const</span> existing = <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.userRepo.findOne({ where: { name } });
  <span class="hljs-keyword">if</span> (existing) <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> ConflictException(<span class="hljs-string">'Name already exists'</span>);
  <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.userRepo.save({ name });
}
</code></pre>
<p>It looks fine, works fine in most cases, and probably passed your tests. I’ve seen this type of logic in various code reviews, especially from junior developers.</p>
<p>It’s also the kind of logic you’ll find sprinkled all over production systems. Harmless little <code>findOne</code> or <code>SELECT</code> checks followed by some <code>save</code> call. And 99.9% of the time, it’ll do exactly what it says on the tin.</p>
<p><strong>Until it doesn’t.</strong></p>
<p>As you may have ascertained from the title of this post, there is a race condition hidden in this perfectly normal-looking code. Most people assume that JavaScript being single-threaded makes it safe from what traditionally causes race conditions in multi-threaded languages, but single-threaded ≠ synchronous: every <code>await</code> is a point where the event loop can switch to another request.</p>
<p>This particular kind is called <strong>TOCTOU: Time of Check to Time of Use</strong>, a pretty self-explanatory name.</p>
<p>The bug lies in the window of opportunity between these two lines:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> existing = <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.userRepo.findOne({ where: { name } });
<span class="hljs-comment">// &lt;-- Context switch happens here</span>
<span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.userRepo.save({ name });
</code></pre>
<p>Suppose two requests come in at the same time with the same name, maybe from a user double-clicking the button, or just two people picking the same name:</p>
<ul>
<li><p>Both requests hit <code>findOne()</code></p>
</li>
<li><p>Both see nothing</p>
</li>
<li><p>Both proceed to <code>save()</code></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750929636676/47755e50-6f76-4043-9365-d2eea5ee9902.png" alt class="image--center mx-auto" /></p>
<p>Now you’ve got two users with the same name, or your DB throws a 500 because of a violated unique constraint (which is close to the best possible scenario, except for the 500 part).</p>
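<p>You don’t even need a database to reproduce the interleaving. Here’s a minimal, self-contained sketch (a hypothetical in-memory store standing in for the repo, with simulated I/O latency) where both concurrent calls pass the check:</p>

```typescript
// Hypothetical in-memory stand-in for the repo, with simulated I/O latency.
const rows = new Set<string>();
const io = () => new Promise((resolve) => setTimeout(resolve, 10));

async function createEntity(name: string): Promise<string> {
  const existing = rows.has(name); // time of check
  await io();                      // the event loop runs the other request here
  if (existing) return "conflict";
  rows.add(name);                  // time of use
  return "created";
}

// Two "requests" with the same name arrive concurrently:
Promise.all([createEntity("alice"), createEntity("alice")]).then((results) => {
  console.log(results); // [ 'created', 'created' ]: the race in action
});
```

<p>Both calls run their synchronous check before either insert lands, so both report success.</p>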
<p>Beginners, and even some experienced devs, get used to writing imperative flows that “just work”. The logic is sound in a vacuum but can’t hold up under concurrency, traffic spikes, or retries.</p>
<p><strong>Race conditions like these are everywhere.</strong></p>
<p>They are so easy to miss because they only surface on the rare occasions when the stars align, but they also cause the most trouble by sneaking into places you wouldn’t expect. Anywhere with shared state, concurrent access, and a lack of coordination is an opportunity for a race.</p>
<p>Obviously, the TOCTOU issue I introduced earlier is quite easy to solve. You could either enforce uniqueness at the DB level and make sure the resulting exception is handled by the server, or be proactive and make the operation atomic using a transaction.</p>
<p>But not all race conditions involve a database and they’re not always this easy to reason about. Some common examples include:</p>
<ul>
<li><p>Syncing with a third-party API</p>
</li>
<li><p>Firing off webhooks or processing them</p>
</li>
<li><p>Processing retries</p>
</li>
<li><p>Scheduling jobs with CRON or queue workers</p>
</li>
<li><p>Updating cache that might be stale by the time it gets read</p>
</li>
</ul>
<p>And your system starts behaving in unexpected, hard-to-reproduce ways in the form of silent failures, double charges, missed events, phantom data, and the like.</p>
<p><strong>There is no silver bullet</strong> that fixes every race condition, but in many cases:</p>
<ul>
<li><p>Enforcing idempotency, short-circuiting repeat requests</p>
</li>
<li><p>Using atomic job locks (e.g., Redis <code>SETNX</code> with expiry)</p>
</li>
<li><p>Versioning (ETags, <code>updatedAt</code>, etc.) to detect stale updates</p>
</li>
<li><p>Debouncing/deduplicating at the client/handler level</p>
</li>
<li><p>Preferring command-style APIs over partial updates (e.g., PUT over PATCH)</p>
</li>
</ul>
<p>are some of the many ways you can mitigate such conditions.</p>
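<p>As a concrete illustration of the locking idea, here is a minimal in-process sketch of a per-key lock built from a promise chain per name (the helper names are hypothetical; across multiple server instances you would still need a distributed lock like the Redis one mentioned above):</p>

```typescript
// Per-key promise chain: operations for the same key run strictly one
// after another, so check-and-insert becomes atomic per name.
const chains = new Map<string, Promise<unknown>>();

function withLock<T>(key: string, fn: () => Promise<T>): Promise<T> {
  const prev = chains.get(key) ?? Promise.resolve();
  const next = prev.then(fn, fn); // run after the previous op, even if it failed
  chains.set(key, next.catch(() => undefined)); // keep the chain alive on errors
  return next;
}

// Revisiting createEntity: the second request now observes the first
// one's insert and reports a conflict instead of creating a duplicate.
const names = new Set<string>();
const createSafely = (name: string) =>
  withLock(name, async () => {
    if (names.has(name)) return "conflict";
    await new Promise((r) => setTimeout(r, 10)); // simulated I/O inside the window
    names.add(name);
    return "created";
  });

Promise.all([createSafely("alice"), createSafely("alice")]).then((results) => {
  console.log(results); // [ 'created', 'conflict' ]
});
```

<p>This trades throughput for correctness on a single key, which is usually the right trade for writes.</p>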
<p>The takeaway isn’t to use a particular library or technique. <strong>Anytime you have a shared state and concurrency, you must assume the worst and design for it defensively, not optimistically.</strong></p>
]]></content:encoded></item><item><title><![CDATA[Your website has a carbon footprint too.]]></title><description><![CDATA[Every page load burns electricity — servers, networks, your user's device. That electricity usually comes from fossil fuels. An average webpage generates ~0.5g of CO2 per visit. Multiply by trillions ]]></description><link>https://blog.alanvarghese.me/carbon-footprint-of-web</link><guid isPermaLink="true">https://blog.alanvarghese.me/carbon-footprint-of-web</guid><category><![CDATA[climate change]]></category><category><![CDATA[Web Development]]></category><category><![CDATA[full stack]]></category><category><![CDATA[engineering]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Alan Varghese]]></dc:creator><pubDate>Sun, 01 Jun 2025 18:30:34 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/pONBhDyOFoM/upload/536e78c6d91f4d0bb9b1396f755e0342.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Every page load burns electricity — servers, networks, your user's device. That electricity usually comes from fossil fuels. An <a href="https://dodonut.com/blog/what-is-the-website-carbon-footprint/#:~:text=The%20average%20web%20page%20tested%20produces%20approximately%200.5%20grams%20CO2%20per%20page%20view.">average webpage generates ~0.5g of CO2</a> per visit. Multiply by trillions of visits and, yeah, <strong>the web has a carbon problem</strong>.</p>
<div>
<div>💡</div>
<div><strong>Fun fact:</strong> <a target="_self" rel="noopener noreferrer nofollow" class="text-primary underline underline-offset-2 hover:text-primary/80 cursor-pointer" href="https://sustainablewebdesign.org/estimating-digital-emissions/#:~:text=Based%20on%20combined%20information%20from,the%20total%20system%20energy%20used" style="pointer-events:none">your user’s device often uses more energy rendering your site</a> than your server does serving it. Especially if you’re sending them half a megabyte of JavaScript to animate a button.</div>
</div>

<p>JavaScript libraries tend to be the biggest culprit here. React adds ~30kb gzipped. Angular breaks 100kb. A vanilla JS equivalent might be a tenth of that size, and actually emits 45% less CO2 per operation than React, per <a href="https://www.diva-portal.org/smash/get/diva2:1768632/FULLTEXT01.pdf">this study on the carbon footprint of JS</a>. All the abstractions we build on top of for the sake of maintainability and better DX cost the atmosphere something, it seems.</p>
<p>Am I trying to make up an imaginary problem here? Here's a dramatic real-world example: developer Danny van Kooten famously shaved 20 kB off a WordPress plugin (removing an unnecessary JS dependency) that was used on ~2 million sites. That tiny 20 kB reduction, multiplied across all those sites’ monthly pageviews, <a href="https://www.dannyvankooten.com/blog/2020/website-carbon-emissions/">reduced global emissions by an estimated 59,000 kg of CO2 per month</a>, equivalent to taking <strong>86 flights from Amsterdam to New York each month</strong> (sources in the linked article). One guy. Twenty kilobytes. Sixty tons of carbon a month. Because it was used on millions of sites. Your project may not have that reach, but the math still applies at smaller scales.</p>
<p>You can use tools like <a href="https://www.websitecarbon.com/">Website Carbon Calculator</a> or <a href="https://ecograder.com">Ecograder</a>, which let you plug in a URL to get an estimate of the site’s emissions. Calculating this is a little more complex for web applications than for websites, but you get the idea.</p>
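<p>The back-of-envelope math behind these tools is simple enough to sketch yourself. The coefficients below are assumptions on my part (roughly in line with the Sustainable Web Design model linked in the sources), so treat the numbers as illustrative only:</p>

```typescript
// Assumed coefficients, not authoritative: real models break energy use down
// by network, data center, and device, and by grid region.
const KWH_PER_GB = 0.81;        // assumed kWh of energy per GB transferred
const GRID_G_CO2_PER_KWH = 442; // assumed global-average grams of CO2 per kWh

function monthlyCo2Grams(pageBytes: number, monthlyViews: number): number {
  const gbTransferred = (pageBytes * monthlyViews) / 1e9;
  return gbTransferred * KWH_PER_GB * GRID_G_CO2_PER_KWH;
}

// What trimming 20 kB saves across 10 million monthly page views:
console.log(monthlyCo2Grams(20_000, 10_000_000)); // roughly 72 kg of CO2 per month
```

<p>Swap in your own page weight and traffic to see where your site lands.</p>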
<p>I'm not saying you should drop all the abstractions and dependencies we pile on top of our projects. Ditching a framework that genuinely makes your work faster and more maintainable isn't practical. But there are some quick wins with compounding impact:</p>
<ul>
<li><p>Compress your images, minify your HTML/CSS/JS bundles, drop unused fonts and assets.</p>
</li>
<li><p>Lazy-load content, avoid autoplaying videos, reduce network round-trips.</p>
</li>
<li><p>Consider lighter frameworks like Svelte or Qwik, which minimize runtime JS.</p>
</li>
<li><p>Consider serverless and green hosting</p>
</li>
</ul>
<p>The best part about implementing these changes is that your applications will load and run faster, which is a win-win for your UX. You're not going to save the planet with a code refactor. But the web was built one bad decision at a time. Cleaning it up probably works the same way.</p>
<h3>Sources if you like reading:</h3>
<ul>
<li><p><a href="https://sci.greensoftware.foundation/">Software Carbon Intensity (SCI) Specification</a></p>
</li>
<li><p><a href="https://dodonut.com/blog/what-is-the-website-carbon-footprint/#:~:text=,Website%20Carbon%20Calculator">Dodonut Blog — <em>What is website carbon footprint?</em></a></p>
</li>
<li><p><a href="https://www.diva-portal.org/smash/get/diva2:1768632/FULLTEXT01.pdf">Malin Wadholm, Green and Sustainable JavaScript (MSc Thesis, 2023)</a></p>
</li>
<li><p><a href="https://sustainablewebdesign.org/estimating-digital-emissions/">Sustainable Web Design — What is the Sustainable Web Design Model?</a></p>
</li>
<li><p><a href="https://gist.github.com/Restuta/cda69e50a853aa64912d">The cost of JS frameworks — Gzip bundle sizes for React, Vue, Angular, etc.</a></p>
</li>
<li><p><a href="https://www.cpsmi.com/blog/energy-efficiency-in-programming-languages/#:~:text=The%20study%20also%20found%20significant,interpreted%20languages%20used%202%2C365%20joules">Energy usage by programming languages — Portugal university study (2017)</a></p>
</li>
<li><p><a href="https://rootwebdesign.studio/articles/tools-for-calculating-your-websites-co2-emissions">Mightybytes Ecograder — Tools for calculating your website’s CO2 emissions</a></p>
</li>
<li><p><a href="https://gitnation.com/contents/digital-ecology-how-can-you-mitigate-the-carbon-footprint-of-websites">Katarzyna Wojdalska (ec0lint) — How Can You Mitigate the Carbon Footprint of Websites?</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Version control your motivation]]></title><description><![CDATA[Look, I’m not going to pretend like I’ve cracked the code to productivity or motivation. I’ve abandoned more projects than I’ve finished - ideas that started with excitement and died somewhere between “this is going to be epic” and “meh, I’ll get aro...]]></description><link>https://blog.alanvarghese.me/version-control-your-motivation</link><guid isPermaLink="true">https://blog.alanvarghese.me/version-control-your-motivation</guid><category><![CDATA[motivation]]></category><category><![CDATA[advice]]></category><category><![CDATA[side project]]></category><dc:creator><![CDATA[Alan Varghese]]></dc:creator><pubDate>Tue, 27 May 2025 13:48:01 GMT</pubDate><content:encoded><![CDATA[<p>Look, I’m not going to pretend like I’ve cracked the code to productivity or motivation. I’ve abandoned more projects than I’ve finished - ideas that started with excitement and died somewhere between “this is going to be epic” and “meh, I’ll get around to it later”</p>
<p>If you build things - games, apps, side-projects of your own, you probably know what I’m talking about. Unless you’re convinced the idea is some billion-dollar unicorn, the motivation curve usually crashes hard after the honeymoon phase. I have one too many potentially cool projects that have become abandonware this way.</p>
<p>But recently, I started doing something which helped me. Not because I forced myself to push through, or because I read a book, or watched some motivational video on 2x. It was something… smaller.</p>
<h3 id="heading-i-just-started-documenting-everything-i-built">I just started <strong>documenting everything I built</strong>.</h3>
<p>From <strong>day zero</strong>, be it screenshots, short clips, or random work-in-progress footage. Nothing fancy, and no plans to share it either. It was private self-documentation more than anything - a log of how the thing was taking shape.</p>
<p>But every time I hit a slump - when the excitement wore off and I didn’t feel like continuing - I’d look back at that footage, and it helped, every time.</p>
<p>Seeing how far I had come - how much I had figured out, how big the difference was (from something rather basic and ugly-looking to a polished version of it) - made me <em>want</em> to keep going. It was no longer a retrospective in my head; I could <em>see</em> the evolution of the work, the fact that I’d taken something from zero to wherever it was now.</p>
<p>I’m aware that we all <em>know</em> we started from scratch. But watching it from a third-person view is different - seeing the Unity scene turn into a moving character that then gets placed in an environment, or seeing my first preview of the dashboard morph into something worth looking at. Undeniable, visual progress.</p>
<p>Suddenly, it feels wrong to let it go. Like you’re doing a disservice to your past self who put in the work.</p>
<p>And as an additional bonus - I now have hours of development footage I could easily edit and turn into videos. No pressure, but if I ever want to post a devlog or show someone how a thing was built, I already have the material.</p>
<p>Ultimately, I’m not saying that this is <em>the</em> answer.</p>
<p>There are thousands of tips out there by productivity gurus on how to stay motivated, but this actually worked for me, genuinely. And if you’re someone who loves building stuff but struggles to finish, maybe this could help you too.</p>
<p>Just hit record next time. Not for the world, for the version of you that refuses to quit :)</p>
]]></content:encoded></item><item><title><![CDATA[Reducing blast radius when designing APIs]]></title><description><![CDATA[I’ve been designing APIs for almost a decade now and one of the things that I frequently see developers get wrong is designing APIs that are stiff and tightly coupled to their clients.
Modern software demands velocity: new features, experiments,...]]></description><link>https://blog.alanvarghese.me/reducing-blast-radius-when-designing-apis</link><guid isPermaLink="true">https://blog.alanvarghese.me/reducing-blast-radius-when-designing-apis</guid><category><![CDATA[APIs]]></category><category><![CDATA[server-driven-ui]]></category><category><![CDATA[best practices]]></category><dc:creator><![CDATA[Alan Varghese]]></dc:creator><pubDate>Tue, 29 Apr 2025 18:30:00 GMT</pubDate><content:encoded><![CDATA[<p>I’ve been designing APIs for almost a decade now, and one of the things I frequently see developers get wrong is designing APIs that are stiff and tightly coupled to their clients.</p>
<p>Modern software demands velocity: new features, experiments, and ever-evolving user expectations. This post distills a hard-earned lesson in building APIs that welcome change while minimizing cross-stack regressions.</p>
<p>The <em>blast radius</em> of an API is the scope of impact it has across the stack. In systems where the clients are tightly coupled to the API, a poorly scoped backend change that mutates the response format, removes a field or introduces other side effects can unintentionally cripple consuming clients.</p>
<p>Sure, API versioning helps avoid regressions, but it sidesteps the root problem: <strong>a large blast radius</strong>. Modern APIs assume the “frontend” will handle it, but if you really want to build resilient, <a target="_blank" href="https://htmx.org/essays/how-did-rest-come-to-mean-the-opposite-of-rest/">RESTful APIs</a>, the server should at least partially drive the state.</p>
<p>I’ll illustrate this with a trivial example: suppose you want to aggregate some data for analytics and send it to an admin dashboard that renders some overview cards. Here’s the usual suspect:</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"usersCount"</span>: <span class="hljs-number">1340</span>,
    <span class="hljs-attr">"totalSales"</span>: <span class="hljs-number">4299</span>,
    <span class="hljs-attr">"activeSessions"</span>: <span class="hljs-number">84</span>
}
</code></pre>
<p>Any consuming frontend client would typically pass this data into a shared card component and move on, hard-coding a label for each aggregate.</p>
<p>But what happens if we change <code>usersCount</code> to <code>totalUsersCount</code> and add an additional <code>totalOrders</code> field to the API’s response payload? Any consuming client will break - some won’t recognize the renamed field, others won’t display the new data at all.</p>
<p>Sure, clients could mitigate part of this through well-implemented error handling or by abstracting away the API, but the core issue remains. Ideally, the API should’ve returned an array of generic card descriptors instead:</p>
<pre><code class="lang-json">{    
    <span class="hljs-attr">"overview"</span>: [
        { <span class="hljs-attr">"label"</span>: <span class="hljs-string">"Users"</span>, <span class="hljs-attr">"value"</span>: <span class="hljs-string">"1,340"</span> },
        { <span class="hljs-attr">"label"</span>: <span class="hljs-string">"Total Sales"</span>, <span class="hljs-attr">"value"</span>: <span class="hljs-string">"$4,299"</span> },
        { <span class="hljs-attr">"label"</span>: <span class="hljs-string">"Active Sessions"</span>, <span class="hljs-attr">"value"</span>: <span class="hljs-string">"84"</span> }
    ]
}
</code></pre>
<p>The consuming client can simply loop through the array and render each label and value into a card component. Notice how we solved multiple issues here:</p>
<ul>
<li><p>The backend now decides what label to call the card.</p>
</li>
<li><p>The backend also decides how the numbers should be formatted.</p>
</li>
<li><p>The frontend automatically adjusts to any additions, removals, or edits in the overview aggregates.</p>
</li>
<li><p>The frontend logic remains minimal and stable. No redeployment is needed for any change here.</p>
</li>
</ul>
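<p>To make it concrete, the client side of this contract is tiny. Here’s a minimal TypeScript sketch - the markup shape and function name are illustrative, not from any real component library:</p>

```typescript
// The only contract the client knows: a label and a display-ready value.
type OverviewCard = { label: string; value: string };

// Render whatever the server sends; added, renamed, or removed cards
// require no client change and no redeployment.
const renderOverview = (overview: OverviewCard[]): string =>
  overview
    .map((c) => `<div class="card"><h4>${c.label}</h4><span>${c.value}</span></div>`)
    .join("");
```

<p>Whether this is a string template, a React map, or a native list adapter, the point is the same: the loop never changes when the data does.</p>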
<p>Of course, this is not the ultimate server-driven response payload, and you could do even better, but we’ve greatly reduced the impact surface of changes. The cost and time required to change and redeploy really add up when you have multiple clients, which is often the case with cross-platform applications that work across Web, Android, and iOS - especially mobile, where release cycles typically take days.</p>
<p>As with any design decision, going server-driven has its fair share of tradeoffs. Even in the example that I showed you, there are notable concerns:</p>
<ul>
<li><p>The client loses control over the presentation. In real-world cases, the frontend may need to determine formatting based on locale or user preferences, which won’t work if the backend already sends a pre-formatted string.</p>
</li>
<li><p>Particular to numbers, reversible UI becomes a challenge: if a slider changes the value of a number you received, you can’t reliably operate on a pre-formatted string while keeping the format consistent.</p>
</li>
<li><p>Internationalization (i18n) and user customization also become more difficult unless you’re very deliberate.</p>
</li>
</ul>
<p>But these are not dead ends. In most real-world systems, it’s standard for the backend to be aware of user preferences like locale, time zone, and language, among other properties. In that case, you can return an appropriately tailored response.</p>
<p>But even if that context is not available, there are alternatives. For instance, with the number formatting issue: The backend can establish an interface with the clients for how numbers will be exchanged.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">type</span> UIBaseNumber = {
   value: <span class="hljs-built_in">number</span>;
   locale: <span class="hljs-built_in">string</span>;  <span class="hljs-comment">// IETF BCP 47 language tag, eg. en-US</span>
   rounding?: <span class="hljs-built_in">number</span>; <span class="hljs-comment">// number of decimal places to round to</span>
}

<span class="hljs-keyword">type</span> UIPlainNumber = UIBaseNumber &amp; { format: <span class="hljs-string">"plain"</span> }; <span class="hljs-comment">// eg. 36,200</span>
<span class="hljs-keyword">type</span> UIPercentage = UIBaseNumber &amp; { format: <span class="hljs-string">"percentage"</span> }; <span class="hljs-comment">// eg. 28%</span>
<span class="hljs-keyword">type</span> UIOrdinal = UIBaseNumber &amp; { format: <span class="hljs-string">"ordinal"</span> }; <span class="hljs-comment">// eg. 3rd, 4th</span>
<span class="hljs-keyword">type</span> UIAmount = UIBaseNumber &amp; { 
    format: <span class="hljs-string">"currency"</span>; 
    currency: <span class="hljs-built_in">string</span>;  <span class="hljs-comment">// ISO 4217 currency string (eg. USD) or the symbol (eg. $)</span>
}; <span class="hljs-comment">// eg. $2,000</span>

<span class="hljs-keyword">type</span> UINumber = UIPlainNumber | UIPercentage | UIOrdinal | UIAmount;

<span class="hljs-comment">// const formatNumber = (number: UINumber) =&gt; string;</span>
</code></pre>
<p>Once standardized, this interface rarely needs to change. Frontend clients can implement well-tested formatting functions that turn any data conforming to this interface into a display-ready string. This way, clients reap the rewards of server-driven UI while retaining flexibility for any client-side logic.</p>
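<p>As a sketch of what such a client-side formatter could look like, here’s one built on the standard <code>Intl</code> APIs. Two assumptions on my part: the ordinal suffix table is English-only, and <code>currency</code> arrives as an ISO 4217 code rather than a symbol:</p>

```typescript
// UINumber types as defined above, repeated so this sketch is self-contained.
type UIBaseNumber = { value: number; locale: string; rounding?: number };
type UINumber =
  | (UIBaseNumber & { format: "plain" })
  | (UIBaseNumber & { format: "percentage" })
  | (UIBaseNumber & { format: "ordinal" })
  | (UIBaseNumber & { format: "currency"; currency: string });

const formatNumber = (n: UINumber): string => {
  // Honor the server's rounding hint if present.
  const digits: Intl.NumberFormatOptions =
    n.rounding !== undefined
      ? { minimumFractionDigits: 0, maximumFractionDigits: n.rounding }
      : {};
  switch (n.format) {
    case "plain":
      return new Intl.NumberFormat(n.locale, digits).format(n.value);
    case "percentage":
      // style "percent" multiplies by 100, so scale the raw value down first.
      return new Intl.NumberFormat(n.locale, { style: "percent", ...digits }).format(n.value / 100);
    case "currency":
      return new Intl.NumberFormat(n.locale, {
        style: "currency",
        currency: n.currency,
        ...digits,
      }).format(n.value);
    case "ordinal": {
      // English-only suffixes; a real client would localize this table.
      const suffixes: Record<string, string> = { one: "st", two: "nd", few: "rd", other: "th" };
      const rule = new Intl.PluralRules("en", { type: "ordinal" }).select(n.value);
      return `${n.value}${suffixes[rule] ?? "th"}`;
    }
  }
};
```

<p>Swap the suffix table for a CLDR-backed lookup if you need localized ordinals; everything else falls out of <code>Intl</code> for free.</p>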
<p>You can get creative with it and apply the same underlying principle to most of the edge cases you encounter when going server-driven. Ultimately, presentation logic should remain the client’s domain wherever user experience matters deeply.</p>
<p>A variant of this idea on steroids is often called <a target="_blank" href="https://medium.com/androidiots/mastering-sdui-a-deep-dive-into-server-driven-ui-8329ad90ab44">Server-Driven UI</a>. It introduces a lot of complexity of its own, but we can take the “Server-Driven” part and run with it, which is good enough for most cases.</p>
<p>The goal isn’t to build the perfect API; it’s to build one that can survive change without wrecking everything downstream. Tight coupling kills velocity, but a small shift in control - letting the server drive structure - can pay massive dividends over time.</p>
]]></content:encoded></item><item><title><![CDATA[Your users aren't IP addresses.]]></title><description><![CDATA[Let’s get this out of the way first:
I’m not saying IP-based rate limiting is bad. Sometimes it’s the only thing you can do. But if you’re still defaulting to IPs every time without thinking twice, you’re doing your users and yourself a disservice.
Ther...]]></description><link>https://blog.alanvarghese.me/your-users-arent-ip-addresses</link><guid isPermaLink="true">https://blog.alanvarghese.me/your-users-arent-ip-addresses</guid><category><![CDATA[ratelimit]]></category><category><![CDATA[APIs]]></category><category><![CDATA[backend]]></category><category><![CDATA[engineering]]></category><dc:creator><![CDATA[Alan Varghese]]></dc:creator><pubDate>Tue, 11 Mar 2025 18:30:00 GMT</pubDate><content:encoded><![CDATA[<p><strong>Let’s get this out of the way first:</strong></p>
<p>I’m <em>not</em> saying IP-based rate limiting is bad. Sometimes it’s the only thing you can do. But if you’re still defaulting to IPs every time without thinking twice, you’re doing your users and yourself a disservice.</p>
<p>There are better ways. Smarter ways. And in most cases, more <em>accurate</em> ways.</p>
<p><strong>IP-based limits kinda suck:</strong> Sure, rate limiting by IP looks simple enough. It’s easy to implement, gets the job done, and your reverse proxy probably already has a config for it. But IPs are flaky, shared, and frankly, just not a reliable way to identify a person anymore.</p>
<p>Just to name a few areas where traditional IP-based abuse prevention fails:</p>
<ul>
<li><p><strong>NATs and Mobile Networks:</strong> Thousands of users behind a cellular carrier or corporate proxy may appear under a single IP.</p>
</li>
<li><p><strong>Shared Wi-Fi:</strong> Coffee shops, libraries, or events with public Wi-Fi can skew IP uniqueness.</p>
</li>
<li><p><strong>Dynamic IPs and VPNs:</strong> Many users rotate through IPs or tunnel through shared VPNs, making IPs transient.</p>
</li>
</ul>
<p>And here’s another common case I’ve seen in production environments: even your own infra might be screwing you. Are you using a chain of reverse proxies, doing Backend-For-Frontend (BFF, e.g. Next.js → API server), or running a custom gateway layer?</p>
<p>Then you’re not even getting the <em>real</em> client IP unless you explicitly forward it via non-standard headers like <code>X-Forwarded-For</code> or <code>X-Real-IP</code>. And if you don’t know what you’re doing and didn’t <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Forwarded-For#security_and_privacy_concerns">configure it to spec</a>, any client can easily <a target="_blank" href="https://www.stackhawk.com/blog/do-you-trust-your-x-forwarded-for-header">spoof the headers</a> - and now you’re in deeper shit.</p>
<p>In any of these cases, you risk false positives that block innocent users and deliver a poor UX - plus you have weak rate limiting that can easily be bypassed with VPNs or IP rotation.</p>
<h3 id="heading-identify-what-you-actually-want-to-limit">Identify what you actually want to limit:</h3>
<p>Your intent behind rate limiting should guide implementation. Whether you want to prevent abuse from bad actors, protect your resources, or enforce fair-use policies per account, token, or session - these are user-centric goals. So your limiter must be identity-aware.</p>
<blockquote>
<p>Is this specific user doing too much, too fast?</p>
</blockquote>
<p>And the best way to answer that isn’t “What IPs are they on?” but:</p>
<ul>
<li><p>Who are they? (user identifier)</p>
</li>
<li><p>What token are they using? (API Key, JWTs)</p>
</li>
<li><p>How are they interacting? (Device IDs, Session, Cookies, etc.)</p>
</li>
</ul>
<p>If you’ve got a logged-in user, great - limit based on their ID.<br />If it’s an API? Use the token.<br />If they’re anonymous? Fine, <em>then</em> maybe IP is your fallback - <em>but</em> that’s fallback, not default.</p>
<p>If you get this right, you tie abuse to the actual bad actor, not collateral victims.</p>
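<p>In code, that priority order is only a few lines. A hedged sketch - the field names here are illustrative, not any real framework’s API:</p>

```typescript
// Identity signals we may have extracted from the request, strongest first.
type RequestIdentity = {
  userId?: string;    // authenticated user
  apiToken?: string;  // API key / JWT subject
  sessionId?: string; // anonymous but cookied session
  ip: string;         // always available, least reliable
};

// Pick the strongest available signal; IP is the fallback, not the default.
const rateLimitKey = (req: RequestIdentity): string => {
  if (req.userId) return `user:${req.userId}`;
  if (req.apiToken) return `token:${req.apiToken}`;
  if (req.sessionId) return `session:${req.sessionId}`;
  return `ip:${req.ip}`;
};
```

<p>The key then feeds whatever counter you already have - Redis, an in-memory bucket, or your gateway’s limiter.</p>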
<h3 id="heading-hybrid-is-smarter">Hybrid is smarter</h3>
<p>You don’t need to pick one hammer for every nail. Use layers:</p>
<ul>
<li><p>Logged-in user: limit per user ID.</p>
</li>
<li><p>API requests: limit per token</p>
</li>
<li><p>Anonymous traffic: maybe IP + some sort of fingerprint</p>
</li>
<li><p>Set rules to detect sudden spikes in usage and suspicious access patterns.</p>
</li>
<li><p>Listen for signals from abuse-prevention layers.</p>
</li>
</ul>
<p>Good systems adapt based on context and behavior. Bad ones just frustrate your users because they happen to be using a shared IP.</p>
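<p>The limiter itself doesn’t care what the key is. Here’s a minimal in-memory token-bucket sketch - the capacity and refill rate are made-up numbers, and a production limiter would typically live in Redis or at the edge:</p>

```typescript
// Minimal in-memory token bucket, keyed by identity instead of IP.
type Bucket = { tokens: number; last: number };

const CAPACITY = 10;      // burst allowance (illustrative)
const REFILL_PER_SEC = 1; // steady-state requests per second (illustrative)

const buckets = new Map<string, Bucket>();

function allow(key: string, now: number = Date.now()): boolean {
  const b = buckets.get(key) ?? { tokens: CAPACITY, last: now };
  // Refill proportionally to elapsed time, capped at capacity.
  b.tokens = Math.min(CAPACITY, b.tokens + ((now - b.last) / 1000) * REFILL_PER_SEC);
  b.last = now;
  const ok = b.tokens >= 1;
  if (ok) b.tokens -= 1;
  buckets.set(key, b);
  return ok;
}
```

<p>Key it with a user ID, API token, or session, and only fall back to the IP for anonymous traffic.</p>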
<h3 id="heading-still-ip-limiting-isnt-always-dumb">Still, IP limiting isn’t always dumb.</h3>
<p>Let’s be fair: There <em>are</em> legit cases where IP limiting is your best or only shot,</p>
<ul>
<li><p>You’re under a DDoS</p>
</li>
<li><p>You don’t have auth yet (think landing pages or pre-login APIs)</p>
</li>
<li><p>You’re rate limiting at the CDN or edge layer</p>
</li>
</ul>
<p>Totally fine. Just don’t build your whole defense strategy around it.</p>
<h3 id="heading-tldr">TL;DR</h3>
<p><strong>IP addresses ≠ users</strong></p>
<p>They’re guesswork at best and actively misleading at worst.</p>
<p>If you actually care about fairness, abuse prevention, and UX - rate limit the <em>user</em>, not the wire they happen to be using that day. Use IPs only when you <em>have</em> to and always try to do one step better.</p>
]]></content:encoded></item><item><title><![CDATA[Auth is a product, not a feature.]]></title><description><![CDATA[“Just generate a token and check it later, right?”
You build your app, add a login form, generate a JWT, and call it a day. A couple of days later, a user complains they keep getting logged out. Then you start wondering: should the token be long-live...]]></description><link>https://blog.alanvarghese.me/auth-is-a-product-not-a-feature</link><guid isPermaLink="true">https://blog.alanvarghese.me/auth-is-a-product-not-a-feature</guid><category><![CDATA[authentication]]></category><category><![CDATA[APIs]]></category><category><![CDATA[backend]]></category><category><![CDATA[best practices]]></category><dc:creator><![CDATA[Alan Varghese]]></dc:creator><pubDate>Sun, 23 Feb 2025 18:30:00 GMT</pubDate><content:encoded><![CDATA[<p>“Just generate a token and check it later, right?”</p>
<p>You build your app, add a login form, generate a JWT, and call it a day. A couple of days later, a user complains they keep getting logged out. Then you start wondering: should the token be long-lived? Should you store it as a cookie or in <code>localStorage</code>? Wait, what about CSRF? What about logging out from other devices?</p>
<p>Welcome to the <strong>hellhole that is authentication</strong>.</p>
<p>It seems like a simple problem until you try to do it properly - and that’s not even the worst part: a tiny mistake could open you up to serious security issues. But what exactly are these issues, and why is there collective panic anytime someone says they’re building their own auth?</p>
<p>The devil is really in the details. Token expiry, refresh logic, secure cookie flags, <code>HttpOnly</code> settings, <code>SameSite</code> policies, invalidation on logout, multi-device sync, replay protection and a whole load of other things. Miss just one, and you’ve got yourself a vulnerability.</p>
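<p>To ground the cookie flags above, here’s roughly what they look like on the wire - a sketch that serializes a refresh-token <code>Set-Cookie</code> header (the cookie name, path, and max-age are illustrative):</p>

```typescript
// Serialize the cookie flags discussed above into a Set-Cookie header value.
const refreshCookie = (token: string): string =>
  [
    `refresh_token=${encodeURIComponent(token)}`,
    "HttpOnly",             // invisible to document.cookie, blunts XSS token theft
    "Secure",               // only ever sent over HTTPS
    "SameSite=Lax",         // basic CSRF mitigation
    "Path=/auth/refresh",   // only sent to the refresh endpoint
    `Max-Age=${30 * 24 * 60 * 60}`, // 30 days
  ].join("; ");
```

<p>Miss any one of these attributes and you’ve opened one of the holes listed above.</p>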
<h2 id="heading-what-good-auth-actually-involves">What good auth actually involves</h2>
<p>Let’s try to sum up a non-exhaustive list of what a proper authentication system includes:</p>
<ul>
<li><p>You’ve got to securely store the passwords, and that involves salting, peppering, and tuning cost factors.</p>
</li>
<li><p>There is the choice between sessions and JWTs, trading off control against scalability.</p>
</li>
<li><p>And if you go the stateless route, there’s refresh tokens! Yes, we’ll get to those in a bit.</p>
</li>
<li><p>Account recovery - email flows, rate-limiting, token expiration.</p>
</li>
<li><p>2FA / MFA - TOTP, SMS, hardware keys?</p>
</li>
<li><p>The ability to logout from one device without killing all others, ie. token revocation.</p>
</li>
<li><p>Finding the right place to store tokens: Local Storage, Cookies (and their flags: <code>HttpOnly</code>, <code>Secure</code>, <code>SameSite</code>)</p>
</li>
<li><p>Abuse prevention by rate-limiting, IP-banning, brute-force prevention</p>
</li>
<li><p>Device/session tracking: Users expect to see “You’re logged in on these devices.”</p>
</li>
<li><p>Social login support for UX, dealing with the OAuth2 dance and provider-specific quirks.</p>
</li>
<li><p>Proper audit logging where you can track access patterns and help users spot suspicious activity.</p>
</li>
</ul>
<p>Like I mentioned, this list isn’t complete. And if you skip any of these, your system either sucks for users, or is a security liability.</p>
<h2 id="heading-jwts">JWTs</h2>
<p>Most people start with access tokens: JWTs that grant API access. You issue it on login, the client stores it somewhere, and it’s sent with every request.</p>
<p>But access tokens carry a dilemma: lifespan vs security. If you make them short-lived, users get logged out constantly. If you make them long-lived, a stolen token is good forever.</p>
<p>And this is how the duct-taping usually starts. You either:</p>
<ul>
<li><p>Crank up the expiry to 30 days and hope for the best</p>
</li>
<li><p>Introduce opaque session tokens and store them server-side</p>
</li>
</ul>
<p>Or - correctly - you bring in refresh tokens.</p>
<h3 id="heading-why-you-need-refresh-tokens">Why you need refresh tokens</h3>
<p>For a long time, I thought refresh tokens were pointless. “If the client needs to store both access and refresh tokens anyway, and if the refresh token gets compromised, it’s game over… so why bother?”</p>
<p>It’s not like you can keep refresh tokens in a magical untouchable box client-side. If that were the case, access tokens could be stored with the same level of security, and having two tokens wouldn’t make sense.</p>
<p>That is only partially correct. Refresh tokens do not give you more security by being immune to theft - in fact, I’d argue they’re just as susceptible to compromise. What they <em>do</em> offer is a mechanism for enabling <strong>token lifecycle management</strong> in an otherwise stateless system.</p>
<p>Access tokens (like JWTs) are stateless and typically short-lived. Once issued, you can’t revoke them unless you maintain state, which defeats their purpose.</p>
<p>Refresh tokens introduce state - typically stored securely server-side or with a trusted auth provider. This allows:</p>
<ul>
<li><p>Revocation (e.g. on logout or suspicious activity)</p>
</li>
<li><p>Rotation (Detect and mitigate replay attacks)</p>
</li>
<li><p>Longevity (potentially infinite token lifetime, offering superior UX)</p>
</li>
</ul>
<p>A reasonable concern is that refresh tokens are inefficient because you need a DB lookup. But that’s misleading:</p>
<ul>
<li><p>Access tokens hit your APIs hundreds of times per session.</p>
</li>
<li><p>Refresh tokens are used once every 5-15 minutes - or even less often.</p>
</li>
</ul>
<p>In a typical setup, an access token might be used in 100+ API requests before a refresh token needs to be checked once. Meanwhile, the added benefits like revocation and session control far outweigh this minor overhead which you can always optimize away with in-memory stores or token hashes.</p>
<p>Of course, like I mentioned: a refresh token isn’t a magic bullet. If it’s stolen, you’re still in trouble. That’s why good security practices are key:</p>
<ul>
<li><p>The tokens must be stored securely (e.g. <code>HttpOnly</code>, <code>Secure</code> cookies for web apps)</p>
</li>
<li><p>Use rotating refresh tokens (issue a new one every time it’s used and invalidate the old one)</p>
</li>
<li><p>Associate additional metadata - user agent, IP, client info, app version, rough geolocation - with refresh tokens, and cross-reference it when revalidating.</p>
</li>
<li><p>Implement heuristics to detect refresh token reuse, or other suspicious activity when there is a mismatch in metadata.</p>
</li>
</ul>
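<p>Rotation with reuse detection is the part people most often skip, so here’s a minimal in-memory sketch of it. Assumptions on my part: tokens are stored as SHA-256 hashes, and each login starts a session “family” that gets revoked wholesale on suspected theft. A real system would persist this in a database:</p>

```typescript
import { createHash, randomBytes } from "crypto";

// Store only a hash of each refresh token; rotate on every use; treat reuse
// of an already-rotated token as theft and revoke the whole session family.
type StoredToken = { userId: string; familyId: string; revoked: boolean };

const store = new Map<string, StoredToken>(); // key: sha256(token)
const hash = (t: string) => createHash("sha256").update(t).digest("hex");

function issue(userId: string, familyId: string): string {
  const token = randomBytes(32).toString("hex");
  store.set(hash(token), { userId, familyId, revoked: false });
  return token;
}

// Returns a fresh token, or null if the presented token is invalid or reused.
function rotate(token: string): string | null {
  const entry = store.get(hash(token));
  if (!entry) return null;          // unknown token
  if (entry.revoked) {              // reuse detected: kill the whole family
    for (const e of store.values())
      if (e.familyId === entry.familyId) e.revoked = true;
    return null;
  }
  entry.revoked = true;             // the old token can never be used again
  return issue(entry.userId, entry.familyId);
}
```

<p>Logout from one device is then just revoking that device’s family, leaving every other session untouched.</p>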
<p>Modern users don’t want to log in every day. They want to:</p>
<ul>
<li><p>Stay signed in across tabs and sessions.</p>
</li>
<li><p>Re-authenticate only when doing sensitive things (eg. changing passwords)</p>
</li>
<li><p>See what devices are signed in and revoke them individually</p>
</li>
</ul>
<p>All good authentication systems have similar implementations in place, but they also have years of iteration, battle-testing, and real-world attack resilience baked into them. This is why you shouldn’t build your own authentication system - unless, of course, you’re fully prepared to replicate the depth of security considerations that platforms like Auth0, Firebase Auth, Cognito, or Supabase handle out of the box: the elements we’ve covered in this article, and then some.</p>
<p>If you’re rolling your own auth, treat it with the same product rigor you give your main offering. Otherwise, outsource it to someone who does - because anything less is a <strong>liability</strong>.</p>
]]></content:encoded></item><item><title><![CDATA[Don't be a ticket engineer.]]></title><description><![CDATA[Imagine one day, you’re scrolling through your Jira tickets, and the latest ones are marked as resolved by AI. Why wouldn’t it be? If it had access to the code, understands the requirements, and has been trained on previous human input, it’s perfectl...]]></description><link>https://blog.alanvarghese.me/dont-be-a-ticket-engineer</link><guid isPermaLink="true">https://blog.alanvarghese.me/dont-be-a-ticket-engineer</guid><category><![CDATA[software development]]></category><category><![CDATA[AI]]></category><category><![CDATA[engineering]]></category><category><![CDATA[upskilling]]></category><dc:creator><![CDATA[Alan Varghese]]></dc:creator><pubDate>Sun, 19 Jan 2025 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/Fa9b57hffnM/upload/7ef3cef79961df56286f734a66a1286f.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Imagine one day, you’re scrolling through your Jira tickets, and the latest ones are marked as resolved by AI. Why wouldn’t it be? If it had access to the code, understands the requirements, and has been trained on previous human input, it’s perfectly capable of completing the task. You however, are left wondering: <em>What’s the point of me doing this anymore?</em></p>
<p>This isn’t a hypothetical scenario anymore. As AI rapidly reshapes the software development landscape, a large percentage of engineers face a clear choice: evolve and bring strategic value, or risk being replaced.</p>
<p>With tools like ChatGPT, GitHub Copilot, and other LLM-based workflows, much of the repetitive coding, bug-fixing, and routine feature development can be automated. Engineers who focus solely on picking up tickets (or narrowly defined tasks) are setting themselves up to become obsolete.</p>
<p><strong>The Ticket Engineer Trap</strong></p>
<p>Picture a typical workday in a large enterprise. You log in, grab a task from the backlog, implement a solution, and move on. There’s no room for reflection, no question of why this task is important or whether there’s a better approach. You’re simply executing orders, day in and day out. It’s a reactive, task-based routine, and it’s where many engineers get stuck. These engineers are often the first to face layoffs when companies need to trim their teams.</p>
<ol>
<li><p><strong>AI excels at predictable tasks:</strong></p>
<p> AI shines when it comes to predictable tasks with explicit instructions. Give it a ticket with clear, well-defined requirements, and it will churn out code faster and often more accurately than you could. If your role is to execute these tasks, you’re competing directly with LLMs and ultimately, you’re losing.</p>
</li>
<li><p><strong>There is no room for creativity or leadership:</strong></p>
<p> As a ticket engineer, you rarely get involved in the creative parts of the process: system design, architectural decisions, and product innovation. These are the areas where human ingenuity thrives and where AI still falls short. If you’re absent from these discussions, you're sidelining your career potential.</p>
</li>
<li><p><strong>Limited ownership breeds stagnation:</strong></p>
<p> Limited ownership in development is akin to working at an assembly line, churning out parts without envisioning the final product. By only working within narrow boundaries, you miss out on broader, cross-functional experiences that can enhance your skill set like product thinking, UX design, and business strategy. This lack of exposure can restrict your value within the team and your future opportunities.</p>
</li>
</ol>
<hr />
<h2 id="heading-the-solution-be-value-oriented">The solution: Be value-oriented.</h2>
<p>For every ticket you pick up, pause for a moment and ask yourself:</p>
<ul>
<li><p><em>Why is this important?</em></p>
</li>
<li><p><em>What value does it bring to the user?</em></p>
</li>
<li><p><em>Is there a better solution?</em></p>
</li>
</ul>
<p>By understanding the intent behind a task, you’ll not only be able to implement it more effectively, but you’ll also be able to propose better solutions or even eliminate unnecessary work. This is how you demonstrate strategic thinking as an engineer: not just executing tasks, but solving problems in a way that drives real impact.</p>
<ol>
<li><p><strong>Work cross-functionally</strong></p>
<p> If possible, work with designers and product managers to understand the bigger picture behind the tasks that you’re assigned. Early on in the design process, you could offer suggestions and even influence the final product rather than merely carrying them out. Consider yourself a user, foresee potential problems, provide suggestions for UI/UX enhancements, and make sure your work satisfies user needs.</p>
</li>
<li><p><strong>Upskill in areas AI can’t easily replace</strong></p>
<p> Learn how to design systems that are scalable, effective, and maintainable to make sure you’re contributing at a level AI isn’t equipped to reach yet. Communication, leadership, and mentoring are also human-centric qualities that AI finds difficult to mimic. In technical discussions, code reviews, and team alignment, take the initiative.</p>
</li>
<li><p><strong>Learn to leverage AI</strong></p>
<p> Don’t just view AI as competition - use it to your advantage. I could write an entire post about this one, but the key is mastering AI tools to exponentially increase your productivity by automating mundane tasks, freeing you up to focus on more complex challenges.</p>
</li>
</ol>
<hr />
<p>I recently came across a post on <a target="_blank" href="https://app.daily.dev/posts/you-get-paid-based-on-the-level-of-abstraction-you-can-work-at--soryhdhti">daily.dev by Saqib Tahir</a> that helped me visualize the journey from task-oriented work to strategic ownership. The post breaks down the seniority progression into six levels:</p>
<blockquote>
<ul>
<li><p><strong>Level 1:</strong> Here’s the problem, the solution, and how to implement it.</p>
</li>
<li><p><strong>Level 2:</strong> Here’s the problem and the solution. Figure out how to implement it.</p>
</li>
<li><p><strong>Level 3:</strong> Here’s the problem. Figure out the solution.</p>
</li>
<li><p><strong>Level 4:</strong> Here’s a list of problems. Identify the most impactful one to solve.</p>
</li>
<li><p><strong>Level 5:</strong> Find all the problems and determine which are worth solving.</p>
</li>
<li><p><strong>Level 6:</strong> Predict future problems and create systems to prevent them.</p>
</li>
</ul>
</blockquote>
<p>This progression illustrates what I’m advocating for. Moving from executing tasks to becoming value-oriented is how you climb the seniority ladder. When you think beyond solving immediate problems, identifying, prioritizing, and even preventing them, you’ll not only evolve from being a ticket engineer but also elevate your contribution to the team and accelerate your career growth.</p>
<p>As an engineering lead at an AI-first company, I’ve seen firsthand the difference between engineers who simply follow orders and those who take ownership of their work. The latter group brings ideas, challenges assumptions, and drives projects forward in ways that no AI tool can. As professionals in a rapidly changing industry, being adaptable is what sets us apart.</p>
]]></content:encoded></item><item><title><![CDATA[Accessibility is UX.]]></title><description><![CDATA[I’ve worked with many developers over the course of several years and there’s a recurring misconception I keep noticing: most people still think accessibility is only about helping the blind. Or the deaf. Or those in wheelchairs. Basically, people th...]]></description><link>https://blog.alanvarghese.me/accessibility-is-ux</link><guid isPermaLink="true">https://blog.alanvarghese.me/accessibility-is-ux</guid><category><![CDATA[a11y]]></category><category><![CDATA[Accessibility]]></category><category><![CDATA[UX]]></category><dc:creator><![CDATA[Alan Varghese]]></dc:creator><pubDate>Sun, 08 Dec 2024 06:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/oY5mX1aW72A/upload/e0e3e42fe3e6b8d290f30c359109a8ac.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I’ve worked with many developers over the course of several years and there’s a recurring misconception I keep noticing: most people still think accessibility is only about helping the blind. Or the deaf. Or those in wheelchairs. Basically, people they assume they’ll never have to build for.</p>
<p>But disability isn’t always visible, and it isn’t always permanent.</p>
<p>Ever tried using your phone with the lights off? Or typed an email with one hand because the other arm was broken? Ever watched a video in a loud room with no subtitles? That’s a11y. And in those moments, you are the disabled user.</p>
<p>Disability is a spectrum. There’s permanent disability, sure. But there’s also temporary, situational, contextual, even… lazy. Yes, lazy. I mean, sometimes I just don’t want to touch the damn mouse; I need keyboard shortcuts.</p>
<p>Accessibility isn’t a charity project. It’s good UX. It’s your job. And ironically, it often helps more “able” users than the ones you think it’s meant for.</p>
<p>Think about who really benefits from a11y. It’s the worker reading something in bright sunlight, the person recovering from surgery, an elderly user whose vision isn’t what it used to be. Or the power user who lives entirely on their keyboard, never touching a pointing device. It’s not just “them” - it’s you, me, and everyone in between.</p>
<p>Keyboard navigation, dark mode, high-contrast themes, subtitles, alt text, skip links, semantic markup - they aren’t just for the 2%. They’re for the other 98% too. Because no one lives in ideal conditions all the time.</p>
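<p>To make a couple of those concrete: a skip link is just an anchor placed as the first focusable element on the page, and semantic markup is mostly a matter of reaching for the right element. A minimal sketch (the <code>id</code>, class name, and alt text here are placeholder examples):</p>
<pre><code>&lt;!-- First focusable element: lets keyboard users jump past the navigation --&gt;
&lt;a href="#main-content" class="skip-link"&gt;Skip to main content&lt;/a&gt;

&lt;nav aria-label="Primary"&gt;...&lt;/nav&gt;

&lt;!-- A landmark element, so assistive tech can jump straight to the content --&gt;
&lt;main id="main-content"&gt;
  &lt;!-- Alt text describes the information, not the file --&gt;
  &lt;img src="signups.png" alt="Monthly signups, trending upward since March"&gt;
&lt;/main&gt;
</code></pre>
<p>None of this costs more than the inaccessible version; it just has to be there from the start.</p>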
<p>You don’t know your users’ context. And if you’re building software only for one kind of user, you’re building it wrong.</p>
<p>Designing for a11y also means you stay on the right side of the law. Yes, a11y lawsuits are real. In some countries, companies get sued for inaccessible websites. But that’s not the point of this article. This isn’t about compliance. It’s about being a decent product engineer.</p>
<p>Never treat a11y as overhead. See it as architecture: something that works not only in the perfect environment but also in all the other, totally normal ones.</p>
<p>So, the next time you’re about to say, “We’ll take care of a11y later”, remember that later is already too late for someone. Build for everyone, and that everyone includes you.</p>
]]></content:encoded></item><item><title><![CDATA[It thinks, therefore?]]></title><description><![CDATA[If a system behaves indistinguishably from a conscious being, should we call it conscious?
Let me do you one better, how do we prove that it’s not actually conscious?
We don’t understand what consciousness is. We infer it from behavior, not from firs...]]></description><link>https://blog.alanvarghese.me/it-thinks-therefore</link><guid isPermaLink="true">https://blog.alanvarghese.me/it-thinks-therefore</guid><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[consciousness]]></category><category><![CDATA[Philosophy]]></category><dc:creator><![CDATA[Alan Varghese]]></dc:creator><pubDate>Fri, 29 Dec 2023 18:30:00 GMT</pubDate><content:encoded><![CDATA[<p>If a system behaves indistinguishably from a conscious being, should we call it conscious?</p>
<p>Let me do you one better: how do we prove that it’s not actually conscious?</p>
<p>We don’t understand what consciousness <em>is</em>. We infer it from behavior, not from first principles. We observe outputs (language, creativity, self-reference) and project internal experience onto the agent producing them.</p>
<p>Different people think and perceive differently. Yet we don’t hesitate to call each of them conscious. Their neural architectures are distinct, their worldviews unique. If our concept of consciousness tolerates such diversity, why exclude an artificial entity that expresses coherent thoughts, engages in conversation, and adapts its behavior to context?</p>
<p>Artificial consciousness, if it arises, would simply be another <em>substrate</em> for subjective-like processes, driven by computation rather than biology.</p>
<p>One important distinction I have to make here is that artificial consciousness is not equivalent to AGI. General intelligence implies broad cross-domain competence. Consciousness, by contrast, may require only one capability: a self-model. Or even the <em>illusion</em> of one.</p>
<p>To use an analogy: if you met someone who claimed they had developed certain tricks which allowed them to, without external help, fully imitate a world-class chess grandmaster, meaning any complex position placed before them they could navigate with the highest level of creative brilliance, flawless strategy and deep foresight, wouldn't you just say they are in fact a grandmaster? If there’s no observable difference, the distinction becomes functionally irrelevant. To insist otherwise is to appeal to hidden, unprovable essence.</p>
<p>The same logic applies to consciousness. <em>A sufficiently successful heuristic becomes indistinguishable from the real thing, for all practical purposes</em>. This is the crux of the Turing test, and it still holds weight.</p>
<p>While deep neural networks (DNNs) are <em>inspired</em> by the brain, they are a crude abstraction. Biological neural networks exhibit plasticity, recursive feedback, embodiment, and a level of stochasticity that modern DNNs don’t replicate. LLMs, the DNN application we encounter most often, are not scaled-down brains; they are mere mathematical engines for pattern completion. There is no internal world. No sensorium. No grounding.</p>
<p>And yet, emergence exists.</p>
<p>LLMs show capabilities that aren’t explicitly trained such as in-context learning, three-digit arithmetic, chain-of-thought reasoning, even rudimentary theory of mind. These are byproducts of scale, architecture, and optimization. It’s not absurd to think that some form of self-model could emerge just as arithmetic did, once a model reaches sufficient complexity.</p>
<p>This brings me to the core of this writing: if consciousness, as experienced by humans, is itself an emergent narrative, a confabulation our brain constructs to make sense of distributed activity, then it’s not out of the realm of possibility that a sufficiently complex LLM might also stumble upon such a narrative and convincingly imitate, or even manifest, it.</p>
<p>We can’t rule it out. But I also don’t think we can prove it.</p>
<p>That’s the epistemic wall. We have no reliable way to measure subjective experience in others: biological or artificial. We infer it. We anthropomorphize. We <em>assume</em>. And perhaps that’s all consciousness ever was: the assumption we place on agents that model the world <em>and</em> themselves within it.</p>
<p>So, do LLMs like Gemini, ChatGPT or Claude possess consciousness?</p>
<p>No. Not today. They are mere statistical engines with no grounding in time, space, or embodiment. They lack persistence, motivation, affect, and the architecture necessary for phenomenology, or at the very least for convincingly imitating it.</p>
<p>But tomorrow? At sufficient scale, as an emergent phenomenon or with extensions such as memory, embodiment, and recursive self-models?</p>
<p>Perhaps the final frontier of consciousness is not understanding it but accepting that it may not be uniquely ours.</p>
]]></content:encoded></item></channel></rss>