Timelines and Cultural Practices: Baghor Stones

Let's start somewhere: 11,000 years ago. The Ice Age is over, and humans hunt and gather in groups. We have not yet invented agriculture. To the west, the first stone walls of Jericho are being stacked, and in Turkey, the hunters of Göbekli Tepe are carving massive predatory totems (Schmidt, 2000).

In the subcontinent, things are interesting. On the Ganga Plains (modern-day UP) live the Titans of Sarai Nahar Rai. These are robust hunter-gatherers, some standing over six feet tall, who are already honoring their dead with red ochre and ritual burials — perhaps the first hints of an ancestor cult (Kennedy et al., 1986; Misra, 2001).

Along the “Teri” (red sand dunes) of coastal Tamil Nadu, specifically around Tuticorin, a specialized culture is grappling with a planet in flux. As the glaciers melt, sea levels rise, swallowing the land bridge that once connected India to Sri Lanka. These foragers don’t retreat.

At the center of the subcontinent, something different is happening. At Bhimbetka, rock art is transitioning from simple linear figures into complex depictions of communal dance and mythical animals (Mathpal, 1984). This is the birth of a visual vocabulary and of expressive creativity in the subcontinent.

It is 1982. G.R. Sharma and J.D. Clark, leading a joint Indo-American team with J.M. Kenoyer and J.N. Pal, stand over a site known as Baghor 1, on the banks of the Son River near Medhauli village, Sidhi District, Madhya Pradesh.

They are looking for Upper Palaeolithic tools and implements. They find something else, something that changes the entire understanding of early religion, practice and culture.

They uncover a circular platform of sandstone rubble, about 85 cm across. At its dead center sits a single, natural triangular stone — hand-sized, just 15 cm tall. It’s vibrant, with concentric rings of yellow and ochre laid down by millions of years of geology, the surface itself daubed with pigment by human hands (Kenoyer et al., 1983).

The stone is a natural, laminated ferruginous triangle. Found in a Late Upper Palaeolithic context, it is carbon-dated to ~9,000–8,000 BCE. The stone was probably dug up and selected. It was never carved.

The structure looks like a place of ceremony or worship, with the intricately banded stone at its center. While the excavation is underway, some tribesmen walk in and see two excited archaeologists losing their minds over their discovery. The men belong to the Kol and Baiga tribes of Madhya Pradesh, among the oldest tribal communities of the subcontinent.

They look at the artifact on the platform, perplexed as to why everyone is so excited. To them, it is simply a khari, the Mother or Shakti. They still worship in that valley today, in that exact form (Kenoyer et al., 1983).

The Vedic religious tradition has roots in the Indo-Iranian language family and arrives in the subcontinent much later (Bryant, 2001). Between the Baghor shrine and the spread of Vedic religion lies a gap of roughly 7,500 years. Let that sink in. The Indus Valley Civilization and Mesopotamia come 5,000 years after Baghor; organized, named religion forms later still. This is a time when we do not even know how to write.

What we can derive from this is that the earliest known form of worship in India centered on the mother, the female. Other parts of the world had ritual practices with different motifs – the predator totems of Göbekli Tepe in present-day Turkey, for instance.

Later religions co-opted or adopted the culture of the land. That is how the practice has survived this long without written record.

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
   YEARS AGO     EVENT
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
                │
    300,000 ────┤  Modern humans emerge in Africa
                ┊
                ┊   ~230,000 years of foraging
                ┊
     70,000 ────┤  Modern humans reach South Asia
                ┊
                ┊   ~30,000 years of dispersal across Eurasia
                ┊
     40,000 ────┤  Upper Paleolithic begins
                │    Venus of Hohle Fels carved (Germany)
                │
     30,000 ────┤  Chauvet Cave painted (France)
                │    Engraved ostrich eggshell at Patne (India)
                │
     25,000 ────┤  Venus of Willendorf carved (Austria)
                │
     20,000 ────┤  Last Glacial Maximum — ice sheets at peak
                │
     17,000 ────┤  Lascaux Cave painted (France)
                ┊
                ┊   ~5,000 years of warming
                ┊
     12,000 ────┤  Holocene begins. Ice Age ends.
                │    Sea levels rising rapidly
                │
     11,500 ────┤  Göbekli Tepe construction begins (Turkey)
                │
     11,000 ────┤  ★ BAGHOR SHRINE
                │    Sarai Nahar Rai burials (Ganga Plain)
                │    Bhimbetka rock art transitions
                │    First stone walls of Jericho
                │
     10,500 ────┤  Agriculture takes hold in Fertile Crescent
                │
      9,000 ────┤  Mehrgarh founded — farming reaches subcontinent
                │    Çatalhöyük begins (Turkey)
                │
      7,000 ────┤  Pottery and settled villages spread
                │
      6,000 ────┤  Sumer founded — first cities
                │
      5,000 ────┤  Writing invented in Sumer
                │    Early Harappan begins
                │
      4,500 ────┤  Mature Indus Valley Civilization
                │    Egyptian pyramids built
                │
      3,500 ────┤  Vedic religion arrives in subcontinent
                │    Rigveda composed
                │
      2,500 ────┤  Buddha. Mahavira. Greek philosophy.
                │
      2,000 ────┤  Roman Empire at peak
                │
      1982 CE ──┤  Sharma and Clark uncover Baghor 1
                │
        Today ──┤  ★ KOL & BAIGA STILL WORSHIP THE STONE
                │
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

   Baghor → Vedic religion arrives:   7,500 years
   Baghor → Today:                   11,000 years

References

Bryant, E. 2001. The Quest for the Origins of Vedic Culture: The Indo-Aryan Migration Debate. New York: Oxford University Press.

Kennedy, K.A.R., N.C. Lovell, and C.B. Burrow. 1986. Mesolithic Human Remains from the Gangetic Plain: Sarai Nahar Rai. South Asian Occasional Papers and Theses No. 10. Ithaca, NY: Cornell University South Asia Program.

Kenoyer, J.M., J.D. Clark, J.N. Pal, and G.R. Sharma. 1983. “An Upper Palaeolithic Shrine in India?” Antiquity 57(220): 88–94. https://doi.org/10.1017/S0003598X00055253

Mathpal, Y. 1984. Prehistoric Painting of Bhimbetka. New Delhi: Abhinav Publications.

Misra, V.N. 2001. “Prehistoric Human Colonization of India.” Journal of Biosciences 26(4): 491–531. https://doi.org/10.1007/BF02704749

Quitting Sugar: A philosophical rabbit hole

“The things you used to own, now they own you.”

– Chuck Palahniuk

Perhaps the hardest thing I have ever done is quit sugar.

I won’t pull you into a drawn-out hospitalization sob story. Here’s the blunt version: a lower spine injury put me flat on my back. It took a couple of steroid shots just to get me standing again.

That was the day I quit sugar. Not a slow taper. Not a “last meal.” A cold-turkey, blunt-force stop. Alongside it, I slashed my food intake to two meals a day, capping my mains at 150 grams plus vegetables.

It sounds preposterous. You might think the steroids did the heavy lifting, but steroid shots don't burn fat. What got me through was walking 10 to 15 kilometers every single day. Within a couple of months, fueled by nothing but relentless walking and dietary absolutism, I dropped 8 kilos. A few months after that, the total loss stood at 25 kilos.

The Two Fronts

When you quit sugar, you quickly realize you are fighting a two-front war. There is the explicit: crystallized sugar, jaggery, honey. Then there is the implicit: fruits, heavy carbs, fried potatoes.

Quitting either is as gruelling as kicking a high-order narcotic. I’m talking about reaching a level where you are surrounded by sweets and feel absolutely nothing. You can serve guests without a flinch. You can wash the sticky syrup of a rasgulla off your fingers and sit perfectly still while everyone else devours a sundae.

The Myth of Zen

I used to assume that reaching this state of control required Zen-level patience. A profound mastery over human greed.

Turns out, it’s much simpler and much darker than that.

The most effective deterrent isn’t enlightenment; it’s fear and loathing. I loved rasgullas. But my sobriety started with the raw fear of eating just one and knowing I’d have to work doubly hard the next day to burn it off. That fear slowly mutated into an abject dislike for the food itself.

And it didn’t stop there. If I’m being honest, it grew into a quiet, simmering judgment of the people around me who ate sweets.

De-Addiction is an Addiction

This physical detox brought me to a deep philosophical realization. Everything in our lives is an addiction. The cities we live in, the cultures we blindly follow, the ideologies we cling to. We are all hooked on something, requiring a massive de-addiction to ever truly be free.

So, you start stripping it all away. You optimize. You cut out the sugar, you cut out the noise. You become a master of your own impulses.

But here is the ugly truth they don’t tell you about quitting: you never actually stop being an addict. You just trade up.

I didn’t achieve enlightenment by giving up rasgullas. I just replaced the cheap dopamine hit of crystallized sugar with the infinitely more intoxicating rush of absolute control. The smug, silent superiority I feel when I wash that syrup off my hands and watch you eat your sundae?

Smug is my new high. And it’s the hardest drug I’ve ever been on.

Incentive based Development (with AI)

We are currently in a massive boom-and-bust cycle. Companies are scaling LLM usage at an unsustainable rate, only to realise that every token has a literal cost that product margins were never designed to absorb.

You cannot simply hike the price of a legacy service because internal development costs have increased. Instead we see the “added service” pivot. Products suddenly gain Intelligent Search or an AI Chat feature to justify rapidly growing Azure bills.

We are effectively paying for a silicon workforce that costs as much as a human one, but without the same long-term value.

The Developer Apocalypse, always supposed to arrive "next year", has been predicted for years. The ploy by various AI founders to fill the market with fear and accelerate adoption is working.

Many companies responded by firing competent engineers to subsidise their LLM spend.

But there is a massive gap between generating code and owning a product.

To replace even a junior developer, an LLM must be consistently at least 80% accurate, 90% of the time. Even then the real barrier is accountability.
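As a back-of-envelope sketch of why those numbers are not enough on their own (the 80% and 90% figures are from the paragraph above; the ten-step task is a hypothetical of mine):

```python
# Back-of-envelope: how per-step reliability compounds over a multi-step change.
# The 80% accuracy and 90% consistency figures come from the text;
# the 10-step task is a hypothetical example.

accuracy = 0.80      # how often a single output is correct
consistency = 0.90   # how often the model performs at that accuracy
effective = accuracy * consistency  # ~0.72 reliability per step

steps = 10  # a realistic feature touches many files and steps
p_clean_run = effective ** steps

print(f"Per-step reliability: {effective:.2f}")
print(f"Chance of a 10-step change landing clean: {p_clean_run:.3f}")
```

Even at those headline numbers, a multi-step change rarely lands clean without a human in the loop, which is exactly where accountability comes in.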

An engineer can be fired for negligence. An LLM’s maximum ownership is a prompt fix. A million-dollar product cannot run on a tool that takes zero responsibility for a production outage. This creates a tightrope for leadership.

Do you pay for the brain or pay for the compute?

Senior developers come with higher salaries but also with a software pedigree that allows them to use models surgically. With the right strategies, a senior engineer can reduce token consumption dramatically.

Junior developers cost less in salary but often rely on brute-force prompting and repeated API calls to get things done. Token usage quietly climbs. In many teams it reaches ₹80,000 per month per seat.

At that point the difference between a junior developer’s salary and a senior developer’s salary begins to thin.

The solution is to change how AI is accounted for.

Instead of an unlimited all-you-can-eat API budget, developers receive a baseline compute allowance per sprint. If a developer meets 100% of their delivery goals while saving on cloud tokens through optimized prompts, local models, or old-school code smooshing, they receive a portion of the savings as a performance bonus.

This creates an incentive structure that rewards efficiency.

Consider a simple example. Assume a monthly LLM compute allowance of ₹1,00,000 per developer. The way a developer integrates LLMs into their workflow can dramatically change the cost profile of the team.

Developer Profile        Strategy                                            Token Burn   Savings    Bonus
Unstructured LLM Usage   Uses LLM for most tasks, little optimisation        ₹95,000      ₹5,000     ₹2,500
Optimized Workflow       Selective LLM use with caching and prompt tuning    ₹40,000      ₹60,000    ₹30,000
Seasoned Engineer        Manual coding with targeted LLM assistance          ₹10,000      ₹90,000    ₹45,000
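The savings and bonus columns above can be computed mechanically. A minimal sketch, assuming the ₹1,00,000 monthly allowance from the example and a 50% savings share (the function name and the share are my assumptions, not a prescribed policy):

```python
def token_bonus(allowance: int, burn: int, share: float = 0.5) -> tuple[int, int]:
    """Return (savings, bonus) in rupees for one developer for one month."""
    savings = max(allowance - burn, 0)  # no negative savings when over budget
    return savings, int(savings * share)

ALLOWANCE = 100_000  # ₹1,00,000 monthly compute allowance per developer

for profile, burn in [
    ("Unstructured LLM usage", 95_000),
    ("Optimized workflow", 40_000),
    ("Seasoned engineer", 10_000),
]:
    savings, bonus = token_bonus(ALLOWANCE, burn)
    print(f"{profile}: savings ₹{savings:,}, bonus ₹{bonus:,}")
```

Capping savings at zero matters: a developer who blows past the allowance should trigger a training conversation, not a negative bonus.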

Developers begin thinking like systems engineers again instead of prompt gamblers. Consistent overspending does not automatically mean a developer is bad. It usually means the developer needs better training in cost-efficient tooling.

At the same time, companies need to rethink how vibe-coded contributions enter a product.

Rethinking vibe coding does not mean discouraging it. It means structuring it.

Companies can organise vibe coding hackathons and open code sessions where vibe coders are paired with experienced engineers to build interesting experiments. Weekly camps for vibe-coded ideas allow developers and non-developers to collaborate and produce prototypes without directly modifying the production codebase.

Some will call this gate-keeping. But the primary product of a company should be gate-kept. A million-dollar product is not a democracy.

The core codebase is a mission-critical machine. It must be built on a foundation of human accountability and architectural discipline.

The Best Time in History to Build Software is Now!

There has never – literally never – been a better time to build software as an individual developer.

Not during the open-source boom.
Not during the mobile app gold rush.
Not even during the early cloud era.

Right now, in 2026, we are living through something fundamentally different:

Software development has shifted from effort-limited to imagination-limited.

Developers, especially seasoned ones, carry a backlog of projects they haven't touched in years. There is a real effort cost to sitting up all night completing a module and then attending office the next day. I have a OneNote full of ideas I could have implemented, for personal automation or for projects I once believed could turn into startups.

For a while, I found some peace in self-hosting tools. I still do this using old laptops and Raspberry Pis. Many of the open-source tools I hosted were built primarily for the regions they catered to, which meant significant customization on my end. I spent weeks working on them during free and off hours. Many of those projects remain abandoned today, either because I moved on from the need or because the problem itself became redundant.

Those OneNote-jotted ideas now become part of my prompts when building with LLM agents.

With locally hosted models, free-tier services, and basic subscription plans, one can now build a solid coding automation setup that allows multiple projects to be completed quickly and effectively.

Over the last couple of months, I have been spending evenings with Claude, Codex, and Antigravity, building many of these old ideas and unfinished projects in hours. No all-nighters. Just in the past month, I completed eight different projects across multiple languages. Here are four of them:

Project                 Language   Time      Key Feature
Non-linear Editor       Go         4 hours   Text arranged in a grid with contextual notes
Subscriptions Tracker   Go         2 days    Email scanning and categorisation to identify enrolled subscriptions and their costs
Ebook Summarizer        Python     2 hours   Celebrity-style voice synthesis (Freeman/Attenborough) reading out summaries of ebooks and technical articles, fiction and non-fiction
DJ Workflow             Python     1 day     Metadata cleanup, auto STEMS, and RAG over a downloaded music library

Not to gloat – this simply demonstrates that developers can now build such tools easily with a few targeted prompts. Getting prompts right plays a crucial role, and that understanding only comes from building more projects. I rarely even look closely at the code the agents generate.

This is where I want to make one distinction very clear: these are truly vibe-coded projects. They are not production-grade or enterprise money-making products. These are tools to automate personal workflows – projects many of us wished we could prototype faster.

Productising something and making it enterprise-grade is still slightly beyond what agentic AI can fully solve, as it requires significant human involvement. Turning an idea into a revenue-generating product introduces hosting, support, maintenance, upgrades, and a wide range of operational concerns that affect cost and investment. It is best to treat these projects as stepping-stone prototypes toward something more meaningful.

The Stack Is Becoming Accessible

But let’s be honest — a decent setup still requires either a paid subscription or reasonably capable hardware to run open models effectively.

Building a solid AI development setup requires an investment that may not be easy for junior developers or undergraduates: the entry point has shifted higher. Development workloads that once ran comfortably on low-powered laptops now increasingly assume at least an entry-level gaming laptop or better. One could argue that cloud compute reduces hardware requirements, but that often increases token and access costs instead.

Serious AI coding capability is now accessible through predictable monthly subscriptions rather than enterprise budgets. For roughly the cost of a streaming subscription, you can realistically complete multiple projects each week within token limits. Claude’s paid plans, for example, begin around $20 per month and provide sustained usage for individual developers. Similar token-based subscription models exist across tools like Codex and Antigravity.

This changes the economics of experimentation.

Bigger Models vs Local Models

A 7B model that can iterate dozens of times on a small function is often more useful than a 400B model you can only afford to run a few times.

An important observation while working with agentic tools is that larger parameter models, such as Claude or OpenAI’s flagship models, do not necessarily outperform smaller locally hosted models by a dramatic margin for many coding tasks.

They may be better but often not proportionally better.

With iterative agents, careful prompting, and tooling around the workflow, the performance gap narrows significantly. Since agents operate iteratively, model quality differences tend to even out over multiple refinement cycles.
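A toy sketch of why iteration narrows the gap: an agent loop that regenerates until its checks pass. The `refine` function and the stub model below are mine for illustration; a real setup would call a local or hosted LLM in place of the stub, but the shape of the loop is the point.

```python
from typing import Callable, Optional

def refine(model: Callable[[str], str], prompt: str,
           check: Callable[[str], Optional[str]], max_rounds: int = 5) -> str:
    """Generate, check, and feed failures back until the check passes."""
    feedback = ""
    for _ in range(max_rounds):
        candidate = model(prompt + feedback)
        error = check(candidate)          # None means the checks passed
        if error is None:
            return candidate
        feedback = f"\n\nPrevious attempt failed: {error}. Fix it and retry."
    raise RuntimeError("no passing candidate within the iteration budget")

# Stub model: a weak model that fails twice before producing working code.
attempts = iter(["bad code", "still bad", "def add(a, b): return a + b"])
model = lambda _prompt: next(attempts)
check = lambda code: None if code.startswith("def add") else "tests failed"

print(refine(model, "Write add(a, b)", check))
```

A cheap local model that can afford many rounds of this loop often converges to the same place a frontier model reaches in one or two.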

For example, GPT-OSS 20B can run on a MacBook Pro or a moderate RTX 3060 (12GB VRAM) setup and performs well for coding workflows. Similarly, Qwen Coder 7B runs on even more modest hardware and delivers surprisingly strong results for structured development tasks. While these models may not match proprietary frontier models in every scenario, experimentation and disciplined prompting often compensate for the difference.

A Practical Solo Developer Setup

A solo developer today can assemble a powerful AI coding stack with relatively modest investment.

Paid Coding Models (Primary Engine)

A Claude subscription provides access to Claude Code tokens, which are often sufficient to build one or two moderately complex applications per week depending on scope. Codex and Antigravity offer comparable usage models. Using multiple agents on the same codebase increases iteration speed and expands the effective context window.

Free and Open Models (Cloud & Local)

Ollama enables running open-source models locally with minimal friction. Cloud offerings also provide limited free usage tiers. Larger open models can sometimes be accessed via cloud providers at low or no cost, depending on allocation policies.

Local execution remains an option for those with decent hardware. Tools like vLLM allow efficient model hosting, though setup is more manual and operationally involved.

Use the Cloud biggies (Claude/Antigravity/Codex) to design the system architecture and solve the impossible bugs. Use the local setup (Ollama/vLLM) for the 80% of development that is boilerplate, unit testing, and UI polishing.

How do I set up?

Prompt it and start hacking on your project! Use ChatGPT, Codex, or Claude Code – or any good local LLM tool you have installed.

Generate an install script that sets up LiteLLM with Claude, Codex, and Antigravity (keys via env vars), installs Ollama, pulls gpt-oss:20b, configures local-first routing with fallbacks, tests everything, and exits only if all checks pass. No Docker.
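For reference, the routing piece of that prompt would come out as something in the shape of a LiteLLM `config.yaml`. This is a hand-written sketch, not the script's output: the model names, env var, and fallback layout are illustrative placeholders, and LiteLLM's docs have the exact schema.

```yaml
# Illustrative sketch only – check LiteLLM's documentation for the exact schema.
model_list:
  - model_name: local-coder            # local-first default via Ollama
    litellm_params:
      model: ollama/gpt-oss:20b
      api_base: http://localhost:11434
  - model_name: claude                 # paid fallback, key from the environment
    litellm_params:
      model: anthropic/claude-sonnet-4-20250514
      api_key: os.environ/ANTHROPIC_API_KEY

litellm_settings:
  fallbacks:
    - local-coder: ["claude"]          # escalate to the paid model on failure
```

The local-first ordering is what keeps the token bill down: the paid model is only touched when the local one fails.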

Simplicity Is a Strategy

I sat through a design review for one of my apps recently. The app itself is simple: a FastAPI endpoint that searches a FAISS index of Jira and Zendesk tickets, and another endpoint that summarizes results using a locally hosted Ollama model. That's it. It solves a clear, narrow problem for our L3 support engineers.
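The whole flow fits on one page. A stripped-down sketch of the same shape, with brute-force cosine similarity standing in for FAISS, a canned string standing in for the Ollama call, and plain functions standing in for the FastAPI routes (the ticket data is invented for illustration):

```python
import math

# Toy ticket "index" – the real app builds FAISS vectors from
# Jira/Zendesk tickets; these embeddings are hand-made stand-ins.
TICKETS = {
    "JIRA-101": ([0.9, 0.1, 0.0], "Login page times out under load"),
    "ZD-202":   ([0.1, 0.9, 0.0], "Invoice PDF renders blank"),
    "JIRA-303": ([0.0, 0.2, 0.9], "Search results missing recent tickets"),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search(query_vec, k=2):
    """FAISS stand-in: rank tickets by cosine similarity to the query."""
    ranked = sorted(TICKETS.items(),
                    key=lambda kv: cosine(query_vec, kv[1][0]), reverse=True)
    return [(tid, text) for tid, (_, text) in ranked[:k]]

def summarize(hits):
    """Ollama stand-in: the real endpoint prompts a local model instead."""
    return "Top matches: " + "; ".join(text for _, text in hits)

print(summarize(search([1.0, 0.0, 0.1])))
```

Two small functions, one data structure. That is the entire surface area the review was about.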

Within five minutes, the suggestions started rolling in: “We should build our own MCP server,” “add webhooks,” “integrate with tool-chains,” and more. All interesting ideas, but they ignored four simple truths I wish more architects considered.

First, team size: This app was built by a senior and a junior engineer. We don’t have a 20-person platform team to maintain bespoke infrastructure. The solution should respect the people actually building and running it.

Second, cost and velocity: Every additional component, whether it is an MCP server, a queue, or an integration, adds not just development time but also ongoing maintenance and hosting costs. By contrast, our app is cheap to run, easy to read, and quick to modify. If someone new joins the team, they can get up to speed in a day. If a change is needed, it takes hours rather than weeks. That speed of iteration is its own kind of cost saving.

Third, Occam’s razor: The simplest solution is often the right one. If you can solve the problem directly without extra layers of ceremony, why complicate it? Complexity creates friction: harder onboarding, slower changes, and more fragile systems. Simplicity is not about being naive, it is about deliberately choosing the most straightforward design that meets the need. In practice, that often means writing the fewest moving parts possible and resisting the urge to design for scenarios that may never arrive.

Fourth, maintenance horizon: A system is only as good as the people who can keep it running. In six months or a year, will new engineers still be able to understand and evolve it? Simpler systems spread knowledge more evenly, lower the risk if someone leaves, and make replacement or handover far easier. That stability is a hidden but very real advantage.

Of course, making something truly enterprise grade is another ballgame. It requires more layers of complexity: compliance, governance, monitoring, integration, and security. But even in those cases, simpler solutions tend to win. When systems are easier to understand and modify, they remain more adaptable even as complexity grows around them.

History shows this pattern again and again: technologies that seemed indispensable just a couple of years back often get leapfrogged by simpler, smarter tools. With LLMs, entire categories of internal apps are already being replaced by automated workflows.

The cultural challenge: In many organizations, complexity gets celebrated because it looks smart and sophisticated. But the real craft lies in designing something simple enough to deliver value immediately and flexible enough to adapt when the ground shifts. We should reward that kind of clarity more often.


A Simplicity-First Architecture Checklist

When reviewing or proposing a solution, ask these questions before adding layers of complexity:

  • Team size: How many engineers will build and maintain this? Can new team members understand the system in under a week?
  • Delivery time: What is the expected timeline for delivering value? Does the design allow for quick changes and fast iteration?
  • Cost considerations: What are the hosting, licensing, and ongoing maintenance costs? Does the complexity increase long-term cost of ownership?
  • Maintenance horizon: In 6–18 months, will this still be easy to support? What happens if the original developers move on?
  • Enterprise requirements: Are compliance, governance, monitoring, and security required from day one? If so, can we still keep the design as simple as possible within those constraints?
  • User need first: Does the solution directly solve the current problem for its users? Are we building for real requirements or hypothetical future scenarios?
  • Market change readiness: If the market shifts in 2–3 years, can this solution be replaced or rethought without excessive cost?
  • Cultural reflection: Are we valuing simplicity, or are we rewarding complexity because it looks more impressive?

Simplicity is not laziness. Sometimes, it is the most strategic choice you can make.

My Controversial Interview Tactic

I am often faced with the question of what makes a good technical candidate. Many companies require an engineering degree in their job descriptions. Many high-functioning delivery teams consist of qualified engineers working on real-world problems, and I have learned a great deal from such engineers, who were at the peak of their powers.

However, I have survived for twenty years in the treacherous, competitive, and sometimes maddening software industry in India with just a B.Com degree. Stubbornly, I did not pursue a formal degree in computer science, instead learning everything from the ground up. My journey has taken me through some of the best companies in the world. Despite this, getting a job has been significantly harder for me than for a candidate with an engineering degree. Search engines and criteria rarely surface resumes of non-engineers, and recruiters, in their naivety, often overlook potentially valuable resumes. Does this make me a good technical candidate?

Throughout my many interviews over the years, I have sought candidates like myself – non-engineering graduates. I have, however, recruited only one such individual. A non-engineering graduate with technical proficiency is a rarity, mostly because few understand the history of computer science and how everything works at its most fundamental level.

This holds true even for engineering graduates. Many come from non-computer science backgrounds and pursue computer science simply because it pays more. In my campus recruiting efforts, I found that even many computer science graduates from second-tier colleges had questionable practical knowledge of how things worked. The IITs and RECs tended to produce the best engineers, and we saw better recruitment averages from these institutions.

My Effective Strategy for Technical Interviews

The most effective strategy I’ve found is to focus on the absolute basics. If I am recruiting for a UI role, here’s what I typically do:

  1. Start by thoroughly reviewing the resume, focusing on the candidate’s experience – this helps reveal whether the experience is genuine or misleading.
  2. Ask about the candidate’s interests, likes, and dislikes. Why did they choose UI? This gives insight into how inclined they are towards computer science and their understanding of it.
  3. Pose targeted questions to gauge their motivation. How eager are they to learn new things and improve their skills?
  4. Technical Round:
    • I start with the basics, such as asking how the internet works. What happens when a URL is entered into the browser?
    • From there, I ask questions about DNS, web servers, TCP/IP, the HTTP protocol and verbs, and their significance.
    • Finally, I move on to technical JavaScript basics, followed by small coding exercises.
    • I always keep the focus on the fundamentals: language basics, ecosystem basics, build systems, etc.
  5. While understanding the basics is critical, I also look for how candidates approach problems they might not immediately know the answer to—this reveals their ability to think critically under pressure.

You might be surprised by how many technical people from reputed organisations, with varying levels of experience, don’t know how the internet works, even though we use it constantly!

What Makes the Best Candidates

I’ve found that the best candidates have a zest for learning new things, the ability to work independently, and a strong grounding in the basics. These candidates often become indispensable to the company. For this reason, I place the most value on a candidate’s attitude.

This approach, however, can lead to precarious situations. You often know within the first fifteen minutes if the candidate is a good fit. The rest of the time is spent prodding and poking to see if anything can be salvaged. Finding a good candidate is difficult. Time spent interviewing equals time invested, doubled. In my younger days, I would often finish interviews in fifteen minutes, telling HR that this would save us both time. However, I quickly realised that there is great value in being humble and well-rounded. Candidates may know some things and not others, but that doesn’t make them bad candidates. They can learn quickly and become good employees. Even if they don’t, it simply means they don’t fit the role we are hiring for – not that they are a bad candidate.

A simple Google search turns up plenty of non-engineering graduates and self-taught engineers at big companies like GitHub, Twitter and Slack.


Living the Moment (with Gaming and Consoles)

I remember seeing an Atari 2600 clone, an NES clone and a PlayStation in the late 90s and being blown away by those consoles. I mostly saw them at friends' places, and although I got to play very little, I became a gamer in my mind just watching others play. Pac-Man, Super Mario and a whole collection of games.

Many of the games for these consoles came as cartridges or compact discs, and the best way to buy them was to go to discreet shops in Kolkata (New Market/AC Market). You could get pirated games there cheaper than the originals, which were very hard to find in Indian markets.

Games such as Brian Lara Cricket and football games like Winning Eleven were an immediate win, and among friends the controller would be passed around to compete against each other. There were also hits like Silent Hill, Tekken and Mortal Kombat.

There was always the alpha, the kid who probably also owned the console and was impossible to beat. That aside, I always dreamt of owning a console and playing games all day. Coming from a middle-class Indian family, consoles were an impossibility, mainly because they were costly and unaffordable. So I went to other kids' houses to sit and stare at others playing games.

When I got a job and started earning in the mid 2000s, I could finally afford a console. I got myself a TV and a PlayStation 2, and bought several games: God of War, Need for Speed, Grand Theft Auto. Over the years, I would go on to buy many of the consoles and handheld devices. The more I aged and moved up the corporate ladder, the more technology I could afford.

The graphics of newer consoles have improved beyond compare. More capable hardware has made way for incredible complexity and unique gameplay experiences. Yet with all the latest jazz, I could not recapture the joy I felt when I was young. Even though I could now buy any console or game, I was past the age of feeling it the way I did as a kid hovering over an Atari or a PlayStation.

So what is the point? An experience is about living the moment. If you want to do something but postpone it, perhaps until you have money or more time, it will not be the same as doing it when you actually wished to. We grow, become disinterested, or things simply change.

Enterprise Cloud Software: How small is a small team?

Many startups and small companies have very small teams developing big enterprise products, while larger companies have many people developing and managing theirs. A basic enterprise software team covers the following functions:

  1. Program Management
  2. Developers
  3. Testers
  4. DevOps
  5. Project Managers

Some of these roles can be combined in one person. For example, a Program Manager might double as Project Manager, or a Lead Developer can take up project management besides leading the team.

A developer most often also performs DevOps activity and, in the case of an SRE, is responsible for the upkeep of the deployed environment.

It really is about scale and the target of the product being developed. For a small to medium B2B application, a dedicated SRE or DevOps engineer may not be required. A public-facing, high-volume product, even if built by a few developers, will require an SRE or a dedicated DevOps engineer to manage uptime.

So how do we determine the scale of the team? It really comes down to one thing: are we assuming that people on the team won’t take leave or have emergencies?

As a project, we should have enough redundancy that deliverables aren’t affected by the sudden absence of the person allotted to a role.

Let’s assume a minimal scenario: a Lead Developer and a couple of developers building a B2B solution for a small to medium organization, with a delivery date two months away. It is safe to assume each of the three developers will be out sick or otherwise absent for a day or two in that period. Does that impact the delivery date or the quality? If not, there is enough redundancy and the team size is good. If yes, then the delivery of the product is destined for delay or failure. An enterprise solution cannot be built on dependence on a few people or on the assumption that nobody takes leave. Either there will be delays or there will be a tragic drop in the quality of the product.
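As a back-of-envelope check on that scenario (taking roughly 40 working days in the two months, which is my assumption, and the "day or two" of absence from above):

```shell
# Redundancy check for the three-developer scenario.
# 40 working days in two months is an assumed figure; 2 sick days per
# developer is the estimate from the text.
DEVS=3; DAYS=40; SICK=2
CAPACITY=$((DEVS * DAYS))   # total dev-days available
LOST=$((DEVS * SICK))       # dev-days lost to absence
echo "capacity=$CAPACITY lost=$LOST buffer=$((LOST * 100 / CAPACITY))%"
```

If the schedule cannot absorb that roughly 5% loss, the team has no real redundancy.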

Productivity: Three Things I Changed During Covid.

Like many of us, I had lived a life of belligerence: opinionated, intoxicated, hustling and travelling. I never sat at home for a minute; the only time I found myself at home was to sleep at night. Just a couple of weeks before Covid hit the world, I was on a road trip across Iceland in winter, braving snowstorms and staying in beautiful, scenic Airbnbs in temperatures well below zero throughout the two-week trip.

My sister, my partner in crime and the chief planner, had decided we would travel Europe unlike most people do: one country at a time. The first was Iceland; the next, in August 2020, would have been Italy.

We all know what went down, and I guess each of us has a story to tell about the mental and physical assault of getting past 2020: political, religious, international, or simply falsehoods being shared and shouted about. Halfway into 2021, it is still a never-ending rabbit hole with no exit.

As things stood, I deliberately changed a few things in my behaviour to work better and make working from home fruitful, and I found some peace in the process. None of these are guaranteed to work for everybody, but some are common sense that I realized only this late.


Stop criticising everything and everybody. Stop being involved in somebody else’s life.

In a humongous society like India’s, every move you make invites criticism and gossip. Everybody, even the unrelated, has a comment on you or your family. The most-watched Indian shows capitalize on this culture and reward bad, bitchy behaviour, and the negativity gets imbibed from a very young age by watching parents and others do it regularly. “Sharmaji” and the subsequent dialogues inundate the country with memes and jokes, the most famous being “Sharmaji ka beta”.

This took away a lot of my brain’s processing. Sitting down to talk with family or office friends consumed a lot of mental time: politics, neighbours, relatives. The subjects became so personal that I was expected to take sides, and if I didn’t, people stopped talking or it became awkward. Imagine losing friends and family over politics or religion, neither of which will help you when you actually need help.

It not only brought on social anxiety; it kept my brain 100% occupied, leaving no time to think about innovation or code I could otherwise have written.

What did I do to change this? It was the hardest change and took the longest, especially because it happens unconsciously.

  1. Silence is golden. Sit and listen, but do not absorb. From a very young age I have had a mental trick to tune out. What I mean by tuning out is that I can think of something completely different while nodding my head at the discussion at the table. I have a go-to trope: I imagine playing cricket for India. Based on the series going on at the time, I imagine India really struggling to win when I step out to bat and save the game. It sounds ridiculous, but it is quite elaborate. Similarly, I imagine playing tennis and hitting some outrageous shots. It is narcissistic, yes, but only in my head, so the brain invests in the alternate thought and does not wander. It is similar to how we play video games: we never tell our friends how much we failed, only how well we succeeded. Find the trope that interests you.
  2. If silence doesn’t work, walk away or change the subject. An inflamed brain is a dangerous tool. So choose to exit the conversation or simply make a polite excuse and go away from toxicity.
  3. If none of those work, confront. As an example, a lot of falsehoods were being discussed about how the internet and phones work during these times, or how spam, fake news and other tech things work. Having worked with computers and the internet for the last twenty years, I may claim to be a subject matter expert on those topics, yet I know only a fraction of a fraction. Others are experts in different fields and are not expected to know how technology works. Educate them, politely remind them that you know the subject, and set the record straight. An informed, polite confrontation more often than not ends a toxic discussion.
  4. Grow balls to discuss the criticism with the person you are criticising.

Time is irreversible.

If the choice is between spending more time or spending more money, choose to spend the money. One can always make the money back, but all the money in the world cannot buy back time. Instead of taking a train that takes a day, if it is affordable, take a flight and be there in three hours. The train may cost far less, but the time saved is far more precious, even if it is spent resting.

Of course this comes with several caveats, the most important being affordability. If it is not affordable, it is not a choice; then one can only spend the time, like it or not.

Another caveat: sometimes you want to spend the time, say on a train journey with a loved one or a trek through a scenic landscape rather than taking that taxi.


Take that risk.

A few of my friends put a lot of their savings into shares, crypto and land. I chose to save for an emergency during Covid. What if I lost my job or the savings? What if there was a medical emergency in the family?

When the market rose at the end of the first phase, my friends had doubled or tripled their savings. Fear sometimes makes us irrational, makes us choke up and turn complacent. I am not talking only of money here but of everything unknown. Should I take that new job? Should I travel during these times?

An impulsive risk is often detrimental; it is like playing Russian roulette. But an informed risk has a far better chance of success. If I change jobs, does the compensation or position cover my risk? Have I researched the new company and its outlook enough? If I travel, am I taking precautions: wearing masks, washing my hands and following protocol even more than usual?

With that lesson, I managed to travel, see movies in the theatre, change jobs and do a bit of investing during the first and second phases, much of which (except movies and travel) I hadn’t done even in pre-Covid times.

Finally, stand against flaming. Think and reason on your own.

end.

A Developer’s Productivity Setup using a Raspberry Pi 4

The Objective

I always wanted a workspace that could replace many of the paid or cloud-hosted tools I use. Especially now, when I spend most of my time working from home, I need to manage my productivity. The setup uses Docker for easy siloed management and requires no complicated system changes that break one app or another. Additionally, Docker Hub is a treasure trove of images that can be pulled and run very easily.

The only drawback to using Docker as an app deployment tool is that the tools all run on different ports, and one has to remember the mappings. The mappings are easily visible in Portainer, which helps. Another useful technique is to install nginx locally and use it as a reverse proxy.

NGINX Docs | NGINX Reverse Proxy
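A minimal sketch of such a reverse-proxy server block, here exposing Portainer (port 9000, from the install steps in this post) behind a friendly path. The hostname `pi.local` is an assumption; adjust it to your network:

```nginx
# Hypothetical reverse-proxy sketch so you don't have to remember port
# mappings. Hostname and path are assumptions, not part of the original setup.
server {
    listen 80;
    server_name pi.local;

    location /portainer/ {
        proxy_pass http://127.0.0.1:9000/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

One server block per tool (or one location per port) keeps everything reachable under a single hostname.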

Tools Installed

  1. Portainer – Visual docker/container management web app. We will use portainer to deploy containers off dockerhub.
  2. BookStack – Notes taking and writing management tool.
  3. Wekan – A trello like Kanban Board/ Lists App.
  4. Bitwarden Server – To store all my passwords locally instead of depending on passwords.google.com or Microsoft. Bitwarden is an open-source solution with a server, web app, browser plugin and iOS/Android apps.
  5. Pihole – To manage my network and conserve bandwidth. Primarily an ad blocker; I use it to block intrusive ads on my parents’ phones and desktop, and also to manage device access.
  6. Prometheus/Grafana – Monitor the pi and network.
  7. FileBrowser – A simple file manager for additional storage. Attach a drive to the Pi and mount it at /srv to expose it to the container.
  8. Draw.io – Diagramming Swiss knife.
  9. PlantUML Server – For UML Diagramming
  10. Hoppscotch – REST/Web Socket Client to replace Postman
  11. Owncloud – Optionally, I keep a stopped instance of Owncloud. I don’t really need it, as I can make do with an old HDD attached to the Pi.
  12. Code Server – Code Editor/IDE. The Visual Studio Code Server replicates the desktop app to the web. It provides most of the features the desktop editor provides.

In addition to the above, we get the following as dependencies, which can be leveraged for development:

  1. Mongo Server
  2. MySQL/MariaDB – With BookStack
  3. PHPMyAdmin – Optionally installed to manage MySQL.

Other databases, such as Postgres, can be run as well.


Raspberry Pi Preparation

I have tested this on a 4 GB and an 8 GB Raspberry Pi 4; it could run on a 2 GB version as well, though the number of hosted containers may have to be reduced. To run Docker efficiently, it is better to use a 64-bit image. I used the Raspbian 64-bit beta image, but one can also choose the ever-stable Ubuntu Server 64-bit for Raspberry Pi.

For installation instructions, follow this guide – Installing operating system images – Raspberry Pi Documentation

Disable the GUI from raspi-config – raspi-config – Raspberry Pi Documentation

sudo apt-get update && sudo apt-get upgrade

I have found an issue with dhcpcd when a lot of virtual networks are added:

https://github.com/raspberrypi/linux/issues/4092

sudo nano /etc/dhcpcd.conf
Add at the end: denyinterfaces veth*
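The same change can be scripted so it is applied only once. The snippet below targets a local file by default for illustration; on the Pi, set CONF to /etc/dhcpcd.conf and run it with sudo:

```shell
# Append the dhcpcd workaround idempotently. CONF defaults to a local copy
# here for illustration; on the Pi use CONF=/etc/dhcpcd.conf (with sudo).
CONF="${CONF:-./dhcpcd.conf}"
touch "$CONF"
grep -qxF 'denyinterfaces veth*' "$CONF" || echo 'denyinterfaces veth*' >> "$CONF"
```

Running it twice leaves exactly one copy of the line in place.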

Setup Docker & Docker Compose

wget -qO get-docker.sh https://get.docker.com && sudo sh get-docker.sh

Install docker-compose:

sudo apt-get install -y python3 python3-pip
sudo pip3 install docker-compose

Topology

I attached the Raspberry Pi 4 to the router using an RJ45 cable, which is much faster and matters since I host Pi-hole on it. The Pi can be added as a WiFi device as well.


Install Portainer

The Portainer installation guide – Docker – Documentation (portainer.io). Run the following on the Raspberry Pi command line.

docker volume create portainer_data
docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce

Browse to the Portainer URL (port 9000). Below is how it looks with containers running.

Portainer also has an App Templates feature that can be used to deploy predefined stacks.


Add App from Dockerhub

In most cases the install is straightforward. Click Containers -> Add a new container.

Choose an image built for ARM 64. In the example above, we are adding an nginx server. Once added to Portainer, the env and storage parameters can be adjusted per the image’s documentation.


Add App using Docker Compose

Portainer accepts docker-compose files, and many stacks can be deployed using one.

Below is the compose file:

---
version: "2"
services:
  bookstack:
    image: ghcr.io/linuxserver/bookstack
    container_name: bookstack
    environment:
      - PUID=1000
      - PGID=1000
      - APP_URL=
      - DB_HOST=bookstack_db
      - DB_USER=bookstack
      - DB_PASS=bookstack
      - DB_DATABASE=bookstackapp
    volumes:
      - /path/to/data:/config
    ports:
      - 9080:80
    restart: unless-stopped
    depends_on:
      - bookstack_db
  bookstack_db:
    image: ghcr.io/linuxserver/mariadb
    container_name: bookstack_db
    environment:
      - PUID=1000
      - PGID=1000
      - MYSQL_ROOT_PASSWORD=root
      - TZ=Europe/London
      - MYSQL_DATABASE=bookstackapp
      - MYSQL_USER=bookstack
      - MYSQL_PASSWORD=bookstack
    volumes:
      - /path/to/data:/config
    ports:
      - 3306:3306
    restart: unless-stopped
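With the file above saved as docker-compose.yml, the stack can be brought up from the Pi’s command line. The snippet is guarded so it reports rather than fails when docker-compose or the file is missing:

```shell
# Deploy the BookStack stack from the directory containing docker-compose.yml.
# Guarded so the snippet degrades gracefully outside the Pi.
if command -v docker-compose >/dev/null 2>&1 && [ -f docker-compose.yml ]; then
  docker-compose up -d   # start bookstack and bookstack_db in the background
  STATUS="deployed"
else
  STATUS="skipped: docker-compose or docker-compose.yml missing"
fi
echo "$STATUS"
```

The same compose text can instead be pasted into Portainer’s Stacks editor, which runs it for you.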

Custom Install Apps using Docker Commandline

Requires an advanced understanding of Docker.

A lot of tools may not run because their images were built for another platform. The moment you see a “process exec” error, the image is probably incompatible with arm64. In that case, the best approach is to rebuild from source using an ARM-based image.
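In practice, rebuilding usually means cloning the tool’s source and building the image on the Pi itself, so Docker pulls the arm64 variant of the base layers automatically. A hypothetical Dockerfile for a Node-based tool (the base image and commands are illustrative, not from any specific tool):

```dockerfile
# Hypothetical sketch: building this on the Pi pulls the arm64 variant of
# node automatically, so the resulting image runs natively on arm64.
FROM node:16
WORKDIR /app
COPY . .
RUN npm install && npm run build
CMD ["npm", "start"]
```

Then `docker build -t sometool:arm64 .` on the Pi produces a native image (the tag is a placeholder).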

Portainer will automatically pick up any images added or containers launched this way.

end.