Gino Kaleb

The Empathetic Algorithm: Coding Bureaucracy with Logic and Care

The challenge wasn't technical but philosophical: how to translate a human evaluation process into a fair, automated system.

When I started designing the decision engine for “Project Phoenix,” my task looked purely technical. The goal was to automate the approval process for academic requests—a job that until then relied on manual reviews, paperwork, and, let’s be honest, the evaluator’s subjectivity. I thought in terms of services, entities, and endpoints. Very soon I realized the true challenge wasn’t writing code but encoding bureaucracy. And more importantly, doing it with empathy.

The Mirage of Pure Logic

The first temptation in such a project is extreme simplification. Create a couple of simple rules: if the student’s GPA is above 8.5, approve; if below, reject. If they have no debts, proceed; otherwise, stop. It would be quick and technically functional.
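To make the contrast concrete, the flat version would look roughly like this; the 8.5 cutoff comes from the example above, and the function and parameter names are mine, not the real rules:

```python
# The tempting, flat version: two checks, zero context.
def naive_decision(gpa: float, has_debts: bool) -> str:
    if has_debts:
        return "REJECTED"
    return "APPROVED" if gpa > 8.5 else "REJECTED"
```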

But it would also be fundamentally unfair.

Such a system doesn’t distinguish between a first-semester student struggling to adapt and a final-semester student with a flawless record. It doesn’t understand that a “schedule conflict” is a systemic failure that shouldn’t penalize the user, while a “health issue” is a delicate situation that no machine should judge automatically.

I realized I wasn’t building a simple validator. I was designing a digital proxy for a human decision-maker. For it to be fair, that proxy needed context.

Layers of a Fair Decision

I decided to structure the evaluation engine not as a flat list of conditions but as a series of concentric filters, each with a specific purpose, mimicking how a reasonable person would analyze a case.
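Structurally, that amounts to something like the minimal sketch below: each layer returns a verdict, and the first layer that refuses to wave the request through ends the evaluation. The dict-based request and the verdict strings are simplifications of mine, not the actual Project Phoenix schema:

```python
from typing import Callable

# Each layer inspects a request (a plain dict in this sketch) and answers with
# one of three verdicts; the first layer that doesn't say "PASS" ends the run.
#   "PASS"     -> hand the request to the next, inner layer
#   "REJECT"   -> stop: the request cannot proceed
#   "ESCALATE" -> stop: flag the request for a human reviewer
Filter = Callable[[dict], str]

def run_filters(request: dict, layers: list[Filter]) -> str:
    for layer in layers:
        verdict = layer(request)
        if verdict != "PASS":
            return verdict
    return "PASS"  # survived every layer
```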

The First Gate: Eligibility

This was the outermost, simplest layer. Binary rules: Is the request period open? Has the user exceeded their request limit? This filter acted like a security guard at the door: its job wasn’t to judge but to verify credentials. If you don’t meet them, you cannot pass. Simple and efficient.
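A sketch of that gate, with invented field names for the request window and the per-term quota:

```python
from datetime import date

def eligibility_gate(request: dict) -> str:
    """Outermost layer: check credentials at the door, don't judge merit."""
    today = date.today()
    # period_start and period_end are assumed to be date objects.
    if not (request["period_start"] <= today <= request["period_end"]):
        return "REJECT"  # the request window is closed
    if request["requests_this_term"] >= request["request_limit"]:
        return "REJECT"  # the user has exhausted their quota
    return "PASS"
```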

The Critical Filter: Viability

Once inside, the request faced non-negotiable conditions. Is the student’s academic status “active”? Are there major administrative holds? These were the red flags: an inactive status or a significant hold meant a near-immediate rejection. The system needed to identify cases that, under existing regulations, were unviable from the outset.
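Continuing the sketch, again with field names that are mine rather than the system’s:

```python
def viability_filter(request: dict) -> str:
    """Second layer: non-negotiable conditions under current regulations."""
    if request["academic_status"] != "active":
        return "REJECT"  # only active students can proceed
    if request["administrative_holds"]:
        return "REJECT"  # any major hold blocks the request outright
    return "PASS"
```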

The Heart of the System: Context

This is where the engine’s true intelligence lived. Instead of treating all reasons for requests the same, I created specific logic per reason (see the sketch after this list).

  • A Schedule Conflict was almost always approved. It’s a logistical problem the system should accommodate, not punish.
  • A Work Situation was evaluated with a key variable: semester. For an advanced student, balancing work and study is common and expected. For an early-semester student, it could be a red flag requiring closer review. The algorithm had to reflect that maturity.
  • A Health Issue, on the other hand, was a clear limit for automation. The decision there was not to decide. The request was automatically flagged for human review. The algorithm’s empathy consisted in recognizing its inability to judge such a human situation.
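A sketch of that per-reason logic, following the same conventions; the fifth-semester cutoff is an illustrative number, not the real threshold:

```python
def context_filter(request: dict) -> str:
    """Innermost layer: reason-specific judgment."""
    reason = request["reason"]
    if reason == "schedule_conflict":
        return "PASS"  # a logistical problem: accommodate it, don't punish it
    if reason == "work_situation":
        # For an advanced student, balancing work and study is expected;
        # for an early-semester student, it deserves a closer human look.
        return "PASS" if request["semester"] >= 5 else "ESCALATE"
    if reason == "health_issue":
        return "ESCALATE"  # never auto-judge a health case
    return "ESCALATE"  # unknown or ambiguous reason: ask a human
```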

The Third Way: The Art of Saying “I Don’t Know”

Perhaps the most important design decision was creating a third state beyond “Approved” and “Rejected”: “Pending Review”.

This became the safety net. It was the way the algorithm raised its hand and said: “I’ve processed the information I have, but this case falls outside clear patterns. I need a human.” Cases with borderline GPAs, ambiguous reasons, or factor combinations that didn’t lead to a clear conclusion fell into this category. This freed administrative staff from 80% of routine and predictable cases, letting them focus their attention on the 20% that truly needed human judgment.
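Tying the layers together, the whole engine reduces to running the filters in order and translating the internal verdicts into the three public outcomes; this builds on the functions sketched in the previous sections:

```python
PIPELINE = [eligibility_gate, viability_filter, context_filter]

def decide(request: dict) -> str:
    """Run the concentric filters and translate the verdict for the user."""
    verdict = run_filters(request, PIPELINE)
    if verdict == "REJECT":
        return "Rejected"
    if verdict == "ESCALATE":
        return "Pending Review"  # the algorithm raising its hand
    return "Approved"
```

By construction, anything the filters cannot settle cleanly lands in “Pending Review,” which is exactly the safety-net behavior described above.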

In the end, the decision engine for Project Phoenix wasn’t just software. It was my attempt to translate an administrative process into a logical flow that was efficient without being insensitive. The goal was never to replace human judgment but to empower it—filtering the noise so people could focus on the decisions that matter. In the process I learned that the best code is not only code that works, but code that remembers who it works for.