I did not start this project because QR codes were exciting.
I started it because the inventory workflow had stopped telling the truth.
Search took too long. A picker could scan an item and still not trust the status. A supervisor could ask for a count and get three different answers. Nobody on the floor was irrational. The workflow was.
That was the real problem I wanted to solve. Not digitization for its own sake. Not a prettier UI. I wanted physical movement and system state to collapse into one reliable action.
At De-El Enterprises, that meant rebuilding the workflow into a production scan + search system that doubled warehouse-floor efficiency.

Erik Tonnesen
Software Developer / Solutions Engineer, Internal Operations Systems
Erik Tonnesen is a Brooklyn-based software developer and solutions engineer who builds internal systems that make messy operational workflows behave like reliable software. At De-El Enterprises, he designed and shipped a QR-based inventory scan + search platform with TypeScript, React, Next.js, GraphQL, Node.js, and PostgreSQL that doubled warehouse-floor efficiency. He combines product-minded UX, role-based authorization, and observability to turn operational friction into measurable throughput, and also contributes part-time at Square Mile Labs on full-stack modernization work including Apollo-to-TanStack Query migrations.
What is QR code inventory management?
When I say QR code inventory management, I do not mean printing labels and calling the project modernized. I mean tying every physical item or location to a scannable identifier and making the scan produce a reliable inventory state change across receiving, putaway, picking, and adjustments.
Why do QR inventory projects fail even when scanning works?
The projects I see fail usually do not fail at scanning. They fail at workflow design. Teams ship scanners on top of weak state models, inconsistent transition rules, vague permissions, and slow search. That is how you create fast bad data.
What architecture actually worked for me?
What worked for me was a three-layer architecture: Next.js and React for scan plus search UX, Node.js APIs with GraphQL for transition logic, and PostgreSQL as the source of truth. I also treated role-based auth and structured logging as core system design, not cleanup work.
How should a team start implementing QR inventory in 30 days?
If I had to start again, I would begin with one painful workflow in one physical scope. I would define states and transition rules first, then build scan plus search UI, role controls, and event logging around that workflow. Expansion comes after operators trust the pilot.
Most teams blame the visible symptom first: slow scanners, weak Wi-Fi, old devices.
I think that is the easy diagnosis.
The deeper issue is usually fragmented workflow ownership. Operations owns movement. Engineering owns software. But nobody fully owns the integrity between physical action and digital state.
When that boundary gets fuzzy, operators create unofficial workarounds. I do not say that critically. Workarounds are rational locally and destructive globally. They save one shift and corrupt one quarter.
I learned that inventory speed is not a scanner problem first. It is a workflow integrity problem first. The winning system turns scan events into trusted state transitions with as little operator friction as possible.
- QR Code Inventory Management
QR code inventory management is the practice of tying each physical item or location to a scannable QR identifier and enforcing consistent digital state updates after each operational event. A production-grade system combines scan capture, business-rule validation, and auditable state transitions.
When people hear QR inventory management, they usually picture labels, scanners, and maybe a faster receiving step.
I picture trust.
If a scan only records an event and leaves the real business meaning for later, the system still leaks. The production pattern I wanted was immediate validation and immediate consequence:
- Scan succeeds -> state transitions
- Scan fails validation -> clear operator action
- Scan conflicts with current state -> blocked, logged, and reviewable
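A minimal sketch of that three-outcome pattern in TypeScript. The states, actions, and names (`handleScan`, `ScanOutcome`) are illustrative assumptions, not the production code:

```typescript
// Illustrative sketch only: states, actions, and shapes are assumptions.
type ItemState = "received" | "stored" | "allocated" | "picked" | "shipped";

type ScanOutcome =
  | { kind: "transitioned"; from: ItemState; to: ItemState }
  | { kind: "rejected"; reason: string; operatorAction: string }
  | { kind: "conflict"; currentState: ItemState };

// Allowed transitions per action: anything not listed here is a conflict.
const transitions: Record<string, Partial<Record<ItemState, ItemState>>> = {
  putaway: { received: "stored" },
  pick: { allocated: "picked" },
  ship: { picked: "shipped" },
};

function handleScan(action: string, currentState: ItemState, quantity: number): ScanOutcome {
  const next = transitions[action]?.[currentState];

  if (next === undefined) {
    // Scan conflicts with current state: blocked here, logged and reviewed upstream.
    return { kind: "conflict", currentState };
  }

  if (quantity <= 0) {
    // Scan fails validation: give the operator a concrete next step.
    return {
      kind: "rejected",
      reason: "Quantity must be positive",
      operatorAction: "Re-enter the quantity and rescan",
    };
  }

  // Scan succeeds: the state transition is the immediate consequence, not a later batch job.
  return { kind: "transitioned", from: currentState, to: next };
}
```

The point is that every scan resolves to exactly one of three results the operator can act on immediately.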
| Superficial QR Setup | Production QR Inventory System |
|---|---|
| Scans are captured but process rules are mostly manual | Scans trigger rule-based state transitions with guardrails |
| Search is a fallback and often too slow for floor usage | Search is first-class and optimized for operational velocity |
| Errors are discovered in reconciliation windows | Errors are surfaced in flow with actionable messages |
| Permissions are broad and implicit | Permissions are role-scoped and explicit |
QR codes were never the value layer in my project. The value came from the operating model around them: validation logic, role-scoped permissions, and real-time feedback loops.
The architecture I trusted in production ended up being a clean three-layer split:
1) App Layer: Next.js + React (operator speed)
I built the frontend for floor reality:
- quick search fallback when labels are damaged
- scanner-first input flow
- low-friction status feedback
- clear error copy tied to next action
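As a rough illustration of the scanner-first input flow, here is the kind of component I mean. The component name and props are hypothetical; most handheld scanners behave as keyboards that terminate input with Enter, so one autofocused field can serve both scans and typed search:

```tsx
// Illustrative sketch of a scanner-first input; names and props are assumptions.
import { useState } from "react";

export function ScanInput({ onSubmit }: { onSubmit: (code: string) => void }) {
  const [value, setValue] = useState("");

  return (
    <form
      onSubmit={(e) => {
        e.preventDefault();
        // The same submit path handles a scanned label and a typed item ID,
        // so the search fallback is never a separate, slower workflow.
        if (value.trim()) onSubmit(value.trim());
        setValue("");
      }}
    >
      <input
        autoFocus
        value={value}
        onChange={(e) => setValue(e.target.value)}
        placeholder="Scan a label or type an item ID"
        aria-label="Scan or search"
      />
    </form>
  );
}
```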
2) API Layer: Node.js + GraphQL (transition control)
I wanted the API layer to own business rules:
- validates scan payloads
- enforces allowed state transitions
- records who did what, when, and where
- returns deterministic outcomes for UI rendering
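A sketch of what owning the rules can look like, written as a GraphQL resolver that acts as a transition gate. The schema, role names, and return shape are assumptions for illustration, not the actual API:

```typescript
// Illustrative resolver-as-transition-gate; schema and roles are assumptions.
import { GraphQLError } from "graphql";

interface Context {
  actorId: string;
  role: "picker" | "supervisor" | "admin";
}

interface MoveItemArgs {
  itemId: string;
  action: "putaway" | "pick" | "adjust";
}

// Which roles may perform which transition-causing actions.
const actionRoles: Record<MoveItemArgs["action"], Array<Context["role"]>> = {
  putaway: ["picker", "supervisor", "admin"],
  pick: ["picker", "supervisor", "admin"],
  adjust: ["supervisor", "admin"],
};

export const resolvers = {
  Mutation: {
    moveItem: async (_parent: unknown, args: MoveItemArgs, ctx: Context) => {
      // 1. Role gate: refuse transitions the actor's role cannot perform.
      if (!actionRoles[args.action].includes(ctx.role)) {
        throw new GraphQLError("Role is not authorized for this transition");
      }

      // 2. Transition gate: the current state would be loaded and validated here
      //    before any write (elided in this sketch).

      // 3. Deterministic outcome for the UI: who did what, and what state resulted.
      return { itemId: args.itemId, status: "TRANSITIONED", actorId: ctx.actorId };
    },
  },
};
```

The resolver refuses anything the role or the current state does not permit, so the UI only ever renders outcomes it can explain to the operator.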
3) Data Layer: PostgreSQL (system truth)
I wanted PostgreSQL to hold system truth:
- canonical inventory entities
- current state and movement history
- permission-linked actor metadata
- audit-grade event records
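To show what "system truth" looks like in practice, here is a hedged sketch of writing the state change and its audit event in one PostgreSQL transaction using node-postgres. Table and column names are assumptions:

```typescript
// Illustrative sketch with node-postgres; table and column names are assumptions.
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from the standard PG* environment variables

export async function recordTransition(
  itemId: string,
  fromState: string,
  toState: string,
  actorId: string
): Promise<void> {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");

    // Guarded update: only succeeds if the row is still in the expected state,
    // which surfaces concurrent transitions instead of silently overwriting them.
    const updated = await client.query(
      "UPDATE inventory_items SET state = $1, updated_at = now() WHERE id = $2 AND state = $3",
      [toState, itemId, fromState]
    );
    if (updated.rowCount !== 1) {
      throw new Error(`Item ${itemId} is no longer in state ${fromState}`);
    }

    // The movement history commits in the same transaction as the state change,
    // so current state and audit trail cannot drift apart.
    await client.query(
      `INSERT INTO inventory_events (item_id, from_state, to_state, actor_id, occurred_at)
       VALUES ($1, $2, $3, $4, now())`,
      [itemId, fromState, toState, actorId]
    );

    await client.query("COMMIT");
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    client.release();
  }
}
```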
The fastest inventory workflow is the one where the operator never has to wonder if the system understood the action. Every scan should produce a trusted next state or a clear correction path.
The architecture that held up for me was simple: frontend for speed, API for rule enforcement, database for truth. Every time those concerns blur, data drift gets cheaper.
I learned fast that operator UX decides whether the whole initiative lives or dies.
If scan latency is low but correction flow is confusing, adoption collapses. If search is technically powerful but operationally clumsy, people route around it.
Design for the fastest valid path
Map the shortest happy-path interaction for each role. Every extra click must be justified by risk reduction, not developer preference.
Treat search as a primary control, not a fallback
Damaged labels, poor prints, and edge cases happen daily. Search has to be as operationally useful as scanning.
Show transition feedback immediately
Operators need immediate confirmation of what changed. Success and failure states should be obvious and concise.
Build correction paths into the workflow
Users should never need to leave the flow to fix common issues. Correction actions should be one decision away.
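One small, concrete piece of treating search as a primary control is keeping typed lookups responsive without flooding the API on every keystroke. A minimal debounce hook sketch; the hook name and delay are illustrative, not taken from the production code:

```typescript
// Illustrative debounce hook for floor-side search fields.
import { useEffect, useState } from "react";

export function useDebouncedValue<T>(value: T, delayMs = 250): T {
  const [debounced, setDebounced] = useState(value);

  useEffect(() => {
    // Only propagate the value after the operator pauses typing.
    const timer = setTimeout(() => setDebounced(value), delayMs);
    return () => clearTimeout(timer);
  }, [value, delayMs]);

  return debounced;
}
```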
Warehouse UX is not decorative UI to me. It is throughput. I got more leverage from removing friction in the interface than I would have from chasing hardware perfection.
Inventory systems get dangerous when state logic lives in people's heads.
I wanted it in code.
The fix was explicit transition modeling:
- define allowed states
- define allowed transitions
- define role-specific transition authority
- define invalid transition responses
For example, if an item is already marked as allocated, the system should not accept a conflicting movement action without a controlled exception path.
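Here is roughly how those rules can be encoded, with the allocated-item case above built in. The states, roles, and rule entries are illustrative assumptions:

```typescript
// Illustrative transition model; states, roles, and rules are assumptions.
type ItemState = "received" | "stored" | "allocated" | "picked";
type Role = "picker" | "supervisor";

interface TransitionRule {
  from: ItemState;
  to: ItemState;
  allowedRoles: Role[];
}

const rules: TransitionRule[] = [
  { from: "received", to: "stored", allowedRoles: ["picker", "supervisor"] },
  { from: "stored", to: "allocated", allowedRoles: ["supervisor"] },
  { from: "allocated", to: "picked", allowedRoles: ["picker", "supervisor"] },
  // Deliberately no rule from "allocated" back to "stored": undoing an allocation
  // has to go through a controlled exception flow, not an ordinary scan.
];

type TransitionResult =
  | { ok: true; to: ItemState }
  | { ok: false; reason: "invalid_transition" | "role_not_authorized" };

function checkTransition(from: ItemState, to: ItemState, role: Role): TransitionResult {
  const rule = rules.find((r) => r.from === from && r.to === to);
  if (!rule) return { ok: false, reason: "invalid_transition" };
  if (!rule.allowedRoles.includes(role)) return { ok: false, reason: "role_not_authorized" };
  return { ok: true, to };
}
```

Because the missing paths are absent from the table rather than handled ad hoc in UI code, invalid transitions fail the same way everywhere.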
- Inventory State Transition Integrity
Inventory state transition integrity is the property that every state change follows an allowed path, is attributable to a role-authorized actor, and is persisted with enough context to reconstruct events during audits or incident reviews.
This is where GraphQL helped me most when I used it with discipline: resolvers became a transition gate, not just a data access wrapper.
I came away believing that inventory accuracy is a byproduct of transition discipline. Without explicit transition rules, higher scan volume just creates inconsistency faster.
I treated security and observability as product features, not post-launch chores.
Inventory operations software gets over-permissioned and under-observed surprisingly fast when a team is rushing rollout.
Role-based access control
I enforced role boundaries so users could execute only the transitions relevant to their workflow responsibilities. That reduced accidental misuse and made incident triage faster.
Logging and observability
Structured logging around database and network activity improved troubleshooting and reliability:
- transition attempts (success/fail)
- actor and role context
- API response outcomes
- database write consistency checks
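A sketch of the shape of those log events, using pino as an example logger. The field names and event name are assumptions, not the production logging schema:

```typescript
// Illustrative structured logging for transition attempts; field names are assumptions.
import pino from "pino";

const logger = pino({ name: "inventory-transitions" });

export function logTransitionAttempt(params: {
  itemId: string;
  action: string;
  actorId: string;
  role: string;
  outcome: "success" | "rejected" | "conflict";
  dbWriteMs?: number;
}): void {
  // One structured event per attempt, so "what happened to this item?"
  // becomes a log query instead of a reconstruction exercise.
  logger.info(
    {
      itemId: params.itemId,
      action: params.action,
      actorId: params.actorId,
      role: params.role,
      outcome: params.outcome,
      dbWriteMs: params.dbWriteMs,
    },
    "inventory.transition.attempt"
  );
}
```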
When someone on the operations side asked what happened to an item, we could answer with evidence instead of reconstruction guesswork.
Avoid launching QR workflows with broad admin-like permissions "just for speed." Overbroad access may improve early rollout velocity but creates long-term integrity risk and weakens operational trust.
For me, role-based auth protected process boundaries and observability protected diagnosis speed. Together, they protected trust in the inventory system.
My Apollo decoupling and TanStack Query migration work at Square Mile Labs may sound separate from warehouse inventory. It was not.
That work reinforced a lesson I care about in operational systems:
- transport decisions should not dominate state strategy
- query behavior should be explicit at the screen level
- cache behavior should match workflow risk, not library defaults
| Tightly Coupled Data Layer | Decoupled Query Strategy |
|---|---|
| Data-fetching concerns leak into UI behavior unpredictably | UI state decisions are explicit and easier to reason about |
| Cache behavior is harder to tune by workflow criticality | Cache and refetch strategy can align with operational risk |
| Migration paths are expensive and fragile | Client changes are incrementally manageable |
The practical takeaway for me is not that one client is always better. It is that architecture should preserve optionality while protecting operational correctness.
I care less about which client wins a library debate and more about coupling discipline. Systems age better when query behavior, cache policy, and transition logic stay intentionally separate.
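To make the coupling point concrete, here is a hedged sketch of screen-level query configuration with TanStack Query, where cache policy follows workflow risk rather than a global default. The endpoint, query keys, and timings are invented for illustration:

```typescript
// Illustrative screen-level cache policy with TanStack Query; endpoint and timings are assumptions.
import { useQuery } from "@tanstack/react-query";

async function fetchItem(itemId: string) {
  const res = await fetch(`/api/items/${itemId}`); // hypothetical endpoint
  if (!res.ok) throw new Error(`Lookup failed: ${res.status}`);
  return res.json();
}

// A picking screen tolerates almost no staleness, so it refetches aggressively...
export function usePickScreenItem(itemId: string) {
  return useQuery({
    queryKey: ["item", itemId],
    queryFn: () => fetchItem(itemId),
    staleTime: 0,
    refetchOnWindowFocus: true,
  });
}

// ...while a reporting screen can safely cache the same data for minutes.
export function useReportScreenItem(itemId: string) {
  return useQuery({
    queryKey: ["item", itemId],
    queryFn: () => fetchItem(itemId),
    staleTime: 5 * 60 * 1000,
    refetchOnWindowFocus: false,
  });
}
```

The query behavior is declared where the operational risk lives, at the screen, instead of being inherited from a client-wide default.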
Shipping the software was only half the job.
Getting operations teams to trust it was the harder half.
If I were rolling this out again, I would follow this pattern:
Pilot one workflow in one location scope
Start where the pain is obvious and measurable. Keep blast radius small enough to learn quickly.
Collect operator friction daily, not monthly
Teams often wait for formal reviews and miss high-frequency friction. Daily feedback loops surface reality faster.
Fix workflow friction before adding feature scope
Stability and trust compound. Feature breadth without workflow confidence compounds confusion.
Scale only after state integrity is stable
Expansion should follow reliable transition metrics and low correction rates, not launch enthusiasm.
Operations teams adopt systems that make bad days easier, not just good days faster. Design for exception handling and recovery from day one.
I do not think rollout success is deployment status. It is behavior change. Pilot narrow, learn fast, harden transitions, then scale.
- Starting with labels and scanners before defining state model and transitions
- Assuming scanning can compensate for weak search ergonomics
- Using broad permissions during rollout and never tightening them later
- Treating logs as optional until the first major discrepancy incident
- Scaling location coverage before pilot workflow reliability is proven
- Optimizing for engineering elegance over operator throughput
Most of these mistakes share one root cause: teams optimize for implementation completion instead of operational reliability.
- Operational Reliability in Inventory Software
Operational reliability is the ability of an inventory system to produce consistent, explainable, and recoverable outcomes under normal load and common exception conditions.
If I cannot explain, recover, and audit exceptions quickly, I do not consider the workflow production-ready no matter how clean the codebase looks.
If I had 30 days to stand up a new QR inventory workflow, I would repeat the order that worked here: pick one painful workflow in one physical scope, define the states and transition rules first, build the scan plus search UI, role controls, and event logging around that workflow, and expand only after operators trust the pilot. Looking back, these are the points I would carry into that build:
1. I did not fix the workflow by adding scanning alone. I fixed it by redesigning how state changes happen.
2. The three-layer architecture (Next.js UX, GraphQL/Node logic, PostgreSQL truth) kept system behavior stable.
3. I treated role-based auth and observability as foundations, not post-launch cleanup.
4. Search quality mattered as much as scan speed in daily floor operations.
5. Transition integrity mattered more than raw event volume.
6. A narrow pilot with fast feedback beat a broad first-wave rollout.
What is the difference between barcode and QR code inventory management?
Both are machine-readable identifiers, but QR codes can encode more data and are easier to adapt to multi-context workflows. In my experience, the bigger difference is rarely the code format. It is the data model, transition rules, and operator UX around scanning.
Can small teams build QR inventory systems without buying enterprise WMS software?
Yes. I would absolutely encourage a small team to start with a scoped internal system if the workflow is clearly defined, state transitions are enforced, and logging and auditability exist from day one. The risk is not starting small. The risk is building without transition discipline.
Why is search still important if scanning exists?
Real operations include damaged labels, partial identifiers, and exception handling. I think of search as the continuity layer when scanning is unavailable or insufficient. Without strong search UX, operators create manual workarounds that weaken system integrity.
How does GraphQL help in inventory workflows?
For me, GraphQL helped centralize transition mutations and keep API contracts explicit for frontend consumers. It is useful when multiple clients need consistent state behavior and predictable error handling. The benefit comes from disciplined resolver design, not from GraphQL alone.
When should teams consider migrating from Apollo-centric patterns to TanStack Query?
I would consider that migration when frontend query behavior, cache control, or coupling constraints start slowing delivery and increasing complexity. The migration should be incremental and workflow-aware, with careful preservation of correctness in high-risk operational screens.
What metric should teams track first after launch?
The first metric I would track is transition reliability and correction frequency. Throughput matters, but if transitions are unreliable or correction rates stay high, scaling will amplify inconsistency instead of productivity.