Senior Data Partner · Tabby
Working remotely in fintech, partnering with teams on data systems, analysis, and decision support.
About
I build, explain, and teach analytical data systems — the mechanics underneath reporting, warehouses, pipelines, and the decisions that depend on them.
I am a data professional working across analytics, data engineering, architecture, and stakeholder-facing data strategy. My work has mostly lived in the space where business questions meet technical systems: settlement processes, warehouse architecture, semantic layers, BI products, data quality, and the pipelines that make reporting trustworthy.
Pipeline Patterns grew out of that work. It is where I explain how analytical data systems actually work — not just which tool to use, but what is happening underneath: storage, execution, modeling, orchestration, interoperability, and correctness.
The audience I write for is the working analyst, analytics engineer, data engineer, or technical stakeholder whose job depends on the numbers being right. The goal is to make the underlying mechanisms legible enough that people can reason about tradeoffs, not just copy patterns.
Automated the VISA card settlement process, removing about 3 hours of manual work per day — roughly 750 hours per year. Coordinated stakeholders, architecture, technical specifications, and engineering execution.
Led BI and data engineering work, streamlined 30+ manual data tasks, reduced weekly manual effort by more than 30 hours, built a unified data layer, and created monitoring for data quality and customer experience.
Completed 30+ projects with a 100% success rate across web scraping, data modeling, analysis, automation, and data engineering.
Built Python network-analysis scripts and Power BI reporting for Leadership Scanner, an organizational network analysis product for talent discovery and management.
Developed a reporting system from scratch for Payments League and helped migrate ETL scripts from a legacy environment to a distributed setup.
The same problems kept appearing in different forms: unclear data definitions, brittle pipelines, hidden manual work, dashboards without trustworthy layers underneath them, and optimization advice that ignored how the system actually executes. Teaching is how I turn those lessons into something reusable.
The essays live on Substack — that is the main channel and the one to subscribe to if you want the work delivered. The community is on Skool, for readers who want to discuss the pieces and push back on the arguments. Short-form thinking and conversation live on LinkedIn.
To invite me to speak, or to suggest a topic you would like me to address, write to me at hello@pipelinepatterns.co.