

The Tech Trek
Elevano
The Tech Trek is a podcast about how modern technology companies are actually built, with a focus on AI, data, platform, and engineering leadership. Host Amir Bormand talks with founders, CTOs, and technical operators about building products, scaling teams, and making the decisions that shape fast-growing companies.
Episodes

Feb 11, 2026 • 24min
The Hidden Fintech Behind the Compute Boom
Gabe Ravacci, CTO and co-founder at Internet Backyard, breaks down what the “compute economy” really looks like when you zoom in on data centers, billing, invoicing, and the financial plumbing nobody wants to touch. He shares how a rejected YC application, a finance stint, and a handful of hard lessons pushed him from hardware curiosity to building fintech infrastructure for compute.

If you care about where compute is headed, or you are early in your career and trying to find your path without overplanning it, this one will land.

Key Takeaways
• Startups often happen “by accident” when your competence meets the right problem at the right time
• Compute accessibility is not only a chip problem, it is also a finance and operations problem
• Rejection can be data, not a verdict, treat it as feedback to sharpen the craft
• A real online presence is less about networking and more about being genuinely useful in public
• Time blocking and single-task focus beat grinding when you are juggling school, work, and a startup

Timestamped Highlights
00:28 What Internet Backyard is building, fintech infrastructure for data center financial operations
01:37 The first startup attempt, cheaper compute via FPGA-based prototyping, and why investors passed
04:48 The pivot, from hardware tools to a finance-informed view of compute and transparency gaps
06:55 How Gabe reframed YC rejection, process over outcome, “a tree of failures” that builds skill
08:29 Building a digital brand on X, what he posted, how he learned in public, and why it worked
13:36 The real balancing act, dropping classes, finishing the degree well, and strict time blocking
20:00 Books that shaped his thinking, Siddhartha, The Art of Learning, Finite and Infinite Games

A line worth keeping
“The process is really more important than any outcome.”

Pro Tips for builders
• Treat learning like a skill, ask better questions before you chase better answers
• Make focus a system, set blocks, mute distractions, and do one thing at a time
• Share what you are learning in public, not to perform, but to be useful and find signal

Call to Action
If this episode sparked an idea, follow or subscribe so you do not miss the next one. Also check out Amir’s newsletter for more conversations at the intersection of people, impact, and technology.

Feb 10, 2026 • 29min
Data Fabric Meets AI, The Trust Layer Most Teams Skip
Data leaders are being asked to ship real AI outcomes while the foundations are still messy. In this conversation, Dave Shuman, Chief Data Officer at Precisely, breaks down what actually determines whether AI adoption sticks, from hiring “comb-shaped” talent to building trusted data products that make AI outputs believable and usable.

If you are building in data, AI, or analytics, this episode is a practical map for what needs to be true before AI can move from demos to dependable, repeatable impact.

Key Takeaways
• Comb-shaped talent beats narrow specialization, AI work rewards people who can span multiple skills and collaborate well
• Adoption is a trust problem, and trust starts with data integrity, lineage, context, and a semantic layer that business users can understand
• Open source drives the innovation, commercialization makes it safe and usable at enterprise scale, especially around security and support
• Data must be fit for purpose, start every AI project by asking what data it needs, who curates it, and what the known warts are
• Humans are still the last mile, small workflow choices can make adoption jump, even when the model is already accurate

Timestamped Highlights
00:56 The shift from T-shaped to comb-shaped talent, what modern AI teams actually need to look like
05:36 Hiring for team fit over “world class” niche skills, and when to bring in trusted partners for depth
07:37 How open source sparks the ideas, and why enterprises still need hardened, supported versions to scale
11:31 Where AI adoption is today, why summarization is only the beginning, and what unlocks “AI 2.0”
13:39 The trust stack for AI, clean integrated data, lineage, context, catalog, semantic layer, then agents
19:26 A real adoption lesson from machine learning, and why the human experience decides if the system wins

A line worth stealing
“You do not just take generative AI and throw it at your chaos of data and expect it to make magic out of it.”

Pro Tips for data and AI leaders
• Hire and build teams like Tetris, fill skill voids across the group instead of chasing one perfect profile
• Use partners for the sharp edges, but require knowledge transfer so your team levels up every engagement
• Make adoption easier by designing for human behavior, sometimes the smallest workflow tweak beats more accuracy
• Build governed data products in a catalog, then validate AI outputs side by side with dashboards to earn trust fast

Call to Action
If this helped you think more clearly about AI adoption, talent, and data foundations, follow the show and turn on notifications so you do not miss the next episode. Also, share it with one data or engineering leader who is trying to get AI out of pilots and into real workflows.

Feb 9, 2026 • 26min
Cloud Costs vs AI Workloads, The Storage Decisions That Decide Scale
Cloud bills are climbing, AI pipelines are exploding, and storage is quietly becoming the bottleneck nobody wants to own. Ugur Tigli, CTO at MinIO, breaks down what actually changes when AI workloads hit your infrastructure, and how teams can keep performance high without letting costs spiral.

In this conversation, we get practical about object storage, S3 as the modern standard, what open source really means for security and speed, and why “cloud” is more of an operating model than a place.

Key takeaways
• AI multiplies data, not just compute, training and inference create more checkpoints, more versions, more storage pressure
• Object storage and S3 are simplifying the persistence layer, even as the layers above it get more complex
• Open source can improve security feedback loops because the community surfaces regressions fast, the real risk is running unsupported, outdated versions
• Public cloud costs are often less about storage and more about variable charges like egress, many teams move data on prem to regain predictability
• The bar for infrastructure teams is rising, Kubernetes, modern storage, and AI workflow literacy are becoming table stakes

Timestamped highlights
00:00 Why cloud and AI workloads force a fresh look at storage, operating models, and cost control
00:00 What MinIO is, and why high performance object storage sits at the center of modern data platforms
01:23 Why MinIO chose open source, and how they balance freedom with commercial reality
04:08 Open source and security, why faster feedback beats the closed source perception, plus the real risk factor
09:44 Cloud cost realities, egress, replication, and why “fixed costs” drive many teams back inside their own walls
15:04 The persistence layer is getting simpler, S3 becomes the standard, while the upper stack gets messier
18:00 Skills gap, why teams need DevOps plus AIOps thinking to run modern storage at scale
20:22 What happens to AI costs next, competition, software ecosystem maturity, and why data growth still wins

A line worth keeping
“Cloud is not a destination for us, it’s more of an operating model.”

Pro tips for builders and tech leaders
• If your AI initiative is still a pilot, track egress and data movement early, that is where “surprise” costs tend to show up
• Standardize around containerized deployment where possible, it reduces the gap between public and private environments, but plan for integration friction like identity and key management
• Treat storage as a performance system, not a procurement line item, the right persistence layer can unblock training, inference, and downstream pipelines

What's next
If you’re building with AI, running data platforms, or trying to get your cloud costs under control, follow the show and subscribe so you do not miss upcoming episodes. Share this one with a teammate who owns infrastructure, data, or platform engineering.

Feb 6, 2026 • 51min
AI Is Changing Art Faster Than You Think.
This is an early conversation I am bringing back because it feels even more relevant now, the intersection of AI and art is turning into a real cultural shift.

I sit down with Marnie Benney, independent curator at the intersection of contemporary art and technology, and co-founder of AIartists.org, a major community for artists working with AI. We talk about what AI art actually is beyond the headlines, where authorship gets messy, and why artists might be the best people to pressure test the societal impact of machine learning.

Key takeaways
• AI in art is not a single thing, it is a spectrum of choices, dataset, process, medium, and intent
• The most interesting work treats AI as a collaborator, not a shortcut, a back and forth that reshapes the artist’s decisions
• Authorship is still unsettled, some artists see AI as a tool like an instrument, others treat it as a creative partner
• The fear that AI replaces creativity misses the point, artists can use the machine’s unexpected output to expand human expression
• Access matters, compute, tooling, and collaboration between artists and technologists will shape who gets to experiment at the frontier

Timestamped highlights
00:04:00 Curating science, climate, and public engagement, the path into tech driven exhibitions
00:07:41 What AI art can mean in practice, datasets, iteration loops, and choosing an output medium
00:10:48 Who gets credit, tool versus collaborator, and the art world’s evolving rules
00:13:51 Fear, job displacement, and a healthier frame, human plus machine as a creative partnership
00:22:57 The new skill stack, what artists need to learn, and where collaboration beats handoffs
00:29:28 The pushback from traditional art circles, philosophy and intention versus novelty
00:37:17 Inside the New York exhibition, collaboration between human and machine, visuals, sculpture, and sound
00:48:16 The magic of the unknown, why the output can surprise even the artist

A line that stuck
“Artists are largely showing a mirror to society of what this technology is, for the positive and the negative.”

Pro tips for builders and operators
• Treat creative communities as an early signal, artists surface second order effects before markets do
• If you are building AI products, study authorship debates, they map directly to credit, accountability, and trust
• Collaboration beats delegation, when domain experts and technologists iterate together, the work gets sharper fast

Call to action
If this episode hits for you, follow the show so you do not miss the next drop. And if you are building in data, AI, or modern tech teams, follow me on LinkedIn for more conversations that connect technology to real world impact.

Feb 5, 2026 • 24min
AI in the Enterprise, Why Pilots Fail and What Actually Scales
Most teams are approaching AI from the wrong direction, either chasing the tech with no clear problem or spinning up endless pilots that never earn their keep. In this episode, Amir Bormand sits down with Steve Wunker, Managing Director at New Markets Advisors and co-author of AI and the Octopus Organization, to break down what actually works in enterprise AI.

You will hear why the real challenge is organizational, not technical, how IT and business have to co-own the outcome, and what it takes to keep AI systems valuable over time. If you are trying to move beyond experimentation and into real impact, this conversation gives you a practical blueprint.

Key takeaways
• Pick a handful of high impact problems, not hundreds of small pilots, focus is what creates measurable ROI
• Treat AI as a workflow and change program, not a tool you bolt onto an existing process
• IT has to evolve from order taker to strategic partner, including stronger AI ops and ongoing evaluation
• Start with the destination, redefine the value proposition first, then redesign the operating model around it
• Ongoing ownership matters, AI is not a one and done delivery, it needs stewardship to stay useful

Timestamped highlights
00:39 What New Markets Advisors actually does, innovation with a capital I, plus AI in value props and operations
01:54 The two common mistakes, pushing AI everywhere and launching hundreds of disconnected pilots
04:19 Why IT cannot just take orders anymore, plus why AI ops is not the same as DevOps
07:56 Why the octopus is the perfect model for an AI age organization, distributed intelligence and rapid coordination
11:08 The HelloFresh example, redesign the destination first, then let everything cascade from that
17:37 The line you will remember, AI is an ongoing commitment, not a project you ship and forget
20:50 A cautionary pattern from the dotcom era, avoid swinging from timid pilots to extreme headcount mandates

A line worth keeping
“You cannot date your AI system, you need to get married to it.”

Pro tips for leaders building real AI outcomes
• Define success metrics before you build, then measure pre and post, otherwise you are guessing
• Redesign the process, do not just swap one step for a model, aim for fewer steps, not faster steps
• Assign long term ownership, budget for maintenance, evaluation, and model oversight from day one

Call to action
If this episode helped you rethink how to drive AI results, follow the show and subscribe so you do not miss the next conversation. Share it with a leader who is stuck in pilot mode and wants a path to production.

Feb 4, 2026 • 25min
AI Is Rewriting Manufacturing Quality, Here’s What Changes
Manufacturing is getting faster, messier, and more expensive when quality slips. Daniel First, Founder and CEO at Axion, joins Amir to break down how AI is changing the way manufacturers detect issues in the field, trace root causes across messy data, and shorten the time from “customers are hurting” to “we fixed it.”

Episode Summary
Daniel First, Founder and CEO at Axion, explains why modern manufacturing is living in the bottom of the quality curve longer than ever, and how AI can help companies spot issues early, investigate faster, and actually close the loop before warranty costs and customer trust spiral. If you work anywhere near hardware, infrastructure, or complex systems, this is a sharp look at what “AI first” means when real products fail in the real world.

You will hear why quality is becoming a competitive weapon, how unstructured signals hide the truth, and what changes when AI agents start doing the detection, investigation, and coordination work humans have been drowning in.

What you will take away
• Quality is not just a defect problem, it is a speed and trust problem, especially when product cycles keep compressing.
• AI creates leverage by pulling together signals across the full product life cycle, not by sprinkling a chatbot on one system.
• The fastest teams win by finding issues earlier, scoping impact correctly, and fixing what matters before customers notice the pattern.
• A clear ROI often lives in warranty cost avoidance and downtime reduction, not just “efficiency” metrics.
• “AI first” gets real when strategy becomes operational, and contradictions in how teams prioritize issues get exposed.

Timestamped highlights
00:00 Why manufacturing is a different kind of problem, and why speed is harder than it looks
01:10 What Axion does, and how it detects, investigates, and resolves customer impacting issues
05:10 The new reality, faster product cycles mean living in the bottom of the quality curve
10:05 Why it can take hundreds of days to truly solve an issue, and where the time disappears
16:20 How to evaluate AI vendors in manufacturing, specialization, integrations, and cross system workflows
22:40 The shift coming to quality teams, from reading data all day to making higher level decisions
28:10 What “AI first” looks like in practice, and how AI exposes misalignment across teams

A line worth repeating
“Humans are not that great at investigating tens of millions of unstructured data points, but AI can detect, scope, root cause, and confirm the fix.”

Pro tips you can apply
• When evaluating an AI solution, ask three questions up front: how specialized the AI must be, whether you need a full workflow solution or just an API, and whether the use case spans multiple systems and teams.
• Treat early detection as a first class objective, the longer the accumulation phase, the more cost and customer damage you silently absorb.
• Align issue prioritization to strategy, not just frequency, cost, or the loudest internal voice.

Follow
If this episode helped you think differently about quality, speed, and AI in the real world, follow the show on Apple Podcasts or Spotify so you do not miss the next one. If you want more conversations like this, subscribe to the newsletter and connect with Amir on LinkedIn.

Feb 3, 2026 • 26min
Synthetic Data Explained, When It Helps AI and When It Hurts
Synthetic data is moving from a niche concept to a practical tool for shipping AI in the real world. In this episode, Amit Shivpuja, Director of Data Product and AI Enablement at Walmart, breaks down where synthetic data actually helps, where it can quietly hurt you, and how to think about it like a data leader, not a demo builder.

We dig into what blocks AI from reaching production, how regulated industries end up with an unfair advantage, and the simple test that tells you whether synthetic data belongs anywhere near a decision making system.

Key Takeaways
• AI success still lives or dies on data quality, trust, and traceability, not model hype.
• Synthetic data is best for exploration, stress testing, and prototyping, but it should not be the backbone of high stakes decisions.
• If you cannot explain how an output was produced, synthetic only pipelines become a risk multiplier fast.
• Regulated industries often move faster with AI because their data standards, definitions, and documentation are already disciplined.
• The smartest teams plan data early in the product requirements phase, including whether they need synthetic data, third party data, or better metadata.

Timestamped Highlights
00:01 The real blockers to getting AI into production, data, culture, and unrealistic scale assumptions
03:40 The satellite launch pad analogy, why data is the enabling infrastructure for every serious AI effort
07:52 Regulated vs unregulated industries, why structure and standards can become a hidden advantage
10:47 A clean definition of synthetic data, what it is, and what it is not
16:56 The “explainability” yardstick, when synthetic data is reasonable and when it is a red flag
19:57 When to think about data in stakeholder conversations, why data literacy matters before the build starts

A line worth sharing
“AI is like launching satellites. Data is the launch pad.”

Pro Tips for tech leaders shipping AI
• Start data discovery at the same time you write product requirements, not after the prototype works
• Use synthetic data early, then set milestones to shift weight toward real world data as you approach production
• Sanity check the solution, sometimes a report, an email, or a deterministic workflow beats an AI system

Call to Action
If this episode helped you think more clearly about data strategy and AI delivery, follow the show on Apple Podcasts and Spotify, and share it with a builder or leader who is trying to get AI out of pilot mode. You can also follow me on LinkedIn for more episodes and clips.

Feb 2, 2026 • 26min
The Real Learning Curve of Engineering Management
Tom Pethtel, VP of Engineering at Flock Safety, breaks down the real learning curve of moving from builder to manager, and how to keep your technical edge while scaling your impact through people.

You will hear how Tom’s path from rural Ohio to leading high stakes engineering teams shaped his approach to leadership, hiring, and staying close to the customer.

Key Takeaways
• Promotions usually come from doing your current job well, plus stepping into the work above you that is not getting done
• Great leaders do not fully detach from the craft, they stay close enough to the work to make good calls and keep context
• Put yourself where the real learning is happening, watch customers, go to the failure point, get proximity to the source of truth
• Hiring is not only pedigree, it is fundamentals plus grit, the willingness to solve what looks hard because it is “just software”
• As you scale to teams of teams, your job becomes time allocation, jump on the biggest business fire while still making rounds everywhere

Timestamped Highlights
00:32 What Flock Safety actually builds, from AI enabled devices to Drone as a First Responder
02:04 Dropping out of Georgia Tech, switching disciplines, and choosing software for speed and impact
03:30 A life threatening detour, learning you owe 18,000 dollars, and teaching yourself to build an iPhone app to survive
06:33 Why Tom values grit and non traditional backgrounds in hiring, and the “it is just software” mindset
08:46 Proximity and learning, go to the problem, plus the lessons he borrows from the Toyota Production System
09:55 A practical story of chasing expertise, from Kodak to Nokia, and hiring the right leader by going where the knowledge lives
14:27 The truth about becoming a manager, you rarely feel ready, you take the seat and learn fast
19:18 Leading teams of teams, you cannot be everywhere, so you go where the biggest fire is, without neglecting the rest
22:08 The promotion playbook, stop only doing your job, start solving the next job

A line worth stealing
“Do your job really well, plus go do the work above you that is not getting done, that’s how you rise.”

Pro Tips for engineers stepping into leadership
• Stay technical enough to keep your judgment sharp, even if it is only five or ten percent of your week
• If you want to grow, chase proximity, sit with the customer, sit with the failure, sit with the best people in the space
• Measure your impact as leverage, if a team of ten is producing ten times, your role is not less valuable, it is multiplied
• When you lead multiple disciplines, rotate your attention intentionally, do not camp on one fire for a full year

Call to Action
If this episode helped you rethink leadership, share it with one builder who is about to step into management. Subscribe on Apple Podcasts, Spotify, and YouTube, and follow Amir on LinkedIn for more conversations with operators building real teams in the real world.

Jan 30, 2026 • 31min
Retention for Engineering Teams, What Keeps Top People Around
Phil Freo, VP of Product and Engineering at Close, has lived the rare arc from founding engineer to executive leader. In this conversation, he breaks down why he stayed nearly 12 years, and what it takes to build a team that people actually want to grow with.

We get into retention that is earned, not hoped for, the culture choices that compound over time, and the practical systems that make remote work and knowledge sharing hold up at scale.

Key takeaways
• Staying for a decade is not about loyalty, it is about the job evolving and your scope evolving with it
• Strong retention is often a downstream effect of clear values, internal growth opportunities, and leaders who trust people to level up
• Remote can work long term when you design for it, hire for communication, and invest in real relationship building
• Documentation is not optional on remote teams, and short lived chat history can force healthier knowledge capture
• Bootstrapped, customer funded growth can create stability and control that makes teams feel safer during chaotic markets

Timestamped highlights
00:02:13 The founders, the pivots, and why Phil joined before Close was even Close
00:06:17 Why he stayed so long, the role keeps changing, and the work gets more interesting as the team grows
00:10:54 “Build a house you want to live in”, how valuing tenure shapes culture, code quality, and decision making
00:14:14 Remote as a retention advantage, moving life forward without leaving the company behind
00:20:23 Over documenting on purpose, plus the Slack retention window that forces real knowledge capture
00:22:48 Bootstrapped versus VC backed, why steady growth can be a competitive advantage when markets tighten
00:28:18 The career accelerant most people underuse, initiative, and championing ideas before you are asked

One line worth stealing
“Inertia is really powerful. One person championing an idea can really make a difference.”

Practical ideas you can apply
• If you want growth where you are, do not wait for permission, propose the problem, the plan, and the first step
• If you lead a team, create parallel growth paths, management is not the only promotion ladder
• If you are remote, hire for writing, decision clarity, and follow through, not just technical depth
• If Slack is your company memory, it is not memory, move durable knowledge into docs, issues, and specs

Stay connected
If this episode sparked an idea, follow or subscribe so you do not miss the next one. And if you want more conversations on building durable product and engineering teams, check out my LinkedIn and newsletter.

Jan 29, 2026 • 23min
Data Orchestration and Open Source Strategy
Pete Hunt, CEO of Dagster Labs, joins Amir Bormand to break down why modern data teams are moving past task based orchestration, and what it really takes to run reliable pipelines at scale. If you have ever wrestled with Apache Airflow pain, multi team deployments, or unclear data lineage, this conversation will give you a clearer mental model and a practical way to think about the next generation of data infrastructure.

Key Takeaways
• Data orchestration is not just scheduling, it is the control layer that keeps data assets reliable, observable, and usable
• Asset based thinking makes debugging easier because the system maps code directly to the data artifacts your business depends on
• Multi team data platforms need isolation by default, without it, shared dependencies and shared failures become a tax on every team
• Good software engineering practices reduce data chaos, and the tools can get simpler over time as best practices harden
• Open source makes sense for core infrastructure, with commercial layers reserved for features larger teams actually need

Timestamped Highlights
00:00:50 What Dagster is, and why orchestration matters for every data driven team
00:04:18 The origin story, why critical institutions still cannot answer basic questions about their data
00:07:02 The architectural shift, moving from task based workflows to asset based pipelines
00:08:25 The multi tenancy problem, why shared environments break down across teams, and what to do instead
00:11:21 The path out of complexity, why software engineering best practices are the unlock for data teams
00:17:53 Open source as a strategy, what belongs in the open core, and what belongs in the paid layer

A Line Worth Repeating
Data orchestration is infrastructure, and most teams want their core infrastructure to be open source.

Pro Tips for Data and Platform Teams
• If debugging feels impossible, you may be modeling your system around tasks instead of the data assets the business actually consumes
• If multiple teams share one codebase, isolate dependencies and runtime early, shared Python environments become a silent reliability risk
• Reduce cognitive load by tightening concepts, fewer new nouns usually means a smoother developer experience

Call to Action
If this episode helped you rethink data orchestration, follow the show on Apple Podcasts and Spotify, and subscribe so you do not miss future conversations on data, AI, and the infrastructure choices that shape real outcomes.


