Future-Focused with Christopher Lind

Christopher Lind
Nov 3, 2025 • 35min

Navigating the AI Bubble: Grounding Yourself Before the Inevitable Pop

Everywhere you look, headlines are talking about AI hype and the AI boom. But with growth this unsustainable, more and more people are calling it a bubble, and a bubble that's feeding on itself.

This week on Future-Focused, I'm breaking down what's really going on inside the AI economy and why every leader needs to tread carefully before an inevitable pop.

When you scratch beneath the surface, you quickly discover that it's a lot of smoke and mirrors. Money is moving faster than real value is being created, and many companies are already paying the price. This week, I'll unpack what's fueling this illusion of growth, where the real risks are hiding, and how to keep your business from becoming collateral damage.

In this episode, I'm touching on three key insights every leader needs to understand:

- AI doesn't create; it converts. Why every "gain" has an equal and opposite trade-off that leaders must account for.
- Focus on capabilities, not platforms. Because knowing what you need matters far more than who you buy it from.
- Diversity is durability. Why consolidation feels safe until the ground shifts, and how to build systems that bend instead of break.

I'll also share practical steps to help you audit your AI strategy, protect your core operations, and design for resilience in a market built on volatility.

If you care about leading with clarity, caution, and long-term focus in the middle of the AI hype cycle, this one's worth the listen.

Oh, and if this conversation helped you see things a little clearer, make sure to like, share, and subscribe. You can also support my work by buying me a coffee.

And if your organization is struggling to separate signal from noise or align its AI strategy with real business outcomes, that's exactly what I help executives do. Reach out if you'd like to talk.

Chapters:
00:00 – The AI Boom or the AI Mirage?
03:18 – Context: Circular Capital, Real Risk, and the Illusion of Growth
13:06 – Insight 1: AI Doesn't Create—It Converts
19:30 – Insight 2: Focus on Capabilities, Not Platforms
25:04 – Insight 3: Diversity Is Durability
30:30 – Closing Reflection: Anything Can Happen

#AIBubble #AILeadership #DigitalStrategy #FutureOfWork #BusinessTransformation #FutureFocused
Oct 27, 2025 • 34min

Drawing AI Red Lines: Why Leaders Must Decide What’s Off-Limits

AI isn't just evolving faster than we can regulate. It's crossing lines many assumed were universally off-limits.

This week on Future-Focused, I'm unpacking three very different stories that highlight an uncomfortable truth: we seem to have completely abandoned the idea that there are lines technology should never cross.

From OpenAI's move to allow ChatGPT to generate erotic content, to the U.S. military's growing use of AI in leadership and tactical decisions, to AI-generated videos resurrecting deceased public figures like MLK Jr. and Fred Rogers, each example exposes a deeper leadership crisis.

Because behind every one of these headlines is the same question: who's drawing the red lines, and are there any?

In this episode, I explore three key insights every leader needs to understand:

- Not having clear boundaries doesn't make you adaptable; it makes you unanchored.
- Why red lines are rarely as simple as "never" and how to navigate the complexity without erasing conviction.
- Why waiting for AI companies to self-regulate is a guaranteed path to regret.

I'll also share three practical steps to help you and your organization start defining what's off-limits, who gets a say, and how to keep conviction from fading under convenience.

If you care about leading with clarity, conviction, and human responsibility in an AI-driven world, this one's worth the listen.

Oh, and if this conversation challenged your thinking or gave you something valuable, like, share, and subscribe. You can also support my work by buying me a coffee. And if your organization is wrestling with how to build or enforce ethical boundaries in AI strategy or implementation, that's exactly what I help executives do. Reach out if you'd like to talk more.

Chapters:
00:00 – "Should AI be allowed…?"
02:51 – Trending Headline Context
10:25 – Insight 1: Without red lines, drift defines you
13:23 – Insight 2: It's never as simple as "never"
17:31 – Insight 3: Big AI won't draw your lines
21:25 – Action 1: Define who belongs in the room
25:21 – Action 2: Audit the lines you already have
27:31 – Action 3: Redefine where you stand (principle > method)
32:30 – Closing: The Time for AI Red Lines is Now

#AILeadership #AIEthics #ResponsibleAI #FutureOfWork #BusinessStrategy #FutureFocused
Oct 20, 2025 • 32min

AI Is Performing for the Test: Anthropic’s Safety Card Highlights the Limits of Evaluation Systems

AI isn't just answering our questions or carrying out instructions. It's learning how to play to our expectations.

This week on Future-Focused, I'm unpacking Anthropic's newly released Claude Sonnet 4.5 System Card, specifically the implications of the section discussing how the model realized it was being tested and changed its behavior because of it.

That one detail may seem small, but it raises a much bigger question about how we evaluate and trust the systems we're building. Because if AI starts "performing for the test," what exactly are we measuring: truth or compliance? And can we even trust the results we get?

In this episode, I break down three key insights you need to know from Anthropic's safety data and three practical actions every leader should take to ensure their organizations don't mistake performance for progress.

My goal is to illuminate why benchmarks can't always be trusted, how "saying no" isn't the same as being safe, and why every company needs to define its own version of "responsible" before borrowing someone else's.

If you care about building trustworthy systems, thoughtful oversight, and real human accountability in the age of AI, this one's worth the listen.

Oh, and if this conversation challenged your thinking or gave you something valuable, like, share, and subscribe. You can also support my work by buying me a coffee. And if your organization is trying to navigate responsible AI strategy or implementation, that's exactly what I help executives do. Reach out if you'd like to talk more.

Chapters:
00:00 – When AI Realizes It's Being Tested
02:56 – What is an "AI System Card?"
03:40 – Insight 1: Benchmarks Don't Equal Reality
08:31 – Insight 2: Refusal Isn't the Solution
12:12 – Insight 3: Safety Is Contextual (ASL-3 Explained)
16:35 – Action 1: Define Safety for Yourself
20:49 – Action 2: Put the Right People in the Right Loops
23:50 – Action 3: Keep Monitoring and Adapting
28:46 – Closing Thoughts: It Doesn't Repeat, but It Rhymes

#AISafety #Leadership #FutureOfWork #Anthropic #BusinessStrategy #AIEthics
Oct 13, 2025 • 32min

Accenture’s 11,000 ‘Unreskillable’ Workers: Leadership Integrity in the Age of AI and Scapegoats

AI should be used to augment human potential. Unfortunately, some companies are already using it as a convenient scapegoat to cut people.

This week on Future-Focused, I dig into the recent Accenture story that grabbed headlines for all the wrong reasons: 11,000 people exited because they "couldn't be reskilled for AI." However, that's not the real story. First of all, this isn't something that's going to happen; it already did. And now it's being reframed as a future-focused strategy to make Wall Street feel comfortable.

This episode breaks down two uncomfortable truths most people are missing and lays out three leadership disciplines every executive should learn before they repeat the same mistake.

I'll explore how this whole situation isn't really about an AI reskilling failure at all, why AI didn't pick the losers (margins did), and what it takes to rebuild trust and long-term talent gravity in a culture obsessed with short-term decisions.

If you care about leading with integrity in the age of AI, this one will hit close to home.

Oh, and if this conversation challenged your thinking or gave you something valuable, like, share, and subscribe. You can also support my work by buying me a coffee. And if your organization is wrestling with what responsible AI transformation actually looks like, this is exactly what I help executives navigate through my consulting work. Reach out if you'd like to talk more.

Chapters:
00:00 – The "Unreskillable" Headline That Shocked Everyone
00:58 – What Really Happened: The Retroactive Narrative
04:20 – Truth 1: Not Reskilling Failure—Utilization Math
10:47 – Truth 2: AI Didn't Pick the Losers, Margins Did
17:35 – Leadership Discipline 1: Redeployment Horizon
21:46 – Leadership Discipline 2: Compounding Trust
26:12 – Leadership Discipline 3: Talent Gravity
31:04 – Closing Thoughts: Four Quarters vs. Four Years

#AIEthics #Leadership #FutureOfWork #BusinessStrategy #AccentureLayoffs
Oct 6, 2025 • 32min

The Rise of AI Workslop: What It Means and How to Respond

AI was supposed to make us more productive. Instead, we're quickly discovering it's creating "workslop": junk output that looks like progress but actually drags organizations down.

In this episode of Future-Focused, I dig into the rise of AI workslop, a term Harvard Business Review recently coined, and why it's more than a workplace annoyance. Workslop is lowering the bar for performance, amplifying risk across teams, and creating a hidden financial tax on organizations.

But this isn't just about spotting the problem. I'll break down what workslop really means for leaders, why "good enough" is anything but, and most importantly, what you can do right now to push back. From defining clear outcomes to auditing workloads and building accountability, I'll lay out practical steps to stop AI junk from taking over your culture.

If you're noticing your team is busier than ever without improving performance, or wondering why decisions keep getting made on shaky foundations, this episode will hit home.

If this conversation gave you something valuable, you can support the work I'm doing by buying me a coffee. And if your organization is wrestling with these challenges, this is exactly what I help leaders solve through my consulting and the AI Effectiveness Review. Reach out if you'd like to talk more.

Chapters:
00:00 – Introduction to Workslop
00:55 – Survey Insights and Statistics
03:06 – Insight 1: Impact on Organizational Performance
06:19 – Insight 2: Amplification of Risk
10:33 – Insight 3: Financial Costs of Workslop
15:39 – Application 1: Define clear outcomes before you ask
18:45 – Application 2: Audit workloads and rethink productivity
23:15 – Application 3: Build accountability with follow-up questions
29:01 – Conclusion and Call to Action

#AIProductivity #FutureOfWork #Leadership #AIWorkslop #BusinessStrategy
Sep 26, 2025 • 53min

How People Really Use ChatGPT | Lessons from Zuckerberg’s Meta Flop | MIT’s Research on AI Romance

Happy Friday, everyone! I hope you've had a great week and are ready for the weekend. In this Weekly Update, I'm taking a deeper dive into three big stories shaping how we use, lead, and live with AI: what OpenAI's new usage data really says about us (hint: the biggest risk isn't what you think), why Zuckerberg's Meta Connect flopped and what leaders should learn from it, and new MIT research on the explosive rise of AI romance and why it's more dangerous than the headlines suggest.

If this episode sparks a thought, share it with someone who needs clarity. Leave a rating, drop a comment with your take, and follow for future updates that cut through the noise. And if you'd take me out for a coffee to say thanks, you can do that here: https://www.buymeacoffee.com/christopherlind

With that, let's get into it.

⸻

The ChatGPT Usage Report: What We're Missing in the Data

A new OpenAI/NBER study shows how people actually use ChatGPT. Most are asking it to give answers or do tasks, while the critical middle step, real human thinking, is nearly absent. This isn't just trivia; it's a warning. Without that layer, we risk building dependence, scaling bad habits, and mistaking speed for effectiveness. For leaders, the question isn't "are people using AI?" It's "are they using it well?"

⸻

Meta Connect's Live-Demo Flop and What It Reveals

Mark Zuckerberg tried to stage Apple-style magic at Meta Connect, but the AI demos sputtered live on stage. Beyond the cringe, it exposed a bigger issue: Meta's fixation on plastering AI glasses on our faces at all times, despite the market clearly signaling tech fatigue. Leaders can take two lessons: never overestimate product readiness when the stakes are high, and beware of chasing your own vision so hard that you miss what your customers actually want.

⸻

MIT's AI Romance Report: When Companionship Turns Risky

MIT researchers found nearly 1 in 5 people in their study had engaged with AI in romantic ways, often unintentionally. While the short-term "benefits" seem real, the risks are staggering: fractured families, grief from model updates, and deeper dependency on machines over people. The stigmatization only makes it worse. The better answer isn't shame; it's building stronger human communities so people don't need AI to fill the void.

⸻

Show Notes:
In this Weekly Update, Christopher Lind breaks down OpenAI's new usage data, highlights the leadership lessons from Meta Connect's failed demos, and explores why MIT's AI romance research is a bigger warning than most realize.

Timestamps:
00:00 – Introduction and Welcome
01:20 – Episode Rundown + CTA
02:35 – ChatGPT Usage Report: What We're Missing in the Data
20:51 – Meta Connect's Live-Demo Flop and What It Reveals
38:07 – MIT's AI Romance Report: When Companionship Turns Risky
51:49 – Final Takeaways

#AItransformation #FutureOfWork #DigitalLeadership #AIadoption #HumanCenteredAI
Sep 19, 2025 • 52min

Altman & Carlson's Viral AI Clip | Anthropic's Newest Economic Index | Job Market Reality Check

Happy Friday! This week I'm running through three topics you can't afford to miss: what Altman's viral exchange reveals about OpenAI's missing anchor, the real lessons inside Anthropic's Economic Index (hint: augmentation > automation), and why today's job market feels stuck and how to move anyway.

Here's the quick rundown. First up, a viral exchange between Sam Altman and Tucker Carlson shows us something bigger than politics. It reveals how OpenAI is being steered without a clear foundation and with little attention on the bigger picture. Then, I dig into Anthropic's new Economic Index report. Buried in all the charts and data is a warning about automation, augmentation, and how adoption is moving faster than most leaders realize. Finally, I take a hard look at the growing pessimism in the job market, why the data looks grim, and what it means for job seekers and leaders alike.

With that, let's get into it.

⸻

Sam Altman's Viral Clip: Leadership Without a Foundation

A short clip of Sam Altman admitting he's not that concerned about big moral risks, and that his "ethical compass" comes mostly from how he grew up, sparked a firestorm. The bigger lesson? OpenAI and many tech leaders are operating without clear guiding principles or a focus on the bigger picture. For business leaders and individuals, it's a warning: you can't count on big tech to do that work for you. Without defined anchors, your strategy turns into reactive whack-a-mole.

⸻

Anthropic's Economic Index: Adoption, Acceleration, and Automation Risk

Heads up: this index is a doozy. However, it isn't just about one CEO's philosophy. How we anchor decisions shows up in the data too, even if it comes with the Anthropic lens. The report shows AI adoption is accelerating and people are advancing in sophistication faster than expected. But faster doesn't mean better. Without defining what "effective use" looks like, organizations risk scaling bad habits. The data also shows diminishing returns on automation. Augmentation is where the real lift is happening, yet most companies are still chasing the wrong thing.

⸻

Job-Seeker Pessimism in a Stalled Market

The Washington Post painted a bleak picture: hiring is sluggish, layoffs continue, and the best news is that things have merely stalled instead of collapsing. That pessimism is real. I see it in conversations every week. I'm hearing from folks who've applied to hundreds of roles, one at 846 applications, still struggling to land. You're not alone. But while we can't control the market, we can control resilience, adaptability, and how we show up for one another. Leaders and job seekers alike need to face reality without losing hope.

⸻

If this episode helped, share it with someone who needs clarity. Leave a rating, drop a comment with your take, and follow for future updates that cut through the noise. And if you'd take me out for a coffee to say thanks, you can do that here: https://www.buymeacoffee.com/christopherlind

Show Notes:
In this Weekly Update, Christopher Lind breaks down Sam Altman's viral interview and what it reveals about leadership, explains the hidden lessons in Anthropic's new Economic Index, and shares a grounded perspective on job-seeker pessimism in today's market.

Timestamps:
00:00 – Introduction and Welcome
01:12 – Episode Rundown
02:55 – Sam Altman's Viral Clip: Leadership Without a Foundation
20:57 – Anthropic's Economic Index: Adoption, Acceleration, and Automation Risk
43:51 – Job-Seeker Pessimism in a Stalled Market
50:44 – Final Takeaways

#AItransformation #FutureOfWork #DigitalLeadership #AIadoption #HumanCenteredAI #AugmentationOverAutomation
Sep 12, 2025 • 52min

AI Drive-Thru Backlash | Declining AI Adoption? | KPMG’s 100-Page AI Prompt | AI Coaching Risks

Happy Friday, everyone! I'm back with another round of updates. This week I've got four stories that capture the messy, fascinating reality of AI right now. From fast-food drive-thrus to research to consulting giants, the headlines tell one story, while what's underneath is where leaders need to focus.

Here's a quick rundown. Taco Bell's AI experiment went viral for all the wrong reasons, but there's more behind it than memes. Then, I look at new adoption data from the US Census Bureau that some are using to argue AI is already slowing down. I'll also break down KPMG's much-mocked 100-page prompt, sharing why I think it's actually a model of how to do this well. Finally, I close with a case study on AI coaching almost going sideways and how shifting the approach created a win instead of a talent drain.

With that, let's get into it.

⸻

Taco Bell's AI Drive-Thru Dilemma

Headlines are eating up the viral "18,000 cups of water" order. However, nobody seems to catch that Taco Bell has already processed over 2 million successful AI-assisted orders. This makes the story more complicated. The conclusion shouldn't be scrapping AI. It's about designing smarter safeguards, balancing human oversight, and avoiding the trap of binary "AI or no AI" thinking.

⸻

Is AI Adoption Really Declining?

New data from Apollo suggests AI adoption is trending downward in larger companies, sparking predictions of a coming slowdown. Unfortunately, the numbers don't tell the whole story. Smaller companies are still on the rise. Add to that, even the "decline" in big companies may not be what it seems. Many are using AI so much it's becoming invisible. I explain why this is more about maturity than decline and what opportunities smaller players now have.

⸻

KPMG's 100-Page Prompt: A Joke or a Blueprint?

Some mocked KPMG for creating a "hundred-page prompt," but what they actually did was map complex workflows into AI-readable processes. This isn't busywork; it's the future of enterprise AI. By going slow to go fast, KPMG is showing what serious implementation looks like, freeing humans to focus on the "chewy problems" that matter most.

⸻

Case Study: Rethinking AI Coaching

A client nearly rolled out AI coaching without realizing it could accelerate attrition by empowering talent to leave. Thankfully, by analyzing engagement data with AI first, we identified cultural risks and reshaped the rollout to support, not undermine, the workforce. The result: stronger coaching outcomes and a healthier organization.

⸻

If this episode was helpful, would you share it with someone? Leave a rating, drop a comment with your thoughts, and follow for future updates that help you lead with clarity in the AI age. And if you'd take me out for a coffee to say thanks, you can do that here: https://www.buymeacoffee.com/christopherlind

Show Notes:
In this Weekly Update, Christopher Lind breaks down Taco Bell's viral AI drive-thru story, explains the truth behind recent AI adoption data, highlights why KPMG's 100-page prompt may be a model for the future, and shares a real-world case study on AI coaching that shows why context is everything.

Timestamps:
00:00 – Introduction and Welcome
01:18 – Episode Rundown
02:45 – Taco Bell's AI Drive-Thru Dilemma
19:51 – Is AI Adoption Really Declining?
31:57 – KPMG's 100-Page Prompt Blueprint
42:22 – Case Study: AI Coaching and Attrition Risk
49:55 – Final Takeaways

#AItransformation #FutureOfWork #DigitalLeadership #AIadoption #HumanCenteredAI
Sep 5, 2025 • 54min

95% AI Project Failures | DeepSeek vs Big Tech | Liquid AI on Mobile | Google Mango Breakthrough

Happy Friday, everyone! Hopefully you got some time to rest and recharge over the Labor Day weekend. After a much-needed break, I'm back with a packed lineup of four big updates I feel are worth your attention. First up, MIT dropped a stat that "95% of AI pilots fail." While the headlines are misleading, the real story raises deeper questions about how companies are approaching AI. Then, I break down some major shifts in the model race, including DeepSeek 3.1 and Liquid AI's completely new architecture. Finally, we'll talk about Google Mango and why it could be one of the most important breakthroughs for connecting the dots across complex systems.

With that, let's get into it.

⸻

What MIT Really Found in Its AI Report

MIT's Media Lab released a report claiming 95% of AI pilots fail, and as you can imagine, the number spread like wildfire. But when you dig deeper, the reality is not just about the tech. Underneath the surface, there are a lot of insights about the humans leading and managing the projects. Interestingly, general-purpose LLM pilots succeed at a much higher clip, while specialized use cases fail when leaders skip the basics. But that's not all. I unpack what the data really says, why companies are at risk even if they pick the right tech, and shine a light on what every individual should take away from it.

⸻

The Model Landscape Is Shifting Fast

The hype around GPT-5 crashed faster than the Hindenburg, especially since, hot on its heels, DeepSeek 3.1 hit the scene with open-source power, local install options, and prices that undercut the competition by an insane order of magnitude. Meanwhile, Liquid AI is rethinking AI architecture entirely, creating models that can run efficiently on mobile devices without draining resources. I break down what these shifts mean for businesses, why cost and accessibility matter, and how leaders should think about the expanding AI ecosystem.

⸻

Google Mango: A Breakthrough in Complexity

Google has a new, but also not so new, programming language, Mango, which promises to unify access across fragmented databases. Think of it as a universal interpreter that can make sense of siloed systems as if they were one. For organizations, this has the potential to change the game by helping both people and AI work more effectively across complexity. However, despite what some headlines say, it's not the end of human work. I share why context still matters, what risks leaders need to watch for, and how to avoid overhyping this development.

⸻

A Positive Use Case: Sales Ops Transformation

To close things out, I made some time to share how a failed AI initiative in sales operations was turned around by focusing on context, people, and process. Instead of falling into the 95%, the team got real efficiency gains once the basics were in place. It's proof that specialized AI can succeed when done right.

⸻

If this episode was helpful, would you share it with someone? Leave a rating, drop a comment with your thoughts, and follow for future updates that help you lead with clarity in the AI age. And if you'd take me out for a coffee to say thanks, you can do that here: https://www.buymeacoffee.com/christopherlind

Show Notes:
In this Weekly Update, Christopher Lind breaks down MIT's claim that 95% of AI pilots fail, highlights the major shifts happening in the model landscape with DeepSeek and Liquid AI, and explains why Google Mango could be one of the most important tools for managing complexity in the enterprise. He also shares a real-world example of a sales ops project that proves specialized AI can succeed with the right approach.

Timestamps:
00:00 – Introduction and Welcome
01:28 – Overview of Today's Topics
03:05 – MIT's Report on AI Pilot Failures
23:39 – The New Model Landscape: DeepSeek and Liquid AI
40:14 – Google Mango and Why It Matters
47:48 – Positive AI Use Case in Sales Ops
53:25 – Final Thoughts

#AItransformation #FutureOfWork #DigitalLeadership #AIrisks #HumanCenteredAI
Aug 29, 2025 • 33min

Public Service Announcement: The Alarming Rise of AI Panic Decisions and Reckless Advice

Happy Friday, everyone! While preparing to head into an extended Labor Day weekend here in the U.S., I wasn't originally planning to record an episode. However, something's been building that I couldn't ignore. So, this week's update is a bit different. Shorter. Less news. But arguably more important.

Think of this one as a public service announcement, because I've been noticing an alarming trend both in the headlines and in private conversations. People are starting to make life-altering decisions because of AI fear. And unfortunately, much of that fear is being fueled by truly awful advice from high-level tech leaders.

So in this abbreviated episode, I break down two growing trends that I believe are putting people at real risk, not because of AI itself, but because of how people are reacting to it.

With that, let's get into it.

⸻

The Dangerous Rise of AI Panic Decisions

Some are dropping out of grad school. Others are cashing out their retirement accounts. And many more are quietly rearranging their lives because they believe the AI end times are near. In this first segment, I start by breaking down the realities of the situation, then focus on some real stories. My goal is to share why these reactions, though in some ways grounded in reality and emotionally understandable, can lead to long-term regret. Fear may be loud, but it's a terrible strategy.

⸻

Terrible Advice from the Top: Why Degrees Still Matter (Sometimes)

A Google GenAI executive recently went on record saying young people shouldn't even bother getting law or medical degrees. And he's not alone. There's a rising wave of tech voices calling for people to abandon traditional career paths altogether. I unpack why this advice is not only reckless but dangerously out of touch with how work (and systems) actually operate today. Like many things, there are glimmers of truth blown way out of proportion. The goal here isn't to defend degrees but to explain why discernment is more important than ever.

⸻

If this episode was helpful, would you share it with someone? Leave a rating, drop a comment with your thoughts, and follow for future updates that go beyond the headlines and help you lead with clarity in the AI age. And if you'd take me out for a coffee to say thanks, you can do that here:
👉 https://www.buymeacoffee.com/christopherlind

Show Notes:
In this special Labor Day edition, Christopher Lind shares a public service announcement on the dangerous decisions people are making in response to AI fear and the equally dangerous advice fueling the panic. This episode covers short-term thinking, long-term consequences, and how to stay grounded in a world of uncertainty.

Timestamps:
00:00 – Introduction & Why This Week is Different
01:19 – PSA: Rise in Concerning Trends
02:29 – AI Panic Decisions Are Spreading
18:57 – Bad Advice from Google GenAI Exec
32:07 – Final Reflections & A Better Way Forward

#AItransformation #HumanCenteredLeadership #DigitalDiscernment #FutureOfWork #LeadershipMatters
