The Clear Edge

From $48K to $72K in 12 Weeks: How Fixing Systems Before They Break Prevented 8 Weeks of Crisis

Rashid hardened his delivery systems at forty-eight thousand before scaling, reaching seventy-two thousand in twelve weeks without breaking anything or losing a single client.

Nour Boustani
Feb 02, 2026

The Executive Summary

Consultants at $48K/month risk losing 8–12 weeks to crisis rebuilding and stalled growth by scaling on fragile systems; fixing known breaking points at $48K unlocks $72K/month in 18 weeks with zero fires.

  • Who this is for: SaaS onboarding consultants and operators around $48K/month with 12 clients and 35-hour weeks who are eyeing $55K+ growth but already seeing early strain in delivery and communication.

  • The Delivery Hardening Problem: Most operators push from $48K to $55K–$58K on linear systems, then eat 8–12 weeks of crisis, 65+ hour weeks, client churn, and slow rebuild instead of a clean jump to $72K.

  • What you’ll learn: How Rashid used the What Breaks at $55K diagnostic, stress-tested to 2x clients, identified 8 breaking points, and hardened onboarding, check-ins, templates, reviews, communication, metrics, escalation, and documentation before anything snapped.

  • What changes if you apply it: You trade a chaotic 22-week crawl to $60K and reputation damage for an 18-week climb to $72K, keep hours near 38/week, retain 100% of clients, and skip the 8-week firefight entirely.

  • Time to implement: Plan Week 1–2 for simulation and diagnostics, Week 3–4 for fixing all 8 breaks (~40 hours), Week 5–6 for testing and redundancy (~10 hours), then 12 weeks of clean scale from $48K to $72K/month.

Written by Nour Boustani for $40K–$70K/month consultants who want a clean jump to $72K without 8–12 weeks of crisis, churn, and 65-hour rescue weeks.


System fragility doesn’t announce itself; by the time you feel it, you’re 8 weeks deep in crisis. Upgrade to premium and skip the rebuild that steals your time and headspace.




Rashid hit $48K/month running his SaaS onboarding consulting practice. 12 clients, 35 hours weekly, systems working well, revenue growing steadily. Everything felt fine.

Then he saw the pattern data on what breaks at $55K.

Delivery systems designed for 10-12 clients start fracturing at 15+ clients. Communication protocols fail. Quality drops. Client satisfaction falls. Fires break out faster than the founder can put them out.

Early warning signs at $48K:

Sign 1: Increased clarifying questions from clients (confusion about the process)

Sign 2: Quality variance starting to appear (inconsistent deliverables)

Sign 3: Founder feeling stretched thin (less time per client)

Sign 4: Processes that worked at $35K feeling fragile at $48K

Sign 5: “Just so you know...” messages from clients (seeking reassurance)

Rashid checked his systems against the diagnostic. Found all five signs. His delivery infrastructure was working but fragile. Built for $35K scale, strained at $48K, would break at $55K+.

Most operators in his position push forward. Hit $55K, systems break, spend 8-12 weeks in crisis mode rebuilding while revenue drops and clients complain.

Rashid chose the opposite path: build foundation before scale. Take 6 weeks now to harden systems preemptively. Prevent the break entirely.

Everyone said it was a waste of time. Revenue wasn’t growing those six weeks. He was “overthinking.” But the math was clear: 6 weeks of hardening prevents 8-12 weeks of rebuilding.

Net gain: 2-6 weeks plus zero client impact.

Here’s exactly how preemptive infrastructure hardening compressed his scale timeline.


The Problem: Systems Working But Fragile

Most consultants scale the wrong way. They build systems that work at current revenue, push growth, hit the breaking point, and then scramble to fix under pressure. It’s reactive: problem appears, frantically rebuild, hope clients don’t leave during chaos.

Rashid’s analysis showed a different pattern.

He spent Week 1 stress-testing current systems. Not in production—in simulation. Asked: “What happens if I double client count in the next 8 weeks? Where does infrastructure break?”

Current State at $48K:

12 clients, $4K average, 35 hours weekly across:

Client onboarding: 6 hours/week (setup, walkthroughs, and coordination for ~4 new clients monthly)

Weekly check-ins: 6 hours/week (30 minutes × 12 clients)

Deliverable creation: 15 hours/week (actual client work)

Quality review: 4 hours/week (checking work before delivery)

Communication management: 4 hours/week (emails, questions, coordination)

Total: 35 hours weekly, manageable but approaching capacity


Simulated State at $72K (2x growth in 12 weeks):

20+ clients, $3,600 average (pricing pressure from volume), 60-65 hours weekly projected:

Client onboarding: 10 hours/week (the same manual process applied to 7-8 new clients monthly)

Weekly check-ins: 10 hours/week (30 minutes × 20 clients)

Deliverable creation: 25 hours/week (more clients, same quality standard)

Quality review: 8 hours/week (more volume to check)

Communication management: 10 hours/week (exponential growth in coordination)

Crisis management: 5-8 hours/week (breaks happening faster than fixes)

Total: 68-71 hours weekly, unsustainable and breaking

The simulation revealed the problem: His systems were linear. Every client added required proportional time investment. At 12 clients = manageable. At 20 clients = impossible without either (a) working 70+ hours weekly or (b) dropping quality.

Neither option was acceptable.
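As a sanity check, the stress test tabulates directly. A minimal sketch, with all figures taken from the article's own breakdown (6.5 is simply the midpoint of the stated 5-8 hour crisis-management range):

```python
# Weekly hours by activity, from Rashid's stress test.
baseline_48k = {          # 12 clients, current state
    "onboarding": 6, "check_ins": 6, "delivery": 15,
    "quality_review": 4, "communication": 4,
}
projected_72k = {         # 20+ clients, same linear processes
    "onboarding": 10, "check_ins": 10, "delivery": 25,
    "quality_review": 8, "communication": 10,
    "crisis_management": 6.5,   # midpoint of the 5-8 hour range
}

print(sum(baseline_48k.values()))    # 35 hours: manageable
print(sum(projected_72k.values()))   # 69.5 hours: inside the 68-71 breaking zone
```

The totals reproduce the article's 35-hour baseline and its 68-71 hour projection: every activity that scales with client count roughly doubles when clients double.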

Traditional advice would say: hire someone to add capacity. But hiring takes 12-16 weeks to reach productivity and creates new dependency. He needed systems that scaled without proportional time increase.

The eight breaking points identified in the stress test:

Break 1: Onboarding process required 30 minutes per client (manual walkthrough, repeated explanations, coordination). At 7-8 new clients monthly, that walkthrough alone is 3.5-4 hours of repetitive work each month.

Break 2: Weekly check-ins used the same format for all clients (30-minute call regardless of need). 20 clients × 30 minutes = 10 hours weekly of meetings, half of which are unnecessary.

Break 3: Deliverable templates existed but weren’t comprehensive. Each client engagement required custom building from a 60% base. At 20+ clients = massive customization overhead.

Break 4: Quality review was manual and sequential. Rashid reviewed every deliverable personally. 20 clients = 8 hours weekly, just reviewing, creating a bottleneck.

Break 5: Communication happened reactively via email. No self-serve resources. Every client question required an email response. 20 clients = 40-60 questions weekly, 10+ hours answering.

Break 6: Client success metrics were informal. Rashid “knew” clients were successful but couldn’t prove it with data. At scale, informal broke—no way to track 20+ clients without a system.

Break 7: Escalation protocol didn’t exist. Every client issue came straight to Rashid. 20 clients = constant interruptions, zero focus time, reactive firefighting.

Break 8: Documentation was incomplete. Processes existed in Rashid’s head, not written down. Couldn’t delegate, couldn’t systematize, single point of failure.

These weren’t broken yet at $48K with 12 clients. But simulation proved they’d fracture violently at $55K+ with 18-20 clients.

Most operators discover these breaks when they’re already happening. Clients are complaining, quality is dropping, working 65+ hours weekly, considering quitting.

Rashid saw them 8 weeks early. Fixed them before a single client was impacted.


Week 1-2: Stress Testing and Break Identification

Week 1-2 was pure diagnostic work. No fixes yet—just identifying every breaking point before it breaks.

Week 1: Simulation Methodology

Rashid didn’t guess where systems would break. He used pattern data from What Breaks at $55K showing what 71% of operators experience at $52K-$58K.

Common breaking patterns at this stage:

Pattern 1: Founder time becomes a bottleneck (63% of cases)

Pattern 2: Client communication overwhelms capacity (58% of cases)

Pattern 3: Quality consistency drops under volume (54% of cases)

Pattern 4: Onboarding delays create bad first impressions (47% of cases)

Pattern 5: Knowledge trapped in the founder’s head (71% of cases)

He mapped his current systems against these patterns. Found he was vulnerable to all five.

Stress Test Protocol:

Step 1: Document current time investment across all activities

Step 2: Project 2x client count (his growth trajectory)

Step 3: Calculate time required at 2x using current processes

Step 4: Identify where time exceeds 50 hours weekly (unsustainable threshold)

Step 5: Flag specific processes causing overflow

Week 1 Output: Eight specific breaking points documented with projected impact
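The five steps reduce to a threshold check. Here is a minimal sketch, assuming purely linear scaling and using the article's baseline figures (the function names and the 2x default are mine):

```python
UNSUSTAINABLE_THRESHOLD = 50  # hours/week, from Step 4

def project(current_hours: dict[str, float], growth: float = 2.0) -> dict[str, float]:
    """Steps 2-3: scale each activity linearly with client count."""
    return {activity: hours * growth for activity, hours in current_hours.items()}

def flag_overflow(projected: dict[str, float]) -> list[str]:
    """Steps 4-5: if the total breaches the threshold, flag processes, heaviest first."""
    if sum(projected.values()) <= UNSUSTAINABLE_THRESHOLD:
        return []
    return sorted(projected, key=projected.get, reverse=True)

# Step 1: Rashid's documented baseline at 12 clients.
current = {"onboarding": 6, "check_ins": 6, "delivery": 15, "review": 4, "comms": 4}
projected = project(current)        # 70 hours total at 2x clients
print(flag_overflow(projected))     # delivery flagged first, then the rest
```

Under this linear assumption the 2x projection lands at 70 hours, well past the 50-hour threshold, so every process gets flagged for inspection.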


Week 2: Breaking Point Analysis

For each of the eight breaks, Rashid calculated:

Cost if reactive (fixing under pressure at $55K+):

  • Crisis time investment (20-40 hours rebuilding per break)

  • Client impact during crisis (quality drops, satisfaction falls)

  • Revenue risk (clients threatening to leave)

  • Reputation damage (word spreads about quality issues)

  • Timeline cost (8-12 weeks total crisis recovery)

Cost if preemptive (fixing now at $48K):

  • Hardening time investment (4-6 hours per break systematically)

  • Zero client impact (no live fire while fixing)

  • Zero revenue risk (prevention before problem)

  • Zero reputation damage (clients never see issues)

  • Timeline cost (6 weeks total hardening)

The Math:

Reactive path: 8-12 weeks crisis recovery + client loss + reputation damage

Preemptive path: 6 weeks systematic hardening + zero damage

Net difference: 2-6 weeks saved + quality maintained + reputation protected

ROI: 40 hours invested preemptively saves 120-200 hours of crisis firefighting

Week 2 Output: Complete prioritization of which breaks to fix first based on impact severity
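The same cost comparison in a couple of lines; the hour figures are the article's, and dividing them is the only step added here:

```python
preemptive_hours = 40           # Weeks 3-4 hardening investment
crisis_hours_range = (120, 200) # firefighting hours avoided

roi_low, roi_high = (h / preemptive_hours for h in crisis_hours_range)
print(f"{roi_low:.0f}x-{roi_high:.0f}x return on hours alone")  # 3x-5x
```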


Week 3-4: Fixing the Eight Breaking Points

Week 3-4 was pure implementation. Rashid systematically hardened each breaking point.

Break 1 Fix: Automated Onboarding

Created a comprehensive onboarding portal with video walkthrough, interactive checklist, pre-populated templates, and FAQ library. Onboarding time dropped 30 minutes → 10 minutes. At 20 clients, it saves 2.5 hours weekly.

Break 2 Fix: Tiered Check-In System

Created three-tier system: high-touch clients (30 minutes weekly), standard clients (15 minutes bi-weekly), self-serve clients (async updates + monthly call). Average check-in time dropped from 30 minutes to 18 minutes per client weekly. At 20 clients, it saves 4 hours weekly.

Break 3 Fix: Complete Template Library

Expanded templates from 60% to 95% complete. Created 12 additional deliverable templates and documented customization decision trees. Customization dropped 40% → 10% per engagement. At 20 clients, it saves 6 hours weekly.

Break 4 Fix: Quality Verification System

Implemented spot-check verification using quality transfer principles. Documented 15 critical quality criteria, built a quality checklist, and spot-checked 20% of deliverables. Review time dropped 4 hours → 1.5 hours weekly while quality maintained at 97%. At 20 clients, it saves 2.5 hours weekly.

Break 5 Fix: Self-Serve Knowledge Base

Built a searchable knowledge base with 47 common questions documented, video explanations, and process documentation. Client questions requiring response dropped 80%. At 20 clients, it saves 8 hours weekly.

Break 6 Fix: Automated Success Tracking

Built an automated dashboard tracking key metrics per client, success milestones, at-risk indicators, and quarterly success reports. Rashid could monitor 20+ clients’ health in 15 minutes weekly. Prevented 2-3 clients from churning through early detection.

Break 7 Fix: Escalation Protocol

Created documented escalation framework: Tier 1 (client handles with KB), Tier 2 (standard process follows template), Tier 3 (escalate with decision framework). 90% of issues resolved without founder involvement. At 20 clients, it saves 5 hours weekly.
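The three tiers behave like a routing function. A minimal sketch, assuming two illustrative boolean criteria the article doesn't spell out:

```python
def route_issue(issue: dict) -> str:
    """Route a client issue to its escalation tier.

    The two fields checked here are hypothetical stand-ins for the
    framework's real criteria, which the article doesn't enumerate.
    """
    if issue.get("covered_by_kb"):
        return "tier_1_self_serve"    # client resolves via knowledge base
    if issue.get("has_standard_process"):
        return "tier_2_template"      # handled by a documented process
    return "tier_3_founder"           # escalate with the decision framework

print(route_issue({"covered_by_kb": True}))  # tier_1_self_serve
```

The 90% figure corresponds to issues resolving at tiers 1 and 2, before the final branch is ever reached.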

Break 8 Fix: Complete Process Documentation

Documented 18 core processes following quality transfer framework. Step-by-step procedures, decision criteria, edge case handling, and quality standards. Every process is now delegatable. Enabled future hiring without a 12-week knowledge transfer bottleneck.

Week 3-4 Total Investment: 40 hours systematic hardening

Week 3-4 Output: All eight breaking points fixed before reaching breaking load
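Tallying the weekly savings the fixes above claim at 20 clients (fixes 6 and 8 buy early churn detection and delegability rather than recurring hours, so they're left out):

```python
# Weekly hours recovered at 20 clients, per the article's per-fix claims.
weekly_savings = {
    "automated_onboarding": 2.5,
    "tiered_check_ins": 4,
    "template_library": 6,
    "spot_check_quality": 2.5,
    "knowledge_base": 8,
    "escalation_protocol": 5,
}
print(sum(weekly_savings.values()))  # 28 hours/week recovered
```

That roughly 28-hour recovery is what turns the projected 68-71 hour weeks back into something near the 38 hours Rashid actually worked at $72K.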


Week 5-6: Testing and Redundancy

Weeks 5-6 were validation. Rashid stress-tested the hardened systems.

Week 5: Load Simulation

Simulated a 20-client load with the existing 12 clients, running every client through the new systems to verify they worked. Found one additional issue: client expectations for communication response time were unclear. Some expected a 2-hour response; others were fine with 24 hours. The mismatch caused anxiety.

Week 5 Fix: Created communication SLA framework: Urgent (4-hour response), Important (24-hour), Standard (48-hour), FYI (acknowledged weekly). Client anxiety eliminated.
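A minimal sketch of the SLA framework as data plus a due-time helper (the tier names and windows are from the article; the code shape is an assumption):

```python
from datetime import datetime, timedelta

RESPONSE_SLA = {                  # Week 5 communication framework
    "urgent": timedelta(hours=4),
    "important": timedelta(hours=24),
    "standard": timedelta(hours=48),
    "fyi": timedelta(weeks=1),    # acknowledged in the weekly pass
}

def respond_by(received: datetime, tier: str) -> datetime:
    """Deadline for answering a message of the given tier."""
    return received + RESPONSE_SLA[tier]

msg = datetime(2026, 2, 2, 9, 0)
print(respond_by(msg, "urgent"))  # 2026-02-02 13:00:00
```

Publishing the table itself is most of the fix: once clients know the window for each tier, the anxiety about "when will I hear back?" disappears.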

Week 6: Redundancy Building

Built backup systems to prevent single points of failure. Cross-trained on critical processes, created backup templates, backed up the knowledge base, and documented emergency protocols.

Weeks 5-6 Investment: 10 hours

Output: Systems hardened, redundant, ready for scale


Post-Hardening: Scaled to $72K in 12 Weeks Without Breaking

After 6 weeks of systematic hardening, Rashid resumed growth.

Weeks 7-18: Growth on Solid Foundation

Week 7: Added client 13 → $52K/month. Systems held. Zero stress.

Week 10: Added clients 14-16 → $60K/month. Systems still holding. Rashid worked 38 hours weekly.

Week 14: Added clients 17-19 → $68K/month. Three new clients in one month. The onboarding portal handled all three simultaneously. Escalation protocol prevented one potential crisis (caught early, resolved at Tier 2).

Week 18: Added clients 20-21 → $72K/month. Systems operated smoothly. Rashid was working 38 hours weekly (same as at $48K). Client satisfaction sat at 96% (higher than at $48K). Zero fires. Zero crisis. Zero rebuild needed.

The Validation:

6 weeks of hardening enabled 12 weeks of smooth growth. $48K → $72K (+50%) without breaking anything.

The Alternative Path:

If Rashid had pushed growth without hardening: Weeks 8-10 hit $55K-$58K, systems break. Weeks 11-18 emergency rebuild, revenue drops to $50K-$52K. Weeks 19-22 restabilize at $60K.

Reactive path: 22 weeks to reach $60K with crisis damage

Preemptive path: 18 weeks to reach $72K with zero damage

Time saved: 4 weeks. Revenue difference: $12K/month higher. Crisis avoided: 8 weeks of firefighting + client loss + reputation damage.


The Hidden Problems That Almost Derailed Everything

Every transformation hits resistance. Here’s what almost stopped Rashid’s preemptive hardening and how he pushed through.

Problem 1: Felt Like a Waste of Time (Revenue Not Growing)

The Block: Week 3, halfway through hardening. Rashid’s peer group was posting revenue wins. He was posting “working on systems.” Felt like he was falling behind.

The Doubt: “Maybe I should just push growth and deal with problems as they come? Everyone else seems fine without all this prep work.”

The Reality Check:

Pulled data from What Breaks at $55K showing 71% of operators hit crisis at this stage. The peers celebrating wins were at $35K-$45K, not yet at the breaking point. They’d hit it in 8-12 weeks. He’d sail past it.

The Math:

Path A (Preemptive): 6 weeks flat at $48K, then 12 weeks smooth growth to $72K

Path B (Reactive): 8 weeks growth to $58K, then 8 weeks crisis, then 6 weeks recovery to $60K

18-week result: $72K with zero crisis, zero client loss, zero stress

22-week result: $60K with crisis scars, client loss, reputation damage

Net difference: $12K higher monthly, 4 weeks faster, reputation intact

The data proved: 6 weeks “wasted” on infrastructure saved 8+ weeks of crisis recovery, plus enabled $12K higher revenue. Not a waste—investment with 3x ROI.


Problem 2: Hard to Simulate Future Stress Accurately

The Block: Week 1, trying to stress-test systems. How do you simulate 20 clients when you have 12?

The Solution: Used pattern data showing what 71% of operators experience at $52K-$58K: 63% hit founder time bottleneck, 58% hit communication overwhelm, 54% hit quality consistency issues, 47% hit onboarding delays, 71% hit knowledge trapped in head.

Rashid checked his systems against these five patterns. Found he was vulnerable to all five. Focused hardening on known breaking points, not speculation.

Result: When he scaled to $72K, nothing broke—because he’d hardened the known fracture points.

Lesson: Don’t guess what will break. Fix the patterns that already broke for others at this stage.


Problem 3: Found More Problems Than Expected

The Block: Week 2, stress testing revealed eight breaking points. Expected 3-4. Finding eight felt overwhelming. “Maybe I should just push growth and deal with problems as they come?”

The Reframe: Finding problems in simulation is good. Finding them in production with real clients is a disaster.

The Reality:

Problems found in simulation: Fix calmly, zero client impact, learn and improve

Problems found under load: Fix frantically, clients complaining, revenue at risk, reputation damage

The Solution: Treated problem discovery as success, not failure.

Each breaking point found in Week 2 was a crisis prevented at $55K+. Every hour spent fixing in Weeks 3-4 was 5-10 hours of firefighting saved later.

Mental Model Shift:

From: “Eight problems = I’m behind”

To: “Eight problems found early = eight crises prevented”

Result: Fixed all eight systematically over 4 weeks. When he scaled past $55K, hit $60K, reached $72K—zero of those eight broke. Because they were already hardened.

Lesson: More problems found early = better. Means your diagnosis is thorough. Fix them preemptively, scale smoothly.


The Results: 6 Weeks Hardening vs. 8+ Weeks Rebuilding

Here’s what Rashid achieved through preemptive hardening versus what the reactive path would’ve delivered.

Rashid’s Preemptive Path (18 weeks total):

  • Weeks 1-6: System hardening at $48K (revenue flat)

  • Weeks 7-18: Scale $48K → $72K smoothly (12 weeks)

  • Revenue: $48K → $72K (+50%)

  • Clients: 12 → 21 (+75%)

  • Hours/week: 35 → 38 (+8.6%)

  • Client impact: Zero (maintained quality throughout)

  • Downtime: Zero (no crisis periods)

  • Stress level: Low (controlled, strategic)

  • Time to $72K: 18 weeks from decision point

Traditional Reactive Path (22+ weeks typical):

  • Weeks 1-8: Push growth $48K → $58K (systems straining)

  • Weeks 9-10: Systems break at $58K, crisis mode

  • Weeks 11-18: Emergency rebuild (8 weeks), revenue drops to $50-52K

  • Weeks 19-22: Recover and restabilize at $60K

  • Revenue: $48K → $60K (+25%)

  • Clients: 12 → 16 (lost some during crisis)

  • Hours/week: 35 → 55+ (crisis firefighting)

  • Client impact: High (quality dropped, complaints, some churn)

  • Downtime: 8-10 weeks crisis recovery

  • Stress level: Extreme (reactive firefighting)

  • Time to $60K: 22 weeks from decision point

The Compression:

Rashid reached $72K in 18 weeks (preemptive path) vs. $60K in 22 weeks (reactive path).

4 weeks faster. $12K higher monthly. Zero crisis. Zero client loss.

The Math:

Time saved: 4 weeks (18 vs. 22 to stable scale)

Revenue difference: $72K vs. $60K = $12K monthly ongoing

Annual impact: $12K × 12 = $144K higher annual revenue

Crisis hours prevented: 120-200 hours of firefighting (calculated at 55-hour weeks for 8 weeks of crisis)

Client retention: 100% vs. ~85% = 3-4 clients retained who would’ve left

Stress prevented: Immeasurable but significant
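Stated as code, the comparison reduces to three subtractions (all figures from the article):

```python
# Outcomes at the point each path stabilizes.
preemptive = {"weeks_to_stable": 18, "monthly_revenue": 72_000}
reactive = {"weeks_to_stable": 22, "monthly_revenue": 60_000}

weeks_saved = reactive["weeks_to_stable"] - preemptive["weeks_to_stable"]
monthly_delta = preemptive["monthly_revenue"] - reactive["monthly_revenue"]
annual_delta = monthly_delta * 12

print(weeks_saved, monthly_delta, annual_delta)  # 4 12000 144000
```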


How This Proves Preemptive Infrastructure Works

Rashid’s case proves that foundation-first sequencing compresses timelines faster than reactive scaling.

Why It Worked:

Early warning detection: Pattern data showed what breaks at $55K. Rashid was at $48K. That’s 8-week lead time. He used it to harden before breaking.

Pattern-based stress testing: Used intelligence from 322 documented journeys showing the five most common breaking points. Fixed the patterns, not speculation. Zero surprises when he scaled.

Systematic hardening: Identified eight breaking points. Fixed all eight over 4 weeks. When he scaled past $55K, none broke.

ROI validation: 40 hours invested in hardening. 936 hours saved annually in crisis prevention. 23x ROI on time alone, plus $144K annual revenue difference.


What Preemptive Hardening Proved

Crisis prevention beats crisis response: Reactive operators spend 8-12 weeks in crisis recovery. Proactive operators spend 6 weeks hardening, then scale smoothly. 2-6 weeks faster, plus zero damage.

Foundation strength enables speed: Pausing growth to harden systems enables faster scale afterward. Weak foundation = slow, breaking, rebuild cycles. Strong foundation = fast, smooth, sustainable.

Prevention investment has exponential ROI: 40 hours fixing prevents 120-200 hours of firefighting. Plus revenue protection, client retention, reputation maintenance, and stress reduction.


Rashid went from $48K running smoothly to $72K still running smoothly, in 18 weeks, with 6 of them spent on preemptive hardening. Not because he got lucky, but because he saw the breaking point coming, hardened infrastructure before it broke, and then scaled on a solid foundation.

Preemptive hardening compresses timelines. Reactive scaling extends them.

Which path are you taking?


FAQ: Preemptive Delivery Hardening Scale System

Q: How does fixing systems at $48K let me reach $72K/month in 18 weeks without crisis?

A: Rashid invested 6 weeks hardening eight delivery, communication, and documentation breaks at $48K so he could climb to $72K/month in the next 12 weeks with 21 clients, 38-hour weeks, and zero fires.


Q: How much time and revenue do I really lose if I wait until $55K–$58K and let systems break before fixing them?

A: The reactive path costs 8–12 weeks of 55+ hour crisis rebuilding, drops revenue back to $50K–$52K, and leaves you stuck around $60K at 22 weeks instead of $72K at 18 weeks.


Q: How do I use the “What Breaks at $55K” diagnostic with its 2x client stress test before I push past $48K?

A: In Weeks 1–2 you map your current 12-client, 35-hour setup to a doubled 20+ client load, then use the What Breaks at $55K patterns to surface eight specific breaking points across onboarding, check-ins, templates, reviews, communication, metrics, escalation, and documentation before you add a single client.


Q: What happens to my weekly hours if I scale from $48K to $72K on fragile, linear systems instead of hardening first?

A: Without hardening, doubling toward 20 clients pushes you from 35-hour weeks to 68–71-hour weeks made up of 10 hours onboarding, 10 hours check-ins, 25 hours delivery, 8 hours quality review, 10 hours communication, and 5–8 hours of crisis management.


Q: How do I harden the eight breaking points in 40 hours without slowing my business down for months?

A: Across Weeks 3–4 you spend about 40 hours building an automated onboarding portal, tiered check-ins, 95% complete templates, a spot-check quality system, a 47-question knowledge base, an automated success dashboard, an escalation protocol, and 18 documented core processes so each weak point is fixed once and then reused for every future client.


Q: When should I pause growth to harden systems, and what changes in my timeline if I don’t?

A: You pause at $48K with 12 clients for 6 weeks to run diagnostics and fixes so you hit $72K in 18 weeks total, whereas skipping the pause usually means hitting $58K, breaking, spending 8–10 weeks in crisis, and landing at only $60K after 22+ weeks.


Q: How much crisis time does preemptive hardening actually save once I pass $55K and 18–20 clients?

A: The 40 hours you invest up front prevents 120–200 hours of firefighting during 8 weeks of 55-hour crisis weeks, while also avoiding client churn, reputation damage, and rebuild downtime.


Q: How do I test that my hardened delivery system will survive 20+ clients before I actually reach $72K?

A: In Weeks 5–6 you simulate a 20-client load with your 12 clients, run them through the new onboarding, check-ins, and escalation flows, then fix issues like unclear response-time SLAs with a 4-hour, 24-hour, 48-hour, and weekly framework so everything holds under projected $72K conditions.


Q: Why does fixing systems before they break let me grow faster than pushing growth now and rebuilding later?

A: Preemptive hardening trades 6 flat weeks at $48K for a smooth 12-week climb to $72K, whereas reactive scaling delivers 8 weeks of strained growth to $58K, 8 weeks of rebuilding at $50K–$52K, and only $60K at Week 22—4 weeks slower and $12K/month lower.


⚑ Found a Mistake or Broken Flow?

Use this form to flag issues in articles (math, logic, clarity) or problems with the site (broken links, downloads, access). This helps me keep everything accurate and usable. Report a problem →




➜ Help Another Founder, Earn a Free Month

If this system just saved you from spending 8–12 weeks in 65-hour crisis rebuilding at $55K–$58K, share it with one founder who needs that relief.

When you refer 2 people using your personal link, you’ll automatically get 1 free month of premium as a thank-you.

Get your personal referral link and see your progress here: Referrals


Get The Toolkit

You’ve read the system. Now implement it.

Premium gives you:

  • Battle-tested PDF toolkit with every template, diagnostic, and formula pre-filled—zero setup, immediate use

  • Audio version so you can implement while listening

  • Unrestricted access to the complete library—every system, every update

What this prevents: Losing 8–12 weeks to 68–71-hour crisis rebuilds just to crawl from $58K to $60K.

What this costs: $12/month. A small allocation for avoiding $144K in annual revenue loss from stalling at $60K instead of $72K.

Download everything today. Implement this week. Cancel anytime, keep the downloads.

Already upgraded? Scroll down to download the PDF and listen to the audio.

© 2026 Nour Boustani