Hiring hasn’t changed much in decades. Recruiters wade through endless resumes, candidates struggle to showcase who they truly are, and companies rely on paid demos and clunky trial processes. We wanted to flip the script: create a platform that surfaces exclusive, vetted talent, highlights personality and skills, and streamlines the recruiting process. That vision became ReelCV.
Here’s how we built it, month by month, behind the scenes.
March marked the true beginning of building ReelCV. What started as a rough vision began to take on structure, functionality, and form. The team focused on the foundational elements: profile scoring, recruiter workflow improvements, and the first working version of the platform.
Early in the month, conversations circled around how recruiters actually think about candidates. Chris and Ryan explored the idea of profile scoring — a way to quantify both “red flags” and positive signals in a candidate’s career journey.
The vision was clear: instead of recruiters manually scanning for these patterns, ReelCV would apply weighted scoring automatically, surfacing candidates most likely to succeed while flagging potential concerns. These discussions planted the seed for one of ReelCV’s most important differentiators: a data-driven way to evaluate people, not just paper resumes.
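To make the idea concrete, here is a minimal sketch of what weighted scoring could look like. The signal names and weights are invented for illustration; nothing here is ReelCV's actual model.

```python
# Hypothetical weighted profile scoring (invented signals and weights,
# not ReelCV's actual model).
SIGNAL_WEIGHTS = {
    "avg_tenure_years": 2.0,      # longer stints read as a positive signal
    "gaps_over_6_months": -1.5,   # unexplained gaps read as a red flag
    "promotions": 1.0,            # upward movement reads as a positive signal
}

def profile_score(signals: dict[str, float]) -> float:
    """Sum each observed signal weighted by its importance."""
    return sum(SIGNAL_WEIGHTS.get(name, 0.0) * value
               for name, value in signals.items())

print(profile_score({"avg_tenure_years": 3.2, "gaps_over_6_months": 1, "promotions": 2}))
# -> 6.9: higher-scoring candidates surface first; low scores get flagged
```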
Ryan had begun improving the user and admin experience inside the Hirexe platform. The focus was on precision over volume: rather than overwhelming recruiters with dozens of irrelevant candidates, the system would present a handful of highly relevant profiles.
The work included:
This was the first step toward shaping ReelCV as a practical tool recruiters could adopt quickly, rather than a flashy product that disrupted workflows.
Chris provided critical feedback on design and layout. His recommendations emphasized clarity, structured experiences, and small usability enhancements like CV view tracking. These insights were vital in keeping the product grounded in what recruiters actually needed, not just what looked good on paper.
By March 15, ReelCV hit its first major milestone: V1 was officially ready. This was more than a prototype; it was the foundation on which every future iteration would be built. Recruiters could log in, create candidate profiles, and begin to see how ReelCV might reshape the hiring process.
A week later, however, a problem surfaced. Despite video being central to the ReelCV vision, candidates weren’t uploading their videos. Technical friction, strict validation, and unclear guidance caused drop-offs. Without videos, the “Careel” (career reel) concept couldn’t deliver on its promise of dynamic storytelling.
Chris flagged this as a bottleneck, and on March 20, Ryan implemented a solution: video upload became a required step in profile creation.
This change forced adoption while ensuring every profile included the key differentiator that set ReelCV apart.
Toward the end of March, discussions turned to how candidate skills should be captured and displayed. The team began planning for V2 improvements, including more intelligent skill extraction, tagging, and presentation. These early conversations foreshadowed the refinements that would come in April and May.
March was all about building the bedrock of ReelCV. From scoring models to recruiter workflows, from layout tweaks to the first working version, the month ended with a product that was both usable and promising. Challenges like video uploads revealed the gaps, but they also pushed the team toward solutions that strengthened the platform.
ReelCV had officially moved from idea to implementation.
If March was about laying the foundation, April was about giving ReelCV its personality. The focus shifted from raw functionality to human-centered design, finding ways to showcase candidates as more than static resumes and making the recruiter experience feel smooth, modern, and intuitive.
At the start of April, the team reviewed the original homepage layout. It felt functional, but more like a traditional database than a modern hiring platform. Chris and Ryan began redesigning the flow, emphasizing search as the entry point. Instead of navigating menus, recruiters would land directly on a search-driven homepage with prompts to start finding candidates right away.
Ryan demonstrated updates that made the experience more engaging:
The shift was subtle but significant: ReelCV started to feel less like a form-filling system and more like a dynamic tool that guided recruiters and candidates forward.
By April 8, the conversation turned to one of the most defining elements of ReelCV: capturing personality.
Chris, Ryan, and the team debated how to move beyond credentials and showcase people as individuals. The result was a new profile structure that highlighted hobbies, passions, and personality traits alongside professional skills. Design elements included:
This was a turning point: ReelCV wasn't just solving efficiency problems for recruiters; it was creating a more authentic, human-centered hiring experience.
Mid-April saw refinements to the Career Profile, which became the centerpiece of the candidate experience.
Key updates included:
The team also debated the value of experimental fields like “About Me” and “My Future”, weighing authenticity against clutter. These discussions reflected ReelCV’s constant balancing act: depth vs. clarity, personality vs. professionalism.
As April progressed, design refinements gave the product a modern polish. Ryan rolled out updates that introduced:
Even small choices, like limiting orange usage to preserve its impact, showed how carefully the team was shaping ReelCV's identity.
April was the month ReelCV began to feel alive. The platform evolved from a functional V1 into something with warmth, personality, and a clearer voice. Profiles were no longer just digital CVs; they became windows into a candidate’s story. Recruiters weren’t just searching; they were experiencing a more intuitive, guided workflow.
With these changes, ReelCV moved closer to its core promise: making hiring both efficient and human.
By May, ReelCV began crossing the line from concept to working product. The month was defined by breakthroughs: long-discussed features finally became operational, and for the first time, the team could experience the platform end-to-end as a user would.
On the homepage, Ryan introduced a responsive globe animation, adding a sense of scale and motion. It wasn't just a design flourish; it visually reinforced Hirexe's mission of connecting South African professionals with global opportunities.
Work also continued on the profile creation process. Early iterations of the “Create a Profile” page were simplified and refined, ensuring that candidates were guided step by step without overwhelm.
One of the major wins was solving the Candidate Title challenge. Jurgen and Ryan collaborated on an API fix that allowed accurate job titles to pull through consistently, a small but crucial detail for candidate credibility and recruiter trust.
Another milestone was enabling a functional CV upload with live data. For the first time, the system could parse, process, and store actual candidate CVs, a leap forward from placeholders and test data.
Perhaps the biggest technical achievement came mid-month: video upload, capture, conversion, transcription, and AI analysis all became fully operational.
This was a defining feature for ReelCV. Video made it possible to showcase not just qualifications, but presence, communication style, and personality. AI-driven transcription and analysis turned these videos into structured, searchable data. The team celebrated this as a big win.
With the basics in place, attention turned to enhancing depth and usability:
By the end of May, Ryan presented a walk-through of V2. The platform had moved far beyond its early wireframes: there was now a coherent candidate journey, from signup to video, from CV upload to recruiter-facing search and rating.
With so many core features working, May also marked the start of internal sign-up testing. Members of the Hirexe team created their own profiles, uploading CVs, recording videos, and stepping into the shoes of future users. Feedback was shared openly in Slack, often captured in quick videos or notes. These test runs revealed pain points, validated design choices, and built confidence that ReelCV was ready for wider trials.
May was the month ReelCV truly came to life. What began as scattered features in development matured into a functioning platform, tested internally and celebrated for its breakthrough on video. For the first time, the team could see and not just imagine the product they had been building toward for months.
By June, ReelCV was no longer just an internal build; it was being tested in real-world conditions. This meant progress, but also the inevitable roadblocks that appear when theory meets practice.
On June 12, a critical issue surfaced: the system was not accepting videos reliably. This was a major concern, since video was at the heart of ReelCV’s value proposition. The team quickly identified the problem and began working through fixes, but it was a reminder that even celebrated features from May needed ongoing resilience under load.
June also marked the first wave of external applicant reactions. A handful of candidates tried the new signup process, and their feedback — screenshots, notes, and direct comments — was captured and discussed by Ryan and Nakita in Slack. While not all reactions were glowing, the input was invaluable. It validated what was working while highlighting areas for improvement.
On June 20, Ryan demoed an early version of the search functionality. Recruiters could now filter and find candidates more dynamically, surfacing profiles not just by job title but by a mix of skills, experience, and traits. It was an exciting glimpse of how ReelCV would empower MSPs to move beyond keyword-matching toward holistic talent discovery.
The month closed with a new technical hurdle. On June 30, the team hit a Google Console error that disrupted parts of the platform. While frustrating, the issue became another example of ReelCV's resilience-in-progress: each roadblock was tackled quickly, with fixes rolled into subsequent updates.
June was the month of stress-testing. The product was in users’ hands, the team was watching real reactions, and the system’s weak points were being exposed. The setbacks, especially around video uploads and Google integration, were balanced by major wins: authentic feedback, working search, and proof that ReelCV could handle the messy reality of live use.
July was all about stability, refinement, and incremental wins. The product was now being used more consistently, but real-world usage continued to expose friction points, and the team responded with targeted improvements.
The month kicked off with persistent issues in Google Console, limiting production capacity. The team was actively troubleshooting quotas and production limits, ensuring that new signups and CV uploads could proceed without interruption.
Despite technical hurdles, product review sessions held throughout July (on the 15th, 22nd, and 28th) showed encouraging results. Feedback highlighted improvements in usability and overall experience, reinforcing the work the team had done to stabilize ReelCV.
A key focus for July was deploying solutions to recurring issues from prior months:
July was a month of resilience and refinement. Roadblocks like Google Console limitations persisted, but proactive monitoring, enhanced retry logic, and thoughtful feature updates ensured users could continue building their ReelCVs without major disruption. Positive feedback during product reviews validated these improvements, highlighting that ReelCV was becoming a stable and dependable tool for both candidates and MSPs.
August marked the official launch of ReelCV, completing the platform’s development and making it fully live. The team focused on finalizing workflows, enhancing the user experience, and ensuring all key features were functional for both candidates and companies.
Significant improvements were implemented across onboarding, user registration, company creation, and search workflows:
- The onboarding modal and related dialogs were refined for a smoother user experience, with consistent styling and dark/light mode harmonization.
- User registration and company migration were strengthened with Firebase Auth integration, company data migration, and enforced corporate email and profile requirements.
- The FindUsers and Candidate Search features were upgraded with a new two-panel layout, server-side filtering, sortable tables, and editable recruiter ratings.
- Deliverable sharing, data migration, and Admin tools received security updates and functionality improvements.
- ReelCV was fully integrated with the SQL backend, with minor bug fixes and styling tweaks applied across the platform.
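As one concrete example of the enforced corporate-email requirement, a signup check can be as simple as a domain test. A minimal sketch, with an invented domain list and helper name (not ReelCV's actual validation code):

```python
# Hypothetical corporate-email check (invented domain list; illustrative only).
FREE_EMAIL_DOMAINS = {"gmail.com", "yahoo.com", "outlook.com", "hotmail.com"}

def is_corporate_email(email: str) -> bool:
    """Reject addresses on well-known free-mail domains."""
    domain = email.rsplit("@", 1)[-1].lower()
    return "@" in email and domain not in FREE_EMAIL_DOMAINS

assert is_corporate_email("jane@acme.co")
assert not is_corporate_email("jane@gmail.com")
```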
The team conducted internal testing and monitored user sign-ups, capturing feedback through Slack. Errors and edge cases were reviewed and resolved quickly, ensuring the platform was stable and ready for public use. Positive product reviews confirmed the platform’s usability and overall experience improvements.
By the end of August, ReelCV was fully operational, providing a seamless experience for candidates and companies. While the Admin side is still being fine-tuned, all core features are functional and live, marking a major milestone for the platform.
TL;DR: Despite all the talk about "artificial" intelligence, the biggest names in AI are spending billions of dollars on human labor.
From the $500M+ that Mercor is making connecting PhDs to AI labs, to Scale AI's $2B+ in revenue from human data workers, to Surge AI crossing $1.4B with just 121 employees managing human annotators—the AI revolution is actually powered by an invisible army of human experts doing the grunt work that makes these "intelligent" systems possible.
About 80% of the explosive revenue growth we've seen across these companies comes from STAFFING revenue. The AI industry has a dirty little secret, and it's hiding in plain sight.
While Silicon Valley VCs throw around terms like "artificial general intelligence" and "autonomous systems," the reality is far more human than anyone wants to admit. Behind every breakthrough language model, every impressive AI assistant, and every mind-blowing demonstration lies an army of human workers—annotating data, providing feedback, and essentially teaching machines how to think.
The numbers tell a story that the AI hype machine doesn't want you to hear: the companies making the most money in AI aren't the ones building the flashy chatbots—they're the ones managing the human workforce that makes those chatbots possible.
Mercor's story reads like a Silicon Valley fever dream. Founded by three 21-year-old college dropouts in 2023, the company started as a recruiting platform for college students. Fast forward to September 2025, and Mercor is reportedly approaching a $450 million annual run rate with investors eyeing a $10+ billion valuation.
What changed? AI labs discovered that Mercor was sitting on exactly what they desperately needed: access to thousands of domain experts with advanced degrees.
The Numbers Behind Mercor's Explosion:
Here's what Mercor actually does for AI companies: they connect graduate-level experts—physics PhDs, biology researchers, legal experts, medical doctors—with AI companies that need specialized knowledge to train their models. When OpenAI needs someone who understands quantum mechanics to help improve GPT's physics reasoning, or when Anthropic needs constitutional law experts to help Claude understand legal nuances, they turn to Mercor.
Mercor's business model is brutally simple: charge a 30% fee on every expert they place. With AI companies paying premium rates for specialized talent (often $50-200+ per hour), Mercor's take per placement is substantial. The company has been profitable since early 2025, generating $1M+ in profit just in February alone.
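Some napkin math on what that fee structure implies, using the midpoint of the cited rates. The hours and utilization below are our assumptions, not Mercor's disclosed figures:

```python
# Napkin math on Mercor's placement economics. The 30% fee and $50-200/hr
# range come from above; the rest is assumed for illustration.
hourly_rate = 100        # midpoint of the cited $50-200/hr range
take_rate = 0.30         # Mercor's fee on every placement
hours_per_month = 160    # assume a full-time expert

monthly_take = hourly_rate * hours_per_month * take_rate
annual_take = monthly_take * 12
print(f"${monthly_take:,.0f}/month, ${annual_take:,.0f}/year per expert")
# -> $4,800/month, $57,600/year: roughly 8,000 concurrent full-time
#    placements at this midpoint would support a $450M annual run rate
```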
The kicker? CEO Brendan Foody recently posted that their ARR is actually higher than $450 million—suggesting they're on track to hit the $500 million milestone faster than almost any enterprise software company in history.
Scale AI tells perhaps the most revealing story about AI's human dependency. Founded in 2016, Scale positioned itself as the infrastructure layer for AI training data—but what they really built was the world's most sophisticated human workforce management system.
Scale AI's Staggering Numbers:
Here's what's wild about Scale: Meta just paid $14.3 billion for a 49% stake in what is essentially a human resources company. Think about that for a moment. The company that owns Facebook, Instagram, and WhatsApp—with all their technical expertise—decided they needed to pay nearly $15 billion to access Scale's network of human data workers.
What Scale's Army Actually Does:
Scale operates massive facilities in Southeast Asia and Africa through their Remotasks subsidiary, employing tens of thousands of workers who spend their days training tomorrow's AI systems. They've built the McDonald's of AI training—standardized, scalable human intelligence that AI companies can't replicate internally.
The Meta acquisition reveals the secret: Scale's real value isn't their technology—it's their ability to coordinate hundreds of thousands of humans to improve AI systems at massive scale.
While everyone was watching OpenAI and Anthropic, Surge AI quietly built the most profitable human intelligence operation in AI history. Founded in 2020 by former Google and Meta engineer Edwin Chen, Surge took a different approach: bootstrap profitability from day one.
Surge AI's Incredible Economics:
This might be the most impressive business in all of AI. Surge generates over $11 million in revenue per employee ($1.4B+ spread across just 121 people)—a number that makes even the most successful SaaS companies look inefficient. How? They've perfected the art of human intelligence arbitrage.
Surge's Secret Sauce:
Unlike Scale's volume approach, Surge focuses on premium, specialized data work that requires deep expertise. When AI labs need the absolute highest quality human feedback—the kind that can make or break a model's performance—they pay Surge's premium prices.
Chen's anti-VC approach has created something rare: a massively profitable company with complete control over its destiny. While other AI companies burn billions chasing growth, Surge prints money by connecting highly skilled humans with AI companies willing to pay top dollar for quality.
Handshake's transformation story might be the most surprising of all. Started in 2014 as a career network for college students, Handshake spent a decade building what they didn't realize was the perfect infrastructure for the AI boom.
Handshake's AI Pivot Numbers:
What makes Handshake unique: they already had the trust and relationships with universities and students that other companies would spend years building. When AI labs started desperately seeking PhD-level experts for training data, Handshake realized they were sitting on a goldmine.
Handshake AI's business model is straightforward: connect their verified network of graduate students and recent PhD recipients with AI companies that need domain expertise. Physics students help improve AI reasoning about quantum mechanics. Biology PhDs help models understand complex molecular interactions. Legal scholars help AI understand constitutional law.
The beauty of Handshake's position is trust and verification. While anyone can claim to be an expert online, Handshake's university partnerships mean they can verify credentials and academic standing. AI companies pay premium rates for this level of verification.
To understand why companies like Mercor, Scale, Surge, and Handshake are growing so fast, you need to look at where the big AI companies get their money—and how much they're willing to spend on human intelligence.
1. OpenAI
2. Anthropic
3. Google DeepMind
4. Meta AI
5. xAI (Elon Musk)
6. Microsoft (through OpenAI partnership)
7. Amazon (Bedrock + Anthropic)
Total Market Math: These seven companies have a combined market cap/valuation of over $7 trillion and are collectively spending an estimated $3+ billion annually on human data work. That's enough to support the massive growth we're seeing in companies like Mercor, Scale, Surge, and Handshake.
The secret to understanding AI's human dependency lies in a technical concept that sounds boring but is absolutely critical: Reinforcement Learning from Human Feedback (RLHF).
Here's the dirty secret about large language models: they don't actually understand anything. They're essentially extremely sophisticated autocomplete systems that predict what word should come next based on patterns they've seen in training data.
The problem? Raw prediction doesn't create useful AI assistants. A model trained only on internet text might complete "How do I cook chicken?" with accurate information—or with a conspiracy theory, a joke, or instructions for something dangerous. RLHF is how AI companies teach models to be helpful, harmless, and honest.
The RLHF Process:
This process is labor-intensive and requires skilled human judgment. You can't just hire anyone—you need people who understand the domain, can spot subtle errors, and can make consistent quality judgments.
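To see why that human judgment is so central, consider the pairwise loss that reward models are commonly trained on. This is a generic sketch of the standard Bradley-Terry formulation, not any particular lab's code; the point is that every gradient originates from a human labeler's preference.

```python
import math

# Reward-model training in RLHF: humans pick the better of two responses,
# and the model is trained so the chosen response scores higher, via
#     loss = -log(sigmoid(r_chosen - r_rejected))

def pairwise_loss(r_chosen: float, r_rejected: float) -> float:
    """Small when the human-preferred response already outscores the other."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# A labeler preferred response A (scored 2.1) over response B (scored 0.3):
print(round(pairwise_loss(2.1, 0.3), 2))  # 0.15 -> model agrees with the human
print(round(pairwise_loss(0.3, 2.1), 2))  # 1.95 -> big correction needed
```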
The numbers around RLHF are staggering:
Training GPT-4 Level Models Requires:
Types of Human Experts Needed:
This is why companies like Mercor (PhD experts), Scale (massive workforce), Surge (premium specialists), and Handshake (verified academics) are growing so fast—they've built the infrastructure to deliver human expertise at the scale AI companies need.
Here's what most people don't realize: RLHF isn't a one-time process. As AI models get more sophisticated, they need more sophisticated human feedback. Consider what's coming:
Next-Generation Feedback Needs:
Each of these advances requires new types of human expertise and even more human feedback. The companies that can deliver this feedback will only become more valuable.
Beyond training AI models, there's another massive human-powered industry growing: AI evaluation and testing.
Every AI company needs to answer the same questions:
The answer requires human evaluation at massive scale.
Current AI Benchmarks and What They Test:
The Human Element: Every one of these benchmarks required hundreds or thousands of hours of human expert time to create, validate, and score. And they need to be constantly updated as AI models improve.
As AI models get better, evaluation becomes more challenging and expensive:
Evolution of AI Benchmarks:
Cost Escalation: Evaluating a single AI model on comprehensive benchmarks now costs $1,000-10,000 per model. With dozens of major models and constant updates, the evaluation market is easily worth hundreds of millions annually and growing.
Key Players in AI Evaluation:
Why evaluations will keep growing:
Conservative estimates suggest the AI evaluation market will reach $10+ billion annually by 2030, with the majority of that spending going to human experts who design, run, and interpret these evaluations.
The AI industry's dirty secret isn't just that humans are powering current AI—it's that humans will likely be essential to AI development for the next decade or more.
As AI systems become more capable, they actually require more sophisticated human feedback, not less:
The Scaling Challenge:
Each advance multiplies the need for human expertise.
The companies we've examined aren't temporary solutions—they're building sustainable economic moats:
Mercor's Moat: Exclusive relationships with 1,500+ universities and 18M+ students. New competitors would need years to build similar trust and scale.
Scale's Moat: 300,000+ trained workers, operational infrastructure across multiple countries, and enterprise relationships with every major AI company.
Surge's Moat: Premium positioning with top AI labs and a proven ability to deliver quality at massive scale with minimal overhead.
Handshake's Moat: University partnerships and verified credential systems that competitors can't easily replicate.
The numbers don't lie. AI companies are doubling down on human infrastructure:
These aren't temporary investments—they're strategic bets that human intelligence will remain essential to AI development.
Strip away the hype, and the AI industry looks like this:
Layer 1: Foundation Models (OpenAI, Anthropic, Google)
Layer 2: Human Intelligence Platforms (Mercor, Scale, Surge, Handshake)
Layer 3: AI Applications (Everything else)
The money flows up: Application companies pay foundation model companies, who pay human intelligence platforms. The most profitable layer isn't the one with the most hype.
For Workers: The AI revolution isn't destroying knowledge work—it's creating massive demand for human expertise. PhD graduates, domain experts, and skilled evaluators are in higher demand than ever.
For Companies: Success in AI increasingly depends on access to human intelligence at scale. Companies that can coordinate human expertise will have sustainable advantages over those that can't.
For Investors: The "picks and shovels" play in AI isn't semiconductors or cloud computing—it's human intelligence platforms.
Three scenarios for human involvement in AI:
Scenario 1: Continued Growth (Most Likely)
Scenario 2: Gradual Automation (Possible)
Scenario 3: AI Self-Sufficiency (Unlikely in 10 years)
Most experts believe Scenario 1 is most likely because the complexity of human values and the pace of AI advancement suggest that sophisticated human feedback will remain essential for much longer than most people realize.
The real AI revolution isn't happening in the sleek labs of OpenAI or the data centers of Google. It's happening in the distributed network of human experts who are teaching machines how to think.
Behind every impressive AI demo, every breakthrough capability, and every billion-dollar valuation lies an invisible army of humans:
The companies that have figured out how to coordinate this human intelligence at scale—Mercor, Scale, Surge, Handshake—aren't just service providers. They're building the nervous system of the AI economy.
The dirty little secret is out: AI isn't replacing humans—it's creating unprecedented demand for human expertise. The companies that embrace this reality, rather than fighting it, will be the ones that capture the real value in the AI revolution.
The future of AI isn't artificial intelligence replacing human intelligence. It's human intelligence and artificial intelligence working together at previously unimaginable scale. The companies that master this combination won't just participate in the AI revolution—they'll control it.
And that might be the most human outcome of all.
Here are the key aspects to keep in mind when applying for an entry-level sales role. This advice is aimed at someone early in their career (a couple of years out of college) who is actively interviewing.
Sales roles have the same fundamentals, regardless of the industry you’re in.
Sales leadership is looking for a core set of principles (especially when you're early in your career and light on experience).
Here's what people are looking for, and what to highlight, when you're interviewing for sales positions.
Everything else can be taught.
For the interview itself, come prepared with the following stories.
Building a business is constantly about making small changes.
You have to focus on the big stuff, and then let the little stuff fall into place
This is especially true at startups, and especially true with software.
Take Loom as an Example:
They started off as a user testing marketplace.
Initially, they were selling feedback from experts. But their users didn't care about 'expert' feedback.
They cared about REAL feedback from REAL users that were using their product!
Now, Loom would not have been able to make this pivot if:
Quick pivot, and BOOM, Product Market Fit.
Naval nailed this in a tweet from a few years ago that I haven't forgotten since.
The faster you get to 10,000 iterations, the faster you have an outlier product.
Necessities for productive iteration:
Action Items:
Momentum is everything - the hardest thing is to get started from nothing.
That’s crawling (this is where we are with Hirexe)
But crawling isn't the goal - you want to go fast!
Getting to the crawl level (shipping the prototype) was important for us.
Here are our primary focus areas:
UX is going to be king for this product. We are not reinventing the wheel. We know what the market is looking for: quality candidates (if hiring) or a great job (if looking for full-time work). There are other companies/tools out there that 'help' get you there right now; they do a bad job of it.
So, the experience you get as a hiring manager/candidate is everything.
We are aiming to make it as simple/valuable as possible.
Nothing is worse than signing up for a new product or service and spending 20+ minutes filling out a profile immediately.
So we are doing everything in our power to automate this, perform 80% of the work for you, and let you complete the 20% to fine-tune.
Our search functionality is badass right out of the gate!
We have architected and will continue to ensure searchability, allowing you to find/highlight exactly what type of roles you're looking for.
And that's where things stand currently!
Remember, if you try to run right out of the gate, you'll fall and hurt yourself, or at the very least, pull a muscle!
As we continue to walk/run/fly:
So you start slow, and as you get more comfortable, increase speed/difficulty.
When building software:
Have the long-term vision in mind, and ALWAYS build towards that (automatically match up the best candidates with the best roles for them)
Everyone wins.
'Helping' is doing the things that need to be done without being told to do so.
There are layers here:
1 - being able to figure out what actually needs to be done
2 - being able to make an impact on whatever that thing is
A simple ‘how to help’ scenario can apply to preparing the Thanksgiving meal in the kitchen at home.
But it can also apply to your manager. Or the CEO. Or the VC. (an ongoing joke in the startup community)
Helping is seeing the future. And then executing. An example:
A more complicated business example:
Look at the lead flow:
Look at the product:
- Outlier or standard
- Collateral
- Price
- Support
And, of course, talk to your AEs (and the rest of the team) / get their input.
But often, the most important thing you can do for your team is impact all of the other inputs I highlighted before!
Going into a conversation with that level of breakdown is much more helpful.
Put yourself in a position to understand and address the root cause, then implement change!
Action item:
Instead of asking someone, 'How can I help':
Put yourself in the shoes of the person you are asking
Happy Thanksgiving.
Experience is Everything.
This is a topic I write/talk/preach about regularly: how you make someone feel will never be automated.
The 'Zorus Experience' was a huge reason we were successful at my last company.
The Zorus Experience was our mantra on everything customer-related.
Sometimes, technology doesn't work the way you want it to, especially when you're a new company. Sometimes, you can’t ship updates as quickly as you'd like. Building is tough; there are always delays. That stuff was outside of our control.
What's inside our control is the experience we can give people when they interact with us personally.
And that, when done right, makes all the difference.
Our primary areas of focus were always:
And it worked really well for us. People gave us many second and third chances.
More here: Why You Should Embrace Crisis.
That’s software, though! Especially early on. But that isn't the goal.
Our goal is to build an 11-star experience.
An 11-star experience is all-encompassing: it ranges from fantastic UI/UX to customer service to product quality to branding. Everything.
Brian Chesky put this together, and it’s what Airbnb models itself after:
You will win if you can curate this experience for your users.
Work in that direction.
Robert Cialdini’s book, INFLUENCE, is one of the most important books you can read in your life. Here is a breakdown of its most important highlights + personal notes.
RECIPROCATION
"We should try to repay, in kind, what another person has provided us"
The Free Sample
Strong Cultural pressure to reciprocate a gift, even an unwanted one, **but there is no such pressure to purchase an unwanted commercial product**
Unfair exchanges
Concessions
Perceptual Contrast Principle
Commitment and Consistency
Human beings have an (often subconscious) nearly obsessive desire to be consistent with what we have already done.
Horse Bettors at a race track
Beach Blanket story
Consistent Decision making
Automatic consistency oftentimes serves as a hiding place from imperfect realities
COMMITMENT IS THE KEY
If I can get you to make a commitment (that is, to take a stand, go on record), I will have set the stage for your automatic and ill-considered consistency with that earlier commitment.
Cold Calling Technique: "How are you doing today" (pg 51)
Start Small and Build
The Magic Act
Writing things down
The Inner Choice
Approach with children:
This is the only way to get people to buy into decision making long term - they need to BELIEVE themselves
Lowball method:
SOCIAL PROOF
We determine what is correct by finding out what other people think is correct
This one is so ingrained in the subconscious it’s silly:
and
The Social Proof Phenomenon:
The greater the number of people who find any idea correct, the more correct the idea will seem!
When we are unsure of ourselves, the situation is unclear, or uncertainty reigns
Remove Ambiguity when dealing with groups
Social Proof also strongest when we view others as similar to ourselves
LIKING
We most prefer to say yes to the requests of someone we know and like
This is why warm intros play so hugely in sales:
Halo Effects
Similarity
Compliments
Contact and Cooperation
Association
AUTHORITY - Follow An Expert
Most of us put an alarmingly high level of faith in the 'expert' point of view
We are literally trained from the second we're born that obedience to the proper authority is RIGHT and to the improper authority is WRONG
Connotation, not Content
Titles
Clothing
Interestingly, w/ Trump in the White House and all of the back and forth w/ COVID-19, a lot of authority rhetoric has since been brought to light!
SCARCITY - The Rule of the Few
"The way to love anything is to realize that it might be lost"
The scarcity principle simply states that opportunities seem more valuable to us when their availability is limited.
Limited Number tactic
This works with any sort of item:
Going from abundance to scarcity produces the most dramatic effects of the scarcity complex. It’s not even close.
Highlights our Competitive Nature as well
Stumbled on the napkin math investing breakdown for SaaS companies in 2023. (Shoutout to Luke Sophinos @ Linear)
Check it out:
This does a great job of explaining the industry expectations from Pre-Seed through Series B.
Some additional context (from my experience personally)
In order to secure funding:
1 - Team matters
2 - Momentum
For early investment purposes (Seed/Pre-Seed). After a SEED investment is locked in:
Emphasis on capital efficiency is at an all-time high. (understandably so, given macro environment)
= KEEP BURN LOW
Metrics and financial models only get you so far in early-stage investing with tech companies. You don’t have a ton of real data. Many of the projections are a bit of a shot in the dark.
However, what you can directly control is your burn rate. Your burn rate is simply the amount of $$ you spend on a monthly basis.
The lower your burn rate, the longer the runway you have. (aka - the less money you spend, the more time you have till your bank account goes to $0 :)
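The formula is as simple as it sounds. A quick sketch with invented numbers:

```python
# Napkin runway math (all numbers invented for illustration).
cash_in_bank = 1_500_000   # what you raised, $
monthly_burn = 75_000      # everything you spend per month, $

runway_months = cash_in_bank / monthly_burn
print(f"Runway: {runway_months:.0f} months")   # -> 20 months
# Cut burn to $50k/month and the same raise buys 30 months instead.
```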
Only spend money on technical folks until true PMF is actually achieved. This is why you stick with founder-led sales for as long as possible, until you're ready to really start to scale.
This is where Naval’s famous quote comes from:
Learn to Sell, Learn to Build.
If you can do both, you will be unstoppable.
In conclusion:
Early-stage companies need small teams with dynamic founders who can do the roles of multiple people simultaneously. This allows them to keep their expenses low while they are building the initial version of their product and getting the traction necessary to justify institutional investment.
They currently have:
They were founded 8 years ago, in 2015.
Their first round of funding was 500K at a 6.5M valuation cap and a 10% discount.
That means a 25K investment would have bought you 0.3846% of the company... which technically would be worth exactly $3,750,750 today.
Additionally, if you got in early, you would have gotten the right of first refusal for subsequent fundraising rounds at that discount %. So, if you kept putting in $$, you could easily have a much higher return %. (Hint: because Loom was killing it, you would have kept maxing out the money you could put in... your risk goes down with every fundraising round.)
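A quick check of the ownership math above. The exit value here is an assumption, back-solved from the $3,750,750 figure (and it lines up with Loom's reported ~$975M acquisition price):

```python
# Verifying the napkin math (exit value is assumed, back-solved from above).
investment = 25_000
valuation_cap = 6_500_000
exit_value = 975_195_000

ownership = investment / valuation_cap
print(f"Ownership: {ownership:.4%}")                    # -> 0.3846%
print(f"Worth today: ${ownership * exit_value:,.0f}")   # -> $3,750,750
```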
Here is an example from Greg Isenberg, who actually passed on the round!
Today, he's wishing he was a part of it!