Cost of Capital, Comparables, and 2-Stage Valuation

Now that we understand how to calculate a project’s NPV, the next step is deciding whether to accept or reject it. In general, projects with positive NPV should be accepted, while those with negative NPV should be rejected. However, the decision critically depends on the chosen discount rate, since the discount rate directly determines the NPV.

Bias Towards Risky Projects

As discussed, the WACC is typically an appropriate discount rate because it represents the weighted average required return demanded by the firm’s debt and equity holders. However, applying a single firm-wide WACC to all projects can lead to systematic errors. Specifically, low-risk projects may be rejected because they are discounted at too high a rate, while high-risk projects may be accepted because they are discounted at too low a rate. Over time, this bias can cause the firm’s overall risk profile to increase. Consider two projects:
Project A (Low Risk)
  • Lies to the left on the risk–return line (lower risk).
  • Its true cost of capital is below the firm’s WACC.
  • If the firm uses the single firm-wide WACC as its hurdle rate, it may incorrectly reject Project A because its expected return appears too low relative to WACC.
  • However, relative to its appropriate risk-adjusted discount rate, the project may actually create value.
Project B (High Risk)
  • Project B lies to the right (higher risk).
  • Its true cost of capital is above the firm’s WACC.
  • If the firm uses the single WACC, the project may appear attractive.
  • However, once properly adjusted for risk, the required return is higher, meaning the project may destroy value.

Why firms choose to use one cost of capital

In theory, every project should have its own discount rate because each project has its own risk profile. However, in practice, firms often use a single company-wide WACC. Why?

The main reason is incentives and practicality.

If managers want a project approved, they can make it look attractive in two ways:
  1. Overstate the cash flows, or
  2. Understate the risk (use a lower discount rate).

It is much easier for decision-makers to detect unrealistic cash flow projections than it is to challenge a chosen discount rate. Cash flows are concrete and can be questioned line by line. Discount rate adjustments, on the other hand, are harder to evaluate and easier to manipulate subtly.
Because of this, firms often prefer to:
  • Focus debates on projected cash flows
  • Use a consistent WACC across projects
  • Avoid arguments over subjective “risk adjustments”

Why this works

Because the WACC already blends the firm’s cost of equity and cost of debt, it already captures systematic (market) risk. Any project-specific, unsystematic risk should be reflected in the cash flow projections, not in the cost of capital.

If firms constantly adjust discount rates for every perceived risk, they risk:
  • Double-counting risk
  • Inflating required returns
  • Rejecting good projects
  • Encouraging managers to game the system

How firms understate the risk of a project

When a firm undertakes a new project, it is effectively changing the firm’s overall risk profile. Conceptually, this is similar to creating a new “combined firm” made up of the existing business and the new project. In that case, the firm’s beta would become a value-weighted average of the existing firm’s beta and the project’s beta.
β_combined = (V_existing × β_existing + V_project × β_project) / (V_existing + V_project)
If the project has lower systematic risk than the existing firm, the combined beta will fall. If the project has higher systematic risk, the combined beta will rise. The change in beta affects the firm’s cost of equity — and therefore its WACC — because the cost of capital depends on systematic risk. This blended firm-wide rate can artificially lower or raise the cost of capital. Using the firm’s lower WACC to evaluate a higher-risk project is a textbook example of understating risk compared with evaluating the project in isolation.
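As a minimal sketch of the value-weighted blending described above (the firm and project values and betas below are assumed for illustration, not taken from the text):

# Python sketch: blending a project's beta into the firm's beta.
# All inputs are assumed, illustrative values.
v_firm, beta_firm = 1000.0, 0.9   # existing firm: value ($M) and beta
v_proj, beta_proj = 200.0, 1.6    # higher-risk project: value ($M) and beta

# Combined beta is the value-weighted average of the two betas
beta_combined = (v_firm * beta_firm + v_proj * beta_proj) / (v_firm + v_proj)
print(round(beta_combined, 3))    # ~1.017, far below the project's true 1.6

Discounting the project at a rate built from the blended (or pre-project) beta, rather than its own 1.6, is precisely the understatement of risk described above.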

Other Methods for Setting The Hurdle Rate

Firms often default to using a single corporate WACC to evaluate all projects. This approach can work reasonably well if the firm’s divisions have similar betas and comparable risk profiles. For example, if all business units lie along roughly the same risk-return tradeoff line, then applying one cost of capital may be a fair approximation. However, this assumption breaks down in diversified firms where divisions face meaningfully different levels of systematic risk.

Divisional WACCs
For firms with multiple lines of business, a better approximation is to use a divisional WACC. Each division may operate in a different industry, have different operating leverage, and maintain a distinct capital structure. As a result, each division may warrant its own cost of capital. Using a single firm-wide WACC in this setting can lead to systematic misallocation of capital: low-risk divisions may be underfunded because they are discounted at too high a rate, while high-risk divisions may be overfunded because they are discounted at too low a rate. Assigning divisional WACCs moves the firm closer to properly matching discount rates to risk.
Project Finance
An even more precise method is project finance. In project finance, the investment is legally ring-fenced into its own entity, and the debt used to finance it is non-recourse to the parent company. This means lenders can only claim the project’s assets, not the broader firm’s assets. Structurally, this isolates the project’s risk and allows it to have its own capital structure and cost of capital. Because lenders determine how much debt they are willing to provide, the project’s debt capacity becomes a market-based signal of its risk.

Debt capacity is especially informative. It represents the amount of borrowing a project can sustain without materially increasing the probability of default. Lower-risk projects can support higher leverage, while riskier projects can support less. For instance, if one project can borrow 60% of its value and another can borrow only 20%, the former is likely viewed as less risky. Given a lower cost of debt relative to equity, a higher debt capacity naturally lowers the project’s WACC, meaning it should require a lower expected return. In contrast, a project with limited debt capacity will have a higher WACC and must generate a higher return to justify investment.
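A hedged sketch of that effect using the standard after-tax WACC formula; the rates and tax rate below are assumptions, not figures from the text:

# Python sketch: after-tax WACC for two projects with different debt capacities.
def wacc(debt_frac, cost_debt, cost_equity, tax_rate):
    # WACC = (E/V) * r_e + (D/V) * r_d * (1 - tax)
    return (1 - debt_frac) * cost_equity + debt_frac * cost_debt * (1 - tax_rate)

low_risk  = wacc(debt_frac=0.60, cost_debt=0.05, cost_equity=0.10, tax_rate=0.25)
high_risk = wacc(debt_frac=0.20, cost_debt=0.07, cost_equity=0.14, tax_rate=0.25)
print(f"{low_risk:.2%} vs {high_risk:.2%}")   # 6.25% vs 12.25%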

Ultimately, all of these approaches attempt to achieve the same goal: aligning the discount rate with the project’s systematic risk. A single WACC is a rough shortcut that works only when risks are similar. Divisional WACCs provide a better approximation for diversified firms. Project finance offers the most precise method by allowing the market to reveal the project’s risk directly. The key principle throughout is that capital budgeting decisions should reflect true economic risk rather than managerial discretion.

Coverage Ratio & Credit Rating

Since we know from above that higher debt capacity naturally lowers the project WACC, we can look at a more data-driven way to estimate how much debt a firm can safely take on. Aswath Damodaran, a well-known valuation professor at NYU Stern, compiles extensive data linking financial ratios to credit ratings. One particularly useful measure is the interest coverage ratio:
Interest Coverage Ratio = EBIT / Interest Expense
A higher interest coverage ratio indicates greater ability to meet debt obligations and therefore lower default risk. For example, highly rated firms (such as AAA) tend to have coverage ratios above 8, meaning they generate more than eight dollars of EBIT for every dollar of interest expense. As ratings decline, coverage ratios fall accordingly.
This relationship helps formalize the concept of debt capacity. Recall that debt capacity is the amount a firm can borrow without materially increasing its probability of default. Since credit ratings correspond to specific ranges of interest coverage ratios, a firm can estimate how much additional interest expense it can take on before it risks being downgraded. Suppose a firm currently has a coverage ratio of 3.5 and sits comfortably within an A-rated range of 3 to 4.25. By determining the lowest acceptable coverage ratio within that rating band, the firm can back out the maximum interest expense it can support without triggering a downgrade. From there, it can calculate how much additional debt it can issue. This provides a practical way to size debt before approaching lenders.
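The back-out can be sketched in a few lines; the EBIT and borrowing rate below are assumptions, chosen only so the starting coverage ratio matches the 3.5 in the example:

# Python sketch: sizing additional debt from a coverage-ratio floor.
ebit             = 35.0    # $M, assumed
current_interest = 10.0    # $M, assumed -> current coverage = 3.5
min_coverage     = 3.0     # bottom of the A-rated band (3.0 to 4.25)
new_debt_rate    = 0.05    # assumed pre-tax cost of incremental debt

max_interest = ebit / min_coverage              # most interest the band tolerates
headroom     = max_interest - current_interest  # additional interest capacity
extra_debt   = headroom / new_debt_rate         # interest headroom -> principal
print(round(max_interest, 2), round(headroom, 2), round(extra_debt, 1))
# 11.67, 1.67, 33.3 -> roughly $33M of new debt before risking a downgrade
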
This concept is closely related to what is often called “debt sizing.” In many financial models, we simplistically assume debt behaves like a fixed mortgage. In reality, debt levels and interest payments are often structured around projected cash flows and covenant constraints. Debt covenants frequently include minimum coverage ratio requirements, such as maintaining an EBITDA-to-interest ratio above a certain threshold. If a firm breaches that covenant, lenders may renegotiate terms or restrict further borrowing. Therefore, understanding coverage ratios is not just about estimating ratings; it is central to determining sustainable leverage.

Damodaran’s data also show how credit ratings map into credit spreads — the additional yield over the risk-free rate that firms must pay on their debt. Lower-rated firms face higher spreads, reflecting greater default risk. When markets are functioning efficiently, these spreads, coverage ratios, and ratings should align consistently. Firms with stronger coverage ratios should receive higher ratings and pay lower spreads, while weaker firms should face higher borrowing costs. All of this reinforces the earlier discussion: by examining debt capacity and market-implied borrowing terms, firms can more accurately determine a project’s appropriate cost of capital rather than relying solely on managerial judgment.

Comparables (Comps)

Consider a simple thought experiment:
Imagine asking a group of students to guess someone’s weight. At first, it seems impossible, but once they start asking the right questions—height, body type, clothing size—they begin narrowing the estimate. The process improves as more relevant information is incorporated. Valuation works the same way. It is difficult at first, but once we identify the variables that truly matter, we can make informed estimates.

In valuation, the two most important variables are risk and growth. These are the same drivers we saw in the Gordon Growth framework: higher expected growth (g) increases value, and lower risk (a lower discount rate r) increases value.
V_0 = CF_1 / (r − g)
When we use comparable valuation methods, we are implicitly matching firms based on these same drivers—risk and growth—even if we do not explicitly calculate discount rates. The idea is to find assets that are similar in terms of systematic risk (beta) and growth potential, and then use a relevant valuation metric to anchor our estimate. The metric itself depends on what we are valuing. For residential real estate, we often use price per square foot because size is strongly correlated with value. In commercial real estate, we might use price relative to net operating income (NOI). In corporate valuation, we use multiples such as EV/EBITDA when valuing the entire firm or P/E ratios when valuing equity. The placement of the metric along the income statement matters: EBITDA is closer to enterprise value because it reflects returns available to both debt and equity holders, while earnings (net income) relate directly to equity value.

The mechanics of comparable valuation are straightforward:
  1. Identify similar assets or transactions
  2. Calculate a valuation metric (such as dollars per square foot or a multiple)
  3. Estimate an average or median value
  4. Adjust for differences

The art lies in the adjustments. Just as homes may command premiums for features like location, lot size, or amenities, firms may command premiums for superior growth prospects, stronger margins, or lower risk. These adjustments must be made carefully and transparently. Changing the multiple or changing the underlying metric (such as projected cash flows or earnings) can both raise the valuation, but the analyst must be consistent and thoughtful in deciding which adjustment is economically justified.
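A minimal sketch of steps 1 through 4 with an invented comp set; the premium applied in step 4 is exactly the judgment call described above:

# Python sketch: the mechanics of a comps-based estimate.
from statistics import mean, median

comp_multiples = [9.8, 10.2, 10.5, 11.1, 12.4]  # steps 1-2: peer EV/EBITDA (assumed)

avg = mean(comp_multiples)      # step 3: average of the comp set
med = median(comp_multiples)    # median is less sensitive to outlier comps

growth_premium   = 0.5          # step 4: adjustment, must be economically defended
applied_multiple = med + growth_premium
print(round(avg, 2), med, applied_multiple)   # 10.8, 10.5, 11.0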

Valuation using Comps (Using EBITDA Multiples)

Suppose someone tells you they bought a 2020 Honda Civic for $52,000.

At first, that sounds wildly overpriced. But then you learn there was $42,000 in cash sitting in the trunk. Suddenly the transaction makes sense—the car itself effectively cost $10,000.

This is exactly how enterprise value works. The total purchase price includes both the operating asset and any excess cash. To determine the value of the operating business (the “car”), you subtract cash from the total price. In corporate terms, enterprise value equals equity value plus debt minus cash, or equivalently, the value of the firm’s operating assets independent of how they are financed.

This framework becomes powerful when using comparable valuation. Suppose we want to value a private firm like Helix. If comparable public firms trade at an average EV/EBITDA multiple of 10.5x, and Helix generates $10 million in EBITDA, then its enterprise value would be approximately $105 million. From there, we subtract net debt (debt minus cash) to arrive at equity value. The arithmetic is simple. The hard part is not multiplying; it is selecting appropriate comparables. There is no magic number of comps to use—the correct set depends on how closely the firms match in terms of growth prospects and risk.
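In code, using the Helix figures from the text plus an assumed debt and cash position (the text does not specify Helix’s net debt):

# Python sketch: EV/EBITDA comp valuation and the bridge to equity value.
ev_ebitda = 10.5                 # average multiple of the comparable firms
ebitda    = 10.0                 # Helix EBITDA, $M

enterprise_value = ev_ebitda * ebitda            # = 105.0 ($M)

debt, cash   = 30.0, 5.0                         # assumed, for illustration
equity_value = enterprise_value - (debt - cash)  # subtract net debt
print(enterprise_value, equity_value)            # 105.0, 80.0
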
An important insight emerges when we compare multiples across firms. If one firm trades at 11.6x EBITDA while another trades at 8.8x, the market is implicitly signaling something about differences in expected growth, risk, or both. Recall from the Gordon Growth Model that value increases when growth is higher and when risk is lower. Multiples embed these same forces. A higher multiple suggests either stronger growth expectations, lower perceived risk, or some combination of the two. It does not automatically mean the firm is “overvalued”; it reflects how the market is pricing its fundamentals relative to peers.

This also creates strategic considerations when selecting comps. If our own firm trades at a lower multiple because the market perceives weaker growth or higher risk, and we are acquiring a company precisely because it has stronger growth prospects, including ourselves in the comp set may undervalue the target. In that case, it may be more appropriate to exclude ourselves and use peers that better reflect the target’s characteristics. The mechanics of averaging multiples are trivial; the art lies in choosing and defending the comparison group.

EBITDA multiples are particularly common because EBITDA attempts to isolate operating performance by removing discretionary or financing-related elements such as capital structure, tax strategies, and certain accounting choices. Unlike free cash flow, which is heavily influenced by capital expenditures, working capital management, and tax decisions, EBITDA provides a cleaner—though imperfect—measure of operating productivity. That does not mean EBITDA is superior or that discounted cash flow analysis should be ignored. Rather, multiples offer a market-based shortcut when forecasting long-term cash flows is difficult or uncertain.

This is especially evident in industries like professional sports teams, movies, or other unique assets where forecasting cash flows is highly speculative. In those cases, bankers often rely almost entirely on comparables—recent transactions, revenue multiples, or adjusted EBITDA measures—to establish a valuation range. The goal is not to claim precision but to anchor value to observable market transactions and then adjust for unique features. As with real estate, the adjustments are where the judgment lies. The math is easy; selecting the comps and defending the premiums is the real work.

It is crucial to remember that historical financial metrics must often be normalized. One-time events—such as strikes, regulatory penalties, unusual exploration expenses, or pandemic disruptions—can distort EBITDA and therefore distort multiples. Just as discounted cash flow requires thoughtful forecasting, comparable valuation requires careful adjustment of the base metric to reflect sustainable performance. Ultimately, whether using DCF or multiples, the underlying logic remains consistent: valuation reflects expectations about risk and growth, and the analyst’s job is to interpret how those forces are embedded in prices.

Valuation using Comps (Using P/E Ratio)

We shift from enterprise value multiples to price-based multiples. Instead of using EV/EBITDA, we use the price-to-earnings (P/E) ratio. The logic is straightforward: we take the share price of a comparable firm and divide it by its earnings per share to obtain its P/E multiple. Then we apply that multiple to the earnings per share of the firm (or division) being valued. Conceptually, we are asking: how many dollars does the market pay per dollar of earnings for similar firms? If the earnings per share of the target firm were identical to the comp firm, its estimated share price would simply equal the comp’s price. In practice, earnings differ, so we scale accordingly.
Implied Share Price = (P/E)_comps × EPS_target
For example, suppose we are valuing Exxon’s chemical division using publicly traded chemical companies as comparables. If the average P/E ratio across the peer group is 14.28 and Exxon’s chemical division generates earnings of $3.4 billion (or $3.4 per share in a simplified example), then the implied equity value is simply 14.28 times 3.4. The arithmetic is trivial. As with other multiples, the difficulty lies not in the multiplication but in selecting appropriate comps and interpreting what the multiple represents.
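The same arithmetic in code, using the per-share simplification from the example:

# Python sketch: applying a peer P/E multiple to target earnings per share.
peer_pe = 14.28        # average P/E of the chemical comps
eps     = 3.4          # division earnings per share (simplified)

implied_price = peer_pe * eps
print(round(implied_price, 2))   # ~48.55 per share
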
Differences in P/E ratios reflect differences in expected growth and risk, just as we saw with EV/EBITDA multiples and the Gordon Growth framework. A firm with low expected growth or high perceived risk will trade at a lower multiple. For instance, if a chemical company tied to declining industries—such as legacy film production—trades at a P/E of 8.99, that likely reflects the market’s expectation of weak future growth or elevated risk. In contrast, a company trading at a P/E of 24 may be viewed as having strong growth prospects, lower risk, or both. These multiples are not arbitrary; they embed the same risk–growth tradeoff we have been discussing throughout.

This is where judgment and narrative come into play. If you are trying to justify a lower valuation, you will emphasize comparables with lower multiples and argue similarity to slower-growing or riskier peers. If you are advocating for a higher valuation, you will highlight high-multiple firms and stress similarities in growth potential or strategic positioning. The selection of comps significantly influences the valuation outcome. The math is easy, but the story—and the defensibility of the chosen comparison set—is what ultimately drives the credibility of the estimate.

Comparables vs. DCF

When selecting comparables, the goal is always to match on the fundamentals that drive value: risk and growth. We are not simply looking for firms in the same industry label; we are looking for assets with similar cash flow patterns, stability, and growth prospects. Perfect comparables do not exist. We will never find two firms with identical risk profiles and identical growth trajectories, so the task is to get as close as possible. The underlying theoretical foundation here is the Law of One Price: if two assets generate identical cash flows at identical times with identical risk, they should have the same value. In theory, whether we use a discounted cash flow model or a comparable multiple approach, we should arrive at the same valuation. In practice, they will not be exactly equal, but they should be directionally consistent.

This creates a useful feedback loop between methods. Suppose the industry average EV/EBITDA multiple is 10x, but we believe our firm has superior growth prospects. That suggests we should apply a higher multiple than 10. If we instead build a DCF model and the industry’s long-term growth assumption is 3%, consistency would require us to use a growth rate above 3% in our projections. The two approaches are simply different ways of expressing the same underlying beliefs about risk and growth. We can move back and forth between them to check whether our story holds together. If the multiple implies a certain growth rate, does our DCF reflect that? If our DCF assumes higher growth, does the multiple adjustment make sense?
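One way to make that feedback loop concrete: under a Gordon Growth view of the terminal business, a multiple pins down an implied growth rate once the discount rate and cash-flow conversion are fixed. Every input below is an assumption for illustration:

# Python sketch: growth rate implied by an EV/EBITDA multiple.
# If EV = FCF1 / (r - g), then EV/EBITDA = (FCF1/EBITDA) / (r - g),
# so g = r - (FCF1/EBITDA) / (EV/EBITDA).
r              = 0.09    # assumed cost of capital
ev_ebitda      = 10.0    # the industry-average multiple from the text
fcf_per_ebitda = 0.55    # assumed share of EBITDA surviving to free cash flow

implied_g = r - fcf_per_ebitda / ev_ebitda
print(f"{implied_g:.1%}")   # 3.5%: a DCF assuming far higher growth is inconsistent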

Multiples are especially useful when the underlying business is stable and predictable. Assets such as pipelines or refineries, where cash flows are steady and capital expenditures are relatively stable, tend to work well with EBITDA multiples. In those cases, the story can be communicated cleanly: for example, “Pipelines trade around 7x EBITDA; we are acquiring this one at 6x, so it appears attractive.” That narrative is simple and intuitive. However, using multiples does not mean abandoning DCF analysis. Instead, the DCF helps refine and validate the multiple-based story.

Ultimately, both approaches should be used together. Comparable valuation provides a market anchor, while discounted cash flow analysis forces explicit assumptions about growth, risk, and reinvestment. The key is internal consistency: whatever story we tell using multiples should align with the assumptions embedded in the DCF.

The storytelling aspect of valuations

When we move into a tool like Capital IQ and pull up quick comparables for a company like ExxonMobil, what we are really seeing is how the market is currently pricing that firm relative to its peers. For example, if Exxon is trading at roughly 9x forward EV/EBITDA and the peer range is roughly 4.5x to 9.2x, that tells us Exxon sits near the top of the valuation range. That fact alone is neither good nor bad—it is simply information. A higher multiple could mean the company is overpriced, or it could mean the market believes Exxon deserves a premium because of stronger growth prospects, lower perceived risk, or superior assets. The multiple embeds expectations; our job is to interpret them.
Looking across metrics can reveal additional insights. For instance, if a company trades at a high P/E ratio but not a correspondingly high EV/EBITDA multiple, that suggests something is happening between EBITDA and net income—perhaps higher depreciation, interest expense, or other costs affecting earnings. Revenue multiples may tell yet another story. None of these numbers are “right” or “wrong” on their own; they reflect how the market is currently processing information. The question is whether the market’s reaction is justified.
This is where valuation shifts from arithmetic to storytelling. If Exxon trades near the top of its peer range, we must ask whether that premium is warranted. Is the company genuinely executing better, generating higher-quality cash flows, or positioned for stronger future growth? Or is the stock simply trading rich relative to fundamentals? To answer that, we turn to the company’s investor presentations and public communications. These decks are carefully constructed narratives designed to justify the valuation. They emphasize advantaged assets, future growth opportunities, technological advantages, large addressable markets (TAMs), and strategic positioning. Management’s goal is clear: persuade investors that the firm deserves its premium.
An equity research analyst’s role is to bridge the gap between price and story. The market has assigned Exxon a multiple; management provides a forward-looking narrative; the analyst must determine whether the story supports the valuation. Are the growth projections credible? Are the risks understated? Does the premium multiple align with realistic expectations about cash flows and risk? Ultimately, comparable analysis provides the market’s verdict, while the company’s narrative attempts to justify it. Our task is to evaluate whether the two are consistent.

How long is the ideal Forecasting Window?

When we move from comparables back into a discounted cash flow framework, the natural question becomes: how many years should we explicitly forecast? ExxonMobil, or any large firm, could theoretically exist indefinitely. Since we cannot forecast forever with precision, we divide valuation into two parts. First, we carefully forecast free cash flows over a finite planning period, denoted as capital T. Second, at the end of that period, we estimate a continuation (terminal) value that captures everything beyond year T. That terminal value can be calculated either using a Gordon Growth perpetuity or by applying an EBITDA multiple. In both cases, what we are really doing is estimating the market value of the firm at time T and discounting it back to today.
The key assumption in the perpetuity approach is the long-run growth rate g. This growth rate must be economically reasonable. Over the very long run, no company can grow faster than the overall economy indefinitely. If nominal GDP grows around 3%, then assuming perpetual growth of 4% or 5% would imply that the company eventually becomes larger than the entire economy. Therefore, terminal growth rates must be conservative and reflect stable, mature conditions. Both the perpetuity model and the EBITDA multiple approach work best when the firm has settled into a low-growth, stable phase. The planning period should extend only until that stable state is reached. For a mature company like Kellogg, that may occur quickly. For a high-growth firm like Google, it may take a decade or more. The choice of planning period should reflect economic reality, not a mechanical rule such as “always use five years.”
It is important to recognize how dominant the terminal value is in most DCF analyses. Even if we carefully forecast five or six years of free cash flows, a large majority of the estimated enterprise value often comes from the terminal calculation. This is not a mistake; it is simply arithmetic. The terminal value represents all cash flows beyond the forecast horizon, so it must be large by construction. However, this also means that small changes in terminal growth assumptions or exit multiples can dramatically alter valuation. As a result, managers selling a project often focus attention on optimistic terminal assumptions—such as multiple expansion or sustained high growth—because those assumptions drive a disproportionate share of value.
Consider a simple acquisition example. Suppose we pay $100 million for a target valued at 6x EBITDA, financed partly with non-recourse debt. We forecast free cash flows for six years and discount them at the firm’s cost of capital. The present value of the explicit forecast period may be relatively modest, while the discounted terminal value accounts for the bulk of the total enterprise value. If the combined present value equals $100.04 million, the project technically has a positive NPV—but only by a trivial margin. In practice, that thin cushion would make decision-makers uncomfortable, because terminal assumptions are inherently uncertain. A deal that barely clears the cost of capital may not provide sufficient protection against forecasting error.
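A minimal two-stage sketch of that structure (the cash flows, growth rate, and exit-year EBITDA below are assumptions; the 8.8% rate matches the discount rate used later in these notes):

# Python sketch: explicit forecast period plus terminal value, two conventions.
r    = 0.088                                   # assumed cost of capital
fcfs = [6.0, 6.5, 7.0, 7.5, 8.0, 8.5]          # assumed FCFs, years 1-6 ($M)

pv_explicit = sum(cf / (1 + r) ** t for t, cf in enumerate(fcfs, start=1))

g         = 0.02                               # assumed perpetual growth
tv_gordon = fcfs[-1] * (1 + g) / (r - g)       # Gordon Growth perpetuity at year 6
tv_exit   = 6.0 * 18.0                         # exit multiple x year-6 EBITDA (assumed)

for label, tv in (("gordon", tv_gordon), ("exit multiple", tv_exit)):
    ev = pv_explicit + tv / (1 + r) ** len(fcfs)
    print(label, round(ev, 1))   # in both cases the terminal PV dominates the EV
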
Behind the simplified cash flow figures lie detailed operating forecasts—revenue growth, margins, taxes, depreciation, capital expenditures, and working capital investments. Free cash flow is not a plug number; it is built from careful projections of operating performance. The terminal value is layered on top of that foundation. Thus, while DCF models can appear mechanically straightforward, the substance lies in the assumptions about growth, reinvestment, and risk. Ultimately, the purpose of the model is not to produce the highest possible valuation, but to estimate a defensible intrinsic value that can guide buy, sell, or hold decisions.

Sensitivity Analysis

The yellow cells in the spreadsheet represent Scenario 1 free cash flows, while the numbers above them reflect Scenario 2 firm free cash flows. These cash flows are not arbitrary; they are built from the full operating model: EBIT minus taxes (NOPAT), plus depreciation, minus capital expenditures, minus changes in working capital. While the final free cash flow numbers may look simple, they sit on top of detailed forecasts. The key distinction in our valuation framework is that instead of assuming the project simply ends and is liquidated at book value, we estimate a continuation or terminal value—either using a Gordon Growth perpetuity or a hybrid EBITDA multiple approach. But regardless of the terminal method, we still must do the heavy lifting of forecasting operating cash flows during the planning period.
Under the base case, the deal barely works. The present value of the explicit planning period cash flows is modest, and most of the value comes from the terminal calculation. If we are paying $100 million for something worth only marginally more than $100 million, the cushion is thin. Even though the NPV is technically positive, it is small relative to the uncertainty embedded in the assumptions. That alone would make many decision-makers uncomfortable.
Now consider a growth strategy. Suppose instead of maintaining the status quo, we invest heavily—$3 million per year in advertising for three years and additional capital expenditures for five years. The logic is straightforward: advertising drives growth, and growth requires incremental capital investment. However, this strategy increases operating leverage. By committing to higher fixed costs, the firm becomes more sensitive to economic conditions. If the economy booms, the firm benefits disproportionately. If the economy weakens, the downside is amplified because the fixed investments cannot easily be reversed. This increased sensitivity suggests higher systematic risk and therefore a higher cost of capital. Using the same 8.8% discount rate may understate the risk of the growth strategy.
Under the optimistic assumptions—higher growth during the planning period and strong terminal performance—the valuation increases significantly. The terminal value becomes much larger because the final-year EBITDA and free cash flow are substantially higher. In this scenario, spending $100 million to generate an enterprise value of $132–136 million appears attractive, with an IRR well above the cost of capital. On the surface, it looks like a strong positive-NPV project.
However, this is where sensitivity analysis becomes critical. Rather than accepting the optimistic base case, we ask: what must be true for this deal to be good? If slightly lower growth, a modestly higher cost of capital, or a lower terminal multiple reduces the valuation to $100 million—or even below—it reveals how fragile the conclusion may be. By adjusting short-term growth from 8% to 6%, increasing the discount rate by a couple of percentage points, or trimming the terminal multiple, the NPV can quickly evaporate. This exercise is not about finding the highest possible valuation; it is about stress-testing the assumptions.
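A sketch of that stress test: re-run a toy version of the model over a small grid of growth, discount-rate, and exit-multiple assumptions (the model and every input are invented stand-ins for the full spreadsheet):

# Python sketch: sensitivity grid for a multiple-exit valuation.
def value(growth, r, exit_multiple, ebitda0=15.0, years=6):
    pv, ebitda = 0.0, ebitda0
    for t in range(1, years + 1):
        ebitda *= 1 + growth                   # grow EBITDA each year
        pv += 0.55 * ebitda / (1 + r) ** t     # assumed 55% FCF conversion
    pv += exit_multiple * ebitda / (1 + r) ** years   # terminal value at exit
    return pv

for growth in (0.06, 0.08):
    for r in (0.088, 0.108):
        for m in (5.0, 6.0):
            print(f"g={growth:.0%} r={r:.1%} exit={m}x -> {value(growth, r, m):.1f}")
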
A good analyst does not stop at identifying a positive NPV. Instead, they ask how easily that NPV can disappear. Likewise, for a negative NPV project, it is useful to ask what assumptions would be required to make it attractive. Sensitivity analysis helps us understand which variables truly drive value and how confident we must be in our projections. Ultimately, valuation is less about a single point estimate and more about understanding the range of outcomes implied by plausible changes in growth, risk, and terminal assumptions.

How Ring Fencing allows us to calculate a project-specific discount rate

Suppose we have a company worth $1.2 billion, financed with $600 million in debt and $600 million in equity. Now the firm is considering investing $200 million in a new project. Conceptually, we only want to invest in projects with positive NPV—meaning the project must be worth more than its $200 million cost. That part is simple.
The project can be financed in two ways. On-balance sheet, we could borrow 80% of the $200 million (i.e., $160 million in debt) and contribute $40 million in equity. That would increase the firm’s total assets and capital structure proportionally. Alternatively, we could create a separate LLC and finance the project off-balance sheet using non-recourse debt. In that case, the new entity would hold the $200 million asset, financed with $160 million in debt and $40 million in equity, and the parent company’s only exposure would be its $40 million equity contribution. This “ring-fencing” is powerful because it isolates the project and allows us to determine its own cost of capital.
Method 1: Financing on Balance Sheet
Method 2: Ring Fencing
Here is where things get interesting. To calculate WACC, we need the weight on debt, which is D/V, where V is the market value of the project—not its cost. But we are trying to determine the value of the project in the first place. This creates a circular problem: to compute WACC, we need the value; to compute the value, we need WACC. Finance becomes somewhat philosophical here because value and discount rate are jointly determined in equilibrium.
To break the circle, we use iteration. Start by focusing on the equity side. Suppose the project generates $6 million annually to equity holders. To determine the cost of equity, we begin with an asset (unlevered) beta. Assume we estimate an unlevered beta of 0.447 based on comparable utility-like power generation assets. These are stable businesses with large debt capacity, which explains why their levered betas are not extremely high despite substantial leverage.
Next, we lever the asset beta using a debt-to-equity ratio. Initially, we might use the book ratio of $160 million of debt to $40 million of equity, implying D/E = 4. Using that leverage and assuming a debt beta (say 0.3), we compute a levered beta, plug it into CAPM, and estimate a cost of equity of about 10.2%. Treating the $6 million as a perpetuity, we estimate equity value as $6 million divided by 10.2%, or about $58 million.
But here’s the catch: $40 million was just what we paid for the equity; $58 million is our estimate of what it is worth. So we update the D/E ratio using $160 million of debt and $58 million of equity. That lowers leverage, which lowers the levered beta, which lowers the cost of equity, which increases the equity value. Repeating this “wash, rinse, repeat” process allows the numbers to converge. Eventually, the equity value stabilizes around $66 million, implying a total project value of roughly $226 million. Since we paid $200 million, the project has positive NPV.
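A sketch of the iteration. The $160 million of debt, $6 million equity perpetuity, 0.447 unlevered beta, and 0.3 debt beta come from the text; the risk-free rate and market premium are assumptions, chosen so that the first pass reproduces the ~10.2% cost of equity and ~$58 million equity value, though full convergence then settles somewhat above the lecture’s ~$66 million:

# Python sketch: iterating to a self-consistent equity value and WACC.
rf, mrp        = 0.03, 0.07     # assumed CAPM inputs
beta_u, beta_d = 0.447, 0.3     # unlevered (asset) beta and debt beta
debt, cf_eq    = 160.0, 6.0     # debt ($M) and annual cash flow to equity ($M)
equity         = 40.0           # starting guess: the equity actually paid in

for _ in range(100):                                       # wash, rinse, repeat
    beta_l = beta_u + (debt / equity) * (beta_u - beta_d)  # relever the asset beta
    r_e    = rf + beta_l * mrp                             # CAPM cost of equity
    new_equity = cf_eq / r_e                               # value the perpetuity
    if abs(new_equity - equity) < 1e-9:
        break
    equity = new_equity

print(round(equity, 1), round(debt + equity, 1))  # converged equity, total value
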
At equilibrium, we now have consistent market weights for debt and equity and therefore an internally consistent WACC—around 5.5% in this example. The ability to ring-fence the project is what makes this possible. By isolating it, we can determine how the asset itself covaries with the economy and derive its true cost of capital.
A key conceptual takeaway is that the unlevered (asset) beta never changes during iteration. The asset’s underlying business risk is constant. What changes is the capital structure, which affects the levered beta and the cost of equity. The iteration simply allows the market value of equity—and therefore the weights in WACC—to adjust until the system reaches equilibrium.
On an exam, you would not be expected to iterate manually through multiple rounds. But understanding why the circularity exists—and how iteration resolves it—is crucial. The deeper insight is that WACC depends on market values, not book values, and that capital structure and valuation are interdependent.

Choosing the "right" valuation

When we talk about valuation across industries, one of the first things to understand is that different industries use different metrics. The metric chosen reflects what is measurable, what is economically meaningful, and what investors in that industry are accustomed to using. In oil and gas, for example, valuation often revolves around physical reserves and production rather than EBITDA or earnings.

One common metric is enterprise value to proved reserves (EV / 1P reserves). Proved reserves represent hydrocarbons that have a high probability of economically successful extraction under current conditions. Importantly, “proved” depends on oil prices. If oil prices fall, fewer reserves are economically extractable. If prices rise, reserves expand. So even the denominator of these multiples is tied to commodity prices.

Another metric is EV per barrel of oil equivalent (BOE) of daily production, such as $50,000 to $150,000 per flowing barrel. This captures the value of existing production capacity. Then there is EV to PV10. PV10 refers to the present value of projected future cash flows from reserves discounted at 10%. The interesting historical detail is that 10% was originally chosen because it approximated the risk-free rate at the time the SEC rule was created. The SEC intended companies to control for time value of money only, not systematic risk. In other words, the instruction effectively implied using a beta of zero. However, the industry adopted “10%” as the convention itself, rather than tying it dynamically to the risk-free rate. Over time, PV10 became standard by habit rather than theory.
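PV10 itself is mechanical once reserve cash flows are projected; a sketch with an invented decline profile:

# Python sketch: PV10 = reserve cash flows discounted at the fixed 10% convention.
cash_flows = [12.0, 11.0, 10.0, 9.0, 8.0, 7.0]   # $M per year, assumed decline
pv10 = sum(cf / 1.10 ** t for t, cf in enumerate(cash_flows, start=1))
print(round(pv10, 1))   # ~42.6; EV / PV10 is then comparable across producers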

Oil and gas also distinguishes between 1P (proved), 2P (probable), and 3P (possible) reserves. Banks typically lend against 1P reserves because they are highly likely to produce. They generally do not lend against 2P or 3P reserves. However, an acquirer might pay for 2P or even 3P reserves because they resemble out-of-the-money call options — low probability, but potentially valuable upside. Embedded within these multiples is real options thinking, whether explicitly recognized or not.

Real estate uses yet another language. Instead of talking about EBITDA multiples, real estate investors talk about cap rates. The cap rate is simply:
Cap Rate = NOI_1 / Price

Rearranging:

Value = NOI_1 / Cap Rate

But economically, this is just a perpetuity:

Value = NOI_1 / (R − g)

So the cap rate is simply R − g.

If someone says a property trades at a “15 cap,” they mean the cap rate is 15%, which corresponds to a multiple of roughly 6.7x (since 1 / 0.15 ≈ 6.7). It’s just the reciprocal of a multiple. The language differs, but the math is identical.

Across industries, we see many valuation ratios:
  • Price-to-earnings (P/E)
  • PEG (P/E divided by growth)
  • Market-to-book
  • EV/EBITDA
  • EV/Revenue
  • Even EV/Sales when nothing else is positive

The higher up the income statement you go, the less precise the valuation. Sales are easier to measure but less directly tied to value than earnings or free cash flow. In startups, EBITDA and earnings may be negative, so analysts fall back on revenue multiples. During the dot-com era, companies were sometimes valued on metrics as loose as “dollars per eyeball” because website visits were measurable and correlated — loosely — with future monetization.

In early-stage companies, sometimes sales are the only anchor available. For example, in a medical technology startup, you might estimate total addressable market and potential penetration because current profits are nonexistent. The valuation becomes more narrative-driven. You are clinging to whatever measurable variable correlates most plausibly with future cash flows.
Investment bankers take this to another level. In a typical banker’s fairness opinion deck, they rarely rely on just one method. They show:
  • Historical trading multiples
  • Forward trading multiples
  • Transaction multiples
  • DCF valuation ranges
  • LBO valuation ranges
  • Sometimes sum-of-the-parts analysis

Each method gives a range. The final valuation is often a judgment call within overlapping ranges. The banker produces bars of possible values; senior management draws a red line through them and declares the recommended price. That red line reflects both analysis and negotiation strategy.
The deeper lesson is that multiples are not shortcuts that replace fundamentals. They embed assumptions about growth, risk, reinvestment, and competitive position. Whether you use EV/EBITDA, EV/PV10, cap rates, or P/E, you are always implicitly making assumptions about R and g. The metric changes, but the economic structure underneath does not.
In the end, valuation is about telling a coherent story that links cash flows, risk, growth, and market pricing. The math is often simple. The hard part is deciding which comparable, which multiple, and which narrative truly reflect economic reality.
