Evaluating Tech Companies for Acquisition: Key Factors and Strategies

In This Guide
Most first-time acquirers think evaluating a tech company is a fancier version of buying a house: check the “square footage” (revenue), look for cracks (bugs), and make sure the title is clean (IP). Then they meet reality: the company’s value is often locked in things that don’t show up cleanly in a spreadsheet—like whether customers would still care if two senior engineers left, or whether the “platform” is actually a set of scripts held together by institutional memory.
The counterintuitive part is this: the biggest acquisition risks are usually not the ones you can price precisely. They’re the ones you discover when you try to operate the thing. That’s why good acquisition evaluation is less about producing a perfect valuation model and more about building conviction on three load-bearing ideas:
- What exactly is being acquired—product, customers, capability, or time? If you can’t answer this crisply, diligence becomes a scavenger hunt.
- How durable is the company’s value under stress? Stress means churn, outages, key-person loss, regulatory scrutiny, or a platform shift.
- How expensive is it to integrate and run? Not “can we integrate,” but “what will it cost in time, headcount, and opportunity.”
This guide walks through how to evaluate tech companies for acquisition with those foundations in mind—using concrete checks you can run, questions you can ask, and failure modes you can anticipate before you sign anything you’ll later regret.
Start with the acquisition thesis (and make it falsifiable)
An acquisition thesis is not “we want to grow” or “we need AI.” It’s a specific claim about cause and effect: if we buy this company, we will be able to do X, because Y, and we expect Z measurable outcomes. The thesis is your filter; without it, every metric looks important and every risk looks existential.
A practical way to write it is:
- Objective: What changes in your business? (Example: reduce churn in mid-market accounts; enter regulated healthcare; cut infra cost per customer.)
- Mechanism: Why does this target enable that? (Example: their product fills a workflow gap; their distribution reaches buyers you can’t; their team has a capability you can’t hire fast enough.)
- Measurement: What would you measure 6–18 months after close? (Example: attach rate, retention lift, CAC payback, gross margin improvement, time-to-ship reduction.)
- Constraints: What must be true for it to work? (Example: must integrate SSO within 90 days; must retain top 10 accounts; must keep uptime above a threshold.)
Now make it falsifiable: list the top 3 assumptions that, if wrong, kill the deal. Then aim diligence at those assumptions first. This is where many teams invert the order—spending weeks on generic checklists while the deal’s core logic remains untested.
Example (step-by-step): Suppose you’re a B2B SaaS company acquiring a smaller SaaS tool because you believe it will increase retention.
- Your assumption is that customers churn because they don’t complete a workflow your product doesn’t cover.
- The target claims their tool completes that workflow and has strong adoption.
- Your diligence should prioritize: customer interviews focused on churn reasons, product telemetry proving adoption, and integration feasibility (SSO, data model, permissions).
- If you find churn is driven by pricing or missing compliance features instead, the “retention lift” thesis collapses—even if the target’s revenue is real.
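A thesis like this can be captured as a small data structure so the kill assumptions stay explicit and reviewable. A minimal sketch, assuming a Python-based deal workflow; all field and class names are illustrative, not from any standard tooling:

```python
from dataclasses import dataclass, field

@dataclass
class AcquisitionThesis:
    # Fields mirror the template above: objective, mechanism,
    # measurement, constraints. Names are illustrative.
    objective: str             # what changes in your business
    mechanism: str             # why this target enables that change
    measurements: list[str]    # what you'd measure 6-18 months post-close
    constraints: list[str]     # what must be true for the deal to work
    kill_assumptions: list[str] = field(default_factory=list)

    def is_falsifiable(self) -> bool:
        # A thesis is falsifiable only if it names assumptions that can
        # fail and outcomes that can be measured.
        return bool(self.kill_assumptions) and bool(self.measurements)

thesis = AcquisitionThesis(
    objective="Reduce churn in mid-market accounts",
    mechanism="Target's product completes a workflow ours doesn't cover",
    measurements=["retention lift", "attach rate"],
    constraints=["SSO integration within 90 days", "retain top 10 accounts"],
    kill_assumptions=["Churn is driven by the workflow gap, not pricing"],
)
print(thesis.is_falsifiable())  # True
```

The point of the structure is the review it forces: a thesis with an empty `kill_assumptions` list is a growth wish, not a testable claim.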
This is also where you decide what kind of acquisition you’re actually doing:
- Product acquisition: You want the product and roadmap.
- Customer acquisition: You want the accounts and distribution.
- Talent acquisition: You want the team and know-how.
- Defensive acquisition: You want to remove a competitor or secure a dependency.
- Capability acquisition: You want a platform component (security, data, infra) that accelerates your roadmap.
Each type changes what “good” looks like. A talent acquisition can tolerate a messy codebase; a product acquisition cannot. A customer acquisition lives or dies on retention and contract terms; a capability acquisition lives or dies on integration and operational maturity.
One more reality check: synergy is not a number you discover; it’s a number you earn. Treat synergy estimates as hypotheses with a plan, not as a plug in a spreadsheet.
Understand the product and technology—what’s real, what’s fragile
A tech company is not just a product demo. It’s a running system with constraints, tradeoffs, and operational habits. Your job in technical evaluation is to determine whether the system is repeatable and maintainable—or whether it’s a heroic effort that happens to work today.
Think of it like adopting a production service, not buying a code repository. The repository is the least interesting part.
Product reality: usage, stickiness, and “why now?”
Start with the simplest question: what job does the product do, and what happens if it disappears? If the answer is “customers would be annoyed,” you’re looking at a feature. If the answer is “their process breaks,” you’re looking at infrastructure in the customer’s workflow—and that’s usually more durable.
Concrete checks:
- Activation and time-to-value: How long from signup to meaningful use? If it’s weeks, ask what blocks adoption (data import, integrations, permissions).
- Usage concentration: Are a few power users driving most activity? That can be fine, but it changes churn risk.
- Feature dependency: Which features are actually used vs. showcased? Ask for product analytics, not anecdotes.
- Roadmap credibility: Are roadmap items driven by customer demand, or by internal aspiration? You can often tell by whether they can name the customer and the contract tied to the request.
A common turning point: buyers confuse “customers like it” with “customers depend on it.” Dependence shows up in renewal behavior, embedded workflows, and switching costs—not in NPS slides.
Architecture and codebase: maintainability beats elegance
You don’t need to love their stack. You need to know whether it’s operable by a normal team.
What to look for:
- System boundaries: Can someone draw the architecture in 10 minutes and explain data flows? If not, complexity is already winning.
- Data model sanity: Are there clear entities and ownership, or is everything a JSON blob with “we’ll clean it up later” energy?
- Testing and release discipline: Do they have CI, automated tests that matter, and a release process that doesn’t require a specific person to be awake?
- Dependency risk: Are they pinned to an old framework, a deprecated API, or a single cloud region? These are future costs disguised as present stability.
- Observability: Logs, metrics, tracing, and on-call practices. If outages are diagnosed by “SSH into prod and tail logs,” you’re buying operational debt.
Here’s the uncomfortable truth: a codebase can be “working” and still be unacquirable if it can’t be safely changed. The cost shows up after close, when your team tries to integrate identity, billing, or data pipelines and discovers every change is a regression lottery.
Analogy: acquiring a brittle system is like buying a classic car that starts reliably—until you try to replace the alternator and realize the wiring harness is custom. It’s not that it can’t be fixed; it’s that you’re now in the restoration business.
Security and compliance: don’t buy a future incident
Security diligence is not a checkbox. It’s about whether the company has habits that prevent predictable failures.
At minimum, you want clarity on:
- Identity and access: SSO support, MFA for internal tools, least-privilege access, offboarding process.
- Data handling: What data is stored, where, and how it’s encrypted at rest and in transit.
- Vulnerability management: Patch cadence, dependency scanning, incident response process.
- Compliance posture: If they claim SOC 2, ISO 27001, HIPAA, or similar, verify scope and reality. A report is not the same as operational maturity.
If you operate in regulated environments, the integration plan must include compliance alignment. Otherwise, you’ll “acquire” a product you can’t sell to your own customers.
For evolving security expectations and breach patterns, our ongoing coverage of security and reliability tracks how this changes week to week.
Financial quality: revenue is easy to count; durability is harder
Financial diligence is often treated as the “adult supervision” part of the deal. It matters—but for tech acquisitions, the more useful question is not “what is revenue,” but “how does revenue behave under pressure?”
Revenue composition and concentration
Start with a revenue map:
- By customer: top 10 customers as a percent of ARR or revenue.
- By segment: SMB vs. mid-market vs. enterprise.
- By product line: core product vs. add-ons vs. services.
- By geography and industry: especially if regulatory exposure varies.
Concentration isn’t automatically bad. It’s bad when it’s unacknowledged or when the top accounts have bespoke requirements that don’t scale.
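The top-10 concentration check is simple arithmetic you can run as soon as you have the customer-level ARR breakdown. A minimal sketch with hypothetical account names and figures:

```python
def top_n_concentration(arr_by_customer: dict[str, float], n: int = 10) -> float:
    """Share of total ARR held by the n largest customers (0.0 to 1.0)."""
    total = sum(arr_by_customer.values())
    if total == 0:
        return 0.0
    top = sorted(arr_by_customer.values(), reverse=True)[:n]
    return sum(top) / total

# Hypothetical book of business: three accounts dominate.
arr = {"acme": 400_000, "globex": 300_000, "initech": 200_000,
       "hooli": 50_000, "umbrella": 50_000}
print(f"{top_n_concentration(arr, n=3):.0%}")  # 90%
```

Run the same calculation per segment and per product line; concentration that is invisible at the company level often jumps out inside a single segment.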
Then look at contract mechanics:
- Renewal terms, auto-renew clauses, termination rights
- Usage-based pricing details and true-up behavior
- Most-favored-nation clauses (they can quietly cap your pricing power)
- Change-of-control provisions (some customers can exit on acquisition)
Unit economics: what it costs to deliver value
For SaaS, you care about:
- Gross margin quality: Not just the percentage, but what’s inside it. Are they excluding support costs? Are cloud costs rising faster than revenue?
- Retention: Gross and net revenue retention. Net retention can hide churn if expansion is concentrated in a few accounts.
- CAC and payback: Especially if growth is sales-led. If payback is long, you’re buying a treadmill.
For usage-based businesses, dig into:
- Cost per unit of usage (compute, storage, third-party APIs)
- Pricing power and elasticity (do customers optimize away usage?)
- Capacity planning and peak load behavior
A turning point for many acquirers: growth can be purchased; retention must be earned. If the target’s growth is mostly paid acquisition with weak retention, you’re not buying a compounding asset—you’re buying a marketing spend pattern.
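The retention and payback metrics above reduce to a few formulas worth computing side by side, because net retention alone can look healthy while gross retention reveals the churn underneath. A sketch with hypothetical cohort numbers:

```python
def gross_retention(start_arr: float, churn_and_contraction: float) -> float:
    """Gross revenue retention: what survives without counting expansion."""
    return (start_arr - churn_and_contraction) / start_arr

def net_retention(start_arr: float, churn_and_contraction: float,
                  expansion: float) -> float:
    """Net revenue retention: expansion can mask real churn."""
    return (start_arr - churn_and_contraction + expansion) / start_arr

def cac_payback_months(cac: float, monthly_gross_profit: float) -> float:
    """Months of gross profit needed to recover the cost of acquiring
    one customer. Long payback means you're buying a treadmill."""
    return cac / monthly_gross_profit

# Hypothetical cohort: 110% NRR looks fine; 85% GRR tells the real story.
print(round(gross_retention(1_000_000, 150_000), 2))          # 0.85
print(round(net_retention(1_000_000, 150_000, 250_000), 2))   # 1.1
print(round(cac_payback_months(12_000, 500), 1))              # 24.0
```

If expansion is concentrated in a few accounts, recompute NRR with the top expanders excluded; the gap between the two numbers is the risk you are pricing.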
Quality of earnings: normalize the story
“Adjusted EBITDA” is where optimism goes to hide. You don’t need to be cynical; you need to be specific.
Normalize for:
- One-time revenue (large services projects, non-recurring licenses)
- Founder compensation anomalies (too high or too low)
- Capitalized software costs (what’s being moved off the P&L)
- Deferred revenue and revenue recognition policies
If you’re acquiring for strategic reasons, you may accept lower margins. But you should still understand whether margins are low because they’re investing—or because the business model leaks.
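The normalization list above is arithmetic, and writing it down as such keeps the adjustments specific rather than cynical. A simplified sketch; the sign conventions and figures are illustrative, and a real quality-of-earnings analysis covers far more line items:

```python
def normalized_ebitda(reported_ebitda: float,
                      one_time_revenue: float,
                      founder_comp_adjustment: float,
                      capitalized_software: float) -> float:
    """Strip one-time items and restate costs moved off the P&L.

    founder_comp_adjustment: positive if founders were underpaid
    (add the market-rate cost back in); negative if overpaid.
    """
    return (reported_ebitda
            - one_time_revenue          # remove non-recurring projects/licenses
            - founder_comp_adjustment   # restate comp at market rate
            - capitalized_software)     # treat capitalized dev as an expense

# Hypothetical: $2.0M "adjusted" EBITDA shrinks once normalized.
print(normalized_ebitda(2_000_000, 400_000, 150_000, 300_000))  # 1150000
```

The output matters less than the conversation each argument forces: every adjustment should come with a document trail, not a narrative.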
For accounting standards that influence revenue recognition, ASC 606 is the baseline framework many tech companies follow [1]. You don’t need to become an accountant, but you do need to know whether revenue is recognized in a way that matches delivery.
People, IP, and legal: the “ownability” of what you’re buying
You can integrate systems. You can reprice products. You can even rewrite code. What you cannot do is retroactively fix ownership disputes, missing assignments, or a team that leaves because nobody asked what they wanted.
Team assessment: key-person risk is a technical metric
In tech acquisitions, key-person risk is often measurable:
- Bus factor: How many people need to disappear before you can’t ship or operate?
- Code ownership: Are there clear maintainers, or a single person who “knows the weird parts”?
- On-call reality: Who actually responds to incidents? If it’s always the CTO, that’s not heroism—it’s fragility.
- Hiring and leveling: Do they have a hiring process that produces consistent talent, or did they get lucky early?
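Bus factor in particular can be estimated from repository data rather than guessed. A deliberately simplified sketch: it maps each file to a single primary maintainer, where a real analysis would weight by commit and review activity; all names are hypothetical:

```python
from collections import Counter

def bus_factor(file_owners: dict[str, str], coverage: float = 0.5) -> int:
    """Smallest number of people who together own at least `coverage`
    of the files. A low number means losing few people blocks shipping."""
    counts = Counter(file_owners.values())
    total = len(file_owners)
    owned = 0
    for i, (_, n) in enumerate(counts.most_common(), start=1):
        owned += n
        if owned / total >= coverage:
            return i
    return len(counts)

# Hypothetical repo: one engineer owns most of the system.
owners = {"auth.py": "dana", "billing.py": "dana", "api.py": "dana",
          "infra.tf": "lee", "ui.tsx": "sam"}
print(bus_factor(owners))  # 1 (dana alone covers 60% of files)
```

A result of 1 or 2 does not kill a deal by itself, but it should show up directly in the retention plan and the integration timeline.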
Retention planning should be part of evaluation, not a post-close HR task. If the thesis depends on the team, you need:
- Clear roles post-close
- Incentives that match what they value (often autonomy and clarity more than cash)
- A plan to reduce operational load so they can build, not just keep the lights on
Dry observation: if the target’s best engineers are already exhausted, your integration plan is about to become their final project.
IP and licensing: verify you can legally operate and sell
IP diligence is where “we assumed” becomes expensive.
Verify:
- Invention assignment agreements for employees and contractors
- Open-source license compliance and obligations
- Third-party code and data rights (especially training data for ML systems)
- Patents and trademarks if relevant to defensibility
Open-source is not the enemy; unmanaged open-source is. A GPL-licensed component in a distributed product can impose obligations that change how you can ship. Even permissive licenses require attribution and notice handling.
If you need a structured approach, the Linux Foundation’s SPDX specification is a widely used standard for software bill of materials and license metadata [2]. The point isn’t paperwork—it’s knowing what you’re allowed to do after you own it.
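Once you have an SBOM, a first-pass license triage is mechanical. A minimal sketch over SPDX-style package entries; real SPDX documents carry much more metadata, the license set here is a deliberately small sample, and the package names are hypothetical:

```python
# License IDs commonly associated with distribution obligations.
# This set is illustrative, not a legal determination.
COPYLEFT = {"GPL-2.0-only", "GPL-3.0-only", "AGPL-3.0-only", "LGPL-3.0-only"}

def flag_copyleft(packages: list[dict]) -> list[str]:
    """Return package names whose declared license needs legal review."""
    return [p["name"] for p in packages
            if p.get("licenseDeclared") in COPYLEFT]

sbom = [  # minimal, hypothetical SBOM entries
    {"name": "left-pad-ish", "licenseDeclared": "MIT"},
    {"name": "crypto-core", "licenseDeclared": "GPL-3.0-only"},
    {"name": "ui-kit", "licenseDeclared": "Apache-2.0"},
]
print(flag_copyleft(sbom))  # ['crypto-core']
```

The triage output is a question list for counsel, not an answer: how the flagged component is linked and distributed determines what the obligations actually are.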
Legal and regulatory: contracts are part of the product
In enterprise software, the contract is effectively a feature set. Review:
- SLAs and penalties
- Data processing agreements and subprocessors
- Indemnities and liability caps
- Export controls and sanctions exposure (if applicable)
Also check whether the company’s privacy posture matches its claims. If they process personal data, frameworks like GDPR shape obligations around processing, breach notification, and data subject rights [3]. Even if you’re not EU-based, customers may demand GDPR-aligned practices.
Integration and execution risk: where good deals go to die
A deal can be “strategically sound” and still fail because integration was treated as a phase instead of a product. Integration is not just connecting APIs; it’s aligning systems, incentives, and operating rhythms.
Analogy: integration is like swapping an engine mid-flight. The plane can stay up, but only if you plan the sequence, isolate risk, and keep the control surfaces working.
Integration surface area: identity, data, billing, and support
Most tech integrations bottleneck in four places:
- Identity and access: SSO, RBAC mapping, user provisioning, audit logs.
- Data: schema alignment, migration strategy, data quality, retention policies.
- Billing: entitlements, invoicing, proration, tax handling, refunds.
- Support and operations: ticketing, escalation paths, incident response, uptime commitments.
If you’re acquiring a product to bundle into your platform, identity and billing are usually the first “real” integration points. They’re also where hidden complexity lives, because they touch every customer and every internal system.
A practical diligence artifact: ask for a 90-day integration plan draft from both sides. Not a Gantt chart fantasy—just a sequence of deliverables with owners, dependencies, and rollback plans. If nobody can write it, you’re not ready to integrate.
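That 90-day plan can be kept as structured data rather than a slide, which makes the dependency ordering checkable. A sketch, assuming a Python-based planning script; team names, deliverables, and field names are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Deliverable:
    name: str
    owner: str
    depends_on: list[str]
    rollback: str  # what you do if this step fails

def ordered(plan: list[Deliverable]) -> bool:
    """Check every dependency is delivered before the step that needs it."""
    done: set[str] = set()
    for d in plan:
        if any(dep not in done for dep in d.depends_on):
            return False
        done.add(d.name)
    return True

# Hypothetical 90-day sequence: identity first, then billing.
plan = [
    Deliverable("sso-bridge", "platform-team", [], "keep separate logins"),
    Deliverable("entitlement-sync", "billing-team", ["sso-bridge"],
                "manual entitlements via support"),
]
print(ordered(plan))  # True
```

The empty-`rollback` test is the useful one in practice: a deliverable nobody can roll back is a deliverable nobody has thought through.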
Operating model: who owns what on day 1?
Decide early:
- Will the target run as a standalone unit for a period?
- Who owns uptime and on-call?
- Who approves releases?
- How will security reviews work?
- What is the deprecation policy for overlapping features?
Ambiguity here creates the worst kind of failure: nothing breaks dramatically, but velocity collapses.
For the latest developments in M&A patterns in software—especially around acqui-hires, roll-ups, and platform consolidation—see our weekly Tech Business & Industry Moves insights coverage.
Valuation meets reality: price the risk you can’t remove
You can’t diligence away every risk. The goal is to identify which risks you’re accepting and price them appropriately through:
- Purchase price adjustments
- Earn-outs tied to measurable outcomes (used carefully; they can distort behavior)
- Escrows and indemnities
- Retention packages and milestone-based incentives
- Carve-outs for liabilities you won’t assume
Be wary of earn-outs that depend on integration success if the acquired team won’t control integration. That’s how you manufacture resentment and miss targets.
Also consider technical debt explicitly in valuation. If you know you’ll need to replatform within 12 months, treat it like capex: estimate headcount, opportunity cost, and timeline risk. Put it in the model as a real cost, not a vague “we’ll fix it.”
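Treating a known replatform like capex means putting numbers on it. A rough sketch; the cost figures and the 30% risk buffer are assumptions to tune per deal, not benchmarks:

```python
def replatform_cost(engineers: int, months: int,
                    loaded_cost_per_eng_month: float,
                    opportunity_cost_per_month: float,
                    risk_buffer: float = 0.3) -> float:
    """Capex-style estimate for a known replatform.
    risk_buffer pads for timeline slippage (assumed, tune per deal)."""
    direct = engineers * months * loaded_cost_per_eng_month
    opportunity = months * opportunity_cost_per_month
    return (direct + opportunity) * (1 + risk_buffer)

# Hypothetical: 6 engineers for 9 months at $20k/eng-month loaded cost,
# plus $50k/month of delayed roadmap value.
print(f"${replatform_cost(6, 9, 20_000, 50_000):,.0f}")  # $1,989,000
```

Even a rough number like this belongs in the valuation model as a line item, because "we'll fix it" is not a cost and therefore never gets priced.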
Key Takeaways
- Write a falsifiable acquisition thesis and aim diligence at the assumptions that would kill the deal, not at generic checklists.
- Evaluate durability, not just performance: retention behavior, operational maturity, and key-person risk matter more than a polished demo.
- Treat security and compliance as operating habits, not documents—verify access controls, incident response, and data handling in practice.
- Interrogate revenue quality: concentration, contract terms, and unit economics tell you how revenue behaves under stress.
- Plan integration like a product launch: identity, data, billing, and support are the usual bottlenecks, and ambiguity kills velocity.
Frequently Asked Questions
How do you evaluate a pre-revenue tech company for acquisition?
Focus on evidence of pull: pilot conversions, renewal intent, and whether the product solves a painful workflow with clear willingness to pay. Technical diligence matters more here—especially maintainability, security posture, and whether the team can ship predictably without heroic effort.
What’s the difference between buying technology and buying a team (acqui-hire)?
Buying technology assumes the product will live on and must be operable, supportable, and legally ownable. An acqui-hire assumes the code may be secondary; the evaluation shifts to retention risk, role clarity, and whether the team’s skills map to your roadmap rather than to their current product.
How should we handle open-source risk during diligence?
Ask for a software bill of materials (SBOM) and a license scan, then verify how obligations are met in distribution (notices, source availability if required, attribution). If the company can’t produce a credible inventory, assume there’s more risk than you can currently see and price the remediation work.
When is an earn-out a good idea in a tech acquisition?
Earn-outs work best when the acquired team controls the levers tied to the metric—like renewals in a standalone business or delivery milestones in a product roadmap. They work poorly when success depends on the acquirer’s integration work, because you end up incentivizing outcomes the acquired team can’t reliably influence.
What are the earliest signs an acquisition integration is going off track?
Velocity drops while nobody can explain why, incident response becomes ambiguous (“who owns this?”), and customers start hearing inconsistent messaging from sales and support. If you see these, tighten ownership, reduce parallel roadmaps, and prioritize a small number of integration deliverables that unblock everything else (usually identity and billing).
REFERENCES
[1] FASB, “ASC 606: Revenue from Contracts with Customers.” https://asc.fasb.org/topic&trid=2120588
[2] SPDX Workgroup (Linux Foundation), “SPDX Specification.” https://spdx.dev/specifications/
[3] Regulation (EU) 2016/679 (GDPR), Official Journal of the European Union. https://eur-lex.europa.eu/eli/reg/2016/679/oj
[4] NIST, “Secure Software Development Framework (SSDF), SP 800-218.” https://csrc.nist.gov/publications/detail/sp/800-218/final
[5] Harvard Business Review, “Mergers and Acquisitions: The Essential Guide to Deal Making.” https://hbr.org/topic/mergers-and-acquisitions