
Technical due diligence: assessing the real technology risk behind an investment

An investor doesn't buy a demo. They buy a system that must hold under load, under regulatory pressure, and over time. What a technical due diligence audit must actually reveal, and why the approach matters as much as the deliverables.

Financial due diligence is well-established. Legal due diligence too. Technical due diligence, on the other hand, often remains the poor relation of the process: handed off at the last minute to generalist consultants armed with a checklist, wrapped up in a few days, and delivered as a report that reassures without truly illuminating.

This is a mistake with a cost. Not always visible at signing, but almost always visible at integration, during the first production crisis, or the first time someone tries to change something.

What technical due diligence is and who commissions it

Technical due diligence is an independent assessment of the quality, maturity, and risks of a technology system in the context of an investment decision: acquisition, fundraising, merger, or significant equity stake.

It answers a fundamental question: is what I’m buying worth what I think it is, and at what real cost?

It is not a security audit or a penetration test. It is a multidimensional evaluation that produces a faithful picture of a technology asset’s condition, its hidden risks, and what it will take to keep it running over time.

Who commissions it:

  • Venture capital or private equity funds evaluating an acquisition target or significant stake, typically above €5–10M
  • Strategic acquirers in M&A seeking an independent perspective before closing
  • Companies preparing for a fundraise who want to anticipate investor questions (vendor due diligence)
  • Boards of directors seeking to objectively assess the real state of their own technology asset

We are regularly commissioned by institutional investors (Banque des Territoires (Caisse des Dépôts), CM-CIC Investissement, RATP Capital Innovation, Demeter, among others) to conduct these evaluations on technology companies and SaaS or mobile platform vendors.

When it happens: ideally between the letter of intent and closing, early enough to influence valuation or deal terms. In practice, often too late to truly change the terms. A technical DD commissioned 10 days before closing produces a report, not an informed decision.

What a good technical DD must cover

Our methodology is structured around three pillars, systematically addressed on every engagement.

[Figure: the three analysis pillars of technical due diligence — solution design, governance and organization, service commitments]

Pillar 1: Solution design

This is the assessment of the technical core: architecture, code, and components.

  • Validity of architecture decisions: are the structural choices justified? Is the architecture suited to the actual problem, or is it over-engineered or under-dimensioned?
  • Modularity and ability to evolve: can the system grow without being rebuilt? Can a component be isolated and replaced without stopping everything?
  • Status of software and hardware components: which dependencies are EOL or unmaintained? Which CVEs are open and exploitable in this context? (A minimal automated check is sketched after this list.)
  • Cyber risks: team maturity on security practices, OWASP framework integration, code quality from a security standpoint
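
As a taste of what the dependency check looks like in practice, here is a minimal sketch that queries the public OSV vulnerability database (osv.dev) for known advisories against pinned dependencies. The package names and versions are hypothetical lockfile entries, not a real target's stack:

```python
# Minimal sketch of an automated dependency check, querying the public
# OSV vulnerability database (https://osv.dev). The pinned versions
# below are hypothetical lockfile entries for illustration.
import requests

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    """Return IDs of known advisories for one pinned dependency."""
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"package": {"name": name, "ecosystem": ecosystem}, "version": version},
        timeout=10,
    )
    resp.raise_for_status()
    return [v["id"] for v in resp.json().get("vulns", [])]

for dep, ver in [("django", "2.2.0"), ("requests", "2.5.0")]:
    ids = known_vulns(dep, ver)
    print(f"{dep}=={ver}: {len(ids)} known advisories, e.g. {ids[:3]}")
```

An automated pass like this only surfaces candidates; the audit's value lies in judging which advisories are actually exploitable given how the component is deployed in the target's context.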

The code audit here evaluates clarity, best practices, maintainability, and the system’s real robustness—beyond what the teams showcase in demos.

Pillar 2: Governance and organization

A system is also an organization. Human and organizational risks are often underestimated in a standard DD.

  • Process and documentation quality: is the system documented at a level that allows a third party to operate and evolve it confidently?
  • Team organization aligned with objectives: does the actual team structure serve the roadmap?
  • Knowledge concentration: are there critical bus factors? A system maintained by a single person who holds all context in their head is a real risk, often invisible in presentation slides. (One way to surface this is sketched after the list.)
  • Third-party takeover complexity: can the acquirer take over the system without depending indefinitely on the original team?
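
As an illustration, here is a rough bus-factor probe over a repository's git history. It is deliberately crude (commit counts are a proxy, not a verdict) and the component paths are hypothetical; in practice we combine this kind of signal with the workshops:

```python
# Rough bus-factor probe over git history: flags components where a
# single author wrote the overwhelming majority of commits. Commit
# counts are a proxy, not a verdict; component paths are hypothetical.
import subprocess
from collections import Counter

def authors(repo: str, path: str) -> Counter:
    emails = subprocess.run(
        ["git", "-C", repo, "log", "--format=%ae", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    return Counter(emails)

for component in ["billing/", "auth/", "ingestion/"]:
    counts = authors(".", component)
    total = sum(counts.values())
    if total:
        email, top = counts.most_common(1)[0]
        if top / total > 0.8:
            print(f"{component}: bus-factor risk ({email}: {top}/{total} commits)")
```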

Pillar 3: Service commitments

The third pillar focuses on the system’s ability to meet its production commitments over time.

On this pillar we run a structured questionnaire derived from the ISO 27001 standard, which evaluates:

  • Hosting infrastructure: quality of service (QoS), recovery time objective (RTO), business continuity and disaster recovery plans (BCP/DRP)
  • Regulatory compliance: GDPR, applicable sector-specific requirements
  • Intellectual property of components: are licenses identified? Do third-party components create legal risks?
  • Confidentiality level of stored and processed data
  • Application maintenance procedures: are they documented, tested, operational?

How an engagement unfolds

For organizations of fewer than 30 people, an audit is conducted over three weeks. For larger structures or broader scopes, the timeline is adjusted accordingly.

[Figure: typical engagement timeline over 3 weeks, with preliminary findings presented at D+5]

Phase 1: Analyze (two weeks)

The analysis phase combines multiple input types: workshops with the teams (CEO, CTO, PO, developers, DevOps depending on the topic), access to source code and documentation, and technical questionnaires.

Workshops are structured by theme: organization, technical architecture, hosting and security, project management. Each workshop involves the relevant counterparts on the target side.

At the end of the first week, preliminary findings are presented verbally to the investor: a summary of what the analysis has surfaced so far, and an opening for the investor to raise additional questions that we fold into the second week of analysis. This mid-engagement adjustment mechanism is what distinguishes a serious audit from a checklist review.

Phase 2: Deliver (one week)

The delivery phase begins with Tech Talks: sessions to validate the provisional risk matrix, organized with the target’s technical teams. Identified risks and associated recommendations are reviewed pillar by pillar (architecture, organization, service commitments). The audited teams can correct facts, provide additional context, and validate or challenge recommendations.

This prior validation matters: it prevents delivering a report with factual errors the target could dispute, and it improves recommendation quality by incorporating context that the technical teams know but the audit cannot always directly observe.

The final presentation is delivered to the investor with the definitive deliverables.

What the audit produces

Two deliverables constitute the output of each engagement.

The risk and recommendation matrix

For each identified risk: its description and business impact, the associated recommendation with feasibility validated by the audited teams, an effort estimate in person-days to address it, and the risk category (security, best practices, intellectual property, performance, organization…).

This quantified format is a deliberate choice. A recommendation without an effort estimate is an opinion. A quantified recommendation is a decision-making tool.
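
To make the format concrete, here is an illustrative shape for one row of the matrix. The field names and the example entry are ours for this sketch, not a fixed schema; the structural point is that every risk carries a quantified remediation estimate:

```python
# Illustrative shape of one row of the risk and recommendation matrix.
# Field names are hypothetical; what matters is that each risk carries
# a quantified remediation estimate, not just a description.
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    SECURITY = "security"
    BEST_PRACTICES = "best practices"
    INTELLECTUAL_PROPERTY = "intellectual property"
    PERFORMANCE = "performance"
    ORGANIZATION = "organization"

@dataclass
class Risk:
    description: str         # what was observed
    business_impact: str     # why it matters to the deal
    recommendation: str      # feasibility validated with the target's teams
    effort_person_days: int  # what addressing it actually costs
    category: Category

example = Risk(
    description="Session tokens never expire",
    business_impact="Account-takeover exposure on a multi-tenant SaaS",
    recommendation="Short-lived tokens with refresh rotation",
    effort_person_days=8,
    category=Category.SECURITY,
)
```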

The executive summary

A document presenting the auditor’s opinion on the company’s ability to achieve its business objectives, a SWOT analysis of the solution, answers to specific questions raised by the investor, and a summary table of risks by category and criticality.

What we bring: practitioners, not auditors

Our stance is different from that of a generalist consulting firm. We don’t come with a checklist. We come with the knowledge of what these systems cost to maintain, evolve, and keep running under pressure, because that is our daily work.

We have modernized legacy systems in critical environments: banking, insurance, healthcare, industry. We know how to spot a system that “works” in a demo but will produce 200 person-days of remediation within 18 months. We also know how to distinguish acceptable debt (a conscious, documented choice justified by time or cost constraints) from dangerous debt (the result of a lack of control).

We have deployed systems in production. The difference between what an architecture promises on paper and what it delivers at 3 AM when the system scales under load—we know it from the inside.

We have implemented over-the-air (OTA) update frameworks on critical systems. We know what it costs to keep a system in a trusted state, and we can assess whether the audited system was built to be maintained or to be discarded in three years.

This experience changes the reading. It allows us to spot what a checklist cannot see: shortcuts that seem harmless but block all evolution, architectures that are elegant on paper but fragile in practice, documentation that exists but no longer corresponds to anything.

Platforms audited on past engagements cover diverse domains: dataviz SaaS, multi-modal mobility, vehicle rental, smart metering and smart grid, digital marketplaces, intensive distributed computing, personal assistance services. And for the past two years, AI assistants for regulated professions—a domain where DD stakes are particularly complex.

What the audit opens, not just what it closes

A point we consider a genuine differentiator: the recommendations we produce are executable by our teams.

Each risk in the matrix is accompanied by an effort estimate in person-days. These estimates correspond to work our teams know how to deliver. The investor who commissions the DD can, upon deal closure, directly engage the teams who conducted the audit to address the identified issues.

This is not systematic, and not an obligation. But it is an option that changes the nature of the DD: it is no longer just a report listing problems—it becomes the starting point of a concrete action plan.

AI: the new frontier for technical DD

Since 2024, a growing proportion of the targets we evaluate embed AI in their product or operations. This is a domain for which traditional DD tools are structurally inadequate.

The risk dimensions an AI system adds:

Training data quality and governance. A model is only as good as the data it learned from. Biased, incomplete, or legally problematic data (rights, GDPR, consent) constitutes a real risk, often invisible in demos. The barriers to entry for an AI player often rest on the quality of their proprietary data: evaluating this asset is a central question in the DD of an AI vendor.

LLMOps maturity. A model that works in a demo is not a system in production. Prompt management, systematic evaluation (evals), production observability, model drift management: their absence is a sign that the system has never truly been industrialized. We distinguish a dressed-up POC from a system that can actually be operated.
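
To make "systematic evaluation" concrete, a minimal sketch of an eval harness follows; `call_model` and the eval cases are hypothetical stand-ins for whatever entry point the target exposes. What we look for in the DD is whether something like this exists, is versioned, and runs on every model or prompt change:

```python
# Minimal sketch of the evaluation ("evals") layer whose absence we
# read as a sign the system was never industrialized. `call_model`
# stands in for the target's inference entry point (hypothetical).
from typing import Callable

# A versioned eval set: fixed inputs, each with a checkable expectation.
EVAL_SET = [
    ("Extract the total from: 'Total due: EUR 1,250.00'",
     lambda out: "1,250.00" in out or "1250" in out),
    ("Answer only YES or NO: is 2024 a leap year?",
     lambda out: out.strip().upper().startswith("YES")),
]

def run_evals(call_model: Callable[[str], str]) -> float:
    passed = sum(1 for prompt, check in EVAL_SET if check(call_model(prompt)))
    score = passed / len(EVAL_SET)
    # Tracked per model and prompt version, this catches drift and
    # regressions before customers do.
    print(f"pass rate: {score:.0%}")
    return score

# Stub model standing in for the real system:
run_evals(lambda prompt: "YES")  # -> pass rate: 50%
```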

AI Act compliance. For high-risk AI systems (credit decisions, scoring, healthcare, employment…), the AI Act imposes traceability, human oversight, and documentation requirements. A non-compliant system purchased before closing is a compliance remediation project to absorb afterward. We assess this gap and quantify it.

Model provider dependency. A system built entirely on a single provider’s API without an abstraction layer is exposed to pricing changes, API deprecations, and behavior shifts during model updates. This dependency must be evaluated and quantified.
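
A sketch of what such an abstraction layer can look like, with hypothetical names; the point is that product code depends on an internal interface rather than on one vendor's SDK:

```python
# Sketch of the abstraction layer whose absence we flag. Names are
# illustrative, not a specific library: the product talks to an
# internal interface, never to one vendor's SDK directly.
from typing import Protocol

class ChatModel(Protocol):
    def complete(self, prompt: str, *, max_tokens: int = 512) -> str: ...

class VendorBackend:
    """Wraps one provider's SDK; the only place that knows about it."""
    def complete(self, prompt: str, *, max_tokens: int = 512) -> str:
        raise NotImplementedError("call the vendor SDK here")

class StubBackend:
    """Deterministic stand-in used in tests and evals."""
    def complete(self, prompt: str, *, max_tokens: int = 512) -> str:
        return prompt[:max_tokens]

def summarize(model: ChatModel, document: str) -> str:
    # Product code depends only on the interface; a pricing change,
    # deprecation, or model update is a one-class swap, not a rewrite.
    return model.complete(f"Summarize:\n{document}", max_tokens=256)

print(summarize(StubBackend(), "quarterly revenue grew 12%..."))
```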

Actual barrier to entry. Beyond the benchmarks the target presents, we evaluate the solution’s real lead over what competitors could replicate within 12 to 24 months: technology complexity, proprietary data quality, depth of integration into customer business workflows.

Trust at the core

What distinguishes our approach is not methodological. It is a conviction about what we do when we audit a system.

We don’t seek to reassure. We seek to produce a faithful picture. Reassuring at signing only to discover problems at integration is a disservice to everyone.

We don’t confuse complexity with danger. Some systems are complex for good reasons: sophisticated business rules, real-time constraints, heavily regulated domains. Accidental complexity—the kind that serves no business objective and is merely the sediment of accumulated poor decisions—is different. Distinguishing the two requires a technical reading that only experienced practitioners can perform.

We don’t sell superficial compliance. An SBOM produced to check a box, documentation written after the fact for the audit, tests hastily added the week before review: we have learned to recognize them. That is not what we produce, and not what we validate.

Are you managing an investment and need an independent technical assessment? Let’s discuss your context.