Launch HN: InspectMind (YC W24) – AI agent for reviewing construction drawings

Posted by aakashprasad91 1 day ago


Hi HN, we're Aakash and Shuangling of InspectMind (https://www.inspectmind.ai/), an AI “plan checker” that finds issues in construction drawings, details, and specs.

Construction drawings quietly go out with lots of errors: dimension conflicts, coordination gaps, material mismatches, missing details, and more. These errors turn into delays and hundreds of thousands of dollars of rework during construction. InspectMind reviews the full drawing set of a construction project in minutes. It cross-checks architecture, engineering, and specifications to catch issues that cause rework before building begins.

Here’s a video with some examples: https://www.youtube.com/watch?v=Mvn1FyHRlLQ.

Before this, I (Aakash) built an engineering firm that worked on ~10,000 buildings across the US. One thing that always frustrated us: a lot of design coordination issues don’t show up until construction starts. By then, the cost of a mistake can be 10–100x higher, and everyone is scrambling to fix problems that could have been caught earlier.

We tried everything: checklists, overlay reviews, peer checks. But scrolling through 500–2000 PDF sheets and remembering how every detail connects to every other sheet is a brittle process. City reviewers and GC pre-con teams try to catch issues too, yet they still sneak through.

We thought: if models can parse code and generate working software, maybe they can also help reason about the built environment on paper. So we built something we wished we had!

You upload drawings and specs (PDFs). The system breaks them into disciplines and detail hierarchies, parses geometry and text, and looks for inconsistencies such as:

- Dimensions that don’t reconcile across sheets
- Clearances blocked by mechanical/architectural elements
- Fire/safety details missing or mismatched
- Spec requirements that never made it into drawings
- Callouts referencing details that don’t exist
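To make the first bullet concrete, here’s a toy sketch of the shape of a dimension-reconciliation check; the tags, sheets, and parsing here are illustrative, not our actual pipeline:

    import re
    from collections import defaultdict

    # Parse a feet-and-inches dimension string like 12'-6" into inches.
    DIM_RE = re.compile(r"(?:(\d+)')?-?(?:(\d+(?:\.\d+)?)\")?")

    def to_inches(dim):
        m = DIM_RE.fullmatch(dim.strip())
        if not m or not (m.group(1) or m.group(2)):
            return None
        return int(m.group(1) or 0) * 12 + float(m.group(2) or 0)

    # Each extracted dimension: (sheet, element tag, dimension string). Toy data.
    extracted = [
        ("A-101", "CORRIDOR-1 WIDTH", '6\'-0"'),
        ("A-401", "CORRIDOR-1 WIDTH", '5\'-8"'),  # conflicts with A-101
        ("A-101", "DOOR-104 WIDTH", '3\'-0"'),
    ]

    by_tag = defaultdict(set)
    for sheet, tag, dim in extracted:
        value = to_inches(dim)
        if value is not None:
            by_tag[tag].add((sheet, value))

    for tag, readings in by_tag.items():
        if len({v for _, v in readings}) > 1:
            refs = ", ".join(f'{s} ({v}")' for s, v in sorted(readings))
            print(f"Dimension conflict for {tag}: {refs}")

The real version has to survive rotated text, stacked dimension strings, and tolerance notes; that’s where most of the work goes.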

The output is a list of potential issues with sheet refs and locations for a human to review. We don’t expect automation to replace design judgment, just to help AEC professionals not miss the obvious stuff. Current AI models are good at the obvious stuff and can process data at volumes far beyond what humans can accurately handle, so this is a good application for them.

Construction drawings aren’t standardized, and every firm names things differently. Earlier “automated checking” tools relied heavily on manually written rules per customer and broke when naming conventions changed. Instead, we’re using multimodal models for OCR + vector geometry, callout graphs across the entire set, constraint-based spatial checks, and retrieval-augmented code interpretation. No more hard-coded rules!
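As a rough illustration of the callout-graph idea: detail callouts follow conventions like “5/A-501” (detail 5 on sheet A-501), so once OCR gives you the text, dangling references fall out of simple set arithmetic. A toy sketch with made-up data (the real graph is built from vector and text jointly):

    import re

    # A detail callout like "5/A-501" points at detail 5 on sheet A-501.
    CALLOUT_RE = re.compile(r"\b(\d{1,2})\s*/\s*([A-Z]{1,2}-\d{3}(?:\.\d)?)\b")

    # OCR'd text per sheet (toy data).
    sheets = {
        "A-101": "FLOOR PLAN. SEE 5/A-501 FOR TYP. WALL SECTION. SEE 2/A-502.",
        "A-501": "DETAIL 5: TYP. WALL SECTION AT CORRIDOR",
    }
    # Details actually drawn, from parsing detail title blocks (toy data).
    defined = {("5", "A-501")}

    referenced = {
        ref for text in sheets.values() for ref in CALLOUT_RE.findall(text)
    }

    for num, sheet in sorted(referenced - defined):
        print(f"Dangling callout: {num}/{sheet} is referenced but never drawn")
    # -> Dangling callout: 2/A-502 is referenced but never drawn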

We’re processing residential, commercial, and industrial projects today. Latency ranges from minutes to a few hours depending on sheet count. There’s no onboarding required: simply upload PDFs. There are still lots of edge cases (PDF extraction weirdness, inconsistent layering, industry jargon), so we’re learning a lot from failures, maybe more than from successes. But the tech is already delivering results that weren’t possible with previous tools.

Pricing is pay-as-you-go: we give an instant online quote per project after you upload the drawings. It’s hard to do regular SaaS pricing since one project may be a home remodel and another may be a high-rise. We’re open to feedback on that too; we’re still figuring it out.

If you work with drawings as an architect, engineer, MEP, GC preconstruction team, real estate developer, or plan reviewer, we’d love a chance to run a sample set and hear what breaks, what’s useful, and what’s missing!

We’ll be here all day to go into technical details about geometry parsing, clustering failures, and code reasoning attempts, or real-world construction stories about how things go wrong. Thanks for reading! We’re happy to answer anything and look forward to your comments!

Comments

Comment by sparselogic 1 day ago

This is fun to see. Some of my family are Division 10 contractors: their GCs love them because they spot design coordination and code issues early and keep the project from getting derailed. Bringing that to the entire project is a serious lifesaver.

Comment by aakashprasad91 1 day ago

Totally! Division 10 and specialty trades are often the first to see coordination issues show up in the field. We’re trying to bring that same early-warning benefit across the entire drawing set so errors never make it to construction. Would love to run a real project from your family’s world if they’re open to it!

Comment by sparselogic 10 hours ago

One of them is interested. What’s the best way to contact you?

Comment by knollimar 1 day ago

What kind of system do you have for parsing symbology?

Do you check anything like cross-discipline coordination (e.g., searching online specification data for parts shown on drawings, like mechanical units, and detecting mismatches with the electrical spec), or is it wholly within one trade’s code at a time?

edit: there's info that answers this on the website. It seems limited to the common ones (e.g. elec vs arch), which makes sense.

Comment by aakashprasad91 1 day ago

Symbol variation is a huge challenge across firms.

Our approach mixes OCR, vector geometry, and learned embeddings so the model can recognize a symbol plus its surrounding annotations (e.g., “6-15R,” “DIM,” “GFCI”).

When symbols differ by drafter, the system leans heavily on the textual/graph context so it still resolves meaning accurately. We’re actively expanding our electrical symbol library and would love sample sets from your workflow.
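To show the shape of the problem (a toy lookup; the real resolver uses learned embeddings, and all names and coordinates here are hypothetical):

    import math

    # Detector output: identical-looking symbols at different locations.
    symbols = [(120.0, 340.0, "duplex_outlet"), (450.0, 120.0, "duplex_outlet")]

    # Nearby OCR tokens: (x, y, text). Toy data.
    annotations = [(128.0, 348.0, "GFCI"), (455.0, 126.0, "6-15R")]

    # Annotation text disambiguates an otherwise identical symbol.
    MEANING = {"GFCI": "GFCI receptacle", "6-15R": "NEMA 6-15R 250V receptacle"}

    def nearest_label(x, y, max_dist=25.0):
        """Closest annotation within max_dist drawing units, else None."""
        ax, ay, text = min(annotations, key=lambda a: math.dist((x, y), a[:2]))
        return text if math.dist((x, y), (ax, ay)) <= max_dist else None

    for x, y, shape in symbols:
        label = nearest_label(x, y)
        print(f"({x}, {y}): {shape} -> {MEANING.get(label, shape)}")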

Comment by aakashprasad91 1 day ago

We parse symbols using a mix of vector geometry, OCR, and learned detection for common architectural/MEP symbols. Cross-discipline checks are a big focus: we already flag mismatches between architectural, structural, and MEP sheets, and we’re expanding into deeper electrical/mechanical spec alignment next. Would love to hear which symbols matter most in your workflow so we can improve coverage.

Comment by djprice1 21 hours ago

What do you mean when you say "vector geometry"? Are you using the geometry extracted from PDFs directly? I'm curious how that interacts with the OCR and detection model portion of what you're doing

Comment by aakashprasad91 19 hours ago

Great question. By “vector geometry” we mean we’re using the underlying CAD-style vector data embedded in many PDFs (lines, arcs, polylines, hatches, etc.), not just raster images. We reconstruct objects and regions from that geometry, then fuse it with OCR (for annotations, tags, labels) and a detection model that operates on rendered tiles. The detector + OCR tells us what something is; the vector layer tells us exactly where and how it’s shaped so we can run dimension/clearance and cross-sheet checks reliably.
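We haven’t named our stack here, but to make “vector layer” concrete: libraries like PyMuPDF expose both the PDF drawing commands and the embedded text with coordinates, which is the raw material for this kind of fusion. A minimal sketch (file name hypothetical):

    import fitz  # PyMuPDF

    doc = fitz.open("plan_set.pdf")  # hypothetical file
    page = doc[0]

    # Vector layer: lines, rects, curves as emitted by the CAD export.
    for drawing in page.get_drawings():
        for item in drawing["items"]:
            kind = item[0]  # "l" line, "re" rectangle, "c" Bezier curve
            if kind == "l":
                p0, p1 = item[1], item[2]
                print(f"line ({p0.x:.1f},{p0.y:.1f}) -> ({p1.x:.1f},{p1.y:.1f})")

    # Text layer: words with bounding boxes, for fusing labels to geometry.
    for x0, y0, x1, y1, word, *_ in page.get_text("words"):
        print(f"word {word!r} at ({x0:.1f},{y0:.1f})")

Fusing the two layers is then a spatial-join problem: which words sit inside or near which shapes.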

Comment by djprice1 19 hours ago

Woah! What determines if something is an object at that vector level? I've done some light PDF investigations before and the whole PDF spec is super intimidating. Seems insane that you can understand which things are objects in the actual drawing at the PDF vector level

Comment by knollimar 1 day ago

I do electrical, so parsing lighting is often a big issue. (Subcontractor)

One big issue I’ve had is drafters using the same symbol for different things. One person’s GFCI is another’s switched receptacle. People sometimes use the specialty outlet symbol very precisely and other times not, often accompanied by an annotation (e.g., 6-15R).

Dimmers being ambiguous is huge; avoiding dimming-type mismatches is basically 80% of the Lutron value-add.

Comment by oscarmcdougall 1 day ago

We're in a similar space doing machine-assisted lighting take-offs for contractors in AU/NZ, with bespoke models trained to identify and measure luminaires on construction plans.

Compliance is a space we've branched into recently. Would be super interested in seeing how you guys are currently approaching symbol detection.

Comment by aakashprasad91 1 day ago

Happy to swap notes. If you send a representative lighting plan set, we can run it and share how the detector clusters, resolves, and cross-references symbols across sheets. Always excited to compare approaches with teams solving adjacent problems.

Comment by testUser1228 1 day ago

The bathroom height example in your video is really interesting (checking the height above the toilet against building code). How does it know when to check drawings against code provisions, and how does it know which code to look at?

Comment by aakashprasad91 1 day ago

We infer the applicable codes from the project metadata + the drawings themselves.

The location + occupancy/use type tells us the governing code families (e.g., IBC/IRC, ADA, NFPA, local amendments), and then we parse the sheets for callouts, annotations, assemblies, and spec sections to map them to the relevant provisions.

So the system knows when to check (e.g., plumbing fixture clearances) because of the objects it detects in the drawings, and it knows what code to check based on jurisdiction + building type + what’s being shown in that detail.

The model still flags everything for human review, so designer judgment stays in the loop.
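A cartoon of that trigger logic, to make it concrete (object types, topics, and thresholds below are illustrative only, not our rules engine and not code guidance):

    # Detected object type -> the checks it triggers (illustrative only).
    TRIGGERS = {
        "water_closet": [("fixture clearance", lambda o: o["side_clearance_in"] >= 15)],
        "stair": [("riser height", lambda o: o["riser_in"] <= 7.0)],
    }

    detected = [
        {"type": "water_closet", "sheet": "P-201", "side_clearance_in": 12},
        {"type": "stair", "sheet": "A-301", "riser_in": 7.75},
    ]

    for obj in detected:
        for topic, ok in TRIGGERS.get(obj["type"], []):
            if not ok(obj):
                print(f"{obj['sheet']}: possible {topic} issue on {obj['type']}")

The hard part is upstream of this: deciding which jurisdictional amendments apply and retrieving the right provision text.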

Comment by testUser1228 1 day ago

Gotcha, so the model is identifying elements on the sheets and determining when to run code checks? Is the model running thousands of code checks per drawing set? I would imagine there are lots of elements that could trigger that

Comment by aakashprasad91 1 day ago

Yep, the model identifies objects/conditions on sheets (fixtures, stairs, rated walls, landings, etc.) and triggers the relevant checks automatically. It can run thousands of checks per project, but we only surface high-confidence findings where the combination of geometry + annotations + code context points to a real risk. Humans stay in the loop to confirm what matters.
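As a toy illustration of that gating (the signals, scores, and threshold here are made up):

    from dataclasses import dataclass

    @dataclass
    class Finding:
        description: str
        geometry_conflict: bool   # the vector layer shows the clash
        annotation_support: bool  # nearby text/callouts corroborate it
        code_basis: bool          # a specific provision clearly applies

    def confidence(f):
        # Each independent signal adds weight (toy scoring).
        signals = (f.geometry_conflict, f.annotation_support, f.code_basis)
        return sum(signals) / len(signals)

    findings = [
        Finding('WC side clearance 12" < 15"', True, True, True),
        Finding("Possible duct/beam overlap", True, False, False),
    ]

    for f in findings:
        if confidence(f) >= 0.67:  # only surface corroborated findings
            print(f.description)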

Comment by knollimar 1 day ago

Maybe this is saying the quiet part out loud: how do you deal with bogus specs that designers end up not caring about since they’re copy-pasted? Is it just mission accomplished when you point out a potential difficulty?

Comment by aakashprasad91 1 day ago

We see that a lot — specs that are clearly boilerplate or outdated relative to the drawings. Our goal isn’t to force a change, but to surface where the specs and drawings diverge so the designer can quickly decide what’s intentional vs what’s baggage. “Flag + context for fast human judgment” is the philosophy.

Comment by Doerge 1 day ago

I love this!

Stupid question: would BIM solve these issues? I know northern Europe is somewhat advanced in that direction. What kind of digitalization pace do you see in the US?

Comment by knollimar 1 day ago

BIM just shuffles the problem around. There are firms that do “one source of truth” BIM models, but the real issue is conflicts and workflow buy-in.

How do you get the architect to agree with the engineer, with the lighting designer, with the lighting contractor, when they all have different, non-overlapping deadlines, work periods, knowledge, and scope?

edit: if you don't work in the industry, BIM helps for “these two things are in the same spot”, but not much for code unless it's about clearance or some spatially based calculation

Comment by aakashprasad91 1 day ago

100% agree the hardest problems are workflow and incentives, not file formats.

Even with a perfect BIM model, late changes and discipline silos mean drawings still diverge and coordination issues sneak through.

We’re trying to be the “safety net” that catches what falls through when teams are moving fast and not perfectly in sync.

Comment by aakashprasad91 1 day ago

BIM definitely helps, but most projects still rely heavily on 2D PDFs for coordination and permitting, especially in the US. Even when BIM exists, drawings often lag behind the model and changes don’t stay perfectly synced. We see AI plan checking as a bridge that helps teams catch what falls through the cracks in today’s workflows. And BIM only catches certain kinds of issues; it doesn’t cover building codes, specs, etc.

Comment by pondemic 1 day ago

I’m sure commissioning engineers would have a field day with this. Have you considered use cases on the larger owner’s side of things? As an owner’s rep I can definitely see value here at an SD and DD level, especially if the owner has a decently sized Facilities or commissioning team.

Comment by aakashprasad91 1 day ago

Great point! Owner’s reps and commissioning teams are becoming one of the fastest-growing user groups for us. At SD/DD we can surface coordination risks early, highlight spec–drawing mismatches, and give owners a clearer picture of design completeness before things get locked in. If you’re open to it, we’d love to run a sample SD/DD set from your world and see what’s most useful.

Comment by frogguy 1 day ago

Are you doing code checks for structural issues? If so, how do you deal with licensing on common code orgs, such as ASCE?

Comment by aakashprasad91 1 day ago

Great question. We currently focus primarily on coordination, dimension conflicts, missing details, and clear code-triggered checks that don’t require sealed structural judgment. For structural code references (e.g., ASCE-7), we infer applicable sections and surface potential issues for a licensed engineer to review. We don’t replace engineering judgment or sealed design accountability.

Comment by zodo123 1 day ago

How does your system do with hand-drawn plans from an old-school architect? Is reliable OCR and line reading dependent on CAD output plans?

Comment by aakashprasad91 19 hours ago

We do best on CAD-originated PDFs where we can use the underlying vector data, but we can run on scanned/hand-drawn sets too. In that case we rely more on image-based detection + OCR (no clean vector layer), so accuracy depends on scan quality, contrast, and how consistent the annotations are. We’ve had success on some older/detail-heavy scans, but it’s definitely a harder mode. If you have a representative “old-school” set, we’d love to run it and show you where it works well vs where it struggles.
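You can cheaply classify a sheet before deciding how to process it; a sketch using PyMuPDF (one way to do it, not necessarily ours, and the file name is hypothetical):

    import fitz  # PyMuPDF

    def page_mode(page):
        """Classify a sheet as CAD-originated vectors vs. a raster scan."""
        if page.get_drawings() or page.get_text().strip():
            return "vector"   # use geometry + embedded text directly
        if page.get_images():
            return "scanned"  # fall back to detection + OCR on pixels
        return "empty"

    doc = fitz.open("old_school_set.pdf")  # hypothetical file
    for i, page in enumerate(doc, start=1):
        print(f"sheet {i}: {page_mode(page)}")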

Comment by knollimar 1 day ago

Is the pay-as-you-go model percentage-based or based on project size? I've had issues with conflicts of interest between being lean vs. not. It's hard to sell on percentage-based revenue.

Also, who is this targeted at? Subcontractors, GCs, design?

Comment by aakashprasad91 1 day ago

We price per project based on size/complexity, not a % of construction cost, so there’s no conflict of interest around bigger budgets. Today our main users are architects/engineers and GC pre-con teams, but subs who catch coordination issues early also get a ton of value.

Comment by knollimar 1 day ago

At what stage do you run this on plans? Like DD, some % CD? What's the intended target timeframe?

I don't see how subs get much value unless they can use it on ~80% CD for bid phases

Comment by aakashprasad91 1 day ago

Most teams run us from late DD through CD, anywhere the set is stable enough that coordination issues matter. Subs especially like running it pre-bid at ~80–100% CDs so they don’t inherit coordination risk. Earlier checks also help designers tighten the set before hand-offs, so value shows up at multiple stages. Eventually the goal is to be a continuous QA tool, including during construction, by pulling in field data and comparing it to the drawings and specs: e.g., the drawings show size X but field photos show size Y.

Comment by knollimar 1 day ago

Would love to run it and give feedback if it's cheap to do so; my company just finished a bunch of projects, and I'd love to cross-reference whether it catches the issues that we found by hand (assuming it's inexpensive enough). I do high-rise electrical work for a subcontractor.

Comment by aakashprasad91 1 day ago

We’d love that; perfect use case. Send a recent set and we’ll run a discounted comparison so you can see what we catch vs. what surfaced during construction. If helpful, we can hop on a quick call to walk through results and collect feedback. Email me at aakash@inspectmind.ai.

Comment by cannedbread 1 day ago

When I upload my drawing set, how often should I expect it to hallucinate? And how much of the real stuff does it flag?

Comment by aakashprasad91 1 day ago

Hallucinations still happen occasionally, but we bias heavily toward high-confidence findings so noise stays low. On typical projects we surface a few hundred coordination issues that are real, observable conflicts across sheets rather than speculative checks. We’re actively improving precision by learning from every false positive customers flag. And we show you the drawings, specs, etc., so you can verify everything yourself rather than just trusting the AI.

Comment by shuangly 1 day ago

We do extensive preprocessing to ensure the AI receives accurate context, data, and documents for review, and we’re continuously refining this, so accuracy keeps improving every day. Right now accuracy isn’t fully stable across projects, but we’ve had runs with >90% accuracy.

Comment by T1tt 1 day ago

"an AI “plan checker”" do you have some public benchmark for how many issues you can find?

how does this work behind the scenes?

Comment by aakashprasad91 1 day ago

Great questions. We’re working on a more formal public benchmark and will share results as our dataset grows. Today, we typically catch coordination issues like conflicting dimensions, missing callouts, building code and clearance violations that humans often miss in large sheet sets. Behind the scenes it’s a multimodal workflow: OCR + geometry parsing + cross-sheet callout graph + constraint checks vs. code/spec requirements.

Comment by BoorishBears 1 day ago

Not shade, and it's a small thing, but why do you list your investors as social proof here?

Isn't the target persona someone who'd be at best indifferent, and at worst distrustful, of a tech product that leads with how many people invested in it? Especially vs the explanation and actual testimonials you're pushing below the fold to show that?

Comment by aakashprasad91 1 day ago

Totally fair callout and appreciate the feedback. We’re already testing alternative hero layouts focused purely on real customer results and example issues caught. Our goal is to win trust by demonstrating usefulness/results, not who invested in us.

Comment by an_aparallel 1 day ago

Where would my firm's documents end up (on whose servers) to do this checking? I don't know how any firm would just hand out their CDs like that.

Or is being that lax normal these days?

Aside: this field is insanely frustrating. The chasm between clash detection and resolution is a right ball ache. Between ACC, Revizto, and Aconex clash detection (and the like), the de facto standard is pretty much telling me X is touching Y... great... can you group this crap intelligently to get my high-rise clashes per discipline from 2000 down to 10? Can you navigate me there in Revit? (Yes, switchback in Revizto is great, but Revizto itself could improve.)

Comment by aakashprasad91 1 day ago

Yes, one of the biggest values of our system is reducing “noise.” Instead of surfacing 2,000 micro-clashes, we cluster findings into higher-order issues (e.g., “all conflicts caused by this duct run” or “all lighting mismatches tied to this dimming spec”). We’re not a BIM viewer yet, but we do map issues back to sheet locations, callouts, and detail references so teams can navigate directly to the real source of the problem.
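The grouping itself can start as simply as clustering raw clashes around a shared root element; a toy sketch with hypothetical field names and data:

    from collections import defaultdict

    # Raw clash output: each clash names the two elements involved (toy data).
    clashes = [
        {"a": "DUCT-204", "b": "BEAM-12", "sheet": "M-301"},
        {"a": "DUCT-204", "b": "BEAM-14", "sheet": "M-301"},
        {"a": "DUCT-204", "b": "SPRINKLER-88", "sheet": "FP-201"},
        {"a": "LIGHT-17", "b": "DIFFUSER-3", "sheet": "E-401"},
    ]

    # Count how often each element appears across clashes.
    counts = defaultdict(int)
    for c in clashes:
        counts[c["a"]] += 1
        counts[c["b"]] += 1

    # Attribute each clash to its most clash-prone element: one issue per root cause.
    groups = defaultdict(list)
    for c in clashes:
        root = max((c["a"], c["b"]), key=counts.get)
        groups[root].append(c)

    for root, items in sorted(groups.items(), key=lambda kv: -len(kv[1])):
        sheets = ", ".join(sorted({c["sheet"] for c in items}))
        print(f"{root}: {len(items)} clashes across {sheets}")
    # -> DUCT-204: 3 clashes across FP-201, M-301 (one issue, not three)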

Comment by an_aparallel 1 day ago

Sounds good. What's the typical workflow for aggregating the sheet sets for a given phase? I assume the user collates and drops them in for analysis?

Comment by aakashprasad91 1 day ago

Today the workflow is simple: users just drag-and-drop the full drawing/spec set (ZIP or PDFs) for whatever phase they want reviewed. The system automatically splits sheets by discipline, reconstructs callout relationships, and runs the checks. We’ll be adding integrations with ACC/Procore/Revit exports so this becomes even more automated.

Comment by aakashprasad91 1 day ago

Yes today users simply gather the sheets for whatever phase they want reviewed (DD, 80% CDs, 100% CDs, etc.), ZIP them or upload PDFs directly, and the system handles the rest. It auto-detects disciplines, reconstructs callout graphs, and runs checks across the full set. We're also adding integrations with ACC/Procore/Revit so sheet aggregation becomes automatic.
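The discipline split benefits from the loose industry convention for sheet-number prefixes (A architectural, S structural, M mechanical, E electrical, P plumbing). A first pass can be as simple as this sketch, though our real classifier has to handle far messier title blocks:

    import re

    DISCIPLINES = {
        "A": "Architectural", "S": "Structural", "M": "Mechanical",
        "E": "Electrical", "P": "Plumbing", "FP": "Fire Protection",
    }
    SHEET_RE = re.compile(r"^([A-Z]{1,2})-?\d")

    def discipline(sheet_number):
        m = SHEET_RE.match(sheet_number.strip().upper())
        return DISCIPLINES.get(m.group(1), "Unknown") if m else "Unknown"

    for s in ["A-101", "S201", "FP-102", "M-301", "G-001"]:
        print(s, "->", discipline(s))
    # G-001 -> Unknown (general/cover sheets need other signals)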

Comment by aakashprasad91 1 day ago

We store files securely on AWS with strict access controls, encryption in transit and at rest, and zero sharing outside the file owner’s account. Only our engineers can access a project for debugging, and only if the customer explicitly allows it. We can also offer an enterprise option with private cloud/VPC deployment for firms that require even tighter controls. Users can delete all files permanently at any time.

Comment by shuangly 1 day ago

Documents are stored on AWS with strict access controls, meaning they are only accessible to the file owner and, if necessary, our engineers for debugging purposes. After the check, users can delete the project and optionally permanently delete the files from our S3 buckets on AWS.

Comment by breedmesmn 1 day ago

[flagged]

Comment by aakashprasad91 1 day ago

Could you share a bit more about what didn’t work on your end?

Comment by breedmesmn 1 day ago

[flagged]

Comment by aakashprasad91 1 day ago

That comment comes across as racially loaded and isn’t helpful. If you ran into a real issue, I’m happy to take a look.