Reverse engineering a $1B Legal AI tool exposed 100k+ confidential files
Posted by bearsyankees 9 days ago
Comments
Comment by 0xbadcafebee 8 days ago
Imagine the potential impact. You're a single mother, fighting for custody of your kids. Your lawyer has some documentation of something that happened to you, that wasn't your fault, but would look bad if brought up in court. Suddenly you receive a phone call - it's a mysterious voice, demanding $10,000 or they will send the documents to the opposition. The blackmailer and the opposition don't even know each other; someone just found a trove of documents behind an open back door and wanted to make a quick buck.
This is exactly what a software building code would address (if we had one!). Just like you can't open a new storefront in a new building without it being inspected, you should not be able to process millions of sensitive files without having your software's building inspected. The safety and privacy of all of us shouldn't be optional.
Comment by altmanaltman 8 days ago
Comment by blitzar 8 days ago
Comment by Covenant0028 8 days ago
Just like if any human employee publicly sexually harassed his female CEO, he'd be out of a job and would find it very hard to find a new one. But Grok can do it and it's the CEO who ends up quitting.
Comment by dr_dshiv 8 days ago
Comment by input_sh 8 days ago
You can't fit every security consideration into the context window.
Comment by dr_dshiv 8 days ago
Comment by input_sh 8 days ago
They also know not to, say, temporarily disable auth to be able to look at the changes they've made on a page hidden behind auth, which is what I observed Gemini 3 Pro doing just yesterday.
Comment by dr_dshiv 8 days ago
Comment by input_sh 8 days ago
That's what makes it bad at security. It cannot comprehend more than a floppy drive's worth of data before it reverts to absolute gibberish.
Comment by eric-burel 8 days ago
Comment by input_sh 8 days ago
Let's imagine a codebase that can fit onto a revolutionary piece of technology known as a floppy drive. As we all know, a floppy drive can hold <2 megabytes. But 100k tokens is only about 400 kilobytes. So, to process the whole codebase that can fit onto a floppy drive, you need 5 agents plus the sixth "parent process" that those 5 agents will report to.
Those five agents can report "no security issues found" in their own little chunk of the codebase to the parent process, and that parent process will still be none the wiser about how those different chunks interact with each other.
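To make the arithmetic and the fan-out concrete, here is a minimal sketch of that pattern, assuming roughly 4 bytes per token; every name is hypothetical, and review_chunk() stands in for whatever call actually hands a chunk to a model:

```python
# Minimal sketch of the "5 agents + parent process" pattern described above.
# All names are hypothetical; review_chunk() is a placeholder for a real model call.

CONTEXT_BUDGET_BYTES = 400_000  # ~100k tokens at roughly 4 bytes per token

def split_codebase(files: dict[str, str], budget: int = CONTEXT_BUDGET_BYTES) -> list[str]:
    """Greedily pack source files into chunks that each fit one context window."""
    chunks, current, size = [], [], 0
    for path, source in files.items():
        blob = f"// {path}\n{source}\n"
        if size + len(blob) > budget and current:
            chunks.append("".join(current))
            current, size = [], 0
        current.append(blob)
        size += len(blob)
    if current:
        chunks.append("".join(current))
    return chunks

def review_chunk(chunk: str) -> str:
    """Placeholder for one agent reviewing one chunk in isolation."""
    return "no security issues found"  # each agent only ever sees its own chunk

def parent_process(files: dict[str, str]) -> list[str]:
    # The parent only sees per-chunk verdicts, never the code itself,
    # so interactions that span two chunks are invisible at this level.
    return [review_chunk(chunk) for chunk in split_codebase(files)]
```

A ~2 MB codebase against a ~400 KB budget yields the five-plus-one arrangement described above, and the aggregation step is exactly where cross-chunk issues get lost.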
Comment by eric-burel 6 days ago
Comment by joshribakoff 8 days ago
It's almost as if it has additional problems beyond the context limits :)
Comment by eric-burel 6 days ago
Comment by joshribakoff 8 days ago
Comment by ethbr1 8 days ago
And what keeps security problems from making it into prod in the real world?
Code review, testing, static and dynamic code scanning, and fuzzing.
Why aren't these things done?
Because there isn't enough people-time and expertise.
So in order for LLMs to improve security, they need to be able to improve our ability to do one of: code review, testing, static and dynamic code scanning, and fuzzing.
It seems very unlikely those forms of automation won't be improved in the near future by even the dumbest form of LLMs.
And if you offered CISOs a "pay to scan" service that actually worked cross-language and -platform (in contrast to most "only supported languages" scanners), they'd jump at it.
Comment by joshribakoff 8 days ago
Comment by ethbr1 8 days ago
Comment by windexh8er 8 days ago
Why? Context. LLMs, today, go off the rails fairly easily. As I've mentioned in prior comments I've been working a lot with different models and agentic coding systems. When a code base starts to approach 5k lines (building the entire codebase with an agent) things start to get very rough. First of all, the agent cannot wrap its context (it has no brain) around the code in a complete way. Even when everything is very well documented as part of the build and outlined so the LLM has indicators of where to pull in code - it almost always cannot keep schemas, requirements, or patterns in line. I've had instances where APIs that were being developed were to follow a specific schema, should require specific tests and should abide by specific constraints for integration. Almost always, in that relatively small codebase, the agentic system gets something wrong - but because of sycophancy - it gleefully informs me all the work is done and everything is A-OK! The kicker here is that when you show it why / where it's wrong you're continuously in a loop of burning tokens trying to put that train back on the track. LLMs can't be efficient with new(ish) code bases because they're always having to go look up new documentation and burning through more context beyond what it's targeting to build / update / refactor / etc.
So, sure. You can "call an LLM multiple times". But this is hugely missing the point with how these systems work. Because when you actually start to use them you'll find these issues almost immediately.
Comment by joshribakoff 8 days ago
Comment by windexh8er 8 days ago
Spot on. If we look at, historically, "AI" (pre-LLM) the data sets were much more curated, cleaned and labeled. Look at CV, for example. Computer Vision is a prime example of how AI can easily go off the rails with respect to 1) garbage input data 2) biased input data. LLMs have these two as inputs in spades and in vast quantities. Has everyone forgotten about Google's classification of African American people in images [0]? Or, more hilariously - the fix [1]? Most people I talk to who are using LLMs think that the data being strung into these models has been fine tuned, hand picked, etc. In some cases for small models that were explicitly curated, sure. But in the context (no pun) of all the popular frontier models: no way in hell.
The one thing I'm really surprised nobody is talking about is the system prompt. Not in the manner of jailbreaking it or even extracting it. But I can't imagine that these system prompts aren't collecting mass tech debt at this point. I'm sure there's band-aid after band-aid of simple fixes to nudge the model in ever so different directions based on things that are, ultimately, out of the control of such a large culmination of random data. I can't wait to see how these long-term issues crop up and get duct-taped over with the quick fixes these tech behemoths are becoming known for.
[0] https://www.bbc.com/news/technology-33347866 [1] https://www.theguardian.com/technology/2018/jan/12/google-ra...
Comment by eric-burel 6 days ago
Comment by aduwah 8 days ago
Comment by dr_dshiv 8 days ago
Comment by MangoToupe 8 days ago
Comment by Cthulhu_ 8 days ago
But also, you'd need to have some metrics - how good are developers at security already? What if the bar is on the floor and LLM code generators are already better?
Comment by wizzledonker 8 days ago
Comment by rendaw 7 days ago
I've seen a lot of job ads (Canva) lately that mandate AI use or AI experience, and as an AI company if they wanted that I think they would have put it in the ad.
For the record I think I may be fine with the insincerity of selling AI but not using it!
Comment by compootr 8 days ago
Comment by agos 8 days ago
Comment by eru 8 days ago
Yes, but adding these common sense considerations is actually something LLMs can already do reasonably well.
Comment by darkwater 8 days ago
Comment by AbstractH24 8 days ago
Comment by MangoToupe 8 days ago
Comment by AbstractH24 8 days ago
Comment by MangoToupe 8 days ago
Comment by AbstractH24 8 days ago
If we're saying the way to ensure competency is to instill fear of not getting money tomorrow as a consequence of failure, then AI companies and humans are on equal footing.
Comment by eru 8 days ago
It's like having multiple people audit your systems. Even if everyone only catches 90%, as long as they don't catch exactly the same 90%, this parallel effort helps.
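As a rough back-of-the-envelope check, if each auditor independently misses 10% of bugs (independence is a strong assumption; in practice reviewers tend to miss the same things), the chance that a given bug slips past everyone drops geometrically:

```python
# Probability that one bug escapes every auditor, assuming each auditor
# independently misses 10% of bugs (a strong assumption in practice).
miss_rate = 0.10
for auditors in (1, 2, 3, 4):
    print(f"{auditors} auditor(s): escape probability = {miss_rate ** auditors:.4f}")
# 0.1000, 0.0100, 0.0010, 0.0001
```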
Comment by LunaSea 8 days ago
Comment by MangoToupe 8 days ago
Comment by Sharlin 8 days ago
Comment by tryauuum 8 days ago
he wanted to demonstrate that he indeed has the private data. But he fucked up the tar command and it ended up having his username in the directory names, a username he used in other places on the internet
Comment by Fnoord 8 days ago
The problem here however is that they get away with their sloppiness as long as the security researcher who found this is a whitehat, and the regular news won't pick it up. Once regular media pick this news up (and the local ones should), their name is tarnished and they may regret their sloppiness. Which is a good way to ensure they won't make the same mistake. After all, money talks.
Comment by zwnow 8 days ago
Comment by Fnoord 8 days ago
Comment by anshumankmr 8 days ago
Comment by gbacon 8 days ago
The story is an example of the market self-correcting, but out comes this “building code” hobby horse anyway. All a software “building code” will do is ossify certain current practices, not even necessarily the best ones. It will tilt the playing field in favor of large existing players and to the disadvantage of innovative startups.
The model fails to apply in multiple ways. Building physical buildings is a much simpler, much less complex process with many fewer degrees of freedom than building software. Local city workers inspecting to the local municipality’s code at least have clear jurisdiction because of where the physical fixed location is. Who will write the “building code”? Who will be the inspectors?
This is HN. Of all places, I’d expect to see this presented as an opportunity for new startups, not calls for slovenly bureaucracy and more coercion. The private market is perfectly capable of performing this function. E&O and professional liability insurers, if they don't already, will soon be motivated by lawsuits to demand regular pentests.
The reported incident is a great reminder of caveat emptor.
Comment by objclxt 8 days ago
I don't...think this is true? Google has no problems shipping complex software projects, yet their London HQ is years behind schedule and vastly over budget.
Construction is really complex. These can be mega-projects with tens of thousands of people involved, where the consequences of failure are injury or even death. When software failure does have those consequences - things like aviation control software, or medical device firmware - engineers are held to a considerably higher standard.
> The private market is perfectly capable of performing this function
But it's totally not! There are so many examples in the construction space of private markets being wholly unable to perform quality control because there are financial incentives not to.
The reason building codes exist and are enforced by municipalities is because the private market is incapable of doing so.
Comment by throwaway984393 8 days ago
Comment by theoldgreybeard 9 days ago
I used to think developers had to be supremely incompetent to end up with vulnerabilities like this.
But now I understand it’s not the developers who are incompetent…
Comment by eru 9 days ago
Comment by theoldgreybeard 9 days ago
Comment by eru 9 days ago
There are organisations that are generally competent, and there are places that are less competent. It's not all that uncommon for the whole organisation to be generally incompetent.
The saddest places (for me) are those where almost every individual you talk to seems generally competent, but judging by their output the company might as well be staffed by idiots. Something in the way they are organised suppresses the competence. (I worked at one such company.)
> Maybe I have just been lucky, but I have not had the displeasure of working with people either that incompetent or willfully ignorant yet.
It's very important before you start any new job to suss out how competent people and the organisation are. Ideally, you probably want to work for a competent company. But at least you want to know what you are getting into.
There's a bit of luck involved, if you go in blindly, but you can also use skill and elbow-grease to investigate.
Comment by vkou 8 days ago
It's a natural outcome of authoritarian structures when the people at the top are idiots. When that happens, the whole organization rots.
Comment by chii 8 days ago
how does one do this, without first having the job and being embedded in there? From the outside, it's near impossible to see these details imho.
Comment by eru 8 days ago
It's fundamentally the same problem that the company is trying to solve when they interview you, just the other way 'round.
Some ideas: observe and ask in the interviews and hiring process in general. See what you can find out about the company from friends, contacts and even strangers. Network! Do some online research, too.
Btw, lots of the cliché interview questions ("What are your greatest weaknesses?" etc) actually make decent questions you can ask about the company and team you are about to join.
Comment by SaltyBackendGuy 8 days ago
Comment by YouAreWRONGtoo 8 days ago
Comment by delaminator 8 days ago
Reeves orders Treasury inquiry over Budget leaks
Chancellor’s policies found their way to the press before she announced them to MPs
https://www.telegraph.co.uk/news/2025/12/03/reeves-orders-tr...
Comment by eru 8 days ago
Comment by blitzar 8 days ago
Comment by Lord-Jobo 8 days ago
There’s definitely plenty of incompetence regardless. But I’ve never seen a company where the incompetence was more noteworthy in the cog positions than “leadership”.
Comment by samdung 8 days ago
Comment by hahn-kev 8 days ago
Comment by icyfox 9 days ago
Is the issue that people aren't checking their security@ email addresses? People are on holiday? These emails get so much spam it's really hard to separate the noise from the legit signal? I'm genuinely curious.
Comment by Aurornis 9 days ago
Companies hire a "security team" and put them behind the security@ email, then decide they'll figure out how to handle issues later.
When an issue comes in, the security team tries to forward the security issue to the team that owns the project so it can be fixed. This is where complicated org charts and difficult incentive structures can get in the way.
Determining which team actually owns the code containing the bug can be very hard, depending on the company. Many security team people I've worked with were smart, but not software developers by trade. So they start trying to navigate the org chart to figure out who can even fix the issue. This can take weeks of dead-ends and "I'm busy until Tuesday next week at 3:30PM, let's schedule a meeting then" delays.
Even when you find the right team, it can be difficult to get them to schedule the fix. In companies where roadmaps are planned 3 quarters in advance, everyone is focused on their KPIs and other acronyms, and bonuses are paid out according to your ticket velocity and on-time delivery stats (despite PMs telling you they're not), getting a team to pick up the bug and work on it is hard. Again, it can become a wall of "Our next 3 sprints are already full with urgent work from VP so-and-so, but we'll see if we can fit it in after that"
Then legal wants to be involved, too. So before you even respond to reports you have to flag the corporate counsel, who is already busy and doesn't want to hear it right now.
So half or more of the job of the security team becomes navigating corporate bureaucracy and slicing through all of the incentive structures to inject this urgent priority somewhere.
Smart companies recognize this problem and will empower security teams to prioritize urgent things. This can cause another problem where less-than-great security teams start wielding their power to force everyone to work on not-urgent issues that get spammed to the security@ email all day long demanding bug bounties, which burns everyone out. Good security teams will use good judgment, though.
Comment by srrdev 9 days ago
Comment by DrewADesign 9 days ago
Now if you needed to develop something not-urgent that involved, say, the performance department, database department, and your own, hope you’ve got a few months to blow on conference calls and procedure documents.
For that industry it made sense though.
Comment by eru 9 days ago
Comment by DrewADesign 7 days ago
Now that I think of it, I’ll bet a lot of companies have a system similar to this for their infrastructure… they just outsource it to AWS, Azure, Google, etc. and comparatively fly by the seat of their pants on the dev side. You could only scale that system down so much, I imagine.
Comment by rvba 8 days ago
A lot are people who cannot code at all, cannot administer - they just fill in tables and check boxes, maybe from some automated suite. They don't know what HTTP and HTTPS are, because they are just paper pushers, which is far from real security - more like security in name only.
And they joined the field because it pays well.
Comment by tietjens 8 days ago
Comment by Barathkanna 9 days ago
Comment by Aurornis 9 days ago
At my past employers it was "The VP of such-and-such said we need to ship this feature as our top priority, no exceptions"
Comment by whstl 9 days ago
And of course nobody remembered the setup, and logging was only accessible by the same person, so figuring it out also took weeks.
Comment by bongodongobob 9 days ago
Comment by jll29 8 days ago
Email the memo to a decision maker with the important flag on and CC: another person as a witness.
If you have been saying it for a long time and nobody has taken any action, you may use the word "escalation" as part of the subject line.
If things hit the fan, it will also make sure that what drops from the fan falls on the right people, and not on you.
Comment by ChrisMarshallNY 9 days ago
They have a specific time of day when they check their email, they give only 30 minutes to it, and they check emails from most recent, down.
The email comes in, two hours earlier, and, by the time they check their email, it's been buried under 50 spams, and near-spams; each of which needs to be checked, so they run out of 30 minutes, before they get to it. The next day, by email check time, another 400 spams have been thrown on top.
Think I'm kidding?
Many folks that have worked for large companies (or bureaucracies) have seen exactly this.
Comment by eru 9 days ago
Comment by throwaway290 9 days ago
Comment by ipdashc 9 days ago
That said, in my experience this spam is still a few emails a day at the most, I don't think there's any excuse for not immediately patching something like that. I guess maybe someone's on holiday like you said.
Comment by canopi 9 days ago
There is so much spam from random people about meaningless issues in our docs. AI has made the problem worse. Determining the meaningful from the meaningless is a full time job.
Comment by TheTaytay 9 days ago
Comment by YouAreWRONGtoo 8 days ago
Comment by whstl 9 days ago
The other half was people demanding payment.
Comment by horacemorace 8 days ago
Comment by latchkey 9 days ago
Comment by bfxbjuf 9 days ago
Comment by londons_explore 9 days ago
I reckon only 1% of reports are valid.
LLMs can now make a plausible looking exploit report ('there is a use after free bug in your server side implementation of X library which allows shell access to your server if you time these two API calls correctly'), but the LLM has made the whole thing up. That can easily waste hours of an expert's time for a total falsehood.
I can completely see why some companies decide it'll be an office-hours-only task to go through all the reports every day.
Comment by tryauuum 8 days ago
Of course this could be a real vulnerability if it disclosed the real server IP behind Cloudflare. This was not the case; we were sending via the AWS email gateway.
Comment by gwbas1c 9 days ago
Comment by stavros 9 days ago
Comment by Aurornis 9 days ago
Outside of startups and big tech, it's not uncommon to have release cycles that are months long. Especially common if there is any legal or regulatory involvement.
Comment by technion 9 days ago
I remember heartbleed dropping shortly after a deployment and not being allowed to patch for like ten months because the fix wasn't "validated". This was despite insurers stating this issue could cost coverage and legal getting involved.
Comment by stavros 9 days ago
Comment by Jolter 9 days ago
Comment by Capricorn2481 9 days ago
I have unfortunately seen way worse. If it will take more than an hour and the wrong people are in charge of the money, you can go a pretty long time with glaring vulnerabilities.
Comment by giancarlostoro 9 days ago
Comment by perlgeek 9 days ago
In a complex system it can be very hard to understand what will break, if anything. In a less complex system, it can still be hard to understand if the person who knows the security model very well isn't available.
Comment by jofzar 9 days ago
There is always the simple answer: these are lawyers, so they are probably scrambling internally to write a response that covers themselves legally while also trying to figure out how fucked they are.
1 week is surprisingly not that slow.
Comment by bgbntty2 9 days ago
1) the hack is straightforward to do;
2) it can do a lot of damage (get PII or other confidential info in most cases);
3) downtime of the service wouldn't hurt anyone, especially if we compare it to the risk of the damage.
But, instead of insisting on the immediate shutting down of the affected service, we give companies weeks or months to fix the issue while notifying no one in the process and continuing with business as usual.
I've submitted 3 very easy exploits to 3 different companies the past year and, thankfully, they fixed them in about a week every time. Yet, the exploits were trivial (as I'm not good enough to find the hard ones, I admit). Mostly IDORs, like changing id=123456 to id=1 all the way up to id=123455 and seeing a lot of medical data that doesn't belong to me. All 3 cases were medical labs because I had to have some tests done and wanted to see how secure my data was.
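For readers who haven't met the acronym: an IDOR (insecure direct object reference) is exactly this pattern, where the server trusts the id in the request instead of checking who the record belongs to. A minimal sketch of the missing server-side check, with hypothetical names throughout:

```python
# Minimal sketch of the ownership check whose absence makes this IDOR possible.
# RESULTS and get_result() are hypothetical stand-ins for a real data store and handler.

RESULTS = {
    1: {"patient_id": "p-001", "data": "..."},
    123456: {"patient_id": "p-942", "data": "..."},
}

class Forbidden(Exception):
    pass

def get_result(result_id: int, authenticated_patient_id: str) -> dict:
    record = RESULTS.get(result_id)
    if record is None:
        raise KeyError(result_id)
    # The vulnerable services skip this comparison and simply trust the id in the URL.
    if record["patient_id"] != authenticated_patient_id:
        raise Forbidden("result does not belong to the authenticated patient")
    return record
```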
Sadly, in all 3 cases I had to send a follow-up e-mail after ~1 week, saying that I'll make the exploit public if they don't fix it ASAP. What happened was, again, in all 3 cases, the exploit was fixed within 1-2 days.
If I'd given them a month, I feel they would've fixed the issue after a month. If I'd given them a year - after a year.
And it's not like there aren't 10 different labs in my city. It's not like online access to results is critical, either. You can get a printed result or call them to write them down. Yes, it would be tedious, but more secure.
So I should've said from the beginning something like:
> I found this trivial exploit that gives me access to medical data of thousands of people. If you don't want it public, shut down your online service until you fix it, because it's highly likely someone else figured it out before me. If you don't, I'll make it public and ruin your reputation.
Now, would I make it public if they don't fix it within a few days? Probably not, but I'm not sure. But shutting down their service until the fix is in seems important. If it was some hard-to-do hack chaining several exploits, including a 0-day, it would be likely that I'd be the first one to find it and it wouldn't be found for a while by someone else afterwards. But ID enumerations? Come on.
So does the standard "responsible disclosure", at least in the scenario I've given (easy to do; not critical if the service is shut down), help the affected parties (the customers) or the businesses? Why should I care about a company worth $X losing $Y if it's their fault?
I think in the future I'll anonymously contact companies with way more strict deadlines if their customers (or others) are in serious risk. I'll lose the ability to brag with my real name, but I can live with it.
As to the other comments talking about how spammed their security@ mail is - that's the cost of doing business. It doesn't seem like a valid excuse to me. Security isn't one of hundreds random things a business should care about. It's one of the most important ones. So just assign more people to review your mail. If you can't, why are you handling people's PII?
Comment by nl 8 days ago
I understand you think you are doing the right thing but be aware that by shutting down a medical communication service there's a non-trivial chance someone will die because of slower test results.
Your responsibility is responsible disclosure.
Their responsibility is how to handle it. Don't try to decide that for them.
Comment by ghostly_s 9 days ago
What you're describing is likely a crime. The sad reality is most businesses don't view protection of customers' data as a sacred duty, but simply another of the innumerable risks to be managed in the course of doing business. If they can say "we were working on fixing it!" their asses are likely covered even if someone does leverage the exploit first—and worst-case, they'll just pay a fine and move on.
Comment by bgbntty2 8 days ago
The more casualties, the more media attention -> the more likely they, and others in their field, will take security more seriously in the future.
If we let them do nothing for a month, they'll eventually fix it, but in the meantime malicious hackers may gain access to the PII. They might not make it public, but sell that PII via black markets. The company may not get the negative publicity it deserves and likely won't learn to fix their systems in time and to adopt adequate security measures. The sale of the PII and the breach itself might become public knowledge months after the fact, while the company has had a chance to grow in the meantime, and make more security mistakes that may be exploited later on.
And yes, I know it may be a crime - that's why I said I'd report it anonymously from now on. But if the company sits on their asses for a month, shouldn't that count as a crime, as well? The current definition of responsible disclosure gives companies too much leeway, in my opinion.
If I knew I operated a service that was trivial to exploit and was hosting people's PII, I'd shut it down until I fixed it. People won't die if I do everything in my power to provide the test results (in my example of medical labs) to doctors and patients via other means, such as via paper or phone. And if people do die, it would be devastating, of course, but it would mean society has put too much trust into a single system without making sure it's not vulnerable to the most basic of attacks. So it would happen sooner or later, anyway. Although I can't imagine someone dying because their doctor had to make a phone call to the lab instead of typing in a URL.
The same argument about people dying due to the disruption of the medical communications system could be made about too-big-to-fail companies that are entrenched into society because a lot of pension funds have invested in them. If the company goes under, the innocent people dependent on the pension fund's finances would suffer. While they would suffer, which would be awful, of course, would the alternative be to not let such companies go bankrupt? Or would it be better for such funds to not rely so much on one specific company in the first place? That is to say, in both cases (security or stocks in general) the reality is that currently people are too dependent on a few singular entities, while they shouldn't be. That has to change, and the change has to begin somewhere.
Comment by habosa 9 days ago
Also … shows you what a SOC 2 audit is worth: https://www.filevine.com/news/filevine-proves-industry-leade...
Even the most basic pentest would have caught this.
Comment by stingraycharles 9 days ago
The auditors themselves pretty much only care that you answered all questions, they don’t really care what the answers are and absolutely aren’t going to dig any deeper.
(I’m responsible for the SOC2 audits at our firm)
Comment by abustamam 8 days ago
I asked my manager if that's all that was required and he said yes, just make sure you do it again next year. I spent the rest of my time worrying that we missed something. I genuinely didn't believe him until your comment.
Edit: missing sentence.
Comment by rustystump 9 days ago
Comment by technion 9 days ago
I don't at all get why there is a paragraph thanking them for their communication if that is the case.
Comment by nick49488171 9 days ago
Comment by eru 9 days ago
I wouldn't expect them to find any computer problems either to be honest.
Comment by anticensor 6 days ago
Comment by mrweasel 8 days ago
Comment by jonny_eh 9 days ago
Comment by theodorejb 9 days ago
Comment by OtherShrezzing 9 days ago
Comment by kylecazar 9 days ago
They should have given you some money.
Comment by edm0nd 9 days ago
They could have sold this to a ransomware group or affiliate for 5-6 figures and then the ransomware group could have exfil'd the data and attempted to extort the company for millions.
Then if they didn't pay and the ransomware group leaked the info to the public, they'd likely have to spend millions on lawsuits and fines anyways.
They should have paid this dude 5-6 figures for this find. It's scenarios like this that lead people to sell these vulns on the gray/black market instead of traditional bug bounty whitehat routes.
Comment by RagnarD 9 days ago
Comment by DonHopkins 9 days ago
Comment by sys32768 9 days ago
My argument is we're in the Wild West with AI and this stuff is being built so fast with so many evolving tools that corners are being cut even when they don't realize it.
This article demonstrates that, but it does sort of raise the question as to why not trust one vs the other when they both promise the same safeguards.
Comment by pr337h4m 9 days ago
Comment by hughes 9 days ago
Specifically, it does not appear that AI is invoked in any way at the search endpoint - it is clearly piping results from some Box API.
Comment by empiko 8 days ago
Comment by lionkor 8 days ago
Point out one (1) "AI product" company that isn't described accurately by that sentence
Comment by layer8 9 days ago
Comment by sys32768 9 days ago
In truth the company forced our hand by pricing us out of the on-premise solution and will do that again with the other on-premise we use, which is set to sunset in five years or so.
Comment by ansgri 8 days ago
Storing lots of legal data doesn’t seem to be one of these cases though.
Comment by bonesss 8 days ago
Selling an on-premise service requires customer support, engineering, and duplication of effort if you’re pushing to the cloud as well. Then you get the temptations and lock in of cloud-only tooling and an army of certified consultant drones whose resumes really really need time on AWS-doc-solution-2035, so the on premise becomes a constant weight on management.
SaaS and the cloud is great for some things some of the time, but often you’re just staring at the marketing playbook of MS or Amazon come to life like a golem.
Comment by pm90 9 days ago
Comment by mbesto 9 days ago
The funny thing is that this exploit (from the OP) has nothing to do with AI and could be <insert any SaaS company> that integrates into another service.
Comment by Aperocky 9 days ago
If SaaS Y just says "Give me your data and it will be secure", that's where it gets suspect.
Comment by teej 9 days ago
Comment by whalesalad 9 days ago
Comment by pstuart 9 days ago
Comment by lupire 9 days ago
Comment by canopi 9 days ago
I am one of the engineers that had to suffer through countless screenshots and forms to get these, because they show that you are compliant and safe, while the real impactful things are ignored.
Comment by latchkey 9 days ago
https://jon4hotaisle.substack.com/i/180360455/anatomy-of-the...
It is crazy how this gets perpetuated in the industry as actually having security value, when in reality, it is just a pay-to-play checkbox.
Comment by chickensong 9 days ago
If the options mainly consist of "trust me bro" vs "we can demonstrate that we put in some effort", the latter seems more preferable, even if it's not perfect.
Comment by quapster 9 days ago
What's wild is that nothing here is exotic: subdomain enumeration, unauthenticated API, over-privileged token, minified JS leaking internals. This is a 2010-level bug pattern wrapped in 2025 AI hype. The only truly "AI" part is that centralizing all documents for model training drastically raises the blast radius when you screw up.
The economic incentive is obvious: if your pitch deck is "we'll ingest everything your firm has ever touched and make it searchable/AI-ready", you win deals by saying yes to data access and integrations, not by saying no. Least privilege, token scoping, and proper isolation are friction in the sales process, so they get bolted on later, if at all.
The scary bit is that lawyers are being sold "AI assistant" but what they're actually buying is "unvetted third party root access to your institutional memory". At that point, the interesting question isn't whether there are more bugs like this, it's how many of these systems would survive a serious red-team exercise by anyone more motivated than a curious blogger.
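To make "least privilege, token scoping" concrete: the alternative to one long-lived, org-wide storage token sitting behind an unauthenticated endpoint is access minted per document, per user, with a short expiry. A minimal sketch using only the Python standard library; all names are hypothetical and this is not any particular vendor's API:

```python
import hashlib
import hmac
import time

SIGNING_KEY = b"server-side-secret"  # hypothetical; never shipped to the client

def mint_download_url(document_id: str, user_id: str, ttl_seconds: int = 300) -> str:
    """Create a short-lived link scoped to one document and one user."""
    expires = int(time.time()) + ttl_seconds
    payload = f"{document_id}:{user_id}:{expires}".encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return f"/download/{document_id}?user={user_id}&exp={expires}&sig={signature}"

def verify_download(document_id: str, user_id: str, expires: int, signature: str) -> bool:
    """Reject expired links and links whose scope was tampered with."""
    if time.time() > expires:
        return False
    payload = f"{document_id}:{user_id}:{expires}".encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

The point of the sketch is the shape of the credential, not the specific crypto: a leak of any one link exposes one document for a few minutes, instead of an entire firm's archive indefinitely.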
Comment by j45 9 days ago
First, as an organization, do all this cybersecurity theatre, and then create an MCP/LLM wormhole that bypasses it all.
All because non-technical folks wave their hands about AI without understanding the most fundamental reality: LLM software is so fundamentally different from all the software before it that it becomes an unavoidable black hole.
I'm also a little pleased I used two space analogies, something I can't expect LLMs to do because they have to go large with their language or go home.
Comment by jimbokun 9 days ago
Comment by j45 8 days ago
It’s assuming and estimating it will behave like other software before it when it’s nothing like the software that came before it.
LLMs today won’t behave like the software we’re used to where 1+1 will equal 2 every time.
Comment by jimbokun 8 days ago
Comment by dogman144 9 days ago
Summarized as - security is about risk acceptance, not removal. There’s massive business pressure to risk-accept AI. Risk acceptance usually means some sort of supplemental control that’s not the ideal but manages. There are very few of these with AI tools, however - small vendors; they're not really service accounts, but IMO the best way to monitor them is probably as service accounts; integrations are easy; eng companies hate taking away whatever admin devs have, but if you don't, random AI on endpoints becomes very likely.
I’m ignoring a lot of nuance but solid sec program blown open by LLM vendors is going to be common, let alone bad sec programs. Many sec teams I think are just waiting for the other shoe to drop for some evidentiary support while managing heavy pressure to go full bore AI integration until then.
Comment by j45 9 days ago
And then folks can gasp and faint like goats and pretend they didn’t know.
It reminds me of the time I met an IT manager who didn't have an IT background. Outsourced hilarity ensued through sales people who were also non-technical.
Comment by dogman144 8 days ago
Sec lead might have a pretty darn clear idea of an out of whack creation of risk v reward. CEO disagrees. Risk accept and move on.
When you’re technical and eventually realize there’s a business to survive behind the tech skills, this is the stuff you learn how to do.
People “will know” as you say because it’s all documented and professionally escalated.
Comment by Aurornis 6 days ago
Speaking of LLMs, did you notice the comment you were responding to was written by an account posting repetitive LLM-generated comments? :)
Comment by stronglikedan 9 days ago
Comment by j45 9 days ago
Comment by RansomStark 9 days ago
This might just be a golden age for getting access to the data you need for getting the job done.
Next security will catch up and there'll be a good balance between access and control.
Then, as always, security goes too far and nobody can get anything done.
It's a tale as old as computer security.
Comment by j45 7 days ago
"GenAI" is nothing new. "AI" is just software. It's not intelligent, or alive, or sentient, or aware. People can scifi sentimentalize it if they want.
It might simulate parts of things, hopefully more reliably.
It's however a different category of software, one which requires management that doesn't yet exist in the way it should.
Cybersecurity security theatre for me is using a web browser to secure and administer what was previously already done and creating new security holes from a web interface.
Then, bypassing it to allow unmanaged MCP access to internal data moats creates its own universe of security vulnerabilities, full stop. In a secured and contained environment, using an MCP to access data to unlock insight is one thing.
It doesn't mean don't use MCPs. It means the AI won't figure out what the user doesn't know about securing MCPs, which is a far more massive vulnerability, because users of AI have delegated their thinking to a statistics formula ("GenAI") that is so impressive on the surface, but no one is checking the work to make sure it stays that way. Managing quality, however, is improving.
My comment is calling out effectively letting external paths have unadulterated access to your private and corporate data.
Data is the new moat. Not UI/UX/Software.
A wormhole that exposes your data makes it available for someone to put it into their data moat far too commonly, and also for it to be misinterpreted.
Comment by BrenBarn 9 days ago
Comment by barbazoo 8 days ago
Comment by mvkel 8 days ago
Comment by BrenBarn 8 days ago
Comment by mvkel 8 days ago
It's also impossible to guarantee a 100% secure infrastructure, no matter how good your product team is.
In the grey is a term of art: "best efforts."
If data is leaking, and it wasn't because hackers bypassed a bunch of safeguards, and it can be shown that you didn't use Best Efforts to secure said data, there is liability.
Comment by BrenBarn 7 days ago
1. The standards aren't clearly defined (i.e., you must specifically do this).
2. They are defined in terms of efforts rather than effects. It is like saying "every car sold must be made of steel" rather than "every car sold must be capable of withstanding an impact against a concrete wall at 60mph with X amount of deformation, etc." We want the rules to determine what level of threat is protected against, not just what motions the company went through. In the case in the article, it wasn't because hackers bypassed a bunch of safeguards; the company didn't protect against even basic threats.
3. It's not enough to have "liability". That puts the onus on individuals to sue the company for their specific damages. We need criminal penalties that are designed to punish companies (and the individuals who direct them) for the harm they do to society by the overall process of rushing ahead selling things instead of slowing down and being careful. We need large-scale enforcement so that companies actually stop doing these things because the cost of doing them becomes too enormous.
4. Our laws do not adequately take account of the differential power of those who cut corners, and the differential gains reaped. We frequently find small operators on the wrong end of painful lawsuits and onerous criminal penalties, while the biggest companies and wealthiest individuals use their position to avoid consequences. Laws need to explicitly take this into account, lowering the standard of proof for penalties against larger, wealthier, and more powerful companies and individuals, and also making those penalties exponentially higher.
Comment by mattmaroon 8 days ago
Comment by mvkel 8 days ago
Comment by btbuildem 8 days ago
Comment by abustamam 8 days ago
Edit: I agree with you that we shouldn't let companies like this get away with what amounts to a slap on the wrist. But everything else seems irresponsible as well.
Comment by BrenBarn 8 days ago
In the current world, I dunno. I guess it depends on what the company is. If it's something like a hedge fund or a fossil fuel company I think I'd be fine with some kind of wikileaks-like avenue for exposing it in such a way that it results in the company being totally destroyed.
Comment by magnetowasright 9 days ago
I'd love to know who filevine uses for penetration testing (which they do, according to their website) because holy shit, how do you miss this? I mean, they list their bug bounty program under a pentesting heading, so I guess it's just nice internet people.
It's inexcusable.
Comment by rashidujang 9 days ago
Security reminds me of the Anna Karenina principle: All happy families are alike; each unhappy family is unhappy in its own way.
Comment by GJim 8 days ago
To be fair, data security breaches seldom are.
Comment by yieldcrv 9 days ago
and otherwise well structured engineering orgs have lost their goddamn minds with move fast and break things
because they're worried that OpenAI/Google/Meta/Amazon/Anthropic will release the tool they're working on tomorrow
literally all of them are like this
Comment by trollbridge 8 days ago
Comment by deep_thinker26 9 days ago
Comment by qmr 9 days ago
Go on write your blog post. Don't let your dreams be dreams.
Comment by bigmadshoe 9 days ago
Comment by hsbauauvhabzb 9 days ago
Comment by amackera 9 days ago
They will stop letting you use the service. That's the recourse for breaking the TOS.
Comment by hsbauauvhabzb 9 days ago
I say this as someone threatened by a billion dollar company for this very thing.
Comment by advisedwang 9 days ago
Comment by qmr 7 days ago
Comment by gessha 9 days ago
Comment by CER10TY 9 days ago
Comment by trollbridge 8 days ago
Comment by keernan 8 days ago
Things were easier when I first began practicing in the 1970s. There weren't too many ways confidential materials in our files could be compromised. Leaving my open file spread out on the conference room table when I went to lunch, while attorneys arriving for a deposition on my partner's case were seated one by one in the conference room. That's the kind of thing we had to keep an eye on.
But things soon got complicated. Computers. Digital copies of files that didn't disappear into an external site for storage like physical files. Then email. What were our obligations to know what could - and could not - be intercepted while email traveled the internet.
Then most dangerous of all. Digital storage that was outside our physical domain. How could we now know if the cloud vendor had access to our confidential data? Where were the backups stored? How exactly was the data securely compartmentalized by a cloud vendor? Did we need our own IT experts to control the data located on the external cloud? What did the contracts with the cloud vendor say about the fact we were a law firm and that we, as the lawyers responsible for our clients' confidential information, needed to know that they - the cloud vendor - understood the legal obligations and that they - the cloud vendor - would hire lawyers to oversee the manner in which the cloud vendor blocked all access to the legal data located on their own servers. And so on and so forth.
I'm no longer in active practice but these issues were a big part of my practice my last few years at a Fortune 500 insurance company that used in-house attorneys nationwide to represent insureds in litigation - and the corporation was engaged in signing onto a cloud service to hold all of the corporate data - including the legal departments across all 50 states. It was a nightmare. I'm confident it still is.
Comment by etamponi 9 days ago
I worked at Google and then at Meta. Man, the amount of "nonsense" of the ACL system was insane. I write nonsense in quotes because for sure from a security point of view it all made a lot of sense. But there is exactly zero chance that such a system can be used in a less technical company. It took me 4 years to understand how it worked...
So I'll take this as another data point to create a startup that simplifies security... Seems a lot more complicated than AI
Comment by xp84 8 days ago
My apologies to the frontend engineers out there who know what they're doing.
Comment by hbarka 9 days ago
Can that company tell you to cease and desist? How does the law work?
Comment by me_again 9 days ago
Comment by dghlsakjg 9 days ago
They are strongly worded requests from a legal point of view. The only real message they send is that the sender is serious enough about the issue to have involved a lawyer, unless of course you write it yourself, which is something that literally anyone can do.
If you want to actually force an action, you need a court order of some type.
NB for the actual lawyers: I'm oversimplifying, since they can be used in court to prove that you tried to get the other party to stop, and tried to resolve the issue outside of court.
Comment by badbird33 9 days ago
Comment by valbaca 9 days ago
Just search "healthcare" in https://news.ycombinator.com/item?id=46108941
Comment by Invictus0 9 days ago
Comment by culanuchachamim 9 days ago
In the same vein, I think that a professional ethical hacker, or a curious fellow who is poking around with no harmful intent, shouldn't disclose the name of the company that had a security issue if they resolve it professionally.
You can write the same blog post without mentioning that it was Filevine.
If they didn't take care of the incident that's a different story...
Comment by evan_a_a 9 days ago
Comment by deelowe 9 days ago
Comment by manbash 9 days ago
Comment by CBMPET2001 9 days ago
Comment by jacquesm 9 days ago
Comment by giancarlostoro 9 days ago
Comment by lazide 9 days ago
Comment by venturecruelty 9 days ago
Comment by sidrag22 9 days ago
Comment by xarope 8 days ago
... rummages around...
here you go:
Comment by jacquesm 9 days ago
Comment by aperture147 8 days ago
"Worried your vibe-coded app is about to be broadcast on the internet’s biggest billboard? Chill. ACME AI now wraps it in “NSA-grade” security armor."
I never thought there would be multiple billion-dollar AI features that fix all the monkey-patching problems no one saw coming from the older billion-dollar AI features that fix all the monkey-patching problems no one saw coming from...
Comment by 6thbit 7 days ago
One could only imagine that if OP wasn't the first to discover it, people could've generated tons of shared links for all kinds of folders, for instance, which would remain active even if they invalidated the API token.
Comment by mattfrommars 9 days ago
I've been pondering for a long time how one builds a startup company in a domain they are not familiar with but ... I just have this urge to 'carve out a piece' in this space. For the longest time, I had this dream of starting or building an 'AI Legal Tech Company' -- big issue is, I don't work in the legal space at all. I did some cold outreach on law-firm-related forums which did not gain any traction.
I later searched around and came across the term 'case management software'. From what I know, this is what Clio fundamentally is, and it makes millions if not billions.
This was close to two years or 1.5 years ago and since then, I stopped thinking about it because of this understanding or belief I have: "how can I do a startup in legal when I don't work in this domain?" But when I look around, I have seen people who start companies in totally unrelated industries - from starting a dental tech company to, if I'm not mistaken, the founder of Hugging Face, who doesn't seem to have a PhD in AI/ML and yet founded it.
Given all that said, how does one start a company in an unrelated domain? Say I want to start another case management system or attempt to clone FileVine: do I first read up on what case management software is, or do I cold-reach potential law firms who would partner up to build a SaaS from scratch? The other school of thought goes like, "find customers before you have a product, to validate what you want to build" - how does this realistically work?
Apologies for the scattered thoughts...
Comment by airstrike 9 days ago
Not impossible, but very hard. And starting a company is hard enough as it is.
So 9/10 times the answer will be to partner with someone who understands the space and pain point, preferably one who has lived it, or find an easier problem to solve.
Comment by joshvm 9 days ago
1. Compliance with relevant standards. HIPAA, GDPR, ISO, military, legal, etc. Realistically you're going to outsource this or hire someone who knows how to build it, and then you're going to pay an agency to confirm that you're compliant. You also need to consider whether the incumbent solution is a trust-based solution, like the old "nobody gets fired for buying Intel".
2. Domain expertise is always easier if you have a domain expert. Big companies also outsource market research. They'll go to a firm like GLG, pay for some expert's time or commission a survey.
It seems like table stakes to do some basic research on your own to see what software (or solutions) exist and why everyone uses them, and why competitors failed. That should cost you nothing but time, and maybe expense if you buy some software. In a lot of fields even browsing some forums or Reddit is enough. The difference is if you have a working product that's generic enough to be useful to other domains, but you're not sure. Then you might be able to arrange some sort of quid pro quo like a trial where the partner gets to keep some output/analysis, and you get some real-world testing and feedback.
Comment by strgcmc 9 days ago
I just randomly happened to read the story of some surgeons asking a Formula 1 team to help improve their surgical processes, with spectacular results in the long term... The F1 team had zero medical background, but they assessed the surgical processes and found huge issues with communication and lack of clarity, people reaching over each other to get to tools, or too many people jumping to fix something like a hose coming loose (when you just need 1 person to do that 1 thing). F1 teams were very good at designing hyper efficient and reliable processes to get complex pit stops done extremely quickly, and the surgeons benefitted a lot from those process engineering insights, even though it had nothing specifically to do with medical/surgical domain knowledge.
Reference: https://www.thetimes.com/sport/formula-one/article/professor...
Anyways, back to your main question -- I find that it helps to start small... Are you someone who is good at using analogies to explain concepts in one domain, to a layperson outside that domain? Or even better, to use analogies that would help a domain expert from domain A, to instantly recognize an analogous situation or opportunity in domain B (of which they are not an expert)? I personally have found a lot of benefit, from both being naturally curious about learning/teaching through analogies, finding the act of making analogies to be a fun hobby just because, and also honing it professionally to help me be useful in cross-domain contexts. I think you don't need to blow this up in your head as some big grand mystery with some big secret cheat code to unlock how to be a founder in a domain you're not familiar with -- I think you can start very small, and just practice making analogies with your friends or peers, see if you can find fun ways of explaining things across domains with them (either you explain to them with an analogy, or they explain something to you and you try to analogize it from your POV).
Comment by jimbokun 9 days ago
Comment by corry 9 days ago
And... Margolis allowed this open demo environment to connect to their ENTIRE Box drive of millions of super sensitive documents?
HUH???!
Before you get to the terrible security practices of the vendor, you have to place a massive amount of blame on the IT team of Margolis for allowing the above.
No amount of AI hype excuses that kind of professional misjudgement.
Comment by me_again 9 days ago
Comment by corry 8 days ago
Comment by stanfordkid 9 days ago
Comment by 1vuio0pswjnm7 8 days ago
Would there be a "pretty printer" or some other "unminifier" for this task?
If not, then is minification effectively a form of obfuscation?
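For what it's worth, conventional pretty-printers do exist; they recover indentation and structure but not the original identifier names, which is why minification is at best weak obfuscation. A minimal sketch, assuming the third-party jsbeautifier Python package is installed (pip install jsbeautifier):

```python
# Un-minify a JavaScript snippet with a conventional pretty-printer.
# Whitespace and block structure come back; the minified names (a, b, c) do not.
import jsbeautifier

minified = "function f(a,b){return a.map(function(c){return c+b})}"
print(jsbeautifier.beautify(minified))
```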
Comment by gu5 8 days ago
Comment by testemailfordg2 8 days ago
Comment by lupire 9 days ago
Clever work by OP. Surely there is an automated prober tool that has already hacked this product?
Comment by dghlsakjg 9 days ago
Google tells me they are a NY law firm specializing in Real Estate and Immigration law. There are other firms with Margolis in the name too. Kinda doesn't matter; see below.
I doubt that they are thrilled to have their name involved in this, but that is covered by the US constitution's protections on free press.
Comment by richwater 9 days ago
Comment by satya71 9 days ago
Comment by densone 8 days ago
Comment by canto 8 days ago
Comment by bzmrgonz 9 days ago
Comment by MangoToupe 8 days ago
People should really look this law up before they reference it
Comment by nstj 8 days ago
Comment by hansmayer 8 days ago
Comment by Fokamul 8 days ago
Comment by tonyhart7 8 days ago
Comment by ethin 9 days ago
Comment by fallinditch 9 days ago
AI tends to be good at un-minifying code.
Comment by a_victorp 9 days ago
Comment by CER10TY 9 days ago
On the other hand, minified code is literally published by the company. Everyone can see it and do with it as they please. So handing that over to an AI to un-minify is not really your problem, since you're not the developer working on the tool internally.
Comment by fallinditch 9 days ago
Comment by nodesocket 9 days ago
Comment by 2ndatblackrock 9 days ago
Comment by larrysanchez77 1 day ago
Comment by kitschman 8 days ago
Comment by electric_muse 9 days ago
Comment by tomhow 8 days ago
We detached this subthread from https://news.ycombinator.com/item?id=46137863 and marked it off topic.
Comment by simonw 9 days ago
This sentence in particular seems outside of what an LLM that was fed the linked article might produce:
> What's wild is that nothing here is exotic: subdomain enumeration, unauthenticated API, over-privileged token, minified JS leaking internals.
Comment by Aurornis 9 days ago
> Interesting point about Cranelift! I've been following its development for a while, and it seems like there's always something new popping up.
> Interesting point about the color analysis! It kinda reminds me of how album art used to be such a significant part of music culture.
> Interesting point about the ESP32 and music playback! I've been tinkering with similar projects, and it’s wild how much potential these little devices have.
> We used to own tools that made us productive. Now we rent tools that make someone else profitable. Subscriptions are not about recurring value but recurring billing
> Meshtastic is interesting because it's basically "LoRa-first networking" instead of "internet with some radios attached." Most consumer radios are still stuck in the mental model of walkie-talkies, while Meshtastic treats RF as an IP-like transport layer you can script, automate, and extend. That flips the stack:
> This is the collision between two cultures that were never meant to share the same data: "move fast and duct-tape APIs together" startup engineering, and "if this leaks we ruin people's lives" legal/medical confidentiality.
The repeated prefixes (Interesting point about!) and the classic it's-this-not-that LLM pattern are definitely triggering my LLM suspicions.
I suspect most of these cases aren't bots, they're users who put their thoughts, possibly in another language, into an LLM and ask it to form the comment for them. They like the text they see so they copy and paste it into HN.
Comment by balamatom 9 days ago
Or, bear with me there, maybe things aren't so far downhill yet, these users just learned how English is supposed to sound, from the same place where the LLMs learned how English is supposed to sound! Which is just the Internet.
AI hype is already ridiculous; the whole "are you using an AI to write your posts for you" paranoia is even more absurd. So what if they are? Then they'd just be stupid, futile thoughts leading exactly nowhere. Just like most non-AI-generated thoughts, except perhaps the one which leads to the fridge.
Comment by Aurornis 9 days ago
> So what if they are? Then they'd just be stupid, futile thoughts leading exactly nowhere.
FYI, spammers love LLM-generated posting because it allows them to "season" accounts on sites like Hacker News and Reddit without much effort. Post enough plausible-sounding comments without getting caught and you have another account for your upvote army, a service you can now sell to desperate marketing people who promised their boss they'd get on the front page of HN. This was already a problem with manually run accounts, but it took a lot of work to generate the comments and content.
That's the "so what".
Comment by balamatom 8 days ago
It would be massively funny if that escape hatch just sort of disappeared while we were looking at something else.
Your point stands, though.
>exact patterns common to AI generated comment
How can there be exact patterns to it?
Comment by LoganDark 9 days ago
Yes, if this is an LLM then it definitely wouldn't be zero-shot. I'm still on the fence myself, as I've seen similar writing patterns with Asperger's (specifically what used to be called Asperger's, not the general autism spectrum), but those comments don't appear to show any of the other tells to me, so I'm not particularly confident one way or the other.
Comment by balamatom 9 days ago
It's always enlightening to remember where Hans Asperger worked, and under what sociocultural circumstances that absolutely proverbial syndrome was first conceived.
GP evidently has some very subtle sort of expectations as to what authentic human expression must look like, which, however, seem to extend only as far as things like word choice and word order. (If that's all you ever notice about words, congrats: you're either a replicant or have a bad case of "learned literacy in the USA" syndrome.)
This makes me want to point out that neither the means nor the purpose of the kind of communication which GP seems to implicitly expect (from random strangers) are even considered to be a real thing in many places and by many people.
I do happen to find that sort of thing way more *cough* interesting *cough* than the whole "howdy stranger, are you AI or just a pseud" routine that HN posters seem to get such a huge kick out of.
Sure looks like one of the most basic moves of ideological manipulation: how about we solve the Turing Test "the wrong way around", by reducing the tester's ability to tell apart human from machine output, instead of building a more convincing language machine? Yay, expectations subverted! (While, in reality, both happen simultaneously.)
Disclaimer: this post was written by a certified paperclip optimizer.
Comment by samdoesnothing 9 days ago
Comment by snapdeficit 9 days ago
Comment by rootusrootus 9 days ago
(and I suspect that plenty of people will remain credulous anyway; AI slop is going to be rough to deal with for the foreseeable future).
Comment by lordnacho 9 days ago
Comment by Aurornis 9 days ago
That may or may not be what's happening with this account, but it's worth flagging accounts that generate a lot of questionable comments. If you look at that account's post history, there are a lot of familiar LLM patterns and repeated post fragments.
Comment by Conasg 9 days ago
Comment by snapcaster 9 days ago
Comment by lazide 9 days ago
Comment by legostormtroopr 9 days ago
Comment by FrustratedMonky 9 days ago
Comment by samdoesnothing 9 days ago
Comment by syndacks 9 days ago
Comment by koumou92 9 days ago
Comment by vkou 9 days ago
The point you raised is both a distraction... And does not engage with the ones it did.
Comment by jfindper 9 days ago
For what it's worth, even if the parent comment was submitted directly by ChatGPT itself, your comment brought significantly less value to the conversation.
Comment by probably_wrong 9 days ago
Comment by jfindper 9 days ago
But also, it's super annoying to sift through people saying "the word critical was used, this is obviously AI!". Not to mention it really fucking sucks when you're the person who wrote something and people start chanting "AI slop! AI slop!". Like, how am I going to prove it's not AI?
I can't wait until AI gets good enough that no one can tell the difference (or AI completely busts and disappears, although that's unlikely), and we can go back to just commenting on whether something was interesting or educational or whatever, instead of analyzing how many em-dashes someone used pre-2020 and extrapolating whether their latest post has one more em-dash than their average post so that we can get our pitchforks out and chase them away.
Comment by anonymous908213 9 days ago
Since LLMs are here to stay, what we actually need is for humans to get better at recognising LLM slop, and to stop allowing our communication spaces to be rotted by slop articles and slop comments. It's weird that people find this concept objectionable. It was historically a given that if a spambot posted a copy-pasted message, the comment would be flagged and removed. Now the spambot comments are randomly generated, and we're okay with it because the output appears vaguely-but-not-actually-human-like. That conversations are devolving into this is a failure of HN moderation for allowing spambots to proliferate unscathed, not of the users calling out the most blatantly obvious cases.
Comment by jfindper 9 days ago
The only spam I see in this chain is the flagged post by electric_muse.
It's actually kind of ironic that you bring up copy-paste spam bots, because people fucking love to copy-paste "AI slop" onto every comment and article that uses any punctuation rarer than a period.
Comment by anonymous908213 9 days ago
Yes: the original comment is unequivocally slop that genuinely gives me a headache to read.
It's not just "using any punctuation rarer than a period": it's the overuse and misuse of punctuation that serves as a tell.
Humans don't needlessly use a colon in every single sentence they write: abusing punctuation like this is actually really fucking irritating.
Of course, it goes beyond the punctuation: there is zero substance to the actual output, either.
> What's wild is that nothing here is exotic: subdomain enumeration, unauthenticated API, over-privileged token, minified JS leaking internals.
> Least privilege, token scoping, and proper isolation are friction in the sales process, so they get bolted on later, if at all.
This stupid pattern of LLMs listing off jargon like they're buzzwords does not add to the conversation. Perhaps the usage of jargon lulls people into a false sense of believing that what is being said is deeply meaningful and intelligent. It is not. It is rot for your brain.
Comment by jfindper 9 days ago
>"It's not just "using any punctuation rarer than a period": it's the overuse and misuse of punctuation that serves as a tell."
So, I'm actually pretty sure you're just copy-pasting my comments into ChatGPT to generate troll-slop replies, and I'd rather not converse with obvious AI slop.
Comment by anonymous908213 9 days ago
Comment by jfindper 9 days ago
Anyways, if you think something is AI, just flag it instead so I don't need to read the word "slop" for the 114th fucking time today.
Thankfully, this time, it was flagged. But I got sucked into this absolutely meaningless argument because I lack self-control.
Comment by anonymous908213 9 days ago
Comment by jfindper 9 days ago
oh shit I’m supposed to be done replying
Comment by slop-cop 8 days ago
Comment by Despyte 9 days ago
Comment by jfindper 9 days ago
Comment by chunk1000 9 days ago
Comment by observationist 9 days ago
It's become clear that the first, and most valuable, agent (or team of agents) to build is the one that responsibly and diligently lays out the opsec framework for whatever other system you're trying to automate.
A meta-security AI framework, a Cursor for opsec, would be the most valuable general-purpose AI tool any company could build, imo. Everything from journalism to law to coding would benefit immediately, and it'd provide invaluable data for post-training, reducing the problematic behaviors in the underlying models.
"Move fast and break things" is a lot more valuable if you have a red-team mechanism that scales with the product. Who knows how many facepalm-level failures like this are out there?
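Even a dumb, deterministic version of that red team is better than nothing. A minimal sketch of the most basic check this incident needed (endpoint list is made up; a real tool would pull routes from the API spec, run on every deploy, and needs a runtime with global fetch like Node 18+): hit each route with no credentials and flag anything that answers:

    // Hypothetical sketch: probe known API routes without any auth header and
    // warn on anything that returns a 2xx instead of rejecting the request.
    const endpoints: string[] = [
      "https://api.example.com/v1/documents",
      "https://api.example.com/v1/users",
      "https://api.example.com/internal/admin",
    ];

    async function probe(url: string): Promise<void> {
      // Deliberately no Authorization header; don't follow redirects to a login page.
      const res = await fetch(url, { redirect: "manual" });
      if (res.ok) {
        console.warn(`UNAUTHENTICATED ACCESS: ${url} answered with ${res.status}`);
      }
    }

    Promise.all(endpoints.map(probe)).catch(console.error);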
Comment by croes 9 days ago
Of course, it’s called proper software development.
Comment by jeffbee 9 days ago
Comment by venturecruelty 9 days ago
Comment by marginalx 9 days ago
Comment by venturecruelty 9 days ago
Comment by dghlsakjg 9 days ago
The legal world has plenty of ways to determine whether you are legally responsible for the outcome of an event. Right now the standard is civil penalties for provable negligence.
It sounds like GP is proposing a framework where we tighten up the definition of negligence, and add criminal penalties in addition to civil ones.
Comment by pbhjpbhj 8 days ago
Comment by dghlsakjg 9 days ago
This was just plain terrible web security.
Comment by imvetri 9 days ago
What does the above sound like, and what kind of professional writes like that?