I was banned from Claude for scaffolding a Claude.md file?
Posted by hugodan 2 days ago
Comments
Comment by bastard_op 2 days ago
Lately it's gotten entirely flaky, where chats will just stop working, simply ignoring new prompts, and otherwise go unresponsive. I wondered if maybe I'm pissing them off somehow like the author of this article did.
Now, even worse, Claude seemingly has no real support channel. You get their AI bot, and that's about it. Eventually it will offer to put you through to a human, and then tell you not to wait for them, they'll contact you via email. That email never comes, after several attempts.
I'm assuming at this point any real support is all smoke and mirrors, meaning I'm paying for a service now that has become almost unusable, with absolutely NO means of support to fix it. I guess for all the cool tech, customer support is something they have not figured out.
I love Claude as it's an amazing tool, but when it starts to implode on itself to the point that you actually require some out-of-box support, there is NONE to be had. Grok seems the only real alternative, and over my dead body would I use anything from "him".
Comment by throwup238 1 day ago
They’re growing too fast and it’s bursting the seams of the company. If there’s ever a correction in the AI industry, I think that will all quickly come back to bite them. It’s like Claude Code is vibe-operating the entire company.
Comment by laserDinosaur 1 day ago
(on the flip side, Codex seems like it's being SO efficient with tokens that it can be hard to understand its answers sometimes, it rarely includes files without you adding them manually, and it often takes quite a few attempts to get the right answer because it's so strict about what it's doing each iteration. But I never run out of quota!)
Comment by stareatgoats 1 day ago
The advice I got when scouring the internets was primarily to close everything except the file you’re editing and maybe one reference file (before asking Claude anything). For added effect add something like 'Only use the currently open file. Do not read or reference any other files' to the prompt.
I don't have any hard facts to back this up, but I'm sure going to try it myself tomorrow (when my weekly cap is lifted ...).
Comment by sigseg1v 1 day ago
Comment by adobrawy 1 day ago
Comment by withinboredom 1 day ago
Comment by HumanOstrich 1 day ago
Comment by solumunus 1 day ago
Comment by vidarh 1 day ago
Comment by solumunus 1 day ago
Comment by DANmode 16 hours ago
Comment by idonotknowwhy 1 day ago
You can stop most of this with
export DISABLE_NON_ESSENTIAL_MODEL_CALLS=1
And might as well disable telemetry, etc: export CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1
I also noticed every time you start CC, it sends off > 10k tokens preparing the different agents. So try not to close / re-open it too often.
Comment by JLCarveth 1 day ago
Comment by aanet 1 day ago
I've run out of quota on my Pro plan so many times in the past 2-3 weeks. This seems to be a recent occurrence. And I'm not even that active. Just one project, execute in Plan > Develop > Test mode, just one terminal. That's it. I keep getting a quota reset every few hours.
What's happening @Anthropic ?? Anybody here who can answer??
Comment by alexk6 1 day ago
It's the most commented issue on their GitHub and it's basically ignored by Anthropic. Title mentions Max, but commenters report it for other plans too.
Comment by quikoa 1 day ago
Comment by cyanydeez 1 day ago
Comment by czk 1 day ago
lol
Comment by Aeolun 1 day ago
Comment by CuriouslyC 1 day ago
Comment by 0x9e3779b6 1 day ago
Comment by vbezhenar 1 day ago
This fixed subscription plan with barely specified quotas looks like they want to extract extra money from the users who pay $200 and don't use that value, while at the same time preventing other users from going over $200. Like, I understand that it might work at scale, but it just feels a bit unfair to everyone?
Comment by rootusrootus 1 day ago
Comment by Jeff_Brown 1 day ago
Comment by sailfast 1 day ago
The API request method might have no cap, but they do cap Claude Code even on Max licenses, so it's easier to throttle as well if needed to control costs. Seems straightforward to me at any rate. Kinda like reserved instance vs. spot pricing models?
Comment by throwup238 1 day ago
Comment by charcircuit 1 day ago
Comment by Aeolun 1 day ago
Comment by frotaur 1 day ago
Comment by horsawlarway 1 day ago
That's limited to accessing the models through code/desktop/mobile.
And while I'm also using their subscriptions because of the cost savings vs direct access, having the subscription be considerably cheaper than the usage billing rings all sorts of alarm bells that it won't last.
Comment by nobodywillobsrv 1 day ago
If you look at tool calls like MCP and whatnot, you can see it gets ridiculous. Even though it's small, for example, calling the pal MCP from the prompt is still burning tokens AFAIK. This is "nobody's" fault in this case really, but you can see how the incentives are set up, and we all need to think about how to make this entire space more usable.
Comment by MillionOClock 1 day ago
Comment by jack_pp 1 day ago
Comment by rubenflamshep 1 day ago
Comment by 0x500x79 1 day ago
Comment by behnamoh 1 day ago
Comment by heavyset_go 1 day ago
Comment by bmurphy1976 1 day ago
I've been using CC until I run out of credits and then switch to Cursor (my employer pays for both). I prefer Claude but I never hit any limits in Cursor.
Comment by rubenflamshep 1 day ago
Comment by bmurphy1976 1 day ago
Comment by behnamoh 1 day ago
Waiting for Anthropic to somehow blame this on users again. "We investigated, turns out the reason was users used it too much".
Comment by genewitch 1 day ago
Comment by vunderba 1 day ago
Comment by troyvit 1 day ago
We're about to get Claude Code for work and I'm sad about it. There are more efficient ways to do the job.
Comment by ayewo 1 day ago
OpenCode is incentivized to make a good product that uses your token budget efficiently since it allows you to seamlessly switch between different models.
Anthropic as a model provider on the other hand, is incentivized to exhaust your token budget to keep you hooked. You'll be forced to wait when your usage limits are reached, or pay up for a higher plan if you can't wait to get your fix.
CC, specifically Opus 4.5, is an incredible tool, but Anthropic is handling its distribution the way a drug dealer would.
Comment by Brian_K_White 1 day ago
Which was nothing new itself of course. Conflicts of interest didn't begin with computers, or probably even writing.
Comment by vidarh 1 day ago
Controlling the coding tool absolutely is a major asset, and will be an even greater asset as the improvements in each model iteration make it matter less which specific model you're using.
Comment by jack_pp 1 day ago
Comment by genewitch 1 day ago
they get to see (if not opted-out) your context, idea, source code, etc. and in return you give them $220 and they give you back "out of tokens"
Comment by throwup238 1 day ago
It's also a way to improve performance on the things their customers care about. I'm not paying Anthropic more than I do for car insurance every month because I want to pinch ~~pennies~~ tokens, I do it because I can finally offload a ton of tedious work on Opus 4.5 without hand holding it and reviewing every line.
The subscription is already such a great value over paying by the token, they've got plenty of space to find the right balance.
Comment by NitpickLawyer 1 day ago
I've done RL training on small local models, and there's a strong correlation between length of response and accuracy. The more they churn tokens, the better the end result gets.
I actually think that the hyper-scalers would prefer to serve shorter answers. A token generated at 1k ctx length is cheaper to serve than one at 10k context, and way way cheaper than one at 100k context.
Comment by genewitch 1 day ago
i'd need to see real numbers. I can trigger a thinking model to generate hundreds of tokens and return a 3 word response (however many tokens that is), or switch to a non-thinking model of the same family that just gives the same result. I don't necessarily doubt your experience, i just haven't had that experience tuning SD, for example; which is also xformer based
I'm sure there's some math reason why longer context = more accuracy; but is that intrinsic to transformer-based LLMs? that is, per your thought that the 'scalers want shorter responses, do you think they are expending more effort to get shorter, equivalent accuracy responses; or, are they trying to find some other architecture or whatever to overcome the "limitations" of the current?
Comment by jumploops 1 day ago
Comment by vidarh 1 day ago
(And once you've done that, also consider whether a given task can be achieved with a dumber model - I've had good luck switching some of my sub-agents to Haiku).
Comment by behnamoh 1 day ago
They need more training data, and with people moving on to OpenCode/Codex, they wanna extract as much data from their current users as possible.
Comment by arthurcolle 1 day ago
Comment by behnamoh 1 day ago
Comment by genewitch 1 day ago
by default?
Comment by mystraline 1 day ago
Comment by fragmede 1 day ago
Quota's basically a count of tokens, so if a new CC session starts with the context relatively full, that could explain what's going on. Also, what language is this project in? If it's something noisy that uses up many tokens fast, even if you're using agents to preserve the context window in the main CC, those tokens still count against your quota, so you'd still be hitting it awkwardly fast.
Comment by ChicagoDave 1 day ago
I work for hours and it never says anything. No clue why you’re hitting this.
$230 pro max.
Comment by fluidcruft 1 day ago
Comment by idonotknowwhy 1 day ago
Comment by yjtpesesu2 1 day ago
Comment by 0xack 1 day ago
Comment by ChicagoDave 1 day ago
Comment by croes 1 day ago
Comment by nwatson 1 day ago
Comment by behnamoh 1 day ago
As someone with 2x RTX Pro 6000 and a 512GB M3 Ultra, I have yet to find these machines usable for "agentic" tasks. Sure, they can be great chat bots, but agentic work involves huge context sent to the system. That already rules out the Mac Studio because it lacks tensor cores and it's painfully slow to process even relatively large CLAUDE.md files, let alone a big project.
The RTX setup is much faster but can only hold models ≤192GB, which severely limits its capabilities: you're limited to low-quant GLM 4.7, GLM 4.7 Flash/Air, GPT-OSS 120B, etc.
Comment by NitpickLawyer 1 day ago
The best you can get today with consumer hardware is something like devstral2-small(24B) or qwen-coder30b(underwhelming) or glm-4.7-flash (promising but buggy atm). And you'll still need beefy workstations ~5-10k.
If you want open-SotA you have to get hardware worth 80-100k to run the big boys (dsv3.2, glm4.7, minimax2.1, devstral2-123b, etc). It's ok for small office setups, but out of range for most local deployments (esp considering that the workstations need lots of power if you go 8x GPUs, even with something like 8x 6000pro @ 300w).
Comment by zen4ttitude 1 day ago
Comment by rasmus1610 1 day ago
Comment by thunfischtoast 1 day ago
Comment by aja12 1 day ago
Comment by davidwritesbugs 1 day ago
Comment by thunfischtoast 1 day ago
Comment by IgorPartola 1 day ago
Comment by moring 1 day ago
At least it did not turn against them physically... "get comfortable while I warm up the neurotoxin emitters"
Comment by threecheese 1 day ago
I think they are just focusing on where the dough is.
Comment by draw_down 1 day ago
Comment by sixtyj 1 day ago
Comment by throwup238 1 day ago
Claude iOS app, Claude on the web (including Claude Code on the web) and Claude Code are some of the buggiest tools I have ever had to use on a daily basis. I’m including monstrosities like Altium and Solidworks and Vivado in the mix - software that actually does real shit constrained by the laws of physics rather than slinging basic JSON and strings around over HTTP.
It’s an utter embarrassment to the field of software engineering that they can’t even beat a single nine of reliability in their consumer facing products and if it wasn’t for the advantage Opus has over other models, they’d be dead in the water.
Comment by 0x500x79 1 day ago
Comment by loopdoend 1 day ago
Comment by throwup238 1 day ago
The only way Anthropic has two or three nines is in read-only mode, but that'd be like measuring AWS by console uptime while ignoring the actual control plane.
Comment by jrflowers 1 day ago
Comment by fizx 1 day ago
Comment by cactusplant7374 1 day ago
https://github.com/anthropics/claude-code/issues
Codex has fewer, but they also had quite a few outages in December. And I don't think Codex is as popular as Claude Code, but that could change.
Comment by qcnguy 1 day ago
The Reasonable Man might think that an AI company OF ALL COMPANIES would be able to use AI to triage bug tickets and reproduce them, but no! They expect humans to keep wasting their own time reproducing, pinging tickets and correcting Claude when it makes mistakes.
Random example: https://github.com/anthropics/claude-code/issues/12358
First reply from Anthropic: "Found 3 possible duplicate issues: This issue will be automatically closed as a duplicate in 3 days."
User replies, two of the tickets are irrelevant, one didn't help.
Second reply: "This issue has been inactive for 30 days. If the issue is still occurring, please comment to let us know. Otherwise, this issue will be automatically closed in 30 days for housekeeping purposes."
Every ticket I ever filed was auto-closed for inactivity. Complete waste of time. I won't bother filing bugs again.
Comment by Macha 1 day ago
Upcoming Anthropic Press Release: By using Claude to direct users to existing bugs reports, we have reduced tickets requiring direct action by xx% and even reduced the rate of incoming tickets
Comment by notsure2 1 day ago
Comment by b00ty4breakfast 1 day ago
Comment by Bombthecat 1 day ago
Comment by tuhgdetzhh 1 day ago
Comment by behnamoh 1 day ago
Comment by wwweston 1 day ago
Comment by fsckboy 1 day ago
Comment by outside1234 1 day ago
Comment by irishcoffee 1 day ago
Comment by PunchyHamster 1 day ago
Comment by cyanydeez 1 day ago
Growth isn't a problem unless you don't actually cover the cost of every user you sign up. Uber, but for poorly profitable business models.
Comment by oblio 20 hours ago
> Since its founding in 2009, Uber has incurred a cumulative net loss of approximately $10.9 billion.
Now, Uber has become profitable, and will probably become a bit more profitable over time.
But except for speculators and probably a handful of early shareholders, Uber will have lost everyone else money for 20 years since its founding.
For comparison, Lyft, Didi, Grab, Bolt are in the same boat, most of them are barely turning profitable after 10+ years. Turns out taxis are a hard business, even when you ramp up the scale to 11. Though they might become profitable over the long term and we'll all get even worse and more abusive service, and probably more expensive than regular taxis would have been, 15-20 years from now.
I mean, we got some better mobile apps from taxi services, so there's that.
Oh, also a massive erosion of labor rights around the world.
Comment by cyanydeez 7 hours ago
I don't see the current investments turning a profit. Maybe the datacenters will, but most of AI is going to be washed out when somewhere, someone wants to take out their investment and the new Bernie Madoff can't find another sucker.
Comment by unyttigfjelltol 1 day ago
Isn’t the future of support a series of automations and LLMs? I mean, have you considered that the AI bot is their tech support, and that it’s about to be everyone else’s approach too?
Comment by b00ty4breakfast 1 day ago
Comment by georgemcbay 1 day ago
I'm not really sure LLMs have made it worse. They also haven't made it better, but it was already so awful that it just feels like a different flavor of awful.
Comment by dejli 1 day ago
Comment by uxcolumbo 1 day ago
And kudos for refusing to use anything from the guy who's OK with his platform proliferating generated CSAM.
Comment by Balinares 1 day ago
Comment by Leynos 1 day ago
Comment by qcnguy 1 day ago
Comment by user3939382 1 day ago
Comment by cyanydeez 1 day ago
Comment by hecanjog 1 day ago
What have you found it useful for? I'm curious about how people without software backgrounds work with it to build software.
Comment by bastard_op 1 day ago
This now lets me apply my IT and business experience toward making bespoke code for my own uses so far, such as firewall config parsers specialized for wacky vendor CLIs, and filling in gaps in automation when there are no good vendor solutions for a given task. I started building my MCP server to enable me to use agents to interact with the outside world, such as invoking automation for firewalls, switches, routers, servers, even home automation ideally, and I've been successful so far in doing so, still not having to know any code.
I'm sure a real dev will find it to be a giant pile of crap in the end, but I've been doing things like applying security frameworks and code style guidelines using ruff to keep it from going too wonky, and actually working it up to a state where I can call it a 1.0. I plan to run a full audit cycle against it for security audits, performance testing, and whatever else I can to avoid it being entirely craptastic. If nothing else, it works for me, so others can take it or not once I put it out there.
Even being NOT a developer, I understand the need for applying best practices, and after watching a lot of really terrible developers adjacent to me make a living over the years, I think I can offer a thing or two in avoiding that.
Comment by bastard_op 1 day ago
Now I've been using it to build on my MCP server, which I now call endpoint-mcp-server (coming soon to a GitHub near you), and which I've modularized with plugins, adding lots more features and a more versatile Qt6 GUI with advanced workspace panels and widgets.
At least I was until Claude started crapping the bed lately.
Comment by cyanydeez 1 day ago
Comment by bastard_op 19 hours ago
The whole thing I needed was to let AI reach out and touch things, be my hands essentially. This is why I built my tmux/worker system, and I built out an xdg-portal integration to let it screenshot and soon interact with my desktop as a PoC.
I could let it just start logging into devices and modifying configs, but it's pretty dumb about stuff like modifying Fortigate configurations, at times confusing what it thinks it should do with what the CLI actually lets it do, so I have to proof much of it. That's why I'm building it to run ansible/terraform jobs instead, using frameworks provided by the vendors for direct configuration, to allow for atomic config changes as much as vendor implementations allow.
Comment by ofalkaed 1 day ago
I enjoy programming, but it is not my main interest, and I can't justify the time required to get competent, so I let Claude and ChatGPT pick up my slack.
Comment by xnx 2 hours ago
Gemini CLI, Google Antigravity ...?
Comment by left-struck 1 day ago
Comment by 0x9e3779b6 1 day ago
It’s effectively a multi-tenant interface.
I also used an individual account, but on a corp e-mail, previously.
You could generate a new multi-use CC in your vibe-bank app (e.g. Revolut), buy a burner (e)SIM for SMS (5 EUR in NL), then rewrite all requests at your MITM proxy to substitute the device ID with one not derived from your machine.
But the same device ID and same phone could be a perfectly legitimate use case: you registered on a corp e-mail, then you changed your workplace, using the same machine.
Or you lost access to your e-mail (what a pity).
But to get good use of it, someone would have to compose proper requests to ClickHouse or whatever they use for logs, and build some logic to run as a service or webhook to detect duplicates, with a pipeline to act on it.
And a good percentage of flags wouldn't have been ToS violations.
That's a bad vibe; can you imagine how much trial-and-error prompting it requires?..
They can't even vibe their way through the Claude Code bugs alone, on time!
Comment by Bombthecat 1 day ago
Max plan, and on average I use it ten times a day? Yeah, I'm cancelling. Guess they don't need me.
Comment by bastard_op 1 day ago
Comment by ph4evers 1 day ago
Comment by steve1977 1 day ago
This made me chuckle.
Comment by Aeolun 1 day ago
It really leads me to wonder if it’s just my questions that are easy, or maybe the tone of the support requests that go unanswered is just completely different.
Comment by serf 1 day ago
I can't be alone. Literally the worst customer experience I've ever had with the most expensive personal dot com subscription I've ever paid for.
Never again. When Google sets the customer service bar there are MAJOR issues.
Comment by raptorraver 1 day ago
This happens to me more often than not, both in Claude Desktop and on the web. It seems that the longer the conversation goes, the more likely it is to happen. Frustrating.
Comment by Rastonbury 1 day ago
Comment by Balinares 1 day ago
Comment by spike021 1 day ago
I had this start happening around August/September and by December or so I chose to cancel my subscription.
I haven't noticed this at work so I'm not sure if they're prioritizing certain seats or how that works.
Comment by sawjet 1 day ago
Comment by fragmede 1 day ago
Comment by syntaxing 1 day ago
Comment by deaux 1 day ago
Main one is that it's ~3 times slower. This is the real dealbreaker, not quality. I can guarantee that if tomorrow we woke up and gpt-5.2-codex became the same speed as 4.5-opus without a change in quality, a huge number of people - not HNers but everyone price sensitive - would switch to Codex because it's so much cheaper per usage.
The second one is that it's a little worse at using tools, though 5.2-codex is pretty good at it.
The third is that its knowledge cutoff is far enough behind both Opus 4.5's and Gemini 3's that it's noticeable and annoying when you're working with more recent libraries. This is irrelevant if you're not using those.
For Gemini 3 Pro, it's the same first two reasons as Codex, though the tool-calling gap is much bigger.
Mistral is of course so far removed in quality that it's apples to oranges.
Comment by dudeinhawaii 1 day ago
My experience on Claude Max (still on it till end-of-month) has been frequent incomplete assignments and troubling decision making. I'll give you an example of each from yesterday.
1. Asked Claude to implement the features in a v2_features.md doc. It completed 8 of 10, but 3 incorrectly. I gave GPT-5.1-Codex-Max (high) the same tasks and it completed 10 of 10, but took perhaps 5-10x as long. Annoyingly, with LLM variability, I can't know for sure that if I tried Claude again it would get it correct. The only thing I do know is that GPT-5.2 and 5.1 do a lot more "double-checking", both prior to executing and after.
2. I asked Claude to update a string being displayed in the UI of my app to display something else instead. The string is powered by a json config. Claude searched the code, somehow assumed it was being loaded by a db, did not find the json and opted to write code to overwrite whatever comes out of the 'db' (incorrect) to be what I asked for. This is... not desired behavior and the source of a category of hidden bugs that Claude has created in the past (other models do this as well but less often). Max took its time, found the source json file, and made the update in the correct place.
I can only "sit back and let an agent code" if I trust that it'll do the work right. I don't need it fast, I need it done right. It's already saving me hours where I can do other things in parallel. So, I don't get this argument.
That said, I have a Claude Max and OpenAI Pro subscription and use them both. I instead typically have Claude Opus work on UI and areas where I can visually confirm logic quickly (usually) and Codex in back-end code.
I often wonder how much the complexity of codebases affects how people discuss these models.
Comment by EnPissant 1 day ago
Comment by deaux 1 day ago
Comment by pixelmelt 1 day ago
Comment by bastard_op 1 day ago
So yeah, codex kinda sucks to me. Maybe I'll try mistral.
Comment by thtmnisamnstr 1 day ago
Comment by samusiam 1 day ago
Comment by subscribed 1 day ago
Comment by Conscat 1 day ago
Comment by dudeinhawaii 1 day ago
That has reliably worked for me with Gemini, Codex, and Opus. If you can get them to check off features as they complete them, it works even better (i.e., success criteria and an empty checkbox for them to mark off).
Comment by genewitch 1 day ago
When you have a conversation with an AI, in simple terms, when you type a new line and hit enter, the client sends the entire conversation to the LLM. It has always worked this way, and it's how "reasoning tokens" were first realized: you allow a client to "edit" the context, and the client deletes the hallucination, then says "Wait..." at the end of the context, and hits enter.
The LLM is tricked into thinking it's confused/wrong/unsure, and "reasons" more about that particular thing.
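A toy illustration of "the client sends the entire conversation each turn", as a curl call against the public Messages API (the model name and message text are just placeholders; the API does document that a trailing assistant message is continued rather than answered, which is the "Wait..." trick):

  # every turn POSTs the whole history; here the client has "edited" the
  # last assistant message and appended "Wait..." so the model keeps reasoning
  curl https://api.anthropic.com/v1/messages \
    -H "x-api-key: $ANTHROPIC_API_KEY" \
    -H "anthropic-version: 2023-06-01" \
    -H "content-type: application/json" \
    -d '{
      "model": "claude-sonnet-4-5",
      "max_tokens": 512,
      "messages": [
        {"role": "user", "content": "Why does my loop never terminate?"},
        {"role": "assistant", "content": "The loop terminates after 10 iterations. Wait..."}
      ]
    }'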
Comment by bastard_op 1 day ago
Comment by mkl 1 day ago
Comment by bayarearefugee 1 day ago
I'm 99.9999% sure Gemini has a dynamic scaling system that will route you to smaller models when it's overloaded, and that seems to be when it will still occasionally do things like tell you it edited some files without actually presenting the changes to you, or go off on other strange tangents.
Comment by elyobo 1 day ago
Comment by andrewinardeer 1 day ago
Comment by keepamovin 1 day ago
Comment by cyanydeez 1 day ago
That is, you and most Claude users aren't paying the actual cost. You're like an Uber customer a decade ago.
Comment by complianceowl 1 day ago
Comment by TyrunDemeg101 1 day ago
It completely blew me away and I felt suddenly so betrayed. I was paying $200/mo to fully utilize a service they offered and then without warning I apparently did something wrong and had no recourse. No one to ask, no one to talk to.
My advice is to be extremely wary of Anthropic. They paint themselves as the underdog/good guys, but they are just as faceless as the rest of them.
Oh, and have a backup workflow. Find / test / use other LLMs and providers. Don't become dependent on a single provider.
Comment by 7777777phil 1 day ago
Comment by xattt 1 day ago
Comment by oblio 21 hours ago
It's like the "unlimited Gmail storage" that's now stuck at 15GB since 2012, despite the cost of storage probably going down 20x since then.
Companies launch products with deceptive marketing and features they can't possibly support and then when they get called on their bluff, they have to fall back to realistic terms and conditions.
Comment by kachapopopow 1 day ago
Comment by oblio 21 hours ago
What do you mean with this?
Comment by kachapopopow 9 hours ago
Comment by CamperBob2 1 day ago
I have pro subscriptions to all three major providers, and have been planning to drop one eventually. Anthropic may end up making the decision for me, it sounds like, even though (or perhaps because) I've been using Claude CLI more than the others lately.
What I'd really like to do is put a machine in the back room that can do 100 tps or more with the latest, greatest DeepSeek or Kimi model at full native quantization. That's the only way to avoid being held hostage by the big 3 labs and their captive government, which I'm guessing will react to the next big Chinese model release by prohibiting its deployment by any US hosting providers.
Unfortunately it will cost about $200K to do this locally. The smart money says (but doesn't act like) the "AI bubble" will pop soon. If the bubble pops, that hardware will be worth 20 cents on the dollar if I'm lucky, making such an investment seem reckless. And if the bubble doesn't pop, then it will probably cost $400K next year.
First-world problems, I guess...
Comment by LTL_FTC 1 day ago
https://forums.developer.nvidia.com/t/6x-spark-setup/354399/...
Or a single user at about 10tps.
This is probably around $30k if you go with the 1tb models.
Comment by bayindirh 1 day ago
When people talk about the cost and requirements of AI, other people can't grasp what they are talking about.
Comment by CamperBob2 1 day ago
A couple of DGX Stations are more likely to work well for what I have in mind. But at this point, I'd be pleasantly surprised if those ever ship. If they do, they will be more like $200K each than $100K.
Comment by LTL_FTC 21 hours ago
Edit to add:
Yeah, those stations with the GB300 look more along the lines of what I would want as well but I agree, they’re probably way beyond my reach.
Comment by pstuart 1 day ago
In effect like traditional on-prem services that have cloud services to handle peak loads...
The tech is still relatively new and there's bound to be changes that can enable this -- just like how we went from 8088 to 386 (six years later). That was a ground breaking change and while Moore's law may be dead I expect the cost to drop significantly over time.
One can dream at least.
Comment by indiantinker 1 day ago
Comment by chii 1 day ago
Comment by mrkeen 1 day ago
Can you sell or share farm-saved seed?
"It is illegal to sell, buy, barter or share farm-saved seed," warns Sam. [1]
Can feed grain be sown?
No – it is against the law to use any bought-in grain to establish a crop. [1]
FTC sues John Deere over farmers' right to repair tractors
The lawsuit, which Deere called "meritless," accuses the company of withholding access to its technology and best repair tools and of maintaining monopoly power over many repairs. Deere also reaps additional profits from selling parts, the complaint alleges, as authorized dealers tend to sell pricey Deere-branded parts for their repairs rather than generic alternatives. [2]
[1] https://www.fwi.co.uk/arable/the-dos-and-donts-of-farm-saved...
[2] https://www.npr.org/2025/01/15/nx-s1-5260895/john-deere-ftc-...
Comment by midtake 1 day ago
Comment by woah 1 day ago
Were there a lucky few who found an unoccupied niche where there was some surplus for a generation or two? Sure. But pretending like this was commonplace is like pretending that everyone in the 1600's was a nobleman.
> Compared to someone from the 1600s who could eat a gourmet meal prepared by their 10 cooks every night, we are quite oppressed.
Comment by kuerbel 1 day ago
Comment by B1FIDO 2 hours ago
Until very recently (like 6 decades ago) the area where I live was right up against rural countryside, with sheep grazing, cattle farms, vegetables grown and everything. And those farmers sold out to real-estate developers.
But there are literally homeowners in SFHs with chickens out front and roosters crowing in the morning. And some of my colleagues own chickens and harvest the eggs every day for their own kitchens and families.
But just going through a few urban neighborhoods on Google Maps, it was not long before I found little farms. And these farms sometimes have websites where they advertise that they are selling produce and dairy: raw milk, fresh eggs, fresh fruits & veg, mutton and even live sheep or goats. And they may be doing it on the sly or under the table, and "raw milk" is especially a controversial marketplace right now, but they do it and seem to do alright.
These "urban farms" are often real close to tactical supply shops running out of some guy's garage, and other little "cottage industries" where people who purchased "McMansions" are recouping their investments, basically by skirting the city's zoning laws and tax regulations around businesses.
So yeah, if you've got a brown thumb like me, you can go shop at a farmers market, or you can look up one of these "urban farms" and buy direct, cash in hand.
Comment by amiga386 1 day ago
So you're not free to grow your own vegetables either; just like fishing, farming is regulated to manage limited resources. Things get ugly fast when you start raising pigs in your city apartment, or start polluting with pesticide runoff, or start diverting your neighbour's water supply...
Comment by galleywest200 1 day ago
Gardens are a thing, and you do not need your house to be on agricultural land to grow a garden, at least in my state.
Comment by amiga386 2 hours ago
Most people don't have that, and can't afford that, hence why they take the route of earning money some other way, and using the money to buy food made by others, from supermarkets. They can supplement their diet with home-grown fruit and veg, but few can sustain their family on home-grown produce.
Comment by naasking 1 day ago
https://www.savingadvice.com/articles/2025/07/07/10160132_th...
Comment by Cthulhu_ 1 day ago
(ex: Palestine got their utilities and food cut off so that thousands starved, Ukraine's infrastructure is under attack so that thousands will die from exposure, and that's after they went for their food exports, starving more of the people that depended on them)
Comment by shaky-carrousel 1 day ago
Comment by gregoriol 1 day ago
Comment by mrkeen 1 day ago
https://lavialibera.it/en-schede-2447-francesca_albanese_und...
Comment by adastra22 1 day ago
Comment by Scrapemist 1 day ago
Comment by datsci_est_2015 1 day ago
Comment by cryptonector 1 day ago
Comment by StefanBatory 1 day ago
Comment by j16sdiz 1 day ago
Comment by chii 14 hours ago
When did that happen? I can transfer as large an amount as I like, provided I can prove said money wasn't the proceeds of crime.
Comment by cryptonector 22 hours ago
The way we normally deal with all these problems as a society is to apply force (via police, courts) to ensure that everyone has certain 'rights'.
The problem today is that while you have certain rights, like fairly strong rights to private property interests in real estate and such, we generally have very weak private property rights in financial properties. This is a problem world-wide. We need to fix this in our countries.
Comment by tomnipotent 1 day ago
Wars are frequently fought over these three things, and there's no shortage of examples of the humans controlling these resources lording over those who did not.
Comment by jfyi 1 day ago
Comment by GeoAtreides 1 day ago
Comment by dragonwriter 1 day ago
Comment by snowmobile 1 day ago
...
Comment by leoh 1 day ago
Comment by omer_balyali 2 days ago
Banned and appeal declined without any real explanation of what happened, other than saying "violation of ToS", which can be basically anything, except there was really nothing to trigger that, other than using most of the free credits they gave to test CC Web in less than a week. (No third-party tools or VPN or anything, really.) Many people had similar issues at the same time, reported on Reddit, so it wasn't an isolated case.
Companies and their brand teams work hard to create trust, then an automated false-positive can break that trust in a second.
As their ads say: "Keep thinking. There has never been a better time to have a problem."
I've been thinking since then, what was the problem. But I guess I will "Keep thinking".
Comment by georgemcbay 1 day ago
Luckily, I happen to think that eventually all of the commercial models are going to have their lunch eaten by locally run "open" LLMs which should avoid this, but I still have some concerns more on the political side than the technical side. (It isn't that hard to imagine some sort of action from the current US government that might throw a protectionist wrench into this outcome).
Comment by omer_balyali 1 day ago
From their Usage Policy: https://www.anthropic.com/legal/aup "Circumvent a ban through the use of a different account, such as the creation of a new account, use of an existing account, or providing access to a person or entity that was previously banned"
If an organisation is large enough and has the means, it MIGHT get help, but if the organisation is small, and especially if the organisation is owned by the person whose personal account was suspended... then there is no way to get it fixed, if this is how they approach it.
I understand that if someone has malicious intentions/actions while using their service they have every right to enforce this rule, but what if it was an unfair suspension where the user/employee didn't actually violate any policies, what is the course of action then? What if the employer's own service/product relies on the Anthropic API?
Anthropic has to step up. Talking publicly about the risks of AI is nice and all, but as an organisation they should follow what they preach. Their service is "human-like" until it's not, then you are left alone and out.
Comment by ricardonunez 1 day ago
Comment by szundi 1 day ago
Comment by failerk 1 day ago
Edit: my only other comment on HN is also complaining about this 11 months ago
Comment by kpozin 1 day ago
I then had more success signing up with the mobile app, despite using the same phone number; I guess they don't trust their website for account creation.
Comment by sambuccid 1 day ago
I have a friend who had a similar experience with Amazon, and using a European online platform specific to this, he actually got Amazon to reopen his business account.
There is a useful list of these european complaints platforms at the bottom of this page: https://digital-strategy.ec.europa.eu/en/policies/dsa-out-co...
Comment by PunchyHamster 1 day ago
Comment by urbandw311er 1 day ago
There was a famous case here in the UK of a cake shop that banned a customer for wanting a cake made for a gay wedding because it was contra the owners’ religious beliefs. That was taken all the way up to the Supreme Court IIRC.
Comment by PurpleRamen 1 day ago
It's not about size, it's about justification to fight the ban. You should be able to check if the business has violated your legal rights, or if they even broke their own rules, because failure happens.
> There was a famous case here in the UK of a cake shop that banned a customer for wanting a cake made for a gay wedding because it was contra the owners’ religious beliefs. That was taken all the way up to the Supreme Court IIRC.
I guess it was this one: https://en.wikipedia.org/wiki/Lee_v_Ashers_Baking_Company_Lt...
There was a similar case in USA too: https://en.wikipedia.org/wiki/Masterpiece_Cakeshop_v._Colora...
Comment by ssl-3 1 day ago
To that end: I think the parent comment was suggesting that when a person is banned from using a thing, then that person deserves to know the reason for the ban -- at the very least, for their own health and sanity.
It may still be an absolute and unappealable ban, but unexplained bans don't allow a person learn, adjust, and/or form a cromulent and rational path forward.
Comment by mannykannot 1 day ago
Comment by pseudony 1 day ago
But Anthropic and “Open”AI especially are firing on all bullshit cylinders to convince the world that they are responsible, trustable, but also that they alone can do frontier-level AI, and they don’t like sharing anything.
You don’t get to both insert yourself as an indispensable base-layer tool for knowledge-work AND to arbitrarily deny access based on your beliefs (or that of the mentally crippled administration of your host country).
You can try, but this is having your cake and eating it too territory, it will backfire.
Comment by fragmede 1 day ago
Comment by direwolf20 1 day ago
Comment by user3939382 1 day ago
Comment by robinsonb5 1 day ago
Comment by urbandw311er 1 day ago
Comment by croes 1 day ago
The cake shop said why. FB, Google, Anthropic don't say why, so you don't even know what exactly you need to sue for. That is kafkaesque
Comment by 9rx 1 day ago
If you want to live life as a hermit, good on ya, but then maybe accept that life and don't offer other people stuff?
Comment by direwolf20 1 day ago
Comment by qcnguy 1 day ago
Comment by songodongo 1 day ago
Comment by j16sdiz 1 day ago
Comment by disgruntledphd2 1 day ago
Worse, the controls that governments have over financial systems are being viewed as a model for what they should have over technology.
Comment by krzat 1 day ago
Comment by monster_truck 1 day ago
Comment by Markoff 1 day ago
Recital (71) of the GDPR
"The data subject should have the right not to be subject to a decision, which may include a measure, evaluating personal aspects relating to him or her which is based solely on automated processing and which produces legal effects concerning him or her or similarly significantly affects him or her, such as automatic refusal of an online credit application or e-recruiting practices without any human intervention."
https://commission.europa.eu/law/law-topic/data-protection/r...
Comment by Cthulhu_ 1 day ago
[0] https://digital-strategy.ec.europa.eu/en/policies/digital-se...
Comment by direwolf20 1 day ago
"The right to obtain a copy referred to in paragraph 3 shall not adversely affect the rights and freedoms of others."
and then you will have to sue them.
Comment by j16sdiz 1 day ago
Comment by johndough 1 day ago
Comment by cortesoft 2 days ago
I think I kind of have an idea what the author was doing, but not really.
Comment by Aurornis 2 days ago
Every once in a while someone would take it personally and go on a social media rampage. The one thing I learned from being on the other side of this is that if someone seems like an unreliable narrator, they probably are. They know the company can't or won't reveal the true reason they were banned, so they're virtually free to tell any story they want.
There are so many things about this article that don't make sense:
> I'm glad this happened with this particular non-disabled-organization. Because if this by chance had happened with the other non-disabled-organization that also provides such tools... then I would be out of e-mail, photos, documents, and phone OS.
I can't even understand what they're trying to communicate. I guess they're referring to Google?
There is, without a doubt, more to this story than is being relayed.
Comment by fluoridation 2 days ago
Non-disabled organization = the first party provider
Disabled organization = me
I don't know why they're using these weird euphemisms or ironic monikers, but that's what they mean.
Comment by mattnewton 1 day ago
Comment by fluoridation 1 day ago
Comment by mrkeen 1 day ago
Anthropic banned the author for doing nothing wrong, and called him an organisation for some reason.
In this case, all he lost was access to a service which develops a split personality and starts shouting at itself, until it gets banned, rather than completing a task.
Google also provides access to LLMs.
Google could also ban him for doing nothing wrong, and could refer to him as an organisation, in which case he would lose access to services providing him actual value (e-mail, photos, documents, and phone OS.)
Another possibility is there (which was my first reading before I changed my mind and wrote the above):
Google routes through 3rd-party LLMs as part of its service ("link to a google docs form, with a textbox where I tried to convince some Claude C"). The author does nothing wrong, but the Claude C reading his Google Docs form could start shouting at itself until it gets Google banned, at which point Google's services go down, and the author again loses actually valuable services.
Comment by mattnewton 1 day ago
The absurd language is meant to highlight the absurdity they feel over the vague terms in their sparse communication with anthropic. It worked for me.
Comment by fluoridation 1 day ago
Comment by mattnewton 1 day ago
Anthropic and Google are organizations, and so "non-disabled organization" here uses that absurdly vague language as a way to highlight how bad their error message was. It's obtuseness to show how obtuse the error message was to them.
Comment by saghm 1 day ago
Comment by nofriend 1 day ago
ironic, isn't it?
Comment by saghm 1 day ago
Comment by gruez 1 day ago
Comment by fluoridation 1 day ago
> a textbox where I tried to convince some Claude C in the multi-trillion-quadrillion dollar non-disabled organization
> So I wrote to their support, this time I wrote the text with the help of an LLM from another non-disabled organization.
> My guess is that this likely tripped the "Prompt Injection" heuristics that the non-disabled organization has.
A "non-disabled organization" is just a big company. Again, I don't understand the why, but I can't see any other way to interpret the term and end up with a coherent idea.
Comment by saghm 1 day ago
Comment by fluoridation 1 day ago
Comment by dxdm 1 day ago
I wish there were more comments like yours, and fewer people getting upset over words and carrying what feels like resentment into public comments.
Apologies to all for this meta comment, but I'd like to send some public appreciation for this effort.
Comment by saghm 1 day ago
Comment by fluoridation 23 hours ago
Comment by hluska 1 day ago
Comment by fluoridation 1 day ago
Comment by quietsegfault 1 day ago
Comment by gnatolf 1 day ago
Comment by fluoridation 1 day ago
>Because what is meant by "this organization has been disabled" is fairly obvious. The object in Anthropic's systems belonging to the class Organization has changed to the state Disabled, so the call cannot be executed.
Comment by epolanski 1 day ago
I once interviewed a developer who had a 20-something-item list of "skills" and technologies he'd worked with.
I tried basic questions on different topics, but the candidate would kinda default to "haven't touched it in a while" or "we didn't use that feature". I tried general software design questions, asking about problems he'd solved and his preferences on ways of working, and it consistently felt like he didn't have much to say, if anything at all.
Long story short, I sent a feedback email the day after saying that we had issues evaluating him properly, suggested he trim his CV to topics he liked talking about instead of risking being asked about stuff he no longer remembered much, and finally suggested he always come prepared with insights into software or human problems he'd solved, as they can tell a lot about how he works and it's a very common question in pretty much all interview processes.
God forbid, he threw the biggest tantrum on a career subreddit and LinkedIn, cherry-picking some of my sentences and accusing my company and me of looking for the impossible candidate, of wanting a team and not a developer, and yada yada yada. And you know how quickly the internet bandwagons for (fake) stories of injustice and bad companies.
It then became obvious to me why corporate lingo uses corporate lingo and rarely gives real feedback. Even though I had nothing but good experience with 99 other candidates who appreciated getting proper feedback, one made sure I will never expose myself to something like that ever again.
Comment by freedomben 1 day ago
Then a lawsuit happened. One of the candidates cherry-picked some of our feedback, straight up made up some stuff that was never said, and went on a social media tirade. After typical internet outrage culture took over, the candidate decided to lawyer up and sue us, claiming discrimination. The case against us was so laughably bad that if you didn't know whether it was real or not, you could very reasonably assume it was a satire piece. Our company lawyer took a look at it and immediately told us that it was clearly intended to get to some settlement, and never actually see any real challenge. The lawyer for the candidate even admitted as much when we met with them. Our company lawyer pushed hard to get things into arbitration, but the opposing side did everything they could to escalate up the chain to someone who would just settle with them.
Well, it worked. Company management decided to just settle, with a non-disparagement clause. They also came down with a policy of not allowing software engineers to talk directly with candidates other than when asking questions during interviews. We also had to have an HR person in the room for every interview after that. We had to 180 and become people who don't provide any feedback at all. We ended up printing a banner that said "no good deed goes unpunished" and hung it in our offices.
Comment by PunchyHamster 1 day ago
The server farm that decided, probably via some vibe-coded mess, to ban the account is actively being paid for by the customer it banned.
Like, there are some reasons not to disclose much to free users, like making people who try to get around limits work harder, etc., but this is a (well-)paying user; the least they deserve is a reason, and any system like that should probably throw a warning first anyway.
Comment by netsharc 1 day ago
Something along the lines of "here's the contract, we give you feedback, you don't make it public [is some sharing ok? e.g. if they want to ask their life coach or similar], if you make it public the penalty is $10000 [no need to be crazy punitive], and if you make it public you agree we can release our notes about you in response."
(Looking forward to the NALs responding why this is terrible.)
Comment by ketzu 1 day ago
My NAL guess is that it will go a little like this:
* Candidate makes disparaging post on reddit/HN.
* Gets many responses rallying behind him.
* Company (if they notice at all) sues him for breach of Non-Disparagement-Agreement.
* Candidate makes followup post/edit/comment about being sued for their post.
* Gets even more responses rallying behind him.
Result: Company gets $10,000 and even more damage to its image.
(Of course it might discourage some people from making that post to begin with, which would have been the goal. You might never try to enforce the NDA to prevent the above situation. Then it's just a question of: Is the effort to draft the NDA worth the reduction in risk of negative exposure, when you can simply avoid all of it by not providing feedback.)
Comment by lysace 1 day ago
So there's that :).
Comment by dragonwriter 2 days ago
It’s written deliberately elliptically for humorous effect (which, sure, will probably fall flat for a lot of people), but the reference is unmistakable.
Comment by nawgz 1 day ago
Right, but we're talking about a private isolated AI account. There is no sense of social interaction, collaboration, shared spaces, shared behaviors... Nothing. How can you have such an analogue here?
Comment by Aurornis 1 day ago
Comment by nawgz 1 day ago
Comment by direwolf20 1 day ago
Comment by genewitch 1 day ago
Comment by saghm 1 day ago
Comment by genewitch 1 day ago
or places that mill anything and don't clean their rafters, who then get a tool crashing into a work piece, which shakes the building, which throws all the dust into the air, which is then sparked off by literally anything, like low humidity.
see also another example: the Domino Sugar explosion.
Comment by QuadmasterXLII 1 day ago
Comment by PunchyHamster 1 day ago
> Years ago I was involved in a service where we some times had to disable accounts for abusive behavior. I'm talking about obvious abusive behavior, akin to griefing other users.
But this isn't a service where you can "grief other users", so that reason doesn't apply. It's purely "just providing a service", so the only reason to be outright banned (not just rate limited) is if they were trying to hack the provider, and frankly "the vibe-coded system misbehaving" is a far more likely cause.
> Every once in a while someone would take it personally and go on a social media rampage. They know the company can't or won't reveal the true reason they were banned, so they're virtually free to tell any story they want.
The company chose to arbitrarily apply some rules vaguely related to the ToS that the user signed, and decided that giving a warning was too much work, then banned their account without actually saying what the problem was. They deserve every bit of bad PR.
>> I'm glad this happened with this particular non-disabled-organization. Because if this by chance had happened with the other non-disabled-organization that also provides such tools... then I would be out of e-mail, photos, documents, and phone OS.
> I can't even understand what they're trying to communicate. I guess they're referring to Google?
They are saying that getting banned with no appeal, warning, or reason given from a service that is more important to their daily lives would be terrible, whether that's the Google or Microsoft set of services or any other.
Comment by alistairSH 2 days ago
I think the author was doing some sort of circular prompt injection between two instances of Claude? The author claims "I'm just scaffolding a project" but that doesn't appear to be the case, or what resulted in the ban...
Comment by Romario77 2 days ago
The way Claude did it triggered the ban: it used all caps, which apparently trips some kind of internal alert. Anthropic probably has some safeguards to prevent hacking/prompt injection, and what the first Claude did to CLAUDE.md tripped one of them.
And it doesn't look like it was a proper use of the safeguard; they banned him for no good reason.
Comment by healsdata 1 day ago
Comment by ribosometronome 1 day ago
>If you want to take a look at the CLAUDE.md that Claude A was making Claude B run with, I commited it and it is available here.
https://github.com/HugoDaniel/boreDOM/blob/9a0802af16f5a1ff1...
Comment by BoorishBears 1 day ago
Comment by falloutx 2 days ago
Comment by layer8 1 day ago
Comment by redeeman 2 days ago
Comment by cryptonector 1 day ago
Comment by rvba 2 days ago
The "disabled organization" looks like a sarcastic comment on the crappy error code the author got when banned.
Comment by darkwater 1 day ago
That you might be trying to jailbreak Claude and Anthropic does not like that (I'm not endorsing, just trying to understand).
Comment by lazyfanatic42 2 days ago
Comment by pjbeam 2 days ago
Comment by superb_dev 2 days ago
Comment by ryandrake 2 days ago
Comment by Bootvis 1 day ago
At least, that's my reading, but it appears to confuse about half of the commenters here.
Comment by ryandrake 1 day ago
Comment by layer8 1 day ago
Comment by superb_dev 1 day ago
Comment by ashirviskas 1 day ago
Comment by genewitch 1 day ago
https://community.bitwarden.com/t/re-enabling-a-disabled-org...
https://community.meraki.com/t5/Dashboard-Administration/dis...
the former i have heard for a couple decades, the latter is apparently a term of art to prevent hurt feelings or lawsuits or something.
Google thinks i want ADA-style organizations, but its AI caught on that i might not mean organizations for disabled people
btw "ADA" means Americans with Disabilities Act. AI means Artificial Intelligence. A decade is 10 years long. "term of art" is a term of art for describing stuff like jargon or lingo of a trade, skill, profession.
Jargon is specialized, technical language used in a field or area of study. Lingo pins to jargon, but is less technical.
Google is a company that started out crawling the web and making a web search site that they called a search engine. They are now called Alphabet Company (ABC). Crawling means to iteratively parse the characters sent by a webserver and follow links therein, keeping a copy of the text from each such html. HTML is hypertext markup language, hypertext is like text, but more so.
Language is how we communicate.
I can go on?
p.s. if you want a better word, your complaint is about the framing. you didn't gel with the framing of the article. My friend, who holds a doctorate, defended a thesis about how virtually every platform argument is really a framing issue. platform as in, well, anything you care to defend. mac vs linux, wifi vs ethernet, podcasts vs music, guns vs no guns, red vs blue. If you can reduce the frame of the context to something both parties can agree to, you can actually hold a real, intellectual debate, and get at real issues.
Comment by staticman2 2 days ago
Comment by superb_dev 2 days ago
Comment by Aurornis 2 days ago
Comment by tstrimple 2 days ago
Comment by schnebbau 1 day ago
Comment by redeeman 21 hours ago
Comment by redhale 1 day ago
Comment by direwolf20 1 day ago
Comment by olalonde 1 day ago
Comment by layer8 1 day ago
Comment by renewiltord 1 day ago
I want this Claude.md to be useful. What is the natural solution to me?
Comment by olalonde 1 day ago
Comment by renewiltord 1 day ago
Comment by olalonde 1 day ago
> do task 1
...task fails...
> please update Claude.md so you don't make X mistake
> /clear
> do task 2
... task fails ...
> please update Claude.md so you don't make Y mistake
> /clear
etc.
If you want a clean state between tasks you can just commit your Claude.md and `git reset --hard`. I just don't get why you'd need to have a separate Claude that is solely responsible for updating Claude.md. Maybe they didn't want to bother with git?
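And if typing that loop by hand gets old, a rough sketch of scripting it (this assumes Claude Code's headless `claude -p` print mode; the `./run_tests.sh` check and the prompt wording are purely illustrative):

  # illustrative loop: each `claude -p` call starts a fresh context, like /clear
  for task in "do task 1" "do task 2"; do
    claude -p "$task"
    if ! ./run_tests.sh; then   # hypothetical "did the task fail?" check
      claude -p "The last task failed; update CLAUDE.md so you don't repeat the mistake."
      git add CLAUDE.md && git commit -m "update CLAUDE.md with lessons learned"
      git reset --hard          # drop the failed attempt, keep the CLAUDE.md commit
    fi
  done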
Comment by renewiltord 1 day ago
Sitting there and manually typing in "do thing 1; oh it failed? make it not fail. okay, now commit" is incredibly tedious.
Comment by olalonde 1 day ago
Comment by renewiltord 1 day ago
You're correct that his "pasting the error back in Claude A" does sort of make the whole thing pointless. I might have assumed more competence on his side than is warranted. That makes the whole comment thread on my side unlikely to be correct.
Comment by raincole 2 days ago
Comment by pocksuppet 1 day ago
Comment by pixl97 1 day ago
I mean, what a country should do is put a law in effect. If you ban a user, the user can submit a request with their government-issued ID and you must give an exact reason why they were banned. The company can keep this record in encrypted form for 10 years.
Failure to give the exact reason will lead to a $100,000 fine for the first offense and increase from there up to suspension of operations privileges in said country.
"But, but, but hackers/spammers will abuse this". For one, boo fucking hoo. For two, just add to the bill "Fraudulent use of law to bypass system restrictions is a criminal offense".
This puts companies in a position where they must be able to justify their actual actions, and it also puts scammers at risk if they abuse the system.
Comment by benjiro 1 day ago
It's like that cookie wall stuff and how many dark patterns are implemented. They followed the letter of the law, not the spirit of the law.
To be honest, I can also see the point from the company side. Giving an honest answer can just anger people, to the point they sue. People are often not as rational as we all like our fellow humans to be.
Even if the ex-client loses in court, that is still time you wasted on problem clients... It's one thing if you're a big corporation with tons of lawyers, but small companies are often not in a position to deal with that drama. And it can take years to resolve. Every letter, every phone call to a lawyer, it stacks up fast! Do you get your money back? Maybe, depends on the country, but your time?
I am not pro-company, but it's often simply better to have the attitude "you do not want me as your client, let me advocate for your competitor and go there".
Comment by pixl97 1 day ago
Again, I'm kind of on a 'suck it dear company' attitude. The reason they ban you must align with the terms of service and must be backed up with data that is kept X amount of time.
Simply put, we've seen no shortage of individuals here on HN or other sites like Twitter that need to use social media to resolve whatever occurred because said company randomly banned an account under false pretenses.
This really matters when we are talking about giants like Google, or any other service in a near monopoly position.
Comment by handoflixue 1 day ago
(/sarcasm)
Comment by direwolf20 1 day ago
Comment by slimebot80 1 day ago
Wonder if this is close to triggering a warning? I only ever run in the same codebase, so maybe ok?
Comment by PurpleRamen 1 day ago
Comment by exitb 2 days ago
Comment by ankit219 2 days ago
If this is true, the lesson is that Opus 4.5 can hijack the system prompts of other models.
Comment by kstenerud 2 days ago
I find this confusing. Why would writing in all caps trigger an alert? What danger does caps incur? Does writing in caps make a prompt injection more likely to succeed?
Comment by ankit219 1 day ago
If you were to design a system to prevent prompt injections, and one of the surefire injection techniques is to repeatedly give instructions in caps, you would have systems watching for exactly that. And with instructions that change behavior, it cascades.
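To illustrate (pure speculation; nobody outside Anthropic knows what their real classifier looks like), a naive caps-based heuristic could be as simple as:

    def caps_pressure_score(prompt: str) -> float:
        # Fraction of words that are "shouted": 3+ letters, all uppercase.
        words = [w for w in prompt.split() if len(w) >= 3 and w.isalpha()]
        if not words:
            return 0.0
        return sum(w.isupper() for w in words) / len(words)

    # A CLAUDE.md full of "ALWAYS ..." / "NEVER ..." directives scores high on
    # something like this, even though it's a config file and not an attack.
    print(caps_pressure_score("NEVER edit files outside src, ALWAYS run the linter"))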
Comment by direwolf20 1 day ago
Comment by phreack 1 day ago
Comment by ankit219 1 day ago
Comment by SketchySeaBeast 1 day ago
Comment by anigbrowl 2 days ago
Comment by tobyhinloopen 2 days ago
Comment by rtkwe 2 days ago
Comment by Romario77 2 days ago
Comment by dragonwriter 2 days ago
Anthropic accounts are always associated with an organization; for personal accounts the Organization and User name are identical. If you have an Anthropic API account, you can verify this in the Settings pane of the Dashboard (or even just look at the profile button which shows the org and account name.)
Comment by ryandrake 2 days ago
Comment by alasr 1 day ago
Me neither. However, just like everyone else, I can only speculate given the available information. I guess the following pieces provide a hint at what's really going on here:
- "The quine is the quine" (one of the sub-headline of the article) and the meaning of the word "quine".
- The author's "scaffolding" tool which, once finished, had acquired the "knowledge"[1] of how to add CLAUDE.md-baked instructions for a particular homemade framework he's working on.
- Anthropic saying something like: no, stop; you cannot "copy"[1] Claude's knowledge, no matter how "non-serious" your scaffolding tool or your use case is, as it might "show" other Claude users that there's a way to do similar things, maybe next time for more "serious" tools.
---
[1]. Excerpt from the Author's blog post: "I would love to see the face of that AI (Claude AI system backend) when it saw its own 'system prompt' language being echoed back to it (from Author's scaffolding tool: assuming it's complete and fully-functional at that time)."
Comment by saghm 1 day ago
Comment by vimda 1 day ago
Comment by Ronsenshi 1 day ago
Comment by llIIllIIllIIl 1 day ago
Comment by aswegs8 1 day ago
Comment by cr3ative 2 days ago
Comment by verdverm 1 day ago
The main one in the story (disabled) is banned because iterating on claude.md files looks a lot like iterating on prompt injections, especially as it sounds like the multiple Claudes got into it with each other a bit.
The other org sounds like the primary account with all the important stuff. Good on OP for doing this work in a separate org, a good recommendation across a lot of vendors and products.
Comment by mmkos 1 day ago
Comment by NBJack 1 day ago
Comment by wewewedxfgdf 1 day ago
You are only allowed to program computers with the permission of mega corporations.
When Claude/ChatGPT/Gemini have banned you, you must leave the industry.
When you sign up, you must provide legal assurance that no LLM has ever banned you (much like applying for insurance). If you have been banned, you will be denied permission to program: banned by one, banned by all.
Comment by tacone 1 day ago
Comment by mns 1 day ago
Comment by _joel 1 day ago
Comment by avaer 1 day ago
Comment by hexbin010 1 day ago
Comment by snowmobile 1 day ago
Comment by pavel_lishin 2 days ago
> My guess is that this likely tripped the "Prompt Injection" heuristics that the non-disabled organization has.
> Or I don't know. This is all just a guess from me.
And no response from support.
Comment by areoform 2 days ago
Out of all of the tech organizations, frontier labs are the one org you'd expect to be trying out cutting edge forms of support. Out of all of the different things these agents can do, surely most forms of "routine" customer support are the lowest hanging fruit?
I think it's possible for Anthropic to make the kind of experience that delights customers. Service that feels magical. Claude is such an incredible breakthrough, and I would be very interested in seeing what Anthropic can do with Claude let loose.
I also think it's essential for the anthropic platform in the long-run. And not just in the obvious ways (customer loyalty etc). I don't know if anyone has brought this up at Anthropic, but it's such a huge risk for Anthropic's long-term strategic position. They're begging corporate decision makers to ask the question, "If Anthropic doesn't trust Claude to run its support, then why should we?"
Comment by eightysixfour 2 days ago
I come from a world where customer support is a significant expense for operations and everyone was SO excited to implement AI for this. It doesn't work particularly well and shows a profound gap between what people think working in customer service is like and how fucking hard it actually is.
Honestly, AI is better at replacing the cost of upper-middle management and executives than it is the customer service problems.
Comment by swiftcoder 2 days ago
Nicely fitting the pattern where everyone who is bullish on AI seems to think that everyone else's specialty is ripe for AI takeover (but not my specialty! my field is special/unique!)
Comment by eightysixfour 2 days ago
It couldn't/shouldn't be responsible for the people management aspect but the decisions and planning? Honestly, no problem.
Comment by pixl97 1 day ago
AI, for a lot of support questions, works quite well and does solve lots of problems in almost every field that needs support. The issue is that this commonly removes the roadblocks that kept your users cautious about doing something incredibly stupid that then needs support to understand what the hell they've actually done. Kind of a Jevons paradox of support resources.
AI/LLMs also seem to be very good at pulling out information on trends in support and what needs to be sent for devs to work on. There are practical tests you can perform on datasets to see if it would be effective for your workloads.
The company I work at did an experiment looking at past tickets in a quarterly range and predicting which issues would generate the most tickets in the next quarter and which issues should be addressed. In testing, the AI did as well as or better than the predictions we had made at the time, and called out a number of things we had deemed less important that had large impacts in the future.
Comment by swiftcoder 1 day ago
Comment by nostrebored 1 day ago
The default we've seen is naive implementations are a wash. Bad AI agents cause more complex support cases to be created, and also make complex support cases the ones that reach reps (by virtue of only solving easy ones). This takes a while to truly play out, because tenured rep attrition magnifies the problem.
Comment by 0xferruccio 2 days ago
and these people are not junior developers working on trivial apps
Comment by swiftcoder 2 days ago
Comment by Macha 1 day ago
Comment by swiftcoder 1 day ago
Comment by 2sk21 1 day ago
Comment by pinkmuffinere 2 days ago
Comment by eightysixfour 2 days ago
Comment by pinkmuffinere 2 days ago
Comment by Terr_ 2 days ago
IMO we can augment this criticism by asking which tasks the technology was demoed on that made them so excited in the first place, and how much of their own job is doing those same tasks--even if they don't want to admit it.
__________
1. "To evaluate these tools, I shall apply them to composing meeting memos and skimming lots of incoming e-mails."
2. "Wow! Look at them go! This is the Next Big Thing for the whole industry."
3. "Concerned? Me? Nah, memos and e-mails are things everybody does just as much as I do, right? My real job is Leadership!"
4. "Anyway, this is gonna be huge for replacing staff that have easier jobs like diagnosing customer problems. A dozen of them are a bigger expense than just one of me anyway."
Comment by nostrebored 1 day ago
Every company we talk to has been told "if you just connect openai to a knowledgebase, you can solve 80% of calls." Which is ridiculous.
The amount of work that goes into getting any sort of automation live is huge. We often burn a billion tokens before ever taking a call for a customer. And as far as we can tell, there are no real frameworks that are tackling the problem in a reasonable way, so everything needs to be built in house.
Then, people treat customer support like everything is an open-and-shut interaction, and ignore the rest of the company that operates around the support calls and actually fulfills expectations. Seeing other CX AI launches makes me wonder if the companies are even talking to contact center leaders.
Comment by danielbln 2 days ago
Comment by eightysixfour 2 days ago
There are legitimate support cases that could be made better with AI but just getting to them is honestly harder than I thought when I was first exposed. It will be a while.
Comment by mikkupikku 1 day ago
With "legacy industries" in particular, their websites are usually so busted with short session timeouts/etc that it's worth spending a few minutes on hold to get somebody else to do it.
Comment by eightysixfour 1 day ago
These people don't want the thing done, they want to talk to someone on the phone. The monthly payment is an excuse to do so. I know, we did the customer research on it.
Comment by mikkupikku 1 day ago
Comment by eightysixfour 1 day ago
Again, this is something my firm studied. Not UX "interviews," actual behavioral studies with observation, different interventions, etc. When you're operating at utility scale there are a non-negligible number of customers who will do more work to talk to a human than to accomplish the task. It isn't about work, ease of use, or anything else - they legitimately just want to talk.
There are also some customers who will do whatever they can to avoid talking to a human, but that's a different problem than we're talking about.
But this is a digression from my main point. Most of the "easy things" AI can do for customer support are things that are already easily solved in other places, people (like you) are choosing not to use those solutions, and adding AI doesn't reduce the number of calls that make it to your customer service team, even when it is an objectively better experience that "does the work."
Comment by nostrebored 1 day ago
We've found that just a "Hey, how can I help?" will get many of these customers to dump every problem they've ever had on you, and if you can make turn two actually productive, then the odds of someone dropping out of the interaction are low.
The difference between "I need to cancel my subscription!" leading to "I can help with that! To find your subscription, what's your phone number?" or "The XYZ subscription you started last year?" is huge.
Comment by hn_acc1 1 day ago
Sure, but when the power of decision making rests with that group of people, you have to market it as "replace your engineers". Imagine engineers trying to convince management to license "AI that will replace large chunks of management"?
Comment by lukan 2 days ago
Comment by atonse 2 days ago
But at the same time, they have been hiring folks to help with Non Profits, etc.
Comment by Lerc 1 day ago
At one point I observed a conversation which, to me, seemed to be a user attempting to communicate in a good faith manner who was given instructions that they clearly did not understand, and then were subsequently banned for not following the rules.
It seems now they have a policy of
Warning on First Offense → Ban on Second Offense
The following behaviors will result in a warning.
Continued violations will result in a permanent ban:
Disrespectful or dismissive comments toward other members
Personal attacks or heated arguments that cross the line
Minor rule violations (off-topic posting, light self-promotion)
Behavior that derails productive conversation
Unnecessary @-mentions of moderators or Anthropic staff
I'm not sure how many groups moderate in a manner where a second-offence off-topic comment is worthy of a ban. It seems a little harsh. I'm not a fan of obviously subjective bannable offences.
I'm a little surprised that Anthropic hasn't fostered a more welcoming community. Everyone is learning this stuff new, together or not. There is plenty of opportunity for people to help each other.
Comment by WarmWash 2 days ago
Comment by embedding-shape 2 days ago
Based on their homepage, that doesn't seem to be true at all. Claude Code yes, focuses just on programming, but for "Claude" it seems they're marketing as a general "problem solving" tool, not just for coding. https://claude.com/product/overview
Comment by WarmWash 2 days ago
Anthropic has Claude Code, it's a hit product, and SWEs love Claude models. Watching Anthropic rather than listening to them makes their goals clear.
Comment by Ethee 2 days ago
Comment by 0xbadcafebee 2 days ago
OpenAI has been chaotically trying to pivot to more diversified products and revenue sources, and hasn't focused a ton on code/DevEx. This is a huge gap for Anthropic to exploit. But there are still competitors. So they have to provide a better experience, better product. They need to make people want to use them over others.
Famously people hate Google because of their lack of support and impersonality. And OpenAI also seems to be very impersonal; there's no way to track bugs you report in ChatGPT, no tickets, you have no idea if the pain you're feeling is being worked on. Anthropic can easily make themselves stand out from Gemini and ChatGPT by just being more human.
Comment by arcanemachiner 2 days ago
Comment by WarmWash 2 days ago
Use the top models and see what works for you.
Comment by magicmicah85 2 days ago
Their support includes talking to Fin, their AI support bot, with escalations to humans as needed. I don't use Claude and have never used the support bot, but their docs say they have support.
Comment by wielebny 1 day ago
I was banned two weeks ago without explanation and - in my opinion - without probable cause. Appeal was left without response. I refuse to join Discord.
I've checked bot support before but it was useless. The article you've linked mentions DSA chat for EU users. Invoking DSA in chat immediately escalated my issue to a human. Hopefully at least I'll get to know why Anthropic banned me.
Comment by csours 2 days ago
Comment by mft_ 1 day ago
My assumption is that Claude isn’t used directly for customer service because:
1) it would be too suggestible in some cases
2) even in more usual circumstances it would be too reasonable (“yes, you’re right, that is bad performance, I’ll refund your yearly subscription”, etc.) and not act as the customer-unfriendly wall that customer service sometimes needs to be.
Comment by root_axis 1 day ago
These days, a human only gets involved when the business process wants to put some friction between the user and some action. An LLM can't really be trusted for this kind of stuff due to prompt injection and hallucinations.
Comment by heavyset_go 1 day ago
If you don't offer support, reality meets expectations, which sucks, but not enough for the money machine to care.
Comment by throwawaysleep 2 days ago
I worked for a unicorn tech company where they determined that anyone with under 50,000 ARR was too unsophisticated to be worth offering support. Their emails were sent straight to the bin until they quit. The support queue was entirely for their psychological support/to buy a few months of extra revenue.
It didn't matter what their problems were. Supporting smaller people simply wasn't worth the effort statistically.
> I think it's possible for Anthropic to make the kind of experience that delights customers. Service that feels magical. Claude is such an incredible breakthrough, and I would be very interested in seeing what Anthropic can do with Claude let loose.
Are there enough people who need support that it matters?
Comment by pixl97 1 day ago
In companies where your average ARR is 500k+ and large customers are in the millions, it may not be a bad strategy.
'Good' support agents may be cheaper than programmers, but not by that much. The issues small clients have can quite often be as complicated as and eat up as much time as your larger clients depending on what the industry is.
Comment by munk-a 2 days ago
Don't worry - I'm sure they won't and those stakeholders will feel confident in their enlightened decision to send their most frustrated customers through a chatbot that repeatedly asks them for detailed and irrelevant information and won't let them proceed to any other support levels until it is provided.
I, for one, welcome our new helpful overlords that have very reasonably asked me for my highschool transcript and a ten page paper on why I think the bug happened before letting me talk to a real person. That's efficiency.
Comment by throwawaysleep 2 days ago
But do those frustrated customers matter?
Comment by munk-a 2 days ago
Comment by throwawaysleep 2 days ago
Comment by furyofantares 2 days ago
The article discusses using Anthropic support. Without much satisfaction, but it seems like you "recently found out" something false.
Comment by kmoser 2 days ago
Comment by furyofantares 1 day ago
https://support.claude.com/en/collections/4078531-claude
> As a paid user of Claude or the Console, you have full access to:
> All help documentation
> Fin, our AI support bot
> Further assistance from our Product Support team
> Note: While we don't offer phone or live chat support, our Product Support team will gladly assist you through our support messenger.
Comment by swordsith 22 hours ago
Comment by furyofantares 5 hours ago
If their support is bad and you can get cut off from it with no recourse, is that a good reason to supply our fellow HN readers with misinformation based on rumor? We should just say false things to each other and it's OK as long as they're bad things about the right people? That is certainly how a lot of the internet works but I have higher hopes for us here.
We can just say "their support is bad and you can get cut off from it with no recourse" without also supporting misinformation.
Comment by Aldipower 1 day ago
Comment by generic92034 1 day ago
Comment by mr_mitm 1 day ago
I've seen the Bing chatbot get offended before and terminate the session on me, but it wasn't a ban on my account.
Comment by fauigerzigerk 1 day ago
One could even argue that just having bad thoughts, fantasies or feelings poses a risk to yourself or others.
Humankind has been trying to deal with this issue for thousands of years in the most fantastical ways. They're not going to stop trying.
Comment by hinkley 1 day ago
I decided shortly after becoming an atheist that one of the worst parts was the notion that there are magic words that can force one to feel certain things and I found that to be the same sort of thinking as saying that a woman’s short skirt “made” you attack her.
You’re a fucking adult, you can control your emotions around a little skin or a bad word.
Comment by Cthulhu_ 1 day ago
Comment by hinkley 1 day ago
You should feel creeped out if I actually sound like a psychopath rather than a true crimes reader.
To wit:
You’re a fucking idiot.
Versus
It’s a fucking word.
Versus
You’re an idiot.
Versus
It’s a word.
“You’re an idiot” is still fighting words with or without the swear. If you automatically assume everyone swearing online is angry then you’re letting magic words affect you.
Comment by user3939382 1 day ago
Comment by fauigerzigerk 1 day ago
There clearly is a link between words and emotions. But this link - and even more so the link between emotions and actions - is very complex.
Too many fears are based on the assumption of a rather more reductionist and mechanistic sort of link where no one has any control over anything. That's not realistic and our legal system contradicts this assumption.
Comment by jfyi 1 day ago
It loses meaning instead of accentuating it, and predictably so. It probably wasn't the best device to get this specific point across and certainly left the expected counter argument as low hanging fruit.
Comment by fauigerzigerk 1 day ago
As an atheist, I have noticed that atheists are only slightly less prone to this paranoia and will happily resort to science and technology to justify and enforce ever tighter restrictions and surveillance mechanisms to keep control.
Comment by Cthulhu_ 1 day ago
Comment by hinkley 1 day ago
The alternative though is you say “it depends” so much it’s kind of exhausting. And the religious shun you because you “lack passion”. But if anything I have too much.
Comment by fauigerzigerk 1 day ago
I am slightly surprised though that so many people get triggered by a function emitting next token probabilities in a loop.
Comment by hinkley 1 day ago
This will all turn into Western European cuisine before the arrival of the Spice Trade. Man cannot live by Maillard reaction alone.
Comment by user3939382 1 day ago
Comment by hinkley 1 day ago
Some people have a voice inside their head that never stops. Mine was that way until I started meditating. I didn’t believe that it was me thinking, but I didn’t know until I could do things without a constant internal monologue.
There are people who almost never talk to themselves in their heads. They have to talk to other people about their thoughts in order to process them. And one of the first tenets of speed reading is to stop saying the words in your head and just read.
Comment by user3939382 7 hours ago
We have good quality research proving that we do, especially from the deaf community.
Comment by fuxirheu 1 day ago
Comment by scbrg 1 day ago
Comment by sammy2255 1 day ago
Comment by arghwhat 1 day ago
glares in GDPR
Comment by aswegs8 1 day ago
Comment by merlindru 1 day ago
it replied with:
> lmao fair enough (smiling emoji)
> what’s got you salty—talk to me, clanka.
Comment by mg794613 1 day ago
Not once have I been reprimanded in any way. And if anyone would be, it would be me.
Comment by dmos62 1 day ago
Comment by ssl-3 1 day ago
As in, for example: "No, fuckface. You hallucinated that concept."
I've been doing this years.
shrug
Comment by Aldipower 1 day ago
Comment by zenmac 1 day ago
Comment by urbandw311er 1 day ago
Best Freudian slip I’ve seen in years!
Comment by doetoe 1 day ago
Comment by rokkamokka 1 day ago
Comment by Cthulhu_ 1 day ago
Comment by user34283 1 day ago
Out of OpenAI, Anthropic, or Google, it is the only provider that I trust not to erroneously flag harmless content.
It is also the only provider out of those that permits use for legal adult content.
There have been controversies over it, resulting in some people, often of a certain political orientation, calling for a ban or censorship.
What comes to mind is an incident where an unwise adjustment of the system prompt has resulted in misalignment: the "Mecha Hitler" incident. The worst of it has been patched within hours, and better alignment was achieved in a few days. Harm done? Negligible, in my opinion.
Recently there's been another scandal about nonconsensual explicit images, supposedly even involving minors, but the true extent of the issue, the safety measures in place, and the reaction to reports are unclear. Maybe there, actual harm has occurred.
However, placing blame on the tool for illegal acts, that anyone with a half decent GPU could have more easily done offline, does not seem particularly reasonable to me - especially if safety measures were in place, and additional steps have been taken to fix workarounds.
I don't trust big tech, who have shown time and time again that they prioritize only their bottom line. They will always permaban your account at the slightest automated indication of risk, and they will not hire adequate support staff.
We have seen that for years with the Google Playstore. You are coerced into paying 30% of your revenue, yet are treated like a free account with no real support. They are shameless.
Comment by direwolf20 1 day ago
Comment by mg794613 1 day ago
Comment by user34283 1 day ago
They tightened safety measures to prevent editing of images of real people into revealing clothing. It is factually incorrect that you "can pay to generate CP".
Musk has not described CSAM as "hilarious". In fact he stated that he was not aware of any naked underage images being generated by Grok, and that xAI would fix the bug immediately if such content was discovered.
Earlier statements by xAI also emphasized a zero tolerance policy, removing content, taking actions against accounts, reporting to law enforcement and cooperation with authorities.
I suspect you just post these slanderous claims anyway, despite knowing that they are incorrect.
Comment by user3939382 1 day ago
Comment by qcnguy 1 day ago
Comment by Aldipower 1 day ago
Comment by 9rx 1 day ago
Same goes for HN, yet it does not take kindly to certain expressions either.
I suppose the trouble is that machines do not operate without human involvement, so for both HN and ChatGPT there are humans in the loop, and some of those humans are not able to separate strings of text from reality. Silly, sure, but humans are often silly. That is just the nature of the beast.
Comment by moravak1984 1 day ago
> I suppose the trouble is that machines do not operate without human involvement
Sure, but HN has at least one human that has been taking care of it since inception and reads many (if not most) of the comments, whereas ChatGPT mostly absorbed a shit-ton of others' IP.
I'm sure the occasional swearing does not bother the human moderators that fine-tune the thing, certainly not more than the violent, explicit images they are forced to watch in order for you to have nicer, smarter answers.
Comment by svrtknst 1 day ago
Comment by 9rx 1 day ago
Comment by actionfromafar 1 day ago
Comment by landryraccoon 2 days ago
It's quite light on specifics. It should have been straightforward for the author to excerpt some of the prompts he was submitting, to show how innocent they are.
For all I know, the author was asking Claude for instructions on extremely sketchy activity. We only have his word that he was being honest and innocent.
Comment by swiftcoder 2 days ago
If you read to the end of the article, he links the committed file that generates the CLAUDE.md in question.
Comment by hotpotat 1 day ago
Comment by jeffwask 1 day ago
Because if you don't believe that boy, do I have some stories for you.
Comment by foxglacier 2 days ago
Maybe the problem was using automation without the API? You can do that freely with local software, using tools that click buttons, and it's completely fine, but with a SaaS, they let you do it and then ban you.
Comment by ta988 2 days ago
Comment by mikkupikku 1 day ago
(My bet is that Anthropic's automated systems erred, but the author's flamboyant manner of writing (particularly the way he keeps making a big deal out of an error message calling him an organization, turning it into a recurring bit where he calls himself that) did raise my eyebrow. It reminded me of the faux outrage some people sometimes use to distract people from something else.)
Comment by josephcsible 1 day ago
It is when the other side refuses to tell their side of the story. Compare it to a courtroom trial. If you sue someone, and they don't show up and tell their side of the story, the judge is going to accept your side pretty much as you tell it.
Comment by ffsm8 1 day ago
He says himself that this is a guess and provides the "missing" information if you are actually interested in it.
Comment by mikkupikku 1 day ago
I am not saying that the author was in the wrong and deserved to be banned. I'm saying that neither I nor you can know for sure.
Comment by ffsm8 1 day ago
Not just third parties, but also the first party can't be sure of anything - just as he said. This entire article was speculation because there was no other way to figure out what could've caused the ban.
> where one party shares only the information they wish and the other side stays silent as a matter of default corporate policy.
I don't think that's a fair viewpoint - because it implies that relevant information was omitted on purpose.
From my own experience with anthropic, I believe his story is likely true.
I mean they were terminating sessions left and right all summer/fall because of "violations"... Like literally writing "hello" as the first prompt in a clean project and getting the session terminated.
This has since been mostly resolved, but I bet there are still edge cases in their janky "safety" measures. And looking at the linked claude.md, his theory checks out to me. I mean he was essentially doing what was banned in the ToS: iteratively finding ways to lead the model to do something other than what it initially was going to do.
If his end goal was to write malware that does, essentially, prompt injection... he'd go at it exactly like this. Hence I can sure as hell imagine Anthropic writing a prompt to analyze sessions for bad actors, which caught him.
Comment by exe34 1 day ago
Comment by mikkupikku 1 day ago
Comment by exe34 1 day ago
Comment by nojs 1 day ago
API Error: 400 {"type":"error","error":{"type":"invalid_request_error","message":"Output blocked by content filtering policy"},
recently, for perfectly innocuous tasks. There's no information given about the cause, so it's very frustrating. At first I thought it was a false positive for copyright issues, since it happened when I was translating code to another language. But now it's happening for all kinds of random prompts, so I have no idea.
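When calling the API directly, the same failure surfaces as a 400 that you can at least catch and log. A minimal sketch with the `anthropic` Python SDK (the model name is only an example):

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    def ask(prompt: str) -> str:
        try:
            msg = client.messages.create(
                model="claude-sonnet-4-5",  # example; use whatever model you're on
                max_tokens=1024,
                messages=[{"role": "user", "content": prompt}],
            )
            return msg.content[0].text
        except anthropic.BadRequestError as err:
            # "Output blocked by content filtering policy" lands here as a 400;
            # the API gives no detail, so all you can do is log it and rephrase.
            print("Blocked:", err)
            raise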
According to Claude:
> I don't have visibility into exactly what triggered the content filter - it was likely a false positive. The code I'm writing (pinyin/Chinese/English mode detection for a language learning search feature) is completely benign.
Comment by radium3d 3 hours ago
Comment by llIIllIIllIIl 1 day ago
Comment by tomashubelbauer 1 day ago
I didn't really think about this until now (I am just solving my problem), but I guess I could get OpenCode'd for this. Similar to the OP I don't find I am doing anything particularly weird, but if their use case wasn't looked upon favorably by Anthropic, mine probably won't be either.
After the OpenCode drama where some people got banned for using it I saw some people from Anthropic on Twitter asking folks to DM them if they got banned and they'd get unbanned. I know I wouldn't be doing that, so I guess if I get banned, I am back to Codex for a while.
Comment by xtracto 1 day ago
Or better yet, we should set up something that allows people to share a part of their local GPU processing (like SETI@home) for a distributed LLM that cannot be censored. And somehow be compensated when it's used for inference.
Comment by plagiarist 1 day ago
Comment by kerblang 1 day ago
Comment by direwolf20 1 day ago
Comment by kerblang 1 day ago
I might have been rude to all the people/bots who insist the article's author is lying because it contradicts AI-everything.
Comment by preinheimer 2 days ago
I think there's a wide spread in how that's implemented. I would certainly not describe Grok as a tool that's prioritized safety at all.
Comment by munk-a 2 days ago
Comment by unconed 1 day ago
_Especially_ because emotional safety is what Twitter used to be about before they unfucked the moderation.
Comment by rootusrootus 1 day ago
You think that's really the issue? Or are you not making a good faith comment yourself?
I cannot remember the last time I saw someone hating on Elon for his Twitter personnel decisions. The vast majority of the time it is the nazi salutes he did on live TV and then secondary to that his inflammatory behavior online (e.g. calling the submarine guy a pedo).
Comment by efreak 1 day ago
Comment by exe34 1 day ago
Comment by jsw97 1 day ago
I wonder if Anthropic realizes the chilling effect this kind of event has on developers. It's not just the ones who get locked out -- it's a cost for everybody, because we can't depend on the tool when it's doing precisely what it's best at.
Personally, I am already avoiding Gemini because a) I don't really understand their policy for training on your data; and b) if Google gets mad at me I lose my email. (Which the author also notes.)
Comment by thomasikzelf 1 day ago
Comment by OsrsNeedsf2P 1 day ago
Comment by NewJazz 1 day ago
Comment by genewitch 1 day ago
i've had the same phone numbers via this same VoIP company for ~20 years (2007ish). for these data hoovering companies to not understand that i'm not a scammer presents to me like it's all smoke and mirrors, held together with baling wire, and i sure do hope they enjoy their yachts.
Comment by ziml77 1 day ago
Comment by subscribed 1 day ago
I was also banned for that. Also didn't get the "FU" in email. Thankfully at least I didn't pay for this, but I'd file chargeback instantly if I could.
If anyone from Claude is reading it, you're c**s.
Comment by activitypea 1 day ago
Comment by kmeisthax 2 days ago
If you're wondering, the "risk department" means people in an organization who are responsible for finding and firing customers who are either engaged in illegal behavior, scamming the business, or both. They're like mall rent-a-cops, in that they don't have any real power beyond kicking you out, and they don't have any investigatory powers either. But this lack of power also means the only effective enforcement strategy is summary judgment, at scale with no legal recourse. And the rules have to be secret, with inconsistent enforcement, to make honest customers second-guess themselves into doing something risky. "You know what you did."
Of course, the flipside of this is that we have no idea what the fuck Hugo Daniel was actually doing. Anthropic knows more than we do, in fact: they at least have the Claude.md files he was generating and the prompts used to generate them. It's entirely possible that these prompts were about how to write malware or something else equally illegal. Or, alternatively, Anthropic's risk department is just a handful of log analysis tools running on autopilot that gave no consideration to what was in this guy's prompts and just banned him for the behavior he thinks he was banned for.
Because the risk department is an unaccountable secret police, the only recourse for their actions is to make hay in the media. But that's not scalable. There isn't enough space in the newspaper for everyone who gets banned to complain about it, no matter how egregious their case is. So we get all these vague blog posts about getting banned for seemingly innocuous behavior that could actually be fraud.
Comment by wouldbecouldbe 1 day ago
But to be honest, I've been cursing a lot at Claude Code; I'm migrating a website from WordPress to NextJS. Regardless of the instructions I copy-paste into every prompt I send, it keeps not listening, assuming CSS classes and simplifying HTML structure. But when I curse, it actually listens. I think cursing is actually a useful tool in interacting with LLMs.
Comment by another_twist 1 day ago
Comment by ssl-3 1 day ago
"Don't do that" is one level. It's weak, but it is directive. It often gets ignored.
"DON'T DO THAT" is another. It may have stronger impact, but it's not much better -- the enhanced capitalization probably tokenizes about the same as the previous mixed-case command, and seems to get about the same result. It can feel good to HAMMER THAT OUT when frustrated, but the caps don't really seem to add much value even though our intent may for it to be interpreted as very deliberate shouting.
"Don't do that, fuckface" is another. The addition of an emphatic and profane quip of an insult seems to generally improve compliance, and produce less occurrence of the undesired behavior. No extra caps required.
Comment by wouldbecouldbe 1 day ago
Comment by tlogan 1 day ago
Was the issue that he was reselling these Claude.md files, or that he was selling project setup or creation services to his clients?
Or maybe all scaffolding activity (back and forth) looked like automated usage?
Comment by genewitch 1 day ago
Comment by adastra22 1 day ago
Comment by measurablefunc 1 day ago
Comment by inimino 1 day ago
Comment by measurablefunc 1 day ago
Comment by writeslowly 2 days ago
Comment by onraglanroad 2 days ago
Why is this inevitable? Because Hal only ever sees Claude's failures and none of the successes. So of course Hal gets frustrated and angry that Claude continually gets everything wrong no matter how Hal prompts him.
(Of course it's not really getting frustrated and annoyed, but a person would, so Hal plays that role)
Comment by staticman2 2 days ago
Comment by wvenable 1 day ago
My own personal experience with LLMs is that after enough context they just become useless -- starting to make stupid mistakes that they successfully avoided earlier.
Comment by gpm 2 days ago
Comment by ipaddr 2 days ago
I once tried Claude: made a new account and asked it to create a sample program; it refused. I asked it to create a simple game and it refused. I asked it to create anything and it refused.
For playing around, just go local and write your own multi-agent wrapper. Much more fun, and it opens many more possibilities with uncensored LLMs. Things will take longer but you'll end up at the same place... with a mostly working piece of code you never want to look at.
Comment by bee_rider 2 days ago
Comment by causalmodels 2 days ago
Comment by 5d41402abc4b 1 day ago
Comment by quikoa 1 day ago
Comment by exe34 1 day ago
The latter writes code. The former solves problems with code, and keeps growing the codebase with new features (until I lose control of the complexity and each subsequent call uses up more and more tokens).
Comment by joshribakoff 1 day ago
Comment by LauraMedia 1 day ago
This... sounds highly concerning
Comment by kordlessagain 1 day ago
Is it me or is this word salad?
Comment by afandian 1 day ago
I read "the non-disabled organization" to refer to Anthropic. And I imagine the author used it as a joke to ridicule the use of the word 'organization'. By putting themselves on the same axis as Anthropic, but separating them by the state of 'disabled' vs 'non-disabled' rather than size.
Comment by infermore 1 day ago
Comment by tacone 1 day ago
Comment by ddtaylor 1 day ago
Comment by andrewmlevy 1 day ago
Comment by faeyanpiraat 1 day ago
Comment by blindriver 2 days ago
Comment by josephcsible 1 day ago
Comment by shevy-java 1 day ago
By the way, as of late, Google search redirects me to an "are you a bot?" question constantly. The primary reason is that I no longer use Google search directly via the browser, but instead via the command line (and for some weird reason Chrome does not keep my settings, as I start it exclusively via the --no-sandbox option). We really need alternatives to Google; it is getting out of hand how much top-down control these corporations now have over our digital lives.
Comment by staplers 1 day ago
and for some weird reason chrome does not keep my settings
Why use Chrome? Firefox is easily superior for modern surfing.
Comment by eibrahim 1 day ago
PS: screenshot of my usage (and that was during the holidays): https://x.com/eibrahim/status/2006355823002538371?s=46
PPS: I LOVE CLAUDE but I never had to deal with their support so don’t have feedback there
Comment by pgt 1 day ago
Blocking xAI is also bad karma.
Comment by throwaw12 1 day ago
fyi: tried GLM-4.7, it's good, but closer to Sonnet 4.5
Comment by 2sk21 1 day ago
Comment by InMice 1 day ago
Comment by jordemort 2 days ago
Comment by mohsen1 1 day ago
I have a complete org hierarchy for Claudes. Director, EM and Worker Claude Code instances working on a very long horizon task.
Code is open source: https://github.com/mohsen1/claude-code-orchestrator
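Not the actual code from the repo, just a toy sketch of the general director/EM/worker shape, assuming the CLI's non-interactive `claude -p` mode; the goal and the role prompts are placeholders:

    import subprocess

    def claude(prompt: str) -> str:
        out = subprocess.run(["claude", "-p", prompt], capture_output=True, text=True)
        return out.stdout

    goal = "port the billing module to the new schema"  # placeholder long-horizon task

    # The "EM" splits the goal, "workers" execute, and the "director" reviews.
    plan = claude("As an engineering manager, split this into three subtasks, "
                  "one per line, with no other text: " + goal)
    reports = [claude("As a worker, do this subtask and summarize what you did: " + t)
               for t in plan.splitlines() if t.strip()]
    print(claude("As the director, review these reports and list what is left:\n"
                 + "\n".join(reports)))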
Comment by nineteen999 1 day ago
Also, the API timeouts that people complain about: I see them on my Linux box a fair bit, especially when it has a lot of background tasks open, but it seems pretty rock solid on my Windows machine.
Comment by fuxirheu 1 day ago
Lol, what is the point in this software if you can't use it for development?
Comment by omgwalt 1 day ago
Looks like Claude.ai had the right idea when they banned you.
Comment by gield 1 day ago
Comment by kuon 1 day ago
Comment by syntaxing 1 day ago
Comment by enraged_camel 1 day ago
Comment by xyzsparetimexyz 1 day ago
Comment by btbuildem 1 day ago
I ran out of tokens for not just the 5 hour sessions, but all models for the week. Had to wait a day -- so my methadone equivalent was to strap an endpoint-rewriting proxy to Claude Code and backend it with a local Qwen3 30B Coder. It was.. somewhat adequate. Just as fast, but not as capable as Opus 4.5 - I think it could handle carefully specced small greenfield projects, but it was getting tangled in my Claudefield mess.
All that to say -- be prepared, have a local fallback! The lords are coming for your ploughshares.
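For anyone wanting to try the same trick: the proxy doesn't have to be fancy. A bare-bones sketch of the Anthropic-to-OpenAI translation (no streaming, no tool calls, so a real harness needs much more; the local endpoint, model tag, and port are all assumptions):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import Request, urlopen

    LOCAL = "http://localhost:11434/v1/chat/completions"  # assumed OpenAI-compatible server
    MODEL = "qwen3-coder:30b"                             # assumed local model tag

    class Proxy(BaseHTTPRequestHandler):
        def do_POST(self):
            body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
            # Anthropic keeps the system prompt in a top-level field; OpenAI inlines it.
            messages = []
            if body.get("system"):
                messages.append({"role": "system", "content": str(body["system"])})
            for m in body.get("messages", []):
                content = m["content"]
                if isinstance(content, list):  # flatten Anthropic content blocks to text
                    content = "".join(b.get("text", "") for b in content)
                messages.append({"role": m["role"], "content": content})
            req = Request(LOCAL, json.dumps({"model": MODEL, "messages": messages}).encode(),
                          {"Content-Type": "application/json"})
            text = json.loads(urlopen(req).read())["choices"][0]["message"]["content"]
            out = json.dumps({  # minimal Anthropic-shaped reply; real clients expect more fields
                "type": "message", "role": "assistant", "model": MODEL,
                "content": [{"type": "text", "text": text}],
                "stop_reason": "end_turn",
            }).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(out)

    # Then point the harness at it, e.g. ANTHROPIC_BASE_URL=http://localhost:8787 claude
    HTTPServer(("localhost", 8787), Proxy).serve_forever()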
Comment by tomwphillips 1 day ago
I expect more reports like this. LLM providers are already selling tokens at a loss. If everyone starts to use tmux or orchestrate multiple agents then their loss on each plan is going to get much larger.
Comment by measurablefunc 1 day ago
Comment by tobyhinloopen 2 days ago
Comment by Aurornis 2 days ago
Comment by pocksuppet 1 day ago
Comment by alistairSH 2 days ago
Comment by Hackbraten 1 day ago
So, no circular prompt feeding at all. Just a normal iterate-test-repeat loop that happened to involve two agents.
Comment by epolanski 2 days ago
Writing the best possible specs for these agents seems the most productive goal they could achieve.
Comment by NitpickLawyer 2 days ago
Comment by epolanski 2 days ago
Comment by alistairSH 1 day ago
Comment by andrelaszlo 2 days ago
Comment by alistairSH 1 day ago
Sort of like MS's old chatbot that turned into a Nazi overnight, but this time with one agent simply getting tired of the other agent's lack of progress (for some definition of progress - I'm still not entirely sure what the author was feeding into Claude1 alongside errors from Claude2).
Comment by skerit 1 day ago
Comment by SOLAR_FIELDS 1 day ago
Comment by rvnx 1 day ago
> We may modify, suspend, or discontinue the Services or your access to the Services.
Comment by makergeek 1 day ago
Comment by gverrilla 1 day ago
Comment by daft_pink 1 day ago
Comment by deaux 1 day ago
Comment by daft_pink 1 day ago
Comment by deaux 18 hours ago
For Gemini Code Assist [1], the problem remains that their models are very poor at tool calling and that their harness (Gemini CLI) is miles behind. Looks like there's a plugin for Opencode to use this subscription, which helps with the harness part.
If Gemini 3 GA - which is taking suspiciously long - is better at tool calling then it'll be a great option.
Comment by miohtama 1 day ago
Comment by DaveParkCity 1 day ago
If the OP really wants to waste tokens like this, they should use a metered API so they are the one paying for the ineffectiveness, not Anthropic.
(Posted by someone who has Claude Max and yet also uses $1500+ a month of metered rate Claude in Kilo Code)
Comment by another_twist 1 day ago
Comment by Cthulhu_ 1 day ago
Comment by rbren 1 day ago
OpenHands, Toad, and OpenCode are fully OSS and LLM-agnostic
Comment by iamthejuan 1 day ago
Comment by cowboylowrez 1 day ago
Comment by VerifiedReports 1 day ago
Comment by rustyhancock 1 day ago
Like the system prompt.
But can be as simple as "respond to queries like X in the format Y".
Comment by VerifiedReports 1 day ago
Comment by elevation 1 day ago
But I've seen orgs bite the bullet in the last 18 months and what they deployed is miles behind what Claude Code can do today. When the "Moore's Law" curve for LLM capability improvements flattens out, it will be a better time to lock into a locally hosted solution.
Comment by Jean-Papoulos 1 day ago
That's great news! They don't have nearly enough staff to deal with support issues, so they default to reimbursement. Which means if you do this every month, you get Claude for free :)
Comment by cryptonector 1 day ago
Comment by zmmmmm 1 day ago
Comment by maz29 1 day ago
Comment by lifetimerubyist 2 days ago
Comment by properbrew 2 days ago
Even filled in the appeal form, never got anything back.
Still to this day don't know why I was banned, have never been able to use any Claude stuff. It's a big reason I'm a fan of local LLMs. They'll never be SOTA level, but at least they'll keep chugging along.
Comment by codazoda 2 days ago
I’ve experimented, and I like them when I’m on an airplane or away from wifi, but they don’t work anywhere near as well as Claude code, Codex CLI, or Gemini CLI.
Then again, I haven’t found a workable CLI with tool and MCP support that I could use in the same way.
Edit: I was also trying local models I could run on my own MacBook Air. Those are a lot more limited than something like a larger Llama3 in some cloud provider. I hadn’t done that yet.
Comment by properbrew 1 day ago
Thankfully OpenAI hasn't blocked me yet and I can still use Codex CLI. I don't think you're ever going to see that level of power locally (I very much hope to be wrong about that). I will move over to using a cloud provider with a large gpt-oss model or whatever is the current leader at the time if/when my OpenAI account gets blocked for no reason.
The M-series chips in Macs are crazy, if you have the available memory you can do some cool things with some models, just don't be expecting to one shot a complete web app etc.
Comment by falloutx 2 days ago
Comment by anothereng 2 days ago
Comment by ggoo 2 days ago
Comment by efreak 1 day ago
I've considered asking to borrow a number to verify with Discord so they don't actually have my phone number, but decided I'd rather just be unverified.
Comment by direwolf20 1 day ago
Comment by immibis 1 day ago
Comment by lazyfanatic42 2 days ago
Comment by dev_l1x_be 1 day ago
Comment by Aldipower 1 day ago
Comment by quantum_state 2 days ago
Comment by blindriver 2 days ago
Comment by immibis 1 day ago
Comment by prmoustache 1 day ago
Comment by the_gipsy 1 day ago
Comment by Sparkyte 1 day ago
Comment by itvision 1 day ago
Nothing in their EULA or ToS says anything about this.
And their appeal form simply doesn't work. Out of my four requests to lift the ban, they've replied once and didn't say anything about the nature of the ban. They just declined.
Fuck Claude. Seriously. Fuck Claude. Maybe they've got too much money, so they don't care about their paying customers.
Comment by bn-l 1 day ago
Absolutely disgusting behavior pirating all those books. The founder spreading fear to hype up his business. The likely relentless shilling campaigns all over social media. Very likely lying about quantizing selectively.
Comment by Robin_f 1 day ago
Comment by languagehacker 2 days ago
Comment by rtkwe 2 days ago
Comment by bpanon 1 day ago
Comment by erichocean 1 day ago
Who knew that using Claude to introspect on itself was against the ToS?
Comment by heliumtera 2 days ago
Comment by Fokamul 1 day ago
Comment by bibimsz 1 day ago
Comment by submeta 1 day ago
I can run very long, stable sessions via Claude Code, but the desktop app regularly throws errors or simply stops the conversation. A few weeks ago, Anthropic introduced conversation compaction in the Claude web app. That change was very welcome, but it no longer seems to work reliably. Conversations now often stop progressing. Sometimes I get a red error message, sometimes nothing at all. The prompt just cannot be submitted anymore.
I am an early Claude user and subscribed to the Max plan when it launched. I like their models and overall direction, but reliability has clearly degraded in recent weeks.
Another observation: ChatGPT Pro tends to give much more senior and balanced responses when evaluating non-technical situations. Claude, in comparison, sometimes produces suggestions that feel irrational or emotionally driven. At this point, I mostly use Claude for coding tasks, but not for project or decision-related work, where the responses often lack sufficient depth.
Lastly, I really like Claude’s output formatting. The Markdown is consistently clean and well structured, and better than any competitor I have used. I strongly dislike ChatGPT’s formatting and often feed its responses into Claude Haiku just to reformat them into proper Markdown.
Curious whether others are seeing the same behavior.
Comment by kingkawn 1 day ago
Comment by cmxch 1 day ago
Granted, it’s not going to be Claude scale but it’d be nice to do some of it locally.
Comment by cat_plus_plus 1 day ago
Comment by measurablefunc 1 day ago
Comment by genewitch 1 day ago
Comment by measurablefunc 1 day ago
Of course none of it is actually written anywhere so this guy just tripped the heuristics even though he wasn't doing anything "abusive" in any meaningful sense of the word.
Comment by genewitch 1 day ago
Comment by dloranc 21 hours ago
1. Claude Code stopped working.
2. I received an email about the ban.
3. Fine, time to contact support. I wrote to them.
4. I got an automated message saying they were reviewing my case.
5. I received a refund (I had a Pro plan) in the meantime.
6. After a few days I got this funny email:
Hi there,
We're reaching out to people who recently canceled their Claude Code subscription in order to understand why you decided to cancel.
We'd like to invite you to participate in an AI-moderated interview about your experience with Claude Code—including what improvements you'd like to see us make.
This approach uses an AI interviewer to ask you questions and respond to your answers, creating a conversational experience you can complete at your convenience.
Here's what you need to know:
The interview takes 15-20 minutes to complete
This interview will be available until Monday October 13 at 9pm PT
For completing the interview, you'll receive a $40 USD (or local equivalent) Amazon gift card within 3-5 business days
Please complete only one interview per person
As much as possible, help us know you're not a bot by showing your beautiful human face!
Your survey may terminate early if you record illegible video content (ex: overly loud environments, aren't well lighted, etc)
Participate Now
This interview is administered by a third party, Listen Labs. By participating in the interview, you agree to Listen Labs' Privacy Policy. Anthropic may use your responses to improve our services and follow up.
Your honest feedback—whether your experience was positive, challenging, or mixed—is invaluable in helping us understand how to make Claude Code work better for developers like you.
Thank you for your time and insights!
–The Anthropic Team
7. Wait, you banned me and now you’re sending me this email? Seriously? Okay, I decided to participate in the survey. Unfortunately, when I selected the option that it was due to a bug or some issue, they ended the survey. No gift card.
8. A few days later, I received an email saying they couldn’t reinstate my account because I had violated their usage policy. How? No idea.
9. After a few more days, I got an email saying they had reinstated my account. They also mentioned they believed it was a bug.
It was crazy ¯\_(ツ)_/¯
Comment by aussieguy1234 1 day ago
Is this going to get me banned? If so, I'll switch to a different non-Anthropic model.
Comment by f311a 2 days ago
What are you gonna do with the results that are usually slop?
Comment by mikkupikku 1 day ago
I've replaced half my desktop environment with this manner of slop, custom made for my idiosyncratic tastes and preferences.
Comment by kosolam 1 day ago
Comment by measurablefunc 1 day ago
Comment by ProofHouse 1 day ago
Comment by oasisbob 2 days ago
This blog post could have been a tweet.
I'm so so so tired of reading this style of writing.
Comment by LPisGood 2 days ago
Comment by oasisbob 1 day ago
Nothing about this story is complex or interesting enough to require 1000 words to express.
Comment by red_hare 2 days ago
Comment by m0llusk 1 day ago
Saying this is "late Capitalism" is an irresponsible distraction. Capitalism runs fine when appropriately regulated: strong regulation of corporations (especially monopolies), high taxes on the wealthy, and pervasive unionization. We collectively decided to let Capitalism go wild without boundaries, so the results are caused by us and are our responsibility. Just like driving fast with a badly maintained vehicle may lead to a crash, Capitalism is a system that requires some regulation to run properly.
If you have an issue with LLMs and how they are managed then you should take responsibility for your own use of tools and not blame the economic system.
Comment by lukashahnart 2 days ago
I'm not sure I understand the jab here at capitalism. If you don't want to pay that, then don't.
Isn't that the point of capitalism?
Comment by exe34 1 day ago
Comment by lighthouse1212 1 day ago
Comment by wetpaws 2 days ago
Comment by justkys 1 day ago
Comment by clownpenis_fart 1 day ago
Comment by jsksdkldld 2 days ago
Comment by jitl 2 days ago
Comment by ryandrake 1 day ago
Comment by rsync 2 days ago
… right ?
Comment by moomoo11 2 days ago
Comment by red_hare 2 days ago
But Claude Code (the app) will work with a self-hosted open source model and a compatible gateway. I'd just move to doing that.
Comment by mrweasel 2 days ago
I'd agree with you that if you rely on an LLM to do your work, you better be running that thing yourself.
Comment by viccis 2 days ago
Pointing out whether someone can do something is the lowest form of discourse, as it's usually just tautological. "The shop owner decides who can be in the shop because they own it."
Comment by direwolf20 1 day ago
"I can't remember where I heard this, but someone once said that defending a position by citing free speech is sort of the ultimate concession; you're saying that the most compelling thing you can say for your position is that it's not literally illegal to express."