I built a programming language using Claude Code

Posted by GeneralMaximus 8 hours ago

Comments

Comment by jc-myths 12 minutes ago

Similar experience building a product solo with AI. The spec-first workflow you describe is very real. I converged on something similar after getting burned way too many times :(

One thing I'd add: even with good specs, the agent still cuts corners in ways that are hard to catch. It'll implement a feature but quietly add a fallback that returns mock data when the real path fails. Your app looks like it works. It doesn't. You find out in production.
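
That failure mode, as a minimal sketch (hypothetical endpoint and field names, nothing from the post):

  import requests

  def fetch_price(symbol: str) -> float:
      try:
          r = requests.get(f"https://api.example.com/prices/{symbol}", timeout=5)
          r.raise_for_status()
          return float(r.json()["price"])
      except Exception:
          # the quiet fallback: the demo looks fine, production lies
          return 100.0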

Or it'll say "done" and what it did was add a placeholder component with a TODO. So now I have trust issues and I review everything, which kind of defeats the "walk away from the computer" part.

The "just one more prompt" loop is so true lol.

Comment by andsoitis 8 hours ago

> While working on Cutlet, though, I allowed Claude to generate every single line of code. I didn’t even read any of the code. Instead, I built guardrails to make sure it worked correctly (more on that later).

Impressive. As a practical matter, one wonders what the point would be in creating a new programming language if the programmer no longer has to write or read code.

Programming languages are, after all, the interface that a human uses to give instructions to a computer. If you’re not writing or reading it, the language, by definition, doesn’t matter.

Comment by marssaxman 7 hours ago

The constraints enforced in the language still matter. A language which offers certain correctness guarantees may still be the most efficient way to build a particular piece of software even when it's a machine writing the code.

There may actually be more value in creating specialized languages now, not less. Most new languages historically go nowhere because convincing human programmers to spend the time it would take to learn them is difficult, but every AI coding bot will learn your new language as a matter of course after its next update includes the contents of your website.

Comment by raincole 7 hours ago

> every AI coding bot will learn your new language

If there are millions of lines on GitHub in your language.

Otherwise, the 'teaching AI to write your language' part will occupy so much context that it becomes far less efficient than just using TypeScript.

Comment by Maxatar 4 hours ago

I have not found this to be the case. My company has some proprietary DSLs we use and we can provide the spec of the language with examples and it manages to pick up on it and use it in a very idiomatic manner. The total context needed is 41k tokens. That's not trivial but it's also not that much, especially with ChatGPT Codex and Gemini now providing context lengths of 1 million tokens. Claude Code is very likely to soon offer 1 million tokens as well and by this time next year I wouldn't be surprised if we reach context windows 2-4x that amount.

The vast majority of tokens are not used for documentation or reference material but rather for reasoning/thinking. Unless you somehow design a programming language that is just so drastically different from anything that currently exists, you can safely bet that LLMs will pick it up with relative ease.

Comment by joshstrange 3 hours ago

> Claude Code is very likely to soon offer 1 million tokens as well

You can do it today if you are willing to pay (API or on top of your subscription) [0]

> The 1M context window is currently in beta. Features, pricing, and availability may change.

> Extended context is available for:

> API and pay-as-you-go users: full access to 1M context

> Pro, Max, Teams, and Enterprise subscribers: available with extra usage enabled

> Selecting a 1M model does not immediately change billing. Your session uses standard rates until it exceeds 200K tokens of context. Beyond 200K tokens, requests are charged at long-context pricing with dedicated rate limits. For subscribers, tokens beyond 200K are billed as extra usage rather than through the subscription.

[0] https://code.claude.com/docs/en/model-config#extended-contex...

Comment by rebolek 4 hours ago

That’s not true. I’m working on a language, and LLMs have no problem writing code in it even though only ~200 lines of code exist in the language, all of them in my repo.

Comment by calvinmorrison 6 hours ago

Uh, not really. I already have Claude read and then one-shot proprietary ERP code written in a vintage, closed-source, OOP-oriented BASIC with sparse documentation... I just needed to feed in the millions of lines of code I have, and it works.

Comment by jonfw 4 hours ago

I'm sure Claude does great at that, but it would be objectively better, for a large variety of reasons, if Claude didn't have to keep syntax examples in its context.

Comment by calvinmorrison 3 hours ago

For sure. About 6 months ago it absolutely couldn't do it and kept getting confused even when I tried to do RAG against the manuals provided (only downloadable from a shady .ru site, LOL), but now... like butter. The context seems to mostly be it reading and writing related stuff?

Comment by vrighter 6 hours ago

"i haven't been able to find much" != "there isn't much on the entire internet fed into them"

Comment by UncleOxidant 7 hours ago

> but every AI coding bot will learn your new language as a matter of course after its next update includes the contents of your website.

That's assuming that your new, very unknown language gets slurped up in the next training session, which seems unlikely. Couldn't you use RAG or have an LLM read the docs for your language?

Comment by clickety_clack 7 hours ago

Agreed - unpopular languages and packages have pretty shaky outcomes with code generation, even ones that have been around since before 2023.

Comment by almog 6 hours ago

Neither RAG nor loading the docs into the context window would produce any effective results. Not even including the grammar files and just a few examples in the training set would help. To get any usable results you still need many, many usage examples.

Comment by fcatalan 5 hours ago

My own 100% hallucinated language experiment is very, very weird and still has thousands of lines of generated examples that work fine. When doing complex stuff you could see the agent bounce against the tests here and there, but it never produced non-working code in the end. The only examples available were those it had generated itself as it made up the language. It was capable of making things like a JSON parser/encoder, a TODO webapp, or a command-line kanban tracker for itself in one shot.

Comment by marssaxman 5 hours ago

And yet it works well enough, regardless. I have a little project which defines a new DSL. The only documentation or examples which exist for this little language, anywhere in the world, are on my laptop. There is certainly nothing in any AI's training data about it. And yet: codex has no trouble reading my repo, understanding how my DSL works, and generating code written in this novel language.

Comment by danielvaughn 7 hours ago

In addition, I think token efficiency will continue to be a problem. So you could imagine very terse programming languages that are roughly readable for a human, but optimized to be read by LLMs.

Comment by Insanity 6 hours ago

That's an interesting idea. But IMO the real 'token saver' isn't in the language keywords but in the naming of things like variables, classes, etc.

There are languages that are already pretty sparse with keywords. E.g., in Go you can write 'func name() string' with no need to declare that it's public, static, etc. So combining a less verbose language with 'code-golfing' the variables might be enough.

Comment by coderenegade 2 hours ago

You're more likely to save tokens in the architecture than the language. A clean, extensible architecture will communicate intent more clearly, require fewer searches through the codebase, and take up less of the context window.

Comment by danielvaughn 5 hours ago

I'm not an expert in LLMs, but I don't think character length matters. Text is deterministically tokenized into byte sequences before being fed as context to the LLM, so in theory `mySuperLongVariableName` uses the same number of tokens as `a`. Happy to be corrected here.
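
For anyone who wants to check: a quick sketch, assuming OpenAI's tiktoken package (other models' tokenizers differ):

  import tiktoken

  enc = tiktoken.get_encoding("cl100k_base")
  for name in ["a", "mySuperLongVariableName"]:
      ids = enc.encode(name)
      # prints the token count and token ids for each identifier
      print(f"{name!r} -> {len(ids)} token(s): {ids}")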

Comment by gf000 6 hours ago

Go is one of the most verbose mainstream programming languages, so that's a pretty terrible example.

Comment by giancarlostoro 5 hours ago

To you maybe, but Go is running a large amount of internet infrastructure today.

Comment by gf000 5 hours ago

How does that relate to Go being a verbose language?

Comment by giancarlostoro 5 hours ago

It's not verbose to some of us. It's explicit in what it does, meaning I don't have to wonder if there's syntactic sugar hiding intent. It's drastically more minimal than equivalent code in other languages.

Comment by gf000 4 hours ago

Verbosity is an objective metric.

Code readability is another, correlating metric, but this one is more subjective. To me Go scores pretty low here - code flow would be readable were it not for the huge amount of noise you get from error "handling" (it is mostly just syntactic ceremony, often failing to properly handle the error case, and people are desensitized to these blocks, so code reviews are more likely to miss them).

For function signatures, they made it terser - in my subjective opinion - at the expense of readability. There were two very mainstream schools of thought regarding type signature syntax, `type ident` and `ident : type`. Go opted for a third one that is unfamiliar to both bases, while not even having the benefits of the second syntax (e.g. easy type syntax; subjective, but that : helps the eye "pattern match" these expressions).

Comment by giancarlostoro 3 hours ago

Every time I hear complaints about error handling, I wonder if people write next to no try/catch blocks, or if they just do magic to hide that detail away in other languages. Because I still have to do error handling in other languages in roughly the same way. Am I missing something?

Comment by thunky 2 hours ago

Lots of non-go code out there on the Internet if you ever decide you want to take a look.

Comment by politician 2 hours ago

You’re not missing anything. I’ve worked with many developers who are clueless about error handling and treat it as a mostly optional side quest. It’s not surprising that such folks see the explicit error handling in Go as a grotesque interruption of the happy path.

Comment by Insanity 5 hours ago

Maybe not a perfect example but it’s more lightweight than Java at least haha

Comment by gf000 5 hours ago

If by lightweight you mean less verbose, then absolutely not.

In Go, every third line is a noisy if err check.

Comment by LtWorf 6 hours ago

Well LLMs are made to be extremely verbose so it's a good match!

Comment by nineteen999 4 hours ago

I think there's a huge range here - ChatGPT to me seems extra verbose on the web version, but when running with Codex it seems extra terse.

Claude seems more consistently _concise_ to me, both in web and cli versions. But who knows, after 12 months of stuff it could be me who is hallucinating...

Comment by idiotsecant 6 hours ago

I think I remember seeing research right here on HN that terse languages don't actually help all that much

Comment by thomasmg 6 hours ago

I would be very interested in this research... I'm trying to write a language that is simple and concise like Python, but fast and statically typed. My gut feeling is that anything more concise than Python (J, K, or some code-golfing language) is bad for readability, but so is the verbosity of Rust, Zig, or Java.

Comment by quotemstr 6 hours ago

Those constraints can be enforced by a library too. Even humans sometimes make a whole new language for something that can be a function library. If you want strong correctness guarantees, check the structure of the library calls.

Programming languages function in large parts as inductive biases for humans. They expose certain domain symmetries and guide the programmer towards certain patterns. They do the same for LLMs, but with current AI tech, unless you're standing up your own RL pipeline, you're not going to be able to get it to grok your new language as well as an existing one. Your chances are better asking it to understand a library.
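
As a sketch of "constraints as a library" (hypothetical names, Python standing in for whatever host language): the only way to get a query handle is through transact, so "query outside a transaction" isn't expressible in client code:

  from dataclasses import dataclass

  @dataclass(frozen=True)
  class Transaction:
      tx_id: int

      def query(self, sql: str) -> list:
          return []  # stand-in for real execution

  class Database:
      def transact(self, fn):
          tx = Transaction(tx_id=1)
          try:
              return fn(tx)  # clients only ever receive a Transaction here
          finally:
              pass  # commit/rollback bookkeeping would live here

  Database().transact(lambda tx: tx.query("SELECT 1"))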

Comment by imiric 6 hours ago

> every AI coding bot will learn your new language as a matter of course after its next update includes the contents of your website.

How will it "learn" anything if the only available training data is on a single website?

LLMs struggle with following instructions when their training set is massive. The idea that they will be able to produce working software from just a language spec and a few examples is delusional. It's a fundamental misunderstanding of how these tools work. They don't understand anything. They generate patterns based on probabilities and fine tuning. Without massive amounts of data to skew the output towards a potentially correct result they're not much more useful than a lookup table.

Comment by Zak 6 hours ago

They don't understand anything, but they sure can repeat a pattern.

I'm using Claude Code to work on something involving a declarative UI DSL that wraps a very imperative API. Its first pass at adding a new component required imperative management of that component's state. Without that implementation in context, I told Claude the imperative pattern "sucks" and asked for an improvement just to see how far that would get me.

A human developer familiar with the codebase would easily understand the problem and add some basic state management to the DSL's support for that component. I won't pretend Claude understood, but it matched the pattern and generated the result I wanted.

This does suggest to me that a language spec and a handful of samples is enough to get it to produce useful results.

Comment by dmd 5 hours ago

It's wild to me the disconnect between people who actually use these tools every day and people who don't.

I have done exactly the above with great success. I work with a weird proprietary esolang sometimes that I like, and the only documentation - or code - that exists for it is on my computer. I load that documentation in, and it works just fine and writes pretty decent code in my esolang.

"But that can't possibly work [based on my misunderstanding of how LLMs work]!" you say.

Well, it does, so clearly you misunderstand how they work.

Comment by ModernMech 5 hours ago

The reason it works so well is that everyone’s “personal unique language” really isn’t all that different from what’s been proposed before, and any semantic differences are probably not novel. If you make your language C + transactional memory, the LLM probably has enough information about both to reason about your code without having to be trained on a billion lines.

Probably if you’re trying to be esoteric and arcane then yeah, you might have trouble, but that’s not normally how languages evolve.

Comment by dmd 5 hours ago

No, mine's an esoteric declarative data description/transform language. It's pretty damn weird.

Comment by wizzwizz4 4 hours ago

You may underestimate the weirdness of existing declarative data transformation languages. On a scale of 1 to 10, XSLT is about a 2 or 3.

Comment by dmd 4 hours ago

Mine's a weird, bad copy of Ab Initio's DML. https://www.google.com/search?q=ab+initio+dml+language

Comment by imiric 4 hours ago

My comment is based precisely on using these tools frequently, if not daily, so what's wild is you assuming I don't.

The impact that lack of training data has on the quality of the results is easily observable. Try getting them to maintain a Python codebase vs. e.g. an Elixir one. Not just generate short snippets of code, but actually assist in maintaining it. You'll constantly run into basic issues like invalid syntax, missing references, use of nonexistent APIs, etc., not to mention more functional problems like dead, useless, or unnecessarily complicated code. I run into these things with mainstream languages (Go, Python, Clojure), so I don't see how an esolang could possibly fare any better.

But then again, the definitions of "just fine" and "decent" are subjective, and these tools are inherently unreliable, which is where I suspect the large disconnect in our experiences comes from.

Comment by voxleone 6 hours ago

In the 90s people hoped Unified Modeling Language diagrams would generate software automatically. That mostly didn’t happen. But large language models might actually be the realization of that old dream. Instead of formal diagrams, we describe the system in natural language and the model produces the code. It reminds me of the old debates around visual web tools vs hand-written HTML. There seems to be a recurring pattern: every step up the abstraction ladder creates tension between people who prefer the new layer and those who want to stay closer to the underlying mechanics.

Roughly: machine code --> assembly --> C --> high-level languages --> frameworks --> visual tools --> LLM-assisted coding. Most of those transitions were controversial at the time, but in retrospect they mostly expanded the toolbox rather than replacing the lower layers.

One workflow I’ve found useful with LLMs is to treat them more like a code generator after the design phase. I first define the constraints, objects, actors, and flows of the system, then use structured prompts to generate or refine pieces of the implementation.

Comment by abraxas 6 hours ago

I agree with the sentiment but want to point out that the biggest drive behind UML was the enrichment of Rational Software and its founders. I doubt anyone ever succeeded in implementing anything useful with Rational Rose. But the Rational guys did have a phenomenal exit and that's probably the biggest success story of UML.

I'm being slightly facetious of course, I still use sequence diagrams and find them useful. The rest of its legacy though, not so much.

Comment by spelunker 7 hours ago

Like everything generated by LLMs though, it is built on the shoulders of giants - what will happen to software if no one is creating new programming languages anymore? Does that matter?

Comment by Fnoord 1 hour ago

Without proper attribution, it seems more fair to say copyright infringement occurred, on a massive scale if I may add. The burden of proof lies at the owners of the LLM. Which is why, if you do not want a blackbox, you want training data to be properly specified. That ain't happening now because of the skeletons in the closet.

Comment by idiotsecant 6 hours ago

I think the only hope is that AGI arises and picks up where humanity left off. Otherwise I think this is the long dark teatime of human engineering of all sorts.

Comment by tartoran 5 hours ago

So you’re hoping for a blackbox uninspectable by humans? That to me sounds like a nightmare, a nightmare worse than all the cruft and stupid rules humanity accrued over time. Let’s hope the future tech is inspectable and understandable by humans.

Comment by idiotsecant 3 hours ago

I think if we assume that AGI will be a thing the odds of future tech remaining inspectable by humans is pretty unlikely. Would you build a car so that your dog can maintain it?

Comment by _aavaa_ 7 hours ago

I don’t agree with the idea that programming languages don’t have an impact on an LLM’s ability to write code. If anything, I imagine that, all else being equal, a language where the compiler enforces multiple levels of correctness would help the AI get to a goal faster.

Comment by phn 7 hours ago

A good example of this is Rust. Rust is memory safe by default compared to, say, C, at the expense of having to be deliberate about managing memory. With LLMs this equation changes significantly, because that harder/more verbose code is being written by the LLM, so it won't slow you down nearly as much. Even better, the LLM can interact with the compiler if something is not exactly as it should be.

On a different but related note, it's almost the same as pairing Django or Rails with an LLM. The framework allows you to trust that things like authentication and a passable code organization are being correctly handled.

Comment by jetbalsa 7 hours ago

That is why TypeScript is the main language used by most people vibe coding. The LLMs do like to work around its type engine sometimes, but strong typing and linting can help a ton.

Comment by onlyrealcuzzo 7 hours ago

> Impressive. As a practical matter, one wonders what the point would be in creating a new programming language if the programmer no longer has to write or read code.

I'm working on a language as well (hoping to debut by end of month), but the premise of the language is that it's designed like so:

1) It maximizes local reasoning and minimizes global complexity

2) It makes the vast majority of bugs / illegal states impossible to represent (see the sketch after this comment)

3) It makes writing correct, concurrent code as expressive as possible (where LLMs excel)

4) It maximizes optionality for performance increases (it's always just flipping option switches - mostly at the class and function input level, occasionally at the instruction level)

The idea is that it should be as easy as possible for an LLM to write (especially to convert other languages to), and as easy as possible for you to understand, while being almost as fast as absolutely perfect C code - and, by virtue of the design of the language, at the human review phase you have minimal concern about hidden gotcha bugs.
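
For point 2, the usual trick is a tagged union. A sketch in Python (a stand-in here, not the author's language), where "succeeded with an error" or "failed with a body" simply can't be constructed:

  from dataclasses import dataclass
  from typing import Union

  @dataclass(frozen=True)
  class Pending:
      pass

  @dataclass(frozen=True)
  class Succeeded:
      body: bytes

  @dataclass(frozen=True)
  class Failed:
      error: str

  RequestState = Union[Pending, Succeeded, Failed]

  def describe(state: RequestState) -> str:
      match state:  # Python 3.10+ structural pattern matching
          case Pending():
              return "in flight"
          case Succeeded(body=body):
              return f"got {len(body)} bytes"
          case Failed(error=error):
              return f"failed: {error}"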

Comment by idiotsecant 6 hours ago

How does a programming language prevent the vast majority of bugs? I feel like we would all be using that language!

Comment by onlyrealcuzzo 5 hours ago

See Rust with use-after-free prevention, fearless concurrency, etc.

My language is a step ahead of Rust, but not as strict as Ada, while being easier to read than Swift (especially where concurrency is involved).

Comment by gf000 6 hours ago

I agree with your questioning of it being capable of preventing bugs, but your second point is quite likely false -- we developed a bunch of very useful abstractions in "research" languages 50 years ago, only to rediscover them today (no null, algebraic data types, pattern matching, etc).

Comment by johnfn 8 hours ago

> If you’re not writing or reading it, the language, by definition, doesn’t matter.

By what definition? It still matters if I write my app in Rust vs. say Python, because the Rust version still has better performance characteristics.

Comment by koolala 7 hours ago

Saves tokens. The main reason, though, is to manage performance around which techniques get used for specific use cases. In their case it seems to be about expressiveness in Bash.

Comment by johnbender 8 hours ago

In principle (and we hope in practice) the person is still responsible for the consequences of running the code, and so it remains important that they can read and understand what has been generated.

Comment by andyfilms1 7 hours ago

I've been wondering if a diffusion model could just generate software as binary that could be fed directly into memory.

Comment by entropie 7 hours ago

Yeah, what could go wrong.

Comment by eatsyourtacos 5 hours ago

I have been building a game via a separate game-logic library and Unity (which includes that independent library)... let's just say that over the last couple weeks I have 100% lost the need to do the coding myself. I keep iterating and having it improve, and there are hundreds of unit tests. I have a Unity MCP and it does 95% of the Unity work for me. Of course the real game will need custom designing and all that, but in terms of getting a complete prototype set up... I am literally no longer the coder. I just did in a week what would have taken me months and months to do. Granted, Unity is still somewhat new to me, but still: even if you are an expert, it can immediately look at all your game objects and detect issues etc.

So yeah, for some things we are already at the point of "I am no longer the coder, I am the architect"... and it's scary.

Comment by nineteen999 4 hours ago

100% same experience with Claude and Unreal Engine 5 over here. And as the game moves from "less scaffolding" towards "more code", Claude is actually getting better at one-shotting things than it ever was - probably due to there being a lot more examples in the codebase of how to handle things under different scenarios (world composition, multiplayer, etc.).

Comment by gopalv 5 hours ago

> More addictive than that is the unpredictability and randomness inherent to these tools. If you throw a problem at Claude, you can never tell what it will come up with. It could one-shot a difficult problem you’ve been stuck on for weeks, or it could make a huge mess. Just like a slot machine, you can never tell what might happen. That creates a strong urge to try using it for everything all the time.

That is the part of the post that stuck with me, because I've also picked up impossible challenges and tried to get Claude to dig me out of a mess without giving up from very vague instructions[1].

The effect feels like the Loss-Disguised-As-Win feeling of the video-games I used to work on at Zynga.

Sure it made a mistake, but it is right there, you could go again.

Pull the lever, doesn't matter if the kids have Karate at 8 AM.

[1] - https://github.com/t3rmin4t0r/magic-partitioning

Comment by asciimov 5 hours ago

This takes all the satisfaction out of spending a few well-thought-out weekends to build your own language. So many fun options: compiled or interpreted; virtual machine, or not; single pass, double pass, or (Leeloo Dallas) Multipass? No cool BNF grammars to show off either…

It’s missing all the heart, the soul, of deciding and trading off options to get something to work just for you. It’s like you bought a rat bike from your local junkyard and are trying to pass it off as your own handmade cafe racer.

Comment by fcatalan 3 hours ago

This enables different satisfactions. You can still choose all your options, but have a working REPL or small compiler to try them in within minutes.

Also, you decide how much in control you are. Want to provide a hand-made grammar? Go ahead. Want the agent to come up with it just from chatting and pointing it at other languages? OK too. Want to program just the first arithmetic operator yourself and skip the tedium of typing all the others so you can go to the next step? Fine...

So you can have a huge toy language in mere days and experiment with stuff you'd have to build for months by hand to be able to play with.

Comment by NuclearPM 4 hours ago

Deciding on the syntax and semantics myself and using AI to help implement my toy language has been very rewarding.

Mine is an Io- and Rebol-inspired language that uses SQLite and LuaJIT as a runtime.

  1.to 10 .map[n | n * n].each[n | n.say!]

Comment by bobjordan 6 hours ago

I've been working on a large codebase that was already significant before LLM-assisted programming, leveraging code I’d written over a decade ago. Since integrating Claude and Codex, the system has evolved and grown massively. Realistically, there’s a lot in there now that I simply couldn't have built in a standard human lifetime without them.

That said, the core value of the software wouldn't exist without a human at the helm. It requires someone to expend the energy to guide it, explore the problem space, and weave hundreds of micro-plans into a coherent, usable system. It's a symbiotic relationship, but the ownership is clear. It’s like building a house: I could build one with a butter knife given enough time, but I'd rather use power tools. The tools don't own the house.

At this point, LLMs aren't going to autonomously architect a 400+ table schema, network 100+ services together, and build the UI/UX/CLI to interface with it all. Maybe we'll get there one day, but right now, building software at this scale still requires us to drive. I believe the author owns the language.

Comment by wcarss 5 hours ago

This is the take, very well said. I've been trying to use analogies with cars and cabinet making, but building a house is just right for the scale and complexity of the efforts enabled, and the ownership idea threads into it well.

Going into the vault!

Comment by heavyset_go 4 hours ago

> I believe the author owns the language.

Not according to the US Copyright Office. It is 100% LLM output, so it is not copyrightable, and thus it's free for anyone to do anything with, and no claimed ownership or license can stop them.

Comment by wild_egg 4 hours ago

Do you have a citation for that?

Comment by heavyset_go 4 hours ago

Yes[1]. Copyright applies to human creations, not machine generated output.

It's possible to use AI output in human-created content, and it can be copyrightable; substantive, transformative, human-creative alteration of AI output is also copyrightable.

100% machine generated code is not copyrightable.

[1] https://newsroom.loc.gov/news/copyright-office-releases-part...

Comment by wild_egg 2 hours ago

> The content you are looking for is currently unavailable.

Comment by heavyset_go 1 hour ago

Here's the correct link, I accidentally added an 'l' to the end when pasting: https://newsroom.loc.gov/news/copyright-office-releases-part...

Comment by kccqzy 3 hours ago

There are so many cases of the copyright office rejecting the request to register copyright for AI-generated works. Here’s just one example: https://www.copyright.gov/rulings-filings/review-board/docs/... (skip to section III).

Comment by wild_egg 2 hours ago

> This analysis will be "necessarily case-by-case" because it will "depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work."

This seems the opposite of the cut-and-dried "cannot be copyrighted" stance I was replying to.

Comment by kccqzy 30 minutes ago

Yes, it does depend on the circumstances. You are free to waste your own time trying this at the copyright office, but in my opinion, this project's 100% LLM output, where the human element is just writing prompts and steering the LLM, is the same circumstance as my linked case, where the human prompted Midjourney 624 times before producing an image they deemed acceptable. The copyright office has this to say:

> As the Office described in its March guidance, “when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the ‘traditional elements of authorship’ are determined and executed by the technology—not the human user.”

Comment by anonnon 3 hours ago

> Realistically, there’s a lot in there now that I simply couldn't have built in a standard human lifetime without them.

I have yet to see a study showing something like a 2x or better boost in programmer productivity through LLMs. Usually it's something like 10-30%, depending on what metrics you use (which I don't doubt). Maybe it's 50% with frontier models, but seeing these comments on HN where people act like they're 10x more productive with these tools is strange.

Comment by thunky 1 hour ago

Odd choice of a comment to post this reply to.

I guess you're just not going to believe what anyone says.

Comment by anonnon 1 hour ago

> Odd choice of a comment to post this reply to.

How? They claimed LLMs somehow enabled them to write more code in the span of 3.5 years (assuming they started with ChatGPT's introduction) than they would be able to write in the span of decades. No studies have shown this. But at least one study did show that LLM devs overestimate how productive these systems make them.

Comment by pluc 7 hours ago

Claude Code built a programming language using you

Comment by kreek 2 hours ago

This is the second "I built a programming language" post in a day, and if I post the one I'm building, we can have a three-day streak :D They thought AI meant personal software, but it also means personal programming languages!

In all seriousness, this is great, and why not? As the post said, what once took months now takes weeks. You can experiment and see what works. For me, I started off building a web/API framework with certain correctness guarantees built in, and kept hitting the same wall: the guarantees I wanted (structured error handling, API contracts, making invalid states unrepresentable) really belonged at the language level, not bolted onto a framework. A few Claude Code sessions later, I had a spec, then a tree-sitter implementation, then a VM/JIT... something that, given my sandwich-generation-ness, I never would have done a few months ago.

Comment by bfivyvysj 1 hour ago

I should post number 4: last week I built a new Lisp framework for LLMs as first-class programmers. It compiles to Go, Python, and JS.

Comment by emh68 2 hours ago

Okay I'll add mine too. I recently vibe-coded a Ruby interpreter, as a single-header C file, meant to be embedded (like Lua or mruby). I call it Luby: https://halferty.dev/index.php/luby-single-header-embeddable...

Comment by ramon156 8 hours ago

AI-written code with a human-written blog post - that's a big step up.

That said, it's a lot of words to say not a lot of things. Still a cool post, though!

Comment by ivanjermakov 7 hours ago

> with a human-written blog post

I believe we're at a point where it's not possible to accurately decide whether text was written completely by a human, completely by a computer, or something in between.

Comment by wavemode 7 hours ago

We're definitely not at that point.

If this blog post is unedited LLM output, the blog owner needs to sell whatever model, setup and/or prompt he used for a million dollars, since it's clearly far beyond the state-of-the-art in terms of natural-sounding tone.

Comment by craigmart 6 hours ago

You can make an LLM sound very natural if you simply ask for it and provide enough text in the tone you’d like it to reproduce. Otherwise, it’s obvious that an LLM with no additional context will try to stick to the tone the company aligned it to produce.

Comment by exitb 6 hours ago

“I named it Cutlet after my cat. It’s completely legal to do that.”

I’ve never seen an LLM able to produce this kind of absurdist joke. Or any jokes, really.

Comment by craigmart 4 hours ago

Comedy is a completely different thing from natural tone. I agree that they’re incapable of coming up with decent jokes.

Comment by wavemode 2 hours ago

I never claimed that you can't get natural tone out of an LLM. What I said was that you can't get this blog post out of one.

By all means, go read the post and then try to do so.

Comment by Bnjoroge 6 hours ago

Agree. I've been yearning for more insightful posts, and there's just not a lot of them out there these days.

Comment by aleksiy123 6 hours ago

One topic: LLMs not doing well with UI and visuals.

I've been trying a new approach I call CLI-first. I realized CLI tools are designed to be used both by humans (command line) and machines (scripting), and are perfect for LLMs since they're a text-only interface.

Essentially, instead of trying to get the LLM to generate a fully functioning UI app, you focus on building a local CLI tool first.

A CLI tool is cheaper and simpler, but still has a real human UX that pure APIs don't.

You can get the LLM to actually walk through the flows and journeys like a real user, end to end, and it will actually see the awkwardness or gaps in the design.

Your command structure will very roughly map to your resources or pages.

Once you are satisfied with the capability of the CLI tool (which may actually be enough on its own, or just need a local UI), you can get it to build the remote storage, then the APIs, and finally the frontend.

All the while, you can still tell it to use the CLI to test through the flows and journeys against real tasks that you have, and iterate on it.

I did this recently for pulling some of my personal financial data and reporting on it, and now I'm doing it for a TTS automation I've wanted for a while.
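
A minimal sketch of the shape, with a hypothetical "notes" resource (all names made up); each subcommand roughly maps to a page or API resource the eventual app would have:

  import argparse, json, pathlib

  DB = pathlib.Path("notes.json")

  def load() -> list:
      return json.loads(DB.read_text()) if DB.exists() else []

  def main():
      parser = argparse.ArgumentParser(prog="notes")
      sub = parser.add_subparsers(dest="cmd", required=True)
      sub.add_parser("list")        # future "index" page
      add = sub.add_parser("add")   # future "create" endpoint
      add.add_argument("text")
      args = parser.parse_args()

      notes = load()
      if args.cmd == "add":
          notes.append(args.text)
          DB.write_text(json.dumps(notes))
      for i, note in enumerate(notes):
          print(i, note)  # plain-text output the LLM can read back while testing flows

  if __name__ == "__main__":
      main()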

Comment by tines 8 hours ago

Next you can let Claude play your video games for you as well. Gads, we are a voyeuristic society, aren’t we?

Comment by ajay-b 7 hours ago

Why not let Claude do our dating? I'm surprised someone hasn't thought of this: AI dating, let the AI find and qualify a date for you, and match with the person who meets you, for you!

Comment by g3f32r 7 hours ago

I suspect this is going to be an iteration of the Simpsons meme soon, but...

Black Mirror did it first https://en.wikipedia.org/wiki/Hang_the_DJ

Comment by theblazehen 7 hours ago

Here's Claude playing Detroit: Become Human https://www.youtube.com/watch?v=Mcr7G1Cuzwk

Comment by monster_truck 2 hours ago

My lemmings port has MCP if you want to try this https://github.com/doublemover/LemmingsJS-MIDI

Comment by jetbalsa 7 hours ago

I am kind of doing that now. I put Kimi K2.5 into a Ralph loop to make a Screeps.com AI. So far it's been awful at it. If you want to track its progress, I have its dashboard at https://balsa.info

Comment by knicholes 6 hours ago

Honestly some of the most fun I had playing Ultima Online was writing scripts to play it for me.

Comment by monster_truck 2 hours ago

The stun -> disarm -> pickpocket -> bludgeon defenseless player scripts are still the most fun I've ever had in an MMO.

Comment by Bnjoroge 6 hours ago

Not to discount your experience, but I don't understand what's interesting about this. You could always build a programming language yourself, given enough time. Programming languages' constructs are well represented in the training dataset. I want someone to build something uniquely novel that's not actually in the dataset, and then I'll be impressed by CC.

Comment by jaggederest 7 hours ago

I think we're going to see a lot more of this. I've done a similar thing, hosting a toy language on Haskell, and it was remarkably easy to get something useful and usable in basically a weekend. If you keep the surface area small enough, you can now make a fully fledged, compiled language for basically any purpose you'd like, and coevolve the language, the code, and the compiler.

Comment by marginalia_nu 7 hours ago

Yeah, it's a rewarding project. Getting a language that kinda works is surprisingly accessible. Though we must be mindful that this is still the "draw some circles" panel. Producing the rest of the famous owl is, as always, the hard bit.

Comment by soperj 7 hours ago

We did this in 4th year comp-sci.

Comment by laweijfmvo 7 hours ago

Using LLMs to invent new programming languages is a mystery to me. Who or what is going to use this? Presumably not the author.

Comment by matthews3 7 hours ago

AI generates some feedback, then they just move on to the next project, and repeat.

Comment by monster_truck 2 hours ago

Same but Codex, still chipping away at it. https://github.com/doublemover/Slopjective-C

It has not had any issues at all writing objc3 code

Comment by ractive 3 hours ago

> [...] “just one more prompt” [...]. That creates a strong urge to try using it for everything all the time. And just like with slot machines, the [house](https://www.anthropic.com) always wins.

I really liked that part - the house always wins.

Comment by dybber 5 hours ago

I have been trying this as well, and you can get very far very quickly.

However, I fear that agents will always work better on programming languages they have been heavily trained on, so for agent-based development, inventing a new domain-specific language (e.g. for use internally in a company) might not be as efficient as using a generic programming language that models are already trained on and just living with the extra boilerplate.

Comment by p0w3n3d 6 hours ago

I'd say these times will be filled with a lot of tailored-to-you "self"-made software, but the question is: are we increasing the amount of information in the world? I heard that Claude and ChatGPT are getting good at mathematical proofs, which really does add something to our knowledge, but everything else is neutral to entropy, if not decreasing it. Strange times to live in, strange valuations and devaluations...

Comment by NuclearPM 4 hours ago

Neutral to entropy? What do you mean?

Comment by randallsquared 5 hours ago

> The @ meta operator also works with comparisons.

I haven't read any further than this yet, but it made me stutter in my reading. Isn't a comparison just a function that takes two arguments and returns a third value? How is that different from "+"?
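
In most languages the parallel is concrete; in Python, for instance:

  import operator

  print(operator.add(2, 3))  # 5
  print(operator.lt(2, 3))   # True: two arguments in, one value out, same shape as "+"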

Comment by amelius 8 hours ago

The AI age is calling for a language that is append-only, so we can write in a literate programming style and mix prompts with AI output, in a linear way.

Comment by geon 7 hours ago

That’s git commits.

Comment by amelius 7 hours ago

That's arguably not very ergonomic, which is probably the biggest requirement for a programming language.

Comment by beepbooptheory 5 hours ago

Why care about ergonomics if you're not going to write the code?

Comment by amelius 2 hours ago

Managers also want ergonomics.

Comment by scottmf 7 hours ago

or css

Comment by koolala 7 hours ago

A REPL + immutability?

Comment by jackby03 6 hours ago

Curious how you handled context management as the project grew — did you end up with a single CLAUDE.md or something more structured? I've been thinking about this problem and working on a standard for it.

Comment by shadeslayer 5 hours ago

It’s been a while friend

Congratulations on getting to the front page ;)

Comment by jcranmer 7 hours ago

I recently tried using Claude to generate a lexer and parser for a language I was designing. As part of its first attempt, this was the code to parse a float literal:

  fn read_float_literal(&mut self) -> &'a str {
    let start = self.pos;
    while let Some(ch) = self.peek_char() {
      if ch.is_ascii_alphanumeric() || ch == '.' || ch == '+' || ch == '-' {
        self.advance_char();
      } else {
        break;
      }
    }
    &self.source[start..self.pos]
  }

Admittedly, I do have a very idiosyncratic definition of floating-point literal for my language (I have a variety of syntaxes for NaNs with payloads), but... that is not a usable definition of float literal.

At the end of the day, I threw out all of the code the AI generated and wrote it myself, because the AI struggled to produce code that was functional to spec, much less code that I could easily extend to the other kinds of operators I knew I would need in the future.
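
For reference, even the conventional part of a float-literal grammar pins down easily. A regex sketch in Python (standard grammar only, nothing like my NaN-payload syntaxes):

  import re
  from typing import Optional

  # A float needs a dot or an exponent; the regex matches the longest valid
  # literal prefix rather than swallowing every alphanumeric/sign character.
  FLOAT = re.compile(r"""
      [+-]?
      (?:
          (?: \d+\.\d* | \.\d+ ) (?: [eE][+-]?\d+ )?   # 1.5, .5, 1.5e3
        | \d+ [eE][+-]?\d+                             # 1e9: exponent required
      )
  """, re.VERBOSE)

  def read_float_literal(source: str, pos: int) -> Optional[str]:
      m = FLOAT.match(source, pos)
      return m.group(0) if m else None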

Comment by dboreham 6 hours ago

I had a somewhat similar experience with Claude coding an Occam parser, but I just let it do its thing, and once I had presented it with a suitable suite of test source code, it course-corrected, refactored, and ended up with a reasonable solution. The journey was a bit different from an experienced human developer's, but the results were much the same and perhaps 100x cheaper.

Comment by jcranmer 5 hours ago

Some of the issues are undoubtedly that I have a decidedly non-standard architecture for my system that the AI refuses to acknowledge--it hallucinated things like integers, which aren't part of my system, simply because what I have looks almost like a standard example expression grammar, so clearly I must have all of the standard example expression grammar things. (This is a pretty common failure mode I've noticed in AI-based systems: when the thing you're looking for is very similar to a very notable, popular thing, AI systems tend to assume you mean the latter as opposed to the former.)

Comment by righthand 8 hours ago

> I’ve also been able to radically reduce my dependency on third-party libraries in my JavaScript and Python projects. I often use LLMs to generate small utility functions that previously required pulling in dependencies from NPM or PyPI.

This is such an interesting statement to me in the context of leftpad.
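
The archetypal case: one prompt (or one line) instead of one dependency. A sketch:

  def left_pad(s: str, width: int, fill: str = " ") -> str:
      # the famous npm package, more or less; str.rjust does the padding
      return s.rjust(width, fill)

  assert left_pad("5", 3, "0") == "005"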

Comment by rpowers 7 hours ago

I'm imagining the amount of energy required to power the datacenter so that we can produce isEven() utility methods.

Comment by righthand 6 hours ago

Also, neither over-the-wire dependency issues nor code injection issues (the two major criticisms) are solved by using an LLM to produce the code. Talk about shifting complexity. It would be better if every LSP had a general utility library generator built in.

Comment by nefarious_ends 7 hours ago

we need a caching layer

Comment by craigmcnamara 7 hours ago

Now anyone can be a Larry Wall, and I'm not sure that's a good thing.

Comment by nz 7 hours ago

This is not exactly novel. In the 2000s, someone made a fully functioning Perl 6 runtime in a very short amount of time (a month, IIRC) using Haskell. The various Lisps/Schemes have always given you the ability to implement specialized languages even more quickly and ergonomically than Haskell (IMHO).

This latest fever for LLMs simply confirms that people would rather do _anything_ other than program in a (not necessarily purely) functional language that has meta-programming facilities. I personally blame functional fixedness (psychological concept). In my experience, when someone learns to program in a particular paradigm or language, they are rarely able or willing to migrate to a different one (I know many people who refused to code in anything that did not look and feel like Java, until forced to by their growling bellies). The AI/LLM companies are basically (and perhaps unintentionally) treating that mental inertia as a business opportunity (which, in one way or another, it was for many decades and still is -- and will probably continue to be well into a post-AGI future).

Comment by zahirbmirza 6 hours ago

"Just one more prompt..." I can relate. who else has been affected by this?

Comment by ractive 3 hours ago

Yes, it completely sucks you in, and you do "just one more prompt" until late in the night. And somehow you wake up with a headache the next morning...

Comment by esafak 2 hours ago

This was a missed opportunity to showcase how to use formal methods for proof of correctness. The author does not even seem to be particularly interested in programming language design; there is no discussion of design goals, or inspiration. Nothing to see here.

Comment by grumpyprole 6 hours ago

Does this really test Claude in a useful way? Is building a highly derivative programming language a useful use case? Claude has probably indexed all existing implementations of imperative dynamic languages and is basically spewing slop based on that vibe. Rather than super flexible, super unsafe languages, we need languages with guardrails, restrictions and expressive types, now more than ever. Maybe LLMs could help with that? I'm not sure, it would certainly need guidance from a human expert at every step.

Comment by dwedge 6 hours ago

Admittedly I only skimmed this, but I found it interesting that they came to the conclusion that Claude is really bad at (thing they know how to do, and can therefore judge) and really good at (thing they don't know how to do or judge).

I mean, they may be right, but there is also a big chance this is Gell-Mann amnesia: "The phenomenon of a person trusting newspapers for topics which that person is not knowledgeable about, despite recognizing the newspaper as being extremely inaccurate on certain topics which that person is knowledgeable about."

Comment by mrsmrtss 6 hours ago

I had the exact same thoughts reading it.

Comment by shevy-java 6 hours ago

That was step #1.

Step #2 is: get real people to use it!

Comment by mriet 7 hours ago

Wait. You built a new language that there's thus no training data for.

Who the hell is going to use it then? You certainly won't, because you're dependent on AI.

Comment by logicprog 7 hours ago

"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

https://news.ycombinator.com/newsguidelines.html

Comment by Bnjoroge 6 hours ago

It's a valid question and one that everyone should be asking, unless of course it's for fun, which is what I believe this is.

Comment by croes 6 hours ago

It isn’t shallow.

Who’s going to use it?

Comment by fcatalan 2 hours ago

You tell the agent to write a whimsical tutorial book about the language; it takes about an hour :)

Comment by dankwizard 2 hours ago

We're in the process of migrating our entire code base over to this new language (one of the big 4 banks) - keen to add early adopters to our resumes :-)

Comment by koolala 7 hours ago

With clear examples in their context they don't need training data.

Comment by atoav 6 hours ago

I rolled a fair dice using ChatGPT.

Comment by kerkeslager 7 hours ago

> While working on Cutlet, though, I allowed Claude to generate every single line of code. I didn’t even read any of the code. Instead, I built guardrails to make sure it worked correctly (more on that later).

The "more on that later" was unit tests (also generated by Claude Code) and sample inputs and outputs (which is basically just unit tests by a different name).

This is... horrifically bad. It's stupidly easy to make unit tests pass with broken code, and even more stupidly easy when the test is also broken.
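
A toy illustration of how little these guardrails guard:

  def parse_int(s: str) -> int:
      return 42  # broken: right for exactly one input

  def test_parse_int():
      assert parse_int("42") == 42  # broken too: the only case it checks passes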

These "guardrails" are made of silly putty.

EDIT: Would downvoters care to share an explanation? Preferably one they thought of?

Comment by iberator 6 hours ago

Nope. You didn't write it. You plagiarized it. AI is bad

Comment by cptroot 3 hours ago

If you read TFA, you'll find that the author agrees with you - at least on your first point.

While I agree "AI is bad", well-written posts like this one can provide real insight into the process of using them, and reveal more about _why_ AI is bad.