Show HN: AlgoDrill – Interactive drills to stop forgetting LeetCode patterns
Posted by henwfan 20 hours ago
I built AlgoDrill because I kept grinding LeetCode, thinking I knew the pattern, and then completely blanking when I had to implement it from scratch a few weeks later.
AlgoDrill turns NeetCode 150 and more into pattern-based drills: you rebuild the solution line by line with active recall, get first principles editorials that explain why each step exists, and everything is tagged by patterns like sliding window, two pointers, and DP so you can hammer the ones you keep forgetting. The goal is simple: turn familiar patterns into code you can write quickly and confidently in a real interview.
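To make that concrete, here is a rough sketch of the kind of sliding-window solution a drill walks you through line by line (illustrative only, not the actual editorial code from the site):

    def longest_unique_substring(s: str) -> int:
        # Sliding window: grow the right edge, shrink the left edge
        # whenever the incoming character is already inside the window.
        seen = set()
        left = 0
        best = 0
        for right, ch in enumerate(s):
            while ch in seen:
                seen.remove(s[left])
                left += 1
            seen.add(ch)
            best = max(best, right - left + 1)
        return best

Each of those lines becomes a small objective you recall and type yourself, with a short explanation after each answer.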
Would love feedback on whether this drill-style approach feels like a real upgrade over just solving problems once, and what’s most confusing or missing when you first land on the site.
Comments
Comment by firsttracks 15 hours ago
Ended up deciding to buy a subscription, but it looks like the site still says "82% claimed" and "17 spots left". I appreciate the one-time purchase model, but it feels like a bit of a shady tactic.
Comment by zweifuss 11 hours ago
Comment by michaelmior 16 hours ago
Comment by VBprogrammer 16 hours ago
Comment by marssaxman 15 hours ago
Comment by VBprogrammer 14 hours ago
I've had candidates describe what I'd loosely call "warm-up" questions as leet code problems. Things like finding the largest integer in an array or figuring out if a word is a palindrome.
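To be clear about the level I mean, roughly this (a quick Python sketch of my own, not anything a candidate wrote):

    def largest(nums):
        # Warm-up: largest integer in an array.
        return max(nums)

    def is_palindrome(word):
        # Warm-up: does the word read the same backwards?
        return word == word[::-1]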
Comment by cloverich 12 hours ago
Typical examples would be sorting algorithms or graph search problems, and some companies do indeed ask these; some big tech (the ones everyone studies for) may exclusively ask these. That's imo largely because CS new grads are their primary pipeline.
Comment by andoando 12 hours ago
Comment by henwfan 14 hours ago
You are right that the current check still leans too much toward my reference solution. It already ignores formatting and whitespace, but it is still quite literal about structure and identifiers, which nudges you toward writing my version instead of your own. There are many valid ways to express the same idea and I do not want to lock people into only mine.
Where I want to take it is two clear modes. One mode tracks the editorial solution for people who want to learn that exact version for an interview, while still allowing harmless changes like different variable names and small structural tweaks. Another mode is more flexible and is meant to accept your own code as long as it is doing the same job. Over time the checker should be able to recognise your solution and adapt its objectives and feedback to what you actually wrote, instead of pushing you into my template. It should care more about whether you applied the right logic under time pressure than whether you matched my phrasing.
There is also a small escape hatch already in the ui. If you completely blank or realise you have missed something, you can press the Stuck button to reveal the reference line and a short explanation, so you still move forward instead of getting blocked by one detail.
You are pushing exactly on the area I plan to invest in most. The first version is intentionally literal so the feedback is never vague, but the goal is for the checker to become more adaptive over time rather than rigid, so it can meet people where they are instead of forcing everyone through one exact solution.
Comment by losteric 14 hours ago
Comment by epolanski 18 hours ago
The lifetime membership + launch discount was good marketing bait that I fell for.
Not really understanding the negativity here. We know for a fact that most of the people that master intellectual problems do so via pattern recognition, not by reasoning.
You show a chess master a position, he/she can instantly tell you what the best moves are without "thinking" or "calculating" because it's mostly pattern recognition.
Maths and algorithms fall in the same category. When approaching new problems, masters don't really start processing the information and reasoning about it, instead they use pattern recognition to find what are very similar problems.
The thing I really don't like is the lack of TypeScript or at least JavaScript, which are the most common languages out there. I really don't enjoy nor use Java/Python/C++.
Comment by embedding-shape 18 hours ago
Where is this fact stated, and who are "we" here? Sounds like an opinion or guess at best.
> Not really understanding the negativity here
There are two comments that could be read negatively; the rest is neutral or positive. I don't really understand the constant need for people to bring up what (they think) the rest of the comments said. Post your piece adding positivity if you want, but most of the time comments end up a fair mix, so any time someone adds a snippet like that, it turns outdated in a few hours.
Comment by epolanski 18 hours ago
Going back to the chess example, while chess masters are incredible at analyzing complex positions they can recognize as "similar to", their advantage over normal human beings is very small when positions are completely randomized.
"Peak: Secrets from the New Science of Expertise", by Ericsson goes more in depth of the topic, but there's lots of literature on the topic.
Comment by hansmayer 16 hours ago
Pattern recognition in experts comes from a combination of theoretical understanding and a lot of practical problem-solving experience (which translates into patterns forming as neural pathways) - not the other way around. If you don't understand the problem you are solving, then yes, maybe you'll be able to throw a pattern at it and with a bit of luck solve it (kind of like how LLMs operate), but this will not lead to understanding. Memorising patterns in isolation from the theoretical background is not something that will create an expert in a field.
Comment by pcthrowaway 17 hours ago
The book you referenced does not say they're comparable to normal players at playing from a random position.
Normal players are almost as good as them at recalling a nonsensical board of random pieces.
The suggestion that the advantage of a chess master over a normal player is "very small" at playing from a random position is laughable.
Comment by epolanski 17 hours ago
Comment by pegasus 16 hours ago
Comment by inesranzo 18 hours ago
> The lifetime membership + launch discount was a good marketing bait I felt for.
The negativity from me is because it feels like clickbait and like a scammy ad meant to manipulate me into purchasing.
It is almost lying. I find it unethical, and I don't think there are 17 lifetime access spots; it's just artificial hype that doesn't make sense to me.
Marketing (at least like this) is basically lying.
Comment by epolanski 17 hours ago
Might be because I'm also considering finding new clients/jobs, and apparently even for 2-3 month collaborations people are sending me through several rounds of algo questions, so it was a nice addition on top of my leetcode and codewars drills.
Comment by henwfan 18 hours ago
I agree with you on pattern recognition. AlgoDrill is built around taking patterns people already understand and turning them into something their hands can write quickly under pressure. You rebuild the solution line by line with active recall, small objectives, and first principles explanations after each step, so it is more than just memorizing code.
You are also right about the language gap. Right now the drills are Python first, but I am already working on full support for JavaScript, Java, and C++ across all problems, and I will have all of those in by the end of this year. I want people to be able to practice in the language they actually use every day, so your comment helps a lot.
Comment by johnhamlin 17 hours ago
Comment by baq 18 hours ago
Comment by Mars008 17 hours ago
this is probably not accidental.
Comment by andoando 10 hours ago
Comment by paddleon 17 hours ago
In the last year or so HN seems to have attracted a lot of people (plus some bots) who seem to have been socialized on Reddit.
I don't know if these people are ignorant of what a good discussion forum can be (because they've never experienced one) or just don't care, but I do wish we could see more reflection on the second-order impacts of posting, and a move away from the reflexive negativity that mimics the outer face of good criticism while totally missing the thought and expertise good criticism requires.
Comment by kilroy123 15 hours ago
Comment by monooso 16 hours ago
If anything, GitHub seems like a more obvious choice for such a site.
Comment by port11 50 minutes ago
Comment by henwfan 13 hours ago
I am working on both and plan to let people move their account once they are live if they would prefer not to use Google here.
Comment by wodenokoto 18 hours ago
A bit more info on what NeetCode is, why I should focus on those 150 problems, and how the drilling actually works would be helpful. Do I get asked to do the same problems on repeat? Is it the same problems reformulated over and over? Is there actually any spaced repetition, or am I projecting?
Comment by henwfan 18 hours ago
NeetCode 150 is a popular curated list of LeetCode problems that covers the core interview patterns people expect nowadays, like sliding window, two pointers, trees, graphs, and dynamic programming. I used that set as the base so you are not guessing which problems to focus on, and more problems and patterns are being added on top of that core set regularly.
On the study side, each problem has a consistent structure with the core idea, why that pattern applies, and a first principles walkthrough of the solution. On the practice side, the solution is broken into small steps. Each step has a clear objective in plain language, and you rebuild the code line by line by filling in the missing pieces. After you answer, you see a short first principles explanation tied to the line you just wrote, so you are actively recalling the logic instead of just reading notes.
You can repeat problems and patterns as much as you want, mark problems as solved or unsolved, and filter by pattern so you can focus on the ones you struggle with most. There is not a full automatic review schedule yet. For now you choose what to review, and the goal is to use that progress data to track weak patterns, guide what you should drill next, and add more types of focused drills over time.
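As a simplified illustration of how one problem breaks into steps (a sketch only; the real drill pairs each line with its own objective beforehand and an explanation afterwards):

    def contains_duplicate(nums):
        seen = set()            # step 1: keep a record of values already visited
        for n in nums:          # step 2: walk the array once
            if n in seen:       # step 3: a repeat means the answer is yes
                return True
            seen.add(n)         # step 4: remember this value for later checks
        return False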
Comment by Fire-Dragon-DoL 3 hours ago
Thank you either way, I purchased a license
Comment by embedding-shape 19 hours ago
But then I don't know how to reconcile the idea that some people use LeetCode to pass interviews and some use it recreationally, while this app seems to indicate some people use LeetCode to learn patterns to implement in the real world, which seems absolutely backwards to me. These are tiny examples, not "real programming" like you'd encounter in the world outside; LeetCode cannot possibly teach you how to create useful programs, it only teaches you syntax and specific problems.
So I guess take this as a word of caution: no matter how much you grind LeetCode, nothing will prepare you to solve real world problems as well as practicing solving real world problems, and you don't need any platforms for that. Just try to make your daily life better and you'll get better at it over time and with the experience of making mistakes.
Comment by baq 18 hours ago
they're doing it for themselves just like when they solve sudokus, crosswords or play fortnite
Comment by another_twist 17 hours ago
Comment by mylifeandtimes 18 hours ago
Comment by Vaslo 15 hours ago
Comment by embedding-shape 15 hours ago
Yeah, this is me very much to the core of my bones, and I think that's why I don't find any pleasure or enjoyment in these synthetic coding challenges, and why I have a hard time understanding those who do.
Comment by 999900000999 15 hours ago
I dislike limited offers, because I think you're placing a bit of unfair pressure on the user to buy. But I went ahead and gave you 30 bucks.
I'm going to study this before my next interview, thank you
Comment by emaro 14 hours ago
Or to put it another way, if I give an applicant a coding problem to solve and they just write down the solution, I didn't learn much about them except that they memorized the solution to my problem. That most likely means I gave them the wrong (too easy) problem. It will only increase the chance of me hiring them by a tiny bit.
Edit: I don't hate the player, I hate the game.
Comment by notepad0x90 14 hours ago
Ten programmers will write ten different solutions to a simple problem, and that code is tech debt other programmers have to maintain at some point. Just having coders who share the same base level of memorized problem-solving patterns can ease that pain, and it can make collaboration and reviews easier down the road.
Comment by pxtail 18 hours ago
Comment by hinicholas 16 hours ago
Comment by francoispiquard 19 hours ago
Comment by henwfan 19 hours ago
In chess you repeat the same positions until the patterns feel automatic. Here it is LeetCode problems. You keep seeing the same core patterns and rebuild the solution step by step. For each step and line there is a small objective first, and then a short first principles explanation after you answer, so you are not just memorizing code but training pattern recognition and understanding at the same time.
Comment by AidenVennis 17 hours ago
Comment by reverseblade2 12 hours ago
Comment by apt-apt-apt-apt 14 hours ago
Comment by skydan 14 hours ago
Comment by bochoh 15 hours ago
Quick suggestions:
- GitHub OAuth would feel natural for devs.
- Broaden language support (C#, TypeScript, Ruby).
- Add dark/light mode toggle for comfort.
Excited to see where it goes — thanks for building.
Comment by henwfan 13 hours ago
GitHub sign in is on the way. Right now it is Google only, but I am adding GitHub so it feels more natural for devs.
For languages, the drills are Python first. Java, C++ and JavaScript will be fully supported by the end of this year across all problems.
The site is dark by default today. A proper light and dark toggle is planned so people can pick what is more comfortable for longer sessions.
Really appreciate you trying it this early and sharing where you would like it to go.
Comment by sumnole 13 hours ago
Comment by henwfan 13 hours ago
You are right that in the current version the checker is still too literal about names and structure. In two sum for example it nudges you toward my map name instead of letting you use your own, which is not what I want to optimise for once you already know the idea.
The plan from here is to keep an editorial mode for people who want to follow the exact solution, and to add a more flexible mode that accepts your own names and structure as long as the code is doing the same job. Over time the checker should recognise what you actually wrote and adapt its objectives and feedback to that, instead of forcing everyone into one naming scheme.
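As a rough illustration with two sum (the names below are just examples, not the exact reference code), both of these should eventually count as correct:

    # Editorial-style version the strict mode follows.
    def two_sum(nums, target):
        prev_map = {}  # value -> index
        for i, n in enumerate(nums):
            diff = target - n
            if diff in prev_map:
                return [prev_map[diff], i]
            prev_map[n] = i
        return []

    # Equivalent version the flexible mode should also accept,
    # even though the names and ordering differ.
    def two_sum_alt(nums, target):
        seen = {}
        for idx, value in enumerate(nums):
            if target - value in seen:
                return [seen[target - value], idx]
            seen[value] = idx
        return []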
Comment by quibono 9 hours ago
Comment by nialv7 7 hours ago
Comment by JoeOfTexas 5 hours ago
Most useful when you work with large datasets: if you can reduce a workload that takes hours down to minutes or less, congrats; otherwise, you are stuck waiting out the hours. Either way, job security.
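A toy sketch of the kind of gap I mean, assuming a simple duplicate check over a huge list (function names are my own):

    # Quadratic version: fine for small inputs, hours on millions of rows.
    def has_duplicate_slow(values):
        for i in range(len(values)):
            for j in range(i + 1, len(values)):
                if values[i] == values[j]:
                    return True
        return False

    # Linear version: one pass with a set, same answer in a fraction of the time.
    def has_duplicate_fast(values):
        seen = set()
        for v in values:
            if v in seen:
                return True
            seen.add(v)
        return False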
Comment by pelagicAustral 18 hours ago
Comment by noident 17 hours ago
* Take home projects filter out people with busy lives. Wastes 100 people's time to hire 1 person. Can't be sure they didn't cheat. No incentive to stop a company from giving you a 10-hour assignment and then not looking at it. The candidate with the most time to waste wins.
* Relying on academic credentials unfairly favors people from privileged backgrounds and doesn't necessarily correlate with skill as an engineer.
* Skipping the tech interview and just talking about the candidate's experience is prone to favoring bullshitters, plus you'll miss smart people who haven't had their lucky break yet.
* Asking "practical" questions tends to eliminate people without familiarity with your problem domain or tech stack.
* We all know how asking riddles and brainteasers worked out.
With leetcode, the curriculum is known up front and I have some assurance that the company at least has some skin in the game when they schedule an engineer to evaluate me. It also tests your general knowledge and in some part intelligence, as opposed to testing that you have some very narrow experience that happens to overlap with the job description.
Comment by stuaxo 14 hours ago
You're filtering out people who don't have a lot of extra time on their hands to get good at one particular kind of puzzle.
Time-poor people like parents, or people who are talented but busy in their current jobs.
Comment by boredtofears 14 hours ago
Haven't done one since pre-LLM era though and that path seems like it might be completely infeasible for employers now.
That said, the most productive interviews I've been a part of, as both employee and employer, have always been with the technical people you'll actually work with, and conversational in nature. You can learn a lot about what someone knows by listening to their experiences and opinions (but this depends greatly on the quality of the interviewer).
Comment by neilv 17 hours ago
Comment by another_twist 17 hours ago
Comment by koakuma-chan 12 hours ago
Comment by androng 17 hours ago
Comment by stack_framer 7 hours ago
Comment by ohghiZai 7 hours ago
Comment by kybernetyk 19 hours ago
Comment by qwertytyyuu 18 hours ago
Comment by dJLcnYfsE3 19 hours ago
Comment by HenryQuillin 10 hours ago
Comment by australium 14 hours ago
Comment by another_twist 17 hours ago
Comment by dragochat 19 hours ago
you either:
(a) want DEEP understanding of math and proofs behind algorithms etc.
(b) can get away with very high level understanding, and refer to documentation and/or use LLMs for implementation details help
there is no real world use case for a middle-ground (c) where you want someone with algo implementation details rote-memorized in their brain and without the very deep understanding that would make the rote-memorization unnecessary!
Comment by komali2 18 hours ago
I was watching a video recently talking about how Facebook is adding KPIs for its engineers' LLM usage. As in, you will be marked negatively in your performance review if your code is good but you didn't use AI enough.
I think, you and I agree, that's obviously stupid right? I imagined myself as an engineer at Facebook, reading this email come through. I can imagine two paths: I roll my eyes, find a way to auto-prompt an LLM to fulfill my KPI needs, and go back to working with my small working group of "underrecognized folks that are doing actual work that keeps the company's products functioning against all odds." Or, the other path: I put on my happy company stooge hat, install 25 VScode LLM forks, start writing a ton of internal and external posts about how awesome AI is and how much more productive I am with it, and get almost 0 actual work done but score the highest on the AI KPIs.
In the second path, I believe I will be more capitalistically rewarded (promotions, cushy middle/upper management job where I don't have to do any actual work). In the first, I believe I will be more fulfilled.
Now consider the modern interview: the market is flooded with engineers after the AI layoffs. There's a good set of startups out there that will appreciate an excellent, pragmatic engineer with a solid portfolio, but there's the majority of other gigs, for which I need to pass a leetcode interview, and nothing else really matters.
If I can't get into one of the good startups, then, I guess I'm going to put on my dipshit spinny helicopter hat and play the stupid clown game with all the managers so I can have money.
Comment by ivape 14 hours ago
But like Art, the artists came from everywhere. We're being dishonest if we don't acknowledge what truly made these developers get to where they are, and it wasn't because they originally went "Oh, I know what I'll do, I'll do thousands of Leetcode problems." That is absolutely not the true story of the developer in the last decade.
Leetcode is a sloppy attempt at recognizing and appropriately handling developers. It was an "attempt", a failed one imho. It fundamentally ignores the spirit in which these developers operated in, it reduces them to gym rats, and that's not how they got there.
This being a spiritual problem is what makes the most consistent sense. Even those that grind Leetcode will tell you their heart is not in it (just like GP mentioned above).
Comment by bko 18 hours ago
More often than not a deep interest in a particular technical domain is a liability. It's like that guy who insists on functional programming design patterns, reaching for a fold with tail recursion where simple mutation would have easily sufficed. Or endless optimization, abstraction and forced patterns. Bro, you're working on building a crud app, we don't need spacecraft design.
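Something like this made-up toy case is what I mean (Python, since that's what the site drills):

    from functools import reduce

    prices = [12.5, 3.0, 7.25]

    # The "clever" version: a fold, pure ceremony for a CRUD app.
    total = reduce(lambda acc, p: acc + p, prices, 0.0)

    # The boring version: simple mutation, which is all that was needed.
    total = 0.0
    for p in prices:
        total += p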
Comment by only-one1701 18 hours ago
Comment by farhanhubble 19 hours ago
I'm seeing this trend again in the field of AI, where math olympiad participants are being given god-like status by a few companies and the media.
Truth is even the most prolific computational scientists will flunk these idiotic interviews.
Comment by netdevphoenix 18 hours ago
It's why developers as a group will lose negotiating power over time. You would expect a smart person to question why that 'problem' exists in the first place rather than forging ahead and building a solution for a problem that doesn't exist. It's like your manager telling you to write software that does something, whatever that is. Your first question should be why, and you should not type a single letter until you understand the domain and whether a software solution is needed in the first place.
For all the intellectualism modern devs ascribe to themselves, they are still asking how high when told to jump. And in some cases even bragging about jump heights. The only difference is that many devs look down upon (or simply are unable to understand) those who refuse to jump.
We all know devs have better things to focus on, given the state of modern software development.
Comment by MyHonestOpinon 14 hours ago
Fun story. For years, I used a set of problems that I took from a very old programming book. I have probably seen dozens of solutions for these problems. About 6 years ago, in an interview, somebody happened to ask me about one of these problems. So I wrote the solution and the interviewer told me it was wrong, but he couldn't tell me why. Then he proceeded to clear the screen (it was a remote interview). So I flunked the interview on a problem that I knew back and forth.
Comment by ascorbic 17 hours ago
Comment by petesergeant 18 hours ago
Ship has definitely sailed
Comment by dzonga 18 hours ago
now the same people in the industry advocating for leetcode are also advocating for vibecoding. I wonder if an LLM is made to do leetcode before approval for vibecoding.
day in, day out, the software gets worse: delayed, shipped with bugs, very slow. Yet, yeah, prove to us you can build software by doing puzzles.
if you advocate for leetcode - fxxk yxx.
Comment by Surac 17 hours ago
Comment by neilv 17 hours ago
It was most popular during zero interest rate phenomenon, when there were numerous investment scams based on startup companies that could have a very lucrative "exit" for those running the scheme, despite losing money as a business.
LeetCode falls out of favor when companies realize they need to build viable businesses, and need software engineers rather than theatre performances.
Comment by koakuma-chan 12 hours ago
Comment by neilv 11 hours ago
But then I looked again at the prep materials they recommended for their frat hazing interview theatre, and it was so depressingly trashy, that it made me not want to work there anymore.
And things I read publicly (e.g., culture of disingenuous mercenary careerism, and hiring scraping the bottom of the barrel that knows only the interview gaming) and hear privately (worse) mean that probably it was for the best that I didn't move there, though the bigger bank account would've been nice.
Comment by dsr_ 17 hours ago
"farming" is the same but without the difficulty: just doing an easy but boring task repeatedly because it gets you something else that you want.
Comment by inesranzo 17 hours ago
Please stop with the false urgency and the borderline lying to people, saying there are 17 spots when there most likely aren't.
Doing this to sell more is unethical and dishonest.
I think if this project didn't do this it might work and go far.
Comment by ErroneousBosh 7 hours ago
Comment by linguae 7 hours ago
Comment by clbrmbr 15 hours ago
Comment by game_the0ry 17 hours ago
But fuck leetcode. With AI, it's obsolete at this point.
Comment by another_twist 17 hours ago
Comment by constantcrying 19 hours ago
I get that some people feel forced into it, but nobody can believe that this is an appropriate measure to judge programmers on. Sure, being able to understand and implement algorithms is important, but this is not what this is training for.
Comment by henwfan 19 hours ago
The reality for a lot of candidates is that they still face rounds that look exactly like that, and they stress out even when they understand the ideas. I built this for that group, where the bottleneck is turning a pattern they already know into code under a clock. Each step in the drills is tied to a first principles explanation, so the focus is on the reasoning behind the pattern, not trivia.
Comment by netdevphoenix 18 hours ago
Comment by smetannik 6 hours ago
LeetCode wants a subscription, NeetCode wants a subscription, and now, yet another one.