Show HN: ChartGPU – WebGPU-powered charting library (1M points at 60fps)
Posted by huntergemmer 3 days ago
Creator here. I built ChartGPU because I kept hitting the same wall: charting libraries that claim to be "fast" but choke past 100K data points.
The core insight: Canvas2D is fundamentally CPU-bound. Even WebGL chart libraries still do most computation on the CPU. So I moved everything to the GPU via WebGPU:
- LTTB downsampling runs as a compute shader
- Hit-testing for tooltips/hover is GPU-accelerated
- Rendering uses instanced draws (one draw call per series)
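For anyone unfamiliar with LTTB (Largest-Triangle-Three-Buckets): here's a minimal CPU reference of the algorithm the compute shader parallelizes - an illustrative sketch, not the library's actual source:

    // LTTB downsampling (CPU reference). xs/ys: columnar input,
    // threshold: number of output points. Returns the kept indices.
    function lttb(xs: Float64Array, ys: Float64Array, threshold: number): number[] {
      const n = xs.length;
      if (threshold >= n || threshold < 3) return [...Array(n).keys()];
      const picked = [0];                          // always keep the first point
      const bucketSize = (n - 2) / (threshold - 2);
      let prev = 0;                                // last selected index
      for (let i = 0; i < threshold - 2; i++) {
        const start = Math.floor(i * bucketSize) + 1;
        const end = Math.min(Math.floor((i + 1) * bucketSize) + 1, n - 1);
        // Average of the *next* bucket: the triangle's third vertex.
        const nEnd = Math.min(Math.floor((i + 2) * bucketSize) + 1, n);
        let avgX = 0, avgY = 0;
        for (let j = end; j < nEnd; j++) { avgX += xs[j]; avgY += ys[j]; }
        avgX /= nEnd - end; avgY /= nEnd - end;
        // Keep the point forming the largest triangle with the previously
        // kept point and that average (factor 1/2 omitted: argmax only).
        let best = start, maxArea = -1;
        for (let j = start; j < end; j++) {
          const area = Math.abs((xs[prev] - avgX) * (ys[j] - ys[prev]) -
                                (xs[prev] - xs[j]) * (avgY - ys[prev]));
          if (area > maxArea) { maxArea = area; best = j; }
        }
        picked.push(best);
        prev = best;
      }
      picked.push(n - 1);                          // always keep the last point
      return picked;
    }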
The result: 1M points at 60fps with smooth zoom/pan.
Live demo: https://chartgpu.github.io/ChartGPU/examples/million-points/
Currently supports line, area, bar, scatter, pie, and candlestick charts. MIT licensed, available on npm: `npm install chartgpu`
Happy to answer questions about WebGPU internals or architecture decisions.
Comments
Comment by leeoniya 2 days ago
some notes from a very brief look at the 1M demo:
- sampling has a risk of eliminating important peaks, uPlot does not do it, so for apples-to-apples perf comparison you have to turn that off. see https://github.com/leeoniya/uPlot/pull/1025 for more details on the drawbacks of LTTB
- when doing nothing / idle, there is significant cpu being used, while canvas-based solutions will use zero cpu when the chart is not actively being updated (with new data or scale limits). i think this can probably be resolved in the WebGPU case with some additional code that pauses the updates.
- creating multiple charts on the same page with GL (e.g. dashboard) has historically been limited by the fact that Chrome is capped at 16 active GL contexts that can be acquired simultaneously. Plotly finally worked around this by using https://github.com/greggman/virtual-webgl
> data: [[0, 1], [1, 3], [2, 2]]
this data format, unfortunately, necessitates the allocation of millions of tiny arrays. i would suggest switching to a columnar data layout.
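e.g. (hypothetical shapes, just to illustrate the difference):

    // row-oriented: one small array allocated per point (1M points = 1M arrays)
    const rows: [number, number][] = [[0, 1], [1, 3], [2, 2]];

    // columnar: two flat typed arrays, allocated once, and uploadable
    // to a GPU buffer without per-point boxing
    const xs = new Float64Array([0, 1, 2]);
    const ys = new Float64Array([1, 3, 2]);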
uPlot has a 10M datapoint demo here, if interested: https://leeoniya.github.io/uPlot/bench/uPlot-10M.html
Comment by huntergemmer 2 days ago
Both points are fair:
1. LTTB peak elimination - you're right, and that PR is a great reference. For the 1M demo specifically, sampling is on by default to show the "it doesn't choke" story. Users can set sampling: 'none' for apples-to-apples comparison. I should probably add a toggle in the demo UI to make that clearer.
2. Idle CPU - good catch. Right now the render loop is probably ticking even when static. That's fixable - should be straightforward to only render on data change or interaction. Will look into it.
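Something like the classic dirty-flag pattern (a sketch with illustrative names, not the actual internals):

    let dirty = false;
    let rafId: number | null = null;

    // Called from data updates, zoom/pan handlers, resize, etc.
    function invalidate(): void {
      dirty = true;
      if (rafId === null) rafId = requestAnimationFrame(frame);
    }

    function frame(): void {
      rafId = null;
      if (!dirty) return;  // nothing changed since the last frame
      dirty = false;
      render();            // issue the WebGPU draw calls
    }

    declare function render(): void; // the chart's real draw routine

Idle CPU then drops to zero because no rAF callback is even scheduled.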
Would love your deeper dive feedback when you get to it. Always more to learn from someone who's thought about this problem as much as you have.
Comment by dapperdrake 2 days ago
And column-oriented data is a must. Look at R's data frames, pandas, polars, numpy, SQL, and even Fortran's matrix layout.
Also need specialized, explicitly targetable support for Float32Array and Float64Array. Both API and ABI are necessary if you want to displace incumbents.
There is huge demand for a good web implementation. This is what it takes.
Am interested in collaborating.
Comment by huntergemmer 2 days ago
Comment by olau 2 days ago
I once had to deal with many million data points for an application. I ended up mip-mapping them client-side.
But regarding sampling, if it's a line chart, you can sample adaptively by checking whether the next point makes a meaningfully visible difference measured in pixels compared to its neighbours. When you tune it correctly, you can drop most points without the difference being noticeable.
I didn't find anyone else doing that at the time, and some people seemed to have trouble accepting it as a viable solution, but if you think about it, it doesn't actually make sense to plot, say, 1 million points in a line chart 1000 pixels wide. On average that would make 1000 points per pixel.
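Roughly, in code (illustrative names; a production version should also track per-pixel min/max so isolated spikes survive):

    // Keep a point only if it lands >= epsPx pixels away from the
    // last kept point on either axis; assumes points sorted by x.
    function decimateByPixel(
      xs: Float64Array, ys: Float64Array,
      toPxX: (x: number) => number, toPxY: (y: number) => number,
      epsPx = 0.5,
    ): number[] {
      const kept = [0];
      let lastX = toPxX(xs[0]), lastY = toPxY(ys[0]);
      for (let i = 1; i < xs.length; i++) {
        const px = toPxX(xs[i]), py = toPxY(ys[i]);
        if (Math.abs(px - lastX) >= epsPx || Math.abs(py - lastY) >= epsPx) {
          kept.push(i);
          lastX = px; lastY = py;
        }
      }
      return kept;
    }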
Comment by PaulDavisThe1st 2 days ago
Bresenham's is one algorithm historically used to downsample the data, but a lot of contemporary audio software doesn't use that. In Ardour (a cross-platform, libre, open source DAW), we actually compute and store min/max-per-N-samples and use that for plotting (and as the basis for further downsampling).
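The reduction itself is tiny - something like this (a TypeScript sketch of the idea, not Ardour's actual C++):

    // Reduce samples to per-bucket (min, max) pairs, n samples per
    // bucket. Peaks survive by construction, and the output can be
    // reduced again the same way for coarser zoom levels.
    function minMaxPerN(samples: Float32Array, n: number): Float32Array {
      const buckets = Math.ceil(samples.length / n);
      const out = new Float32Array(buckets * 2); // [min0, max0, min1, max1, ...]
      for (let b = 0; b < buckets; b++) {
        let lo = Infinity, hi = -Infinity;
        const end = Math.min((b + 1) * n, samples.length);
        for (let i = b * n; i < end; i++) {
          const v = samples[i];
          if (v < lo) lo = v;
          if (v > hi) hi = v;
        }
        out[b * 2] = lo;
        out[b * 2 + 1] = hi;
      }
      return out;
    }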
Comment by leeoniya 2 days ago
this is, effectively, what uPlot does, too: https://github.com/leeoniya/uPlot/issues/1119
Comment by ghc 2 days ago
I discovered flot during my academic research career circa 2008 and it saved my ass more times than I can count. I just wanted to say thank you for that. I wouldn't be where I am today without your help :)
Comment by leeoniya 2 days ago
> But regarding sampling, if it's a line chart, you can sample adaptively by checking whether the next point makes a meaningfully visible difference measured in pixels compared to its neighbours.
uPlot basically does this (see sibling comment), so hopefully that's some validation for you :)
Comment by dapperdrake 2 days ago
Comment by vlovich123 2 days ago
Comment by huntergemmer 2 days ago
My concern would be computational cost for real-time/streaming use cases. LTTB is O(n) and pretty cache-friendly. Wavelet transforms are more expensive, though maybe a GPU compute shader could make it viable.
The other question is whether it's "visually correct" for charting specifically. LTTB optimizes for preserving the visual shape of the line at a given resolution. Wavelet decomposition optimizes for signal reconstruction - not quite the same goal.
That said, I'd be curious to experiment. Do you have any papers or implementations in mind? Would make for an interesting alternative sampling mode.
Comment by vlovich123 2 days ago
I don't have any papers in mind, but I do think that the critique around visual shape vs signal reconstruction may not be accurate given that wavelets are starting to see a lot of adoption in the visual space (at least JPEG2000 is the leading edge in that field). Might also be interesting to use DCT as well. I think these will perform better than LTTB (of course the compute cost is higher but there's also HW acceleration for some of these or will be over time).
Comment by dapperdrake 2 days ago
Comment by dapperdrake 2 days ago
Sounds like what makes SQL joins NP-hard.
Comment by vlovich123 11 hours ago
Comment by dapperdrake 2 days ago
Comment by apitman 2 days ago
Sometimes I like to ponder on the immense amount of engineering effort expended on working around browser limitations.
Comment by dapperdrake 2 days ago
Comment by aurbano 2 days ago
Comment by leeoniya 2 days ago
Comment by sarusso 2 days ago
Comment by Bengalilol 2 days ago
Comment by dapperdrake 2 days ago
Comment by Cabal 2 days ago
Comment by fuckyah 2 days ago
Comment by fuckyah 2 days ago
Comment by zokier 2 days ago
In electronics world this is what "digital phosphor" etc does in oscilloscopes, which started out as just emulating analog scopes. Some examples are visible here https://www.hit.bme.hu/~papay/edu/DSOdisp/gradient.htm
Comment by huntergemmer 2 days ago
Comment by akomtu 2 days ago
Comment by MindSpunk 2 days ago
Comment by akomtu 2 days ago
Comment by dheera 2 days ago
Comment by rustystump 2 days ago
There is this misconception that if one uses JS or C# to tell a GPU what to do, it is somehow slower than Rust. It only is if you're crunching data on the CPU; moving memory to the GPU and telling the GPU to crunch is virtually identical.
Comment by dheera 2 days ago
Comment by rustystump 2 days ago
Even then, when you write to a framebuffer directly on the GPU, if the locations of the points are not contiguous you are thrashing. Rendering points very fast is still very much about reducing the data set down to bypass all the layers of memory walls.
Comment by dapperdrake 2 days ago
Comment by vanderZwan 2 days ago
Comment by akomtu 2 days ago
Comment by vanderZwan 2 days ago
Comment by akomtu 1 day ago
Comment by vanderZwan 1 day ago
Repeatedly shrinking by a factor of two means log2(max(width, height)) passes, each pass is a quarter of the pixels of the previous pass so that's a total of 4/3 times the pixels of the original image. Should be low enough overhead, right?
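Spelled out, each pass k touches (1/4)^k of the original pixels, so:

    \sum_{k=0}^{\infty} \left(\tfrac{1}{4}\right)^{k} = \frac{1}{1 - \tfrac{1}{4}} = \frac{4}{3}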
Comment by akomtu 20 hours ago
Comment by jsmailes 2 days ago
Comment by leeoniya 2 days ago
Comment by dapperdrake 2 days ago
Add Lab color space for this though, like the color theme solarized-light.
Also add options to side-step red-green blindness and blue-yellow blindness.
Comment by hienyimba 2 days ago
We’ve been working on a browser-based Link Graph (OSINT) analysis tool for months now (https://webvetted.com/workbench). The graph charting tools on the market are pretty basic for the kind of charting we are looking to do (think 1000s of connected/disconnected nodes/edges). Being able to handle 1M points is a dream.
This will come in very handy.
Comment by huntergemmer 2 days ago
Is graph visualization something you'd want as part of ChartGPU, or would a separate "GraphGPU" type library make more sense? Curious how you're thinking about it.
Comment by agentcoops 2 days ago
More directly relevant, I haven't looked at the D3 internals for a decade, but I wonder if it might be tractable to use your library as a GPU rendering engine. I guess the big question for the future of your project is whether you want to focus on the performance side of certain primitives or expand the library to encompass all the various types of charts/customization that users might want. Probably that would just be a different project entirely/a nightmare, but if feasible even for a subset of D3 you would get infinitely customizable charts "for free." https://github.com/d3/d3-shape might be a place to look.
In my past life, the most tedious aspect of building such a tool was how different graph standards and expectations are across different communities (data science, finance, economics, natural sciences, etc). Don't get me started about finance's love for double y-axis charts... You're probably familiar with it, but https://www.amazon.com/Grammar-Graphics-Statistics-Computing... is fantastic if you continue on your own path chart-wise and you're looking for inspiration.
Comment by huntergemmer 2 days ago
That said, the ECharts-style declarative API is intentionally designed to be "batteries included" for common cases. So it's a balance: the primitives are fast, but you get sensible defaults for the 80% use case without configuring everything. Double y-axis is a great example - that's on the roadmap because it's so common in finance and IoT dashboards. Same with annotations, reference lines, etc. Haven't read the Grammar of Graphics book but it's been on my list - I'll bump it up. And d3-shape is a great reference for the path generation patterns. Thanks for the pointers!
Question: What chart types or customization would be most valuable for your use cases?
Comment by agentcoops 2 days ago
That is, you're definitely developing the tool in a direction that I and I think most Hacker News readers will appreciate and it sounds like you're already thinking about some of the most common "extravagances" (annotations, reference lines, double y-axis etc). As OP mentioned, I think there's a big need for more performant client-side graph visualization libraries, but that's really a different project. Last I looked, you're still essentially stuck with graphviz prerendering for large enough graphs...
Comment by huntergemmer 2 days ago
"Data science/data journalism" is a great way to frame the target audience. Clean defaults, sensible design, fast enough that the tool disappears and you just see the data.
And yeah, graphviz keeps coming up in this thread - clearly a gap in the ecosystem. Might be a future project, but want to nail the 2D charting story first and foremost.
Thanks for the thoughtful feedback - this is exactly the kind of input that shapes the roadmap.
Comment by graphviz 2 days ago
A lot of improvements are possible, based on 20 years of progress in interactive systems, and just overall computing performance.
Comment by lmeyerov 2 days ago
Most recently adding to the family is our open source GFQL graph language & engine layer (cypher on GPUs, including various dataframe & binary format support for fast & easy large data loading), and under the louie.ai umbrella, piloting genAI extensions
Comment by MeteorMarc 2 days ago
Comment by losteric 2 days ago
Comment by wesammikhail 2 days ago
Comment by kposehn 2 days ago
Comment by huntergemmer 2 days ago
Comment by kqr 2 days ago
Comment by huntergemmer 2 days ago
You can now render up to 5 million candles. Just tested it - achieved 104 FPS with 5M candles streaming at 20 ticks/second.
Demo: https://chartgpu.github.io/ChartGPU/examples/candlestick-str...
Also fixed, per earlier suggestions and feedback:
- Data zoom slider bug (no longer snaps to the left or right)
- Idle CPU usage bug (added user controls along with more clarity in the 1M point benchmark)
13 hours on the front page, 140+ comments and we're incorporating feedback as it comes in.
This is why HN is the best place to launch. Thanks everyone :)
Comment by mcintyre1994 2 days ago
Comment by huntergemmer 2 days ago
One thing to note: I added a toggle to "Benchmark mode" in the 1M benchmark example - this preserves the benchmark capability while demonstrating efficient idle behavior.
Another thing to note: Do not be alarmed when you see the FPS counter display 0 (lol), that is by design :) Frames are rendered efficiently. If there's nothing to render (no dirty frames) nothing is rendered. The chart will still render at full speed when needed, it just doesn't waste cycles rendering the same static image 60 times per second.
Blown away by all of you amazing people and your support today :)
Comment by azangru 2 days ago
https://chartgpu.github.io/ChartGPU/examples/million-points/...
While dragging, the slider does not stay under the cursor, but instead moves by unexpected distances.
Comment by huntergemmer 2 days ago
Looks like the data zoom slider has a momentum/coordinate mapping issue. Bumping this up the priority list since multiple people are hitting it.
Comment by virgil_disgr4ce 2 days ago
Comment by barrell 2 days ago
After the initial setup and learning curve, it was actually very easy. All in all, way less complicated than all the performance hacks I had to do to get 0.01% of the data to render half as smooth using d3.
Although this looks next level. I make sure all the computation happens in a single O(n) loop, but the main loop still takes place on the CPU. Very well done.
To anyone on the fence, GPU charting seemed crazy to me beforehand (classic overengineering) but it ends up being much simpler (and much much much smoother) than traditional charts!
Comment by tempaccsoz5 2 days ago
[0]: https://chartgpu.github.io/ChartGPU/examples/live-streaming/...
[1]: https://crisislab-timeline.pages.dev/examples/live-with-plug...
Comment by kshri24 2 days ago
[1]: https://github.com/ChartGPU/ChartGPU/blob/main/.cursor/agent...
[2]: https://github.com/ChartGPU/ChartGPU/blob/main/.claude/agent...
Comment by bobmoretti 2 days ago
Comment by janice1999 2 days ago
- new account
- spamming the project to HN, reddit etc the moment the demo half works
- single contributor repo
- Huge commits minutes apart
- repo is less than a week old (sometimes literally hours)
- half the commits start with "Enhance"
- flashy demo that hides issues immediately obvious to experts in the field
- author has slop AI project(s)
OP uses more than one branch so he's more sophisticated than most.
Comment by yogitakes 2 days ago
Here’s a demo of wip rendering engine we’re working on that boosted our previous capabilities of 10M data points to 100M data points.
Comment by 33a 2 days ago
Comment by marginalx 2 days ago
Comment by mikepurvis 2 days ago
However, this is pretty great; there really aren't that many use cases that require more than a million points. You might finally unseat dygraphs as the gold standard in this space.
Comment by zozbot234 2 days ago
I guess the real draw here is smooth scrolling and zooming, which is hard to do with server-rendered tiles. There's also the case of fully local use, where server rendering doesn't make much sense.
Comment by tomjakubowski 2 days ago
The computer on my desk only costs me the electric power to run it, and there's 0 network latency between it and the monitor on which I'm viewing charts. If I am visualizing some data and I want to rapidly iterate on the visualization or interact with it, there's no more ideal place for the data to reside than right there. DDR5 and GPUs will be cheap again, some day.
Comment by dapperdrake 2 days ago
Comment by internetter 2 days ago
I agree, unfortunately no library I've found supports this. I currently SSR plots to SVG using observable plot and JSDom [0]. This means there is no javascript bundle, but also no interactivity, and observable doesn't have a method to generate a small JS sidecar to add interactivity. I suppose you could progressive enhance, but plot is dozens of kilobytes that I'd frankly rather not send.
[0] https://github.com/boehs/site/blob/master/conf/templating/ma...
Comment by switz 2 days ago
It’s more low level than a full charting library, but most of it can run natively on the server with zero config.
I’ve always found performance to be kind of a drag with server side dom implementations.
Comment by mikepurvis 2 days ago
I think it's just a different mindset; GIS libs like Leaflet kind of assume they're the centerpiece of the app and can dictate a bunch of structure around how things are going to work, whereas charting libs benefit a lot more from "just add me to your webpack bundle and call one function with an array and a div ID, I promise not to cause a bunch of integration pain!"
Last time I tried to use it for dashboarding, I found Kibana did extremely aggressive down-sampling to the point that it was averaging out the actual extremes in the data that I needed to see.
Comment by dapperdrake 2 days ago
Comment by volkercraig 2 days ago
Haha, Highcharts is a running joke around my office because of this. Every few years the business will bring in consultants to build some interface for us, and every time we have to explain to them that Highcharts, even with its turbo mode enabled, chokes on our data streams almost immediately.
Comment by ranger_danger 2 days ago
Even when I turn on dom.webgpu.enabled, I still get "WebGPU is disabled by blocklist" even though your domain is not in the blocklist, and even if I turn on gfx.webgpu.ignore-blocklist.
Comment by embedding-shape 2 days ago
Comment by tonyplee 2 days ago
Very cool project. Thanks!!!
Comment by jsheard 2 days ago
Comment by ranger_danger 2 days ago
Comment by pier25 2 days ago
Comment by call_to_action 2 days ago
Comment by huntergemmer 2 days ago
The slider should now track the cursor correctly on macOS. If you tried the million-points demo earlier and the zoom felt off, give it another shot.
This is why I love launching on HN - real feedback from people actually trying the demos. Keep it coming! :)
Comment by Tiberium 2 days ago
Comment by deepfriedrice 2 days ago
Comment by SeasonalEnnui 2 days ago
There doesn't seem to be a communication mechanism that has minimal memcopy or no serialization/deserialization; the security boundary makes this difficult.
I have a backend array of 10M i16 points, I want to get this into the frontend (with scale & offset data provided via side channel to the compute shader).
As it stands, I currently process on the backend and send the frontend a bitmap or simplified SVG. I'm curious to know about the opposite approach.
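What I'd hope for is something like this (a sketch using standard fetch/WebGPU calls, with an illustrative endpoint; as far as I can tell queue.writeBuffer is still one copy across the security boundary, which may be the floor):

    // Acquire a device (assumes WebGPU is available).
    const adapter = await navigator.gpu.requestAdapter();
    const device = await adapter!.requestDevice();

    // Fetch the 10M i16 samples as raw bytes: no JSON, no per-point objects.
    const resp = await fetch('/api/points.bin');   // illustrative endpoint
    const bytes = await resp.arrayBuffer();        // one contiguous allocation
    const points = new Int16Array(bytes);          // zero-copy view

    // Upload into a storage buffer for the compute shader. WebGPU has no
    // i16 storage type, so the shader reads u32 words and unpacks two
    // samples each; scale/offset arrive separately as uniforms.
    const buf = device.createBuffer({
      size: points.byteLength,                     // 20 MB, already 4-byte aligned
      usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST,
    });
    device.queue.writeBuffer(buf, 0, points.buffer, 0, points.byteLength);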
Comment by olau 2 days ago
Comment by SeasonalEnnui 2 days ago
Comment by dapperdrake 2 days ago
Comment by shunia_huang 2 days ago
Comment by rustystump 2 days ago
For this, compression/quantize numbers and then pass that directly to the gpu after it comes off the network. Have a compute shader on the gpu decompress before writing to a frame buffer. This is what high performance lidar streaming renderers do as lidar data is packed efficiently for transport.
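The encode half is roughly this (illustrative sketch; the matching decode is a few lines in the compute shader):

    // Quantize f64 samples to i16 plus a (scale, offset) pair that the
    // GPU-side decode inverts: value ~= q * scale + offset.
    function quantize(values: Float64Array): { q: Int16Array; scale: number; offset: number } {
      let lo = Infinity, hi = -Infinity;
      for (const v of values) { if (v < lo) lo = v; if (v > hi) hi = v; }
      const offset = (lo + hi) / 2;
      const scale = (hi - lo) / 65534 || 1;  // map the full range onto i16
      const q = new Int16Array(values.length);
      for (let i = 0; i < values.length; i++) {
        q[i] = Math.round((values[i] - offset) / scale);
      }
      return { q, scale, offset };
    }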
Comment by fulafel 1 day ago
But minimizing copying or avoiding format conversions doesn't necessarily get you best performance of course.
Comment by SeasonalEnnui 1 day ago
Comment by fulafel 1 day ago
Comment by SeasonalEnnui 1 day ago
Comment by lmeyerov 2 days ago
Comment by pier25 2 days ago
Comment by johndough 2 days ago
[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1870699
And there is the issue of getting the browser to use the correct GPU in the first place, but that is a different can of worms.
Comment by bhouston 2 days ago
Comment by m132 2 days ago
Please support a fallback, ideally a 2D one too. WebGPU and WebGL are a privacy nightmare and the former is also highly experimental. I don't mind sub-60 FPS rendering, but I'd hate having to enable either of them just to see charts if websites were to adopt this library.
The web is already bad requiring JavaScript to merely render text and images. Let's not make it any worse.
Comment by dapperdrake 2 days ago
Comment by sroussey 2 days ago
Biggest issue is macOS users with newer Safari on older macOS.
Comment by kawogi 2 days ago
This blocks progress (and motivation) on some of my projects.
Comment by Joeboy 2 days ago
But personally, I'm not going to start turning on unsafe things in my browser so I can see the demo. I tried firefox and chromium and neither worked so pfft, whatever.
Comment by ColinEberhardt 2 days ago
D3fc maintainer here. A few years back we added WebGL support to D3fc (a component library for people building their own charts with D3), allowing it to render 1m+ datapoints:
https://blog.scottlogic.com/2020/05/01/rendering-one-million...
Comment by hdjrudni 2 days ago
That's what I'm using now but I gave it too much data and it takes like a minute to render so I'm quite interested in this.
Comment by huntergemmer 2 days ago
Comment by jeffbee 2 days ago
Comment by altern8 2 days ago
Error message: "WebGPU Error: Failed to request WebGPU adapter. No compatible adapter found. This may occur if no GPU is available or WebGPU is disabled.".
Comment by kettlecorn 2 days ago
WebGPU is supported on Chrome and on the latest version of Safari. On Linux with all browsers WebGPU is only supported via an experimental flag.
Comment by mholt 1 day ago
Comment by embedding-shape 2 days ago
Would be great if you had a button there one can press, and it does a 10-15 second benchmark then print a min/max report, maybe could even include loading/unloading the data in there too, so we get some ranges that are easier to share, and can compare easier between machines :)
Comment by huntergemmer 2 days ago
Love the benchmark button idea. A "Run Benchmark" mode that captures:
- Load time
- GPU time
- CPU time
- Min/max/avg FPS over 10-15 seconds
- Hardware info
Then export a shareable summary or even a URL with encoded results. Would make for great comparison threads.
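The FPS-capture part is simple enough (sketch, illustrative names):

    // Sample frame deltas via requestAnimationFrame for `seconds`,
    // then report min/max/avg FPS.
    function runBenchmark(seconds = 10): Promise<{ min: number; max: number; avg: number }> {
      return new Promise((resolve) => {
        const fps: number[] = [];
        let last = performance.now();
        const stopAt = last + seconds * 1000;
        function tick(now: number): void {
          fps.push(1000 / (now - last));
          last = now;
          if (now < stopAt) requestAnimationFrame(tick);
          else resolve({
            min: Math.min(...fps),
            max: Math.max(...fps),
            avg: fps.reduce((a, b) => a + b, 0) / fps.length,
          });
        }
        requestAnimationFrame(tick);
      });
    }

(With the new dirty-flag rendering, the benchmark would need to force continuous invalidation so frames actually get produced.)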
Adding this to the roadmap - would make a great v0.2 feature. Thanks for the suggestion!
Comment by zamadatix 2 days ago
Comment by pdyc 2 days ago
One more issue is that some browser and OS combinations do not support WebGPU, so we will still have to rely on existing libraries in addition to this, but it feels promising.
Comment by samradelie 2 days ago
I've been looking for a followup to uPlot - Lee who made uPlot is a genius and that tool is so powerful, however I need OffscreenCanvas running charts 100% in worker threads. Can ChartGPU support this?
I started an Opus 4.5 rewrite of uPlot to decouple it from its DOM reliance, but your project is another level of genius.
I hope there is consideration for running your library 100% in a worker thread ( the data munging pre-chart is very heavy in our case )
Again, congrats!
Comment by huntergemmer 2 days ago
Worker thread support via OffscreenCanvas is a great idea and WebGPU does support it. I haven't tested ChartGPU in a worker context yet, but the architecture should be compatible - we don't rely on DOM for rendering, only for the HTML overlay elements (tooltips, axis labels, legend).
The main work would be:
1. Passing the OffscreenCanvas to the worker
2. Moving the tooltip/label rendering to message-passing or a separate DOM layer
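Step 1 is standard platform API - roughly this (sketch; worker filename and message shape are illustrative):

    // main.ts - hand rendering control of the canvas to a worker.
    const canvas = document.querySelector('canvas')!;
    const offscreen = canvas.transferControlToOffscreen();
    const worker = new Worker('chart-worker.js');
    worker.postMessage({ canvas: offscreen }, [offscreen]); // transferred, not copied

    // chart-worker.ts - the worker owns the canvas and the WebGPU device.
    self.onmessage = async (e: MessageEvent) => {
      const canvas: OffscreenCanvas = e.data.canvas;
      const adapter = await navigator.gpu.requestAdapter();
      const device = await adapter!.requestDevice();
      const ctx = canvas.getContext('webgpu')!;
      ctx.configure({ device, format: navigator.gpu.getPreferredCanvasFormat() });
      // ...create the chart against ctx and render off the main thread.
    };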
For your use case with heavy data munging, you could also run just the data processing in a worker and pass the processed arrays to ChartGPU on the main thread - that might be a quicker win.
Would you open an issue on GitHub? I'd love to understand your specific workload better. This feels like a v0.2 feature worth prioritizing.
Comment by samradelie 2 days ago
There is certainly something beautiful about your charting GPU code being part of a file that runs completely isolated in another thread along with our WebSocket data fire hose.
Architecturally that could be something interesting: you expose a typed API wrapping postMessage, where consumers wanting to bind the main thread to a worker thread provide the OffscreenCanvas plus a stream of normalized touch, pointer, keyboard, and wheel events. Your worker listeners then handle these incoming events as if they came straight from event listeners on the main thread; effectively, your library is thread agnostic.
I'd be happy to discuss this on GitHub. I'll try to get to that today. See you there.
Comment by samradelie 2 days ago
Comment by rustystump 2 days ago
Comment by pf1gura 2 days ago
On the topic of support for worker threads, in my current project I have multiple data sources, each handled by its own worker. Copying data between worker and main thread - even processed - can be an expensive operation. Avoiding it can further help with performance.
Comment by facontidavide 2 days ago
Funny enough, I am doing something very similar: a portable C++ charting library (Windows, Linux, macOS) that also compiles to WASM and runs in the browser...
I am still at day 2, so see you in 3 days, I guess!
Comment by ivanjermakov 2 days ago
Comment by mcintyre1994 2 days ago
Comment by huntergemmer 2 days ago
Comment by fourthark 2 days ago
Comment by d--b 3 days ago
Comment by huntergemmer 3 days ago
Which demo were you on? (million-points, live-streaming, or sampling?) I'll test on M1 today and get a fix out.
Really appreciate you taking the time to try it :)
Comment by qayxc 2 days ago
Comment by monegator 2 days ago
Comment by abuldauskas 2 days ago
Comment by mikepurvis 2 days ago
Comment by niilokeinanen 2 days ago
All the optimizations mentioned except LTTB downsampling in compute shaders can be done in WebGL.
Web charts with >1M points and 60 FPS zooming/panning have been available since 2019. For example, here's a line chart with 100M points (100x more): https://lightningchart.com/lightningchart-js-demos/100M/
But still, love to see it. WebGPU will surely go forward slowly as these things naturally do, but practical experimentation is essential.
Comment by utf_8x 2 days ago
Comment by smusamashah 2 days ago
Drawing and scrolling live data was a problem for one lib (don't remember which) because it was drawing the whole thing on every frame.
Comment by Mogzol 2 days ago
Although dragging the slider at the bottom is currently kind of broken as mentioned in another comment, seems like they are working on it though.
Comment by imiric 2 days ago
Most high-level charting libraries already support downsampling. Rendering data that is not visible is a waste of CPU cycles anyway. This type of optimization is very common in 3D game engines.
Also, modern CPUs can handle rendering of even complex 2D graphs quite well. The insanely complex frontend stacks and libraries, a gazillion ads and trackers, etc., are a much larger overhead than rendering some interactive charts in a canvas.
I can see GPU rendering being useful for applications where real-time updates are critical, and you're showing dozens of them on screen at once, in e.g. live trading. But then again, such applications won't rely on browsers and web tech anyway.
Comment by facontidavide 1 day ago
By no means is it as nice-looking as your demo, but it is interesting to ME... C++, compiled to WASM, using WebGL. Works on Firefox too. M4 decimation.
Comment by dfortes 2 days ago
This is just embarrassing.
Comment by dapperdrake 2 days ago
http://perceptualedge.com/examples.php
There is also ggplot, ggplot2 and the Grammar of Graphics by Leland Wilkinson. Sadly, Algebra is so incompatible with Geometry that I found the book beautiful but useless for my problem domains after buying and reading and pondering it.
Comment by mitdy 2 days ago
Comment by aixnr 2 days ago
Comment by dapperdrake 2 days ago
Maybe I am just bad at reading specifications or finding the right web browser.
Comment by embedding-shape 2 days ago
Comment by buibuibui 2 days ago
Comment by dapperdrake 2 days ago
Comment by mholt 1 day ago
Comment by akst 2 days ago
Comment by reactordev 2 days ago
I’ve written several of these in the past. Was going to write one in pure WebGPU for a project I’m working on but you beat me to it and now I feel compelled to try yours before going down yet another charting rabbit hole.
Comment by elAhmo 2 days ago
Comment by dapperdrake 2 days ago
Comment by dangoodmanUT 2 days ago
Comment by kirilln0v 2 days ago
How do you think this would be possible? Because on RN most of the graph libs run on the CPU or on Skia (which is good, but still utilises the CPU for path rendering).
Comment by amirhirsch 2 days ago
Comment by jhatemyjob 2 days ago
Comment by akdor1154 2 days ago
Vega/VGlite have amazing charting expressivity in their spec language, most other charting libs don't come close. It would be very cool to be able to take advantage of that.
Comment by Moosdijk 2 days ago
Comment by rgreen 2 days ago
Comment by KellyCriterion 2 days ago
Comment by artursapek 2 days ago
Comment by justplay 2 days ago
Comment by rzmmm 2 days ago
Comment by deburo 2 days ago
Comment by huntergemmer 2 days ago
3D is coming (it's the same rendering pipeline), but I'd want to get the 2D story solid first before expanding scope.
The slice animation is doable though - we already have animation infrastructure for transitions. An "explode slice on click" effect would be a fun addition to the pie/donut charts.
What's your use case? Dashboard visuals or something else?
Comment by dvh 2 days ago
Comment by PxldLtd 2 days ago
Comment by embedding-shape 2 days ago
Comment by bhouston 2 days ago
Comment by ivanjermakov 2 days ago
Comment by nova3000 2 days ago
Comment by Andr2Andr 2 days ago
Comment by kayson 2 days ago
Comment by escapecharacter 2 days ago
Comment by btbuildem 2 days ago
Comment by mdulcio 2 days ago
Comment by lacoolj 2 days ago
Comment by rustystump 2 days ago
The code in the repo is pretty awful with zero abstraction of duplicated render pipeline building and ai slop comments all over the place like “last resort do this”. Do not use this for production code. Instead, prompt the ai yourself and use your own slop.
The performance here is also terrible given it is gpu based. A gpu based renderer done correctly should be able to hit 50-100m blocks/lines etc at 60fps zoom/panning.
It is a testament to how good AI is though, and to the power of the Dunning-Kruger effect.
Comment by popalchemist 2 days ago
Comment by keepamovin 3 days ago
I hope you have a way to monetize/productize this, because this has three.js potential. I love this. Keep goin! And make it safe (a way to fund, don't overextend via OSS). Good luck, bud.
Also, you are a master of naming. ChartGPU is a great name, lol!
Comment by huntergemmer 3 days ago
Interesting you mention three.js - there's definitely overlap in the WebGPU graphics space. My focus is specifically on 2D data visualization (time series, financial charts, dashboards), but I could see the rendering patterns being useful elsewhere.
On sustainability - still figuring that out. For now it's a passion project, but I've thought about a "pro" tier for enterprise features (real-time collaboration, premium chart types) while keeping the core MIT forever. Open to ideas if you have thoughts.
Appreciate the kind words! :)
Comment by PxldLtd 2 days ago
Off the top of my head, look into Order Book Heatmaps, 3D Volatility Surfaces, Footprint Charts/Volatility deltas. Integrating drawing tools like Fibonacci Retracements, Gann Fans etc. It would make it very attractive to people willing to pay.
Comment by huntergemmer 2 days ago
This comment was buried yesterday. I'm sorry for the late response!
I was thinking about a pro tier for this kind of specialized stuff. Core stays MIT forever, but fintech tooling could be paid.
Of the chart types you listed, is there a preference for what gets done first?
Order Book Heatmaps first?
Comment by PxldLtd 1 day ago
Competitors typically have to snapshot/aggregate because their graphing libraries are heavily CPU-bound. Being able to visualise level 2/3 data without downsampling is a big win. Also being able to smoothly roll back through the last 12hrs of tick-level history would be really neat too.
I'd say the bare minimum feature set outside of that is going to be:
- Non linear X axis for gaps/sessions
- Crosshairs that snap to OHLC data
- Logarithmic scales, Candlesticks, Heikin-Ashi, and Volume profiles
- Getting the 'feel' nice so that you can quickly scale and drag (people are real sticklers for the feel of these tools)
- Solid callbacks for events for exchange integration; people hate leaving their charts to place an order (e.g. onOrderModify etc)
- Provide a nice websocket data ingestion pipeline
- Provide an api so devs can integrate their own indicators, some sort of 'layers' API or something.
Sorry if I can't be of more help as I'm just a hobbyist in this area!
Comment by keepamovin 1 day ago
Comment by lelanthran 2 days ago
Comment by maximgeorge 2 days ago
Comment by ycombadmin3 2 days ago
Comment by acedTrex 2 days ago
Comment by logicallee 2 days ago
Comment by facontidavide 2 days ago
The fact that the code was generated by a human or a machine is less and less important.
Comment by acedTrex 2 days ago
Comment by embedding-shape 2 days ago
Comment by stephenhumphrey 2 days ago
Comment by embedding-shape 2 days ago
Comment by acedTrex 2 days ago
Comment by embedding-shape 2 days ago
Comment by acedTrex 2 days ago
Comment by embedding-shape 2 days ago
Comment by keepamovin 2 days ago
Sorry if this is weird, it's just I've never personally experienced a comment with anything more than 100 - 200 points. And that was RARE. I totally get if you don't want to...but like, what were your "kilopoint" comments, or thereabouts? </offtopic>
Comment by embedding-shape 2 days ago
So, apparently these are my five most upvoted comments (based on going through the first 100 pages of my own comments...):
- 238 - https://news.ycombinator.com/item?id=46574664 - Story: Don't fall into the anti-AI hype
- 127 - https://news.ycombinator.com/item?id=46114263 - Story: Mozilla's latest quagmire
- 92 - https://news.ycombinator.com/item?id=45900337 - Story: Yt-dlp: External JavaScript runtime now required f...
- 78 - https://news.ycombinator.com/item?id=46056395 - Story: I don't care how well your "AI" works
- 73 - https://news.ycombinator.com/item?id=46635212 - Story: The Palantir app helping ICE raids in Minneapolis
I think if you too retire, have nothing specific to do for some months, and have too much free time to discuss with strangers on the internet about a wide range of topics you ideally have strong opinions about, you too can get more HN karma than you know what to do with :)
Comment by buckle8017 2 days ago
The idea that GPU vendors are going to care about memory access violations over raw performance is absurd.
Comment by the__alchemist 2 days ago
Comment by buckle8017 2 days ago
What is wrong with you JavaScript bros.
Comment by the__alchemist 2 days ago