OpenAI might end up on the right side of history
Posted by shoman3003 2 days ago
note: I am in MENA and not affiliated with the military in any way.
when i first read the statement by Dario, i was shocked that the military was so dismissive of Ai safety (not to mention privacy). Seeing anthropic resist the military, i felt so proud of being a claude user that i deleted gpt right away. it's nice to see your fav products sync with your values.
but today, after thinking more about it, i realized something. if a government lets one Ai company dictate terms, it sets a precedent for Ai companies to resist governmental oversight in the future. that might not be a big deal in the 2020s, but by the 2030s, by most estimates, many Ai companies will be big enough to resist entire governmental structures. Maybe not the US or China, but they will definitely be big enough not to be easily influenced.
those independent companies will eventually grow so large that no government can hope to tame them. i know that right now it seems impossible for a mere C-corp valued at less than a trillion to resist a government that spends 7 trillion each year. but zooming out, it feels likely that the next generation of Ai companies will easily be valued at 10T. if you look at a 2-year-old who just learned how to talk and suddenly starts talking quantum, you can bet your a* he will grow up to be a powerhouse.
i know soft monetary power is very different from hard military power, but enough tokens of the first type can easily be converted into the second if: 1. you have a sufficiently ambitious CEO, and 2. the survival of the company is threatened in some way. I am not talking about AGI here, but good old private equity that does whatever it needs to survive, ruled by suits with more loyalty to shareholders than to anyone or anything else.
at the end of the day, corporations are ruled by dictators (they have to be), governments are not (not in the West at least). maybe just maybe we should NOT trust private equity to seek anything but profits. governments are manipulative and bloody, but at least we can vote.
Comments
Comment by lgl 1 day ago
I'm not sure where this conclusion is coming from. We're very likely already in an AI bubble, so I'm thinking open/free models will eventually dilute the ridiculous valuations these companies have. Also, the natural increase in consumer hardware power will eventually let many people just use local models instead, for both privacy and cost reasons.
And seeing as most models are essentially only improved versions of the previous ones with larger context and more training data, unless some new "Attention Is All You Need" paper comes out that will give us a big step into AGI territory, I'm really not seeing a new company reach $10T valuation by just releasing marginally better models every couple of months imho.
Comment by watwut 2 days ago
A company being allowed to NOT do business with the government somehow makes oversight impossible? Make it make sense.
The USA is already basically controlled by oligarchs. The road there did not go through companies refusing business.
Comment by vessenes 2 days ago
A couple things to put out there - first, the US has a fairly strong rule of law that the government cannot compel speech -- essentially, while speech can be blocked/stopped, it's a hard rule of the republic that we cannot force certain speech. This is the legal theory behind canary statements, by the way -- publish the statement "I have not been forced to remove any user from this system by a secret court," and when it's no longer true, remove the statement.
This speech concept extends to, say, software - a company can refuse to create software or tooling or what have you, if it chooses. But what if a company has something deemed to be in the national security interest and does not wish to use it on behalf of the country? Traditionally both soft and hard power get applied. Soft: conversations, hearts and minds, perhaps threats, aimed at getting a company on board with the national goal.
Hard: Nationalization. The US has typically reserved nationalization for bailout / reworking pernicious economic incentives, but we have had some wartime nationalizations in the past -- Google tells me Western Telegraph and Smith and Wesson -- and Truman nationalized like everything basically whenever he wanted before and during the Korean War.
Nationalizing a valuable company like Anthropic which is research dominated is risky. You can't force research scientists to work; you could almost certainly find people to keep operating the inference. So you may get something today, and trigger a legendary set of Supreme Court cases, but you have no guarantee the goose will keep laying its golden eggs once Sec. Hegseth is in charge. I would guess this is going to be a very, very last resort even for the most aggressive of governments when there are credible alternatives in the economy. Under those terms, economics / market forces can do a lot of the work.
Upshot - I predict this is Sturm und Drang, and we'll see Anthropic figure out how to keep its gov contracts while oAI continues to work its way in to more government work simultaneously.