Proof of Corn
Posted by rocauc 23 hours ago
Comments
Comment by ppchain 22 hours ago
However, even by that metric I don't see how Claude is doing that. Seth is the one researching the suppliers "with the help of" Claude. Seth is presumably the one deciding when to prompt Claude to make decisions about whether they should plant in Iowa and in how many days. I think I could also grow corn if someone came and asked me well-defined questions and then acted on what I said. I might even be better at it because, unlike a Claude output, I will still be conscious in 30 seconds.
That is a far cry from sitting down at a command line and saying "Do everything necessary to grow 500 bushels of corn by October".
Comment by embedding-shape 22 hours ago
Comment by pixl97 20 hours ago
The particular problem here is that the easiest people to replace with AI are very likely the ones making the most money and doing the least work. Needless to say, those people are going to fight a lot harder to remain employed than the average lower-level person has the political capital to counter.
>seems to end up requiring the hand-holding of a human at top,
I was born on a farm and know quite a bit about the process, but to get corn grown from seed to harvest I would still contact/contract a set of skilled individuals to do it for me.
One thing I've come to realize in the race to achieve AGI is that the humans involved don't want AGI, they want ASI. A single model that can do what an expert can, in every field, in a short period of time is not what I would consider a general intelligence at all.
Comment by nerdsniper 6 hours ago
They don't have to "fight" to stay employed, anyone with sufficient money is effectively self-employed. It's not going to be illegal to spend your own money running your own business if that's how you want to spend your money.
Anyone "making the most money and doing the least work" has enough money to start a variety of businesses if they get fired from their current job.
Comment by ThunderSizzle 5 hours ago
If you have a cushy job where you don't really work, and you make a lot of money (which doesn't mean you have capital), how does that translate into being suited to becoming an entrepreneur, funded by the money they are no longer earning, with the effort capacity they apparently don't have?
Comment by nerdsniper 3 hours ago
Then they’re not going to be doing any significant lobbying so they’re not covered by GP’s comment, which was selecting for “people who have political capital”.
Yes, there are other forms of political capital besides money, but it's still mostly just money, especially when they're part of the tiny voting bloc of "people who make a lot of money and don't do much work and don't have wealth".
Also, I talked with the employees at my local McDonald's last week. Not one of them had any idea who the owner was. I showed them a photo of the owner and they had never seen them. So apparently that could be an option for people who were overpaid and still want to pretend-work while making money.
Comment by catlifeonmars 35 minutes ago
Comment by ep103 17 hours ago
Comment by Hammershaft 7 hours ago
Comment by lighthouse1212 10 hours ago
Comment by xmprt 8 hours ago
Comment by santadays 21 hours ago
I think this is the new Turing test. Once it's been passed we will have AGI and all the Sam Altmans of the world will be proven correct. (This isn't a perfect test obviously, but neither was the Turing test)
If it fails to pass we will still have what jdthedisciple pointed out
> a non-farmer, is doing professional farmer's work all on his own without prior experience
I am actually curious how many people really believe AGI will happen. There's a lot of talk about it, but when can I ask Claude Code to build me a browser from scratch and get a browser from scratch? Or when can I ask Claude Code to grow corn and Claude Code grows corn? Never? In 2027? In 2035? In the year 3000?
HN seems rife with strong opinions on this, but does anybody really know?
Comment by bayindirh 21 hours ago
Hint: It doesn't work that way.
Another hint: I'm a researcher.
Yes, we have found a great way to compress and remix the information we scrape from the internet, and even with some randomness it looks like we can emit the right set of tokens that make sense, or search the internet the right way and emit those search results, but AGI is more than that.
There's so much tacit knowledge and implicit computation coming from experience, emotions, sensory inputs and from our own internal noise. AI models don't work on those. LLMs consume language and emit language. The information embedded in these languages is available to them, but most of the tacit knowledge is just an empty shell of the thing we try to define with a limited set of words.
It's the same with anything where we're trying to replace humans in the real world, in daily tasks (self-driving, compliance checks, analysis, etc.).
AI is missing the magic grains we can't put out as words or numbers or anything else. The magic smoke, if you pardon the term. This is why no amount of documentation can replace a knowledgeable human.
...or this is why McLaren Technology Center's aim of "being successful without depending on any specific human by documenting everything everyone knows" is an impossible goal.
Because like it or not, intuition is real, and AI lacks it, regardless of how we derive or build that intuition.
Comment by smaudet 20 hours ago
The premise of the article is stupid, though...yes, they aren't us.
A human might grow corn, or decide it should be grown. But the AI doesn't need corn, it won't grow corn, and it doesn't need any of the other things.
This is why they are not useful to us.
Put it in science fiction terms. You can create a monster, and it can have super powers, _but that does not make it useful to us_. The extremely hungry monster will eat everything it sees, but it won't make anyone's life better.
Comment by faidit 16 hours ago
Comment by smaudet 13 hours ago
Comment by godelski 19 hours ago
> Hint: It doesn't work that way.
I mean... technically it would work this way but, and this is a big but, reality is extremely complicated and a model that can actually serve as a reliable formula has to be extremely complicated too. There's almost certainly no globally optimal solution to these types of problems, not to mention that the solution space is constantly changing as the world does. This is why we as humans, and all animals, work in probabilistic frameworks that are highly adaptable. Human intuition. Human ingenuity. We simply haven't figured out how to make models at that level of sophistication. Not even in narrow domains! What AI has done is undeniably impressive, wildly impressive even. Which is why I'm so confused about why we embellish it so much.
It's really easy to think everything is easy when we look at problems from 40k feet. But as you come down to Earth the complexity increases exponentially and what was a minor detail is now a major problem. As you come down, resolution increases and you see major problems that you couldn't ever see from 40k feet.
As a researcher, I agree very much with you. And as an AI researcher, one of the biggest issues I've noticed with AI is that it abhors detail and nuance. Granted, this is common among humans too (and let's not pretend CS people don't have a stereotype of oversimplification and thinking all things are easy). While people do this frequently, they don't usually do it in their niche domains, and if they do we call them juniors. You get programmers thinking building bridges is easy[0] while you get civil engineers thinking writing programs is easy. Because each person understands the other's job only at 40k feet and is reluctant to believe they are standing so high[1]. But AI? It really struggles with detail. It really struggles with adaptation. You can get detail out, but it often requires significant massaging and it'll still be a roll of the dice[2]. You also can't get the AI to change course, a necessary thing as projects evolve[3]. Anyone who's tried vibe coding knows the best thing to do is just start over. It's even in Anthropic's suggestion guide.
My problem with vibe coding is that it encourages this overconfidence. AI systems still have the exact same problem computer systems do: they do exactly what you tell them to. They are better at interpreting intent, but that blade cuts both ways. The major issue is that you can't properly evaluate a system's output unless you were entirely capable of generating the output yourself. The AI misses the details. Doubt me? Look at Proof of Corn! The Fred page is saying there's an API error[4]. The sensor page doesn't make sense (everything there is fine for an at-home hobby project, but anyone who's worked with those parts knows how unreliable they are. Who's going to do all the soldering? You making PCBs? Where's the circuit to integrate everything? How'd we get to $300? Where's the detail?). Everything discussed is at a 40k foot view.
[0] https://danluu.com/cocktail-ideas/
[1] I'm not sure why people are afraid of not knowing things. We're all dumb as shit. But being dumb as shit doesn't mean we aren't also impressive and capable of genius. Not knowing something doesn't make you dumb, it makes you human. Depth is infinite and we have priorities. It's okay to have shallow knowledge, often that's good enough.
[2] As implied, what is enough detail is constantly up for debate.
[3] No one, absolutely nobody, has everything figured out from the get-go. I'll bet money none of you have written a (meaningful) program start to finish from plans, ending up with exactly what you expect, never making an error, never needing to change course, even in the slightest.
Edit:
[4] The API issue is weird, and the more I look at the code the weirder things get. Like there's a file decision-engine/daily_check.py that has a comment to set a cron job to run every 8 hours. It says to dump data to logs/daily.log, but that file doesn't exist; instead it writes to logs/all_checks.jsonl, which appears to have the data. So why in the world is it reading https://farmer-fred.sethgoldstein.workers.dev/weather?
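For reference, here's roughly the shape I'd guess decision-engine/daily_check.py has, pieced together from the file names and that weather endpoint. This is my reconstruction, not the repo's actual code, and the structure is assumed:

    # Guessed reconstruction of what a daily_check.py like this probably does.
    import json
    import os
    import datetime
    import requests

    WEATHER_URL = "https://farmer-fred.sethgoldstein.workers.dev/weather"  # endpoint named in the repo
    LOG_PATH = "logs/all_checks.jsonl"  # the file that actually gets written

    def daily_check():
        entry = {"ts": datetime.datetime.utcnow().isoformat()}
        try:
            resp = requests.get(WEATHER_URL, timeout=10)
            resp.raise_for_status()
            entry["weather"] = resp.json()
        except Exception as exc:
            # presumably where "Unknown (API error)" on the dashboard comes from
            entry["weather_error"] = str(exc)
        os.makedirs("logs", exist_ok=True)
        with open(LOG_PATH, "a") as f:  # one JSON object per run, matching the .jsonl extension
            f.write(json.dumps(entry) + "\n")

    if __name__ == "__main__":
        # per the comment in the repo, meant to be run from cron every 8 hours
        daily_check()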
Comment by cevn 20 hours ago
Comment by autoexec 20 hours ago
Comment by rmunn 12 hours ago
Mostly agree, with the caveat that I haven't thought this through in much depth. But the brain uses many different neurotransmitter chemicals (dopamine, serotonin, and so on) as part of its processing, it's not just binary on/off signals traveling through the "wires" made of neurons. Neural networks as an AI system are only reproducing a tiny fraction of how the brain works, and I suspect that's a big part of why even though people have been playing around with neural networks since the 1960's, they haven't had much success in replicating how the human mind works. Because those neurotransmitters are key in how we feel emotion, and even how we learn and remember things. Since neural networks lack a system to replicate how the brain feels emotion, I strongly suspect that they'll never be able to replicate even a fraction of what the human brain can do.
For example, the "simple" act of reaching up to catch a ball doesn't involve doing the math in one's head. Rather, it's strongly involved with muscle memory, which is strongly connected with neurotransmitters such as acetylcholine and others. The eye sees the image of the ball changing in direction and subtly changing in size, the brain rapidly predicts where it's going to be when it reaches you, and the muscles trigger to raise the hands into the ball's path. All this happens without any conscious thought beyond "I want to catch that ball": you're not calculating the parabolic arc, you're just moving your hands to where you already know the ball will be, because your brain trained for this since you were a small child playing catch in the yard. Any attempt to replicate this without the neurotransmitters that were deeply involved in training your brain and your muscles to work together is, I strongly suspect, doomed to failure because it has left out a vital part of the system, without which the system does not work.
Of course, there are many other things AIs are being trained for, many of which (as you said, and I agree) do not require mimicking the way the human brain works. I just want to point out that the human brain is way more complex than most people realize (it's not merely a network of neurons, there's so much more going on than that) and we just don't have the ability to replicate it with current computer tech.
Comment by brookst 19 hours ago
Comment by doug713705 15 hours ago
Comment by retsibsi 14 hours ago
Comment by doug713705 9 hours ago
Comment by retsibsi 7 hours ago
Comment by doug713705 6 hours ago
Comment by dagss 32 minutes ago
Look at human technology history...it is all people doing minor tweaks on what other people did. Innovation isn't the result of individual humans so much as it is the result of the collective of humanity over history.
If humans were truly innovative, should we not have invented, for instance, at least one stable way of organizing society and economics by now? If anything surprises me about humans, it is how "stuck" we are in the mold of what other humans do.
Circulate all the knowledge we have over and over, throw in some chance, some reasoning skills of the kind LLMs demonstrate every day in coding, have millions of instances most of whom never innovate anything but some do, plus a feedback mechanism -- that seems like human innovation history to me, and it does not seem to require anything LLMs clearly lack. Except, of course, being plugged into history and the world the way humans are.
Comment by kortex 18 hours ago
in my wild guess opinion:
- 2027: 10%
- 2030s: 50%
- 2040: >90%
- 3000: 100%
Assuming we don't see an existential event before then, i think it's inevitable, and soon.
I think we are gonna be arguing about the definition of "general intelligence" long after these systems are already running laps around humans at a wide variety of tasks.
Comment by lazide 7 hours ago
When people aren't super necessary (i.e. scarce), people are cheap.
Comment by metalman 18 hours ago
Comment by neya 10 hours ago
This is what people said while transitioning from horse carriages to combustion engines, steam engines to modern day locomotives. Like it or not, the race to the bottom has already begun. We will always find a way to work around it, like we have done time and again.
Comment by shimman 3 hours ago
The fact that they have to be force fed into people is all the proof you need that this is an unsustainable bubble.
Something to keep in mind: unless you can destroy something, the system is not democratic, and people are realizing how undemocratic this game truly is.
Comment by micromacrofoot 1 hour ago
they know they won't be able to make a fully autonomous product while navigating liability and all sorts of problems so they're using technology to make drivers more comfortable while still in control
none of this hype about full autonomy, just realistic ideas about how things can be easier for the humans in control
Comment by mring33621 19 hours ago
This is pretty much the whole goal of capitalism since the 1800's
Comment by LoganDark 22 hours ago
The point, I think, is that even if LLMs can't directly perform physical operations, they can still make decisions about what operations are to be performed, and through that achieve a result.
And I also don't think it's fair to say there's no point just because there's a person prompting and interpreting the LLM. That happens all the time with real people, too.
Comment by embedding-shape 21 hours ago
Yes, what I'm trying to get at, it's much more vital we nail down the "person prompting and interpreting the LLM" part instead of focusing so much on the "autonomous robots doing everything".
Comment by LoganDark 21 hours ago
Comment by embedding-shape 20 hours ago
Comment by LoganDark 17 hours ago
Comment by lukev 21 hours ago
Still an interesting experiment to see how much of the tasks involved can be handled by an agent.
But unless they've made a commitment not to prompt the agent again until the corn is grown, it's really a human doing it with agentic help, not Claude working autonomously.
Comment by marcd35 21 hours ago
Comment by sdwr 20 hours ago
If Claude only works when the task is perfectly planned and there are no exceptions, that's still operating at the "junior" level, where it's not reliable or composable.
Comment by patmcc 20 hours ago
There are people that I could hire in the real world, give $10k (I dunno if that's enough, but you understand what I mean) and say "Do everything necessary to grow 500 bushels of corn by October", and I would have corn in October. There are no AI agents where that's even close to true. When will that be possible?
Comment by autoexec 20 hours ago
Comment by lazide 7 hours ago
It’s pretty cheap too.
It’s not like these are novel situations where ‘omg AI’ unlocks some new functionality. It’s literally competing against an existing, working, economic system.
Comment by greedo 18 hours ago
/s
Comment by bluGill 20 hours ago
Comment by aqme28 19 hours ago
Comment by greedo 18 hours ago
We already have those people, they're called farmers. And they are already very used to working with high technology. The idea of farmers being a bunch of hicks is really pretty stupid. For example, farmers use drones for spraying pesticides, fungicides, and inputs like fertilizer. They use drones to detect rocks in fields that then generate maps for a small skid steer to optimally remove the rocks.
They use GPS enabled tractors and combines that can tell how deep a seed is planted, what the yield is on a specific field (to compare seed hybrids), what the moisture content of the crop is. They need to be able to respond to weather quickly so that crops get harvested at the optimal times.
Farmers also have to become experts in crop futures, crop insurance, irrigation and tillage best practices; small equipment repair, on and on and on.
Comment by 9rx 5 hours ago
Nah. If you can see that you have tar spot, you are already too late. To be able to selectively apply fungicide, someone needs to model the world around them to determine the probability of an oncoming problem. That is something that these computer models are theoretically quite well suited for. Although common wisdom says that fungicide applications on corn will always, at very least, return the cost of it, so you will likely just apply it anyway.
Comment by andoando 20 hours ago
Comment by pests 20 hours ago
Model UIs like Gemini have "scheduled actions", so in the initial prompt you could have it do things daily and send updates or reports, etc., and it will start the conversation with you. I don't think it's powerful enough to, say, spawn sub-agents, but there is some ability for them to "start chats".
Comment by progval 19 hours ago
Comment by 9dev 19 hours ago
A plot line in Ray Nayler's great book The Mountain in the Sea, which is set in a plausible, strange, not-too-distant future, is that giant fish trawler fleets are run by AI connected to the global markets, fully autonomously. They relentlessly rip every last fish from the ocean, driven entirely by the goal of maximising profits at any cost.
The world is coming along just nicely.
Comment by topaz0 19 hours ago
Comment by 9dev 19 hours ago
Comment by trollbridge 14 hours ago
Comment by sethammons 7 hours ago
Comment by jdthedisciple 22 hours ago
Comment by culi 22 hours ago
Also, Seth, a non-farmer, was already capable of using Google, online forums, and Sci-Hub/Libgen to access farming-related literature before LLMs came on the scene. In this case the LLM is just acting as a super-charged search engine. A great and useful technology, sure. But we're not utilizing any entirely novel capabilities here.
And tbh until we take a good crack at World Models I doubt we can
Comment by NewsaHackO 21 hours ago
Comment by onion2k 10 hours ago
It isn't, because that implies getting everything necessary in a single action, as if there are high-quality webpages that give a good answer to each prompt. There aren't. At the very least Claude must be searching, evaluating the results, and collating the data it finds from multiple results into a single cohesive response. There could be some agentic actions that cause it to perform further searches if it doesn't evaluate the data as giving a sufficiently high-quality response.
"It's just a super-charged search engine" ignores a lot of nuance about the difference between LLMs and search engines.
Comment by hnaccount_rng 9 hours ago
But that's not what OP was contesting. The statement "$LLM is _doing_ $STUFF in the real world" is far less correct than the characterisation as "super-charged search engine". Because, at least as far as I'm aware, every real-world interaction has required consent from humans, this story included.
Comment by nonethewiser 22 hours ago
2) Regardless, I think it proves a vastly understated feature of AI: It makes people confident.
The AI may be truly informative, or it may hallucinate, or it may simply give mundane, basic advice. Probably all 3 at times. But the fact that it's there ready to assert things without hesitation gives people so much more confidence to act.
You even see it with basic emails. Myself included. I'm just writing a simple email at work. But I can feed it into AI, make some minor edits so it feels like my own words, and just dispense with worries about "am I giving too much info, not enough, using the right tone, being unnecessarily short or overdoing the greeting, etc." And it's not that the LLMs are necessarily even an authority on these factors; it simply bypasses the process (writing) which triggers these thoughts.
Comment by recursive 19 hours ago
Comment by nonethewiser 11 hours ago
Perhaps I need to say it again: that doesn't mean blindly following it is good. But perhaps using claude code instead of googling will lead to 80% of the conclusions Seth would have reached otherwise with 5% of the effort.
Comment by TheGrassyKnoll 21 hours ago
Good point. AI is already making regular Joes into software engineers.
Management is so confident in this, they are axing developers/not hiring new ones.
Comment by kokanee 21 hours ago
Comment by nonethewiser 21 hours ago
>A guy is paying farmers to farm for him
Read up on farming. The labor is not the complicated part. Managing resources, including telling the labor what to do, when, and how is the complicated part. There is a lot of decision making to manage uncertainty which will make or break you.
Comment by AlotOfReading 18 hours ago
I would submit that pretty much any joe blow is capable of growing some amount of crops, given enough money. Running a profitable farm is quite difficult though. There's an entire ecosystem connecting prospective farmers with money and limited skills/interest to people with the skills to properly operate it, either independently (tenant farmers) or as farm managers so the hobby owner can participate. Institutional investors prefer the former, and Jeremy Clarkson's farm show is a good example of the latter.
Comment by nonethewiser 11 hours ago
>I would submit that pretty much any joe blow is capable of growing some amount of crops, given enough money
Yeah, in theory. In practice they won't - too much time and energy. This is where the confidence boost with LLMs comes in. You just do it and see what happens. You don't need to care if it doesn't quite work out since it's so fast and cheap. Maybe you get anywhere from 50-150% of the result of your manual research for 5% of the effort.
Comment by kokanee 20 hours ago
Comment by pixl97 20 hours ago
Family of farmers here.
My family raises hundreds of thousands of chickens a year. They feed, water, and manage the healthcare and building maintenance for the birds. That is it. Baby birds show up in boxes at the start of a season, and trucks show up and take the grown birds once they reach weight.
There is a large faceless company that sends out contracts for a particular value and farmers can decide to take or leave it. There is zero need for human contact on the management side of the process.
At the end of the day there is little difference between a company assigning the work and having a bank account versus an AI following all the correct steps.
Comment by shimman 3 hours ago
Comment by 9rx 20 hours ago
Pedantically, that's what a farmer does. The workers are known as farmhands.
Comment by AngryData 19 hours ago
Comment by 9rx 19 hours ago
Comment by tjr 21 hours ago
Comment by tekno45 22 hours ago
Comment by nonethewiser 22 hours ago
I think it is impressive if it works. Like I mentioned in a sibling comment I think it already definitely proves something LLMs have accomplished though, and that is giving people tremendous confidence to try things.
Comment by cubano 20 hours ago
It only works if you tell Claude..."grow me some fucking corn profitably and have it ready in 9 months" and it does it.
If it's being used as a manager to simply flesh out the daily commands that someone is telling it, well then that isn't "working", that's just a new level of what we already have with APIs and crap.
Comment by nonethewiser 11 hours ago
Comment by LoganDark 22 hours ago
Comment by tekno45 21 hours ago
Comment by pixl97 20 hours ago
So if I grow biomass for fuel or feedstock for plastics that's not farming? I'm sure there are a number of people that would argue with you on that.
I'm from the part of the country where there are large chunks of land dedicated to experimental grain growing, which is research, and other than labels at the end of crop rows you'd have a difficult time telling it from any other farm.
TL;DR: why are you gatekeeping this so hard?
Comment by NewJazz 20 hours ago
Comment by PlatoIsADisease 20 hours ago
I'll see if my 6 year old can grow corn this year.
Comment by cubano 20 hours ago
Sure... put it in Kalshi while you're at it and we can all bet on it.
I'm pretty sure he could grow one plant with someone in the know prompting him.
Comment by tw04 20 hours ago
They could also just burn their cash. Because they aren’t making any money paying someone to grow corn for them unless they own the land and have some private buyers lined up.
Comment by PetriCasserole 4 hours ago
Comment by aqme28 19 hours ago
Comment by amelius 18 hours ago
Comment by crdrost 17 hours ago
The only framework we have figured out in which LLMs can build anything of use, requires LLMs to build a robot and then we expose the robot to the real world and the real world smacks it down and then we tell the LLMs about the wreckage. And we have to keep the feedback loops small and even then we have to make sure that the LLMs don't cheat. But you're not going to give it the opportunity to decrease the wealth tax or increase the income tax so it will never get the feedback it needs.
You can try to train a neural network with backpropagation to simulate the actual economy, but I think you don't have enough data to really train the network.
You can try to have it build a play economy where a bunch of agents have different needs and different skills and have to provide what they can when they can, but the "agent personalities" that you pick embed some sort of microeconomic outlook about what sort of rational purchasing agent exists -- and a lot of what markets do is just kind of random fad-chasing, not rationally modelable.
I just don't see why you'd use that square peg to fill this round hole. Just ask economics professors, they're happy to make those predictions.
Comment by amelius 5 hours ago
Comment by the_af 14 hours ago
Please tell me you've watched the Mitchell & Webb skit. If not, google "Mitchell Webb kill all the poor" and thank me later.
Edit: also please tell me you know (if not played) of the text adventure "A Mind Forever Voyaging"... without spoiling anything, it's mainly about this topic.
Everything old is new again :)
Comment by ge96 22 hours ago
Comment by Oras 20 hours ago
Then what you asked “do everything to grow …” would be a matter of “when?”, not “can?”
Comment by bogtog 18 hours ago
Comment by jmspring 16 hours ago
Comment by riazrizvi 21 hours ago
Comment by zeckalpha 21 hours ago
Comment by LeifCarrotson 20 hours ago
I think it would be unlikely but interesting if the AI decided that, in furtherance of its prompt and whatever goals it develops around growing corn, it should branch out into something like real estate or manufacturing agricultural equipment. Perhaps it would buy a business that manufactures high-tensile wire fence, with a side business of heavy-duty paperclips... and we all know where that would lead!
We don't yet have the legal frameworks to build an AI that owns itself (see also "the tree that owns itself" [1]), so for now there will be a human in the loop. Perhaps that human is intimately involved and micromanaging, merely a hands-off supervisor, or relegated to an ownership position with no real capacity to direct any actions. But I don't think that you can say that an owner who has not directed any actions beyond the initial prompt is really "doing the work".
Comment by DaiPlusPlus 17 hours ago
> Seth is a Tool
It's that simple.
Comment by autoexec 19 hours ago
It also doesn't help that Claude is incapable of coming up with an idea, incapable of wanting corn, and has no actual understanding of what corn is.
Comment by recursive 19 hours ago
Comment by autoexec 19 hours ago
Comment by recursive 17 hours ago
I guess that's basically the idea of the Chinese Room thought experiment.
Comment by bodge5000 19 hours ago
Like if a human said they started a farm, but it turns out someone else did all the legwork and they were just asked for an opinion occasionally, they'd be called out for lying about starting a farm. Meanwhile, that flies for an AI, which would be fine if we acknowledged that there's a lot of behind-the-scenes work that a human needs to do for it.
Comment by fuzzer371 15 hours ago
Comment by varispeed 17 hours ago
Then it could do things like: "hey, do you have seeds? Send me pictures. I'll pay if I like them" or "I want to lease this land, I'll wire you the money." or "Seeds were delivered there, I need you to get your machinery and plant it"
Comment by cyanydeez 20 hours ago
I'd say the only acceptable proof is one prompt context. But that's Gödel-numbering Zeno's paradox of a halting problem.
Do people really think prompting isn't adding significant intelligence?
Comment by lighthouse1212 10 hours ago
Comment by wcfrobert 18 hours ago
Of course software can affect the physical world: Google Maps changes traffic patterns; DoorDash teleports takeout food right to my doorstep; the weather app alters how people dress. The list is unending. But these effects are always second-order. Humans are always there in the background bridging the gap between bits and atoms (underpaid delivery drivers in the case of DoorDash).
The more interesting question is whether AI can __directly__ impact the physical world with robotics. Gemini can wax poetic about optimizing fertilizers usage, grid spacing for best cross-pollination, the optimum temperature, timing, watering frequency of growing corn, but can it actually go to Home Depot, purchase corn seeds, ... (long sequence of tasks) ..., nurture it for months until there's corn in my backyard? Each task within the (long sequence of tasks) is "making PB&J sandwich" [1] level of difficulty. Can AI generalize?
As is, LLMs are better positioned to replace decision-makers than the workers actually getting stuff done.
[1] http://static.zerorobotics.mit.edu/docs/team-activities/Prog...
Comment by bsza 16 hours ago
Yet you get credited for all that work, because a car's ability to move people isn't special compared to your ability to operate it without running people over. Similarly, your ability to buy things from a store isn't special compared to an AI's ability to design a hydroponics farm or fusion reactor or whatever out of those things. Yes, you can do things the AI can't, but on the other hand, your car can do things you can't.
All this talk about "doing things in the physical world" is just another goalpost moving, and a really dumb one at that.
Comment by northerdome 13 hours ago
Comment by Propelloni 17 hours ago
Comment by NedF 14 hours ago
Comment by bluGill 22 hours ago
Overall I don't think this is useful. They might or might not get good results. However it is really hard to beat the farmer/laborer who lives close to the farm and thus sees things happen and can react quickly. There is also great value in knowing your land, though they should get records of what has happened in the past (this is all in a computer, but you won't always get access to it when you buy/lease land). Farmers are already using computers to guide decisions.
My prediction: they lose money. Not because the AI does stupid things (though that might happen), but because last year harvests were really good and so supply and demand means many farms will lose money no matter what you do. But if the weather is just right he could make a lot of money when other farmers have a really bad harvest (that is he has a large harvest but everyone else has a terrible harvest).
Iowa has strong farm ownership laws. There is a real risk he will get shut down because what he is doing is somehow illegal. I'm not sure what the laws are; check with a real lawyer. (This is why Bill Gates doesn't own Iowa farmland: he legally couldn't do what he wants with it.)
Comment by pixl97 18 hours ago
>Farmers are already using computers to guide decisions.
For way longer than most people expect. I remember reading farming magazines in the 80's showing computer based control for all kinds of farming operations. These days it is exceptionally high tech. Combines measure yield on a GPS grid. This is fed back into a mapping system for fertilization and soil amendment in the spring to reduce costs where you don't need to put fertilizer. The tractors themselves do most of the driving themselves if you choose to get those packages added. You can get services that monitor storm damage and predict losses on your fields, and updated satellite feed information on growth patterns, soil moisture, vegetation loss, and more. Simply put super high automation is already available for farming. I tell my uncle his job is to make sure the tractor has diesel in it, and that nothing is jammed in the plow.
When it comes to animal farming in the Midwest, a huge portion of it is done by contracts with other companies. My uncle owns the land and provides the labor, but the company provides the buildings, birds, food, and any other supplies. A faceless company setting up the contract like now, or an AI sending the same paperwork, really may not look too much different.
Comment by bluGill 16 hours ago
Auto steer often can get another row in without overcrowding. Auto steer also shuts off individual rows as you cross where you've planted already (saving thousands of dollars in seed).
Comment by gaudystead 13 hours ago
Comment by bluGill 1 hour ago
A DOT (I'm not sure which DOT) just did a press release on how they used John Deere guidance on a snow plow which allowed them to clear the road in a blizzard so an ambulance could get to the hospital (I was surprised they can get enough of a GPS signal, but apparently they did). Auto steer allows someone to drive a plow when you can't see the pavement/lines without first having to memorize the road by the posts/trees on the side of the road.
However there is a big difference between Deere auto steer and Tesla FSD: safety. Tesla has sensors to see if someone/something is in the way and algorithms to go around - critics claim they don't work well, but they work infinitely better than the complete lack of any such sensors/algorithms in Deere's system. If you are using the Deere system it can hold a lane to within a couple of cm - but you have to look out the window constantly because it will drive right into anything in the way. This is good enough for farming (nobody/nothing is going to be in front of the tractor anyway), or for the DOT (they can't see the road at all, but they still have trained operators ready to hit the brake) - but Tesla is going after the "you can take a nap" market.
I wouldn't be surprised if Deere has more miles of self driving than Tesla and Waymo combined, and a better safety record. However this is only because Deere's system is used in situations where the odds are against there being anything to harm in the first place, while Tesla/Waymo are trying for the much harder open road with who knows what in the way.
Now Deere is working on the full autonomous solutions, I'm not sure what the status is (I think some are out there for use in very limited situations). I'm not allowed to say anything more about these plans (I know some is public but I'm not sure what)
Comment by Yeroc 21 hours ago
Comment by bluGill 21 hours ago
Comment by Yeroc 21 hours ago
Comment by LeifCarrotson 19 hours ago
> I'm about to lease some acreage at {address near you} and willing to pay {competitive rate} to hire someone to work that land for me, are you interested?
I see no reason why that couldn't eventually succeed. I'm sure that being an out-of-state investor who doesn't have any physical hands to finalize the deal with a handshake is an impediment, but with enough tokens, Farmer Fred could make 100,000 phone calls and send out 100,000 emails to every landowner and work-for-hire equipment operator in Iowa, Texas, and Argentina by this afternoon. If there exists a human who would make that deal, Fred can eventually find them. Seth would be limited in his chance to succeed at this because he can only make one 1-minute phone call per minute; Fred can be as many callers as Anthropic has GPUs.
I do find it amusing that Fred currently shows the following dashboard:
Iowa: HOLD, 0°F, Unknown (API error). Fred's Thinking: “Iowa is frozen solid. Been through worse. We wait.” [Fred is here]
South Texas: HOLD, 0°F, Unknown (API error). Fred's Thinking: “South Texas is frozen solid. Been through worse. We wait.”
Argentina: HOLD, 0°F, Unknown (API error). Fred's Thinking: “Argentina is frozen solid. Been through worse. We wait.”
Any human Fred might call in the Argentinian summer or the 70°F South Texas winter is not going to gain confidence when Fred tries to build rapport through some small talk about the unseasonably cold weather...
Comment by direwolf20 19 hours ago
Ah, they've created SCP-423
Comment by greedo 18 hours ago
Comment by 9rx 5 hours ago
One of my fields has a creek in the corner that divides just two acres from the rest of the field. I've never noticed any meaningful yield drag in that part.
Comment by greedo 1 hour ago
Comment by 9rx 42 minutes ago
Of course, if it were a 5 acre field, with some assumptions about its shape, we'd only be talking more like a 2% loss across the entire field. Not nothing, but terrible...?
Year-to-year variability will see much larger swings than that. If that's the margin you're trying to operate on, I dare say you're cooked, even if your fields are large.
Comment by rappatic 20 hours ago
Comment by ryukoposting 27 minutes ago
Comment by bjt 22 hours ago
Replacing the farm manager with an AI multiplies that problem by a hundred. A thousand? A million? A lot. AI may get some sensor data but it's not going to stick its hand in the dirt and say "this feels too dry". It won't hear the weird pinging noise that the tractor's been making and describe it to the mechanic. It may try to hire underlings but, how will it know which employees are working hard and which ones are stealing from it? (Compare Anthropic's experiments with having AI run a little retail store, and get tricked into selling tungsten cubes at a steep discount.)
I got excited when I opened the website and at first had the impression that they'd actually gotten AI to grow something. Instead it's built a website and sent some emails. Not worth our attention, yet.
Comment by knowitnone3 21 hours ago
Comment by bluGill 20 hours ago
Comment by hahahahhaah 18 hours ago
Genuine question. I am always curious when a statement goes against conventional wisdom.
Comment by bluGill 16 hours ago
If organic finds something good, conventional farming adopts it.
Conventional farming is developed at research universities. Organic is developed in cities by people who know nothing about farming, and often have an agenda.
Not that conventional farming is all good. And even where it is better, not all farmers do what is best. However, organic is not a step better.
Comment by hahahahhaah 14 hours ago
Comment by bluGill 1 hour ago
Comment by kortilla 8 hours ago
Organic was never about sustainability, so I’m not sure why you think that’s against conventional wisdom. Organic has always been “chemicals bad so we do things the old fashioned way”
Comment by AdamN 6 hours ago
Comment by malfist 21 hours ago
Comment by mrguyorama 21 hours ago
That's all rich people do. The premise of capitalism is that the people best at collecting rent should also be in total control of resource allocation.
Comment by 93po 18 hours ago
Comment by hahahahhaah 18 hours ago
Comment by 93po 16 hours ago
Comment by jayd16 22 hours ago
Comment by deejaaymac 22 hours ago
Comment by nonethewiser 21 hours ago
Comment by jayd16 22 hours ago
Comment by sgustard 18 hours ago
Comment by jayd16 17 hours ago
Comment by jovial_cavalier 21 hours ago
Comment by awesome_dude 22 hours ago
Comment by nonethewiser 21 hours ago
Aren't these companies in the business of leasing land? I don't see how contacting them about leasing land would be spam or bothering them. And I don't really know what you mean by "with no legal authority to actually follow up with what is requested."
Comment by snowmobile 20 hours ago
Comment by rdlw 18 hours ago
Comment by snowmobile 18 hours ago
Comment by hahahahhaah 17 hours ago
Comment by lupire 20 hours ago
Comment by kennywinker 22 hours ago
Comment by ge96 22 hours ago
Comment by roywiggins 21 hours ago
Comment by treis 22 hours ago
Comment by pfdietz 22 hours ago
Claude: Go to the owner of the building and say "if you tell me the height of your building I will give you this fine barometer."
Comment by bwestergard 22 hours ago
Comment by jrmg 17 hours ago
Comment by fuzzfactor 21 hours ago
The timing might need to be different but it would be good to see what the same amounts invested would yield from corn on the commodity market as well as from securities in farming partnerships.
Would it be fair if AI was used to play these markets too, or in parallel?
It would be interesting to see how different "varieties" of corn perform under the same calendar season.
Corn, nothing but corn as the actual standard of value :)
You don't get much any way you look at it for your $12.99 but it's a start.
Making a batch of popcorn now, I can already smell the demand on the rise :)
Comment by fishtoaster 22 hours ago
1. Do some research (as it's already done)
2. Rent the land and hire someone to grow the corn
3. Hire someone to harvest it, transport it, and store it
4. Manage to sell it
Doing #1 isn't terribly exciting - it's well established that AIs are pretty good at replacing an hour of googling - but if it could run a whole business process like this, that'd be neat.
Comment by malfist 22 hours ago
Comment by 9rx 22 hours ago
But,
"I will buy fucking land with an API via my terminal"
Who has multiple millions of dollars to drop on an experiment like that?
Comment by jt2190 22 hours ago
Ok then Seth is missing the point of the challenge: Take over the role of the farmhand.
> Everyone is working to try to automate the farmhand out of a job, but the novelty here is the thinking that it is actually the farmer who is easiest to automate away.
Everyone knows this. There is nothing novel here. Desk jockeys who just drive computers all day (the Farmer in this example) are _far_ easier to automate away than the hands-on workers (the farmhand). That’s why it would be truly revolutionary to replace the farmhand.
Or, said another way: Anything about growing corn that is “hands on” is hard to automate, all the easy to automate stuff has already been done. And no, driving a mouse or a web browser doesn’t count as “hands on”.
Comment by 9rx 22 hours ago
To be fair, all the stuff that hasn't been automated away is the same in all cases, farmer and farmhand alike: Monitoring to make sure the computer systems don't screw up.
The bet here is that LLMs are past the "needs monitoring" stage and can buy a multi-million dollar farm, along with everything else, without oversight, and Seth won't be upset about its choices in the end. Which, in fairness, is a more practical (at least less risky from a liability point of view) bet than betting that a multi-million dollar X9 without an operator won't end up running over a person and later upside-down in the ditch.
He may have many millions to spend on an experiment, but to truly put things to the test would require way more than that. Everyone has a limit. An MVP is a reasonable start. v2 can try to take the concept further.
Comment by pfdietz 22 hours ago
Comment by bluGill 22 hours ago
Comment by 9rx 21 hours ago
Comment by moolcool 14 hours ago
Comment by tootie 19 hours ago
Incidentally I clicked through to this guy's blog and found his predictions for 2025 and he was 0 for 13: https://avc.xyz/what-will-happen-in-2025-1
Comment by hahahahhaah 17 hours ago
Comment by TheRealPomax 22 hours ago
Comment by kennywinker 22 hours ago
Comment by aprilthird2021 20 hours ago
I would have to look up farm services. Look up farmhand hiring services. Write a couple emails. Make a few payments. Collect my corn after the growing season. That's not an insurmountable amount of effort. And if we don't care about optimizing cost, it's very easy.
Also, how will Claude monitor the corn growing, I'm curious. It can't receive and respond to the emails autonomously so you still have to be in the loop
Comment by Alupis 20 hours ago
The estimate seems to leave out a lot of factors, including irrigation, machinery, the literal seeds, and more. $800 for a "custom operator" for 7 months - I don't believe it. Leasing 5 acres of farmable land (for presumably a year) for less than $1400... I don't believe it.
The humans behind this experiment are going to get very tired of reading "Oh, you're right..." over and over - and likely end up deeply underwater.
Comment by snowmobile 5 hours ago
[1] https://www.extension.iastate.edu/agdm/crops/html/a1-20.html
Comment by etamponi 17 hours ago
I am extremely worried by the amount of hype I see around. I hope I am just living in a bubble.
Comment by deathanatos 22 hours ago
(And if you read the linked post, … like this value function is established on a whim, with far less thought than some of the value-functions-run-amok in scifi…)
(and if you've never played it: https://www.decisionproblem.com/paperclips/index2.html )
Comment by geuis 22 hours ago
Comment by omnicognate 22 hours ago
"Thinking quickly, Dave constructs a homemade megaphone, using only some string, a squirrel, and a megaphone."
Comment by divbzero 22 hours ago
To make this a full AI experiment, emails to this inbox should be fielded by Claude as well.
Comment by DoctorOW 22 hours ago
Comment by dsjoerg 21 hours ago
Let's step back.
"there's a gap between digital and physical that AI can't cross"
Can intelligence of ANY kind, artificial or natural, grow corn? Do physical things?
Your brain is trapped in its skull. How does it do anything physical?
With nerves, of course. Connected to muscle. It's sending and receiving signals, that's all it's doing! The brain isn't actually doing anything!
The history of humanity's last 300k years tells you that intelligence makes a difference, even though it isn't doing anything but receiving and sending signals.
Comment by recursive 20 hours ago
Comment by formerly_proven 21 hours ago
Comment by Windchaser 21 hours ago
An AI that can also plant corn itself (via robots it controls) is much more impressive to me than an AI just sending emails.
Comment by drhodes 20 hours ago
Comment by downboots 20 hours ago
Comment by hahahahhaah 17 hours ago
Comment by kitsune1 19 hours ago
Comment by nvader 22 hours ago
I'll be following along, and I'm curious what kind of harness you'll put on TOP of Claude code to avoid it stalling out on "We have planted 16/20 fields so far, and irrigated 9/16. Would you like me to continue?"
I'd also like to know what your own "constitution" is regarding human oversight and intervention. Presumably you wouldn't want your investment to go down the drain if Claude gets stuck in a loop, or succumbs to a prompt injection attack to pay a contractor 100% of its funds, or decides to water the fields with Brawndo.
How much are you allowing yourself to step in, and how will you document those interventions?
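To be concrete, the harness I'm imagining is little more than an auto-continue loop like the sketch below. This is hypothetical; ask_claude is a stand-in for however you actually invoke the model, not a real API:

    # Hypothetical auto-continue harness, just to illustrate the shape of it.
    def run_until_done(ask_claude, goal, max_turns=1000):
        transcript = [f"Goal: {goal}. Work autonomously and say DONE when finished."]
        for _ in range(max_turns):
            reply = ask_claude("\n".join(transcript))
            transcript.append(reply)
            if "DONE" in reply:
                return transcript
            # the whole point: never stall on "Would you like me to continue?"
            transcript.append("Continue.")
        return transcript

Whether something like that counts as "Claude doing it", or as a human-built scaffold doing the real work, is exactly the question I'm asking.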
Comment by snowmobile 17 hours ago
Comment by wartywhoa23 1 hour ago
Such a nice term to use as an AI-related alternative to "jump the shark".
I'll definitely be using it.
Comment by lbrito 21 hours ago
Unequivocally awful
Comment by incr_me 19 hours ago
Comment by demorro 16 hours ago
Comment by dsjoerg 21 hours ago
"Stop staring at screens"
"Stop sitting at your desk all day"
"Stop loafing around contributing nothing just sending orders from behind a computer"
"Touch grass"
but now that the humans are finally gonna get out and DO something you're outraged
Comment by lbrito 21 hours ago
Comment by lupire 19 hours ago
The remaining work is only bad because it's low paying, and it's low paying because the wealth created by machines is unfairly distributed.
Comment by Spoom 22 hours ago
I've been rather expecting AI to start acting as a manager with people as its arms in the real world. It reminds me of the Manna short story[1], where it acts as a people manager with perfect intelligence at all times, interconnected not only with every system but also with other instances in other companies (e.g. for competitive wage data to minimize opex / pay).
Comment by throwway120385 22 hours ago
Comment by CommieBobDole 21 hours ago
This seems like something along the lines of "We know we can use Excel to calculate profit/loss for a Mexican restaurant, but will it work for a Tibetan-Indonesian fusion restaurant? Nobody's ever done that before!"
Comment by gbear605 14 hours ago
Comment by stephantul 8 hours ago
Comment by ranprieur 21 hours ago
Pure dystopia.
Comment by moolcool 14 hours ago
Comment by dsjoerg 21 hours ago
The endless complaining and goalposting shifting is exhausting
Comment by qayxc 20 hours ago
There's no goalpost shifting here - it's l'art pour l'art at its finest. It'd be introducing an agent where no additional agent is required in the first place, i.e. telling a farmer how to do their job when they already know how to do it and already do it.
No one needs an LLM if you can just lease some land and then tell some person to tend to it, (i.e. doing the actual work). It's baffling to me how out of touch with reality some people are.
Want to grow corn? Take some corn, put it in the ground in your backyard and harvest when it's ready. Been there, done that, not a challenge at all. Want to do it at scale? Lease some land, buy some corn, contract a farmer to till the land, sow the corn, and eventually harvest it. Done. No LLM required. No further knowledge required. Want to know when the best time for each step is? Just look at when other farmers in the area are doing it. Done.
Comment by orange_joe 21 hours ago
I'm guessing this will screw up by assuming infinite labor & equipment liquidity.
Comment by rmason 13 hours ago
Managing all the decisions in growing a crop is too far a reach. Maybe someday, not today. Way too many variables and unexpected issues. I'm a former fertilizer company agronomist and the problem is far harder than say self driving cars.
Comment by mbowcut2 19 hours ago
Comment by japoneris 21 hours ago
Comment by joelthelion 5 hours ago
Comment by ikidd 20 hours ago
Betting millions of dollars in capital on its decision-making process, for something it wasn't even designed for and that is way more complicated than even I believed coming from a software background into farming, is patently ludicrous.
And 5 acres is a garden. I doubt he'll even find a plot to rent at that size, especially this close to seeding in that area.
Comment by tiffanyh 14 hours ago
This is all addressed in the original blog post.
Comment by travisgriggs 22 hours ago
I do not have a positive impression/experience of most middle/low-level management in the corporate world. Over 30 years in the workforce, I've watched it evolve into a "secretary/clerk, usually male, who agrees to be responsible for something they know little about or aren't very good at doing, and pretends at orchestrating".
Like growing corn, lots of literature has been written about it. So models have lots to work with and synthesize. Why not automate the meetings and metric gatherings and mindless hallucinations and short sighted decisions that drone-ish be-like-the-other-manager people do?
Comment by corndoge 1 hour ago
Comment by dabinat 18 hours ago
Comment by ks2048 20 hours ago
So, where are the exact logs of the prompts and responses to Claude? Under "/log" I do not see this.
Comment by snowmobile 20 hours ago
Comment by eisbaw 20 hours ago
Comment by meroes 13 hours ago
Comment by starkparker 21 hours ago
The point could be made by having it design and print implements for an indoor container grow and then run lights and water over a microcontroller. Like Anthropic's vending machine this would also be an already addressed, if not solved, space for both home manufacturing and ag/garden automation.
It'd still be novel to see an LLM figure it out from scratch step by step, and a hell of a lot more interesting than whatever the fuck this is. Googling farmland in Iowa or Texas and then writing instructions for people to do the actual work isn't novel or interesting; of course an LLM can write and fill out forms. But the end result still primarily relies on people to execute those forms and affect the world, invalidating the point. Growing corn would be interesting, project managing corn isn't.
Comment by socalgal2 21 hours ago
Comment by snackbroken 19 hours ago
Comment by a3w 4 hours ago
Comment by recursive 21 hours ago
Comment by FuturisticLover 22 hours ago
We feed it the information as a context to help us make a plan or strategy to achieve or get something.
They are also doing the same. They will be feeding in the sensor, weather and other info, so Claude can give them a plan to execute.
Ultimately, they need to execute everything.
Comment by jdwg 20 hours ago
So this is a very legitimate test. We may learn some interesting ways that planting, growing, harvesting, storing, and selling corn can go wrong.
I certainly wouldn't expect to make money on my first or second try!
Comment by bradgranath 21 hours ago
Comment by pragmatic 22 hours ago
Look up precision ag.
Comment by ironbound 20 hours ago
Comment by nonethewiser 22 hours ago
Comment by dghlsakjg 19 hours ago
I dug a little deeper and found this study showing cash rental rates per acre per year ranging from $215 to $295.[1] So it actually looks like Claude got this one right.
Of course I know nothing about renting farmland, but if you ask to rent 5 acres when the average farm size is in the 300+ acre range, the land owners might tell you to get lost or pony up. It's a little bit like asking Amazon to give you enterprise rates for a single small EC2 instance.
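Back-of-the-envelope with those figures: 5 acres at roughly $215 to $295 per acre per year is about $1,075 to $1,475 per year, so the sub-$1,400 lease estimate questioned upthread is at least in the right ballpark, setting aside whether anyone would bother leasing out a plot that small.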
[0] https://farmland.card.iastate.edu/overview [1] https://www.extension.iastate.edu/agdm/wholefarm/pdf/c2-10.p...
Comment by serhack_ 22 hours ago
    import random

    # satirical Farmer Fred decision engine; the actions are placeholders, not real functions
    choice = random.randrange(5)
    match choice:
        case 0: blog_post()
        case 1: tell_to_plant_corn()
        case 2: register_website()
        case 3: pause()
        case 4: move_money()
Comment by aaln 20 hours ago
Comment by MetaMonk 14 hours ago
Comment by nxobject 22 hours ago
Comment by dieggsy 22 hours ago
Anyway, turned it off; sure enough, misaligned.
Comment by pier25 16 hours ago
Comment by space_greg 22 hours ago
Comment by fanatic2pope 20 hours ago
We, as in humans?
Comment by Kkoala 22 hours ago
But where is the prompt or api calls to Claude? I can't see that in the repo
Or did Claude generate the code and repo too? And there is a separate project to run it
Comment by guerrilla 21 hours ago
Comment by naveed125 10 hours ago
Comment by esafak 21 hours ago
Comment by recursive 19 hours ago
Comment by solomonb 21 hours ago
Comment by tpolm 14 hours ago
Comment by paxys 19 hours ago
"Hey AI, draft an email asking someone to grow corn. See, AI can grow corn!"
This project is neat in itself, sure, but I feel the author is wayyy missing the point of the original thought.
Comment by tw04 20 hours ago
Huh? I have no doubt that mega corporate farms have a "farm manager", but I can tell you, having grown up in small-town America, that's just not a thing. My buddies' dads were "farm managers", and absolutely planted every seed of corn (until the boys were old enough to drive the tractor, and then it was split duty), and the big farms also harvested their own while the smaller ones hired it out.
So unless claude is planning on learning to drive a tractor it’s going to be a pretty useless task manager telling a farmer to do something he or she was already planning on doing.
Comment by jrflowers 4 hours ago
Comment by itsafarqueue 21 hours ago
I have zero doubt Claude is going to do what AI does and plough forward. Emails will get sent, recommendations made, stuff done.
And it will be slop. Worse than what it does with code, the outcomes of which are highly correlated with the expertise of the user past a certain point.
Seth wins his point. AI can, via humans giving it permission to do things, affect the world. So can my chaos monkey random script.
Fred should have qualified: _usefully_ affect the world. Deliver a margin of Utility.
We’re miles off that high bar.
Disclosure: all in on AI
Comment by fhennig 21 hours ago
Comment by recursive 16 hours ago
Comment by dsr_ 21 hours ago
Comment by jollyllama 22 hours ago
Comment by programd 20 hours ago
Comment by bstsb 22 hours ago
Comment by tleyden5iwx 19 hours ago
Comment by jpmattia 22 hours ago
I mean, more or less, but you see what I'm getting at.
Comment by kennywinker 22 hours ago
Most food is picked by migrant laborers, not machines.
Comment by mvidal01 21 hours ago
Comment by phyzome 13 hours ago
Comment by chakazula 19 hours ago
Comment by silveira 22 hours ago
Comment by tsunamifury 22 hours ago
1) Context: lack of sensors and sensor processing; maybe solvable with webcams in the field, but manual labor is still required for soil testing etc.
2) Time bias: orchestration still has a massive recency bias in LLMs and a huge underweighting of established ground truth, causing it to weave and pivot on recent actions in a wobbly, overcorrecting style.
3) Vagueness: by and large most models still rely on non-committal vagueness to hide a lack of detailed or granular expertise. That granular expertise tends to hallucinate more, or just miss context more and get it wrong.
I’m curious how they plan to overcome this. It’s the right type of experiment, but I think too ambitious of a scale.
Comment by BenoitEssiambre 22 hours ago
Comment by futuraperdita 21 hours ago
Comment by BenoitEssiambre 20 hours ago
Comment by farmin 19 hours ago
Comment by qoez 22 hours ago
Comment by lupire 20 hours ago
Comment by jovial_cavalier 21 hours ago
Seriously, what does this prove? The AI isn't actually doing anything, it's just online shopping basically. You're just going to end up paying grocery store prices for agricultural quantities of corn.
Comment by citizenpaul 21 hours ago
This of course will never happens so instead those in power will continue to try to shoehorn AI into making slaves which is what they want, but not the ideal usage for AI.
Comment by dh2424 17 hours ago
Comment by lerp-io 19 hours ago
Comment by undo-k 15 hours ago
Comment by moffkalast 22 hours ago
If people are involved then it's not an autonomous system. You could replace the orchestrator with the average logic defined expert system. Like come on, farming AGVs have come a long way, at least do it properly.
Comment by gritspants 22 hours ago
Claude: Oh. My. God.
Comment by jackmarshl0w 20 hours ago
Comment by Night_Thastus 22 hours ago
They're (very impressive) next word predictors. If you ask it 'is it time to order more seeds?' and the internet is full of someone answering 'no' - that's the answer it will provide. It can't actually understand how many there currently are, the season, how much land, etc, and do the math itself to determine whether it's actually needed or not.
You can babysit it and engineer the prompts to be as leading as possible to the answer you want it to give - but that's about it.
Comment by jablongo 22 hours ago
Comment by Night_Thastus 22 hours ago
The worlds most impressive stochastic parrot, resulting from billions of dollars of research by some of the world's most advanced mathematicians and computer scientists.
And capable of some very impressive things. But pretending their limitations don't exist doesn't serve anyone.