Language may rely less on complex grammar than previously thought: study
Posted by mikhael 23 hours ago
Comments
Comment by giardini 19 hours ago
In any case, the short answer is "No!". There is a LOT written about language and I find it difficult to believe that most ANY idea presented is really new.
For example, have these guys run their ideas past Schank's "conceptual dependency" theory?
Comment by alew1 4 hours ago
But linguists have proposed the possibility that we store “fragments” to facilitate reuse—essentially trees with holes, or equivalently, functions that take in tree arguments and produce tree results. “In the middle of the” could take in a noun-shaped tree as an argument and produce a prepositional-phrase-shaped tree as a result, for instance. Furthermore, this accounts for the way we store idioms that are not just contiguous “Lego block” sequences of words, such as “a ____ and a half” or “the more ___, the more ____”. See e.g. work on “fragment grammars.”
Can’t access the actual Nature Human Behavior article so perhaps it discusses the connections.
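For anyone who wants to see the “tree with a hole” idea concretely, here is a minimal toy sketch in Python. The Tree class and the in_the_middle_of_the fragment are made-up illustrations for this comment, not code from the paper or from any fragment-grammar implementation:

    # Toy illustration: a stored fragment is a function from a tree
    # argument (the "hole") to a larger tree result.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Tree:
        label: str                          # syntactic category or word
        children: List["Tree"] = field(default_factory=list)

        def __str__(self):
            if not self.children:
                return self.label
            return "(" + self.label + " " + " ".join(str(c) for c in self.children) + ")"

    def leaf(word: str) -> Tree:
        return Tree(word)

    # The fragment "in the middle of the ___": takes a noun-shaped tree
    # and returns a prepositional-phrase-shaped tree.
    def in_the_middle_of_the(noun: Tree) -> Tree:
        return Tree("PP", [
            leaf("in"),
            Tree("NP", [leaf("the"), leaf("middle"),
                        Tree("PP", [leaf("of"),
                                    Tree("NP", [leaf("the"), noun])])]),
        ])

    print(in_the_middle_of_the(leaf("night")))
    # prints: (PP in (NP the middle (PP of (NP the night))))

The point of the sketch is just that the fragment stores all the fixed material up front and leaves one slot to be filled at use time, which is what “functions that take in tree arguments and produce tree results” amounts to.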
Comment by akst 4 hours ago
I read the article (but not the paper), and it doesn’t sound like a no to me. But I also don’t find the claim that surprising, given that in other languages word order matters a lot less.
Comment by antonvs 5 hours ago
If the question you're answering is the one posed by the Scitechdaily headline, "Have We Been Wrong About Language for 70 Years?", you might want to work a bit on resistance to clickbait headlines.
The strongest claim that the paper in question makes, at least in the abstract (since the Nature article is paywalled), is "This poses a challenge for accounts of linguistic representation, including generative and constructionist approaches." That's certainly plausible.
Conceptual dependency focuses more on semantics than grammar, so isn't really a competing theory to this one. Both theories do challenge how language is represented, but in different ways that don't really overlap that much.
It's also not as if conceptual dependency is some sort of last word on natural language in humans - after all, it was developed for computational language representation, and LLMs have made it essentially obsolete for that purpose.
Meanwhile, the way LLMs do what they do isn't well understood, so we're back to needing work like the OP to try to understand it better, in both humans and machines.