23 Comments on the Chomskian Doctrine

Within this text I (or « we », as the circle of those who adhere to the ideas presented here shall be called) propose some objections against the set of Chomskian theories (which will be labelled « the Doctrine »). In the beginning the intention was not to write a scientific article, but simply to save for eternity a few notes of a layman who wants to protect his lay status, at least in some form, before he becomes completely re-programmed by the Doctrine. It is possible that the text contains many internal contradictions. Similarly to the Chomskian theories, to science, and to knowledge in general, this text evolves in time, and thus it can happen that what was perceived as a serious flaw in the « Initialization problem » is a solution to the determinist/non-determinist dilemma of the « Halting problem ». It is also very much possible that the majority of the problems proposed here have already been addressed, either by Chomsky and the group of his « fidèles » (faithful), or by his « adversaries ».

1. The initialization problem

During the last class of syntax, students were given these two sentences:

1. La lecture de ce livre a été conseillé aux étudiants.
2. Le chat de la voisine semble être nourri par le concierge.

In the first case, the V « conseiller » was projected from the lexicon to the D-structure as « être conseillé », and the I thus received the value [+AGR]. This [+AGR] allowed, during the creation of the S-structure, the movement of the NP « la lecture ... » from the position bound to the V-bar of « être conseillé » to the initial empty position, where « la lecture » received the Nominative case. In the second case, the V « nourrir » was projected from the lexicon to the D-structure as « être nourri », and the I to which the V was bound thus received the value [-AGR]. This [-AGR] blocked, during the creation of the S-structure, the movement of the NP « le chat » from the position bound to the V-bar of « être nourri » to the second IP.
Therefore the whole NP had to « move » even further and got bound to the first IP, which was [+AGR]. Now, the question of an enfant terrible student was this: why, in the case of the first sentence, did we insert the verb from the lexicon in the conjugated form, thus creating the [+AGR] condition, and in the second case in the infinitive form, thus creating the [-AGR] condition? The answer was: because such is the case in the resulting sentences (phrases d'arrivée). And now the really terrible question of the enfant terrible student follows: but how can we know what the resulting sentence will BE while we are still in the process of its own derivation? In other words, how can a result influence its own construction? That is cabala, not science.

This problem is, of course, not obvious to the students reading the books or doing exercises during the classes, because the « resulting sentence » is already present right there, in front of their eyes. From the very beginning they know the result; they reason out the « initial conditions » from it, and then they are happy that from those very causes they arrive at the result from which they started... But in a real-life situation, when a speaker is supposed to generate a sentence, there cannot be any « resulting sentence » present. At least not for a generative grammarian. For if there were, in some sense, a « resulting sentence » already present, for example in some potential, virtual form of a « pattern-template » against which the phrase-being-constructed is matched, this pattern-template would also have to be either generated or taken from memory. But if it were generated, it would need another pattern-template to match against, etc., ad infinitum; in other words, we would be posed in front of the problem of « infinite regress ».
The option « taken from memory » is unacceptable for a generative grammarian, because in such a case every sentence would potentially need its own pattern-template to be stored in memory. The result would be a huge amount of patterns stored in memory and no need to generate anything at all. We will address this problem more closely in the « Halting problem argument », as well as in some others.

Returning to our sentences, we now try to offer this hastily made possible solution to the problem, trying to stay at least a little bit faithful to the framework of the Chomskian theories: to arrive at the D-structure of « le chat semble... », something more than the lexical items has to be inserted: some additional information saying that the verb will be in the infinitive. This very same « information » will « trigger » the derivation of phrase 1 from the lexical items, and not the derivation leading to « Il semble que le chat de la voisine est nourri par le concierge », which would also be a second valid derivation out of the very same lexicon (if we suppose, as many modern theories do, that « il » in this case is not an independent lexical item).

2. Argument of a layman coming from a foreign country: Do flying fish have wings?

What grammarians have called « cases » for hundreds of years is much more related to thematic roles than to « position in D-structure » or to some « government by V/N/whatever ». In other words, cases are, at least for us Slavs, much more morpho-semantic than syntactic entities (in the sense where « syntactic » means « in relation to the position within the sentence »); for example, the answer to the question « Who? What? [kto? čo?] » is for me in the Nominative, the answer to the question « To whom? To what? [komu? čomu?] » in the Dative, etc.
The « magic » of the cases is not hidden in the fact that one word/part of the sentence strongly influences another word/part of the sentence (that would be a « discovery » similar to finding out that sentence A significantly influences the understanding of sentence B which follows...), but in the fact that we use morphology to do so. I simply don't understand why the creators of the Doctrine chose the same name, Cases (and even with the capital!), to designate the set of solutions to the technical problems of their theory, which do not have very much in common with the cases in existing natural languages.

Just a small example of how small Slavic nations can use their « cases »:

Nominative: hovorí láska (The love is speaking/telling)
Genitive: hovorí (z) lásky ((S)he is speaking because of love OR love is the reason of her speech)
Dative: hovorí láske ((S)he is speaking/telling to love) [metaphoric but acceptable]
Accusative: hovorí lásku ((S)he is speaking love) [maybe not acceptable to some orthodox purists]
Locative: hovorí (o) láske ((S)he is speaking ABOUT love)
Instrumental: hovorí láskou ((S)he is speaking BY love) OR hovorí (s) láskou ((S)he is speaking with love)

The preposition « o » in the Locative protects the case from a semantic collision with the morphologically identical Dative (at least within all four declension paradigms for the feminine gender); the preposition « z » either protects the case from a semantic collision with the Nominative of the plural form « lásky », or is a preposition in its own right, meaning « from ». To prove our point we feel no need to decide this sort of dilemma. Thus we can construct a sentence like:

« Láska láske z lásky hovorí lásku a o láske s láskou. »

It has 6 components which can be freely permuted among each other, thus forming 6! = 720 possible sentences, the positional constraints being only stylistic.
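The combinatorics above can be checked mechanically. Here is a minimal Python sketch (our own illustration; for convenience it treats the conjunction « a » as glued to « o láske s láskou », which is an assumption of this sketch, not a claim about Slovak syntax):

```python
from itertools import permutations

# The six freely orderable components of the Slovak example sentence.
# Gluing "a" to "o láske s láskou" is an assumption made for this sketch.
components = ["Láska", "láske", "z lásky", "hovorí", "lásku", "a o láske s láskou"]

# Every ordering of the six components, rendered as a sentence.
sentences = [" ".join(p) + "." for p in permutations(components)]

print(len(sentences))   # 720 = 6!
print(sentences[0])     # the original ordering
```

The first permutation returned is the original order, and the count confirms the 6! = 720 figure claimed in the text.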
While some of the results could possibly be labelled « poetic », especially at the beginning of the reading, we doubt that any reader could rightfully justify his stance when calling these sentences ungrammatical (especially after the competence of the reader gets « accommodated to the pattern » after reading a few hundred permutations). The goal of this small exercise was to show that one verb in the Slovak language can, in some extreme cases, assign all types of cases, and even to the same noun. If such a situation occurs, position plays only a minor cosmetic role (the only exception being the clitics), and the assignment of the correct case is determined by morphology (in the hearing-parsing, passive performance situation) and by semantic roles (in the producing, active performance situation). Because it sheds almost no light upon the beauty of Sanskrit's or the Slavic languages' « cases », would You please be so kind as to choose a different term for Your Universal Theory of Cases?

[1] Additional comment: Imagine a sentence:

        p
       / \
      N   VP
      |   / \
     He  is  student

In Slovak, Czech, or Polish we say: On je študent. (The pronoun is Sg. Masc. Nominative; the noun is also in the Nominative.) How is it possible that the verb « to be » assigns the Nominative not only to its external NP (the pronoun) but also to its « object », the internal NP? How would You deal with this situation? You can:

- Say that the Nominative in our language can be internal as well as external. In such a case Your definition of Your Nominative has not offered us any information; it is an empty tautology, similar to « either he is alive or he is dead », « either he is stupid or he is not », etc.
- Forget hundreds of years of Tradition and say that the case assigned is not the Nominative but some different case (for example, in French that would be the Accusative, because it is an internal NP and the V is not a dative V).
In such a case we would kindly allow ourselves to direct Your attention to the fact that « this new case of Yours » would be completely redundant and useless for the theory of our language, for no matter what the noun is, its case-signifying morpheme in this position is always (for 12 paradigms in the singular + 12 paradigms in the plural) identical to the case-signifying morpheme of the Nominative (or of the Instrumental, as You'll see later...). What a coincidence!

- Make an « exception » for the word « to be », saying that it can do something very special: that it can in fact have two external theta-roles. Thus we can do something like:

    |---IP---|
    |   |    |
    N   V    N
    |   |    |
    He  is  student

Or You can maybe try to persuade us that what is an indispensable part of our linguistic heritage is, in the « underlying » reality, not cases at all. After all, the generativist phonologist Schane succeeded in convincing the world (and even French phonologists!!!) that French, in the « underlying » reality, does not contain any nasal vowels, and we can even call this solution the most elegant one. But there is a small problem: it is a complete heresy against the basic axiom of the Chomskian Doctrine: p -> N VP [2]

...And during this analysis I tried to leave aside the fact that we can express the same meaning by saying: On je študentom. (He is a student. Pronoun in Sg. Masc. Nominative; noun in the INSTRUMENTAL!) ...So now what? Is the assignment of the Instrumental, in contrast with the Nominative case of the first example, driven by the position of the stars?

3. Halting problem

In this section we'll use a method similar to the reductio ad absurdum of the old scholastic Masters to show that something like a Generative Grammar G, « capable of producing an infinite number of terminal sentences out of a set of lexical items S by applying generative rules R », seems to be a chimera. Let's assume that such a grammar G exists. We can thus ask a question: how have we obtained this infinity of terminal sentences?
Because even a beginner in mathematics knows that if we want to get from a finite number N to an infinity, we have to either multiply N by another number J which is itself infinite, or apply an operation/function/rule F upon N infinitely many times. At the beginning, then, we see only these solutions to the question « how can a grammar G be possible? »:

- 1 - the set of rules R, which is applied upon the set of lexical items S, is itself infinite
- 2 - the set of rules R is finite, as is the set of lexical items S, but the number of times T we apply an operation/derivational rule (belonging to R) can be potentially infinite
- 3 - the set of rules R is finite, the number T of operations is finite, but the set of lexical items S is infinite
- 4 - the set of rules R is finite, the number T of operations is finite, the set of lexical items S is finite, but it is forwarded to the input of the first derivational rule (to the D-structure) in infinitely many variations V

We immediately see that the first solution is invalid, for an infinite number of rules would have to be stored somewhere, but all the possible storage spaces (brain memory, DNA, etc.) are finite. The same argument applies to solution 3: the set of lexical items used by a given person is necessarily finite. Thus the only sources of the « desired infinity » we can see are solutions 2 and 4.

Let's first look closer at solution 2, where the magic is hidden in the « infinity of the number T of applications of a rule belonging to the finite set of rules R ». In other words, a rule can be applied more than once; to generate an infinite number of sentences it can in fact be applied infinitely many times, if the given input allows it.
In the framework of the Standard Theory, let's imagine for example a Deep structure:

I know # I am #

We can apply a rule RTind upon it, obtaining

I know THAT # I am #

and upon this structure we can once again apply the same rule:

I know THAT THAT # I am #

[2] We'll attack this axiom in the argument « I love You » and, more deeply, in the anti-Euclidean argument.

et caetera, et caetera, ad infinitum. But in such a case it is our duty to ask a question: what can prevent a speaker of the sentence « I know that I am » from getting into an infinite « that that that » [3] loop? According to what criterion will he know that the syntactic derivation is finished and he can pass the output to the phonological layer?

You will most probably answer: « The Standard Theory is dead, this is not an argument; I have dealt with these problems in the later works, for example by reducing the set of rules R to its only member, « move alpha », so there will be no more problematic rules like Your RTind, and by introducing constraints which will prevent us from falling into the infinite loops which could possibly occur if move alpha moved, in the first step, an element from position A to B, and in the second step from B to A [4]. »

And we'll say: « We don't know Your theory in detail, but doesn't the introduction of constraints in Your later theories, which prevent a speaker from falling into the « infinite loop » abyss, lead to the loss of the capability of Your generative grammar G to produce an infinite number of sentences [5]? Aren't You posed between the Scylla of « G is capable of producing an infinite number of sentences, but it can happen that infinite loops will occur » and the Charybdis of « infinite loops are not present within my system, but I have lost the real infinity-oriented generativity of G »?
And as usual, similarly to a good hacker-programmer, You'll try to adapt Your model to this problem. You'll think a while and You'll propose this ad hoc solution, which, as almost all ad hoc solutions, will make Your model less elegant, less scientific, less comprehensible to the un-initiated (and thus un-reprogrammed), and You'll say: « This is a serious problem. But I postulate the existence of a procedure P which would potentially be capable of knowing in advance whether the derivation will finish or will lead us into the « infinite loop » abyss. Thus we'll still be able to apply some rules infinitely many times, when needed, but we'll never fall into an infinite loop. »

If this were Your solution, we are sad to remind You that, according to the father of informatics, a man owning one of the most brilliant minds of the 20th century, named Alan Turing, such a universal procedure P, capable of deciding whether a given programme (a set of instructions) will ever halt or not, DOES NOT EXIST for a deterministic machine (cf. http://en.wikipedia.org/wiki/Halting_problem ).

And afterwards You'll maybe try to offer another ad hoc solution and say that Your model is in fact a non-deterministic one. In such a case we'll be very happy that You have finally arrived at the conclusion that « a human being is more than a machine ». But until that moment, which will maybe come while we become accustomed to Your « Minimalist programme », we have to express serious concerns about all the previous G.G. theories, which seem to us to be very much deterministic. We repeat once again: such theories

- either lead to the emergence of infinite loops during the derivational process,
- or to the impossibility of generating an infinite number of terminal sentences out of a finite number of variations of lexical inputs upon which rules (or just one rule) can possibly be applied infinitely many times.

This was our answer to the possible solution 2.
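The dilemma of solution 2 can be made concrete with a toy sketch (our own illustration, not any Chomskian formalism; the function names are invented): a single recursive rule such as RTind can be re-applied to its own output without end, so the generator needs an external stopping criterion, and that is exactly what Turing's result says cannot exist as a general decision procedure.

```python
# Toy version of RTind: embed the clause one level deeper under "I know that".
def r_t_ind(sentence: str) -> str:
    return "I know that " + sentence

def derive(start: str, max_steps: int) -> str:
    """Apply the rule repeatedly.

    Without the max_steps cap this loop would never halt: nothing in the
    rule itself says when the derivation is "finished", which is the point
    of the halting-problem objection above.
    """
    s = start
    for _ in range(max_steps):
        s = r_t_ind(s)
    return s

print(derive("I am", 2))  # I know that I know that I am
```

Capping the number of steps avoids the loop, but then only finitely many sentences are derivable; removing the cap restores the infinity at the price of possible non-termination, which is the Scylla-and-Charybdis choice described above.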
[3] Even while we try to show that generative-grammar syntax is more or less an « impasse » in the evolution of linguistics, similarly to the Ptolemaic geocentric approach in cosmology, we nonetheless have to admit that it helped us to shed some light upon the pathological case of « bégaiement » (stuttering).

[4] And another constraint for three steps ((A to B, B to C, C to A) is forbidden), another constraint for four, another for five, etc. Quite a lot of them in the end, isn't it?

[5] One can for example imagine a procedure: if the derivation is not finished within a certain period of time, pass what You have already obtained to the phonological layer. But in such a case the number of sentences that can possibly be produced will be finite, since the « certain period of time » constant is also finite. G will thus not be allowed to generate an infinite number of terminal sentences.

The last possible way we could establish the existence of a generative grammar G capable of producing an infinite number of sentences was solution 4: « the set of lexical items S is finite, but it is forwarded to the input of the first derivational rule (to the D-structure) in infinitely many variations V ». Imagine for example a speaker whose lexicon contains only the items {I, You, know}, and whose only rule R is the already mentioned RTind. We can thus imagine that this speaker lives in a highly developed society of telepaths where the only purpose of linguistic exchange are affirmations of the following kind:

Variation  Lexical input                     RTind         Result
V1         {I, know}                         not applied   I know
V2         {You, know}                       not applied   You know
V3         {I, know, You}                    not applied   I know You
V4         {I, know, You, know}              applied once  I know that You know
V5         {You, know, I, know, I, know}     applied 2x    You know that I know that I know
V6         {I, know, You, know, You, know}   applied 2x    I know that You know that You know

etc., possibly ad infinitum.
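The table above can be reproduced with a small sketch (again our own illustration; the helper names are invented): the finite lexicon yields clause sequences of every length, RTind glues them together with « that », and so the set of distinct inputs, and hence of outputs, is unbounded.

```python
from itertools import product

# The only full clauses the {I, You, know} lexicon yields (our assumption).
CLAUSES = ["I know", "You know"]

def variation(clauses):
    """Join a sequence of clauses with the toy RTind rule ('that')."""
    sentence = clauses[-1]
    for clause in reversed(clauses[:-1]):
        sentence = clause + " that " + sentence
    return sentence

print(variation(["I know", "You know"]))            # V4
print(variation(["You know", "I know", "I know"]))  # V5

# For every length k there are 2**k distinct clause sequences, so the
# number of variations grows without bound as k grows.
n_inputs = sum(len(list(product(CLAUSES, repeat=k))) for k in range(1, 4))
print(n_inputs)  # 2 + 4 + 8 = 14 inputs of length 1..3 alone
```

The point the text makes survives intact here: the grammar machinery is trivial, and all the "infinity" lives in the unbounded supply of lexical input sequences.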
Truly, in such a case we have a true generative grammar G capable of producing an infinite number of sentences out of a finite set of lexical items by applying a finite number of rules. The only problem is that... to have such an infinite number of terminal sentences, the number of variations V of lexical inputs being inserted into the D-structure... has to be infinite. And thus this wonderful generative grammar G of Yours in fact does not shed any light upon the generativity of language, because the real generativity of the language is hidden in the fact that the lexicon is capable of passing infinitely many variations of its items to the syntactic component [6]. We'll come back to this « generativity of the lexicon » within the « argument from poetics ». Here, we just proposed it as a last possible answer to the question « how can a generative grammar be possible? ». We have shown that the generative grammar G is technically impossible in the case of solutions 1 and 3, has some serious difficulties with infinities in solution 2, and is completely useless in solution 4. We propose to change the model.

4. Connectionist's argument: What??? Infinity of sentences???

5. Orator's argument

There exists a rhetorical figure called anacoluthon (« anakoluth »), which consists of breaking a rule of syntax to achieve a desired effect upon the public. Shakespeare used it, for example: "Rather proclaim it, Westmoreland, through my host, That he which hath no stomach to this fight, Let him depart." The existence of such a figure, especially within the works of the greatest Masters of language, poses the Doctrine a problem which it will never surpass. We can formulate it like this: syntax only makes things neater and more ordered afterwards; it is not the master, but just a servant, a « femme de ménage » (cleaning lady), of semantics.

[6] Imagine that after years of exhaustive research, the set S of fundamental rules (we can call them axioms) of Universal Grammar is found and explicitly formulated.
A rhetorician O comes afterwards with the intention to strongly influence the public, and, knowing well that « if You want to impress people, You have to be non-violently different », he'll apply his new figure, called from now on the « universal/chomskian anakoluth », which can be described like this: « Take any rule R out of the Universal Grammar S. Create a completely new rule R', or create it out of R, so that R and R' are not consistent (R' is the negation of R). Add this new rule R' to the set of rules which remain in S, thus obtaining S'. Generate all the sentences T' according to this new set of rules. »

The result would be that the sentences produced by O would not be generated according to the rules of the Universal Grammar S, but according to the rules of another grammar S', which is not consistent with S. Thus, if there exists a human being [7] who considers the T' sentences grammatical, it will follow that the grammar S is not universal. And since we could carry out a similar procedure with no matter what set of rules, then no matter the set S, it will never be universal.

You can, of course, make objections like these:

- 1. Since this new grammar S' will not be consistent with the Universal Grammar S, no human being will understand it; it will thus not be a human grammar at all, and the sentences produced will not be sentences of a human language.
- 2. It is one thing to explicitly formulate the rules of Universal Grammar, and another thing to construct sentences according to these rules. In fact, all the processes of U.G. are realized on the sub-conscious, functional-mind level, and what we observe are only the outputs of these processes. We can say that the access of our consciousness to U.G. is read-only: we can know it, but we cannot consciously apply its rules, use them to construct sentences.

We'll return to objection no. 1 in the « power of stimulus argument ».
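The « chomskian anakoluth » procedure can be sketched with a toy context-free grammar (entirely our own construction; the rules, symbols and the choice of which rule to negate are invented for illustration): take one rule R of S, replace it with an inconsistent R', and generate with the resulting S'.

```python
import random

# A toy rule set S: SENT -> NP VP, plus trivial lexical expansions.
RULES_S = {
    "SENT": [["NP", "VP"]],
    "NP":   [["this"], ["sentence"], ["this", "sentence"]],
    "VP":   [["is", "grammatical"], ["is", "not", "grammatical"]],
}

# R' is the "negation" of R: the SENT expansion is reversed (VP before NP).
RULES_S_PRIME = dict(RULES_S)
RULES_S_PRIME["SENT"] = [["VP", "NP"]]

def generate(symbol, rules, rng):
    """Rewrite `symbol` top-down until only terminal words remain."""
    if symbol not in rules:          # terminal word
        return [symbol]
    words = []
    for sym in rng.choice(rules[symbol]):
        words.extend(generate(sym, rules, rng))
    return words

rng = random.Random(0)
print(" ".join(generate("SENT", RULES_S, rng)))        # NP-first word order
print(" ".join(generate("SENT", RULES_S_PRIME, rng)))  # VP-first word order
```

Every sentence produced from S' violates the SENT rule of S by construction; whether a hearer would nonetheless judge such output grammatical is exactly the experimental question the text raises.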
For this moment, let's just consider it as a question which can be decided by scientific means, by an experiment: « could a human being, other than the orator O, ever consider the set of sentences generated by rules which are not consistent with the Universal Grammar as grammatical? »

If You decide to use the second objection to save Your system, we have to warn You that You'll make Your Doctrine very much impotent [8]. Because if You say « it is not possible to consciously apply the rules of U.G. », You'll lose a great deal of the justification for « drawing trees and writing derivations for Your students », because drawing trees and writing derivations is, in the end, nothing else than an attempt to consciously apply the rules of U.G. [9]

In other words: from the moment You explicitly formulate Your U.G., it will be in serious danger of losing its universality because of the « chomskian anakoluth » which will surely follow. And to save its universality by not formulating it at all, by saying « it is not possible to formulate it, yet it exists », would not be far from the disputes concerning the ontological proof of the existence of God, coming from the dark ages into which we simply don't want to fall once again.

Except the orator O, of course. For if his new figure does not have any positive effect upon the public, he will most probably end up in an institution for the mentally disturbed, where some wise man with a diploma, a title and a white coat will write into a diagnosis « speaker incapable of producing syntactically correct sentences », and we cannot take the linguistic judgments of such a person seriously, can we? On the other hand, if he succeeds, he'll be celebrated as a genius, and his new grammatical rule will maybe even become a NORM. Voilà la différence entre le fou et le génie: la réussite. (That is the difference between the madman and the genius: success.)

[8] You'll be like a mathematician trying to explain the idea behind a newly discovered Operator, saying to his students: « So, You see what this Operator does?
Unfortunately, we cannot do any exercises, because we don't know any objects with which it does what it does; for if we had known them, it would follow that we wouldn't be able to apply this Operator! »

[9] And the exercises of the form « explain why the sentence S: « sentence this grammatic not is » is not grammatical » are in fact the first examples of the application of the chomskian anakoluth.

6. Saussurian universal darwinism argument

7. Adequacy with the world

8. Empiricists' argument - Competence and performance mess

9. Popperian argument and falsifiability

10. Argument from poetics

11. Arguments from other arts

12. Argument from children

13. Argument from an anarchist - Normativity of G.G.

14. Argument « I Love You »

You try to persuade me that every grammatical English sentence can be analysed as N VP [10]. Asking You to analyse the sentence « I Love You », You automatically do something like:

        p
       / \
      N   VP
      |   / \
      I Love You

and if You are a mentalist, You will even try to persuade me that something like THAT structure exists in my very head. And then I will tell You: « but the things can be observed, analysed, differently ». And I will draw:

a)      p
       /|\
      N V N
      | | |
      I love You

b)      p
        |
    N---V---N
    |   |   |
    I  love You

c)        p
         / \
     core1  N
      / \   |
     N   V  |
     |   |  |
     I love You

and if You're honest to truth, at least for a short while You'll have nothing to say. Because even if You don't know it, You shall at least feel that there CAN BE cultures which mentally represent the love relation in such a mutually symmetric (examples a and b), or maybe even « the-other-one-is-primary » (example c), fashion. And then, because You are also honest to Your Doctrine, You'll add: « But that breaks all the most essential rules ». And I will respond: « Not rules, but conventions, which a community of post-war syntacticians voluntarily chose, driven by a need to facilitate the communication among its members, and which are being imposed upon the new generation ».

15. Russell's argument - G.G. as a formal system

16. Lobachevsky's (anti-Euclidean) argument

[10] In X-bar theory You'll add some bars and I's and P's here and there, but the overall Cartesian architecture of Your system, based upon the assignment of a special place to the subject, remains the same.

17. Hilbert's argument

18. Gödelian argument (or a little K.O. de grâce)

19. Post-Freudian argument

20. Kuhnian argument

21. Skinnerian « power of stimulus » argument

22. Memetician's argument

23. Copernicus' argument

Ironic remarks concerning the reading of Haegeman's Government and Binding manual

p. 97 – (Kayne proposes)... that the relevant parameter distinguishing VO languages from OV languages is related to the application of the leftward movement rule.

(A distinguished Anglo-Saxon academician proposes)... that the relevant parameter distinguishing the Arabic script from the classical Latin script is related to the right-to-left movement of the hand only. For as everybody (especially a well-formed Anglo-Saxon) knows, it is the left-to-right movement which is universal, present in the deep layers not only of cerebral structures but of DNA itself, and all empirical data which are not in accordance with this universal principle are just surface phenomena.

p. 106 – We must introduce some parameter to distinguish configurational languages from non-configurational languages.

And we must, of course, introduce some parameter to distinguish a language from a non-language. And it will be coded like this: if mother is speaking it (if the utterances are much more frequent and intense than any others), it is a language.

p. 145 – Apart from the identification of verb inflection, we shall not be concerned with the decomposition of words into morphemes either.

Such an approach is similarly absurd, from a global point of view, as saying: « Apart from the identification of the Slavic/Sanskrit cases, we shall not be concerned with the decomposition of words into morphemes either.
For every case we'll create a nice CaseP (analogous to IP), and when it comes to the conjugation of verbs, we'll invent a « theory of the verb agreement module », which we'll position between the D-structure and the S-structure. » Why not?

p. 143 – It is easy to see that the more elements are involved, the more choices are available (« language acquisition as a defense of the binary branching theory »).

Counter-argument: it is easy to see that for a child in the early stages of its language acquisition, the sentence « Daddy sleeps » has the same number of elements as the sentence « Mummy must go now ». And that number is one. Or do You believe that Mummy or Daddy says « white space » after each word? The number of lexicon elements into which a sentence can be analysed grows in parallel with the « internalization of the mother language's grammatical structure ». Thus binary branching does not offer any real advantage for language acquisition. (And in the end, almost no language can be fitted onto the strict binary-branching paradigm anyway.)

Bonus

Some sentences from Shakespeare (found by the application of the Perl regular expression /[\.\; ]([\w ]*?I [\w ]+? me [\w ]+?\.)/g upon the corpus "The Complete Works of William Shakespeare") which, in my opinion, violate the second binding principle:

I have kept me from the cup.
I cross me for sinner.
I would wish me only he.
Something I must do to procure me grace.
Here on this molehill will I sit me down.
I can buy me twenty at any market.
I have bethought me of another fault.
I will shelter me here.
Here will I rest me till the break of day.
I do repent me that I put it to you.
I fear me both are false.
I can no longer hold me patient.
For I repent me that the Duke is slain.
And now I cloy me with beholding it.
bread I it makes me mad.
That I should yet absent me from your bed.
How I may bear me here.
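For readers without Perl at hand, here is a rough Python transcription of the same search (our own transcription; applied to the full corpus it may differ slightly from the Perl one-liner in edge cases such as line breaks):

```python
import re

# Python equivalent of the Perl pattern /[\.\; ]([\w ]*?I [\w ]+? me [\w ]+?\.)/g:
# a fragment of the shape "... I ... me ... ." preceded by '.', ';' or a space.
PATTERN = re.compile(r'[.; ]([\w ]*?I [\w ]+? me [\w ]+?\.)')

def find_candidates(text: str):
    """Return candidate binding-principle violations found in `text`."""
    return [m.strip() for m in PATTERN.findall(text)]

sample = "He spoke; I have kept me from the cup. So be it."
print(find_candidates(sample))  # ['I have kept me from the cup.']
```

Running this over a plain-text copy of the complete works should recover sentences of the kind listed above.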