I have six fixed factors: A, B, C, D, E and F, and one random factor R. I want to test the linear terms, the pure quadratic terms and the two-way interactions in R. So I constructed the full linear mixed model and tried to test its terms with drop1:
library(lme4)

full.model <- lmer(Z ~ A + B + C + D + E + F
                     + I(A^2) + I(B^2) + I(C^2) + I(D^2) + I(E^2) + I(F^2)
                     + A:B + A:C + A:D + A:E + A:F
                           + B:C + B:D + B:E + B:F
                                 + C:D + C:E + C:F 
                                       + D:E + D:F
                                             + E:F
                     + (1 | R), data=mydata, REML=FALSE)
drop1(full.model, test="Chisq")
It seems that drop1 is completely ignoring linear terms:
Single term deletions
Model:
Z ~ A + B + C + D + E + F + I(A^2) + I(B^2) + I(C^2) + I(D^2) + 
    I(E^2) + I(F^2) + A:B + A:C + A:D + A:E + A:F + B:C + B:D + 
    B:E + B:F + C:D + C:E + C:F + D:E + D:F + E:F + (1 | R)
       Df    AIC     LRT   Pr(Chi)    
<none>    127177                      
I(A^2)  1 127610  434.81 < 2.2e-16 ***
I(B^2)  1 127378  203.36 < 2.2e-16 ***
I(C^2)  1 129208 2032.42 < 2.2e-16 ***
I(D^2)  1 127294  119.09 < 2.2e-16 ***
I(E^2)  1 127724  548.84 < 2.2e-16 ***
I(F^2)  1 127197   21.99 2.747e-06 ***
A:B     1 127295  120.24 < 2.2e-16 ***
A:C     1 127177    1.75  0.185467    
A:D     1 127240   64.99 7.542e-16 ***
A:E     1 127223   48.30 3.655e-12 ***
A:F     1 127242   66.69 3.171e-16 ***
B:C     1 127180    5.36  0.020621 *  
B:D     1 127202   27.12 1.909e-07 ***
B:E     1 127300  125.28 < 2.2e-16 ***
B:F     1 127192   16.60 4.625e-05 ***
C:D     1 127181    5.96  0.014638 *  
C:E     1 127298  122.89 < 2.2e-16 ***
C:F     1 127176    0.77  0.380564    
D:E     1 127223   47.76 4.813e-12 ***
D:F     1 127182    6.99  0.008191 ** 
E:F     1 127376  201.26 < 2.2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
If I exclude the interactions from the model:
full.model <- lmer(Z ~ A + B + C + D + E + F
                     + I(A^2) + I(B^2) + I(C^2) + I(D^2) + I(E^2) + I(F^2)
                     + (1 | R), data=mydata, REML=FALSE)
drop1(full.model, test="Chisq")
then the linear terms get tested:
Single term deletions
Model:
Z ~ A + B + C + D + E + F + I(A^2) + I(B^2) + I(C^2) + I(D^2) + 
    I(E^2) + I(F^2) + (1 | R)
       Df    AIC    LRT   Pr(Chi)    
<none>    127998                     
A       1 130130 2133.9 < 2.2e-16 ***
B       1 130177 2181.0 < 2.2e-16 ***
C       1 133464 5467.6 < 2.2e-16 ***
D       1 129484 1487.9 < 2.2e-16 ***
E       1 130571 2575.0 < 2.2e-16 ***
F       1 128009   12.7 0.0003731 ***
I(A^2)  1 128418  422.2 < 2.2e-16 ***
I(B^2)  1 128193  197.4 < 2.2e-16 ***
I(C^2)  1 129971 1975.1 < 2.2e-16 ***
I(D^2)  1 128112  115.6 < 2.2e-16 ***
I(E^2)  1 128529  533.0 < 2.2e-16 ***
I(F^2)  1 128017   21.3 3.838e-06 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Because this is the way drop1 works (it's not specific to mixed models - you would find the same behaviour for a regular linear model fitted with lm as well). From ?drop1:

    "The hierarchy is respected when considering terms to be added or dropped: all main effects contained in a second-order interaction must remain, and so on."

In other words, drop1 will not offer a main effect for deletion while an interaction containing it is still in the model - the principle of marginality. (Note that I(A^2) etc. are not treated as containing A, which is why the linear terms do get tested once the interactions are removed.) I discuss this at some length in this CrossValidated post.
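As a minimal sketch of the same behaviour with a plain lm fit (simulated data, variable names purely illustrative), drop1 with its default scope only offers the interaction for deletion, not the main effects:

set.seed(1)
dd <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
dd$y <- with(dd, 1 + x1 + x2 + 0.5 * x1 * x2 + rnorm(100))
m <- lm(y ~ x1 * x2, data = dd)
drop1(m, test = "F")  ## only x1:x2 is listed; x1 and x2 are protected by marginality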
However, as Venables actually describes in the linked article (p. 15), you can get R to violate marginality if you want. In short, supplying scope = . ~ . will force drop1 to consider every term in the model, main effects included, and so to ignore marginality. You do this at your own risk - you should definitely be able to explain to yourself what you're actually testing when you follow this procedure. For example, a sketch against the full.model fitted above:
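## force drop1 to test every term, including the main effects
drop1(full.model, scope = . ~ ., test = "Chisq")

Each main effect is then tested with its interactions still in the model, so the result depends on how the other variables it interacts with are coded and centered - which is exactly the marginality issue the warning above refers to.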