How can I conduct a hypothesis test in Stata when my predictor perfectly predicts my dependent variable?

I would like to run the same regression over many subsets of my data. For each regression, I would then like to test the hypothesis that beta_1 = 1/2. In the actual data there is usually more variation, but for some subsets I have perfect collinearity, and Stata is not able to calculate standard errors. For example, in the below case:

sysuse auto, clear
quietly reg foreign value value2 weight length if id == `i', noconstant

The end goal would be to classify observations within subsets based on the hypothesis test: if I cannot reject the hypothesis at the 95% confidence level, I classify the observation as type 1. As it stands, I can identify the cases where the predictor does a very good job of predicting the DV, but I miss the cases where prediction is perfect. In the example, both groups would be classified as type 1, though I only want the second group. Is there a way to conduct a hypothesis test that catches these cases?

Answer:

test performs a Wald test of beta_i = exp against beta_i != exp, not a t-test. The Wald test uses the variance-covariance matrix from the regression. The standard error is missing here because the relationship between x and y is perfectly collinear: there is no noise in the model, nothing deviates. Interestingly enough, though, the standard error of the estimate is useless in this case anyway. There is no way to do this out of the box that I'm aware of, but you could of course program it yourself to get an approximation of the p-value in these cases.
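The "program it yourself" suggestion might be sketched as below. This is not code from the post: the variable names (price, weight, id), the tolerance 1e-8, and the use of e(rss) to detect a perfect fit are all illustrative assumptions. The idea is to branch on whether the residual sum of squares is (numerically) zero: when it is, the Wald test is undefined, so compare the point estimate to 1/2 directly; otherwise run test as usual.

* Sketch only; names and tolerances are assumptions, not from the original post.
sysuse auto, clear
gen id = foreign              // hypothetical subset identifier
gen type = .
levelsof id, local(ids)
foreach i of local ids {
    quietly reg price weight if id == `i'
    if (e(rss) < 1e-8) {
        * Perfect fit: no noise, no standard error. The coefficient is
        * exact, so compare it to 1/2 directly instead of testing.
        quietly replace type = 1 if id == `i' & abs(_b[weight] - 0.5) < 1e-8
    }
    else {
        * Usual case: Wald test of beta_1 = 1/2 via -test-.
        quietly test weight = 0.5
        quietly replace type = 1 if id == `i' & r(p) >= 0.05
    }
}

With this structure the perfect-prediction subsets are caught by the e(rss) branch rather than silently dropped because test cannot compute a statistic.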