
## September 28, 2006

### Left-Out Variables

We considered the consequences of mistakenly leaving out a regressor.

Setup for the two-variable regression model:

Consider two possibilities: (I) the true model omits the second regressor, or (II) the true model includes it.

If the truth is I and we pick I, we are in good shape.

An important proposition:

If the truth is II and we pick I, we have a left-out variable, which effectively becomes part of the error term.

We can derive an expression for the resulting left-out variable bias.
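That expression is E[slope] = b1 + b2·cov(x1, x2)/var(x1) when y on x1 alone is fit but the truth includes x2. The bias can be checked numerically; here is a minimal Python sketch (not the Stata used in class, and the coefficient values are illustrative assumptions):

```python
import random

random.seed(1)

b1, b2 = 2.0, 3.0          # true coefficients (illustrative values)
n, reps = 500, 200
slopes = []
for _ in range(reps):
    x1 = [random.gauss(0, 1) for _ in range(n)]
    # x2 correlated with x1: cov(x1, x2) = 0.5, var(x1) = 1
    x2 = [0.5 * u + random.gauss(0, 1) for u in x1]
    y = [b1 * u + b2 * v + random.gauss(0, 1) for u, v in zip(x1, x2)]
    # Fit model I (y on x1 alone) -- x2 is the left-out variable
    mx, my = sum(x1) / n, sum(y) / n
    sxx = sum((u - mx) ** 2 for u in x1)
    sxy = sum((u - mx) * (w - my) for u, w in zip(x1, y))
    slopes.append(sxy / sxx)

# Left-out variable bias: E[slope] = b1 + b2 * cov(x1, x2) / var(x1)
#                                  = 2 + 3 * 0.5 = 3.5, not 2
est = sum(slopes) / reps
print(round(est, 1))
```

The average estimated slope lands near 3.5 rather than the true 2, exactly the bias the formula predicts.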

Posted by bparke at 09:29 PM | Comments (0)

## September 26, 2006

### Functional Form

Somebody forgot his camera today, but the material we discussed resembled a Fall 2005 lecture and a Spring 2005 lecture.

Posted by bparke at 08:24 PM | Comments (0)

## September 21, 2006

### The F Test

We can use an F test to test a hypothesis involving more than one restriction.

We created rvm2 = rvm*rvm to show that an F test for one linear restriction is equivalent to a t test.
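The equivalence F = t² for a single restriction is easy to verify numerically. A small Python sketch (simulated data, not the class's rvm regression) testing H0: slope = 0 in a simple regression:

```python
import random

random.seed(2)
n = 30
x = [random.gauss(0, 1) for _ in range(n)]
y = [1.0 + 0.5 * xi + random.gauss(0, 1) for xi in x]

mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
b = sxy / sxx                      # OLS slope
a = my - b * mx                    # OLS intercept

# Unrestricted RSS; the restricted model (slope = 0) is y on a constant
rss_u = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))
rss_r = sum((yi - my) ** 2 for yi in y)

s2 = rss_u / (n - 2)               # error-variance estimate
t = b / (s2 / sxx) ** 0.5          # t statistic for H0: slope = 0
F = (rss_r - rss_u) / 1 / s2       # F statistic, one restriction

print(abs(F - t * t) < 1e-9)       # True: F equals t squared
```

Algebraically, rss_r − rss_u = b²·sxx for one restriction, so F = b²·sxx/s² = t² holds exactly, not just approximately.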

Posted by bparke at 10:52 PM | Comments (0)

## September 19, 2006

### Multiple Regression

Posted by bparke at 10:29 PM | Comments (0)

## September 14, 2006

### Ordinary Least Squares

What is better, large n or small n? What is better, big V(x) or small V(x)?
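The answer follows from Var(b̂) = σ²/Σ(x − x̄)²: larger n and larger V(x) both shrink the sampling variance of the slope. A Python sketch of the comparison (the sample sizes and spreads are illustrative, not class values):

```python
import random
import statistics

def slope(n, x_sd, seed):
    """One OLS slope estimate from a simulated sample."""
    rng = random.Random(seed)
    x = [rng.gauss(0, x_sd) for _ in range(n)]
    y = [2.0 * xi + rng.gauss(0, 1) for xi in x]
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx

def sampling_sd(n, x_sd, reps=400):
    """Standard deviation of the slope across repeated samples."""
    return statistics.stdev(slope(n, x_sd, s) for s in range(reps))

base   = sampling_sd(50, 1.0)    # small n, small V(x)
big_n  = sampling_sd(400, 1.0)   # larger n, same V(x)
big_vx = sampling_sd(50, 3.0)    # same n, larger V(x)

# Both changes make the slope estimate less variable
print(big_n < base, big_vx < base)
```

Large n and big V(x) are both better: each one increases Σ(x − x̄)², the denominator of the slope's variance.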

Supplement -- a way to think about the model:

Posted by bparke at 10:32 PM | Comments (0)

## September 12, 2006

### Random Numbers

Stata generates random numbers uniformly distributed between 0 and 1 if you use "gen x = uniform()". To generate standard normal random variables, use "gen e = invnorm(uniform())".
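Stata's invnorm(uniform()) is inverse-transform sampling: a Uniform(0, 1) draw pushed through the inverse normal CDF comes out standard normal. The same idea in a Python sketch (Python stands in for Stata here):

```python
import random
import statistics

random.seed(3)

# Python analogue of Stata's invnorm(uniform()): inverse-transform sampling
norm = statistics.NormalDist()                 # standard normal, mean 0, sd 1
u = [random.random() for _ in range(20000)]    # uniform draws on (0, 1)
e = [norm.inv_cdf(ui) for ui in u]             # standard normal draws

# Sample mean and sd should be close to 0 and 1
print(round(statistics.mean(e), 2), round(statistics.stdev(e), 2))
```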

Posted by bparke at 10:23 PM | Comments (0)

### Ordinary Least Squares

Posted by bparke at 09:52 PM | Comments (0)

### Monte Carlo "do" File

The following "do" file will probably be used in today's lecture.

mc1.do
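The contents of mc1.do are not reproduced here, but a typical Monte Carlo "do" file repeats one experiment many times: simulate data from a known model, run OLS, and store the estimates. A hypothetical Python sketch of that loop (the true coefficients are assumptions for illustration):

```python
import random
import statistics

random.seed(4)
n, reps = 100, 500
slopes = []
for _ in range(reps):
    # Simulate from a known model: intercept 1, slope 2 (illustrative)
    x = [random.gauss(0, 1) for _ in range(n)]
    y = [1.0 + 2.0 * xi + random.gauss(0, 1) for xi in x]
    # Run OLS and store the slope estimate
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slopes.append(sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx)

# The sampling distribution of the OLS slope centers on the true value
print(round(statistics.mean(slopes), 1))   # prints 2.0
```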

Posted by bparke at 12:13 AM | Comments (0)

## September 07, 2006

### Hypothesis Testing

The basic strategy

One-tail test

Two-tail test
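The one-tail versus two-tail distinction shows up in the critical values: a one-tail test puts all of alpha in one tail, a two-tail test splits it. A small Python sketch with a hypothetical test statistic z = 1.80 at the 5% level (the numbers are illustrative, not from class):

```python
from statistics import NormalDist

norm = NormalDist()          # standard normal reference distribution
z = 1.80                     # hypothetical test statistic
alpha = 0.05

# One-tail test: all of alpha goes in the upper tail
one_tail_crit = norm.inv_cdf(1 - alpha)          # about 1.645
# Two-tail test: alpha is split across both tails
two_tail_crit = norm.inv_cdf(1 - alpha / 2)      # about 1.960

print(z > one_tail_crit)         # True: rejects H0 one-tailed
print(abs(z) > two_tail_crit)    # False: fails to reject two-tailed
```

The same statistic can reject under a one-tail test and fail to reject under a two-tail test, which is why the alternative hypothesis must be chosen before looking at the data.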

Posted by bparke at 05:49 PM | Comments (0)

## September 05, 2006

### Bayesian vs. Classical Statistics, including Confidence Intervals

We reviewed the confidence intervals of classical statistics via a fairly innovative Bayesian approach.

If you were in class, you know that the last diagram illustrates a point of profound importance.
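One point worth restating: the classical 95% confidence interval is a claim about the procedure across repeated samples, not a probability statement about one realized interval containing a fixed parameter. A Python sketch checking coverage for a normal mean with known sd (all numbers are illustrative):

```python
import random
from statistics import NormalDist

random.seed(5)
mu, sigma, n, reps = 10.0, 2.0, 25, 2000
z = NormalDist().inv_cdf(0.975)          # about 1.96
half = z * sigma / n ** 0.5              # half-width with known sigma

covered = 0
for _ in range(reps):
    xbar = sum(random.gauss(mu, sigma) for _ in range(n)) / n
    if xbar - half <= mu <= xbar + half:
        covered += 1

# Across repeated samples the interval captures mu about 95% of the time
print(covered / reps)
```

Each individual interval either contains mu or it does not; the 95% describes how often the recipe succeeds over many samples.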

Posted by bparke at 09:27 PM | Comments (0)