gretl commands

add

Argument: varlist
Options: --vcv (print covariance matrix)
 --quiet (don't print estimates for augmented model)
Examples: add 5 7 9
 add xx yy zz

Must be invoked after an estimation command. The variables in varlist are added to the previous model and the new model is estimated. If more than one variable is added, the F statistic for the added variables will be printed (for the OLS procedure only) along with its p-value. A p-value below 0.05 means that the coefficients are jointly significant at the 5 percent level.

If the --quiet option is given the printed results are confined to the test for the joint significance of the added variables, otherwise the estimates for the augmented model are also printed. In the latter case, the --vcv flag causes the covariance matrix for the coefficients to be printed also.
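
The joint test has the standard F form for q zero restrictions. As a rough illustration (hypothetical numbers, not gretl's internal code), it can be computed from the restricted and unrestricted error sums of squares:

```python
# Sketch of the F test for q added variables (illustrative only, not
# gretl's internal code).  ess_r comes from the original (restricted)
# model, ess_u from the augmented model; T is the number of observations
# and k the number of parameters in the augmented model.
def f_added_vars(ess_r, ess_u, q, T, k):
    return ((ess_r - ess_u) / q) / (ess_u / (T - k))

# Hypothetical values: adding 2 regressors lowers ESS from 120 to 100
# in a sample of 50, with 5 parameters in the augmented model.
F = f_added_vars(120.0, 100.0, 2, 50, 5)
print(round(F, 3))  # 4.5
```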

Menu path: Model window, /Tests/add variables

addto

Arguments: modelID varlist
Option: --quiet (don't print estimates for augmented model)
Example: addto 2 5 7 9

Works like the add command, except that you specify a previous model (using its ID number, which is printed at the start of the model output) to take as the base for adding variables. The example above adds variables number 5, 7 and 9 to Model 2.

Menu path: Model window, /Tests/add variables

adf

Arguments: order varname
Example: adf 2 x1

Computes statistics for two Dickey–Fuller tests. In each case the null hypothesis is that the selected variable exhibits a unit root. The first is a t-test based on the model

	(1 - L)x(t) = m + g*x(t-1) + e(t)

The null hypothesis is that g = 0. The second (augmented) test proceeds by estimating an unrestricted regression (with regressors a constant, a time trend, the first lag of the variable, and order lags of the first difference) and a restricted version (dropping the time trend and the first lag). The test statistic is

	F = [(ESS_r - ESS_u)/2] / [ESS_u/(T - k)]

where T is the sample size, k the number of parameters in the unrestricted model, ESS the error sum of squares, and the subscripts u and r denote the unrestricted and restricted models respectively. Note that the critical values for these statistics are not the usual ones; a p-value range is printed, when it can be determined.

Menu path: /Variable/Augmented Dickey-Fuller test

ar

Arguments: lags ; depvar indepvars
Option: --vcv (print covariance matrix)
Example: ar 1 3 4 ; y 0 x1 x2 x3

Computes parameter estimates using the generalized Cochrane–Orcutt iterative procedure (see Section 9.5 of Ramanathan). Iteration is terminated when successive error sums of squares do not differ by more than 0.005 percent or after 20 iterations.

lags is a list of lags in the residuals, terminated by a semicolon. In the above example, the error term is specified as

	u(t) = rho(1)*u(t-1) + rho(3)*u(t-3) + rho(4)*u(t-4) + e(t)

Menu path: /Model/Time series/Autoregressive estimation

arch

Arguments: order depvar indepvars
Example: arch 4 y 0 x1 x2 x3

Tests the model for ARCH (Autoregressive Conditional Heteroskedasticity) of the specified lag order. If the LM test statistic has a p-value below 0.10, then ARCH estimation is also carried out. If the predicted variance of any observation in the auxiliary regression is not positive, then the corresponding squared residual is used instead. Weighted least squares estimation is then performed on the original model.

See also garch.

Menu path: Model window, /Tests/ARCH

arma

Arguments: p q ; depvar [ indepvars ]
Options: --native (Use native plugin (default))
 --x-12-arima (use X-12-ARIMA for estimation)
 --verbose (print details of iterations)
 --vcv (print covariance matrix)
Examples: arma 1 2 ; y
 arma 2 2 ; y 0 x1 x2 --verbose

If no indepvars list is given, estimates a univariate ARMA (Autoregressive, Moving Average) model. The integer values p and q represent the AR and MA orders respectively. If indepvars are added, the model becomes ARMAX.

The default is to use the "native" gretl ARMA function; in the case of a univariate ARMA model X-12-ARIMA may be used instead (if the X-12-ARIMA package for gretl is installed).

The options given above may be combined, except that the covariance matrix is not available when estimation is by X-12-ARIMA.

The native gretl ARMA algorithm is largely due to Riccardo "Jack" Lucchetti. It uses a conditional maximum likelihood procedure, implemented via iterated least squares estimation of the outer product of the gradient (OPG) regression. See Example 9-3 for the logic of the procedure. The AR coefficients (and those for any additional regressors) are initialized using an OLS auto-regression, and the MA coefficients are initialized at zero.

The AIC value given in connection with ARMA models is calculated according to the definition used in X-12-ARIMA, namely

	AIC = -2L + 2k

where L is the log-likelihood and k is the total number of parameters estimated. The "frequency" figure printed in connection with AR and MA roots is the λ value that solves

	z = r[cos(2πλ) + i sin(2πλ)]

where z is the root in question and r is its modulus.
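
In other words, the frequency is just the root's argument divided by 2π. A minimal numerical sketch (not gretl's code):

```python
import cmath

# lambda solves z = r*(cos(2*pi*lam) + i*sin(2*pi*lam)), i.e.
# lam = arg(z)/(2*pi).  Illustrative helper, not gretl's code.
def root_frequency(z):
    return cmath.phase(z) / (2 * cmath.pi)

# A root on the positive imaginary axis has frequency 0.25.
print(root_frequency(0 + 2j))  # 0.25
```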

Menu path: /Variable/ARMA model, /Model/Time series/ARMAX

Other access: Main window pop-up menu (single selection)

boxplot

Argument: varlist
Option: --notches (show 90 percent interval for median)

These plots (after Tukey and Chambers) display the distribution of a variable. The central box encloses the middle 50 percent of the data, i.e. it is bounded by the first and third quartiles. The "whiskers" extend to the minimum and maximum values. A line is drawn across the box at the median.

In the case of notched boxes, the notch shows the limits of an approximate 90 percent confidence interval for the median. This is obtained by the bootstrap method.
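
A percentile-style resampling scheme is one simple way to obtain such an interval; the sketch below is illustrative and not gretl's actual notch algorithm.

```python
import random
import statistics

# Percentile bootstrap interval for the median (sketch only; gretl's
# notch computation may differ in detail).
def bootstrap_median_ci(data, level=0.90, reps=999, seed=1):
    rng = random.Random(seed)
    meds = sorted(statistics.median(rng.choices(data, k=len(data)))
                  for _ in range(reps))
    lo = meds[int((1 - level) / 2 * reps)]
    hi = meds[int((1 + level) / 2 * reps)]
    return lo, hi

data = [3.1, 4.7, 5.0, 5.2, 6.8, 7.4, 8.0, 9.3, 10.1, 12.5]
lo, hi = bootstrap_median_ci(data)
print(lo <= statistics.median(data) <= hi)
```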

After each variable specified in the boxplot command, a parenthesized boolean expression may be added, to limit the sample for the variable in question. A space must be inserted between the variable name or number and the expression. Suppose you have salary figures for men and women, and you have a dummy variable GENDER with value 1 for men and 0 for women. In that case you could draw comparative boxplots with the following varlist:


	salary (GENDER=1) salary (GENDER=0)
      

Some details of gretl's boxplots can be controlled via a (plain text) file named .boxplotrc. For details on this see the Section called Boxplots in Chapter 7.

Menu path: /Data/Graph specified vars/Boxplots

chow

Argument: obs
Examples: chow 25
 chow 1988:1

Must follow an OLS regression. Creates a dummy variable which equals 1 from the split point specified by obs to the end of the sample, 0 otherwise, and also creates interaction terms between this dummy and the original independent variables. An augmented regression is run including these terms and an F statistic is calculated, taking the augmented regression as the unrestricted and the original as restricted. This statistic is appropriate for testing the null hypothesis of no structural break at the given split point.
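
The construction of the extra regressors can be sketched as follows (0-based observation index here; illustrative, not gretl's code):

```python
# Build the Chow-test dummy (1 from the split point onward) and its
# interactions with each regressor.  Illustrative sketch.
def chow_terms(xs, split):
    dummy = [1 if t >= split else 0 for t in range(len(xs))]
    interact = [d * x for d, x in zip(dummy, xs)]
    return dummy, interact

x = [2.0, 3.0, 5.0, 7.0]
d, dx = chow_terms(x, split=2)
print(d)   # [0, 0, 1, 1]
print(dx)  # [0.0, 0.0, 5.0, 7.0]
```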

Menu path: Model window, /Tests/Chow test

coeffsum

Argument: varlist
Example: coeffsum xt xt_1 xr_2

Must follow a regression. Calculates the sum of the coefficients on the variables in varlist. Prints this sum along with its standard error and the p-value for the null hypothesis that the sum is zero.

Note the difference between this and omit, which tests the null hypothesis that the coefficients on a specified subset of independent variables are all equal to zero.
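
The standard error of the sum follows from the coefficient covariance matrix: Var(b1 + ... + bm) is the sum of all entries of the corresponding covariance sub-matrix. A sketch with hypothetical numbers:

```python
import math

# Standard error of a sum of coefficients from their covariance
# sub-matrix (hypothetical values, not output from a real model).
def se_of_sum(vcv):
    return math.sqrt(sum(sum(row) for row in vcv))

vcv = [[0.04, -0.01],
       [-0.01, 0.09]]
print(se_of_sum(vcv))  # sqrt(0.11), about 0.332
```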

Menu path: Model window, /Tests/sum of coefficients

coint

Arguments: order depvar indepvars
Example: coint 4 y x1 x2

The Engle–Granger cointegration test. Carries out Augmented Dickey–Fuller tests on the null hypothesis that each of the variables listed has a unit root, using the given lag order. The cointegrating regression is estimated, and an ADF test is run on the residuals from this regression. The Durbin–Watson statistic for the cointegrating regression is also given. Note that none of these test statistics can be referred to the usual statistical tables.

Menu path: /Model/Time series/Cointegration test/Engle-Granger

coint2

Arguments: order depvar indepvars
Option: --verbose (print details of auxiliary regressions)
Examples: coint2 2 y x
 coint2 4 y x1 x2 --verbose

Carries out the Johansen trace test for cointegration among the listed variables for the given order. Critical values are computed via J. Doornik's gamma approximation (Doornik, 1998). For details of this test see Hamilton, Time Series Analysis (1994), Chapter 20.

The following table is offered as a guide to the interpretation of the results shown for the test, for the 3-variable case. H0 denotes the null hypothesis, H1 the alternative hypothesis, and c the number of cointegrating relations.


                 Rank     Trace test         Lmax test
                          H0     H1          H0     H1
                 ---------------------------------------
                  0      c = 0  c = 3       c = 0  c = 1
                  1      c = 1  c = 3       c = 1  c = 2
                  2      c = 2  c = 3       c = 2  c = 3
                 ---------------------------------------
      

Menu path: /Model/Time series/Cointegration test/Johansen

corc

Arguments: depvar indepvars
Option: --vcv (print covariance matrix)
Example: corc 1 0 2 4 6 7

Computes parameter estimates using the Cochrane–Orcutt iterative procedure (see Section 9.4 of Ramanathan). Iteration is terminated when successive estimates of the autocorrelation coefficient do not differ by more than 0.001 or after 20 iterations.
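
The logic of the iteration can be sketched for a single regressor. This is a toy implementation under simplifying assumptions, not gretl's code:

```python
# Toy Cochrane-Orcutt loop for y = a + b*x with AR(1) errors.
# Gretl's implementation differs in detail; this is only a sketch.
def ols(y, x):
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def corc(y, x, tol=0.001, maxiter=20):
    rho = 0.0
    for _ in range(maxiter):
        # quasi-difference the data with the current rho estimate
        ys = [y[t] - rho * y[t - 1] for t in range(1, len(y))]
        xs = [x[t] - rho * x[t - 1] for t in range(1, len(x))]
        a, b = ols(ys, xs)
        a /= 1 - rho  # undo the transformation of the intercept
        # re-estimate rho from the residuals of the original model
        u = [yi - a - b * xi for yi, xi in zip(y, x)]
        new_rho = (sum(u[t] * u[t - 1] for t in range(1, len(u)))
                   / sum(ui ** 2 for ui in u))
        if abs(new_rho - rho) <= tol:  # stop rule, as described above
            return a, b, new_rho
        rho = new_rho
    return a, b, rho

x = [float(t) for t in range(12)]
e = [0.1, -0.1] * 6                      # strongly alternating errors
y = [1.0 + 2.0 * xi + ei for xi, ei in zip(x, e)]
a, b, rho = corc(y, x)
print(1.9 < b < 2.1)  # True: slope close to 2
```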

Menu path: /Model/Time series/Cochrane-Orcutt

corr

Argument: [ varlist ]
Example: corr y x1 x2 x3

Prints the pairwise correlation coefficients for the variables in varlist, or for all variables in the data set if varlist is not given.

Menu path: /Data/Correlation matrix

Other access: Main window pop-up menu (multiple selection)

corrgm

Arguments: variable [ maxlag ]
Example: corrgm x 12

Prints the values of the autocorrelation function for the variable specified (either by name or number). See Ramanathan, Section 11.7. The autocorrelation at lag s is

	r(s) = Corr(u(t), u(t-s))

where u(t) is the tth observation of the variable u and s is the number of lags.

The partial autocorrelations are also shown: these are net of the effects of intervening lags. The command also graphs the correlogram and prints the Box–Pierce Q statistic for testing the null hypothesis that the series is "white noise". This is asymptotically distributed as chi-square with degrees of freedom equal to the number of lags used.

If a maxlag value is specified the length of the correlogram is limited to at most that number of lags, otherwise the length is determined automatically.

Menu path: /Variable/Correlogram

Other access: Main window pop-up menu (single selection)

criteria

Arguments: ess T k
Example: criteria 23.45 45 8

Computes the model selection statistics (see Ramanathan, Section 4.3), given ess (error sum of squares), the number of observations (T), and the number of coefficients (k). T, k, and ess may be numerical values or names of previously defined variables.
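
As an illustration, two common log-form criteria can be computed directly from these inputs. These are generic textbook versions, shown for orientation only; gretl's criteria command prints Ramanathan's variants, which differ in form.

```python
import math

# Generic log-form model selection criteria from ess, T and k
# (illustrative; not necessarily the exact quantities gretl prints).
def aic(ess, T, k):
    return T * math.log(ess / T) + 2 * k

def bic(ess, T, k):
    return T * math.log(ess / T) + k * math.log(T)

print(round(aic(23.45, 45, 8), 3))
print(round(bic(23.45, 45, 8), 3))
```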

critical

Arguments: dist param1 [ param2 ]
Examples: critical t 20
 critical X 5
 critical F 3 37

If dist is t, X or F, prints out the critical values for the Student's t, chi-square or F distribution respectively, for the common significance levels and using the specified degrees of freedom, given as param1 for t and chi-square, or param1 and param2 for F. If dist is d, prints the upper and lower values of the Durbin–Watson statistic at 5 percent significance, for the given number of observations, param1, and for the range of 1 to 5 explanatory variables.

Menu path: /Utilities/Statistical tables

cusum

Must follow the estimation of a model via OLS. Performs the CUSUM test for parameter stability. A series of (scaled) one-step ahead forecast errors is obtained by running a series of regressions: the first regression uses the first k observations and is used to generate a prediction of the dependent variable at observation k + 1; the second uses the first k + 1 observations and generates a prediction for observation k + 2, and so on (where k is the number of parameters in the original model). The cumulated sum of the scaled forecast errors is printed and graphed. The null hypothesis of parameter stability is rejected at the 5 percent significance level if the cumulated sum strays outside of the 95 percent confidence band.

The Harvey–Collier t-statistic for testing the null hypothesis of parameter stability is also printed. See Chapter 7 of Greene's Econometric Analysis for details.

Menu path: Model window, /Tests/CUSUM

data

Argument: varlist

Reads the variables in varlist from a database (gretl or RATS 4.0), which must have been opened previously using the open command. In addition, a data frequency and sample range must be established using the setobs and smpl commands prior to using this command. Here is a full example:


	open macrodat.rat
	setobs 4 1959:1
	smpl ; 1999:4
	data GDP_JP GDP_UK

These commands open a database named macrodat.rat, establish a quarterly data set starting in the first quarter of 1959 and ending in the fourth quarter of 1999, and then import the series named GDP_JP and GDP_UK.

If the series to be read are of higher frequency than the working data set, you must specify a compaction method as below:


	data (compact=average) LHUR PUNEW

The four available compaction methods are "average" (takes the mean of the high frequency observations), "last" (uses the last observation), "first" and "sum".
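
The four methods amount to reducing each group of high-frequency observations to a single value, where the group size is the ratio of the two frequencies (e.g. 3 for monthly-to-quarterly). A sketch, not gretl's code:

```python
# Sketch of the four compaction methods: each group of `ratio`
# high-frequency observations becomes one low-frequency observation.
def compact(series, ratio, method="average"):
    groups = [series[i:i + ratio] for i in range(0, len(series), ratio)]
    ops = {
        "average": lambda g: sum(g) / len(g),
        "last": lambda g: g[-1],
        "first": lambda g: g[0],
        "sum": sum,
    }
    return [ops[method](g) for g in groups]

monthly = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
print(compact(monthly, 3, "average"))  # [2.0, 5.0]
print(compact(monthly, 3, "sum"))      # [6.0, 15.0]
```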

Menu path: /File/Browse databases

delete

Argument: [ varlist ]

Removes the listed variables (given by name or number) from the dataset. Use with caution: no confirmation is asked, and any variables with higher ID numbers will be re-numbered.

If no varlist is given with this command, it deletes the last (highest numbered) variable from the dataset.

Menu path: Main window pop-up (single selection)

diff

Argument: varlist

The first difference of each variable in varlist is obtained and the result stored in a new variable with the prefix d_. Thus diff x y creates the new variables d_x = x(t) - x(t-1) and d_y = y(t) - y(t-1).

Menu path: /Data/Add variables/first differences

else

See if.

end

Ends a block of commands of some sort. For example, end system terminates an equation system.

endif

See if.

endloop

Marks the end of a command loop. See loop.

eqnprint

Argument: [ -f filename ]
Option: --complete (Create a complete document)

Must follow the estimation of a model. Prints the estimated model in the form of a LaTeX equation. If a filename is specified using the -f flag output goes to that file, otherwise it goes to a file with a name of the form equation_N.tex, where N is the number of models estimated to date in the current session. See also tabprint.

If the --complete flag is given, the LaTeX file is a complete document, ready for processing; otherwise it must be included in a document.

Menu path: Model window, /LaTeX

equation

Arguments: depvar indepvars
Example: equation y x1 x2 x3 const

Specifies an equation within a system of equations (see system). The syntax for specifying an equation within an SUR system is the same as that for, e.g., ols. For an equation within a Three-Stage Least Squares system you may either (a) give an OLS-type equation specification and provide a common list of instruments using the instr keyword (again, see system), or (b) use the same equation syntax as for tsls.

fcast

Arguments: [ startobs endobs ] fitvar
Examples: fcast 1997:1 2001:4 f1
 fcast fit2

Must follow an estimation command. Forecasts are generated for the specified range (or the largest possible range if no startobs and endobs are given) and the values saved as fitvar, which can be printed, graphed, or plotted. The right-hand side variables are those in the original model. There is no provision to substitute other variables. If an autoregressive error process is specified (for hilu, corc, and ar) the forecast is conditional one step ahead and incorporates the error process.

Menu path: Model window, /Model data/Forecasts

fcasterr

Arguments: startobs endobs
Option: --plot (display graph)

After estimating a model via OLS you can use this command to print out fitted values over the specified observation range, along with the estimated standard errors of those predictions and 95 percent confidence intervals.

The standard errors are calculated in the manner described by Wooldridge in chapter 6 of his Introductory Econometrics. They incorporate two sources of variation: the variance associated with the expected value of the dependent variable, conditional on the given values of the independent variables, and the variance of the regression residuals.

Menu path: Model window, /Model data/Forecasts

fit

A shortcut to fcast. Must follow an estimation command. Generates fitted values, in a series called autofit, for the current sample, based on the last regression. In the case of time-series models, also pops up a graph of fitted and actual values of the dependent variable against time.

freq

Argument: var

Prints the frequency distribution for var (given by name or number); also shows the results of the Doornik–Hansen chi-square test for normality. In interactive mode a graph of the distribution is displayed.

Menu path: /Variable/Frequency distribution

garch

Arguments: p q ; depvar [ indepvars ]
Options: --robust (robust standard errors)
 --verbose (print details of iterations)
 --vcv (print covariance matrix)
Examples: garch 1 1 ; y
 garch 1 1 ; y 0 x1 x2 --robust

Estimates a GARCH model (GARCH = Generalized Autoregressive Conditional Heteroskedasticity), either a univariate model or, if indepvars are specified, including the given exogenous variables. The integer values p and q represent the lag orders in the conditional variance equation.

The gretl GARCH algorithm is basically that of Fiorentini, Calzolari and Panattoni (1996), used by kind permission of Professor Fiorentini.

Several variant estimates of the coefficient covariance matrix are available with this command. By default, the Hessian is used unless the --robust option is given, in which case the QML (White) covariance matrix is used. Other possibilities (e.g. the information matrix, or the Bollerslev–Wooldridge estimator) can be specified using the set command.

Menu path: /Model/Time series/GARCH model

genr

Arguments: newvar = formula

Creates new variables, usually through transformations of existing variables. See also diff, logs, lags, ldiff, multiply and square for shortcuts.

Supported arithmetical operators are, in order of precedence: ^ (exponentiation); *, / and % (modulus or remainder); + and -.

The available Boolean operators are (again, in order of precedence): ! (negation), & (logical AND), | (logical OR), >, <, =, >= (greater than or equal), <= (less than or equal) and != (not equal). The Boolean operators can be used in constructing dummy variables: for instance (x > 10) returns 1 if x > 10, 0 otherwise.

Supported functions fall into these groups:

All of the above functions with the exception of cov, corr, pvalue, uniform and normal take as their single argument either the name of a variable (note that you can't refer to variables by their ID numbers in a genr formula) or an expression that evaluates to a variable (e.g. ln((x1+x2)/2)). cov and corr both require two arguments, and return respectively the covariance and the correlation coefficient between the arguments. The pvalue function takes the same arguments as the pvalue command, but in this context commas should be placed between the arguments. uniform() and normal(), which do not take arguments, return pseudo-random series drawn from the uniform (0–1) and standard normal distributions respectively (see also the set command, seed option). Uniform series are generated using the Mersenne Twister;[1] for normal series the method of Box and Muller (1958) is used, taking input from the Mersenne Twister.

Besides the operators and functions just noted there are some special uses of genr:

Note: In the command-line program, genr commands that retrieve model-related data always reference the model that was estimated most recently. This is also true in the GUI program, if one uses genr in the "gretl console" or enters a formula using the "Define new variable" option under the Variable menu in the main window. With the GUI, however, you have the option of retrieving data from any model currently displayed in a window (whether or not it's the most recent model). You do this under the "Model data" menu in the model's window.

The internal series uhat and yhat hold, respectively, the residuals and fitted values from the last regression.

Two "internal" dataset variables are available: $nobs holds the number of observations in the current sample range (which may or may not equal the value of $T, the number of observations used in estimating the last model), and $pd holds the frequency or periodicity of the data (e.g. 4 for quarterly data).

The variable t serves as an index of the observations. For instance genr dum = (t=15) will generate a dummy variable that has value 1 for observation 15, 0 otherwise. The variable obs is similar but more flexible: you can use this to pick out particular observations by date or name. For example, genr d = (obs>1986:4) or genr d = (obs="CA"). The last form presumes that the observations are labeled; the label must be put in double quotes.
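
The 0/1 logic of these comparisons can be mimicked outside gretl; for instance (using a 1-based observation index, as gretl does):

```python
# Equivalent of genr dum = (t=15): a dummy that is 1 only at observation 15.
T = 20
dum = [1 if t == 15 else 0 for t in range(1, T + 1)]
print(sum(dum), dum[14])  # 1 1  (dum[14] is observation 15, 1-based)
```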

Scalar values can be pulled from a series in the context of a genr formula, using the syntax varname[obs]. The obs value can be given by number or date. Examples: x[5], CPI[1996:01]. For daily data, the form YYYY:MM:DD must be used, e.g. ibm[1970:01:23].

Table 11-1 gives several examples of uses of genr with explanatory notes; here are a couple of tips on dummy variables:

Table 11-1. Examples of use of genr command

Formula                   Comment
y = x1^3                  x1 cubed
y = ln((x1+x2)/x3)
z = x>y                   z(t) = 1 if x(t) > y(t), otherwise 0
y = x(-2)                 x lagged 2 periods
y = x(2)                  x led 2 periods
y = diff(x)               y(t) = x(t) - x(t-1)
y = ldiff(x)              y(t) = log x(t) - log x(t-1), the instantaneous rate of growth of x
y = sort(x)               sorts x in increasing order and stores in y
y = -sort(-x)             sorts x in decreasing order
y = int(x)                truncate x and store its integer value as y
y = abs(x)                store the absolute values of x
y = sum(x)                sum x values excluding missing –999 entries
y = cum(x)                cumulation: y(t) = x(1) + x(2) + ... + x(t)
aa = $ess                 set aa equal to the Error Sum of Squares from last regression
x = coeff(sqft)           grab the estimated coefficient on the variable sqft from the last regression
rho4 = rho(4)             grab the 4th-order autoregressive coefficient from the last model (presumes an ar model)
cvx1x2 = vcv(x1, x2)      grab the estimated coefficient covariance of vars x1 and x2 from the last model
foo = uniform()           uniform pseudo-random variable in range 0–1
bar = 3 * normal()        normal pseudo-random variable, μ = 0, σ = 3
samp = !missing(x)        = 1 for observations where x is not missing

Menu path: /Variable/Define new variable

Other access: Main window pop-up menu

gnuplot

Arguments: yvars xvar [ dumvar ]
Options: --with-lines (use lines, not points)
 --with-impulses (use vertical lines)
 --suppress-fitted (don't show least squares fit)
 --dummy (see below)
Examples: gnuplot y1 y2 x
 gnuplot x time --with-lines
 gnuplot wages educ gender --dummy

Without the --dummy option, the yvars are graphed against xvar. With --dummy, yvar is graphed against xvar with the points shown in different colors depending on whether the value of dumvar is 1 or 0.

The time variable behaves specially: if it does not already exist then it will be generated automatically.

In interactive mode the result is displayed immediately. In batch mode a gnuplot command file is written, with a name on the pattern gpttmpN.plt, starting with N = 01. The actual plots may be generated later using gnuplot (under MS Windows, wgnuplot).

A further option to this command is available: following the specification of the variables to be plotted and the option flag (if any), you may add literal gnuplot commands to control the appearance of the plot (for example, setting the plot title and/or the axis ranges). These commands should be enclosed in braces, and each gnuplot command must be terminated with a semi-colon. A backslash may be used to continue a set of gnuplot commands over more than one line. Here is an example of the syntax:

{ set title 'My Title'; set yrange [0:1000]; }

Menu path: /Data/Graph specified vars

Other access: Main window pop-up menu, graph button on toolbar

graph

Arguments: yvars xvar
Option: --tall (use 40 rows)

ASCII graphics. The yvars (which may be given by name or number) are graphed against xvar using ASCII symbols. The --tall flag will produce a graph with 40 rows and 60 columns. Without it, the graph will be 20 by 60 (for screen output). See also gnuplot.

hausman

This test is available only after estimating a model using the pooled command (see also panel and setobs). It tests the simple pooled model against the principal alternatives, the fixed effects and random effects models.

The fixed effects model adds a dummy variable for all but one of the cross-sectional units, allowing the intercept of the regression to vary across the units. An F-test for the joint significance of these dummies is presented. The random effects model decomposes the residual variance into two parts, one part specific to the cross-sectional unit and the other specific to the particular observation. (This estimator can be computed only if the number of cross-sectional units in the data set exceeds the number of parameters to be estimated.) The Breusch–Pagan LM statistic tests the null hypothesis (that the pooled OLS estimator is adequate) against the random effects alternative.

The pooled OLS model may be rejected against both of the alternatives, fixed effects and random effects. Provided the unit- or group-specific error is uncorrelated with the independent variables, the random effects estimator is more efficient than the fixed effects estimator; otherwise the random effects estimator is inconsistent and the fixed effects estimator is to be preferred. The null hypothesis for the Hausman test is that the group-specific error is not so correlated (and therefore the random effects model is preferable). A low p-value for this test counts against the random effects model and in favor of fixed effects.

Menu path: Model window, /Tests/panel diagnostics

hccm

Arguments: depvar indepvars
Option: --vcv (print covariance matrix)

Heteroskedasticity-Consistent Covariance Matrix: this command runs a regression where the coefficients are estimated via the standard OLS procedure, but the standard errors of the coefficient estimates are computed in a manner that is robust in the face of heteroskedasticity, namely using the MacKinnon–White "jackknife" procedure.

Menu path: /Model/HCCM

help

Gives a list of available commands. help command describes command (e.g. help smpl). You can type man instead of help if you like.

Menu path: /Help

hilu

Arguments: depvar indepvars
Option: --vcv (print covariance matrix)

Computes parameter estimates for the specified model using the Hildreth–Lu search procedure (fine-tuned by the CORC procedure). This procedure is designed to correct for serial correlation of the error term. The error sum of squares of the transformed model is graphed against the value of rho from –0.99 to 0.99.

Menu path: /Model/Time Series/Hildreth-Lu

hsk

Arguments: depvar indepvars
Option: --vcv (print covariance matrix)

An OLS regression is run and the residuals are saved. The logs of the squares of these residuals then become the dependent variable in an auxiliary regression, on the right-hand side of which are the original independent variables plus their squares. The fitted values from the auxiliary regression are then used to construct a weight series, and the original model is re-estimated using weighted least squares. This final result is reported.

The weight series is formed as 1/sqrt(exp(y*)), where y* denotes the fitted values from the auxiliary regression.
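
Since the auxiliary regression has the log of the squared residuals as its dependent variable, exp(y*) estimates the error variance and the weights follow directly. A sketch of this step:

```python
import math

# WLS weights from the fitted values of the auxiliary regression:
# exp(yhat) estimates the error variance, so the weight is
# 1/sqrt(exp(yhat)).  Sketch of the step described above.
def hsk_weights(fitted_log_var):
    return [1.0 / math.sqrt(math.exp(v)) for v in fitted_log_var]

w = hsk_weights([0.0, math.log(4.0)])
print(w)  # close to [1.0, 0.5]
```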

Menu path: /Model/Heteroskedasticity corrected

if

Flow control for command execution. The syntax is:

	if condition
	    commands1
	else
	    commands2
	endif

condition must be a Boolean expression, for the syntax of which see genr. The else block is optional; if ... endif blocks may be nested.

import

Argument: filename
Option: --box1 (BOX1 data)

Brings in data from a comma-separated values (CSV) format file, such as can easily be written from a spreadsheet program. The file should have variable names on the first line and a rectangular data matrix on the remaining lines. Variables should be arranged "by observation" (one column per variable; each row represents an observation). See Chapter 4 for details.

With the --box1 flag, reads a data file in BOX1 format, as can be obtained using the Data Extraction Service of the US Bureau of the Census.

Menu path: /File/Open data/import

info

Prints out any supplementary information stored with the current datafile.

Menu path: /Data/Read info

Other access: Data browser windows

label

Arguments: varname -d description -n displayname
Example: label x1 -d "Description of x1" -n "Graph name"

Sets the descriptive label for the given variable (if the -d flag is given, followed by a string in double quotes) and/or the "display name" for the variable (if the -n flag is given, followed by a quoted string). If a variable has a display name, this is used when generating graphs.

Menu path: /Variable/Edit attributes

Other access: Main window pop-up menu

labels

Prints out the informative labels for any variables that have been generated using genr, and any labels added to the data set via the GUI.

lad

Arguments: depvar indepvars

Calculates a regression that minimizes the sum of the absolute deviations of the observed from the fitted values of the dependent variable. Coefficient estimates are derived using the Barrodale–Roberts simplex algorithm; a warning is printed if the solution is not unique. Standard errors are derived using the bootstrap procedure with 500 drawings.

Menu path: /Model/Least Absolute Deviation

lags

Argument: varlist

Creates new variables which are lagged values of each of the variables in varlist. The number of lagged variables equals the periodicity. For example, if the periodicity is 4 (quarterly), the command lags x y creates x_1 = x(t-1), x_2 = x(t-2), x_3 = x(t-3) and x_4 = x(t-4). Similarly for y. These variables must be referred to in this exact form, that is, with the underscore.

Menu path: /Data/Add variables/lags of selected variables

ldiff

Argument: varlist

The first difference of the natural log of each variable in varlist is obtained and the result stored in a new variable with the prefix ld_. Thus ldiff x y creates the new variables ld_x = log x(t) - log x(t-1) and ld_y = log y(t) - log y(t-1).

Menu path: /Data/Add variables/log differences

leverage

Option: --save (save variables)

Must immediately follow an ols command. Calculates the leverage (h, which must lie in the range 0 to 1) for each data point in the sample on which the previous model was estimated. Displays the residual (u) for each observation along with its leverage and a measure of its influence on the estimates, u*h/(1 - h). "Leverage points" for which the value of h exceeds 2k/n (where k is the number of parameters being estimated and n is the sample size) are flagged with an asterisk. For details on the concepts of leverage and influence see Davidson and MacKinnon (1993, Chapter 2).

DFFITS values are also shown: these are "studentized residuals" (predicted residuals divided by their standard errors) multiplied by sqrt(h/(1 - h)). For a discussion of studentized residuals and DFFITS see G. S. Maddala, Introduction to Econometrics, chapter 12; also Belsley, Kuh and Welsch (1980). Briefly, a "predicted residual" is the difference between the observed value of the dependent variable at observation t, and the fitted value for observation t obtained from a regression in which that observation is omitted (or a dummy variable with value 1 for observation t alone has been added); the studentized residual is obtained by dividing the predicted residual by its standard error.

If the --save flag is given with this command, then the leverage, influence and DFFITS values are added to the current data set.

Menu path: Model window, /Tests/influential observations

lmtest

Options: --logs (non-linearity, logs)
 --autocorr (serial correlation)
 --squares (non-linearity, squares)
 --white (heteroskedasticity (White's test))

Must immediately follow an ols command. Carries out some combination of the following: Lagrange Multiplier tests for nonlinearity (logs and squares), White's test for heteroskedasticity, and the LMF test for serial correlation up to the periodicity (see Kiviet, 1986). The corresponding auxiliary regression coefficients are also printed out. See Ramanathan, Chapters 7, 8, and 9 for details. In the case of White's test, only the squared independent variables are used and not their cross products. In the case of the autocorrelation test, if the p-value of the LMF statistic is less than 0.05 (and the model was not originally estimated with robust standard errors) then serial correlation-robust standard errors are calculated and displayed. For details on the calculation of these standard errors see Wooldridge (2002, Chapter 12).
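
For example, the following sketch (variable names hypothetical) runs White's heteroskedasticity test on a freshly estimated model:

	ols y 0 x1 x2
	lmtest --white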

Menu path: Model window, /Tests

logistic

Arguments: depvar indepvars [ ymax=value ]
Option: --vcv (print covariance matrix)
Examples: logistic y const x
 logistic y const x ymax=50

Logistic regression: carries out an OLS regression using the logistic transformation of the dependent variable,

	log(y/(y* - y))

The dependent variable must be strictly positive. If it is a decimal fraction, between 0 and 1, the default is to use a y* value (the asymptotic maximum of the dependent variable) of 1. If the dependent variable is a percentage, between 0 and 100, the default y* is 100. If you wish to set a different maximum, use the optional ymax=value syntax following the list of regressors. The supplied value must be greater than all of the observed values of the dependent variable.

The fitted values and residuals from the regression are automatically transformed using

	y = y* * exp(x) / (1 + exp(x))

where x represents either a fitted value or a residual from the OLS regression using the transformed dependent variable. The reported values are therefore comparable with the original dependent variable.

Note that if the dependent variable is binary, you should use the logit command instead.

Menu path: /Model/Logistic

logit

Arguments: depvar indepvars
Option: --vcv (print covariance matrix)

Binomial logit regression. The dependent variable should be a binary variable. Maximum likelihood estimates of the coefficients on indepvars are obtained via the EM or Expectation–Maximization method (see Ruud, 2000, Chapter 27). As the model is nonlinear the slopes depend on the values of the independent variables: the reported slopes are evaluated at the means of those variables. The chi-square statistic tests the null hypothesis that all coefficients are zero apart from the constant.

If you want to use logit for analysis of proportions (where the dependent variable is the proportion of cases having a certain characteristic, at each observation, rather than a 1 or 0 variable indicating whether the characteristic is present or not) you should not use the logit command, but rather construct the logit variable (e.g. genr lgt_p = log(p/(1 - p))) and use this as the dependent variable in an OLS regression. See Ramanathan, Chapter 12.
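
A minimal sketch of the proportions approach just described (p and the regressors are hypothetical names):

	genr lgt_p = log(p/(1 - p))
	ols lgt_p 0 x1 x2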

Menu path: /Model/Logit

logs

Argument: varlist

The natural log of each of the variables in varlist is obtained and the result stored in a new variable with the prefix l_ which is "el" underscore. logs x y creates the new variables l_x = ln(x) and l_y = ln(y).

Menu path: /Data/Add variables/logs of selected variables

loop

Argument: control
Examples: loop 1000
 loop while essdiff > .00001
 loop for i=1991..2000

The parameter control must take one of three forms, as shown in the examples: an integer number of times to repeat the commands within the loop; "while" plus a numerical condition; or "for" plus a range of values for the internal index variable i.

This command opens a special mode in which the program accepts commands to be executed repeatedly. Within a loop, only certain commands can be used: genr, ols, print, sim, smpl, store and summary (store can't be used in a "while" loop). You exit the mode of entering loop commands with endloop: at this point the stacked commands are executed. Loops cannot be nested.
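
As a sketch of the syntax, a simple Monte Carlo setup (assuming a series x already exists; all names hypothetical):

	loop 100
	genr u = normal()
	genr y = 10 + 5*x + u
	ols y 0 x
	endloop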

See Chapter 9 for further details and examples.

meantest

Arguments: var1 var2
Option: --unequal-vars (assume variances are unequal)

Calculates the t statistic for the null hypothesis that the population means are equal for the variables var1 and var2, and shows its p-value. By default the test statistic is calculated on the assumption that the variances are equal for the two variables; with the --unequal-vars option the variances are assumed to be different. This will make a difference to the test statistic only if there are different numbers of non-missing observations for the two variables.

Menu path: /Data/Difference of means

modeltab

Arguments: add or show or free

Manipulates the gretl "model table". See Chapter 3 for details. The sub-commands have the following effects: add adds the last model estimated to the model table, if possible; show displays the model table in a window; and free clears the table.
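
For instance, two specifications might be compared side by side as follows (variable names hypothetical):

	ols y 0 x1
	modeltab add
	ols y 0 x1 x2
	modeltab add
	modeltab show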

Menu path: Session window, Model table icon

mpols

Arguments: depvar indepvars

Computes OLS estimates for the specified model using multiple precision floating-point arithmetic. This command is available only if gretl is compiled with support for the Gnu Multiple Precision library (GMP).

To estimate a polynomial fit, using multiple precision arithmetic to generate the required powers of the independent variable, use a command of the form mpols y 0 x ; 2 3 4. This does a regression of y on x, x squared, x cubed and x to the fourth power. That is, the numbers (which must be positive integers) to the right of the semicolon specify the powers of x to be used. If more than one independent variable is specified, the last variable before the semicolon is taken to be the one that should be raised to various powers.

Menu path: /Model/High precision OLS

multiply

Arguments: x suffix varlist
Examples: multiply invpop pc 3 4 5 6
 multiply 1000 big x1 x2 x3

The variables in varlist (referenced by name or number) are multiplied by x, which may be either a numerical value or the name of a variable already defined. The products are named with the specified suffix (maximum 3 characters). The original variable names are truncated first if need be. For instance, suppose you want to create per capita versions of certain variables, and you have the variable pop (population). A suitable set of commands is then:


	genr invpop = 1/pop
	multiply invpop pc income expend

which will create incomepc as the product of income and invpop, and expendpc as expend times invpop.

nls

Arguments: function derivatives
Option: --vcv (print covariance matrix)

Performs Nonlinear Least Squares (NLS) estimation using a modified version of the Levenberg–Marquardt algorithm. The user must supply a function specification. The parameters of this function must be declared and given starting values (using the genr command) prior to estimation. Optionally, the user may specify the derivatives of the regression function with respect to each of the parameters; if analytical derivatives are not supplied, a numerical approximation to the Jacobian is computed.

It is easiest to show what is required by example. The following is a complete script to estimate the nonlinear consumption function set out in William Greene's Econometric Analysis (Chapter 11 of the 4th edition, or Chapter 9 of the 5th). The numbers to the left of the lines are for reference and are not part of the commands. Note that the --vcv option, for printing the covariance matrix of the parameter estimates, attaches to the final command, end nls.


	1   open greene11_3.gdt
	2   ols C 0 Y
	3   genr alpha = coeff(0)
	4   genr beta = coeff(Y)
	5   genr gamma = 1.0
	6   nls C = alpha + beta * Y^gamma
	7   deriv alpha = 1
	8   deriv beta = Y^gamma
	9   deriv gamma = beta * Y^gamma * log(Y)
	10  end nls --vcv
      

It is often convenient to initialize the parameters by reference to a related linear model; that is accomplished here on lines 2 to 5. The parameters alpha, beta and gamma could be set to any initial values (not necessarily based on a model estimated with OLS), although convergence of the NLS procedure is not guaranteed for an arbitrary starting point.

The actual NLS commands occupy lines 6 to 10. On line 6 the nls command is given: a dependent variable is specified, followed by an equals sign, followed by a function specification. The syntax for the expression on the right is the same as that for the genr command. The next three lines specify the derivatives of the regression function with respect to each of the parameters in turn. Each line begins with the keyword deriv, gives the name of a parameter, an equals sign, and an expression whereby the derivative can be calculated (again, the syntax here is the same as for genr). These deriv lines are optional, but it is recommended that you supply them if possible. Line 10, end nls, completes the command and calls for estimation.

For further details on NLS estimation please see Chapter 8.

Menu path: /Model/Nonlinear Least Squares

noecho

Obsolete command. See set.

nulldata

Argument: series_length
Example: nulldata 500

Establishes a "blank" data set, containing only a constant and an index variable, with periodicity 1 and the specified number of observations. This may be used for simulation purposes: some of the genr commands (e.g. genr uniform(), genr normal()) will generate dummy data from scratch to fill out the data set. This command may be useful in conjunction with loop. See also the "seed" option to the set command.
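
A sketch of its use for simulation (all names hypothetical):

	nulldata 100
	genr x = uniform()
	genr y = 1 + 2*x + normal()
	ols y 0 x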

Menu path: /File/Create data set

ols

Arguments: depvar indepvars
Options: --vcv (print covariance matrix)
 --robust (robust standard errors)
 --quiet (suppress printing of results)
Examples: ols 1 0 2 4 6 7
 ols y 0 x1 x2 x3 --vcv
 ols y 0 x1 x2 x3 --quiet

Computes ordinary least squares (OLS) estimates with depvar as the dependent variable and indepvars as the list of independent variables. Variables may be specified by name or number; use the number zero for a constant term.

Besides coefficient estimates and standard errors, the program also prints p-values for t (two-tailed) and F-statistics. A p-value below 0.01 indicates significance at the 1 percent level and is denoted by ***. ** indicates significance between 1 and 5 percent and * indicates significance between 5 and 10 percent levels. Model selection statistics (described in Ramanathan, Section 4.3) are also printed.

Various internal variables may be retrieved using the genr command, provided genr is invoked immediately after this command.
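
For example (series names hypothetical), the coeff and stderr functions shown under the printf command can retrieve values from the model just estimated:

	ols y 0 x1 x2
	genr b = coeff(x1)
	genr se = stderr(x1)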

The specific formula used for generating robust standard errors (when the --robust option is given) can be adjusted via the set command.

Menu path: /Model/Ordinary Least Squares

Other access: Beta-hat button on toolbar

omit

Argument: varlist
Options: --vcv (print covariance matrix)
 --quiet (don't print estimates for reduced model)
Example: omit 5 7 9

This command must follow an estimation command. The selected variables are omitted from the previous model and the new model estimated. If more than one variable is omitted, the Wald F-statistic for the omitted variables will be printed along with its p-value (for the OLS procedure only). A p-value below 0.05 means that the coefficients are jointly significant at the 5 percent level.

If the --quiet option is given the printed results are confined to the test for the joint significance of the omitted variables, otherwise the estimates for the reduced model are also printed. In the latter case, the --vcv flag causes the covariance matrix for the coefficients to be printed also.

Menu path: Model window, /Tests/omit variables

omitfrom

Arguments: modelID varlist
Option: --quiet (don't print estimates for reduced model)
Example: omitfrom 2 5 7 9

Works like omit, except that you specify a previous model (using its ID number, which is printed at the start of the model output) to take as the base for omitting variables. The example above omits variables number 5, 7 and 9 from Model 2.

Menu path: Model window, /Tests/omit variables

open

Argument: datafile

Opens a data file. If a data file is already open, it is replaced by the newly opened one. The program will try to detect the format of the data file (native, plain text, CSV or BOX1).

This command can also be used to open a database (gretl or RATS 4.0) for reading. In that case it should be followed by the data command to extract particular series from the database.

Menu path: /File/Open data

Other access: Drag a data file into gretl (MS Windows or Gnome)

outfile

Arguments: filename option
Options: --append (append to file)
 --close (close file)
 --write (overwrite file)
Examples: outfile --write regress.txt
 outfile --close

Diverts output to filename, until further notice. Use the flag --append to append output to an existing file or --write to start a new file (or overwrite an existing one). Only one file can be opened in this way at any given time.

The --close flag is used to close an output file that was previously opened as above. Output will then revert to the default stream.

In the first example command above, the file regress.txt is opened for writing, and in the second it is closed. This would make sense as a sequence only if some commands were issued before the --close. For example if an ols command intervened, its output would go to regress.txt rather than the screen.
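
A complete sketch of such a sequence (file and variable names hypothetical):

	outfile --write regress.txt
	ols y 0 x1 x2
	outfile --close

Here the output of the ols command is written to regress.txt rather than the screen.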

panel

Options: --cross-section (stacked cross sections)
 --time-series (stacked time series)

Request that the current data set be interpreted as a panel (pooled cross section and time series). By default, or with the --time-series flag, the data set is taken to be in the form of stacked time series (successive blocks of data contain time series for each cross-sectional unit). With the --cross-section flag, the data set is read as stacked cross-sections (successive blocks contain cross sections for each time period). See also setobs.

Menu path: /Sample/interpret as panel

pca

Argument: varlist
Options: --save-all (Save all components)
 --save (Save major components)

Principal Components Analysis. Prints the eigenvalues of the correlation matrix for the variables in varlist along with the proportion of the joint variance accounted for by each component. Also prints the corresponding eigenvectors (or "component loadings").

If the --save flag is given, components with eigenvalues greater than 1.0 are saved to the dataset as variables, with names PC1, PC2 and so on. These artificial variables are formed as the sum of (component loading) times (standardized Xi), where Xi denotes the ith variable in varlist.

If the --save-all flag is given, all of the components are saved as described above.

Menu path: Main window pop-up (multiple selection)

pergm

Argument: varname
Option: --bartlett (use Bartlett lag window)

Computes and displays (and if not in batch mode, graphs) the spectrum of the specified variable. Without the --bartlett flag the sample periodogram is given; with the flag a Bartlett lag window of length 2*sqrt(T) (where T is the sample size) is used in estimating the spectrum (see Chapter 18 of Greene's Econometric Analysis). When the sample periodogram is printed, a t-test for fractional integration of the series ("long memory") is also given: the null hypothesis is that the integration order is zero.

Menu path: /Variable/spectrum

Other access: Main window pop-up menu (single selection)

plot

Argument: varlist
Option: --one-scale (force a single scale)

Plots the values for specified variables, for the range of observations currently in effect, using ASCII symbols. Each line stands for an observation and the values are plotted horizontally. By default the variables are scaled appropriately. See also gnuplot.

pooled

Arguments: depvar indepvars
Option: --vcv (print covariance matrix)

Estimates a model via OLS (see ols for details on syntax), and flags it as a pooled or panel model, so that the hausman test item becomes available.

Menu path: /Model/Pooled OLS

print

Arguments: varlist or string_literal
Options: --byobs (by observations)
 --ten (use 10 significant digits)
Examples: print x1 x2 --byobs
 print "This is a string"

If varlist is given, prints the values of the specified variables; if no list is given, prints the values of all variables in the current data file. If the --byobs flag is given the data are printed by observation, otherwise they are printed by variable. If the --ten flag is given the data are printed by variable to 10 significant digits.

If the argument to print is a literal string (which must start with a double-quote, "), the string is printed as is. See also printf.

Menu path: /Data/Display values

printf

Arguments: format args

Prints scalar values under the control of a format string (providing a small subset of the printf() statement in the C programming language). Recognized formats are %g and %f, in each case with the various modifiers available in C. Examples: the format %.10g prints a value to 10 significant figures; %12.6f prints a value to 6 decimal places, with a width of 12 characters.

The format string itself must be enclosed in double quotes. The values to be printed must follow the format string, separated by commas. These values should take the form of either (a) the names of variables in the dataset, or (b) expressions that are valid for the genr command. The following example prints the values of two variables plus that of a calculated expression:


	ols 1 0 2 3
	genr b = coeff(2)
	genr se_b = stderr(2)
	printf "b = %.8g, standard error %.8g, t = %.4f\n", b, se_b, b/se_b
      

The maximum length of a format string is 127 characters. The escape sequences \n (newline), \t (tab), \v (vertical tab) and \\ (literal backslash) are recognized. To print a literal percent sign, use %%.

probit

Arguments: depvar indepvars
Option: --vcv (print covariance matrix)

The dependent variable should be a binary variable. Maximum likelihood estimates of the coefficients on indepvars are obtained via iterated least squares (the EM or Expectation–Maximization method). As the model is nonlinear the slopes depend on the values of the independent variables: the reported slopes are evaluated at the means of those variables. The chi-square statistic tests the null hypothesis that all coefficients are zero apart from the constant.

Probit for analysis of proportions is not implemented in gretl at this point.

Menu path: /Model/Probit

pvalue

Arguments: dist [ params ] xval
Examples: pvalue z zscore
 pvalue t 25 3.0
 pvalue X 3 5.6
 pvalue F 4 58 fval
 pvalue G xbar varx x

Computes the area to the right of xval in the specified distribution (z for Gaussian, t for Student's t, X for chi-square, F for F and G for gamma). For the t and chi-square distributions the degrees of freedom must be given; for F numerator and denominator degrees of freedom are required; and for gamma the mean and variance are needed.

Menu path: /Utilities/p-value finder

pwe

Arguments: depvar indepvars
Option: --vcv (print covariance matrix)
Example: pwe 1 0 2 4 6 7

Computes parameter estimates using the Prais–Winsten procedure, an implementation of feasible GLS which is designed to handle first-order autocorrelation of the error term. The procedure is iterated, as with corc; the difference is that while Cochrane–Orcutt discards the first observation, Prais–Winsten makes use of it. See, for example, Chapter 13 of Greene's Econometric Analysis (2000) for details.

Menu path: /Model/Time series/Prais-Winsten

quit

Exits from the program, giving you the option of saving the output from the session on the way out.

Menu path: /File/Exit

rename

Arguments: varnumber newname

Changes the name of the variable with identification number varnumber to newname. The varnumber must be between 1 and the number of variables in the dataset. The new name must be at most 8 characters, must start with a letter, and must be composed of only letters, digits, and the underscore character.

Menu path: /Variable/Edit attributes

Other access: Main window pop-up menu (single selection)

reset

Must follow the estimation of a model via OLS. Carries out Ramsey's RESET test for model specification (non-linearity) by adding the square and the cube of the fitted values to the regression and calculating the F statistic for the null hypothesis that the parameters on the two added terms are zero.

Menu path: Model window, /Tests/Ramsey's RESET

restrict

Evaluates a set of linear restrictions on the parameters of the model last estimated. In script mode, the set of restrictions must be enclosed by "restrict" and "end restrict", but in the restrictions dialog box these lines may be omitted.

Each restriction in the set should be expressed as an equation, with a linear combination of parameters on the left and a numeric value to the right of the equals sign. Parameters are referenced in the form bN, where N represents the position in the list of regressors, starting at zero. For example, b1 denotes the second regression parameter.

The second and subsequent bN terms in an equation may be prefixed with a numeric multiplier, using * to represent multiplication, for example 3.5*b4.

Here is an example of a set of restrictions:


	restrict
	 b1 = 0
	 b2 - b3 = 0
	 b4 + 2*b5 = 1
	end restrict

The restrictions are evaluated via a Wald F-test, using the coefficient covariance matrix of the model in question.

Menu path: Model window, /Tests/linear restrictions

rhodiff

Arguments: rholist ; varlist
Examples: rhodiff .65 ; 2 3 4
 rhodiff r1 r2 ; x1 x2 x3

Creates rho-differenced counterparts of the variables (given by number or by name) in varlist and adds them to the data set, using the suffix # for the new variables. Given variable v1 in varlist, and entries r1 and r2 in rholist, v1# = v1(t) - r1*v1(t-1) - r2*v1(t-2) is created. The rholist entries can be given as numerical values or as the names of variables previously defined.

rmplot

Argument: varname

Range–mean plot: this command creates a simple graph to help in deciding whether a time series, y(t), has constant variance or not. We take the full sample t=1,...,T and divide it into small subsamples of arbitrary size k. The first subsample is formed by y(1),...,y(k), the second is y(k+1), ..., y(2k), and so on. For each subsample we calculate the sample mean and range (= maximum minus minimum), and we construct a graph with the means on the horizontal axis and the ranges on the vertical. So each subsample is represented by a point in this plane. If the variance of the series is constant we would expect the subsample range to be independent of the subsample mean; if we see the points approximate an upward-sloping line this suggests the variance of the series is increasing in its mean; and if the points approximate a downward sloping line this suggests the variance is decreasing in the mean.

Besides the graph, gretl displays the means and ranges for each subsample, along with the slope coefficient for an OLS regression of the range on the mean and the p-value for the null hypothesis that this slope is zero. If the slope coefficient is significant at the 10 percent significance level then the fitted line from the regression of range on mean is shown on the graph.

Menu path: /Variable/Range-mean graph

run

Argument: inputfile

Execute the commands in inputfile then return control to the interactive prompt.

Menu path: Run icon in script window

runs

Argument: varname

Carries out the nonparametric "runs" test for randomness of the specified variable. If you want to test for randomness of deviations from the median, for a variable named x1 with a non-zero median, you can do the following:


	genr signx1 = x1 - median(x1)
	runs signx1
      

Menu path: /Variable/Runs test

scatters

Arguments: yvar ; xvarlist or yvarlist ; xvar
Examples: scatters 1 ; 2 3 4 5
 scatters 1 2 3 4 5 6 ; 7

Plots pairwise scatters of yvar against all the variables in xvarlist, or of all the variables in yvarlist against xvar. The first example above puts variable 1 on the y-axis and draws four graphs, the first having variable 2 on the x-axis, the second variable 3 on the x-axis, and so on. The second example plots each of variables 1 through 6 against variable 7 on the x-axis. Scanning a set of such plots can be a useful step in exploratory data analysis. The maximum number of plots is six; any extra variable in the list will be ignored.

Menu path: /Data/Multiple scatterplots

seed

Obsolete command. See set.

set

Arguments: variable value

Set the values of various program parameters. The given value remains in force for the duration of the gretl session unless it is changed by a further call to set. The parameters that can be set in this way are enumerated below. Note that the settings of hac_lag and hc_version are used when the --robust option is given to the ols command.

echo: values off or on (the default). Suppress or resume the echoing of commands in gretl's output.
qr: values on or off (the default). Use QR rather than Cholesky decomposition in calculating OLS estimates.
hac_lag: values nw1 (the default) or nw2, or an integer. Sets the maximum lag value, p, used when calculating HAC (Heteroskedasticity and Autocorrelation Consistent) standard errors using the Newey–West approach, for time series data. nw1 and nw2 represent two variant automatic calculations based on the sample size, T: for nw1, p = 0.75 * T^(1/3), and for nw2, p = 4 * (T/100)^(2/9).
hc_version: values 0 (the default), 1, 2 or 3. Sets the variant used when calculating Heteroskedasticity Consistent standard errors with cross-sectional data. The options correspond to the HC0, HC1, HC2 and HC3 discussed by Davidson and MacKinnon in Econometric Theory and Methods, chapter 5. HC0 produces what are usually called "White's standard errors".
force_hc: values off (the default) or on. By default, with time-series data and when the --robust option is given with ols, the HAC estimator is used. If you set force_hc to "on", this forces calculation of the regular Heteroskedasticity Consistent Covariance Matrix (which does not take autocorrelation into account).
garch_vcv: values unset, hessian, im (information matrix), op (outer product matrix), qml (QML estimator), bw (Bollerslev–Wooldridge). Specifies the variant that will be used for estimating the coefficient covariance matrix, for GARCH models. If unset is given (the default) then the Hessian is used unless the "robust" option is given for the garch command, in which case QML is used.
hp_lambda: values auto (the default), or a numerical value. Sets the smoothing parameter for the Hodrick–Prescott filter (see the hpfilt function under the genr command). The default is to use 100 times the square of the periodicity, which gives 100 for annual data, 1600 for quarterly data, and so on.
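
For example, to request HC2 standard errors for a cross-sectional regression (variable names hypothetical):

	set hc_version 2
	ols y 0 x1 x2 --robust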

setobs

Arguments: periodicity startobs
Examples: setobs 4 1990:1
 setobs 12 1978:03
 setobs 20 1:01

Force the program to interpret the current data set as time series or panel, when the data have been read in as simple undated series. periodicity must be an integer; startobs is a string representing the date or panel ID of the first observation. See also panel.

Menu path: /Sample/Set frequency, startobs

setmiss

Arguments: value [ varlist ]
Examples: setmiss -1
 setmiss 100 x2

Get the program to interpret some specific numerical data value (the first parameter to the command) as a code for "missing", in the case of imported data. If this value is the only parameter, as in the first example above, the interpretation will be applied to all series in the data set. If value is followed by a list of variables, by name or number, the interpretation is confined to the specified variable(s). Thus in the second example the data value 100 is interpreted as a code for "missing", but only for the variable x2.

Menu path: /Sample/Set missing value code

shell

Argument: shellcommand
Examples: ! ls -al
 ! notepad

A ! at the beginning of a command line is interpreted as an escape to the user's shell. Thus arbitrary shell commands can be executed from within gretl.

sim

Arguments: [ startobs endobs ] varname a0 a1 a2 …
Examples: sim 1979.2 1983.1 y 0 0.9
 sim 15 25 y 10 0.8 x

Simulates values for varname for the current sample range, or for the range startobs through endobs if these optional arguments are given. The variable in question must have been defined earlier with appropriate initial values. The formula used is

	y(t) = a0(t) + a1(t)*y(t-1) + a2(t)*y(t-2) + ...

The ai(t) terms may be either numerical constants or variable names previously defined; these terms may be prefixed with a minus sign.

This command is deprecated. You should use genr instead.

smpl

Alternate forms: smpl startobs endobs
 smpl +i -j
 smpl dumvar --dummy
 smpl condition --restrict
 smpl --no-missing [ varlist ]
 smpl n --random
 smpl full
Examples: smpl 3 10
 smpl 1960:2 1982:4
 smpl +1 -1
 smpl x > 3000 --restrict
 smpl y > 3000 --restrict --replace
 smpl 100 --random

Resets the sample range. The new range can be defined in several ways. In the first alternate form (and the first two examples) above, startobs and endobs must be consistent with the periodicity of the data. Either one may be replaced by a semicolon to leave the value unchanged. In the second form, the integers i and j (which may be positive or negative, and should be signed) are taken as offsets relative to the existing sample range. In the third form dumvar must be an indicator variable with values 0 or 1 at each observation; the sample will be restricted to observations where the value is 1. The fourth form, using --restrict, restricts the sample to observations that satisfy the given Boolean condition (which is specified according to the syntax of the genr command).

With the --no-missing form, if varlist is specified observations are selected on condition that all variables in varlist have valid values at that observation; otherwise, if no varlist is given, observations are selected on condition that all variables have valid (non-missing) values.

With the --random flag, the specified number of cases are selected from the full dataset at random. If you wish to be able to replicate this selection you should set the seed for the random number generator first (see the set command).

The final form, smpl full, restores the full data range.

Note that sample restrictions are, by default, cumulative: the baseline for any smpl command is the current sample. If you wish the command to act so as to replace any existing restriction you can add the option flag --replace to the end of the command.
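
To illustrate the cumulative behavior (conditions hypothetical):

	smpl x > 0 --restrict
	smpl y > 0 --restrict
	smpl z > 0 --restrict --replace

After the second command the sample satisfies both conditions; after the third, only the last condition applies.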

The internal variable obs may be used with the --restrict form of smpl to exclude particular observations from the sample. For example


	smpl obs!=4 --restrict

will drop just the fourth observation. If the data points are identified by labels,


	smpl obs!="USA" --restrict

will drop the observation with label "USA".

One point should be noted about the --dummy, --restrict and --no-missing forms of smpl: Any "structural" information in the data file (regarding the time series or panel nature of the data) is lost when this command is issued. You may reimpose structure with the setobs command.

Please see Chapter 5 for further details.

Menu path: /Sample

spearman

Arguments: x y
Option: --verbose (print ranked data)

Prints Spearman's rank correlation coefficient for the two variables x and y. The variables do not have to be ranked manually in advance; the function takes care of this.

The automatic ranking is from largest to smallest (i.e. the largest data value gets rank 1). If you need to invert this ranking, create a new variable which is the negative of the original first. For example:


	genr altx = -x
	spearman altx y

Menu path: /Model/Rank correlation

square

Argument: varlist
Option: --cross (generate cross-products as well as squares)

Generates new variables which are squares of the variables in varlist (plus cross-products if the --cross option is given). For example, square x y will generate sq_x = x squared, sq_y = y squared and (optionally) x_y = x times y. If a particular variable is a dummy variable it is not squared, since its square is identical to the original variable.
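For instance, given series x and y, the following generates sq_x, sq_y and the cross-product x_y:


	square x y --cross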

Menu path: /Data/Add variables/squares of variables

store

Arguments: datafile [ varlist ]
Options: --csv (use CSV format)
 --gnu-octave (use GNU Octave format)
 --gnu-R (use GNU R format)
 --traditional (use traditional ESL format)
 --gzipped (apply gzip compression)
 --dat (use PcGive ASCII format)

Saves either the entire dataset or, if a varlist is supplied, a specified subset of the variables in the current dataset, to the file given by datafile.

By default the data are saved in "native" gretl format, but the option flags permit saving in several alternative formats. CSV (Comma-Separated Values) data may be read into spreadsheet programs, and can also be manipulated using a text editor. The formats of Octave, R and PcGive are designed for use with the respective programs. Gzip compression may be useful for large datasets. See Chapter 4 for details on the various formats.

Note that any scalar variables will not be saved automatically: if you wish to save scalars you must explicitly list them in varlist.
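For example (with hypothetical file and variable names), to save the series x1 and x2 in gzip-compressed native format, or the full dataset in CSV format:


	store mydata.gdt x1 x2 --gzipped
	store mydata.csv --csv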

Menu path: /File/Save data; /File/Export data

summary

Argument: [ varlist ]

Prints summary statistics for the variables in varlist, or for all the variables in the data set if varlist is omitted. Output consists of the mean, standard deviation (sd), coefficient of variation (= sd/mean), median, minimum, maximum, skewness coefficient, and excess kurtosis.

Menu path: /Data/Summary statistics

Other access: Main window pop-up menu

system

Arguments: type savevars
Examples: system type=sur
 system type=sur save=resids
 system type=3sls save=resids,fitted

Starts a system of equations. At present two types of system are supported: sur (Seemingly Unrelated Regressions) and 3sls (Three-Stage Least Squares). In the optional save= field of the command you can specify whether to save the residuals (resids) and/or the fitted values (fitted). The system must contain at least two equations specified using the equation command, and it must be terminated with the line end system.

In the context of a Three-Stage Least Squares system, you may provide a list of instruments (by name or number). This should be on a line by itself, within the system block, prefaced with the keyword instr.
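As a hypothetical illustration (all variable names assumed), the following block estimates two equations by Three-Stage Least Squares, saving the residuals, with a constant and x1 through x4 serving as instruments:


	system type=3sls save=resids
	 equation y1 const x1 x2
	 equation y2 const x1 x3 x4
	 instr const x1 x2 x3 x4
	end system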

tabprint

Argument: [ -f filename ]
Option: --complete (Create a complete document)

Must follow the estimation of a model. Prints the estimated model in the form of a LaTeX table. If a filename is specified using the -f flag, output goes to that file; otherwise it goes to a file with a name of the form model_N.tex, where N is the number of models estimated to date in the current session. See also eqnprint.

If the --complete flag is given the LaTeX file is a complete document, ready for processing; otherwise it must be included in a document.
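For example (mymodel.tex being a hypothetical filename, and 0 denoting the constant):


	ols y 0 x1 x2
	tabprint -f mymodel.tex --complete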

Menu path: Model window, /LaTeX

testuhat

Must follow a model estimation command. Gives the frequency distribution for the residual from the model along with a chi-square test for normality, based on the procedure suggested by Doornik and Hansen (1994).

Menu path: Model window, /Tests/normality of residual

tobit

Arguments: depvar indepvars
Options: --vcv (print covariance matrix)
 --verbose (print details of iterations)

Estimates a Tobit model. This model may be appropriate when the dependent variable is "censored". For example, positive and zero values of purchases of durable goods on the part of individual households are observed, and no negative values, yet decisions on such purchases may be thought of as outcomes of an underlying, unobserved disposition to purchase that may be negative in some cases. For details see Greene's Econometric Analysis, Chapter 20.
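For example (with hypothetical variable names, 0 denoting the constant):


	tobit durables 0 income age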

Menu path: /Model/Tobit

tsls

Arguments: depvar indepvars ; instruments
Options: --vcv (print covariance matrix)
 --robust (robust standard errors)
Example: tsls y1 0 y2 y3 x1 x2 ; 0 x1 x2 x3 x4 x5 x6

Computes two-stage least squares (TSLS or IV) estimates: depvar is the dependent variable, indepvars is the list of independent variables (including right-hand side endogenous variables) in the structural equation for which TSLS estimates are needed; and instruments is the combined list of exogenous and predetermined variables in all the equations. If the instruments list is not at least as long as indepvars, the model is not identified.

In the above example, the ys are the endogenous variables and the xs are the exogenous and predetermined variables.

Menu path: /Model/Two-Stage Least Squares

var

Arguments: order varlist ; detlist
Option: --quiet (don't print impulse responses etc.)
Example: var 4 x1 x2 x3 ; const time

Sets up and estimates (using OLS) a vector autoregression (VAR). The first argument specifies the lag order, then follows the setup for the first equation. Don't include lags among the elements of varlist — they will be added automatically. The semi-colon separates the stochastic variables, for which order lags will be included, from deterministic terms in detlist, such as the constant, a time trend, and dummy variables.

In fact, gretl is able to recognize the more common deterministic variables (constant, time trend, dummy variables with no values other than 0 and 1) as such, so these do not have to be placed after the semi-colon. More complex deterministic variables (e.g. a time trend interacted with a dummy variable) must be put after the semi-colon.

A separate regression is run for each variable in varlist. Output for each equation includes F-tests for zero restrictions on all lags of each of the variables; an F-test for the significance of the maximum lag; forecast variance decompositions; and impulse response functions.

The variance decompositions and impulse responses are based on the Cholesky decomposition of the contemporaneous covariance matrix, and in this context the order in which the (stochastic) variables are given matters. The first variable in the list is assumed to be "most exogenous" within-period.

Menu path: /Model/Time series/Vector autoregression

varlist

Prints a listing of variables currently available. list and ls are synonyms.

vartest

Arguments: var1 var2

Calculates the F statistic for the null hypothesis that the population variances for the variables var1 and var2 are equal, and shows its p-value.

Menu path: /Data/Difference of variances

wls

Arguments: wtvar depvar indepvars
Option: --vcv (print covariance matrix)

Computes weighted least squares estimates using wtvar as the weight, depvar as the dependent variable, and indepvars as the list of independent variables. Specifically, an OLS regression is run on wtvar * depvar against wtvar * indepvars. If wtvar is a dummy variable, this is equivalent to dropping all observations for which wtvar has value zero.
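For example (with hypothetical names, w being the weight series and 0 the constant):


	wls w y 0 x1 x2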

Menu path: /Model/Weighted Least Squares

Notes

[1]

See Matsumoto and Nishimura (1998). The implementation is provided by glib, if available, or by the C code written by Nishimura and Matsumoto.