Channel: Statalist

undo does not undo

Dear all,
surely a minor mishap with Stata SE/14.1 for Windows (64-bit x86-64), but enough to pique my curiosity.
When I write:
Code:
di Carlo
and then click -Undo-, nothing happens.

Conversely, everything works as expected with Stata 12 and 13.

Has anybody experienced the same issue?

Mixlogit - Compare model results / significance test

Dear all,

yet another -mixlogit- question from my side.

I would like to compare -mixlogit- results for two subsets of my data. The background is that the participants of the study had to complete a set of choice tasks, then received additional information, and then had to complete another set of choice tasks afterwards.
I would like to test whether the parameter effects differ significantly comparing those two sets of choice tasks.
Therefore, I fitted the mixlogit model for the first group and then a second mixlogit model for the second group (all else equal).

Trying to use -suest-, I received the following error message, which I don't understand:
Code:
unable to generate scores for model CSround1
suest requires that predict allow the score option
r(322);

Does anybody have any idea about this, or an alternative suggestion for how to compare the two models?
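One alternative I have considered (a sketch only, with hypothetical variable names): pool both rounds and interact each attribute with a round dummy, then test the interaction terms jointly.

Code:
gen byte round2 = (round == 2)               // post-information choice tasks
foreach v of varlist attr1 attr2 attr3 {
    gen `v'_x_r2 = `v' * round2
}
mixlogit choice attr1_x_r2 attr2_x_r2 attr3_x_r2, ///
    group(choiceset) id(pid) rand(attr1 attr2 attr3)
test attr1_x_r2 attr2_x_r2 attr3_x_r2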

Thanks a lot!
Cordula

Multiple regression model

Hello everyone. How is the model Q = e^(B0+e) P^(B1) Y^(B2) T^(B3) X^(B4) simplified into a level-log regression model?

Then how is the model above fitted to the data below?
Code:
 Q  P  Y  T  X
40  6  4  3  2
44 10  4  6  9
46 12  5  5  2
48 14  7  6  9
50 15  8  4 10
58 18 12 10  1
60 22 14  2  0
68 24 20  1  7
74 26 21  0  9
80 37 24  5  7
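If it helps, here is a sketch of fitting the logged model to these data: taking natural logs of both sides gives ln Q = B0 + B1 ln P + B2 ln Y + B3 ln T + B4 ln X + e, which can be fit with -regress- (note that ln(0) is missing, so the rows with T = 0 or X = 0 drop out).

Code:
clear
input Q P Y T X
40  6  4  3  2
44 10  4  6  9
46 12  5  5  2
48 14  7  6  9
50 15  8  4 10
58 18 12 10  1
60 22 14  2  0
68 24 20  1  7
74 26 21  0  9
80 37 24  5  7
end
foreach v of varlist Q P Y T X {
    gen ln`v' = ln(`v')
}
regress lnQ lnP lnY lnT lnX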

Using REGHDFE for separate samples

Hi Statlisters,

I'm currently using REGHDFE, created by Sergio Correia, to estimate firm fixed effects using panel data in a wage model such as:

wage(i,t) = x(i,t)*beta + worker fixed effect + firm fixed effect + residual (i,t)

x(i,t) contains time-varying worker characteristics (age, education, etc.). I am using Stata 14 on Windows.

I run a command such as:

reghdfe wage X1 X2 X3, absorb(p=Worker_ID j=Firm_ID)

In particular, I'm interested in whether firm fixed effects differ for men and women working at the same firm. My method is therefore to 1.) sample firms that hire both men and women, 2.) run the model separately for men and women in this subsample, 3.) collect the female firm fe and male fe attached to each firm and correlate them.
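To make the method concrete, a minimal sketch of steps 2 and 3, assuming a dummy female, the subsample of firms hiring both sexes already selected, and reghdfe's named-fixed-effect syntax (variable names are assumptions):

Code:
reghdfe wage X1 X2 X3 if female==0, absorb(p0=Worker_ID j0=Firm_ID)
reghdfe wage X1 X2 X3 if female==1, absorb(p1=Worker_ID j1=Firm_ID)
* one row per firm; the saved FEs are constant within firm and sex
collapse (mean) j_men=j0 j_women=j1, by(Firm_ID)
correlate j_men j_women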

My query 1: the correlation between the female and male FEs is unconvincingly small (almost negligible). I am using a Portuguese dataset, and this result is at odds with the current literature (e.g. Card, Cardoso, Kline 2015). Is there something about the estimation procedure of REGHDFE that I'm missing which implies my method would lead to this result even if it is incorrect? Is there some implicit normalization/parameterization that has led me astray? I've tried dropping outliers, and the correlation is still very small.

My query 2: I also notice that some of the firm FEs that are estimated in the female sample are not estimated in the male sample, and vice versa. Why is this? I know REGHDFE drops singletons, but surely this can't be the reason, since if a firm were a singleton in one sample it would also be a singleton in the other? Please correct me if I am wrong.

My query 3: Is it possible to save the standard errors associated with the firm FEs using REGHDFE?

Many thanks in advance,

Nicky

Multiple Imputation and asclogit

Hi All,

I need your help. I use an alternative-specific conditional logit model (asclogit) for my estimations, but I have missing values and I want to impute them using multiple imputation. Is it possible to use asclogit with -mi estimate-? If not, which method should I use to replace missing values for asclogit?

Best Wishes,
Ugur
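One avenue that may be worth exploring (a sketch only, with hypothetical variable names): -mi estimate- will run commands it does not officially support when given the cmdok option, though whether the imputations are sensible for alternative-specific data has to be judged carefully.

Code:
mi set flong
mi register imputed x1 x2
mi impute chained (regress) x1 x2, add(20) rseed(12345)
mi estimate, cmdok: asclogit choice x1 x2, case(id) alternatives(alt)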

PSM - kernel and radius matching before/after graphs

Hi everyone,

I am struggling to make a graph showing the distribution of propensity scores of treatment and control groups before/after matching.

I am using the psmatch2 command. I'm able to make the graphs when using nearest-neighbour matching, as the matched pair is given by a specific variable. However, I haven't managed to figure it out with radius and kernel matching. Does anyone have any clues?

Thanks very much in advance!

Are
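One possible approach (a sketch, assuming psmatch2 has left its usual _pscore and _weight variables; the other names are placeholders): overlay kernel densities of the propensity score, weighting the controls by their matching weights.

Code:
psmatch2 treated x1 x2, kernel outcome(y)
twoway (kdensity _pscore if treated==1)                    ///
       (kdensity _pscore if treated==0)                    ///
       (kdensity _pscore if treated==0 [aweight=_weight]), ///
       legend(order(1 "Treated" 2 "Controls, raw" 3 "Controls, matched"))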

no room to add more observations although Stata version and PC meet all requirements

Hello, I am trying to load a large data set (10,966,434 KB) but keep getting the error below. I know a few people have posted about this message before, but I'm still confused as to why I get this error when, from reading the 'Which Stata is right for me?' site ( http://www.stata.com/products/which-...ght-for-me/#SE ), the data set should load.

I am using Stata/IC 11, 64-bit.
My PC has 8 GB RAM (Stata/IC requires 512 MB), a 64-bit OS, and 56.6 GB free disk space (Stata/IC requires 900 MB).
Additionally, I have far fewer than the maximum allowable 2,047 variables and far fewer than 2 billion observations.

Do you know why I am getting this error, given all of the requirements above seem to be satisfied?

Error message:
Code:
no room to add more observations
An attempt was made to increase the number of observations beyond
what is currently possible.  You have the following alternatives:

 1.  Store your variables more efficiently; see help compress.
     (Think of Stata's data area as the area of a rectangle;
     Stata can trade off width and length.)

 2.  Drop some variables or observations; see help drop.

 3.  Increase the amount of memory allocated to the data area
     using the set memory command; see help memory.
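For what it's worth, in Stata 11 the data area is not resized automatically, so alternative 3 in the message is usually the relevant one, and the request must fit in RAM; a roughly 10.5 GB file cannot be held in 8 GB. A sketch (file and variable names are placeholders):

Code:
set memory 6g                               // allocate before loading
use mydata.dta, clear
* if the full file still does not fit, load only the needed variables:
use var1 var2 var3 using mydata.dta, clear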

Create dummy variables quickly.

Hi Folks,

I'm looking to examine the difference between the grant a sports club applied for and the actual amount they received. Besides having a number of regional variables, I would also like to take into account the differences between sports. For this I'm going to utilise dummy variables. However, my dataset contains 44 different sports, coded 1, 2, 3, ..., 44.

Code:
gen gaa=0
replace gaa=1 if sportcode==1
gen soccer=0
replace soccer=1 if sportcode==2
gen multisport=0
replace multisport=1 if sportcode==3
gen rugby=0
replace rugby=1 if sportcode==4
Is there a quicker way to carry this out than writing a pair of commands for each sport, as shown above? Something like,

Code:
gen 1=0
replace 1=1 if sportcode==1
and so on up until 44.
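Two standard shortcuts, sketched below (the outcome and regressor names in the last line are placeholders):

Code:
* one line creates a 0/1 dummy for every sport code
tab sportcode, gen(sport)          // makes sport1 ... sport44
* or skip the dummies entirely with factor-variable notation
regress grantdiff i.sportcode      // i.sportcode expands to the full set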

Unicode bug of -markdoc- command

Hi there!

Recently, I was frustrated by a bug in -markdoc- when trying to generate a PDF document that includes Unicode characters. I have reported it to the author, and he suggested I post it here because he would like to hear what others might suggest. The following code should reproduce the problem; the failures are described in the comments.

Code:
 //Environment: Stata 12, Windows 10.
  
  /**/ quietly log using bug_test, replace
  
  //The bug will occur when including any of the following characters.
  
  // dis "∑"
  // dis "ü"
  // dis "你好"
  // dis "Ⅹ"
  // dis "①"
  
  /**/ quietly log close
  
  //All the supported formats are tested below.
 
  markdoc bug_test, replace export(pdf) install mathjax    //return: file bug_test.html not found
  
  markdoc bug_test, replace export(slide) install mathjax    //return: MarkDoc could not produce bug_test.<format name>. Same below.
  markdoc bug_test, replace export(docx) install mathjax
  markdoc bug_test, replace export(odt) install mathjax
  markdoc bug_test, replace export(tex) install mathjax
  markdoc bug_test, replace export(html) install mathjax
  markdoc bug_test, replace export(epub) install mathjax
 
  markdoc bug_test, replace export(md) install mathjax    //successfully generated though.
Perhaps the problem does not exist in Stata 14, since it fully supports Unicode, but I do not have the latest version...

Thanks in advance!
Zeming Zhong

MLE not converging

Hi all,

I am using asclogit to run a conditional logit model, and I am trying to include different explanatory variables. Say model A converges with several covariates, but when I add one more variable the new model fails to converge, and the log likelihood seems to start from a very small value. After 16,000 iterations the program stopped and gave the non-convergence message even though the log likelihood was still increasing.

I am wondering whether I need to supply initial values (I tried running on a subsample of the data to get starting values, but convergence seems to fail there too, since the log-likelihood value is very small). Although the convergence problem started with the addition of the new variable, the variable should be included according to theory.
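One thing that may be worth trying (a sketch only, with hypothetical variable names): feed the coefficients from the converged model A to model B as starting values via the from() maximize option, together with the difficult option.

Code:
asclogit choice x1 x2, case(id) alternatives(alt)            // model A
matrix b0 = e(b)
asclogit choice x1 x2 newvar, case(id) alternatives(alt) ///
    from(b0, skip) difficult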

Thank you all

"heckologit"?

Dear all,

I am estimating a number of ordered logit models. As a robustness check, I would like to test whether my results change when I account for possible selection into my sample. While I found a Stata command, -heckoprobit-, that runs a Heckman model with an ordered probit in stage 2, I found no analogous "heckologit". Is there some other sensible way of doing this?

Thanks so much, JZ

Replicating -margins- with -nlcom-, using xtivreg2

Dear Statalist

I am using Stata 13.0

I am currently working on a panel dataset using the xtivreg2 command. Since I am using interactions and want to interpret them, I would use -margins-; however, xtivreg2 does not allow -margins-. I saw a post from 2014 showing a way to mimic -margins- using -nlcom-, but the margins command I want to mimic is slightly different and I don't know how to adapt the workaround.

Therefore my question is: how do I mimic the following -margins- command, illustrated below?

Code:
sysuse auto, clear
gen mpgturn=mpg*turn
regress price mpg turn mpgturn
margins, at(mpg=(min(by)max) turn=(min(by)max))
The workaround that I found, posted by Mark Schaffer, is below (http://www.stata.com/statalist/archi.../msg00586.html)
Code:
sysuse auto, clear
gen mpgturn=mpg*turn
regress price mpg turn mpgturn
forvalues x=31(1)51 {
nlcom _b[mpg] + _b[mpgturn]*(`x')
}
This mimics the following:
Code:
regress price mpg turn c.mpg#c.turn
margins, dydx(mpg) at(turn=(31(1)51))
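A sketch of one way to extend that workaround to the two-way at() grid: the at() results are fitted values at each (mpg, turn) pair, so a nested loop with the full linear prediction in -nlcom- should mimic them (here only at the min/max endpoints, for brevity):

Code:
sysuse auto, clear
gen mpgturn = mpg*turn
regress price mpg turn mpgturn
summarize mpg
local mlist `r(min)' `r(max)'
summarize turn
local tlist `r(min)' `r(max)'
foreach m of local mlist {
    foreach t of local tlist {
        nlcom _b[_cons] + _b[mpg]*`m' + _b[turn]*`t' + _b[mpgturn]*`m'*`t'
    }
}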
Kind regards
Henri Nirk



Split dataset into two (question related to holdout sample)

Hello everyone,

I have a dataset of 35,770 observations. I need to split this dataset into two:
1. Random sample of 5,000 observations
2. The remaining 30,770 observations

I managed to create the random sample of 5,000 by doing the following:
Code:
set seed 54321
sample 5000, count

But I can't figure out how to save the dataset of the 30,770 observations that gets dropped in the process.

I appreciate your help.
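A sketch of one possible approach: instead of sampling the 5,000 away, tag them, so both parts can be saved (the file names are placeholders).

Code:
set seed 54321
gen double u = runiform()
sort u                          // put observations in random order
gen byte insample = _n <= 5000
preserve
keep if insample
save sample_5000, replace       // the random 5,000
restore
keep if !insample
save remaining_30770, replace   // the other 30,770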

Graphing interactive variable outcome

Hi,

I am trying to graph the interaction effects of corruption and tariff rates on trade. My regression shows that cpi#tariffrates has a significant negative coefficient, while cpi^2#tariffrates has a significant positive one. When I graph the result using the grinter command, it produces an upward-sloping curve instead of the U-shaped curve I expected (my y-axis is the marginal effect of corruption, my x-axis is tariff rates). Why is that, or is there another way?

Please advise.

Thank you

Cluster

Hello,

I am using Stata 12 and would like to do following with a panel:
I have information about different personal returns and would like to compare them only when they are comparable on the following criteria.

1. Year (2000 2001 2002 2003 2004 2005)
2. Style (A B C D)
3. Size ($ 0 - 1,000,000 $)

The persons also differ by sex, but that aspect will be considered later.
First of all, I calculated quintiles for the third variable, Size:
Code:
xtile inc_5 = Size, nquantiles(5)
After that, I formed groups:
Code:
egen group = group(Year Style inc_5)

Now I have groups that are comparable on the three criteria, and I would like to do the following:

1. Within each group, calculate the mean of returns separately for men and women (the dummy sex is 1 for men and 0 for women).
2. For each group, calculate mean(men) - mean(women).
3. Run a paired t-test to check whether men do better than women.

Can anyone help me with the full correct command?
Thanks a lot.

Kind regards,
John
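A sketch of steps 1-3 under the setup above (the name of the return variable, Return, is an assumption; note that collapse replaces the data in memory, so save a copy first):

Code:
* one row per (group, sex) with the mean return
collapse (mean) ret=Return, by(group sex)
* wide layout: ret0 = women, ret1 = men
reshape wide ret, i(group) j(sex)
gen diff = ret1 - ret0
* paired t-test across the comparable groups
ttest ret1 == ret0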



Omitted variable bias

Hi folks!

ovtest and linktest suggest that my regression suffers from omitted variable bias. I've tried to include some other variables, and although the coefficients of the significant variables do not change, the tests show that the problem is not solved. I know this may sound stupid, but do you know of any way to tackle omitted variable bias if I do not know which variable causes it?

Thank you beforehand!

Dates as column headers

Dear Stata users

I have a dataset where the column headers are dates. Each date's column lists the stocks that are in a portfolio on that date.

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input str12(A B C D)
"KR7001511005" "KR7025830001" "KR7001380005" "KR7033250002"
"KR7074610007" "KR7018500009" "KR7033250002" "KR7007160005"
"KR7011780004" "KR7001250000" "KR7001250000" "KR7003961000"
"KR7011781002" "KR7001380005" "KR7009380007" "KR7001250000"
end
When I try to import the data into Stata, the dates are converted into letters (A, B, C, etc.).

I need to do two things:
1. Keep dates as column headers.
2. Reshape the data into panel format. I would normally use reshape long var (in my case the stub should be the date), with an i() variable (which I do not really have).

Any thoughts on how to implement this?

Thanks a lot in advance.
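A sketch of one possible route, assuming the source is an Excel file and the dates arrive as text in the first row (the file and stub names are placeholders):

Code:
import excel using portfolios.xlsx, clear    // row 1 (the dates) becomes data
foreach v of varlist _all {
    local d = `v'[1]                         // read the date from row 1
    rename `v' stock`=strtoname("`d'")'      // fold it into the variable name
}
drop in 1
gen long row = _n                            // artificial i() variable
reshape long stock, i(row) j(date) string
drop if missing(stock)
drop row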

Reading Stata 14 mmat-file in stata 13

How do I read an mmat-file created with -mata matsave- in Stata 14 into Stata 13?
I get error 692 (file I/O error on read) when I use -mata matuse- in Stata 13.

I know you can use -saveold- to save .dta files in older Stata formats, but how does this work for the mmat format?

bugfixing why initial values are not feasible for optimize()

Hi,
I am using Stata 14.1 MP on Mac and Unix, and the ado-file I painstakingly tested on mock data fails upon its first encounter with enemy fire (real data). As you might be aware, Mata can be cryptic with its error messages, and I am currently looking for strategies to understand what is wrong with the code (or the data). Below is the Mata code; I printed the parameter values computed just before optimize() would (presumably) evaluate the objective function for the first time and fail. Those values are:

muhat0: 0.5536
C: 887.95
p: 2975
varmu: 0.034
gammahat0: 6.27

These are similar to those in my simulation (muhat0: .501235, C: 2.866523, p: 316, varmu: .0051211, gammahat0: 47.8172), so I am not sure what is "infeasible" about these starting values.

FYI, the code is part of an adaptation of semiparametric SURE shrinkage of binomial means; cf. "Optimal Shrinkage Estimation of Mean Parameters in Family of Distributions with Quadratic Variance" by Xie, Kou and Brown.


Thanks for any thoughts,

Laszlo

Code:
mata:

void function minsureNEFQVF2(    string scalar rname,
                                string scalar Yname,
                                string scalar Wname,
                                string scalar gammahatname,
                                string scalar muhatname,
                                string scalar tousename)
{
real vector r
st_view(r,.,rname,tousename)
real rowvector nu
nu = (0, 1, - 1)
real vector Y
st_view(Y,.,Yname,tousename)
real vector W
if (Wname == "") W = J(length(r),1,1)
else  st_view(W,.,Wname,tousename)

real scalar nu0, nu1, nu2
nu0 = nu[1]
nu1 = nu[2]
nu2 = nu[3]

real vector mu2hat
mu2hat = (r :* Y :^ 2 :- nu1 :* Y :- nu0) :/ (r :+ nu2)

real scalar muhat0, muhat, C, p, varmu, gammahat0, gammahat
muhat0 = mean(Y)
C = quadsum(1 :/ r)
p = length(r)
varmu = ((quadsum(Y :^ 2) :- (nu0 :+ nu1 :* muhat0) :* C :- muhat0:^2 :* (p :+ nu2 :* C))) :/ (p :+ nu2 :* C)
gammahat0 = max( (epsilon(1), (nu0 :+ nu1 :* muhat0 :+ nu2 :* muhat0 :^2) :/ varmu :+ nu2) )

printf("muhat0 is %9.0g, C is %9.0g, p is %9.0g, varmu is %9.0g, gammahat0 is %9.0g.",muhat0,C,p,varmu,gammahat0)

transmorphic matrix S
S = optimize_init()
optimize_init_which(S ,"min")
optimize_init_evaluator(S, &surerisk()  )
optimize_init_evaluatortype(S, "gf0")
optimize_init_params(S, gammahat0)
optimize_init_argument(S,1,r)
optimize_init_argument(S,2,Y)
optimize_init_argument(S,3,W)
optimize_init_argument(S,4,mu2hat)
optimize_init_singularHmethod(S,"hybrid")
optimize_init_technique(S, "nr dfp bfgs bhhh")
optimize(S)
gammahat = optimize_result_params(S)

muhat = quadsum((gammahat :/ (r :+ gammahat)) :^ 2 :* Y :* W) / quadsum((gammahat :/ (r :+ gammahat)) :^ 2 :* W) 

st_numscalar(muhatname,muhat)
st_numscalar(gammahatname,gammahat)
}

void function surerisk(    todo,
                        real scalar r0,
                        real vector r,
                        real vector Y,
                        real vector W,
                        real vector mu2hat,
                        real vector res,
                        g,
                        H)
{
        real scalar mu0
        if (r0 < 0) res = J(rows(r),1,-r0)
        else
        {
           mu0 = quadsum((r0 :/ (r :+ r0)) :^ 2 :* Y :* W) / quadsum((r0 :/ (r :+ r0)) :^ 2 :* W)
           res = (((Y :* r :+ r0 :* mu0) :/ (r :+ r0)) :^ 2 - 2 :* r0 :* mu0 :/ (r :+ r0) :* Y - (r :- r0) :/ (r :+ r0) :* mu2hat) :* W
        }
}

end
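On general strategies: one pattern that often helps is to raise the trace level and call _optimize() (which returns an error code instead of aborting), then print optimize_result_errortext(). A small, self-contained toy, separate from the code above:

Code:
mata:
// toy objective with a known optimum at p = 2
void myeval(todo, real rowvector p, v, g, H)
{
    v = -(p[1] - 2)^2
}
S = optimize_init()
optimize_init_evaluator(S, &myeval())
optimize_init_params(S, 0)
optimize_init_tracelevel(S, "step")      // print parameters at every step
if (_optimize(S)) {                      // nonzero return code = failure
    errprintf("%s\n", optimize_result_errortext(S))
}
else printf("optimum at %g\n", optimize_result_params(S))
end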

how to save output directly in wide format?

Hello all,
I would like to draw a graph showing observed and estimated glioma incidence rates.
Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input int dg_y float(age_group sex) byte allglioma long pop float(Agecat Agecat2 year Cat year5 Cat2 inc_rate) int ESP2013 byte _merge
2012  1 1  1 148802 1 1 42 20121 9 91   .672034 5000 3
1998  1 1  3 149519 1 1 28 19981 6 61  2.006434 5000 3
2008  1 1  3 144198 1 1 38 20081 8 81 2.0804727 5000 3
2000  1 1  2 142424 1 1 30 20001 7 71 1.4042577 5000 3
1972  1 1  6 153917 1 1  2 19721 1 11  3.898205 5000 3
1995  1 1  4 159404 1 1 25 19951 6 61  2.509347 5000 3
1981  1 1  6 155512 1 1 11 19811 3 31  3.858223 5000 3
1990  1 1  1 152712 1 1 20 19901 5 51  .6548274 5000 3
1983  1 1  2 158042 1 1 13 19831 3 31 1.2654864 5000 3
2006  1 1  2 140984 1 1 36 20061 8 81 1.4186007 5000 3
1980  1 1  4 156176 1 1 10 19801 3 31  2.561213 5000 3
1992  1 1  7 158895 1 1 22 19921 5 51  4.405425 5000 3
2011  1 1  1 148096 1 1 41 20111 9 91  .6752377 5000 3
2013  1 1  1 148105 1 1 43 20131 9 91  .6751966 5000 3
1973  1 1  6 147585 1 1  3 19731 1 11  4.065454 5000 3
2007  1 1  3 142701 1 1 37 20071 8 81 2.1022978 5000 3
2010  1 1  5 147378 1 1 40 20101 9 91 3.3926365 5000 3
1993  1 1  9 159871 1 1 23 19931 5 51  5.629539 5000 3
2009  1 1  1 145917 1 1 39 20091 8 81  .6853211 5000 3
2005  1 1  1 139353 1 1 35 20051 8 81  .7176021 5000 3
1974  1 1  4 147055 1 1  4 19741 1 11  2.720071 5000 3
1987  1 1  5 154084 1 1 17 19871 4 41  3.244983 5000 3
1977  1 1  0 152450 1 1  7 19771 2 21         0 5000 3
1996  1 1  6 156836 1 1 26 19961 6 61 3.8256524 5000 3
1984  1 1  2 158847 1 1 14 19841 3 31  1.259073 5000 3
1979  1 1  1 156755 1 1  9 19791 2 21  .6379382 5000 3
2001  1 1  0 140347 1 1 31 20011 7 71         0 5000 3
1994  1 1  5 160691 1 1 24 19941 5 51  3.111562 5000 3
1999  1 1  4 145763 1 1 29 19991 6 61  2.744181 5000 3
1982  1 1  3 156226 1 1 12 19821 3 31  1.920295 5000 3
1975  1 1  3 148031 1 1  5 19751 2 21 2.0266025 5000 3
1988  1 1  4 152235 1 1 18 19881 4 41  2.627517 5000 3
1970  1 1  6 166273 1 1  0 19701 1 11 3.6085234 5000 3
1989  1 1  9 151347 1 1 19 19891 4 41  5.946599 5000 3
1986  1 1  6 157166 1 1 16 19861 4 41 3.8176196 5000 3
1997  1 1  3 153447 1 1 27 19971 6 61 1.9550724 5000 3
1985  1 1  8 158748 1 1 15 19851 4 41  5.039433 5000 3
1991  1 1  2 155468 1 1 21 19911 5 51 1.2864383 5000 3
2003  1 1  3 138217 1 1 33 20031 7 71    2.1705 5000 3
1978  1 1  6 155991 1 1  8 19781 2 21  3.846376 5000 3
2002  1 1  3 138207 1 1 32 20021 7 71 2.1706572 5000 3
1976  1 1  5 149808 1 1  6 19761 2 21 3.3376055 5000 3
1971  1 1  3 160437 1 1  1 19711 1 11  1.869893 5000 3
2004  1 1  3 138405 1 1 34 20041 7 71 2.1675518 5000 3
1981  1 2  4 162438 1 1 11 19811 3 31  2.462478 5000 3
1979  1 2  6 164208 1 1  9 19791 2 21  3.653902 5000 3
1977  1 2  4 160122 1 1  7 19771 2 21  2.498095 5000 3
1994  1 2  5 167011 1 1 24 19941 5 51  2.993815 5000 3
2003  1 2  3 144920 1 1 33 20031 7 71 2.0701077 5000 3
2002  1 2  5 144988 1 1 32 20021 7 71  3.448561 5000 3
1997  1 2  2 159286 1 1 27 19971 6 61  1.255603 5000 3
1975  1 2  9 155364 1 1  5 19751 2 21  5.792848 5000 3
2006  1 2  4 147143 1 1 36 20061 8 81  2.718444 5000 3
2013  1 2  4 154901 1 1 43 20131 9 91 2.5822945 5000 3
1972  1 2  5 160946 1 1  2 19721 1 11  3.106632 5000 3
2001  1 2  2 146445 1 1 31 20011 7 71 1.3657005 5000 3
1980  1 2  9 163355 1 1 10 19801 3 31  5.509473 5000 3
1973  1 2  5 154914 1 1  3 19731 1 11  3.227597 5000 3
1991  1 2 11 162288 1 1 21 19911 5 51  6.778073 5000 3
1990  1 2  8 159773 1 1 20 19901 5 51  5.007104 5000 3
1978  1 2  3 163543 1 1  8 19781 2 21   1.83438 5000 3
1985  1 2 14 165994 1 1 15 19851 4 41   8.43404 5000 3
1976  1 2  6 157791 1 1  6 19761 2 21  3.802498 5000 3
2009  1 2  5 152197 1 1 39 20091 8 81  3.285216 5000 3
2010  1 2  3 154243 1 1 40 20101 9 91  1.944983 5000 3
1971  1 2  2 167275 1 1  1 19711 1 11  1.195636 5000 3
2008  1 2  2 150804 1 1 38 20081 8 81 1.3262248 5000 3
1988  1 2  5 158829 1 1 18 19881 4 41   3.14804 5000 3
2004  1 2  5 145288 1 1 34 20041 7 71   3.44144 5000 3
2005  1 2  4 145596 1 1 35 20051 8 81  2.747328 5000 3
1974  1 2  2 153908 1 1  4 19741 1 11 1.2994776 5000 3
1984  1 2  4 166464 1 1 14 19841 3 31  2.402922 5000 3
1993  1 2  8 166569 1 1 23 19931 5 51  4.802814 5000 3
2007  1 2  3 148927 1 1 37 20071 8 81 2.0144098 5000 3
1982  1 2  5 163570 1 1 12 19821 3 31 3.0567954 5000 3
1989  1 2  6 158072 1 1 19 19891 4 41  3.795739 5000 3
1996  1 2  5 163336 1 1 26 19961 6 61 3.0611744 5000 3
1970  1 2  6 173171 1 1  0 19701 1 11 3.4647834 5000 3
2012  1 2  2 155281 1 1 42 20121 9 91 1.2879876 5000 3
1992  1 2  8 165864 1 1 22 19921 5 51  4.823229 5000 3
1995  1 2  8 165466 1 1 25 19951 6 61   4.83483 5000 3
1986  1 2  3 164400 1 1 16 19861 4 41 1.8248175 5000 3
2011  1 2  6 154881 1 1 41 20111 9 91  3.873942 5000 3
1987  1 2 12 160796 1 1 17 19871 4 41  7.462872 5000 3
2000  1 2  0 148851 1 1 30 20001 7 71         0 5000 3
1983  1 2  4 165384 1 1 13 19831 3 31  2.418614 5000 3
1998  1 2  5 155417 1 1 28 19981 6 61  3.217151 5000 3
1999  1 2  4 151759 1 1 29 19991 6 61  2.635758 5000 3
1977  2 1  2 155019 1 1  7 19771 2 21 1.2901645 5500 3
1972  2 1  3 183210 1 1  2 19721 1 11 1.6374652 5500 3
1978  2 1  6 147582 1 1  8 19781 2 21 4.0655365 5500 3
2012  2 1  4 145274 1 1 42 20121 9 91  2.753418 5500 3
1985  2 1  2 158203 1 1 15 19851 4 41 1.2641985 5500 3
1983  2 1  3 157521 1 1 13 19831 3 31  1.904508 5500 3
1973  2 1  1 180080 1 1  3 19731 1 11 .55530876 5500 3
2010  2 1  4 141523 1 1 40 20101 9 91  2.826396 5500 3
1982  2 1  4 153281 1 1 12 19821 3 31  2.609586 5500 3
1975  2 1  3 168667 1 1  5 19751 2 21 1.7786527 5500 3
2002  2 1  2 154304 1 1 32 20021 7 71 1.2961427 5500 3
2005  2 1  2 143522 1 1 35 20051 8 81 1.3935146 5500 3
1994  2 1  3 153477 1 1 24 19941 5 51 1.9546903 5500 3
1976  2 1  0 161968 1 1  6 19761 2 21         0 5500 3
1988  2 1  8 159008 1 1 18 19881 4 41  5.031193 5500 3
1984  2 1  5 158695 1 1 14 19841 3 31  3.150698 5500 3
1993  2 1  3 154385 1 1 23 19931 5 51  1.943194 5500 3
1995  2 1  2 154672 1 1 25 19951 6 61  1.293059 5500 3
2006  2 1  4 141563 1 1 36 20061 8 81  2.825597 5500 3
1997  2 1  3 160223 1 1 27 19971 6 61 1.8723904 5500 3
1979  2 1  1 146043 1 1  9 19791 2 21  .6847298 5500 3
2000  2 1  3 160193 1 1 30 20001 7 71  1.872741 5500 3
1991  2 1  7 158514 1 1 21 19911 5 51 4.4160137 5500 3
1981  2 1  4 149310 1 1 11 19811 3 31   2.67899 5500 3
1989  2 1  6 159729 1 1 19 19891 4 41 3.7563624 5500 3
2007  2 1  2 139786 1 1 37 20071 8 81 1.4307585 5500 3
2008  2 1  2 140101 1 1 38 20081 8 81 1.4275416 5500 3
1987  2 1  8 157537 1 1 17 19871 4 41  5.078172 5500 3
2001  2 1  3 157671 1 1 31 20011 7 71  1.902696 5500 3
1970  2 1  6 186729 1 1  0 19701 1 11  3.213213 5500 3
1992  2 1  2 155746 1 1 22 19921 5 51  1.284142 5500 3
2011  2 1  0 143394 1 1 41 20111 9 91         0 5500 3
1998  2 1  5 160816 1 1 28 19981 6 61  3.109143 5500 3
1974  2 1  2 174496 1 1  4 19741 1 11 1.1461581 5500 3
2013  2 1  6 147058 1 1 43 20131 9 91  4.080023 5500 3
1980  2 1  5 146648 1 1 10 19801 3 31  3.409525 5500 3
1996  2 1  3 157050 1 1 26 19961 6 61 1.9102197 5500 3
1986  2 1  8 157233 1 1 16 19861 4 41   5.08799 5500 3
2003  2 1  5 150392 1 1 33 20031 7 71  3.324645 5500 3
2009  2 1  2 140419 1 1 39 20091 8 81 1.4243087 5500 3
1999  2 1  6 161544 1 1 29 19991 6 61  3.714158 5500 3
1971  2 1  6 184936 1 1  1 19711 1 11  3.244366 5500 3
1990  2 1  9 159751 1 1 20 19901 5 51  5.633768 5500 3
2004  2 1  1 146723 1 1 34 20041 7 71  .6815564 5500 3
2010  2 2  3 147824 1 1 40 20101 9 91 2.0294404 5500 3
1989  2 2  8 167346 1 1 19 19891 4 41  4.780515 5500 3
2002  2 2  1 160159 1 1 32 20021 7 71  .6243795 5500 3
1990  2 2  5 167076 1 1 20 19901 5 51   2.99265 5500 3
2011  2 2  4 149722 1 1 41 20111 9 91  2.671618 5500 3
1976  2 2  1 168774 1 1  6 19761 2 21  .5925083 5500 3
1994  2 2  6 160466 1 1 24 19941 5 51   3.73911 5500 3
1991  2 2  6 165997 1 1 21 19911 5 51  3.614523 5500 3
2009 18 2  0  28087 5 4 39 20094 8 84         0 2500 3
2013 18 2  5  36549 5 4 43 20134 9 94 13.680264 2500 3
1980 18 2  0   7089 5 4 10 19804 3 34         0 2500 3
1999 18 2  1  18549 5 4 29 19994 6 64  5.391126 2500 3
1979 18 2  0   6712 5 4  9 19794 2 24         0 2500 3
1985 18 2  1   9528 5 4 15 19854 4 44 10.495382 2500 3
2000 18 2  0  18825 5 4 30 20004 7 74         0 2500 3
1994 18 2  0  15568 5 4 24 19944 5 54         0 2500 3
end
label values age_group AgeCat
label def AgeCat 1 "0-4", modify
label def AgeCat 2 "5-9", modify
label def AgeCat 3 "10-14", modify
label def AgeCat 4 "15-19", modify
label def AgeCat 5 "20-24", modify
label def AgeCat 6 "25-29", modify
label def AgeCat 7 "30-34", modify
label def AgeCat 8 "35-39", modify
label def AgeCat 9 "40-44", modify
label def AgeCat 10 "45-49", modify
label def AgeCat 11 "50-54", modify
label def AgeCat 12 "55-59", modify
label def AgeCat 13 "60-64", modify
label def AgeCat 14 "65-69", modify
label def AgeCat 15 "70-74", modify
label def AgeCat 16 "75-79", modify
label def AgeCat 17 "80-84", modify
label def AgeCat 18 "85+", modify
label values sex sex
label def sex 1 "Female", modify
label def sex 2 "male", modify
label values year years
label def years 0 "1970", modify
label def years 1 "1971", modify
label def years 2 "1972", modify
label def years 3 "1973", modify
label def years 4 "1974", modify
label def years 5 "1975", modify
label def years 6 "1976", modify
label def years 7 "1977", modify
label def years 8 "1978", modify
label def years 9 "1979", modify
label def years 10 "1980", modify
label def years 11 "1981", modify
label def years 12 "1982", modify
label def years 13 "1983", modify
label def years 14 "1984", modify
label def years 15 "1985", modify
label def years 16 "1986", modify
label def years 17 "1987", modify
label def years 18 "1988", modify
label def years 19 "1989", modify
label def years 20 "1990", modify
label def years 21 "1991", modify
label def years 22 "1992", modify
label def years 23 "1993", modify
label def years 24 "1994", modify
label def years 25 "1995", modify
label def years 26 "1996", modify
label def years 27 "1997", modify
label def years 28 "1998", modify
label def years 29 "1999", modify
label def years 30 "2000", modify
label def years 31 "2001", modify
label def years 32 "2002", modify
label def years 33 "2003", modify
label def years 34 "2004", modify
label def years 35 "2005", modify
label def years 36 "2006", modify
label def years 37 "2007", modify
label def years 38 "2008", modify
label def years 39 "2009", modify
label def years 40 "2010", modify
label def years 41 "2011", modify
label def years 42 "2012", modify
label def years 43 "2013", modify
label values _merge _merge
label def _merge 3 "matched (3)", modify
The data above are incomplete, but they are in exactly the format I have now.


Now, I would like to use these two commands:
Code:
poisson allglioma c.year##Agecat2 if sex==1, exposure(pop) irr
margins Agecat2, at(year=(0(1)42)) vsquish predict(ir)
and save the output in Stata format, in wide form, so that it would be easier to draw graphs of the time series.

Any help would be appreciated. Thank you.
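One possible route (a sketch; the exact variable names that -margins, saving()- writes should be checked with -describe- first):

Code:
poisson allglioma c.year##Agecat2 if sex==1, exposure(pop) irr
margins Agecat2, at(year=(0(1)42)) vsquish predict(ir) saving(preds, replace)
use preds, clear
describe                              // confirm the names before renaming
keep _margin _at1 _m1                 // assumed: prediction, year, age group
rename (_margin _at1 _m1) (ir year agecat)
reshape wide ir, i(year) j(agecat)    // one column of rates per age group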

