Channel: Statalist

Expression builder dialog

Is there any way to make the expression builder dialog larger? The lists are too small to be usable and require constant vertical and horizontal scrolling.
(Stata 15 on Windows 10.)


Average calculation issues

Hi there,

I would like to calculate the average accounts receivable using netaccountsreceivable and lag_net_account_receivable.

However, the average value seems to be incorrect. For example, it is 276860608 for stkcd 1 in 1995; shouldn't it be (309246048+244475184)/2 = 276860616?

Could someone please tell me why this would happen and how to avoid that?

Thanks

The code I used:
Code:
gen avg_account_receivable= (netaccountsreceivable+lag_net_account_receivable)/2
Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input long stkcd int year float(netaccountsreceivable lag_net_account_receivable avg_account_receivable)
1 1990           .           .            .
1 1991           .           .            .
1 1992           .           .            .
1 1993           .           .            .
1 1994   244475184           .            .
1 1995   309246048   244475184    276860608
1 1996   469556608   309246048    389401344
1 1997   534341664   469556608    501949120
1 1998   516883264   534341664    525612480
1 1999   378508704   516883264    447696000
1 2000  -396003232   378508704     -8747264
1 2001    11202902  -396003232   -192400160
1 2002  -762710976    11202902   -375754048
1 2003  -833153216  -762710976   -797932096
1 2004  -874245760  -833153216   -853699456
1 2005  -763604032  -874245760   -818924928
1 2006           .  -763604032            .
1 2007           .           .            .
1 2008 15109592064           .            .
1 2009 35209261056 15109592064  25159426048
1 2010 32229515264 35209261056  33719388160
1 2011 1.84321e+11 32229515264 108275253248
1 2012 99201843200 1.84321e+11 141761413120
1 2013 1.91714e+11 99201843200 145457922048
1 2014 2.56183e+11 1.91714e+11 2.239485e+11
1 2015 3.14259e+11 2.56183e+11  2.85221e+11
1 2016 4.19846e+11 3.14259e+11 3.670525e+11
1 2017 4.25209e+11 4.19846e+11 4.225275e+11
1 2018           . 4.25209e+11            .
2 1991   142333296           .            .
2 1992   125544736   142333296    133939016
2 1993   158023472   125544736    141784096
2 1994   386629024   158023472    272326240
2 1995   650885952   386629024    518757504
2 1996   691276416   650885952    671081216
2 1997   775456832   691276416    733366656
2 1998   528862240   775456832    652159552
2 1999   535368512   528862240    532115392
2 2000   516286848   535368512    525827680
2 2001   477500384   516286848    496893632
2 2002   302297120   477500384    389898752
2 2003   365968992   302297120    334133056
2 2004   838037504   365968992    602003264
2 2005  1082277376   838037504    960157440
2 2006  1035615424  1082277376   1058946432
2 2007   864883008  1035615424    950249216
2 2008   922774848   864883008    893828928
2 2009   713191936   922774848    817983360
2 2010  1594024576   713191936   1153608192
2 2011  1514813824  1594024576   1554419200
2 2012  1886548480  1514813824   1700681216
2 2013  3078969856  1886548480   2482759168
2 2014  1894071808  3078969856   2486520832
2 2015  2510653184  1894071808   2202362368
2 2016  2075256832  2510653184   2292955136
2 2017  1432733952  2075256832   1753995392
2 2018  1586180736  1432733952   1509457408
3 1991    96595616           .            .
3 1992   183449968    96595616    140022784
3 1993   389976800   183449968    286713376
3 1994   533173248   389976800    461575040
3 1995   652192256   533173248    592682752
3 1996   949415296   652192256    800803776
3 1997   710684608   949415296    830049920
3 1998   566756352   710684608    638720512
3 1999   711459392   566756352    639107840
3 2000   522822464   711459392    617140928
3 2001   344030464   522822464    433426464
4 1991    18389532           .            .
4 1992    53118160    18389532     35753848
4 1993   203273392    53118160    128195776
4 1994   191898672   203273392    197586032
4 1995   279380832   191898672    235639744
4 1996   118328432   279380832    198854624
4 1997   126063984   118328432    122196208
4 1998    93838936   126063984    109951456
4 1999    55299972    93838936     74569456
4 2000   222196064    55299972    138748016
4 2001    31434708   222196064    126815384
4 2002    62747196    31434708     47090952
4 2003    88416992    62747196     75582096
4 2004    98330824    88416992     93373904
4 2005    76457512    98330824     87394168
4 2006    62786196    76457512     69621856
4 2007     7229633    62786196     35007916
4 2008     4995476     7229633      6112554
4 2009     6860687     4995476      5928081
4 2010   4296549.5     6860687      5578618
4 2011     4540338   4296549.5      4418444
4 2012     8129105     4540338      6334721
4 2013     3760743     8129105      5944924
4 2014     2742071     3760743      3251407
4 2015   1726513.8     2742071    2234292.5
4 2016     1706211   1726513.8    1716362.5
4 2017     9456606     1706211      5581409
4 2018    27268320     9456606     18362464
5 1992   155349056           .            .
5 1993   166755648   155349056    161052352
5 1994   200326880   166755648    183541264
5 1995   127329032   200326880    163827952
end
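The discrepancy comes from floating-point precision: by default, gen creates float variables, which carry only about 7 significant digits, so 276860616 cannot be stored exactly and is rounded to 276860608. A sketch of the usual fix, assuming the variable names above:

```stata
* floats hold ~7 significant digits; doubles hold ~16
gen double avg_account_receivable = (netaccountsreceivable + lag_net_account_receivable)/2

* alternatively, make double the default storage type for new numeric variables
set type double
```

Note that netaccountsreceivable itself is stored as float in the dataex listing, so values of that magnitude may already have been rounded on import; recasting the source variables to double before computing would not recover digits lost earlier.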

Random Effect Autocorrelation Test

Dear Good People,

I am currently shifting to using random effects for my data, and I would like to know if there is a specific command to run an autocorrelation test under random effects. Previously I used the command "xtserial y1 x1 x2". Also, I use the robust option with my random-effects model; is there any difference in the command? Thank you!
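For what it's worth, xtserial implements Wooldridge's test for serial correlation in the idiosyncratic errors of the linear panel model, so the same test command is commonly applied whether one subsequently estimates fixed or random effects; the robust option belongs on the estimation command, not on the test. A hedged sketch, assuming the variable names from the post:

```stata
* Wooldridge test for serial correlation in panel data (ssc install xtserial)
xtserial y1 x1 x2

* random-effects estimation with cluster-robust standard errors
xtreg y1 x1 x2, re vce(robust)
```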

Sample size estimation for comparison of three proportions: Control, Treatment1 and Treatment2

Dear StataList,
How can I calculate the sample size requirements for proportions in three groups, namely Control, Treatment1, and Treatment2, when the outcome proportions typically range between 2% and 20%? Is it necessary to compare all three groups jointly, or is it sufficient to compare Treatment1 vs Control, Treatment2 vs Control, and Treatment1 vs Treatment2? I wish to establish that at least one of the treatment groups differs statistically from the Control group, and that Treatment1 differs significantly from Treatment2. Ideally, I would like to detect statistically significant differences between Control and each of the treatment groups, and between Treatment1 and Treatment2.
I will appreciate your expert comments.
Dora Pearce
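One common approach (a sketch, not the only design) is to size each pairwise comparison with power twoproportions, apply a multiplicity adjustment such as Bonferroni (alpha/3 for three comparisons), and take the largest required n per group. The proportions below are illustrative, not from the post:

```stata
* two-sample test of proportions, Bonferroni-adjusted alpha for 3 comparisons
power twoproportions 0.02 0.10, alpha(0.0167) power(0.8)

* explore a range of treatment-group proportions in one table
power twoproportions 0.02 (0.05 0.10 0.15 0.20), alpha(0.0167) power(0.8) table
```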

How to display the P value of the mediation with khb command?

Hi all,

Recently I read a paper published in JAMA Pediatrics (doi:10.1001/jamapediatrics.2019.1212) in which the authors provide the P values of the mediating variables using the -khb- command in Stata (Table 2, listed below).


I want to estimate the P values in the Summary of confounding part (Conf_Pct column) and the Components of Difference part (P_Reduced column), as illustrated in Table 2, but I do not know how.

The -khb- package is user-written and can be installed with:

Code:
. net sj 13-1 st0236_2
. net install st0236_2   // INSTALLATION FILES
. net get st0236_2       // ANCILLARY FILES, including dlsy_khb.dta and khb.do
Below are my code and results; can anyone offer a clue?

Code:
. use dlsy_khb.dta

. khb logit univ fses || abil intact boy, disentangle summary verbose

(omitted)

Logistic regression                             Number of obs     =      1,896
                                                LR chi2(4)        =     216.87
                                                Prob > chi2       =     0.0000
Log likelihood = -468.31516                     Pseudo R2         =     0.1880

------------------------------------------------------------------------------
        univ |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        fses |   .3817324   .0778061     4.91   0.000     .2292353    .5342295
        abil |   1.065516    .106775     9.98   0.000     .8562405    1.274791
      intact |    1.08391   .7386558     1.47   0.142    -.3638292    2.531648
         boy |   .9821406   .1848351     5.31   0.000     .6198704    1.344411
       _cons |  -4.462997   .7479123    -5.97   0.000    -5.928878   -2.997116
------------------------------------------------------------------------------

(omitted)

Logistic regression                             Number of obs     =      1,896
                                                LR chi2(4)        =     216.87
                                                Prob > chi2       =     0.0000
Log likelihood = -468.31516                     Pseudo R2         =     0.1880

------------------------------------------------------------------------------
        univ |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        fses |   .5805281   .0786111     7.38   0.000     .4264531    .7346031
    __000001 |   1.065516    .106775     9.98   0.000     .8562405    1.274791
    __000002 |    1.08391   .7386558     1.47   0.142    -.3638292    2.531648
    __000003 |   .9821406   .1848351     5.31   0.000     .6198704    1.344411
       _cons |  -2.945969    .124697   -23.63   0.000    -3.190371   -2.701568
------------------------------------------------------------------------------

Decomposition using the KHB-Method

Model-Type:  logit                                 Number of obs     =    1896
Variables of Interest: fses                        Pseudo R2         =    0.19
Z-variable(s): abil intact boy
------------------------------------------------------------------------------
        univ |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
fses         |
     Reduced |   .5805281   .0786111     7.38   0.000     .4264531    .7346031
        Full |   .3817324   .0778061     4.91   0.000     .2292353    .5342295
        Diff |   .1987956   .0359394     5.53   0.000     .1283557    .2692355
------------------------------------------------------------------------------

Summary of confounding

        Variable | Conf_ratio    Conf_Pct   Resc_Fact  
    -------------+-------------------------------------
            fses |  1.5207722       34.24   1.1317064  
    ---------------------------------------------------

Components of Difference

      Z-Variable |      Coef    Std_Err     P_Diff  P_Reduced  
    -------------+---------------------------------------------
    fses         |                                            
            abil |  .1661177   .0301003      83.56      28.61  
          intact |   .020142   .0144611      10.13       3.47  
             boy |  .0125359    .011524       6.31       2.16  
    -----------------------------------------------------------
Thank you all in advance!

The Table 2 detailed in the following article:
Easterlin MC, Chung PJ, Leng M, Dudovitz R. Association of Team Sports Participation With Long-term Mental Health Outcomes Among Individuals Exposed to Adverse Childhood Experiences. JAMA Pediatr. Published online May 28, 2019.
https://jamanetwork.com/journals/jam...stract/2734743

The user-written program -khb-, created by Ulrich Kohler, Kristian Bernt Karlson, and Anders Holm, and detailed in the following article:
Kohler, U., K.B. Karlson, and A. Holm. 2011. "Comparing Coefficients of Nested Nonlinear Probability Models." Stata Journal, 11(3): 420-38.
https://www.stata-journal.com/sjpdf....iclenum=st0236

renaming variables using values they hold

Hey everyone,

Is there a simple command (or few lines of code) to rename variables based on a specific string value they hold?

In more detail, I have an election results dataset in which every variable is a "party", and the values it holds are the number of votes for that party. The problem is that the variables are named with the letters that represent each party on the ballot. I would like to rename the variables so that they indicate the parties' names. To do that, I have another small table that has the same unwanted names (the representing letters) as variable names, but under each of them there is a string value with the party's real name.

Any ideas how to deal with this issue?

Much appreciation,
Ben
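With the small lookup table in memory (one observation whose values are the party names), one way is to loop over the variables and rename each to a cleaned version of its first value. A sketch, assuming the lookup table is loaded and the names sit in observation 1:

```stata
* loop over all variables; rename each to its value in observation 1,
* converted to a legal Stata name with strtoname()
foreach v of varlist _all {
    local newname = strtoname(`v'[1])
    rename `v' `newname'
}
```

The resulting names could then be applied to the main dataset with a second rename loop, or you could keep the letter names and attach the party names as variable labels instead (label variable) if the real names are long.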

Code of Patell Z-statistic

Good morning everyone,

I'd like to derive a Z-statistic by means of the Patell test (https://www.eventstudytools.com/sign...e-tests#Patell). Unfortunately, I could not find an option within Stata to perform the test, nor any material on the forum. Does anyone have a command or code to do this? I have tried the code below, but it does not seem to produce good values (CAR: cumulative abnormal returns; AR: abnormal return; CAAR: cumulative average abnormal return).

Thanks in advance,

Paul



Code:
gen temp123=1
                levelsof temp123, local(123)
                foreach id of local 123{
                    by ticker: egen s2_short_`id' = sd(logret) if estimation_window_short`id'==1
                    by ticker: egen L_short_`id' = count(logret) if estimation_window_short`id'==1
                    by ticker: egen aRM_short_`id' = mean(logret) if estimation_window_short`id'==1
                    by ticker: egen R_short_`id' = max(logret) if eventid==`id' &estimation_window_short`id'==1
                    }
                            
                foreach id of local 123{
                    by ticker: gen Rsum_short_t_`id' = logret-aRM_short_`id' if estimation_window_short`id'==1
                    }
                                                
                foreach id of local 123{
                    by ticker: egen Rsum_short_`id' = sum(Rsum_short_t_`id'^2) if estimation_window_short`id'==1
                    }
                    
                foreach id of local 123{
                    drop Rsum_short_t_`id'
                    }    
                                
            //3.10.1.2 Part-by-part calculation of statistic                        
                //(Rmt-ARm)^2
                foreach id of local 123{
                    gen part1_s_`id' = R_short_`id'- aRM_short_`id' if estimation_window_short`id'==1
                    replace part1_s_`id' = part1_s_`id'^2  if estimation_window_short`id'==1
                    }
                
                //(Rmt-ARm)^2/sum(Rmt-ARm)^2
                foreach id of local 123{
                    gen part2_s_`id' = part1_s_`id'/Rsum_short_`id' if estimation_window_short`id'==1
                    }
                
                //1+(1/L)+(part2)
                foreach id of local 123{
                    gen part3_s_`id' = 1+(1/L_short_`id')+part2_s_`id' if estimation_window_short`id'==1
                    }
                
                //s2(part3)^0.5
                foreach id of local 123{
                    gen part4_s_`id' = sqrt(s2_short_`id'*part3_s_`id') if estimation_window_short`id'==1
                    }
                
                foreach id of local 123{
                    by ticker: replace part4_s_`id' = part4_s_`id'[_n-1] if missing(part4_s_`id')&event_window_short`id'==1 
                    }
                            
            //3.10.1.3 Standardize abnormal returns Patell short
                foreach id of local 123{
                    gen AR_sd_s_pat`id' =. if estimation_window_short`id'==1
                    }    
                    
                foreach id of local 123{
                    replace AR_sd_s_pat`id' = abnormal_return_short`id'/part4_s_`id' if event_window_short`id'==1
                    }
                        
            //3.10.1.4 Drop variables
            foreach id of local 123{
                drop R_short_`id' aRM_short_`id' L_short_`id' s2_short_`id' part1_s_`id' part2_s_`id' part3_s_`id' part4_s_`id' 
                }
                
            save temp11.dta, replace
            
        //3.10.2 Cummulate standardized Patell abnormal returns short
            use temp11.dta, clear
            foreach id of local 123 {
                    by ticker: egen CAR_pat_t_s`id'= sum(AR_sd_s_pat`id') if event_window_short`id'==1
                    }    
                    
            foreach id of local 123 {
                    by ticker: gen CAR_pat_s`id' = CAR_pat_t_s`id'*(1/(sqrt(3))) if event_window_short`id'==1
                    }    
                        
        //3.10.3 Derive Patell Z-statistic
            foreach id of local 123 {
                    egen CAAR_P_s`id'= mean(CAR_pat_s`id') if eventid==`id'
                    }
                                    
            foreach id of local 123 {
                    egen No_CAAR_P_s`id'= count(CAAR_P_s`id')
                    replace No_CAAR_P_s`id'= sqrt(No_CAAR_P_s`id')
                    }                        
                                                
            foreach id of local 123 {
                    gen test_P_s`id'= CAAR_P_s`id'* No_CAAR_P_s`id' 
                    }

Taking the average of observations within a specific date range

Hi,

I want to compute the average of a variable (teamsize) for observations within a specific time period, using a date variable I have. Specifically, I have data on companies and their teams at different points in time. I want to calculate the average team size for a given company within a given time period, always between the current date and 365 days prior to the observation.


. bysort company: egen mean(teamsize) if inrange()

is my best guess (sorry, new to Stata and related programs in general!). I do not know how to specify inrange() so that it takes the average of all observations with a date (the variable is DATE, formatted as %td) in the range from the observation date back 365 days. For example, if the team size was 55 on June 1st, 2011, I want to create a variable with the mean of all team sizes from June 1st, 2010 to June 1st, 2011, including the team size of June 1st, 2011.

It would be awesome if someone could help me out!

Best,
Julian
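egen cannot look across observations this way, but rangestat (from SSC) computes statistics over a window defined relative to another variable in each observation. A sketch, assuming the company, teamsize, and DATE variables described above:

```stata
* install once: ssc install rangestat
* mean of teamsize over the window [DATE-365, DATE], within each company
rangestat (mean) mean_teamsize = teamsize, interval(DATE -365 0) by(company)
```

The interval() bounds are inclusive, so the current observation's own team size is part of the mean, as requested.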

Problems creating Herfindahl-Hirschman Index

Hello everyone,

I want to create the Herfindahl-Hirschman Index for my data. I have quarterly data for different banks, and I want the HHI for market share.

I tried the command

hhi marketshare, by(bank_id date)

but it only gives me 1 for every observation. Leaving out the date gives me results, but I want the date included (the same happens with the hhi5 command). When I try the entropyetc command, it says matsize too small, even after I increased matsize.

Does anyone have a suggestion?

Thank you in advance
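Since the HHI is just the sum of squared market shares within each market period, it can be computed directly without a user-written command. A sketch, assuming marketshare is a fraction per bank per quarter (grouping by bank_id and date leaves one bank per group, which is presumably why hhi returned 1):

```stata
* squared share of each bank, then summed within each quarter
gen sharesq = marketshare^2
bysort date: egen hhi_date = total(sharesq)
drop sharesq
```

If marketshare is in percent rather than fractions, divide by 100 first (or the index will be scaled by 10,000).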

Variable Labels


Hello dear forum members, I am new here and currently learning Stata. I am a beginner and have only one question; maybe there is already a thread on it, and I apologize if there is a similar one. Now my problem: I have imported an Excel file with a list of buildings (number of floors, year of construction, and buildings on the property). The task is to delete all observations with more than 10 floors. Here is my code:
Code:
drop if STORY> 10
So far so good. But why are they not deleted? Where I expect a "35" I see a "5". Do you see what I mean? The AGE variable is also wrong throughout: it should be computed as 2019 - YRB, but I do not get the right values. Maybe the pros can help me a bit and give me a hint about what I am doing wrong. I would be very thankful. Yours sincerely
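The symptoms suggest STORY (and possibly YRB) were imported from Excel as string variables, so comparisons are alphabetical rather than numeric. A sketch of the usual fix, assuming those variable names:

```stata
* convert string variables that hold numbers to numeric
destring STORY YRB, replace

* now the numeric comparison works as intended
* (the missing() guard matters: missing counts as larger than any number)
drop if STORY > 10 & !missing(STORY)
gen AGE = 2019 - YRB
```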

Values of multiple columns in one column using a loop

Hi everyone,

I'd like to place all values of multiple columns in one column. I tried stack using a loop:

Code:
foreach id of local temp {
            stack cum_abnormal_return_short`id', into(cum_abnormal_return_short)
            stack cum_abnormal_return_med`id', into(cum_abnormal_return_med)
            stack cum_abnormal_return_long`id', into(cum_abnormal_return_long)
            }
Unfortunately, an error occurs because stack replaces the dataset in memory. Is there any other way?

Thanks,

Paul

Create fiscal years from years and months

Dear Statalist,


I would like to create a fiscal year variable that takes values like 2005-06, 2006-07, and so on up to 2018-19, from the variables year and month given below.
Code:
year    month
2006    Jan
2006    Feb
2006    Mar
2006    Apr
2006    May
2006    Jun
2006    Jul
2006    Aug
2006    Sep
2006    Oct
2006    Nov
2006    Dec
2007    Jan
2007    Feb
2007    Mar
2007    Apr
2007    May
2007    Jun
2007    Jul
---------
---------
2019   Mar
2019   Apr
Can someone help me code this?
Thanks.
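Assuming an April-March fiscal year (consistent with 2018-19 ending in Mar 2019), one sketch is to convert the month string to a month number and shift Jan-Mar back one year:

```stata
* month abbreviation ("Jan") to month number 1-12, via a monthly date
gen m = month(dofm(monthly(month + " 2000", "MY")))

* fiscal year runs Apr-Mar: Jan-Mar belong to the previous fiscal year
gen fystart = year - (m <= 3)
gen fy = string(fystart) + "-" + string(mod(fystart + 1, 100), "%02.0f")
drop m fystart
```

For example, 2006 Apr and 2007 Jan both map to "2006-07", and 2006 Jan maps to "2005-06".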

Propensity Score matching: how to define the dependent variable

Hello everyone, we are trying to use propensity score matching (PSM) to find a suitable control group. Our dataset is at the exporter (China)-product (HS6)-destination (EU)-year level for 2000-2013 and contains over 650,000 observations. The treatment group is the products that are subject to EU antidumping (AD) measures. Between 2000 and 2013, 160 products at the HS6 level exported from China to the EU faced EU AD measures. Because the products subject to EU AD measures are not random, we want to use PSM to find the control group and avoid this selection bias. We use a logit model to estimate the probability of a product being subject to an AD measure, based on a set of observable characteristics. The estimation equation is as follows:
Pr(AD=1)_p = beta_0 + beta_1 IP(China)_p,t-1 + beta_2 GDP_t + beta_3 RER_t + year FE + error term

IP(China)_p,t-1 is lagged import penetration, defined as the share of imports from China in total EU imports at the HS6 level;
GDP_t is the EU GDP growth rate in year t;
RER_t is the log real exchange rate in terms of euros per Chinese RMB;
I also include year fixed effects.


My question is how we should define the dependent variable (DV). More specifically, should we define the dependent variable as AD_p=1 if the product is subject to EU AD and 0 otherwise? With this definition, the DV does not change over time; it only varies across products. Alternatively, we can define the dependent variable as AD_pt=1 if the product is subject to EU AD in year t, remaining 1 while the measure is still in force, and 0 otherwise. For instance, if a product had an AD measure imposed in 2005 and the measure stayed in force until 2010, then AD_pt=1 between 2005 and 2010, but AD_pt=0 for the years before the treatment and for the years after the measure is revoked (i.e., after 2010). If the DV indeed needs to vary in the time dimension, my question is how PSM could find a control for the treated product outside the treatment period. Specifically, if the product was treated between 2005 and 2010 and takes 0 in all other years, how could PSM find a control group for the treated product in, say, 2004 or 2011?

I am very confused about the right definition of the DV, and I would really appreciate your help and suggestions.

Recoding in panel data

How can I recode values by group in panel data using Stata commands? For example, the score for id 1 at wave 1 is 3, and I want the score at waves 2 to 4 to be 3 as well.
id wave score
1 1 3
1 2
1 3
1 4
2 1 1
2 2
2 3
2 4
3 1 2
3 2
3 3
3 4
Ideal:
id wave score
1 1 3
1 2 3
1 3 3
1 4 3
2 1 1
2 2 1
2 3 1
2 4 1
3 1 2
3 2 2
3 3 2
3 4 2
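Since the score is recorded only at wave 1, one sketch is to sort within id and copy the first wave's value forward:

```stata
* copy the wave-1 score to all later waves within each id
bysort id (wave): replace score = score[1] if missing(score)
```

Sorting on wave inside the by-group guarantees that score[1] refers to the wave-1 observation.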
Probit Marginal Effect

Hello Everyone,
I am working on a probit regression for my research. I want to report the marginal effects, but I can't figure out the command to export the results to Excel.
I initially tried the command "margins, dydx(*)" to get the marginal effects and "outreg2 using myreg, replace excel" to export the result, but outreg2 exports my regression output, not the marginal effects output.
Please, can anyone recommend a command for me?
Thanks
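outreg2 reports whatever is currently posted as estimation results, and margins does not replace those unless told to. A sketch (the model variables y, x1, x2 here are placeholders, not from the post):

```stata
probit y x1 x2
margins, dydx(*) post          // post makes the margins the active estimates
outreg2 using myreg, replace excel
```

Note that after post, the original probit estimates are no longer active; re-run the model (or use estimates store first) if you need both sets.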

Gravity model of migration: ppml vs. ppmlhdfe

Dear all,

I'm working on a gravity model of migration for my thesis. My data contain 28 EU countries as destination countries and 130 non-EU countries as origin countries, and span 10 years.
My dependent variable is the number of first-time residence permits issued for employment for any given country pair (as I'm focusing on work-related migration), and I want to interpret the influence of different migration policies. Due to the large number of zeros in the dependent variable, I want to use the PPML estimator as described by Santos Silva & Tenreyro (2006) and used in most of the recent gravity-related literature.

I want to run the following command:
Code:
ppml permits_all lgdp_o lgdp_d ldist contig comlang_off colony comcol market_test shortage_list point_system job_offer dFE_ot*, cluster(dist)
market_test, shortage_list, point_system and job_offer are dummy variables for the different migration policies; dFE_ot* are origin*time fixed effects that I want to include.

As I'm using Stata 15.1 IC, I can't run this code because 1,300 origin*time dummies are too many for the IC version. Before buying a new licence (poor student here), I checked for alternatives and found the following three commands:
1. xtpoisson, fe
2. ppml_panel_sg
3. ppmlhdfe

I'm (theoretically) able to use xtpoisson, fe and ppml_panel_sg, but since they are restricted in the kinds of fixed effects they use, they are not practical for me.
I like ppmlhdfe, as I can decide which fixed effects to add (as with ppml) but don't need to inflate my model with manually added FE dummy variables (and thus can run the regression in Stata IC).

Now to my question:
Is my understanding correct that the following two commands would produce the same output?

Code:
ppml permits_all lgdp_o lgdp_d ldist contig comlang_off colony comcol market_test shortage_list point_system job_offer dFE_ot*, cluster(dist)
Code:
ppmlhdfe permits_all lgdp_o lgdp_d ldist contig comlang_off colony comcol market_test shortage_list point_system job_offer, a(i.origin#i.year)
Best regards,
Sarah

New Stata 16 versions of dolog, dotex and dologx on SSC

Thanks as always to Kit Baum, new versions of the dolog, dotex and dologx packages are now available for download from SSC. In Stata, use the ssc command to do this, or adoupdate if you already have old versions of these packages.

The packages dolog, dotex and dologx are described as below on my website. The new versions have been updated to Stata Version 16. However, users of older Stata versions can still download old versions of these packages from my website by typing, in Stata,

net from http://www.rogernewsonresources.org.uk/

and selecting the subfolder for the user's own Stata version, where the appropriate old version can be found.

Best wishes

Roger


---------------------------------------------------------------------------------------
package dolog from http://www.rogernewsonresources.org.uk/stata16
---------------------------------------------------------------------------------------

TITLE
dolog: Execute commands from a do-file, creating a text or SMCL log file

DESCRIPTION/AUTHOR(S)
dolog and dosmcl (like do) cause Stata to execute the commands stored in
filename.do as if they were entered from the keyboard, and echo the commands
as it executes them, creating a text log file filename.log (in the case of
dolog) or a SMCL log file filename.smcl (in the case of dosmcl). If filename
is specified without an extension, then filename.do is assumed. If filename
is specified with an extension other than .do, or with no extension, then the
log file will still have .log or .smcl as its extension, so dolog and dosmcl
will not overwrite the original do-file. Arguments are allowed (as with do or
run).

Author: Roger Newson
Distribution-Date: 27june2019
Stata-Version: 16

INSTALLATION FILES
dolog.ado
dosmcl.ado
dolog.sthlp
dosmcl.sthlp
---------------------------------------------------------------------------------------


---------------------------------------------------------------------------------------
package dotex from http://www.rogernewsonresources.org.uk/stata16
---------------------------------------------------------------------------------------

TITLE
dotex: Execute a do-file generating a SJ LaTeX log

DESCRIPTION/AUTHOR(S)
dotex is a version of the dolog package, and causes Stata to execute the
commands stored in a do-file named filename.do as if they were entered from
the keyboard, and echoes the commands as it executes them, creating a log file
filename.tex written in Stata Journal (SJ) LaTeX. The file filename.tex (or
parts of it) can then be included in the stlog environment of SJ LaTeX. The
dotex package was derived by hybridising the dolog package with the SJ's
logopen and logclose, which also create log files written in SJ LaTeX for
inclusion in the stlog environment. The dotex package has the advantage that
a user can run the same do-file using dotex, dolog and do, creating a SJ
LaTeX log file, a text log file, and no log file, respectively.

Author: Roger Newson
Distribution-Date: 27june2019
Stata-Version: 16

INSTALLATION FILES
dotex.ado
dotex.sthlp
---------------------------------------------------------------------------------------

---------------------------------------------------------------------------------------
package dologx from http://www.rogernewsonresources.org.uk/stata16
---------------------------------------------------------------------------------------

TITLE
dologx: Multiple versions of dolog for executing certification scripts

DESCRIPTION/AUTHOR(S)
dologx (like dolog) causes Stata to execute the commands stored in a do-file
named filename.do as if they were entered from the keyboard, and echoes the
commands as it executes them, creating a log file filename.log. The dologx
package contains multiple versions of dolog, written in multiple Stata
versions, intended for use when running certification scripts. Usually, a
do-file should contain a version statement at or near the top, so it will
still run in the Stata version in which it was written, even if the user runs
it under a later version of Stata. Certification scripts are an exception to
this rule, because they are run under multiple Stata versions, to certify
that the package being tested works under all of these versions. A
certification script therefore should not contain a version statement at the
top. The version of Stata under which it is run will therefore be the version
in force in the program that calls it, even if that program is dolog. The
standard version of dolog should therefore not be used to run a certification
script; the user should use the dologx package instead, using dolog6 to
run it under Stata 6, dolog7 to run it under Stata 7, and so on.

Author: Roger Newson
Distribution-Date: 27june2019
Stata-Version: 16

INSTALLATION FILES
dolog6.ado
dolog7.ado
dolog8.ado
dolog9.ado
dolog10.ado
dolog11.ado
dolog12.ado
dolog13.ado
dolog14.ado
dolog15.ado
dolog16.ado
dolog6.sthlp
dolog7.sthlp
dolog8.sthlp
dolog9.sthlp
dolog10.sthlp
dolog11.sthlp
dolog12.sthlp
dolog13.sthlp
dolog14.sthlp
dolog15.sthlp
dolog16.sthlp
dologx.sthlp
---------------------------------------------------------------------------------------

SEM, error coefficients output.

Hi,
How can I obtain output for the path coefficients of the error terms to each item of a latent variable? The output only shows the variances of the error terms at the end of the table. I need to fix the path coefficient of each error term to its respective item in order to run another model, but I don't know how to obtain these coefficients first.

For example, I know I can fix the path coefficient of each error term, e.g. (e.a1@1->a1) (Latent->a1 a2 a3). How can I obtain the error path coefficient value for e.a2 -> a2 or e.a3 -> a3?

Please help.

Cox regression assumption

I am doing a research study to examine the association between sleep and all-cause mortality. The average follow-up time is only 5 years for both men and women.

I was planning to run Cox regression since my outcome is death. However, after checking the assumption, I unfortunately have evidence of non-proportional hazards for almost all covariates.
Do you have any suggestion for what to do next, or should I run a different type of regression?


Thank you

reshape - wide connections

I have variables in wide format that belong together in groups (e.g. var1a and var2a).
I would like to reshape into long format while assigning the value of the variable institution to each obs/id. The data example should then become 4 observations long.

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input int id str24 institution str17 var1a str36 var2a str17 var1b str36 var2b
43 "hus 1" "1 kvinde" "Bil" "1 kvinde" "Bil"
44 "hus 2" "1 mand"   "Bil" "1 kvinde" "Bil"
end
hope someone can help me.
Best regards
Lars


If the above is unclear, this is the data structure I aim for:
id institution var1 var
43-1 hus1 1 kvinde Bil
43-2 hus1 1 kvinde Bil
44-1 hus2 1 mand Bil
44-2 hus2 1 kvinde Bil
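This looks like a job for reshape long with a string suffix; institution is constant within id, so it is carried to every long observation automatically. A sketch, assuming the dataex example above:

```stata
* stubs var1/var2, suffixes a/b become the new identifier j
reshape long var1 var2, i(id) j(group) string
```

This yields 4 observations (two per id), with group taking the values "a" and "b"; a combined key like 43-1 could then be built from id and group if needed.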