
Looping over sub-folders in a folder, identifying .dta files in all sub-folders, and extracting particular variables

Hi,
I am a new user of Stata. I have 32 sub-folders, and each sub-folder contains files of several types, including .dta files. I need to extract only a few variables from the .dta files in all the sub-folders. Could you please guide me through how I could do that? Thanks.
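A minimal sketch of one approach, assuming the sub-folders live under C:/myproject and that var1, var2, and var3 are placeholders for the variables you need. The `: dir` extended macro function lists sub-folders and files, and -use varlist using- loads only the requested variables:

Code:
local subdirs : dir "C:/myproject" dirs "*"
foreach d of local subdirs {
    // list the .dta files in this sub-folder
    local dtafiles : dir "C:/myproject/`d'" files "*.dta"
    foreach f of local dtafiles {
        // load only the variables of interest
        use var1 var2 var3 using "C:/myproject/`d'/`f'", clear
        save "C:/myproject/extracted_`d'_`f'", replace
    }
}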

variance decomposition and svar issue

Hi
I am trying to estimate a structural VAR of GDP growth and money supply with long-run restrictions. I plot the responses of the GDP level to the shocks using the following code. One shock has a permanent effect on the GDP level, and the other (restricted to zero) has a transitory effect.
Note that dlogrgnp is a growth rate.

Code:
matrix c = (., 0 \ ., .)
svar dlogrgnp money, lags(1/8) lreq(c)

cap irf drop ir
irf set ir
irf create ir, step(40) set(ir) replace

use ir.irf, clear
sort irfname impulse response step
gen csirf = sirf

by irfname impulse: replace csirf = sum(sirf) if response=="dlogrgnp"
order irfname impulse response step sirf csirf

save ir2.irf, replace

irf set ir2.irf

irf graph csirf, yline(0, lcolor(black)) noci xlabel(0(1)40) byopts(yrescale)
Assuming my code above is correct, how can I estimate the variance decomposition for both GDP growth (dlogrgnp) and the GDP level?
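A possible starting point, not a definitive answer: -irf create- already stores the FEVD of the growth variable, and for the level one rough route is to build variance shares from the squared cumulative structural responses, since the level's forecast error is the cumulated forecast error of growth. A minimal sketch under those assumptions:

Code:
* FEVD of growth is stored directly
irf set ir
irf table fevd, response(dlogrgnp)

* rough manual FEVD of the level from cumulated structural IRFs
use ir.irf, clear
keep if response=="dlogrgnp"
sort impulse step
by impulse: gen csirf   = sum(sirf)
by impulse: gen v_shock = sum(csirf^2)        // per-shock contribution
bysort step: egen v_total = total(v_shock)    // total across both shocks
gen fevd_level = v_shock/v_total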


Thanks

Problems converting a string to numeric from an Excel file

Hello

I have been given an Excel file with a logistic outcome variable (AR, labeled "Was the scheduled PRO completed electronically"), which came into Stata from Excel as a string variable. I encoded it to a numeric variable (called "eprocom"), which looks OK in the dataset, but Stata keeps saying "outcome does not vary", even when I put it in a model with no predictors, which shouldn't be affected by missing values in the independent variables.

If you look at how dataex captures the variable, it is doing something odd and storing it as all 2's for some reason: a labelling problem of some kind that I did not request.

Code:
* Example generated by -dataex-. For more info, type help dataex
clear
input byte WasthescheduledPROcompleted str85 IfPRONOTcompletedreasonpro str1 AR str127 IfNOTcompletedelectronically long eprocom
1 "." "0" "Not technologically confident to use the app/difficulty with using the app" 2
1 "." "0" "Not technologically confident to use the app/difficulty with using the app" 2
1 "." "0" "Not technologically confident to use the app/difficulty with using the app" 2
1 "." "0" "More convenient for participant"                                            2
1 "." "0" "More convenient for participant"                                            2
1 "." "0" "More convenient for participant"                                            2
1 "." "0" "Participant does not have the required device to install the app"           2
1 "." "0" "Participant does not have the required device to install the app"           2
1 "." "0" "NO EPROS"                                                                   2
1 "." "0" "NO EPRO"                                                                    2
1 "." "0" "More convenient for participant"                                            2
1 "." "0" "Participant does not have the required device to install the app"           2
1 "." "0" "Internet connection/availability problems"                                  2
1 "." "0" "More convenient for participant"                                            2
1 "." "0" "More convenient for participant"                                            2
1 "." "0" "PT REQUIRED INTERPRETER TO ANSWERE QUESTIONS"                               2
1 "." "0" "PT REQUIRED AN INTERPRETER TO COMPLETE"                                     2
1 "." "0" "More convenient for participant"                                            2
1 "." "0" "Participant does not have the required device to install the app"           2
1 "." "0" "More convenient for participant"                                            2
end
label values eprocom eprocom
label def eprocom 2 "0", modify
When I tabulate my "eprocom" variable it makes sense:
Was the scheduled PRO     |
completed electronically? |      Freq.     Percent        Cum.
--------------------------+-----------------------------------
                        . |        410       23.10       23.10
                        0 |      1,320       74.37       97.46
                        1 |         45        2.54      100.00
--------------------------+-----------------------------------
                    Total |      1,775      100.00
But I can't use the variable in models without getting the "outcome does not vary" error message.

Not sure what is going on here.
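One plausible reading of the dataex output: -encode- assigns codes in sort order, so "." became 1, "0" became 2, and "1" became 3; since -logit- treats any nonzero, nonmissing outcome as a success, the encoded variable never records a failure, hence "outcome does not vary". A minimal sketch of one way around this, assuming AR only ever holds the strings ".", "0", and "1":

Code:
* convert the string directly; real(".") becomes numeric missing
gen byte eprocom2 = real(AR)
tab eprocom2, missing
logit eprocom2              // constant-only model should now run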

Help appreciated and thanks in advance.

Regards

Chris

Generating an Aggregate index with Panel data

Hello. I am relatively new to Stata. Would anyone know how I can best tackle this problem? I would like to create a prudential policy index that takes on three different values: loosening actions, tightening actions, or no change (1 or 0). The aim is to link this index to growth by assuming that only changes in prudential policy can affect real growth. The index should be a sum of all prudential policy actions across the 10 categories of tightening and loosening actions, using a five-year horizon, for 15 countries over the period 1990-2020. So far I have 1,048,510 observations.
PP_{i,t} = (1/5) * (1 + T) / (1 + E)

As per the literature, descriptive statistics are calculated to show the impact of the policy actions on output growth, i.e. when examining prudential policy net tightening, the author differentiates the impact of cyclical, resilience, and capital-based prudential actions.
Literature: https://papers.ssrn.com/sol3/papers....act_id=3302300

Any help would be much appreciated. Thanks! The dataset looks like this:

Code:
* Example generated by -dataex-. For more info, type help dataex
clear
input str11 year byte c_id str23 country byte(conservation_t capital_t lvr_t loanr_t ltv_t dsti_t liquidity_t lfx_t sifi_t ot_t conservation_l capital_l lvr_l loanr_l ltv_l dsti_l liquidity_l lfx_l sifi_l ot_l)
"1990" 1 "Angola" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1991" 1 "Angola" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1992" 1 "Angola" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1993" 1 "Angola" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1994" 1 "Angola" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1995" 1 "Angola" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1996" 1 "Angola" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1997" 1 "Angola" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1998" 1 "Angola" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1999" 1 "Angola" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2000" 1 "Angola" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2001" 1 "Angola" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2002" 1 "Angola" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2003" 1 "Angola" 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0
"2004" 1 "Angola" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2005" 1 "Angola" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2006" 1 "Angola" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2007" 1 "Angola" 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0
"2008" 1 "Angola" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2009" 1 "Angola" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2010" 1 "Angola" 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0
"2011" 1 "Angola" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2012" 1 "Angola" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2013" 1 "Angola" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2014" 1 "Angola" 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2015" 1 "Angola" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2016" 1 "Angola" 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
"2017" 1 "Angola" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2018" 1 "Angola" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2019" 1 "Angola" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2020" 1 "Angola" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1990" 2 "Benin" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1991" 2 "Benin" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1992" 2 "Benin" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1993" 2 "Benin" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1994" 2 "Benin" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1995" 2 "Benin" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1996" 2 "Benin" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1997" 2 "Benin" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1998" 2 "Benin" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1999" 2 "Benin" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2000" 2 "Benin" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2001" 2 "Benin" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2002" 2 "Benin" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2003" 2 "Benin" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2004" 2 "Benin" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2005" 2 "Benin" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2006" 2 "Benin" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2007" 2 "Benin" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2008" 2 "Benin" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2009" 2 "Benin" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2010" 2 "Benin" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2011" 2 "Benin" 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0
"2012" 2 "Benin" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2013" 2 "Benin" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2014" 2 "Benin" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2015" 2 "Benin" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2016" 2 "Benin" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2017" 2 "Benin" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2018" 2 "Benin" 1 1 1 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
"2019" 2 "Benin" 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2020" 2 "Benin" 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0
"1990" 3 "Botswana" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1991" 3 "Botswana" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1992" 3 "Botswana" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1993" 3 "Botswana" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1994" 3 "Botswana" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1995" 3 "Botswana" 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
"1996" 3 "Botswana" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1997" 3 "Botswana" 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0
"1998" 3 "Botswana" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1999" 3 "Botswana" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2000" 3 "Botswana" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2001" 3 "Botswana" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2002" 3 "Botswana" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2003" 3 "Botswana" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2004" 3 "Botswana" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2005" 3 "Botswana" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2006" 3 "Botswana" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2007" 3 "Botswana" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2008" 3 "Botswana" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2009" 3 "Botswana" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2010" 3 "Botswana" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2011" 3 "Botswana" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2012" 3 "Botswana" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2013" 3 "Botswana" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2014" 3 "Botswana" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2015" 3 "Botswana" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2016" 3 "Botswana" 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
"2017" 3 "Botswana" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2018" 3 "Botswana" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2019" 3 "Botswana" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"2020" 3 "Botswana" 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0
"1990" 4 "CongoDemocraticRepublic" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1991" 4 "CongoDemocraticRepublic" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1992" 4 "CongoDemocraticRepublic" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1993" 4 "CongoDemocraticRepublic" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1994" 4 "CongoDemocraticRepublic" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1995" 4 "CongoDemocraticRepublic" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
"1996" 4 "CongoDemocraticRepublic" 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
end
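A minimal sketch of the construction, assuming my reading of the formula above: T and E are five-year counts of tightening and loosening actions summed across the 10 categories (variable names taken from the dataex):

Code:
* totals per country-year across the 10 categories
egen T = rowtotal(conservation_t capital_t lvr_t loanr_t ltv_t dsti_t liquidity_t lfx_t sifi_t ot_t)
egen E = rowtotal(conservation_l capital_l lvr_l loanr_l ltv_l dsti_l liquidity_l lfx_l sifi_l ot_l)

* five-year rolling sums, then the index
destring year, gen(yr)
xtset c_id yr
gen T5 = T + L1.T + L2.T + L3.T + L4.T
gen E5 = E + L1.E + L2.E + L3.E + L4.E
gen PP = (1/5) * (1 + T5)/(1 + E5)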

Positive effect of market share, yet negative one of market-wide HHI?

Hi there,

in regressions on different topics and in different setups, I have encountered cases where each observation's market share had a positive effect on the outcome, whereas a higher Herfindahl-Hirschman Index (HHI), i.e. a higher sum of squared market shares in that same market, had a negative one.
Note that in the regressions with market shares I did not control for the market-wide HHI, nor did I control for the observation's own market share when analyzing the effect of the HHI.

I am struggling to understand how this is possible. Can anyone see it?
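A toy example of my own to illustrate how both signs can coexist: market A has shares (0.5, 0.5), so HHI = 0.5; market B has shares (0.9, 0.1), so HHI = 0.82. Suppose the outcomes are 5 and 5 in A, and 4.5 and 1 in B. Pooling the four observations, outcome regressed on own share has a positive slope (the 0.9-share firm far outperforms the 0.1-share firm), while outcome regressed on market HHI has a negative slope (the concentrated market B averages 2.75 against A's 5). The share coefficient is identified largely from within-market variation and the HHI coefficient from across-market variation, so no contradiction is required.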

Thanks so much,
PM

Newey-West t-statistics in a Fama MacBeth regression using asreg, fmb

Hi all,

Is there a way to obtain Newey-West t-statistics in the last step of the FMB regression? I performed the FMB regression via the following code:
Code:
* standardization of betas
foreach var of varlist TNIC_ew_peer_ret_t0 FF48_vw_peer_ret_t0 delta_Mkt_RF SMB HML ln_lagged_BtM SIZE_prev_june ret_t1_t12 ret_t1 {
    egen z_`var' = std(`var'), mean(0) std(1)
}

* time-series regression: one regression per firm over time -> beta estimates
bys PERMNO: asreg excess_ret z_TNIC_ew_peer_ret_t0 z_FF48_vw_peer_ret_t0 z_delta_Mkt_RF z_SMB z_HML z_ln_lagged_BtM z_SIZE_prev_june dummy_neg_BtM z_ret_t1_t12 z_ret_t1

drop _Nobs _R2 _adjR2

* FMB regression
asreg excess_ret _b_z_TNIC_ew_peer_ret_t0 _b_z_FF48_vw_peer_ret_t0 _b_z_delta_Mkt_RF _b_z_SMB _b_z_HML _b_z_ln_lagged_BtM _b_z_SIZE_prev_june _b_dummy_neg_BtM _b_z_ret_t1_t12 _b_z_ret_t1, fmb newey(2)

* Newey t-statistics
tsset PERMNO month
newey excess_ret _b_z_TNIC_ew_peer_ret_t0 _b_z_FF48_vw_peer_ret_t0 _b_z_delta_Mkt_RF _b_z_SMB _b_z_HML _b_z_ln_lagged_BtM _b_z_SIZE_prev_june _b_dummy_neg_BtM _b_z_ret_t1_t12 _b_z_ret_t1, lag(2) force
I know that the last step probably does not have the intended effect, since it produces the t-statistics of a regular -newey- regression rather than, as intended, the Newey-West t-statistics of a two-step FMB regression.
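A minimal sketch of the manual two-step route, with x1 and x2 as placeholders for the lagged-beta regressors: run one cross-sectional regression per month, then apply -newey- to the time series of slope estimates (a constant-only -newey- regression reproduces the FMB second step with Newey-West standard errors):

Code:
* step 1: one cross-sectional regression per month, keeping the slopes
statsby _b, by(month) clear: regress excess_ret x1 x2

* step 2: Newey-West t-statistic on each slope's time-series mean
tsset month
newey _b_x1, lag(2)
newey _b_x2, lag(2)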


Thank you in advance for your support. I would be glad if someone has some helpful insights.

Johannes

Convergence in GSEM

Hello everyone,

I am trying to run a CFA with two factors (L1 and L2).

L1 and L2 are the latent variables; the indicators g1-g11 and p1-p17 are binary, coded 0 and 1.

Code:
gsem (g1 g2 g3 g4 g5 g6 g7 g8 g9 g10 g11 <- L1, family(bernoulli) link(logit)) (p1 p2 p3 p4 p5 p6 p7 p8 p9 p10 p11 p12 p13 p14 p15 p16 p17 <- L2, family(bernoulli) link(logit)), covstruct(_lexogenous, diagonal) latent(L1 L2) cov(L1*L2) nocapslatent
However, after entering the syntax above, I am unable to get the model to converge (it endlessly iterates). I am not sure what the issue is, so I would really appreciate some input on how to correct it.

I also attempted to run the model after removing items with a low prevalence (less than 5% are '1's) in the dataset but still had the same issue.
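A minimal sketch of some documented convergence aids, assuming g1-g11 and p1-p17 are stored contiguously so the hyphenated varlists resolve; Laplace integration is faster and less accurate, so it is mainly a way to obtain starting values or diagnose an ill-behaved model:

Code:
gsem (g1-g11 <- L1, family(bernoulli) link(logit))   ///
     (p1-p17 <- L2, family(bernoulli) link(logit)),  ///
     cov(L1*L2) intmethod(laplace) difficult iterate(200)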

Thank you in advance!

How to create a simple graph of contract length in days?

Hi everyone,

I would like to obtain a graph similar to the attached one. Here is what I want, please:
  1. One graph with contract length on the y-axis and my date variable on the x-axis, but showing only month-years (as in %tm format), not my dates in %td format; so, for example, for contracts starting on 01jun2016, how many contracts were signed that day. My variable for the contract start date is -date_contract_start- (cf. my dataex below).
  2. I would also like the same graph as below. The contract length in days is represented by my variable -between_dates-.
[attached example graph]
However, the only thing that I obtain is this strange result:

[attached resulting graph]


I simply use the following code to test it:


Code:
graph twoway spike between_dates date_contract_start
Could anyone help me please?


Here is a dataex:


Code:
* Example generated by -dataex-. For more info, type help dataex
clear
input long id double(date_contract_start date_contract_end) float between_dates long idcontrato
1001 18887 21700 2813    1001
1001 21701 22431  730  451697
1001 22432 22645  213 1236132
1001 22646 22676   30 1730454
1001 22677 22735   58 2082075
1001 22736 23010  274 2172904
1001 23011 23069   58 2872183
1001 23070     .    . 3107888
1005 18800 21639 2839    1005
1005 21640 21651   11  420392
end
format %td date_contract_start
format %td date_contract_end
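A minimal sketch of one way to get the monthly version, assuming a mean contract length per start month is wanted; -collapse- replaces the data in memory, hence the preserve/restore:

Code:
preserve
gen start_m = mofd(date_contract_start)     // %td date -> %tm month
format start_m %tm
collapse (count) n_contracts=idcontrato (mean) between_dates, by(start_m)
twoway spike between_dates start_m, ytitle("Mean contract length (days)")
restore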
Thank you in advance for your help.
Best,

Michael



Romer database

Hello everyone,

I want to replicate the Romers' paper: Romer, C. D., and D. H. Romer. 2004. A new measure of monetary shocks: Derivation and implications. American Economic Review 94 (4): 1055-1084.

But when I type bcuse romer2004, I get an error and cannot download the dataset. Can someone help me and tell me where I can find the paper's dataset, please?

Greetings

Generated percent variable not rounding properly

Hello,

I generated a percentage variable:

Code:
g double share = weeks/tbw if fs==0
replace share = round(share, 0.1)
replace share = share*100

Dataex input:

Code:
* Example generated by -dataex-. For more info, type help dataex
clear
input float(fs weeks) int tbw float fam_inc double share
1 . 20 30295 .
1 . 19 30295 .
1 . 50 34752 .
0 3 30 25528 10
0 4 40 23408 10
1 . 24 31911 .
1 . 43 31798 .
0 17 50 15625 30.000000000000004
0 14 29 10581 50
1 . 15 35026 .
1 . 35 38734 .
1 . 18 27337 .
0 12 42 8673 30.000000000000004
0 13 13 21922 100
0 50 50 18412 100
1 . 20 31228 .
1 . 14 27775 .
0 3 37 19436 10
1 . 22 26442 .
1 . 27 26661 .
1 . 38 26661 .
1 . 27 29479 .
0 21 21 720 100
1 . 73 31693 .
0 39 50 13459 80
1 . 27 34868 .
1 . 23 34868 .
0 9 9 1 100
0 18 30 19989 60.00000000000001
1 . 20 29150 .
end

I don't know why the 30% and 60% are not rounded, and instead show up as 60.00000000000001.

When I don't use a double, strange things happen as well: the value doesn't display like that, but I can't browse with "br if share==60".
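A minimal sketch of the usual workaround: 0.1 has no exact binary floating-point representation, so round(share, 0.1) leaves tiny residuals, whereas rounding to a multiple of 10 after scaling is exact because 10 is exactly representable:

Code:
g double share = 100*weeks/tbw if fs==0
replace share = round(share, 10)    // exact multiples of 10
br if share==60                     // now matches exactly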

Thanks!

How to export tables and diagrams from Stata to Word

I'm having trouble inserting my tables into a Word document without them being completely messed up. It'd be nice if someone could give me an easy step-by-step instruction on how to do it. Thanks!
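A minimal sketch using the built-in putdocx suite (Stata 15 or later), with auto.dta standing in for your data:

Code:
sysuse auto, clear
putdocx begin
regress price mpg weight
putdocx table results = etable           // export the last estimation table
graph twoway scatter price mpg
graph export "myscatter.png", replace
putdocx paragraph
putdocx image "myscatter.png"
putdocx save "myreport.docx", replace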

How do you count the number of times a contract begins and ends on a given date?

Hi everyone,

I would like to count the number of times a contract begins and ends on specific dates.
Here is a dataex:

Code:
* Example generated by -dataex-. For more info, type help dataex
clear
input long id double(date_contract_start date_contract_end) long idcontrato
1001 18887 21700    1001
1001 22646 22676 1730454
1001 22677 22735 2082075
1001 23011 23069 2872183
1001 22432 22645 1236132
1001 21701 22431  451697
1001 22736 23010 2172904
1001 23070     . 3107888
1005 23131     . 3215923
1005 22646 22676 1742918
end
format %td date_contract_start
format %td date_contract_end
For example, I would like to know how many contracts in my sample began on 17 September 2011 and ended on 31 May 2019, and so on for all possible combinations.
Could you please give me a hand with that?

I would then like to produce a suitable plot of the result, please.
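A minimal sketch: -contract- collapses the data to one row per (start, end) combination with its frequency, so preserve/restore around it:

Code:
preserve
contract date_contract_start date_contract_end, freq(n_contracts)
gsort -n_contracts
list in 1/10               // the most common combinations
restore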

Thank you very much.
Best,

Michael

melogit - Stata error - only one fixed effects equation allowed

Hi all

I am trying to fit a mixed multilevel logistic regression with random effects (doctored):

Code:
melogit var1 c.var2 c.var3 || doctorid, vce(robust) or level(95)
Stata gives me the error "only one fixed effects equation allowed".

I've checked the Stata help file, and my syntax seems OK.
Any help diagnosing the problem would be appreciated.
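For comparison, a minimal sketch of the documented syntax; note the colon that closes the random-effects equation (a missing colon is one common cause of parsing errors here), and level(95) is the default, so it can be dropped:

Code:
melogit var1 c.var2 c.var3 || doctorid:, or vce(robust)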

Mlogit does not converge when conducting multiple imputation

Dear reader,
I am currently trying to implement MICE; however, it does not seem to work.
My data has a lot of missingness, and I am afraid that this might be the problem. I have used the following code:

Code:
mi set mlong
mi register imputed overhours_new irregular_hours_new side_job_new heavy_burden_new skills_new support_new appreciation_new career_advancement_new tenure workingstudent contracttype paidwork released_finding_job seeking_after_interruption student care_household private_means flo disability voluntary_work uncertainty_new parental_leave informal_care mha mhb mhd mhc mhe mh geslacht gebjaar gross_wages_cat work_outside_office_hours work_during_evening work_at_night satisfaction_atmosphere
mi impute chained (ologit, augment) overhours_new irregular_hours_new heavy_burden_new skills_new appreciation_new career_advancement_new uncertainty_new gross_wages_cat (mlogit, augment) work_outside_office_hours work_during_evening work_at_night side_job_new tenure workingstudent contracttype paidwork released_finding_job seeking_after_interruption student care_household private_means flo disability voluntary_work parental_leave informal_care geslacht (regress) gebjaar satisfaction_atmosphere = nomem_encr wave, add(10) force
I added the augment option twice. I also tried adding the force option, but that did not help.

Stata issues the following error message:
convergence not achieved
convergence not achieved
mlogit failed to converge on observed data
error occurred during imputation of overhours_new irregular_hours_new heavy_burden_new
skills_new appreciation_new career_advancement_new uncertainty_new gross_wages_cat
work_outside_office_hours work_during_evening work_at_night side_job_new tenure
contracttype paidwork released_finding_job seeking_after_interruption student
care_household private_means flo disability voluntary_work parental_leave informal_care
geslacht gebjaar satisfaction_atmosphere on m = 1

I understand that convergence is already an issue in the first iteration; I am not sure, however, how to fix it. I have also tried using ologit for all multinomial variables, as I have read that this might make a difference, but I get the same error message.
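A minimal sketch of two documented diagnostics, with x1 (categorical) and x2 (continuous) as placeholders for the variables above:

Code:
* dryrun reports the full specification without imputing anything
mi impute chained (mlogit, augment) x1 (regress) x2 = nomem_encr wave, add(1) dryrun
* noisily displays each univariate model, so the failing one is visible
mi impute chained (mlogit, augment) x1 (regress) x2 = nomem_encr wave, add(1) noisily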

Thank you for taking the time to read my post.

Best wishes,
Ciel

creating a dummy with conditions in panel data

Dear Stata users

I am working with a panel dataset and an excerpt of the dataset is below:

Code:
* Example generated by -dataex-. For more info, type help dataex
clear
input str15 kom_name int year byte office_WS int l_close
"Älvkarleby" 2010 0 2017
"Älvkarleby" 2011 1 2017
"Älvkarleby" 2012 0 2017
"Älvkarleby" 2013 0 2017
"Älvkarleby" 2014 0 2017
"Älvkarleby" 2015 1 2017
"Älvkarleby" 2016 1 2017
"Älvkarleby" 2017 1 2017
"Älvkarleby" 2018 0 2017
"Älvkarleby" 2019 0 2017
"Älvkarleby" 2020 0 2017
"Älvkarleby" 2021 0 2017
"Älvkarleby" 2022 . 2017
"Älvkarleby" 2023 . 2017
"Knivsta"     2010 0 2015
"Knivsta"     2011 1 2015
"Knivsta"     2012 1 2015
"Knivsta"     2013 0 2015
"Knivsta"     2014 0 2015
"Knivsta"     2015 0 2015
"Knivsta"     2016 0 2015
"Knivsta"     2017 0 2015
"Knivsta"     2018 0 2015
"Knivsta"     2019 0 2015
"Knivsta"     2020 0 2015
"Knivsta"     2021 0 2015
"Knivsta"     2022 . 2015
"Knivsta"     2023 . 2015
end
I want to create a dummy variable called "dummy" that takes the value 1 for each kom_name that has office_WS>0 in the year l_close-1. For example, for Knivsta the dummy will be 0, since l_close==2015 and office_WS was 0 in 2014; however, it will be 1 for Älvkarleby, since office_WS>0 in 2016 (which is l_close-1).
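A minimal sketch using the dataex variables:

Code:
bysort kom_name: egen byte dummy = ///
    max(year == l_close - 1 & office_WS > 0 & !missing(office_WS))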

Any help would be highly appreciated.

Thanks,

Zariab Hossain
Uppsala University

Loops

Hi
I am doing some coding where I check whether an individual's ID changes between two years (1 for a change, 0 if the values are the same). I have one variable for each ID and year. Now I want to create new variables identifying the differences for each year, stretching over a period of 20 years. I know how to do this manually, but I want to do it more efficiently in a loop. Can any experts help me?
My code for each individual year:

Differences in the year 1975:

Code:
gen Different_ID1975 = 0
replace Different_ID1975 = 1 if ID1975!=ID1974
replace Different_ID1975 = . if ID1975==. | ID1974==.

Year 1976
....
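A minimal sketch of the loop version, assuming the ID variables run from ID1974 to ID1994 (adjust the -forvalues- range to your data):

Code:
forvalues y = 1975/1994 {
    local prev = `y' - 1
    gen Different_ID`y' = (ID`y' != ID`prev')
    replace Different_ID`y' = . if missing(ID`y') | missing(ID`prev')
}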

obtain odds ratio from regression for continuous variable

Hi, I would like to obtain an odds ratio for a continuous dependent variable. All variables in the code are continuous.
Charlson is a continuous variable that measures the patient's comorbidity.

Code:
reg ptage splinevar* charlson [pw=weight]
This gives me a coefficient value, e.g. charlson has a coefficient of 0.06.

My interpretation is: the sicker the patient (one point higher on the Charlson score), the older the patient is expected to be, by 0.06 units.

However, can I get an odds ratio instead?
Does it make sense to get an odds ratio here, or should I just interpret the coefficient?

ologit interpretation - no issues with code.

Hello all,

I'm going through material I have previously written, and I'm trying to refresh my memory.

I had run this regression:

Code:
ologit ASA experiencegrp* age [pw=weight], or
ASA - a measure of illness severity from 1 to 3
experiencegrp - continuous variable (2 splines)
age - age of the patient

I obtained the following result - see attached

Just to confirm: since ASA is a three-level ordinal variable, I cannot directly interpret the coefficient of experiencegrp on ASA from this output, right?

But instead, I need to obtain predicted probabilities for each ASA level using -margins- and -marginsplot-, which I have done with the help of Statalist:
https://www.statalist.org/forums/for...tic-regression
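A minimal sketch of that approach (the at() values are placeholders):

Code:
ologit ASA experiencegrp* age [pw=weight], or
margins, predict(outcome(1)) at(age=(40(10)80))
marginsplot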

[attached margins plot]

DID Estimation Standard Errors

Greetings,

I have encountered the following issue: I am estimating a DID fixed-effects model with 2 periods (pre-intervention and post-intervention) for 2 groups of commodities (control and treatment).

However, upon estimating, I have sometimes found that Stata does not calculate the standard errors at all. This time I got a functioning output with standard errors, but for some reason the pretrend test does not seem to work for me (it returns spurious results with F(1, 1), which is simply impossible). I have a balanced monthly panel from 2020m1 to 2023m7. The code is attached below. There is no way to get the testing output right. Could anyone advise me on what the issue might be here? Thank you in advance!

Code:
Panel variable: ID (strongly balanced)
 Time variable: Month, 2020m1 to 2023m7
         Delta: 1 month

. didregress (Value) (Interaction), group(Group) time(Month)

Treatment and time information

Time variable: Month
Control:       Interaction = 0
Treatment:     Interaction = 1
-----------------------------------
             |   Control  Treatment
-------------+---------------------
Group        |
       Group |         1          1
-------------+---------------------
Time         |
     Minimum |       720        746
     Maximum |       720        746
-----------------------------------

Difference-in-differences regression                        Number of obs = 86
Data type: Repeated cross-sectional

                                  (Std. err. adjusted for 2 clusters in Group)
------------------------------------------------------------------------------
             |               Robust
       Value | Coefficient  std. err.      t    P>|t|     [95% conf. interval]
-------------+----------------------------------------------------------------
ATET         |
 Interaction |
   (1 vs 0)  |    2.72046   7.41e-09  3.7e+08   0.000      2.72046     2.72046
------------------------------------------------------------------------------
Note: ATET estimate adjusted for group effects and time effects.

. estat trendplots

. estat ptrends

Parallel-trends test (pretreatment time period)
H0: Linear trends are parallel

 F(1, 1) = 2.38e+08
Prob > F =   0.0000

. estat granger

Granger causality test
H0: No effect in anticipation of treatment

 F(0, 1) = .
Prob > F = .

DSGE Models

Greetings,

I am working on a basic nonlinear DSGE model using the dsgenl command, but I would like the external shock to be 0.06% (or any other magnitude) rather than 1%. Is this possible in Stata?

Code:
matrix param = (0.97, 0.36, 0.06, 0.95, 0.4)    
matrix colnames param = beta alpha delta rho gamma 

dsgenl ({gamma}/c*w = (1-{gamma})/(1-n))                               ///
        ({beta} * c / F.c * (F.r + 1 - {delta}) = 1)                   ///
        (y = k^({alpha}) * (a*n)^(1-{alpha}))                          ///
        (y = c + i)                                                    ///
        (F.k = i + (1-{delta})*k)                                      ///
        ({alpha} * k^({alpha}-1) * (a*n)^(1-{alpha}) = r)              ///
        ((1-{alpha}) * k^({alpha}) * n^(-{alpha}) * a^(1-{alpha}) = w) ///
        (log(F.a) = {rho} * log(a))                                    ///
        , observed(y) unobserved(c i w n r) ///
        endostate(k) exostate(a) ///
        solve noidencheck from(param) nolog
      
estat steady, compact  
estat policy, compact

irf set dsgeirf.irf 
irf create model2, step(40) replace
irf graph irf, impulse(a) response(y c i n w a r k) irf(model2) byopts(yrescale) legend(cols(2)) noci
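A possible workaround rather than a dedicated option: with a first-order solution the IRFs are linear in the shock, so the stored responses can be rescaled after -irf create- (assuming the default impulse is one standard deviation of the shock):

Code:
use dsgeirf.irf, clear
replace irf = irf * 0.06                 // rescale to 0.06 of the default shock
save dsgeirf_scaled.irf, replace
irf set dsgeirf_scaled.irf
irf graph irf, impulse(a) response(y c i n w a r k) irf(model2) noci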

Thanks,