Multilevel Analysis

Hello Everyone,

I am working on my thesis with three models: one model whose dependent variable is a proportion including zeros (the proportion of women on the board, with no 1s, since no board consists entirely of women) and two models with dichotomous dependent variables. My independent variable is dichotomous, and I am testing its interaction with culture (Hofstede) along with several additional control variables.

I have measures for 44 countries, eight years (2007-2014), and a total of 5,400 companies, unbalanced. I was asked to test this with a three-level HLM: firm-year, firm, and country, but I am not clear on how to treat firm-year as a level; could it be level 1? I also have information about industry, and to me firm, industry, country would feel more natural, but I was asked for firm-year. I am using Stata 15. I am now reading up on multilevel analysis from scratch and learning the commands for testing my models.

After all that explanation, I would like to ask whether you could help me understand how these firm-year/firm/country levels work, and whether you know of any paper or summary of Stata commands to use for this combination. I saw the melogit command for the dichotomous variables, but what about panel data? I used fracreg for the simple regression of the first model, but I don't see any mefracreg command to use.
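
To make my question concrete, here is a minimal sketch of what I think the three-level random-intercept structure would look like for the two dichotomous outcomes (variable names are hypothetical, not from my data); as I understand it, the firm-year rows are the level-1 observations, so only the two grouping levels are listed:

Code:
* firm-year observations (level 1) nested in firms (level 2) nested in countries (level 3)
melogit y i.treat##c.hofstede x1 x2 || country: || firm_id: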

Thank you very much for any help.

Best Regards !!

IPWRA (TEFFECTS Propensity Score): choosing a model building approach

Hello, I'm seeking advice on optimal variable-selection strategies for building the Treatment and Outcome models with IPWRA. I've obtained the 4 published papers identifiable on PubMed as having used the Stata command -teffects ipwra-.

In two of these papers, the authors used a parsimonious model:
* Running IPWRA twice: first with all potential confounders, then a final IPWRA model including all variables significantly associated with either Treatment or Outcome (Moniodis & Townsend, 2017).
* Running IPWRA once, choosing for the Treatment model those potential confounders significantly associated with receipt of Treatment in logistic regression, and for the Outcome model those associated with having the (dichotomous) outcome, again using logistic regression (Anothaisintawee & Udomsubpayakul, 2016).

A third paper used ALL variables potentially associated with Treatment and Outcome, in both the Treatment and Outcome models (Crisci & Culkin, 2015).

And the fourth didn't specify their model or justify the choice (Traxer & Wendt-Nordahl, 2015).
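
If it helps to make the contrast concrete, with hypothetical variable names the two strategies would look something like this (a sketch only, since the choice between them is exactly my question):

Code:
* "kitchen sink": every candidate confounder in both the outcome and treatment models
teffects ipwra (outcome x1 x2 x3 x4) (treat x1 x2 x3 x4), ate
* parsimonious: covariates pre-screened separately for the outcome and treatment models
teffects ipwra (outcome x1 x3) (treat x2 x4), ate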

I'd appreciate any guidance on how to navigate this choice in model building.
Michael


For those interested, the citations are below.
Comparison of extracorporeal photopheresis and alemtuzumab for the treatment of chronic lung allograft dysfunction. Moniodis A, Townsend K, Rabin A, Aloum O, Stempel J, Burkett P, Camp P, Divo M, El-Chemaly S, Mallidi H, Rosas I, Fuhlbrigge A, Koo S, Goldberg HJ. J Heart Lung Transplant. 2017 Mar 24. pii: S1053-2498(17)31732-1. doi: 10.1016/j.healun.2017.03.017. PMID: 28431983.

Effect of Lipophilic and Hydrophilic Statins on Breast Cancer Risk in Thai Women: A Cross-sectional Study. Anothaisintawee T, Udomsubpayakul U, McEvoy M, Lerdsitthichai P, Attia J, Thakkinstian A. J Cancer. 2016 Jun 6;7(9):1163-8. doi: 10.7150/jca.14941. PMID: 27326260.

Preoperative JJ stent placement in ureteric and renal stone treatment: results from the Clinical Research Office of Endourological Society (CROES) ureteroscopy (URS) Global Study. Assimos D, Crisci A, Culkin D, Xue W, Roelofs A, Duvdevani M, Desai M, de la Rosette J; CROES URS Global Study Group. BJU Int. 2016 Apr;117(4):648-54. doi: 10.1111/bju.13250. Epub 2015 Sep 6. PMID: 26237735.

Differences in renal stone treatment and outcomes for patients treated either with or without the support of a ureteral access sheath: The Clinical Research Office of the Endourological Society Ureteroscopy Global Study. Traxer O, Wendt-Nordahl G, Sodha H, Rassweiler J, Meretyk S, Tefekli A, Coz F, de la Rosette JJ. World J Urol. 2015 Dec;33(12):2137-44. doi: 10.1007/s00345-015-1582-8. Epub 2015 May 14. PMID: 25971204.

.dta file corrupting

Hi everyone,
Thanks in advance for your advice! I am using Stata 15 on Windows, although I have also tried the steps below with Stata 14 and with saveold to Stata 13.
I have a bunch of files that I run a cleaning program on, then merge the cleaned files together into a master dataset. When I get to a certain file (in this case the June05 one), it cannot be merged into the master file because the variables I am merging on "do not uniquely identify observations in the using dataset." This is only a problem for this one scrape from June05, so I wrote some code to drop these duplicates (see below). However, although the code runs in the Command window, it reports zero duplicates and nothing is dropped, so when the merge is attempted I get the same error.
The more puzzling problem is that if I pick up from where the code breaks (the merge line, see below) and run the duplicate-dropping code as written, it then works: the duplicates are dropped and I am able to merge the file into the master file. But this causes the master file to become corrupt, with the error message ".dta file is corrupt. The file unexpectedly ended before it should have." Following other advice, I ran dtaverify on the corrupted file; it reports "SERIOUS ERROR: unexpected end of file" and "SERIOUS ERROR: map[1] invalid." In effect, I cannot fix the duplicates error and then run the rest of the code to complete the dataset, because breaking it up this way corrupts the file, and I have no idea why that would be. Any help would be very much appreciated. Thank you!
I am pasting the code below and attaching the ado file.


global identifiers mr_no work_code job_card_number worker_name work_start_date days_worked total_cash_payments

local scrape_list output_28Nov2014 output_06Dec2014 output_19Dec2014 full_output_19Dec2014 output_26Dec2014 ///
    output_02Jan2015 output_09Jan2015 full_output_10Jan2015 output_16Jan2015 output_23Jan2015 output_30Jan2015 ///
    output_06Feb2015 output_13Feb2015 output_20Feb2015 output_27Feb2015 ///
    full_output_16Mar2015 output_20Mar2015 output_10Apr2015 output_17Apr2015 output_24Apr2015 ///
    output_01May2015 output_08May2015 output_15May2015 output_22May2015 output_29May2015 ///
    full_output_01Jun2015 output_05Jun2015 output_12Jun2015 output_19Jun2015 ///
    output_03Jul2015 output_10Jul2015 output_13Jul2015 ///
    full_output_10Sep2015 full_output_15Sep2015 full_output_20Nov2015 full_output_15Sep2016 full_output_18Nov2017

local n : word count `scrape_list'
forvalues i = 1/`n' {

    local scrape : word `i' of `scrape_list'

    * import and clean the current scrape
    import delimited using "MIS_scrapes/Data/Raw/unzipped/`scrape'/muster.csv", varnames(1) clear
    cap gen aadhar_no = ""
    cap gen account_no = ""

    clean_muster_scrape `scrape' `i'

    * the June05 scrape contains duplicate keys, so drop them before merging
    if "`scrape'" == "output_05Jun2015" {
        duplicates drop $identifiers, force
    }

    compress
    save "MIS Merge New/MIS_scrapes/Data/temp/union_muster_all_using_v2.dta", replace

    * merge the cleaned scrape into the running master file
    use "MIS Merge New/MIS_scrapes/Data/temp/union_muster_all_master_v2.dta", replace

    merge 1:1 $identifiers using "MIS Merge New/MIS_scrapes/Data/temp/union_muster_all_using_v2.dta", nogenerate
    label var muster_merge_`i' "Merge Indicator, `scrape'"
    note: scrape_`i'=`scrape'

    compress
    save "MIS Merge New/MIS_scrapes/Data/temp/union_muster_all_master_v2.dta", replace
}
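
As a sanity check, I think something like the following could be run on the using file just before the 1:1 merge (a sketch; neither command changes the data): -duplicates report- counts surplus rows and -isid- aborts with an error when the key is not unique, which should show whether the June05 scrape really still contains duplicates at that point.

Code:
duplicates report $identifiers
isid $identifiers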

conditional mean for a given category

Hi everyone,
I have a dataset that contains, for each observation, firm-specific variables (i.e. balance sheet data), one of two categories ("default" and "not default"), and the firm's rating class.
I need to obtain a variable Y containing the mean of a firm-specific variable X by rating class, conditional on the category "default".
The issue is that Y must be "global", in the sense that the "not default" firms must also carry these values, but what I get at the moment is just missing values for the "not default" type.
Any suggestion?
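
In case it clarifies what I am after, here is a minimal sketch with hypothetical variable names x (the firm-specific variable), rating (the rating class) and default (1 = default, 0 = not default); the mean is computed over defaulted firms only but written to every firm in the same rating class:

Code:
bysort rating: egen y = mean(cond(default == 1, x, .))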
Thank you!
Michael

merge panel data and pooled cross-sectional survey data

Hello,
I have two datasets: the master one is pooled cross-sectional survey data with 5 waves containing survey results for 1989, 1992, 1995, 2000 and 2005, while the other is panel data covering the same countries as my master data but with years from 1989 to 2013.

Please, I need help merging these two datasets.
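
To be concrete, I believe what I need is something along these lines (a sketch with hypothetical file names, assuming country and year identify observations in the panel data):

Code:
use survey_pooled, clear
merge m:1 country year using panel_data, keep(master match)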

Thanks.

mi estimate, saving produces different results with xtlogit

Sorry for the long post. I am new, and hope this question makes sense and is explained correctly. In Stata 14.2, I'm running mi estimate: xtlogit, and saving the estimates:
Code:
mi estimate, dots saving(intest) esample(esamp) post: xtlogit gunca Wdeal Wzprdel Wzysragg t ///
i.black#c.Wdeal i.black#c.Wzprdel i.black#c.Wzysragg Bdeal Bzprdel Bzysragg i.sample i.black ///
i.black#c.Bdeal i.black#c.Bzprdel i.black#c.Bzysragg, i(id_s) re
However, I've noticed that when I recover the estimates later using the following commands, they have changed:
Code:
estimates use intest
estimates replay
The coefficients are not the same. In addition, when I originally run the model, the header says things like "Equal FMI" for Model F test, and indicates a large sample DF adjustment. These things are no longer reported when I recover the saved estimates, and it instead only says "Integration method: mvaghermite."
The original estimates:
Code:
. mi estimate, dots saving(intest) esample(esamp) post: xtlogit gunca Wdeal Wzprdel ///
>      Wzysragg t i.black#c.Wdeal i.black#c.Wzprdel i.black#c.Wzysragg Bdeal Bzprdel Bzysragg i.sample ///
>      i.black i.black#c.Bdeal i.black#c.Bzprdel i.black#c.Bzysragg, i(id_s) re

Imputations (50): .........10.........20.........30.........40.........50 done

Multiple-imputation estimates                   Imputations       =         50
Random-effects logistic regression              Number of obs     =      4,995
Group variable: id_s                            Number of groups  =        999

Random effects u_i ~ Gaussian                   Obs per group:
                                                              min =          5
Integration points = 12                                       avg =        5.0
                                                              max =          5

                                                Average RVI       =     0.2380
                                                Largest FMI       =     0.3297
DF adjustment:   Large sample                   DF:           min =     458.72
                                                              avg =   2,139.91
                                                              max =   5,481.36
Model F test:       Equal FMI                   F(  15,20159.0)   =      24.12
Within VCE type:          OIM                   Prob > F          =     0.0000

----------------------------------------------------------------------------------
           gunca |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-----------------+----------------------------------------------------------------
           Wdeal |   1.466972   .4020053     3.65   0.000     .6786674    2.255277
         Wzprdel |   .1297818   .1510445     0.86   0.390    -.1663253    .4258889
        Wzysragg |   1.257324   .2443876     5.14   0.000     .7781461    1.736502
               t |   .2489118   .0612104     4.07   0.000     .1287118    .3691117
                 |
   black#c.Wdeal |
              1  |   .8533233   .4623327     1.85   0.065    -.0533736     1.76002
                 |
 black#c.Wzprdel |
              1  |   .4275371   .1807275     2.37   0.018     .0731915    .7818828
                 |
black#c.Wzysragg |
              1  |  -.7081883   .2904291    -2.44   0.015    -1.277842   -.1385348
                 |
           Bdeal |   2.769167   .6171847     4.49   0.000     1.558898    3.979436
         Bzprdel |   1.559569   .2721862     5.73   0.000     1.024773    2.094365
        Bzysragg |  -.1106736   .2099039    -0.53   0.598    -.5222855    .3009382
                 |
          sample |
         oldest  |   .3196671   .1878959     1.70   0.089    -.0487792    .6881134
         1.black |   .3550727   .3022817     1.17   0.240       -.2378    .9479454
                 |
   black#c.Bdeal |
              1  |   1.676404   .7645017     2.19   0.028       .17703    3.175778
                 |
 black#c.Bzprdel |
              1  |  -.8649218   .3167695    -2.73   0.007    -1.487421   -.2424225
                 |
black#c.Bzysragg |
              1  |   .5600946   .2499411     2.24   0.025     .0699593     1.05023
                 |
           _cons |  -4.958428   .3143691   -15.77   0.000    -5.574925   -4.341931
-----------------+----------------------------------------------------------------
        /lnsig2u |    .247208   .2599575                     -.2627578    .7571738
-----------------+----------------------------------------------------------------
         sigma_u |   1.131568   .1470797                      .8768855     1.46022
             rho |   .2801658   .0524264                      .1894473    .3932494
----------------------------------------------------------------------------------
And the recovered estimates:
Code:
. estimates use intest

. estimates replay

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
active results
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Random-effects logistic regression              Number of obs     =      4,995
Group variable: id_s                            Number of groups  =        999

Random effects u_i ~ Gaussian                   Obs per group:
                                                              min =          5
                                                              avg =        5.0
                                                              max =          5

Integration method: mvaghermite                 Integration pts.  =         12

                                                Wald chi2(15)     =     460.09
Log likelihood  = -902.62441                    Prob > chi2       =     0.0000

----------------------------------------------------------------------------------
           gunca |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-----------------+----------------------------------------------------------------
           Wdeal |   1.306689   .3716408     3.52   0.000     .5782866    2.035092
         Wzprdel |   .2003617   .1413164     1.42   0.156    -.0766134    .4773367
        Wzysragg |   1.253574   .2290109     5.47   0.000     .8047209    1.702427
               t |   .2112262    .050596     4.17   0.000     .1120598    .3103926
                 |
   black#c.Wdeal |
              0  |          0  (empty)
              1  |   .7843317   .4204366     1.87   0.062    -.0397088    1.608372
                 |
 black#c.Wzprdel |
              0  |          0  (empty)
              1  |   .4323564   .1660132     2.60   0.009     .1069765    .7577363
                 |
black#c.Wzysragg |
              0  |          0  (empty)
              1  |  -.8261272    .262063    -3.15   0.002    -1.339761   -.3124931
                 |
           Bdeal |   2.756953   .5503567     5.01   0.000     1.678273    3.835632
         Bzprdel |   1.484126   .2171048     6.84   0.000     1.058609    1.909644
        Bzysragg |  -.0699422    .188472    -0.37   0.711    -.4393405     .299456
                 |
          sample |
       youngest  |          0  (empty)
         oldest  |   .3751893   .1656054     2.27   0.023     .0506087    .6997698
                 |
           black |
              0  |          0  (empty)
              1  |   .4189419    .263487     1.59   0.112    -.0974831    .9353668
                 |
   black#c.Bdeal |
              0  |          0  (empty)
              1  |   1.476672   .6676174     2.21   0.027     .1681658    2.785178
                 |
 black#c.Bzprdel |
              0  |          0  (empty)
              1  |  -.8800992   .2486459    -3.54   0.000    -1.367436   -.3927623
                 |
black#c.Bzysragg |
              0  |          0  (empty)
              1  |   .5593919   .2219713     2.52   0.012     .1243362    .9944476
                 |
           _cons |  -4.839172   .2767139   -17.49   0.000    -5.381521   -4.296823
-----------------+----------------------------------------------------------------
        /lnsig2u |   .0375149   .2574606                     -.4670986    .5421283
-----------------+----------------------------------------------------------------
         sigma_u |   1.018934   .1311677                      .7917186    1.311359
             rho |   .2398809   .0469449                      .1600379    .3432782
----------------------------------------------------------------------------------
LR test of rho=0: chibar2(01) = 30.61                  Prob >= chibar2 = 0.000
Does anyone know what is going on or why this happens? Has anyone experienced something similar? Everything is updated, and I ran this multiple times to make sure, but it happened every time.

What are the contemporary issues in applied economics?

Dear everyone,
I am currently thinking of carrying out research in applied economics.
Accordingly, to be in line with current developments in developing economies, I would greatly appreciate it if anyone could outline a few contemporary issues in applied economics that economists must seek solutions to.
Specifically, this could be in the areas of poverty, health, education, finance, or other recent concerns of developing economies.
All your comments and suggestions are warmly welcome; thank you in advance.

Calculate maximum of row of r() matrix

In the example below, r(table) is a matrix containing regression estimates, their standard errors (se), etc. I would like to calculate, and put into a (scalar) macro, the largest se in the table. This is hard for me because I am new to Stata matrices, but I bet it would be easy for some of you. Many thanks for your advice. Code follows.
webuse mheart1s0, clear
mi impute regress bmi attack smokes age female hsgrad, replace add(5)
mi estimate: regress attack bmi smokes age female hsgrad

matrix estimates = r(table)
matrix list estimates /* is a matrix containing the elements of the regression table */

local max_se = max(estimates["se",1...]) /* is my best idea for how to put the largest se in a macro called max_se. But it doesn't work. */
/* Instead it returns the following error message: */
matrix operators that return matrices not allowed in this context
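
One loop-based workaround that I think avoids the matrix-expression restriction (a sketch, using the same r(table) matrix) is to walk across the columns of the "se" row and keep a running maximum:

matrix estimates = r(table)
local se_row = rownumb(estimates, "se")
local max_se = estimates[`se_row', 1]
forvalues j = 2/`=colsof(estimates)' {
    local max_se = max(`max_se', estimates[`se_row', `j'])
}
display "largest se = " `max_se'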

MG residuals loop problem with code

Dear Statalist Users,
I am trying to calculate and store the residuals from an MG regression with the help of a loop. For some reason I do not get any results: "yhat" and "emg" show missing values for all of my observations. If anyone could point out why this might be the case, it would be greatly appreciated.
Thank you, J.

This is the code I am working with:

Code:
levelsof cid, local(cidlist)

qui {
    xtpmg d.($static), lr(l.($static)) replace mg full
    g yhat = .
    g emg = .
    foreach id of local cidlist {
        predict y_`id' if cid == `id', eq(cid_`id'SR)
        replace yhat = y_`id' if cid == `id'
        replace emg = `e(depvar)' - yhat if cid == `id'
        sum emg if _est_mg == 1 & cid == `id'
        replace emg = emg - r(mean) if cid == `id'
    }
    drop yhat y_*
}

Defining closing date price and intraday returns based on intraday data

Hello everyone,

I'm relatively new to Stata and still have a lot of problems with (probably) simple tasks.
Attached you can find a file of intraday stock price data for one stock with the columns describing the following:
  • Column 1: Trade Price of the Stock
  • Column 2: Timestamp for the execution of the trade
  • Column 3: Timestamp for when the order was made
  • Column 4: Date the trade was made (you can already see that from the timestamp in Column 2)
I actually want to calculate the intraday log returns, which means I first need to define the closing price for each trade date.
For that, I want a fifth column containing the closing price, i.e. the price of the last trade that occurred on a given trade date.

Concretely, this means that in Column 5 I want to fill in, for the first 27 observations (all those with trade_date == 19540), the last observed trade price on trade_date == 19540, which should be 12.125 as shown in row 27.
From there on, Column 5 should contain the closing price of day 2, i.e. the last price for which trade_date == 19541, which should be 12.145 as shown in row 72.

As my dataset has many more observations, I want to continue this in Column 5 so that I always have, for each trading date, the last observed trade price.

I have tried a lot in my dataset but couldn't find a real solution, so I would really appreciate your help and advice.

I thank you very much in advance.

Best regards
Philipp

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input double(price trade_timestamp order_entry_timestamp) int trade_date
 11.83 1688288578000 1688287830060 19540
 11.83 1688288578000 1688288398670 19540
  11.8 1688288734630 1688288032020 19540
  11.8 1688288734630 1688288734630 19540
 11.91 1688306048560 1688306048560 19540
 11.91 1688306048560 1688054917000 19540
 11.91 1688306048600 1688306048600 19540
 11.91 1688306048600 1688286747060 19540
 11.85 1688306143190 1688305132130 19540
 11.85 1688306143190 1688306143190 19540
 11.88 1688306143190 1688305194050 19540
 11.88 1688306143190 1688306143190 19540
  12.1 1688311387770 1688311387770 19540
  12.1 1688311387770 1688311325840 19540
 12.12 1688311573560 1688311573560 19540
 12.12 1688311573560 1688311573540 19540
  12.1 1688312012900 1688312012900 19540
 12.15 1688313453030 1687362723660 19540
 12.15 1688313453030 1688313453030 19540
12.125 1688318739180 1688317423960 19540
12.125 1688318739180 1688318739180 19540
12.125 1688318739180 1688318739180 19540
12.125 1688318739180 1688317423960 19540
12.135 1688318740200 1688318740200 19540
12.135 1688318740200 1688318735000 19540
12.125 1688318954200 1688318954200 19540
12.125 1688318954200 1688318781690 19540
 12.15 1688374958010 1688373087820 19541
 12.15 1688374958010 1688374682010 19541
 12.08 1688396474850 1688396402030 19541
 12.08 1688396474850 1688396474850 19541
 12.05 1688396705530 1688396705530 19541
 12.05 1688396705530 1688396480780 19541
 12.08 1688398229750 1688398229750 19541
 12.08 1688398229750 1688396402030 19541
 12.08 1688398385390 1688396402030 19541
 12.08 1688398385390 1688398385390 19541
12.095 1688398535390 1688398535390 19541
12.095 1688398535390 1688381948030 19541
12.095 1688398535390 1688398535390 19541
12.095 1688398535390 1688381948030 19541
  12.1 1688398544680 1688379326890 19541
12.095 1688398544680 1688398544680 19541
12.095 1688398544680 1688398544680 19541
 12.14 1688405364870 1688405331960 19541
 12.14 1688405364870 1688405364870 19541
 12.15 1688405368870 1688405368870 19541
 12.15 1688405368870 1688399475070 19541
 12.15 1688405370210 1688399475070 19541
 12.15 1688405370210 1688405370210 19541
12.145 1688405744010 1688405700070 19541
12.145 1688405744010 1688405461890 19541
12.145 1688405744010 1688405591130 19541
12.145 1688405744010 1688405700070 19541
12.145 1688405744010 1688405699630 19541
12.145 1688405744010 1688405699630 19541
12.145 1688405744010 1688405491780 19541
12.145 1688405744010 1688405492010 19541
12.145 1688405744010 1688405695380 19541
12.145 1688405744010 1688405695380 19541
12.145 1688405744010 1688405461890 19541
12.145 1688405744010 1688405520470 19541
12.145 1688405744010 1688405641140 19541
12.145 1688405744010 1688405591130 19541
12.145 1688405744010 1688405461890 19541
12.145 1688405744010 1688405641140 19541
12.145 1688405744010 1688405641140 19541
12.145 1688405744010 1688405699630 19541
12.145 1688405744010 1688405532360 19541
12.145 1688405744010 1688405695380 19541
12.145 1688405744010 1688405492010 19541
12.145 1688405744010 1688405431440 19541
 11.98 1688461375000 1688455810500 19542
 11.98 1688461375000 1688461237420 19542
 11.98 1688461375000 1688461195320 19542
 11.98 1688461375000 1688461175460 19542
 11.98 1688461375000 1688461193590 19542
 11.98 1688461375000 1688455810500 19542
 11.98 1688461375000 1688461237420 19542
 11.98 1688461375000 1688455810500 19542
 11.98 1688461379030 1688461195320 19542
 11.98 1688461379030 1688461379030 19542
 11.98 1688461379030 1688461379030 19542
 11.98 1688461379030 1688461195320 19542
 11.98 1688461379030 1688461195320 19542
 11.98 1688461379030 1688461379030 19542
 11.98 1688461385430 1688461195320 19542
 11.98 1688461385430 1688461385430 19542
 11.98 1688461385430 1688461385430 19542
 11.98 1688461385430 1688461195320 19542
11.965 1688461385670 1688461375060 19542
11.965 1688461385670 1688461385670 19542
11.965 1688461385670 1688461376950 19542
11.965 1688461385670 1688461385670 19542
11.965 1688461385670 1688461381020 19542
11.965 1688461385670 1688461385670 19542
 11.96 1688461461660 1688461423910 19542
 11.96 1688461461660 1688461461660 19542
 11.95 1688461462380 1688456123790 19542
 11.95 1688461462380 1688461462380 19542
end
format %tcDD.NN.CCYY_HH:MM:SS.ss trade_timestamp
format %tcDD.NN.CCYY_HH:MM:SS.ss order_entry_timestamp
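
For reference, here is a minimal sketch of the kind of thing I have in mind, using the variables from the -dataex- excerpt above: within each trade_date, sort by trade_timestamp and copy the last observed price to every row of that day.

Code:
bysort trade_date (trade_timestamp): gen double close_price = price[_N]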

Tabout and Unicode export to HTML or docx?

I wonder if someone can help me out with -tabout- (see http://tabout.net.au/docs/home.php - beta 3.0.3) and Unicode characters in labels/titles.
When I export tabout-generated tables to HTML and docx, the Umlaute are not displayed correctly in Firefox and Word.

An example:
Code:
sysuse auto
la de origin 1 "Ausländisch", modify
tabout foreign using test.docx, style(docx) paper(A4) c(col) f(1) clab(Col_%) npos(col) nlab(N) title(In- und ausländische Autos)
tabout foreign using test.html, style(htm) c(col) f(1) clab(Col_%) npos(col) nlab(N) title(In- und ausländische Autos)
Is there an "easy" way out? Or do I have to relabel all my labels manually and if how?

margins in tobit model

Hello everybody,


I am trying to measure the effect of collaboration with several external partners on the success of new products, so my dependent variable is censored at both 0 and 100 (the share of innovative sales).
I estimate a tobit model:


tobit sharenewprod coopbc coopcc coopms coopsf indun exs bhsp rnd logempl i.east, ll(0) ul(100) vce(robust)

All of the coop variables are dummies measuring whether or not a firm collaborates with a given external partner. The indun variable is ordinal, with a minimum value of 0 (a firm collaborates with universities at no stage of the new product development process) and a maximum value of 5 (a firm collaborates with universities at five stages of the process).

Can anybody tell me which of the different types of marginal effects for tobit models is the right one if I want to know:

1)

- whether a firm switches from the lower bound to the uncensored values (in other words, switches from no innovative sales to positive innovative sales)

2)

- whether a firm benefits from collaborating at more stages of the process, in terms of increasing innovative sales


I am to some extent confused by the examples given in the tobit postestimation documentation.
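
For reference, the candidate predictions I have been looking at in that documentation are roughly the following (a sketch only; which prediction is the right one for each of my two questions is exactly what I am unsure about):

Code:
* probability of being above the lower limit, i.e. of having any innovative sales
margins, dydx(coopbc) predict(pr(0, .))
* expected share of innovative sales, conditional on being uncensored
margins, dydx(indun) predict(e(0, 100))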

Thank you so much in advance.

Best regards

Philipp

Different division within the same variable

The following variable shows the period covered by an individual's take-home pay. I have another variable showing the individual's take-home pay. I would like to find the weekly take-home pay for every observation in my dataset. How do I convert the fortnight/four-week/calendar-month/year amounts to weekly pay while retaining the "one week" pay as it is?


Take-home pay period amount covered:
                 |      Freq.     Percent        Cum.
-----------------+-----------------------------------
        One week |      1,963       25.99       25.99
     A fortnight |         99        1.31       27.30
      Four weeks |        482        6.38       33.68
A calendar month |      4,694       62.15       95.83
       A year or |        280        3.71       99.54
    Other period |         35        0.46      100.00
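
A sketch of what I have in mind, with hypothetical variable names pay (take-home pay) and period (the period variable coded 1 "One week" through 6 "Other period"); the one-week amounts are kept as they are and the "other period" cases are left missing:

Code:
gen weekly_pay = pay               if period == 1   // one week: keep as is
replace weekly_pay = pay/2         if period == 2   // a fortnight
replace weekly_pay = pay/4         if period == 3   // four weeks
replace weekly_pay = pay*12/52     if period == 4   // a calendar month
replace weekly_pay = pay/52        if period == 5   // a year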

Problem with marginal effects after cmp command (by D. Roodman) with survey data

I am using cmp (by D. Roodman) from SSC in Stata 15.1.

I am using survey data. I declared my survey structure with the following command:
svyset [pw=w12wtrsp], strata(countrywave)

When I run cmp with a multinomial probit (w2status is a 4 level factor variable) with endogenous treatment (w2shock15p), my model converges.

cmp (w2status = w1func i.w1civil i.w1educ4 w1agey w1age2 i.w2shock15p##i.country i.w2shock15p i.country i.wave, iia) (w2shock15p = w1HS i.w1educ4 i.w1smoken w1agey i.w1overweight i.country i.wave) , ind($cmp_mprobit $cmp_probit) tech(dfp nr) difficult nolr svy

To get the average marginal effect of the treatment by country on my second choice, I then run:
. margins, dydx(w2shock15p) predict(equation(_outcome_1_2) pr) force noestimcheck over(country)

But I get the following error message:

__00000S not found


When I type ereturn list, I get


macros:
e(cmd) : "cmp"
e(cmdline) : "cmp (w2status = i.w2shock15p w1func w1agey w1age2 i.w1educ4 i.w1gender i.w1civil i.w.."
e(title) : "Mixed-process regression"
e(predict) : "cmp_p"
e(eqnames) : "_outcome_1_1 _outcome_1_2 _outcome_1_3 _outcome_1_4 w2shock15p atanhrho_25 atanhrho_.."
e(depvar) : "w2status w2shock15p"
e(ghktype) : "halton"
e(diparmopt) : "diparm(atanhrho_25, tanh label("rho_25")) diparm(atanhrho_35, tanh label("rho_35")) .."
e(quad_method) : "ghermite"
e(resultsform) : "structural"
e(EffNames1_5) : "_cons"
e(covariance5) : "unstructured"
e(EffNames1_4) : "_cons"
e(covariance4) : "unstructured"
e(EffNames1_3) : "_cons"
e(covariance3) : "unstructured"
e(EffNames1_2) : "_cons"
e(covariance2) : "unstructured"
e(EffNames1_1) : "_cons"
e(covariance1) : "unstructured"
e(covariance) : "unstructured"
e(ivars) : "_n"
e(model) : "lf1 cmp_lnL() (_outcome_1_1: _mp_cmp_y1 =, offset() exposure()) (_outcome_1_2: _mp_.."
e(marginsok) : "Pr XB default"
e(svyml) : "svyml"
e(opt) : "moptimize"
e(singleunit) : "missing"
e(strata1) : "countrywave"
e(wvar) : "__00000S"
e(wexp) : "= __00000S"
e(wtype) : "pweight"
e(vcetype) : "Linearized"
e(vce) : "linearized"
e(prefix) : "svy"
e(user) : "cmp_lnL()"
e(ml_method) : "lf1"
e(technique) : "dfp nr"
e(which) : "max"
e(properties) : "b V"


If I run the same cmp command without the svy option, my model converges and I get results after typing
. margins, dydx(w2shock15p) predict(equation(_outcome_1_2) pr) force noestimcheck over(country)


This suggests that my issue comes from the svy option. Did I not define my survey structure properly?

Thank you for your help.

latent variable constrained with binary outcome gsem

Hello Stata Forum,

I have 1,007 observations on 10 questions mapped to two latent variables (4 continuously measured items load on the latent variable "D" and 6 encoded (ordinal) variables load on the latent variable "VA"). My theoretical framework is that the latent variable D is associated with VA, which in turn predicts the binary outcome FSN. Here is my code:

gsem (VA -> codeq103_1) (VA -> codeq103_2) (VA -> codeq103_6) (VA -> codeq101_2) (VA -> codeRq103_3) (VA -> codeRq101_1) (VA -> binq2_FSN, family(binomial) link(logit)) (D -> VA) (D -> grm) (D -> an) (D -> cont) (D -> cr), latent(VA D) cov(e.codeq103_1*e.codeq103_2 e.codeq103_6*e.codeRq103_3) nocapslatent

Stata defaults to constraining the loading on the binary outcome:

( 1) [binq2_FSN]VA = 1

I am unable to define my own constraint (error r(111)) because, I believe, VA is a latent variable.

Can anyone help me understand whether I can use gsem with a binary outcome predicted by my latent variable?
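
For context, the only workaround I have come up with is a sketch along the following lines, using the @ notation to fix one measurement loading at 1 so that the loading on binq2_FSN is freed; I am not sure this is the right way to think about it:

Code:
gsem (VA -> codeq103_1@1 codeq103_2 codeq103_6 codeq101_2 codeRq103_3 codeRq101_1) ///
     (VA -> binq2_FSN, family(binomial) link(logit)) ///
     (D -> VA) (D -> grm an cont cr), ///
     latent(VA D) cov(e.codeq103_1*e.codeq103_2 e.codeq103_6*e.codeRq103_3) nocapslatent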

Thank you,
~ Heidi Brown

Impact of one time series variable with a different frequency on another variable with a different frequency

I am trying to understand the effect of GDP, Inflation and NFP data on the value of the stock market.
I have daily data on S&P500, quarterly data on GDP and inflation and monthly data on NFP.
I have combined these together in Stata; however, on days with no data announcements, GDP, CPI and NFP have missing values.

When I do the following:

regress sp500 yoygdp yoycpi momnfpchange

It only uses observations for which data are available for all of the variables.

Is there a way I can analyse the impact of the data announcements for all days for which I have stock market data?

see data below:

input str10 date float(sp500 yoygdp yoycpi) int momnfpchange float date2
"19/07/2010" 1071.25 . . . 18462
"20/07/2010" 1083.48 . . . 18463
"21/07/2010" 1069.59 . . . 18464
"22/07/2010" 1093.67 . . . 18465
"23/07/2010" 1102.66 . . . 18466
"26/07/2010" 1115.01 . . . 18469
"27/07/2010" 1113.84 . . . 18470
"28/07/2010" 1106.13 . . . 18471
"29/07/2010" 1101.53 . . . 18472
"30/07/2010" 1101.6 . . . 18473
"02/08/2010" 1125.86 . . . 18476
"03/08/2010" 1120.46 . . . 18477
"04/08/2010" 1127.24 . . . 18478
"05/08/2010" 1125.81 . . . 18479
"06/08/2010" 1121.64 . . . 18480
"09/08/2010" 1127.79 . . . 18483
"10/08/2010" 1121.06 . . . 18484
"11/08/2010" 1089.47 . . . 18485
"12/08/2010" 1083.61 . . . 18486
"13/08/2010" 1079.25 . . . 18487
"16/08/2010" 1079.38 . . . 18490
"17/08/2010" 1092.54 . . . 18491
"18/08/2010" 1094.16 . . . 18492
"19/08/2010" 1075.63 . . . 18493
"20/08/2010" 1071.69 . . . 18494
"23/08/2010" 1067.36 . . . 18497
"24/08/2010" 1051.87 . . . 18498
"25/08/2010" 1055.33 . . . 18499
"26/08/2010" 1047.22 . . . 18500
"27/08/2010" 1064.59 . . . 18501
"30/08/2010" 1048.92 . . . 18504
"31/08/2010" 1049.33 . 1.1 -36 18505
"01/09/2010" 1080.29 . . . 18506
"02/09/2010" 1090.1 . . . 18507
"03/09/2010" 1104.51 . . . 18508
"06/09/2010" . . . . 18511
"07/09/2010" 1091.84 . . . 18512
"08/09/2010" 1098.87 . . . 18513
"09/09/2010" 1104.18 . . . 18514
"10/09/2010" 1109.55 . . . 18515
"13/09/2010" 1121.9 . . . 18518
"14/09/2010" 1121.1 . . . 18519
"15/09/2010" 1125.07 . . . 18520
"16/09/2010" 1124.66 . . . 18521
"17/09/2010" 1125.59 . . . 18522
"20/09/2010" 1142.71 . . . 18525
"21/09/2010" 1139.78 . . . 18526
"22/09/2010" 1134.28 . . . 18527
"23/09/2010" 1124.83 . . . 18528
"24/09/2010" 1148.67 . . . 18529

Problems with Poisson Postestimation

Hi,

I am struggling to properly understand my postestimation results following a Poisson regression.
I have a panel dataset of 50 countries over 10 years. The dependent variable is a count, i.e. the average number of World Bank conditions a country receives in a year. I proceeded in the following steps:

Step 1: Running Poisson regression

Code:
xtset country year
egen t= group(Year)
xtpoisson AvConditions ForeignAid GDP Inflation Investment t, vce(cluster recid)
I include a time trend to control for structural changes in conditionality over time.
I also assume that the errors are not independent within recipient countries, and therefore cluster the standard errors by recipient country in order to control for heteroscedasticity.

Step 2: Checking Goodness of Fit:

Code:
poisgof
last estimates for poisson not found
r(301);
I couldn't perform the above test with ‘xtpoisson’.

Q1) What are the alternatives of checking goodness of fit of the model other than the above command and graphical representation?

One option was to perform LR chi-square test by estimating both the poisson and negative binomial (although a formal test didn’t find any evidence of overdispersion):

Code:
xtpoisson AvConditions ForeignAid GDP Inflation Investment t, vce(cluster recid)
est store poisson
xtnbreg AvConditions ForeignAid GDP Inflation Investment t, vce(cluster recid)
est store nbreg
lrtest poisson nbreg, stats force
However, I couldn't perform the above test because vcetype 'cluster’ is not allowed with xtnbreg.


Q2) In a previous post I was advised to use 'RESET'; however, I have the following concerns related to RESET:
  • 2A) Is RESET only valid after poisson, or is it only appropriate for OLS?
  • 2B) As I can't specify vce(cluster country) and fe/re with RESET, can I still rely on it?
https://www.statalist.org/forums/for...ative-binomial
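
For reference, my understanding of the RESET-type check discussed there is something like the sketch below (re-fit the model, square the linear predictor, and test whether it adds explanatory power); whether this remains valid with fe and clustered errors is part of what I am asking:

Code:
xtpoisson AvConditions ForeignAid GDP Inflation Investment t, fe vce(robust)
predict double xbhat, xb
gen double xbhat2 = xbhat^2
xtpoisson AvConditions ForeignAid GDP Inflation Investment t xbhat2, fe vce(robust)
test xbhat2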

Step 3: Choosing b/w Fixed vs Random Effects:

Next, I wanted to perform Hausman test for comparing between fixed and random effects as follows:

Code:
xtpoisson AvConditions ForeignAid GDP Inflation Investment t, fe
est store fe
xtpoisson AvConditions ForeignAid GDP Inflation Investment t, re
est store re
hausman fe re
According to the Hausman results, the data fails to meet the asymptotic assumptions of the Hausman test.

Upon reading previous posts related to my problem, some alternatives discussed were 'suest' and 'sigmamore', which I can't use with poisson. In one of the previous posts, Joao Santos Silva suggested:
The Poisson regression you are using is based on a non-linear model and in this context the random effects estimator will have to be based on unreasonable assumptions about the distribution of the errors. So, I would just stick to the FE regression.
https://www.statalist.org/forums/for...-for-xtpoisson

However, a key author in my research field has used poisson regression with random effects after performing Hausman test. There are numerous other studies using random effects with poisson. So, I got confused reading it.

Q3) Is there any other alternative to Hausman test for poisson regressions?
Q4) I also couldn't include vce(cluster recid) while performing the Hausman test. Is it OK to exclude it for the sake of the Hausman test only?

Some of these problems might be solved by using 'poisson/nbreg' instead of 'xtpoisson/xtnbreg', but that also comes with limitations; for instance, the fe/re options are not allowed with 'poisson'. I wonder about the rationale behind having two different commands for the same purpose.

Apologies if I am not able to put the problems clearly.
I look forward to your guidance.

Best regards,
Imran Khan.

Question about log-logistic for health care cost

Dear scientist,

My question is how to use the log-logistic distribution to model health care cost data. As we know, the distribution of health cost data is skewed to the right. I am considering the following methods for modeling these cost data: 1) GLM with a gamma family, 2) OLS on the log of cost (lognormal), 3) log-logistic. For example, y is the cost and the covariates are age and group.

For GLM I am thinking to use
Code:
 glm y age i.group, family(gamma) link(log)
For OLS with log-normal I am thinking to use
Code:
 gen lny = ln(y)
 reg lny age i.group
Would you please tell me whether these two modeling approaches are correct, and how I can model health care costs with the log-logistic?
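
For the log-logistic part, the only route I have come across is a sketch like the following, which treats the (positive) cost as a "survival time" with no censoring and fits a parametric log-logistic model; I would like to know whether that is a sensible way to do it:

Code:
stset y
streg age i.group, distribution(llogistic)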

Thank you very much!

Jack Liang Wang

Results turning insignificant after introducing unconditional country-fixed effects in a negative binomial regression

Hello,

I am running a negative binomial regression on panel data. My coefficient of interest turns insignificant when I control for unconditional country fixed effects:

Code:
nbreg DV IV, vce(cluster country)
nbreg DV IV i.country, vce(cluster country)
I am wondering what this signifies. Does it mean that the first model is missing something and that I must control for unconditional country fixed effects?

I can't even perform a Hausman test, given my understanding that -nbreg- is not a true fixed-effects model.

In some of the prior studies on my topic, authors have controlled for unconditional country fixed effects. On the other hand, some authors have only clustered the standard errors by country.

Any comments would be much appreciated.

Kind regards,
Shazmeen Maroof.

Problem with panel logistic regression

Hi Statalist community,

I'm studying the influence of macroeconomic variables on the bankruptcy of European companies, and I'm 100% new to Stata. I have a database of 14,000 companies, 783 of which went bankrupt. I am trying to use panel logistic regression to predict bankruptcy, but I'm having difficulties with the results. The database is composed of annual financial and macroeconomic variables over a 10-year period. Below is a sample of the real data, including some of the financial and macroeconomic variables (there are 24 financial and 12 macroeconomic variables in total).

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input double(ID year country ebitcapital equitytotalliabilities ebitdatotalassets GDPgrowth Inflation Producerpriceindex status)
 1 2007  4   .23814305765664037  .26667669414671263  .06961752847815933  3.76899240771773 2.8 95.76667 0
 1 2008  4    .1250327456454555  .24090875652907165  .03089353134092884  1.11759664396229 4.1 100.7667 0
 1 2009  4   .22242225429285223  .17294602659718725    .044293114743546 -3.57366541819603 -.2     95.9 0
 1 2010  4   .23866933638570717  .15207285663290396 .043342123762601714  .014063877762696   2      100 0
 1 2011  4   .21988540972599593  .14972839997418194  .04851466868218692 -.998764958115056   3 106.0083 0
 1 2012  4    .2760273082838007  .16106427424166606  .07363138643983107 -2.92775050717711 2.4    108.7 0
 1 2013  4    .2969426156814938  .16179524832263936  .07172461691270385 -1.70570500034655 1.5    108.3 0
 1 2014  4   .19356838233084217  .14427198685380616  .04536146132568478   1.3799972382427 -.2 106.7333 0
 1 2015  4   .23236064379205276  .17506271611408863  .05695319716411316  3.43225332792365 -.6    105.1 0
 1 2016  4   .24818533664158868  .17563844887979171 .052478052342854305  3.27446273967741 -.3  103.325 0
 2 2007  3   .04095280623296299   .3919506454773353  .04085432803065271  3.36732999465967 2.3 97.77017 0
 2 2008  3   .10166363369699483    .305931293266728 .056334581779407006  .818129076752237 2.8 100.1811 0
 2 2009  3    .1609222552172417   .3533238843135158  .08211500808999689 -5.56484434674223  .2 97.44035 0
 2 2010  3   .16775683532465388   .3977062675289382  .09301208421011892  3.94760318130693 1.1      100 0
 2 2011  3   .15241399545149772  .35174304049025384  .08922418336465443  3.71752981637311 2.5 103.5167 0
 2 2012  3   .14021844065953046  .33367110813802187  .08924464431204657  .685942663233891 2.1 105.1167 0
 2 2013  3   .17788657133315117   .2825404224441252  .08510009298952113  .599708640127677 1.6 104.8667 0
 2 2014  3 -.024111181605063738   .2569699270214783 .023988910414155056  1.92669354543975  .8 104.6417 0
 2 2015  3   .18302704887712648  .31084411257863714  .07454274809545725  1.50427411571374  .1   104.55 0
 2 2016  3   .24320507817259118  .22551433114491287   .0652122251152228   1.8553500009344  .4 103.8417 0
 3 2007  6    .4144271570014144   .1058224816644215   .0760693015701137  2.34783509068632 1.6    98.95 0
 3 2008  6      .40473061760841  .10314448359989158  .07740508661997789  .077115531960841 3.2 103.1333 0
 3 2009  6   .43733681462140994  .10390667390124797  .10373647984267453 -2.87392034727281  .1   97.825 0
 3 2010  6   .41721854304635764  .07860489328474753  .07386701558225288  1.89314974280796 1.7      100 0
 3 2011  6   .45314505776636715      .0950927734375  .08037008137331402  2.10217713507451 2.3 104.3667 0
 3 2012  6     .441747572815534  .09748018454986396  .09507383852538537  .225197059687519 2.2 106.2583 0
 3 2013  6    .4759299781181619  .10955291861440729   .0963595117208599  .611588324089456   1 105.9417 0
 3 2014  6    .3983353151010702  .09380925822643614  .08831327758515195  .991978383362659  .6 104.7583 0
 3 2015  6   .37286324786324787   .0988802028311853  .07469717362045761  .975685630921063  .1 102.5583 0
 3 2016  6   .42392717815344605  .07912336660150221  .07112890922959574  1.11901078378441  .3 100.6083 0
 4 2007  6    .3803462321792261  .32657133355503826   .1452745048884432  2.34783509068632 1.6    98.95 0
 4 2008  6     .364018691588785   .3654995730145175  .15572232645403378  .077115531960841 3.2 103.1333 0
 4 2009  6     .312390158172232   .4205469327420547   .1550468262226847 -2.87392034727281  .1   97.825 0
 4 2010  6   .21279317697228145   .4398799474770212  .12623762376237624  1.89314974280796 1.7      100 0
 4 2011  6   .20609462710505214   .4329109529595556  .11811023622047244  2.10217713507451 2.3 104.3667 0
 4 2012  6   .19169329073482427  .43016663803470195  .11255255255255256  .225197059687519 2.2 106.2583 0
 4 2013  6   .20063316185199842  .44032061334727307  .11021050084684249  .611588324089456   1 105.9417 0
 4 2014  6     .133446519524618   .5152204338698391  .09317630758572913  .991978383362659  .6 104.7583 0
 4 2015  6    .1299559471365639   .5092540661805945  .10281184194227673  .975685630921063  .1 102.5583 0
 4 2016  6   .15923332104681165   .4752145734804694  .09902635953455237  1.11901078378441  .3 100.6083 0
 5 2007 11   .33925888459313525   .1679201942244606  .06341575773261993  3.69675257286876 1.6 96.60833 0
 5 2008 11     .271687466520382  .14686295194307894  .04853874994011361  1.69850746268657 2.2 103.8083 0
 5 2009 11   .11064168295101984  .14866218808977016                   0 -3.76650481762244   1 91.75833 0
 5 2010 11   .08652155481907314  .18262498644776196                   0  1.33080763488674  .9 100.0083 0
 5 2011 11     .131100819073837  .19209834622502142                   0  1.66407908494661 2.5 110.9083 0
 5 2012 11   .11194524105838428   .1609739441882666                   0 -1.05715229600264 2.8  114.925 0
 5 2013 11   .06219180990346996  .21275382855745495                   0 -.121113806028916 2.6    113.4 0
 5 2014 11   -.0453018841577627    .201372613851997                   0  1.41901789895785  .3 110.8833 0
 5 2015 11 -.011804125703119393  .22951812052510803                   0  2.26021114168882  .2 102.9167 0
 5 2016 11                    .  .21126525158811846                   0  2.14548423538141  .1    99.15 0
 6 2007  9    .3035651555525091 .045961309377307406 .028484748999280145  1.32978780918613   2 97.34167 0
 6 2008  9   .34460299773552355  .04563566789713414  .03600292287848939 -1.04840020101422 3.5 101.6417 0
 6 2009  9   .22076488648842474   .0561952912625333 .029733603368725298 -5.53421017284482  .8 96.80833 0
 6 2010  9   .31401112841040674  .03889529606465585  .02631343940326328   1.6482827573676 1.6      100 0
 6 2011  9    .3421593592482189   .0317998880147799 .020953830951741855  .720038053966915 2.9 104.5333 0
 6 2012  9   .13345217298299406  .22195361755751317  .05339062214156315 -2.85172614796226 3.3 106.4917 0
 6 2013  9   .10442699964081138  .14340396684239054  .03847085715132498 -1.74893391242169 1.2 106.3083 0
 6 2014  9   .15928356867562413  .24501661249106768  .05787062424995141  .193334483778495  .2    105.7 0
 6 2015  9   .19233761692128318  .21544827332521063  .06408263479237701  .875244254248146  .1    104.3 0
 6 2016  9    .1978098387193993  .17123926541388204  .05783908820629858  1.05716514050009 -.1  102.825 0
 7 2007  9   .19401504004022507  .19257150359244482  .05408860271010165  1.32978780918613   2 97.34167 0
 7 2008  9   .16561392959068127   .2328781742742338  .04545421235858463 -1.04840020101422 3.5 101.6417 0
 7 2009  9   .10001995676035257  .27266543728649595 .034081974417140753 -5.53421017284482  .8 96.80833 0
 7 2010  9   .12017788883012895   .3373108418733838 .047223706815131924   1.6482827573676 1.6      100 0
 7 2011  9   .12685754387965975   .3843776944773262  .05696832383142844  .720038053966915 2.9 104.5333 0
 7 2012  9 -.023888515094376193   .6364440575399099 .015552190974211417 -2.85172614796226 3.3 106.4917 0
 7 2013  9   .10414342111188388   .5428456020214312   .0625441848077447 -1.74893391242169 1.2 106.3083 0
 7 2014  9   .22645073359535475  .21637191947082285  .06778154825885088  .193334483778495  .2    105.7 0
 7 2015  9    .2125018490212514  .19996450509253325  .06614833256644537  .875244254248146  .1    104.3 0
 7 2016  9    .2032900154334251  .17070649033542126 .059146194968112215  1.05716514050009 -.1  102.825 0
 8 2007 11   .19828325049903595  .41548799687765164  .08457041731331322  3.69675257286876 1.6 96.60833 0
 8 2008 11   .16262467745409628   .3813894770725721  .06853673011665672  1.69850746268657 2.2 103.8083 0
 8 2009 11   .16573312107154428  .41977729416585396                   0 -3.76650481762244   1 91.75833 0
 8 2010 11   .04866179791770709   .3938533152360137                   0  1.33080763488674  .9 100.0083 0
 8 2011 11   .15626600562439638  .35577611543366044                   0  1.66407908494661 2.5 110.9083 0
 8 2012 11  -.19413325021729597  .23072246751229653                   0 -1.05715229600264 2.8  114.925 0
 8 2013 11    .1583778203235593  .32950594070198386                   0 -.121113806028916 2.6    113.4 0
 8 2014 11   .28328498978959826  .35294353182695987                   0  1.41901789895785  .3 110.8833 0
 8 2015 11   .19447755673889042  .36335972974444725                   0  2.26021114168882  .2 102.9167 0
 8 2016 11                    .                   .                   .  2.14548423538141  .1    99.15 0
 9 2007  4   .40726774979620484    .148815204000807   .0751167777033643  3.76899240771773 2.8 95.76667 0
 9 2008  4    .4975557101506596  .12487041217197357  .07939309264857768  1.11759664396229 4.1 100.7667 0
 9 2009  4    .3937372453394627  .16608854878916943  .07834763438197201 -3.57366541819603 -.2     95.9 0
 9 2010  4    .3006653507393072  .22663108670590895  .07510340933400961  .014063877762696   2      100 0
 9 2011  4    .4697878942896326  .19069312304415253  .09476209098599689 -.998764958115056   3 106.0083 0
 9 2012  4   .23791946562698332   .2941776865707721  .06805278897721459 -2.92775050717711 2.4    108.7 0
 9 2013  4   .30925759682796117  .32240736306463236   .0877030453800718 -1.70570500034655 1.5    108.3 0
 9 2014  4     .173209566279401   .3318152226364774  .05599740325851984   1.3799972382427 -.2 106.7333 0
 9 2015  4    .1407703271290773   .4666231541097543  .05563229228789517  3.43225332792365 -.6    105.1 0
 9 2016  4  .005423742389665445   .4557557316311813 .012333957620020392  3.27446273967741 -.3  103.325 0
10 2007  4                    .                   .                   .  3.76899240771773 2.8 95.76667 0
10 2008  4                    .                   .                   .  1.11759664396229 4.1 100.7667 0
10 2009  4    .3637494711831915   .5789714862094771  .17109112629529713 -3.57366541819603 -.2     95.9 0
10 2010  4    .3295450102968234   .5995436299864435  .16357073777613074  .014063877762696   2      100 0
10 2011  4    .3652654472568526    .480927819442622  .15308638399094857 -.998764958115056   3 106.0083 0
10 2012  4    .3659497633189905  .38260232612970196  .14003362208236383 -2.92775050717711 2.4    108.7 0
10 2013  4    .3786570057601004  .35110376359683826   .1374562118574985 -1.70570500034655 1.5    108.3 0
10 2014  4    .2985188715026209   .4764936679590712   .1342249525541363   1.3799972382427 -.2 106.7333 0
10 2015  4    .3768489920549247  .40784836178101846   .1522550723143334  3.43225332792365 -.6    105.1 0
10 2016  4                    .                   .                   .  3.27446273967741 -.3  103.325 0
end
format %ty year
And here are some of the results:

Code:
Random-effects logistic regression              Number of obs     =     91,598
Group variable: ID                              Number of groups  =     11,518

Random effects u_i ~ Gaussian                   Obs per group:
                                                              min =          1
                                                              avg =        8.0
                                                              max =          9

Integration method: mvaghermite                 Integration pts.  =         12

                                                Wald chi2(4)      =     203.97
Log likelihood  = -3796.2326                    Prob > chi2       =     0.0000

-----------------------------------------------------------------------------------
           status |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
------------------+----------------------------------------------------------------
ebitdatotalassets |
              L1. |  -.5149134   .1044121    -4.93   0.000    -.7195573   -.3102695
                  |
          rlativo |
              L1. |   .1827195   .0352374     5.19   0.000     .1136554    .2517836
                  |
        GDPgrowth |
              L1. |  -.0642959    .017428    -3.69   0.000    -.0984542   -.0301377
                  |
             HICP |
              L1. |   .1479776   .0117485    12.60   0.000      .124951    .1710043
                  |
            _cons |  -19.21878   1.148396   -16.74   0.000     -21.4696   -16.96797
------------------+----------------------------------------------------------------
         /lnsig2u |  -10.27488   11.63557                     -33.08017    12.53042
------------------+----------------------------------------------------------------
          sigma_u |   .0058727   .0341662                      6.56e-08    525.9517
              rho |   .0000105    .000122                      1.31e-15    .9999881
-----------------------------------------------------------------------------------
LR test of rho=0: chibar2(01) = 3.1e-04                Prob >= chibar2 = 0.493

My major questions are the following:

1- How can I capture the effects of the macroeconomic variables in a way that associates them with the countries?

2- Is a fixed-effects model necessary? I ran the Hausman test and the results showed that a random-effects model is more appropriate; nevertheless, I was told to run the regression with fixed effects.

3- The coefficients of the models I estimated using only financial variables with xtreg were very small, but when I set the data for time-series purposes (tsset) and ran a logistic regression, the coefficients seemed more realistic. My question is whether I am making some major mistake by using tsset (with an ID variable defined) for panel data with both time and ID dimensions.
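
For reference, the basic setup I have been using looks like the sketch below (with variables from the -dataex- excerpt above and one-year lags); question 3 is essentially whether I should be declaring the data this way rather than with tsset:

Code:
xtset ID year
xtlogit status L.ebitdatotalassets L.equitytotalliabilities L.GDPgrowth L.Inflation, re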

Thanks in advance.
Rodrigo