Channel: Statalist

Is it possible to use the Stata-Python integration module inside a Stata foreach loop?

Hello,

I am trying to execute the code below, and it gives me an error, mainly because Stata takes the 'end' command that closes the Python block as the termination of the entire foreach loop:

gen Alpha = .
gen AUC = .
local i = 0
range alphas 0.0 1.0 20

foreach a in alphas {

local ++i

python:

from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
from sklearn import metrics # import scikit-learn metrics module for accuracy calculation
from sfi import Data
import numpy as np
import pandas as pd

a = Data.get("a")

# predict using the best value for alpha
mnb = MultinomialNB(alpha = a, class_prior = None, fit_prior = True)

# calculate probability of each class on the test set
# '[:, 1]' at the end extracts the probability for each pharmacy to be under compliance
Y_mnb_score = mnb.fit(X_train, np.ravel(Y_train)).predict_proba(X_test)[:, 1]

# make test_compliance python variable
test_compliance = Y_test['compliance']

# transfer the python variables Y_mnb_score and test_compliance to STATA
Data.setObsTotal(len(Y_mnb_score))
Data.addVarFloat('mnbScore')
Data.store(var = 'mnbScore', obs = None, val = Y_mnb_score)

Data.setObsTotal(len(test_compliance))
Data.addVarFloat('testCompliance')
Data.store(var = 'testCompliance', obs = None, val = test_compliance)

end // this 'end' is causing a problem

roctab testCompliance mnbScore
replace AUC= r(area) in `i'
replace Alpha = `a' in `i'

} // loop not working

How can I use the Stata-Python integration module inside the foreach loop? The workaround I have been considering is sketched below.
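A sketch of that workaround (untested; it assumes X_train, Y_train, X_test, and Y_test already exist in the Python environment from an earlier step): define the Python routine once in its own python block, so its 'end' sits outside the loop, and then call it with a one-line python: statement inside the braces.

Code:
* define the Python routine once; its -end- never meets the foreach brace
python:
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics
import numpy as np
from sfi import Scalar

def score_alpha(a):
    # X_train, Y_train, X_test, Y_test assumed to exist already
    mnb = MultinomialNB(alpha=a, class_prior=None, fit_prior=True)
    prob = mnb.fit(X_train, np.ravel(Y_train)).predict_proba(X_test)[:, 1]
    auc = metrics.roc_auc_score(Y_test['compliance'], prob)
    Scalar.setValue("pyAUC", auc)  # hand the result back to Stata
end

gen Alpha = .
gen AUC = .
local i = 0
foreach a of numlist 0(0.05)1 {
    local ++i
    * one-line call: no -end- needed inside the loop
    python: score_alpha(`a')
    replace AUC = scalar(pyAUC) in `i'
    replace Alpha = `a' in `i'
}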

Thank you,

spxtregress with unbalanced panel data

Dear Statalist members,

How would it be possible to use spxtregress with unbalanced panel data?

I'm currently using a sample of firms over 3 years, with coordinates for each one, and without assuming a shapefile. However, from one year to the next, certain firms enter the sample and others exit. According to the Stata 15 material, I should use the last year's matrix, since location is assumed to be the same in both years.

My problem is that imposing a balanced panel of firms that exist in all 3 years leads to a 20% reduction in my sample, which is substantial.

Thank you in advance.

How to compare two different coefficients from two different multilevel equations?

Dear Statalist,

I am working with a three-level model (time nested in firms nested in regions), using Stata 15.1. I would like to compare two coefficients (say, on z2 and z3) from two different regressions. Even though both models have the same dependent variable, there is a high (near 0.8) correlation between the two independent variables (z2 and z3), which is why I do not include them jointly.
Someone told me to compare the distributions of the betas and look for overlap, but I am not sure how to do this.

On top of that, I would like to ask whether there is something like suest (suest does not support meglm) for a model like the one I show next.

Thanks in advance.

Code:
melogit y L.x1 L.x2 z1 z2 ||region: ||firm: , or vce(robust)
melogit y L.x1 L.x2 z1 z3 ||region: ||firm: , or vce(robust)
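One rough approach I have been pointed to (a sketch only; it ignores the covariance between the two estimates, since they come from separate models) is an approximate z-test on the difference:

Code:
melogit y L.x1 L.x2 z1 z2 || region: || firm: , or vce(robust)
estimates store m2
melogit y L.x1 L.x2 z1 z3 || region: || firm: , or vce(robust)
estimates store m3

* approximate z-test for equality of the two coefficients,
* treating the two models as independent
estimates restore m2
local b2 = _b[z2]
local s2 = _se[z2]
estimates restore m3
local b3 = _b[z3]
local s3 = _se[z3]
display "z = " (`b2' - `b3') / sqrt(`s2'^2 + `s3'^2)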

Can you run forvalues with bysort to generate new variables?

Dear Statalist,

I have the following data. Each individual has undergone multiple angiographic examinations, with each examination possibly resulting in multiple treatments. Each observation is a treatment. I'm trying to create a variable by looping through each individual and identifying whether a treated segment gets treated again in the next examination. I have tried to lift the information up with a loop, but it's ineffective and requires modifying the variables.

Code:
foreach a in 1 2 3 4 {
    by idnr: gen newvar`a' = segment_pci[_n+`a']
    by idnr: gen date_of_angio`a' = date_of_angio[_n+`a']
}

replace newvar1=. if newvar1!=segment_pci
replace date_of_angio1=. if newvar1!=segment_pci

My question is: is there a forvalues loop that can run through each individual, identify whether each treated segment appears in a later examination, and create new variables like newvar and newvardate shown below? (A merge-based sketch follows the listing.)

idnr angio_nr date_of_angio segment_pci newvar newvardate
54 1 18-Nov-11 8 1 28-Jan-12
54 1 18-Nov-11 7 1 28-Jan-12
54 1 18-Nov-11 6 1 28-Jan-12
54 2 28-Jan-12 8
70 1 26-Sep-10 2
70 1 26-Sep-10 12
70 1 26-Sep-10 6
81 1 26-aug-10 6
81 2 10-Apr-12
81 2 10-Apr-12
81 3 02-Aug-17
81 4 23-Nov-17 2 1 23-Dec-17
81 4 23-Nov-17 3 1 23-Dec-17
81 4 23-Nov-17 1 1 23-Dec-17
81 5 23-Dec-17 3
86 1 09-Sep-13
86 1 09-Sep-13 6
86 2 09-Oct-13 2
86 2 09-Oct-13 3
86 2 09-Oct-13 1
86 3 11-Dec-13 11 1 02-Jan-14
86 4 02-Jan-14 11
90 1 17-Jan-14
90 2 25-May-18 11
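The merge-based sketch I am considering (it assumes date_of_angio is a numeric %td date, that a segment appears at most once per examination, and that only the immediately following examination counts):

Code:
* build a lookup of (idnr, examination, segment), shift it back one
* examination, and merge it onto the current examination's rows
preserve
keep idnr angio_nr segment_pci date_of_angio
drop if missing(segment_pci)
rename (angio_nr date_of_angio) (next_nr next_date)
gen angio_nr = next_nr - 1
tempfile nxt
save `nxt'
restore

merge m:1 idnr angio_nr segment_pci using `nxt', keep(master match)
gen newvar = _merge == 3 if !missing(segment_pci)
gen newvardate = next_date if _merge == 3
format newvardate %td
drop _merge next_nr next_date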

Sincerely,
Moman
and many thanks for a splendid forum!!!

weak instrument problem

Hi,
I'm replicating a paper that uses 3 main food prices as its instrumental variables. Another paper, from a good journal, that also replicated it used 12 food prices as the IVs instead, because the original 3 food prices turned out to be weak instruments (F test).

Now I have tried various combinations of food prices and their interaction terms, and all give a first-stage F statistic below 10, hence weak IVs.
Is there a Stata command or something that picks out a combination of strong instruments?
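The closest thing I can think of is looping over candidate sets and recording the first-stage F statistic each time; a sketch with hypothetical variable names, using ivreg2 (ssc install ivreg2), which stores the Kleibergen-Paap weak-identification F in e(widstat):

Code:
* candidate instrument sets (placeholders for the actual food prices)
local set1 "price1 price2 price3"
local set2 "price1 price2 price3 price4 price5"

forvalues s = 1/2 {
    quietly ivreg2 y x1 x2 (endogvar = `set`s''), robust
    display "set `s': first-stage F = " %6.2f e(widstat)
}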

How to produce a predicted graph between dependent and independent variables, keeping all other variables constant?

How can I produce a graph of predicted values of the dependent variable against a continuous independent variable, keeping all the other control variables constant at their means? (A sketch of what I mean is below.)
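A minimal sketch of the standard margins/marginsplot route (all variable names hypothetical, and the grid for x chosen arbitrarily):

Code:
regress y c.x c.control1 c.control2
* predictions over a grid of x, other covariates fixed at their means
margins, at(x = (0(10)100)) atmeans
marginsplot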

Fixed &amp; Random Effects with noisy ESG (sustainability) data

Dear readers & contributors!

I am completely new here, a master's (MSc Sustainable Finance) student just a few weeks away from gratefully leaving the world of academia behind. Please excuse me if I do something wrong in this post; I have read the FAQ and will try to apply the CODE tags!

I am running my data analysis in Stata 14.0 right now, and I would appreciate it immensely if you could find the time to advise me on my issue!

A tiny bit of background: I am researching whether or not BGD (board gender diversity; variable name GR) has an impact on the environmental and social performance (EIVA and SIVA, respectively) of a company. I do this in the context of neo-institutional theory, meaning that I gathered sufficient data for a pan-continental analysis (comparing 69 countries grouped into 4 classes, depending on how well-institutionalized corporate sustainability is in each class) to see whether BGD has more or less impact on EIVA and SIVA. I have a panel data set with 18,573 firm-year observations; the years range from 2015 to 2018.

My variables are:
dependent: EIVA or SIVA
independent:
- GR: gender ratio; the higher this number, the more male-dominated the board of directors is; the MAIN variable of interest
- NM: ratio representing the nationality mix
- market cap, revenue, and debt as control variables (variable names MC, Revenu, and Debt)
- nordicEU; westernEU; thirdgroup and fourthgroup = the 4 groups of classified countries; THESE ARE ALL DUMMY VARIABLES!
- sectornum: the sector a company is in, which I encoded from string to numeric so that I can create dummy variables with i.sectornum
- year, also entered as dummies, which I use to control for macro-economic changes in each year
- 69 country dummy variables which I do not use directly in the regression but used to create the 4 classes

I am trying to decide between the fixed effects and random effects, whereby:
a) the Hausman test clearly points to the fixed effects
b) the random effects model results are EXACTLY what I wanted to show for both the GR variable and the 4 country-classes; it is in line with the literature and my own rationale
c) one of my finance professors had a (too short) talk with me recently in which he criticised the use of FE for ESG/sustainability data: according to him, this is very noisy data, and an FE estimator would simply take away the little meaningful variation there is and regress predominantly noise. When I mentioned the Hausman test, he brushed it off, saying it is often very biased. He had to rush away, and I had no chance to say how unclear his remarks were to me; yet I feel his point could help me write a convincing methodology section in favor of RE. Does anyone have an idea what he could have meant? He is abroad for a couple of months due to family circumstances, and it would be highly inappropriate to bother him.

Now, my regressions are:

Code:
 xtreg EIVA GR NM MC Revenu Debt nordicEU westernEU thirdgroup fourthgroup i.sectornum i.year, re vce(robust)
Code:
 xtreg EIVA GR NM MC Revenu Debt  i.year, fe vce(robust)
When it comes to the FE regression, I had to leave out the 4 country classifications and the sector dummies, since Stata dropped them due to collinearity. This is a HUGE problem for me, since I need the coefficients on the 4 classes as a significant part of what I am trying to contribute with this master's thesis!

The coefficient on GR under the FE estimator is positive (which would mean that the fewer women directors there are on a board, the better the environmental performance, going against literally ALL the literature), whereas the coefficient is negative under the RE estimator, and as I already said, the RE model makes sense overall. Yet the Prob>chi2 of the Hausman test is 0.000, pointing to an inconsistent random-effects estimator. I ran the Hausman test before I included robust standard errors; the workflow is sketched below.
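For completeness, the Hausman workflow was essentially this (run without vce(robust), since hausman is not valid after robust estimation):

Code:
quietly xtreg EIVA GR NM MC Revenu Debt i.year, fe
estimates store fe
quietly xtreg EIVA GR NM MC Revenu Debt nordicEU westernEU thirdgroup fourthgroup i.sectornum i.year, re
estimates store re
hausman fe re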

So now that you know the situation (in case you had the wonderful patience to actually read all of this), my questions specifically are:

1. Does anyone have a clue what the professor could have meant about noisy data not being appropriate for FE, and vice versa?
2. Is my coding in Stata even correct?
3. How can I make a smart choice between FE and RE? Or would you suggest another model that allows me to estimate the 4 classes of countries?
4. Do you know any academic articles or literature in general that might help me further or back up using RE?

THANK YOU so much in advance, I am so happy to have found this Stata-community & forum!

Kind regards,

Amira

Random group generation

Dear Statalists,

I would like to generate 67 random groups for 271 observations. I know something like the code below might work:

set obs 271
range no 1 271
gen r = runiform()
gen group=1
sort r
replace group=2 in 5/8
...

However, is there a better way that generates all 67 random groups simultaneously? (A sketch of what I mean is below.)
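What I have in mind (a sketch; the seed is arbitrary, for reproducibility, and the mod() trick gives roughly equal group sizes of 4-5):

Code:
clear
set seed 12345
set obs 271
gen r = runiform()
sort r
* assign groups 1..67 in rotation over the randomly ordered rows
gen group = mod(_n - 1, 67) + 1
* or, for random (unequal) group sizes instead:
* gen group = ceil(67 * runiform())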

Thanks in advance.

Best,
Cong

Issues with string variable that takes numeric values ('1', '2', ...)

I read in a dataset that includes many columns of categorical variables denoted by '1', '2', ...

1) I cannot condition on the values directly -- br if variable=='2' gives the error "'2' invalid name"
2) I cannot use destring to convert them to numeric (they all end up as missing if I force it)
3) I've attached screenshots of what the codebook and tab commands show

Any ideas for how to either a) convert these to their corresponding integers or b) condition on the string values? (A sketch of what I have pieced together so far is below.)

I am using Stata 16.
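The directions I have pieced together so far (a sketch; in Stata, string literals take double quotes, and hidden blanks in the values would explain destring producing all missings):

Code:
* double quotes, not single quotes, for string values
browse if variable == "2"

* strip surrounding blanks, then retry the conversion
replace variable = strtrim(variable)
destring variable, replace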

Fake private messages

If anyone receives a private message that appears to be from me, please disregard it (and perhaps notify the Statalist administrator). I have not sent and do not send such messages, but I have had a request from another user not to send them to him.
I do not know how this could happen; perhaps I did something inadvertently, but I do not think so.
Thanks.
Laurence

How to estimate multiple treatment effects?

Hi friends,

I need some advice. I am trying to estimate the effect of a government program (funding) on regions at the metropolitan statistical area (MSA) level. The funding is very selective (so far only 1/3 of MSAs have received it) and generally lasts 5 years. As the funding lasts about 5 years, each year brings a cohort of about 10 new entrants. Each recipient gets a different amount of funding over a different period. This funding ($) is the key independent variable; the dependent variable is continuous ($). I constructed a panel with several control variables covering 2006 to 2018.

Recently, I remembered an article I read (Png 2017) saying...
"To investigate the impact of the UTSA on R&D expenditure, I apply an empirical strategy of difference in differences (Bertrand, Duflo, & Mullainathan, 2004), with multiple treatments at different times with different intensity in the various states. Specifically, I estimate the following model, for company i, in state s, in year t:

ln(1 + R_ist) = b1*UTSA_st + b2*X_it + b_is + b_t + e_ist   (1)

R_ist represents R&D expenditure by company i in state s in year t, UTSA_st represents the increase in the legal protection of trade secrets arising from the UTSA being in effect, X_it are time-varying company characteristics, b_is and b_t are company-by-state and year fixed effects, and e_ist is an idiosyncratic error term."
I think my panel is similar to the case above, but I have no idea how to implement this.
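To make the question concrete, the kind of two-way fixed-effects command I imagine (a sketch; all variable names are hypothetical) is:

Code:
* MSA and year fixed effects, continuous treatment intensity (funding),
* log(1 + y) as in the quoted model
gen ln_y = ln(1 + depvar)
xtset msa year
xtreg ln_y funding control1 control2 i.year, fe vce(cluster msa)

I will appreciate any comment. Thank you.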

Stata command

Hello. My name is Kilourou Yenipoho. I am using Stata 13 to analyze a large panel, and I have a concern. I read an article by Roodman (2009) in which the author uses two techniques to reduce the number of instruments with the GMM estimators: one is limiting the lag depth, the other is "collapsing" the instrument set; a third option is the combination of the two. So I would like your help, please. I would like the commands to execute these different techniques in Stata, namely:

- limiting the lag depth

- "collapsing" the instrument set

- the combination of both techniques (see the sketch after this list)
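From my reading of Roodman (2009), these map onto suboptions of xtabond2 (ssc install xtabond2); a sketch with placeholder variables:

Code:
* limit the lag depth (here: use only lags 2 to 4 as instruments)
xtabond2 y L.y x, gmm(L.y, laglimits(2 4)) iv(x) robust

* collapse the instrument set
xtabond2 y L.y x, gmm(L.y, collapse) iv(x) robust

* combine both
xtabond2 y L.y x, gmm(L.y, laglimits(2 4) collapse) iv(x) robust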

Thank You….

PPML Fixed Effects

Hi,

In a gravity model estimated with PPML, what is the difference between these two options if I want to include fixed effects in my model? Are both correct?

Option 1:

egen exp = group(exporter)
quietly tabulate exp, gen(EXPORTER_FE)

egen imp = group(importer)
quietly tabulate imp, gen(IMPORTER_FE)

egen time = group(year)
quietly tabulate time, gen(YEAR_FE)

Option 2:

egen exp_time = group(exporter year)
quietly tabulate exp_time, gen(EXPORTER_TIME_FE)

egen imp_time = group(importer year)
quietly tabulate imp_time, gen(IMPORTER_TIME_FE)
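
For what it's worth, I have also seen such fixed effects absorbed directly rather than built as dummy sets, e.g. with ppmlhdfe from SSC; a sketch with hypothetical variable names:

Code:
* option 2's exporter-time and importer-time effects, absorbed
egen pair = group(exporter importer)
ppmlhdfe trade lndist contig, absorb(exp_time imp_time) vce(cluster pair)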


Thanks,

Daniel

Merging Global Findex and GEM data

I have 3 datasets that offer different information on countries. The Global Findex dataset covers how adults save, borrow, make payments, and manage risk. Women, Business and the Law (WBL) is a World Bank Group project collecting unique data on the laws and regulations that restrict women's economic opportunities. The Global Entrepreneurship Monitor has 2 datasets: the APS looks at the characteristics, motivations, and ambitions of individuals starting businesses, as well as social attitudes towards entrepreneurship, and the NES looks at the national context in which individuals start businesses. I want to merge all of them for analysis. What should the base file be, and what merge command should I use? (The kind of merge I imagine is sketched below.)
Also, how can I attach datasets here?
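The sketch (it assumes each file has been collapsed to one row per country with a common countrycode key; if the files are country-year panels, merge 1:1 countrycode year instead; all file names are hypothetical):

Code:
* start from the file whose observations define the analysis sample
use findex, clear
merge 1:1 countrycode using wbl, nogenerate
merge 1:1 countrycode using gem_aps, nogenerate
merge 1:1 countrycode using gem_nes, nogenerate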

Upgrading to Stata 16

Dear all,
Sorry in advance if this is not the right place to ask the question. I have a permanent license for Stata/IC 15.1 for Mac. I just want to know how to upgrade to the new Stata 16. Do I need to buy a new license?

Best regards



How to run this formula in Stata: [standard deviation * (trading days)^(1/2)], for a number of series

Hello,
In the attached data below, in the first column there is 1, 1, 1…. 1 which means first security, then 2, 2, 2…..2 mean second security and so on till 501 which means 501th security. Each number is 19 times in the month of July, which shows that particular security is traded for 19 days except a few securities . Second column is the residuals series of 501 securities in the month of July, 2005. I got these residuals after employing OLS on daily data for each security with the help of Stata.
Similarly, I have got these residuals series of 501 securities for each month from the period of July 2005 to June 2019. Below I am just showing a sample of my data set.
Now I want to calculate = [Standard Deviation * (trading days) 1/2 ] for each security every month. In MS Excel I have to calculate for each security which is very time consuming.

I ran this command:
Code:
dataex Securities Residuals
My data looks like this:
Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input int Securities double Residuals
1    .06100905526664617
1   .018200765921769196
1   .002302944490864386
1  -.033965307491921164
1  .0009181674237631124
1    .06403532828369113
1    .05333690686003768
1   .015190512297211294
1   .005032692203814661
1   -.03426425281175801
1  -.002881117047946408
1   -.03371983377341705
1  -.038556254176230764
1  -.011430591851673392
1  -.019482834034763565
1   -.03404594016616286
1   -.05031003388061296
1   .014875586087672074
1   .023754206399016434
2                     .
2                     .
2                     .
2                     .
2                     .
2                     .
2                     .
2                     .
2                     .
2                     .
2                     .
2                     .
2                     .
2                     .
2                     .
2                     .
2                     .
2                     .
2                     .
3  -.004844024826951678
3  -.018068070155601464
3    .06367076469697697
3    .03809349127519174
3  -.004253538158807668
3 -.0028536664163587733
3  .0046647922571916856
3  -.017304186793684666
3  -.015854318387752496
3  -.004021355277043498
3   .013462510497008263
3  -.007330210581243784
3  -.013201097967129819
3 -.0034195904319810643
3   -.01583547289747817
3   -.01513205173650813
3    .01646283312031617
3   .001470463671279395
3  -.015707271887423033
4                     .
4                     .
4                     .
4                     .
4                     .
4                     .
4                     .
4                     .
4                     .
4                     .
4                     .
4                     .
4                     .
4                     .
4                     .
4                     .
4                     .
4                     .
4                     .
5                     .
5                     .
5                     .
5                     .
5                     .
5                     .
5                     .
5                     .
5                     .
5                     .
5                     .
5                     .
5                     .
5                     .
5                     .
5                     .
5                     .
5                     .
5                     .
6    .00398852613465446
6  -.007123296989719599
6  -.006353320411417913
6  -.006491804673747924
6   .004909149193909702
end
The dataex command does not show the complete series, as there are 9,519 observations. Please suggest some possible ways of doing this in Stata so that it saves me time; a sketch of what I am after is below.
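Something along these lines is what I am after (a sketch; it assumes a month identifier variable, hypothetically called month, exists alongside Securities; with only the July extract above, grouping by Securities alone would do):

Code:
* per security and month: sd of residuals times sqrt(trading days)
bysort Securities month: egen sd_res = sd(Residuals)
by Securities month: egen ndays = count(Residuals)
gen scaled_vol = sd_res * sqrt(ndays)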
Thank you
PRIYA

multiple imputation: mvn vs chained

Hi Statalist

Questions regarding issues encountered during -mi- have been asked many times in this forum, but my question is somewhat different. Specifically, I have 1 continuous variable and 7 binary variables with missing data. The continuous variable has about 10% of its data missing, whereas the proportions of missing data for the binary variables are trivial (i.e. no more than 1-2% each).

My question is that I don't understand why mi impute mvn works but mi impute chained does not, since, from my reading of the Stata documentation, chained equations are a much more flexible (accommodating?) method.

Furthermore, I narrowed the problem down to the binary variables in the MICE approach: even when I include only a single binary variable, like
Code:
mi impute chained (logit) binary_var = ... , augment
it still fails.
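In case it is relevant, would the right diagnostic be to fit the conditional model directly outside -mi-, as in this sketch (hypothetical variable names), to look for perfect prediction or separation?

Code:
* fit one chained-equation model by hand; separation or perfect
* prediction here would explain the -mi impute chained- failure
logit binary_var continuous_var other_binary1 other_binary2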
  1. Is mvn necessarily a more restrictive method than MICE with only continuous and binary variables?
  2. Is it OK to rely on mvn when -mi impute chained- fails to converge, or does it signify that the data are such that multiple imputation is perhaps ill-advised?
Thank you.


Stock price pattern recognition using kernel regression in Stata

Hello. I am writing a paper on technical analysis pattern recognition using Stata. My initial work is attached.

I also attached the data set. I am trying to replicate a paper titled "Technical Analysis," which is also attached to this post.

Now I want to write code to detect the patterns shown in the attached figure. A possible first step is sketched below.
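The sketch (it kernel-smooths the price series first, since the patterns in this literature are defined from local extrema of a kernel regression; variable names are hypothetical):

Code:
* Nadaraya-Watson kernel regression (local mean) of price on time
lpoly price day, kernel(gaussian) degree(0) generate(smooth) at(day) nograph
* flag local maxima and minima of the smoothed series as candidate
* pattern points
gen byte peak   = smooth > smooth[_n-1] & smooth > smooth[_n+1] if _n > 1 & _n < _N
gen byte trough = smooth < smooth[_n-1] & smooth < smooth[_n+1] if _n > 1 & _n < _N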

including a constant in a first-difference model

I have the following question:

I want to estimate the effect of retirement on the CES-D score (which indicates someone's mental health), using a panel dataset and the first-difference model:

reg d.cesd d.retired d.age d.female d.education d.mstat2 d.mstat3 d.mstat4 d.white

My question is: should I include the constant or not? What do I base my decision on? (Both variants are sketched below for concreteness.)
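The two variants (a sketch; note that in a first-difference model the constant corresponds to a common linear time trend in the levels of cesd):

Code:
xtset id wave
* with the constant (allows a common linear trend in cesd levels)
regress d.cesd d.retired d.age d.female d.education d.mstat2 d.mstat3 d.mstat4 d.white
* without it
regress d.cesd d.retired d.age d.female d.education d.mstat2 d.mstat3 d.mstat4 d.white, noconstant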

My data:

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input long id byte(wave education mstat) int age byte cesd float(female white retired)
    3010  1 12 1  56  . 0 1 0
    3010  2 12 1  58  0 0 1 0
    3010  3 12 1  60  3 0 1 0
    3010  4 12 1  62  3 0 1 0
    3010  5 12 1  64  1 0 1 0
    3010  6 12 1  66  1 0 1 0
    3010  7 12 1  68  0 0 1 0
    3010  8 12 1  70  0 0 1 1
    3010  9 12 1  72  0 0 1 1
    3010 10 12 1  74  0 0 1 1
    3010 11 12 1  76  0 0 1 1
10001010  2 12 4  55  4 0 1 1
10001010  3 12 4  57  1 0 1 0
10001010  4 12 4  58  5 0 1 0
10001010  5 12 4  60  1 0 1 1
10001010  6 12 4  62  1 0 1 1
10001010  7 12 4  64  1 0 1 1
10001010  8 12 4  66  1 0 1 1
10001010  9 12 4  69  1 0 1 1
10001010 10 12 4  71  1 0 1 1
10001010 11 12 4  72  1 0 1 1
10001010 12 12 4  74  0 0 1 1
10003020  1 16 1  58  . 0 1 0
10003020  2 16 1  60 .m 0 1 0
10003020  3 16 1  62 .m 0 1 0
10003020  4 16 1  64 .m 0 1 1
10003030  1 16 1  36  . 1 1 0
10003030  2 16 1  38  1 1 1 0
10003030  3 16 1  40  3 1 1 0
10003030  4 16 1  42  3 1 1 0
10003030  6 16 3  46  1 1 1 0
10003030  8 16 3  50  4 1 1 1
10003030 10 16 3  54  0 1 1 1
10003030 11 16 3  56  0 1 1 1
10003030 12 16 2  58  1 1 1 1
10083010  4 10 1  59  2 0 0 0
10083010  5 10 1  61  1 0 0 0
10083010  6 10 1  63  0 0 0 1
10083010  7 10 1  65  0 0 0 1
10083010  8 10 1  67  0 0 0 1
10083010  9 10 1  69  1 0 0 1
10094010  1 12 3  58  . 1 0 0
10114010  1 12 4  55  . 1 0 0
10114010  2 12 4  56  2 1 0 1
10114010  3 12 4  58  4 1 0 1
10114010  4 12 4  60  0 1 0 1
10114010  5 12 4  62  1 1 0 1
10124011  5 12 1 100 .m 0 0 0
10155010  1  7 2  53  . 1 0 0
10155010  2  7 2  55  1 1 0 0
10155010  3  7 2  57  0 1 0 0
10155010  4  7 2  59  1 1 0 0
10225010  1  8 4  57  . 1 0 0
10225010  2  8 4  59  7 1 0 0
10225010  3  8 4  61  8 1 0 1
10225010  4  8 4  63  5 1 0 1
10225010  5  8 4  65  2 1 0 1
10225010  6  8 4  67  1 1 0 1
10225010  7  8 4  69  1 1 0 1
10225010  8  8 4  71  0 1 0 1
10225010  9  8 4  73  1 1 0 1
10225010 10  8 4  76  2 1 0 1
10225010 11  8 4  77  2 1 0 1
10225010 12  8 4  79  4 1 0 1
10240010  1  9 2  53  . 0 1 0
10240010  2  9 2  55  1 0 1 0
10240010  6  9 2  63  8 0 1 1
10325020  3 14 1  57  0 1 1 0
10325020  4 14 1  59  0 1 1 0
10325020  5 14 1  60  1 1 1 0
10325020  6 14 1  63  0 1 1 1
10325020  7 14 1  65 .m 1 1 1
10325020 11 14 3  73  0 1 1 1
10325020 12 14 3  74  0 1 1 1
10346010  1 11 4  52  . 0 1 0
10372010  1 10 4  56  . 1 0 0
10372010  2 10 4  58  4 1 0 0
10372010  3 10 4  60  6 1 0 0
10372010  4 10 4  62  3 1 0 0
10372010  5 10 4  64  6 1 0 1
10372010  6 10 4  66  5 1 0 1
10372010  7 10 4  68  5 1 0 1
10372010  8 10 4  70  6 1 0 1
10372010  9 10 4  72  2 1 0 1
10372010 10 10 4  75  1 1 0 1
10372010 11 10 4  76  4 1 0 1
10372010 12 10 4  78  3 1 0 1
10378010  1 16 4  53  . 1 0 0
10378010  2 16 4  54  0 1 0 0
10378010  4 16 1  58  5 1 0 0
10378010  5 16 4  60  1 1 0 0
10378010  6 16 4  62  1 1 0 1
10378010  7 16 1  64  1 1 0 1
10394010  5 16 1  59  3 0 1 0
10394010  8 16 1  65  0 0 1 0
10404010  1 12 3  52  . 1 1 0
10404010  2 12 2  54  1 1 1 0
10404010  3 12 2  56  3 1 1 0
10404010  4 12 3  58  0 1 1 0
10404010  5 12 3  60  0 1 1 0
end
label values education EDYRS
label values mstat marital
label def marital 1 "Married or in partnership", modify
label def marital 2 "Separated or divorced", modify
label def marital 3 "Widowed", modify
label def marital 4 "Single", modify

Meta Analysis - Data input problem

I am trying to learn meta-analysis, and I am using an example from Bland, An Introduction to Medical Statistics:
Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input float(or ll ul)
 .29 .03  3.12
5.09  .5 52.29
 .35 .16   .74
 .18 .08   .55
1.24 .52  2.96
 .17 .07   .43
 .71 .14  3.66
 .58 .25  1.36
  .4 .24   .66
 .01   0   .21
 .82 .27  2.54
1.44 .37  5.67
 .61 .29  1.28
 .59 .34  1.04
 .46  .1  2.05
  .5 .17  1.54
 .98 .56  1.72
end
When I use admetan, from SSC, I have no problem:
Code:
. admetan or ll ul

Studies included: 17
Participants included: Unknown

Meta-analysis pooling of aggregate data
using the fixed-effect inverse-variance model

--------------------------------------------------------------------
Study                |   Effect    [95% Conf. Interval]   % Weight
---------------------+----------------------------------------------
1                    |     0.290      0.030     3.120       0.21
2                    |     5.090      0.500    52.290       0.00
3                    |     0.350      0.160     0.740       6.00
4                    |     0.180      0.080     0.550       9.14
5                    |     1.240      0.520     2.960       0.34
6                    |     0.170      0.070     0.430      15.57
7                    |     0.710      0.140     3.660       0.16
8                    |     0.580      0.250     1.360       1.64
9                    |     0.400      0.240     0.660      11.44
10                   |     0.010      0.000     0.210      45.76
11                   |     0.820      0.270     2.540       0.39
12                   |     1.440      0.370     5.670       0.07
13                   |     0.610      0.290     1.280       2.06
14                   |     0.590      0.340     1.040       4.12
15                   |     0.460      0.100     2.050       0.53
16                   |     0.500      0.170     1.540       1.08
17                   |     0.980      0.560     1.720       1.50
---------------------+----------------------------------------------
Overall effect       |     0.193      0.122     0.264     100.00
--------------------------------------------------------------------

Test of overall effect = 0:  z =   5.336  p = 0.000


Heterogeneity Measures
---------------------------------------------------------
                     |     Value      df     p-value
---------------------+-----------------------------------
Cochran's Q          |     39.61     16      0.001
I² (%)               |     59.6%
Modified H²          |     1.475
tau²                 |    0.0420
---------------------------------------------------------

I² = between-study variance (tau²) as a percentage of total variance
Modified H² = ratio of tau² to typical within-study variance
However, when I try to use meta set from Stata 16, I get the following error message:
Code:
. meta set or ll ul
confidence intervals not symmetric
    CIs defined by variables ll and ul must be symmetric and based on a normal distribution. If you are
    working with effect sizes such as odds ratios, risk ratios, or hazard ratios, the CIs should be
    specified on the log scale.
r(459);
I appreciate from the manual that the CIs should be symmetric, but I do not understand how I can resolve this error. Is the fix simply to work on the log scale, as sketched below?
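The sketch I mean (note that study 10's lower limit of 0 becomes missing under the log transform, so that study may need more precise inputs):

Code:
* odds ratios: move to the log scale so the CIs are symmetric
gen double logor = ln(or)
gen double logll = ln(ll)
gen double logul = ln(ul)
meta set logor logll logul
meta summarize, eform(Odds ratio)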

Thank you,
Janet

