Channel: Statalist

The problem with creating dummy variables

Hi Statalists,
I am trying to find the relationship between the change in labour productivity (dependent variable) and immigrants' occupational level (independent variable). My unit of analysis is the sector, over 2010-2016. I plan to include a dummy variable (equal to 1 for high-skilled occupations such as managers and senior officials, and 0 otherwise) and the interaction term sector*highskill in the regression. However, the dummy is omitted because of collinearity, as shown at the end of the post***. So my question is:

If my focus is on the interaction term (i.e., how immigrants with different occupational levels, high- or low-skilled, affect labour productivity in each sector), how can I do that?
I've attached a picture of my final dataset which I use for my regression

Thanks in advance
Lisa
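A minimal sketch of how such an interaction can be specified with factor-variable notation, so that Stata chooses the omitted base levels itself rather than dropping your hand-made dummy (the variable names here are assumptions based on the description):

```stata
* highskill = 1 for high-skilled occupations, 0 otherwise (assumed names)
regress d_productivity i.sector##i.highskill, vce(cluster sector)
* the sector#highskill coefficients give the sector-specific effect of high skill
```

With `##`, Stata includes the main effects and the interaction and omits whatever is collinear, so the sector-by-skill contrasts of interest remain identified.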

Count how many units have the dummy variable=1

Hi
I have data for 353 local authorities over 20 quarters. One of my variables is a dummy that takes the value 1 if the local authority is a unitary authority and 0 otherwise; this does not change over time.
Is there an easy way to count how many separate local authorities are unitary? I expect around 55, but as the data is in long format this is tedious to count by hand. All of the local authorities below are unitary, but for many in my sample the dummy takes the value 0. 'acode' is just a numeric identifier for each local authority.

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input str43 name float acode byte unitarydummy
"Hartlepool Borough Council"           6000001 1
"Hartlepool Borough Council"           6000001 1
"Hartlepool Borough Council"           6000001 1
"Hartlepool Borough Council"           6000001 1
"Hartlepool Borough Council"           6000001 1
"Hartlepool Borough Council"           6000001 1
"Hartlepool Borough Council"           6000001 1
"Hartlepool Borough Council"           6000001 1
"Hartlepool Borough Council"           6000001 1
"Hartlepool Borough Council"           6000001 1
"Hartlepool Borough Council"           6000001 1
"Hartlepool Borough Council"           6000001 1
"Hartlepool Borough Council"           6000001 1
"Hartlepool Borough Council"           6000001 1
"Hartlepool Borough Council"           6000001 1
"Hartlepool Borough Council"           6000001 1
"Hartlepool Borough Council"           6000001 1
"Hartlepool Borough Council"           6000001 1
"Hartlepool Borough Council"           6000001 1
"Hartlepool Borough Council"           6000001 1
"Middlesbrough Borough Council"        6000002 1
"Middlesbrough Borough Council"        6000002 1
"Middlesbrough Borough Council"        6000002 1
"Middlesbrough Borough Council"        6000002 1
"Middlesbrough Borough Council"        6000002 1
"Middlesbrough Borough Council"        6000002 1
"Middlesbrough Borough Council"        6000002 1
"Middlesbrough Borough Council"        6000002 1
"Middlesbrough Borough Council"        6000002 1
"Middlesbrough Borough Council"        6000002 1
"Middlesbrough Borough Council"        6000002 1
"Middlesbrough Borough Council"        6000002 1
"Middlesbrough Borough Council"        6000002 1
"Middlesbrough Borough Council"        6000002 1
"Middlesbrough Borough Council"        6000002 1
"Middlesbrough Borough Council"        6000002 1
"Middlesbrough Borough Council"        6000002 1
"Middlesbrough Borough Council"        6000002 1
"Middlesbrough Borough Council"        6000002 1
"Middlesbrough Borough Council"        6000002 1
"Redcar and Cleveland Borough Council" 6000003 1
"Redcar and Cleveland Borough Council" 6000003 1
"Redcar and Cleveland Borough Council" 6000003 1
"Redcar and Cleveland Borough Council" 6000003 1
"Redcar and Cleveland Borough Council" 6000003 1
"Redcar and Cleveland Borough Council" 6000003 1
"Redcar and Cleveland Borough Council" 6000003 1
"Redcar and Cleveland Borough Council" 6000003 1
"Redcar and Cleveland Borough Council" 6000003 1
"Redcar and Cleveland Borough Council" 6000003 1
"Redcar and Cleveland Borough Council" 6000003 1
"Redcar and Cleveland Borough Council" 6000003 1
"Redcar and Cleveland Borough Council" 6000003 1
"Redcar and Cleveland Borough Council" 6000003 1
"Redcar and Cleveland Borough Council" 6000003 1
"Redcar and Cleveland Borough Council" 6000003 1
"Redcar and Cleveland Borough Council" 6000003 1
"Redcar and Cleveland Borough Council" 6000003 1
"Redcar and Cleveland Borough Council" 6000003 1
"Redcar and Cleveland Borough Council" 6000003 1
"Stockton-on-Tees Borough Council"     6000004 1
"Stockton-on-Tees Borough Council"     6000004 1
"Stockton-on-Tees Borough Council"     6000004 1
"Stockton-on-Tees Borough Council"     6000004 1
"Stockton-on-Tees Borough Council"     6000004 1
"Stockton-on-Tees Borough Council"     6000004 1
"Stockton-on-Tees Borough Council"     6000004 1
"Stockton-on-Tees Borough Council"     6000004 1
"Stockton-on-Tees Borough Council"     6000004 1
"Stockton-on-Tees Borough Council"     6000004 1
"Stockton-on-Tees Borough Council"     6000004 1
"Stockton-on-Tees Borough Council"     6000004 1
"Stockton-on-Tees Borough Council"     6000004 1
"Stockton-on-Tees Borough Council"     6000004 1
"Stockton-on-Tees Borough Council"     6000004 1
"Stockton-on-Tees Borough Council"     6000004 1
"Stockton-on-Tees Borough Council"     6000004 1
"Stockton-on-Tees Borough Council"     6000004 1
"Stockton-on-Tees Borough Council"     6000004 1
"Stockton-on-Tees Borough Council"     6000004 1
"Darlington Borough Council"           6000005 1
"Darlington Borough Council"           6000005 1
"Darlington Borough Council"           6000005 1
"Darlington Borough Council"           6000005 1
"Darlington Borough Council"           6000005 1
"Darlington Borough Council"           6000005 1
"Darlington Borough Council"           6000005 1
"Darlington Borough Council"           6000005 1
"Darlington Borough Council"           6000005 1
"Darlington Borough Council"           6000005 1
"Darlington Borough Council"           6000005 1
"Darlington Borough Council"           6000005 1
"Darlington Borough Council"           6000005 1
"Darlington Borough Council"           6000005 1
"Darlington Borough Council"           6000005 1
"Darlington Borough Council"           6000005 1
"Darlington Borough Council"           6000005 1
"Darlington Borough Council"           6000005 1
"Darlington Borough Council"           6000005 1
"Darlington Borough Council"           6000005 1
end
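One way to count each authority exactly once is to tag a single observation per acode and then count the tagged unitary ones; a sketch using the posted variable names:

```stata
egen tag = tag(acode)                 // marks one observation per local authority
count if tag & unitarydummy == 1      // number of distinct unitary authorities
```

Since unitarydummy is constant within each authority, it does not matter which observation gets tagged.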

"sts graph, risktable() saving()" tries (and fails) to save graph-file twice

Hi all,

I've come across some unexpected behaviour which I'd appreciate comments on. I'm using Stata 15.1 on Windows 7.

Code:
webuse drug2, clear

// sts graph without risktable: successful
sts graph, by(drug) saving(testgraph1.gph)

// sts graph with risktable: exits with error r(602)
sts graph, by(drug) risktable saving(testgraph2.gph)

It appears that the code for the "risktable" subroutine causes "sts graph" to attempt to save the graph twice, the second such attempt failing as would be expected.

This can of course be worked around by saving the graph in a separate command line; but to me this is nevertheless a bug.

Thanks,

David.
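For reference, the workaround mentioned above — saving in a separate command — would look like this:

```stata
sts graph, by(drug) risktable
graph save testgraph2.gph    // save the current graph after it is drawn
```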

How to do a differential effect in Stata

I have a binary variable : 0 = English company, 1 = French company.
I have a second variable "profits".
And a third variable "labor cost".

How can I estimate, in a regression, the differential effect of profit variations on the two types of firms and on the cost of labour?
In other words, how can I estimate a differential effect through a regression in Stata?
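A common way to estimate a differential effect is to interact the firm-type dummy with profits; a hedged sketch (variable names assumed from the post):

```stata
* french = 1 for French companies, 0 for English (assumed coding)
regress laborcost c.profits##i.french
* the c.profits#1.french coefficient is the differential effect of profits
* on labour cost for French relative to English companies
```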

pairwise correlations

Are pairwise correlations the same as Pearson's correlation coefficient, and what command would you enter into Stata to generate a table?
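Yes — pwcorr computes Pearson correlation coefficients, using all available observations for each pair of variables (whereas correlate uses only observations complete on every listed variable). A minimal sketch with placeholder variable names:

```stata
pwcorr var1 var2 var3, sig obs   // Pearson correlations, p-values, and N per pair
```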

XTunitroot error r(111) variable ccode not found

I am using the command xtunitroot to test for stationarity with a cross-country-year dataset, using Stata 15.1.

My data looks like this (where "..." indicates more of the same):

country year wheat_yld rice_yld maize_yld ....
Angola 1980 21,241
Angola 1981 23,701
Angola 1982 24,868
Angola .... ....
Angola 2010 23,939
Ethiopia 1980 25,429
Ethiopia 1981 21,682
Ethiopia 1982 24,752
Ethiopia .... ....
Ethiopia 2010 31,409
....

The variables ending with _yld are yields. These yields are defined as the number of hectograms/hectare of the given crop produced in the country in the given year.

Where X is one of my agricultural yield variables, e.g. wheat_yld, I seek to run the following commands:

Code:
xtunitroot ips X, demean
xtunitroot ips X, demean trend

xtunitroot fisher X, dfuller lags(1) demean
xtunitroot fisher X, dfuller lags(1) demean trend

xtunitroot fisher X, pperron lags(1) demean
xtunitroot fisher X, pperron lags(1) demean trend
In response to any of these commands, Stata responds:

Code:
variable ccode not found
r(111);
But my data has no variable called ccode; I didn't generate any variable called ccode in my syntax, and the command lines I am trying to run don't include a ccode variable, so I am not sure why Stata is looking for one.

I include a more complete example set of my data below using dataex if helpful. Thanks kindly for any help!

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input long country_codes int year long(wheat_yld maize_yld)
 3 1995  8924 16115
 3 1996 13090 17154
 3 1997  8016 21417
 3 1998  8847 17222
 3 1999 10711 22375
 3 2000  9194 36186
 3 2001 11104 27175
 3 2002 10739 33480
 3 2003 14480 28853
 3 2004 13582 31729
 3 2005 15057 33864
 3 2006 15068 65668
 3 2007 12742 85625
 3 2008 11038 44714
 3 2009 15975 30914
 3 2010 14838 25827
 7 1980 15596 25703
 7 1981 14014 38008
 7 1982 20501 30284
 7 1983 18405 30303
 7 1984 23068 31407
 7 1985 16177 35629
 7 1986 17819 37450
 7 1987 18847 31897
 7 1988 18344 37744
 7 1989 18947 29103
 7 1990 18947 34608
 7 1991 18972 40444
 7 1992 21809 45237
 7 1993 23272 43552
 7 1994 20237 42371
 7 1995 21675 45223
 7 1996 19344 40397
 7 1997 22426 45557
 7 1998 26088 60780
 7 1999 23028 53702
 7 2000 24873 54329
 7 2001 24934 54553
 7 2002 22398 60791
 7 2003 20361 64767
 7 2004 25441 63931
 7 2005 26356 73587
 7 2006 25302 59030
 7 2007 26234 76655
 7 2008 28271 64525
 7 2009 19628 55760
 9 1990 16344 41821
 9 1991 14698 39938
 9 1992 17812 51721
 9 1993 19658 44261
 9 1994 11356 46620
 9 1995 17899 48263
 9 1996 21673 57689
 9 1997 18412 59403
 9 1998 19153 47544
 9 1999 20066 51212
 9 2000 18209 49360
 9 2001 18209 46471
 9 2002 21076 54929
 9 2003  9070 62570
 9 2004 19998 56326
 9 2005 16348 57983
 9 2006 20213 53856
 9 2007  9173 49103
 9 2008 10788 56912
 9 2009 15831 58171
 9 2010 15729 55593
14 1980 18990  7146
14 1981 18479  7300
14 1982 18114  7387
14 1983 21090  7829
14 1984 23021  8179
14 1985 21647  8669
14 1986 19282  9246
14 1987 18657  9458
14 1988 17542  9118
14 1989 18246  9417
14 1990 15034 10015
14 1991 16767  9781
14 1992 18534  9319
14 1993 18457  9351
14 1994 18390  9042
14 1995 19483 10700
14 1996 19531 10166
14 1997 20544 10914
14 1998 22408  7339
14 1999 21627 12346
14 2000 22104 20597
14 2001 21643 32216
14 2002 21644 40349
14 2003 21327 40334
14 2004 19527 48243
14 2005 17478 53311
14 2006 15344 52998
14 2007 18471 59812
14 2008 21753 60172
14 2009 21516 56831
14 2010 23959 58378
21 1980  5992 13063
21 1981  6943 16087
end
label values country_codes country_codes
label def country_codes 3 "Algeria", modify
label def country_codes 7 "Argentina", modify
label def country_codes 9 "Australia", modify
label def country_codes 14 "Bangladesh", modify
label def country_codes 21 "Bolivia (Plurinational State of)", modify
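For what it's worth, xtunitroot requires the data to be xtset first; a sketch with the posted variables (whether this resolves the ccode message is an assumption):

```stata
xtset country_codes year
xtunitroot ips wheat_yld, demean
```

Note also that the ips test expects a balanced panel, and the dataex excerpt shows countries entering in different years.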

inlist with many variables

Hello.

I have 100 numeric variables (var1-var100), all categorical with values 1, 2, 3. I would like to identify the observations where any of these variables equals 3.
Is there a simple function that can do that? I tried browse if inlist(3, var1-var100), but it doesn't work — inlist() takes an explicit list of arguments rather than a variable range.

Thank you very much.
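egen's anymatch() function can scan a whole variable list for a value; a minimal sketch:

```stata
egen any3 = anymatch(var1-var100), values(3)   // 1 if any of the 100 variables equals 3
browse if any3 == 1
```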

SEM with fixed effects

Hello.

This is my first time posting on this forum.

I am trying to examine whether student participation in class affects school ranking through a mechanism variable.
Since I am new to Stata, I used the SEM Builder.

My model is:

sem (participate -> mechanism) (participate -> ranking), vce(cluster class) standardized nocapslatent

However, since the schools are located in multiple countries, I would like to include country fixed effects.

Thank you for your help!
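sem accepts factor variables for observed exogenous variables, so one sketch of country fixed effects is to add i.country as a predictor in each equation (whether this matches the intended model is an assumption):

```stata
sem (mechanism <- participate i.country) (ranking <- participate i.country), ///
    vce(cluster class) standardized nocapslatent
```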





nl problem with modified Richards equation

Dear Team,
I'm trying to fit a modified Richards equation to simulated cumulative volume (V_elong) data over time (dia).
The parameters to be estimated are the maximum volume (Vol_max), a flexibility parameter (V), the maximum rate (Max_rate), and the lag phase (lamda).
The used code and error are:

Code:
nl V_elong= {Vol_max=80}*(  + ( {V}*exp(1+ {V})*({Max_rate}*(1+{V})*(1+(1/{V}))*({lamda}- dia ) / {Vol_max}) )^(-1/{V})) )
(obs = 31)

starting values invalid or some RHS variables have missing values
r(480);
I have tried many initial values for V with no result. I have also fit logistic and 3-parameter Gompertz models without problems.
Could you be so kind as to advise me?
Thank you in advance,
Jorge
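In the Zwietering et al. parameterization of the modified Richards model, the rate term sits inside an exp(); one sketch with explicit starting values (that this is the form intended in the post is an assumption, and the starting values for Max_rate and lamda need to be on the scale of the data):

```stata
nl (V_elong = {Vol_max=80}*(1 + {V=1}*exp(1 + {V})* ///
    exp({Max_rate=1}*(1 + {V})^(1 + 1/{V})*({lamda=1} - dia)/{Vol_max}))^(-1/{V}))
```

Unbalanced parentheses in the posted expression may also be contributing to the r(480) error.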

Create a variable that defines the value of the next row

Hi,

I have a variable called ia_label that defines an area of interest and a variable called rankIA that gives the order in which the areas were accessed by each participant. I want to create a third variable that records, for each participant and each area of interest, the next area of interest accessed. Example of data below:

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input int participant str11 ia_label float(rankIA nextIA)
2 "Label 3" 3 .
2 "Label 2" 2 .
2 "Label 1" 1 .
end
An example of what I am aiming for is as follows:

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input int participant str11 ia_label float rankIA str7 nextIA
2 "Label 3" 3 ""      
2 "Label 2" 2 "Label 3"
2 "Label 1" 1 "Label 2"
end
Any thoughts would be much appreciated.
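The lead subscript [_n+1] does this once the data are sorted in visit order; a minimal sketch (nextIA2 is a new name so it does not clash with the existing nextIA variable):

```stata
sort participant rankIA
by participant: gen nextIA2 = ia_label[_n + 1]   // next area visited; empty on the last visit
```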

Example of multilevel logistic regression with only level-2 predictors

Hi there,

I am building a multilevel logistic regression model without level-1 predictors. That is, I have only level-2 predictors and thus want to analyze a macro-micro relationship. Do you know of any literature on this topic, or studies with a similar design? A step-by-step manual would be great. So far I have only encountered literature working with predictors at a minimum of two levels.

Thank you very much!

All the best,
Pavel

Merging two continuous variables in the same dataset

Hi,

I have two variables: 1) age of initiating cigarette use, with values from 10-23, and 2) age of initiating hookah use, with values from 11-19. These variables are not mutually exclusive, so people could have used both products at different ages; for example, a person could have first tried a cigarette at 10 and hookah at 13. I want to combine these variables into a new variable for age of initiating tobacco use that reflects both. I do not want to simply add up the values; I want the new variable to show the lowest age of initiation for each person.


e.g.

ageofciguse ageofhookahuse
16 15
15 16
10 17
12 22
11 11
14 19
15 16
19 18
16 14

Ideally, I would like to create a new variable

ageoftobaccouse
15
15
10
12
11
14
15
18
14

Next, how can I do this for more than two variables?
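egen's rowmin() returns the smallest nonmissing value across a variable list, and it extends directly to more than two variables:

```stata
egen ageoftobaccouse = rowmin(ageofciguse ageofhookahuse)
* for more products, just extend the list (ageofsnususe is a hypothetical example):
* egen ageoffirstuse = rowmin(ageofciguse ageofhookahuse ageofsnususe)
```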

reghdfe errors message : class FixedEffects undefined

I tried reghdfe for the first time, using the code below,
but I get this error message.

Code:
reghdfe depvar var1 var2 ~ var7, absorb(day week week#city) compact pool(1) vce(robust[,bw(#)])
(MWFE estimator converged in 4 iterations)
class FixedEffects undefined
The day variable runs from 01dec2011 to 31jan2019, week from 1 to 375, and city from 1 to 7.

I can't understand the message "class FixedEffects undefined".
What is the problem?

and

There is one more question.
Am I right in understanding that the following two commands compute standard errors in the same way?
Code:
xtreg depvar indepvar, fe vce (r)
reghdfe depvar indepvar, absorb () vce (r, [bw (#)})
Thank you for any advice.
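The bracketed parts of the help-file syntax, such as [,bw(#)], are not meant to be typed literally; a runnable sketch of the first call (whether this also clears the "class FixedEffects undefined" error is an assumption — that message is often fixed by reinstalling reghdfe and ftools):

```stata
reghdfe depvar var1 var2 var3 var4 var5 var6 var7, ///
    absorb(day week week#city) vce(robust)
```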

Identify variables in a range

Hello,

Can you please help me with the following issue:
I would like to identify the ids of the companies that have observations over the whole sample period, i.e. from Jan 2005 to Dec 2017.
Here is an example of my data:

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input long date float(mdate id) double ret_eom
16467 540 14713  .0065657461766821384
16467 540  8468  -.003093270296692758
16467 540 17056 -.0016608393468379927
16467 540 29267  -.001942484988471897
16467 540 35903   -.01216822099008778
16467 540 33340 -.0028468637678217734
16467 540 29323   .007498869742503357
end
format %d date
format %tm mdate
I tried the following code, but it only counts how many observations per id satisfy an in-range condition, and when I tried to list one of the ids, it returned nothing.

Code:
egen v1= total(inrange(yr, 2005, 2017)), by (id)
list if id == 63 & yr == 2006
Thanks a lot for your help!
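If there is one observation per id per month, an id covers the whole window exactly when it has 156 monthly observations (2005m1-2017m12 is 156 months); a sketch under that assumption:

```stata
bysort id: gen nobs = _N
levelsof id if nobs == 156     // ids present in all 156 months
```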






Lag not working for monthly/quarterly data

I am struggling with this issue and can't understand why it's not working.
I have quarterly panel data (isid gvkey date holds) and want to generate lagged values (a quarterly lag, i.e. 3 months).
I did the following preparation:

Code:
gen date=date(datadate,"YMD")
format date %tm
tsset gvkey date, monthly
sort gvkey date
gen test=L3.ibq

but it only generates missing values for test. Alternatively, with delta(3) and test=L.ibq, it doesn't work either.

Could someone please help me identify where the mistake in my process is?

Many thanks in advance!!

Philipp Z-L



Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input long gvkey str8 datadate float(date ibq test)
1166 "19980630" 14060     6.37 .
1166 "19990630" 14425   12.201 .
1166 "20010630" 15156      9.3 .
1166 "20040331" 16161    14.88 .
1166 "20040630" 16252    4.359 .
1166 "20040930" 16344    3.692 .
1166 "20041231" 16436    1.395 .
1166 "20050331" 16526   -7.248 .
1166 "20050630" 16617     .518 .
1166 "20050930" 16709   -6.296 .
1166 "20051231" 16801  -27.191 .
1166 "20060331" 16891   10.033 .
1166 "20060630" 16982   18.718 .
1166 "20060930" 17074   13.055 .
1166 "20061231" 17166   12.012 .
1166 "20070331" 17256   11.157 .
1166 "20070630" 17347   14.903 .
1166 "20070930" 17439   15.827 .
1166 "20071231" 17531    19.09 .
1166 "20080331" 17622   12.639 .
1166 "20080630" 17713  -34.706 .
1166 "20080930" 17805    2.421 .
1166 "20081231" 17897   -6.245 .
1166 "20090331" 17987  -23.276 .
1166 "20090630" 18078  -56.549 .
1166 "20090930" 18170  -15.817 .
1166 "20091231" 18262  -11.741 .
1166 "20100331" 18352    4.172 .
1166 "20100630" 18443   51.153 .
1166 "20100930" 18535   34.286 .
1166 "20101231" 18627   24.656 .
1166 "20110331" 18717   40.074 .
1166 "20110630" 18808    54.67 .
1166 "20110930" 18900   80.971 .
1166 "20111231" 18992   15.445 .
1166 "20120331" 19083     6.26 .
1166 "20120630" 19174   23.654 .
1166 "20120930" 19266    4.908 .
1166 "20121231" 19358  -21.733 .
1166 "20130331" 19448 1410.146 .
1166 "20130630" 19539  -23.403 .
1166 "20130930" 19631    -.875 .
1166 "20131231" 19723 -333.975 .
1166 "20140331" 19813   27.137 .
1166 "20140630" 19904   34.598 .
1166 "20140930" 19996   54.564 .
1166 "20141231" 20088   21.009 .
1166 "20150331" 20178   59.988 .
1166 "20150630" 20269   39.862 .
1166 "20150930" 20361   35.707 .
1166 "20151231" 20453   21.719 .
1166 "20160331" 20544    5.358 .
1166 "20160630" 20635   35.576 .
1166 "20160930" 20727   33.058 .
1166 "20161231" 20819   61.479 .
1166 "20170331" 20909   35.863 .
1166 "20170630" 21000  132.122 .
1166 "20170930" 21092   42.211 .
1166 "20171231" 21184  242.206 .
1166 "20180331" 21274   14.954 .
1166 "20180630" 21365   59.408 .
1166 "20180930" 21457   39.096 .
1166 "20181231" 21549   43.676 .
1932 "20010630" 15156      213 .
1932 "20040331" 16161      215 .
1932 "20040630" 16252      255 .
1932 "20040930" 16344      421 .
1932 "20041231" 16436      207 .
1932 "20050331" 16526      428 .
1932 "20050630" 16617      510 .
1932 "20050930" 16709      443 .
1932 "20051231" 16801      390 .
1932 "20060331" 16891      452 .
1932 "20060630" 16982      549 .
1932 "20060930" 17074      446 .
1932 "20061231" 17166      449 .
1932 "20070331" 17256      495 .
1932 "20070630" 17347      584 .
1932 "20070930" 17439      600 .
1932 "20071231" 17531      451 .
1932 "20080331" 17622      599 .
1932 "20080630" 17713      650 .
1932 "20080930" 17805      657 .
1932 "20081231" 17897      551 .
1932 "20090331" 17987      725 .
1932 "20090630" 18078      725 .
1932 "20090930" 18170    631.5 .
1932 "20091231" 18262    631.5 .
1932 "20100331" 18352    762.5 .
1932 "20100630" 18443    762.5 .
1932 "20100930" 18535      677 .
1932 "20101231" 18627      677 .
1932 "20110331" 18717      935 .
1932 "20110630" 18808      935 .
1932 "20110930" 18900    612.5 .
1932 "20111231" 18992    612.5 .
1932 "20120331" 19083      954 .
1932 "20120630" 19174    964.5 .
1932 "20120930" 19266    944.5 .
1932 "20121231" 19358      956 .
end
format %tm date
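The usual cause of this: date() returns a daily date, so formatting it %tm does not make it monthly — consecutive quarters are about 91 days apart, and L3. looks for an observation exactly 3 units back. A sketch that converts to a true monthly date first:

```stata
gen daily = date(datadate, "YMD")
gen mdate = mofd(daily)        // daily date -> monthly date
format mdate %tm
xtset gvkey mdate
gen test = L3.ibq              // value from 3 months (one quarter) earlier
```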

Mean on a graph pie

Hello.
I have two variables: one is continuous, the other is trinary (three categories). I want to make a pie chart of these variables, but I also want the mean of each category to appear on the graph. What is the Stata command to do so?
thank you.
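graph pie displays sums or percentages rather than means; one workaround sketch is a bar chart of category means with value labels (variable names assumed):

```stata
graph hbar (mean) contvar, over(trivar) blabel(bar, format(%9.2f))
```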

Doing Merge using rangejoin

Hello, I am a new member of this forum! Nice to meet you all.

I have just gone through a set of posts regarding rangejoin, but I could not apply it to my own data project; any help would be greatly appreciated!

So, I have two data sets where

use dataSet1

PERMNO FIRSTDATE LASTDATE
25881 19701113 19780630
10015 19830920 19860731
10023 19721214 19730605


and
use dataSet2
PERMNO TargetDate
25881 19701220
10015 19840305
10023 19721216

and so on.
So, using dataSet1, I ran the following code
(both my FirstDate and LastDate are stored as type long):

rangejoin NamesDate FirstDate LastDate using dataSet2.dta, by (PERMNO)
but I am getting
"no observations with valid interval bounds" as my error message. Would anyone be willing to help me out? Thank you all.
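rangejoin's first argument must be a variable in the using dataset; since dataSet2 holds TargetDate (not NamesDate), a sketch would be (assuming all three date variables are stored as comparable yyyymmdd numbers):

```stata
use dataSet1, clear
rangejoin TargetDate FIRSTDATE LASTDATE using dataSet2, by(PERMNO)
```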

How can I run an event study on this dataset?

Hi,

I want to look at market reaction in response to earnings announcements, and I have a dataset in the following format:
Code:
ticker   company_id   date_id   date   event_date   rm   ret   ar   surp
company_id identifies the company (a numerical representation of the ticker); date_id ranges from -5 to 5 and indexes the days in the event window; date holds the corresponding dates; event_date is the date of the event (corresponding to date_id = 0); rm is the market return; ret is the stock return; ar is the stock's abnormal return; and surp is the earnings surprise.
I have already looked at the Princeton page (https://dss.princeton.edu/online_hel...ventstudy.html) and tried to follow it, but got a little lost because my data is not in quite the same format as theirs.

I want something like this in the end: ar = x1*day_m5 + x2*day_m4 + ..., so I can see how returns react in the days leading up to, on the day of, and after the announcement.

Thank you!
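Since factor variables cannot take negative values, one sketch is to shift date_id before using it as a set of event-time dummies (the clustering choice here is an assumption):

```stata
gen evt = date_id + 6                        // shifts -5..5 to 1..11
regress ar ib6.evt, vce(cluster company_id)  // base category: the event day (date_id = 0)
```

Each i.evt coefficient is then the mean abnormal return on that event day relative to the base day.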

How to find ttest at 1% ?

Hello.

I want to run a t-test at the 5% level and one at the 1% level. I use this code:
Code:
ttest d2, by(b7a)
Then t = 0.4282 < 1.96, so it is not significant at the 5% level. But what about 1%? Do I have to run another command (which one, then?), or is saying 0.4282 < 2.58 enough to conclude that it is not significant at the 1% level either?
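ttest leaves the two-sided p-value in r(p), which can be compared against any significance level directly (comparing |t| with 2.58 is also fine for large samples). A sketch:

```stata
ttest d2, by(b7a)
display r(p)      // significant at the 1% level if r(p) < 0.01
```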



Testing b coefficients Panel models

I'm running 9 different panel models to test diaspora effects of migration on trade for 9 different countries. I would like to compare the parameter estimates obtained for one country with those of another; the question is whether the hypothesis that both sets of coefficient estimates are the same is acceptable on the data or should be rejected. How can I test this for every pair of countries? Thank you.
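One common approach is to pool the two countries of a pair and interact every regressor with a country dummy; a joint test of the interaction terms then tests whether the two coefficient sets are equal. A sketch ignoring the panel structure for simplicity (all names here are placeholders):

```stata
* countryB = 1 for the second country of the pair (hypothetical dummy)
regress trade c.x1##i.countryB c.x2##i.countryB
test 1.countryB 1.countryB#c.x1 1.countryB#c.x2   // H0: all coefficients equal
```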

