
program error: code follows on the same line as open brace

Hi everyone,

I am new to Stata. I want to replicate some code, but it shows me a program error saying "code follows on the same line as open brace". I would be very happy if someone could help me.
Here is the code:
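The code itself did not come through in the post, but for reference, the error "code follows on the same line as open brace" arises when a command follows the opening brace of a foreach/forvalues/while/program block on the same line. A minimal illustration:

Code:
* wrong: the command follows the open brace on the same line
foreach v in a b c { display "`v'"

* right: end the line at the brace and put the body on the next line
foreach v in a b c {
    display "`v'"
}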

Two-part model: independent errors

Hi all,

I have estimated a two-part model, and I am interested in checking if the residuals from the first and the second part are correlated or if they are independent. Is there any command to do that?

In addition, is there any command to estimate a double-hurdle model with dependent errors?

Thank you in advance.

Nikos Korompos

Covariate balance with kernel matching

Hello all,

I am using the following code to apply kernel matching to the treated and control groups, and then merge the weights into my main dataset:

Code:
foreach x in xvarsls {
    foreach yesno in 0 1 {
        use "$posted/sample_aw_robust2", clear
        keep if female==`yesno'
        probit treat $`x' welle_*
        predict ps if e(sample), p
        sort random
        qui psmatch2 treat, outcome(d1lifesat) kernel bw(0.06) ps(ps)
        gen w_treat_`x'_kmd1 = _weight
        keep pid syear w_treat_`x'_kmd1 _*
        save "$tables/match_`x'_kmd1_`yesno'", replace
    }
}

* combine the sub-datasets
foreach x in xvarsls {
    use "$tables/match_`x'_kmd1_0", clear
    append using "$tables/match_`x'_kmd1_1"
    sort pid syear
    save "$tables/matched_`x'_kmd1", replace
}

* merge the weights back onto the main dataset
use "$posted/sample_aw_robust2", clear
foreach x in xvarsls {
    merge 1:1 pid syear using "$tables/matched_`x'_kmd1.dta", keep(master match) nogen
}
However, when I run the covbal command:

Code:
foreach yesno in 0 1 {
preserve
keep if female==`yesno' & w_treat_xvarsls_kmd1>0 & w_treat_xvarsls_kmd1!=.
covbal treat age age2 mig foreign religious badhlth medhlth goodhlth nounemp pgerwzeit pgerwzeit2 pgexpft pgexpft2 ///
pgpsbil2_1 pgpsbil2_2 pgpsbil2_3 pgpsbil2_4 uni voctrain kids_0 kids_1 kids_2 kids_3 k_0_4 k_5_12 k_13_18 ///
mardur mardur2 marriedyoung household_size household_size2 own hhw4 marriages_1 marriages_2 lifesat lifesat2 ///
p_age p_age2 p_pgpsbil2_1 p_pgpsbil2_2 p_pgpsbil2_3 p_pgpsbil2_4 p_uni p_voctrain $exact, wt(w_treat_xvarsls_kmd1) abs for(%9,3f)
restore
}
Output
Code:
             |             Treated             |             Control             |        Balance      
             |      Mean   Variance   Skewness |      Mean   Variance   Skewness |  Std-diff  Var-ratio
-------------+---------------------------------+---------------------------------+----------------------
         age |    40,739     60,339      0,052 |    43,352     62,228     -0,150 |     0,334      0,970
        age2 |  1719,841   4,09e+05      0,418 |  1941,598   4,59e+05      0,153 |     0,336      0,891
         mig |     0,174      0,144      1,721 |     0,186      0,152      1,612 |     0,032      0,951
   foreigner |     0,104      0,094      2,588 |     0,145      0,124      2,019 |     0,123      0,757
   religious |     0,400      0,241      0,408 |     0,512      0,250     -0,050 |     0,227      0,963
     badhlth |     0,096      0,087      2,750 |     0,087      0,080      2,921 |     0,029      1,087
     medhlth |     0,325      0,220      0,749 |     0,327      0,220      0,738 |     0,005      0,999
    goodhlth |     0,580      0,244     -0,323 |     0,586      0,243     -0,347 |     0,012      1,007
     nounemp |     0,658      0,226     -0,666 |     0,710      0,206     -0,926 |     0,112      1,096
   pgerwzeit |    10,228     66,515      0,881 |    12,526     91,139      0,715 |     0,259      0,730
  pgerwzeit2 |   170,925  57208,581      2,329 |   248,042   1,02e+05      1,817 |     0,273      0,559
     pgexpft |    17,792     63,704      0,213 |    20,746     74,657     -0,006 |     0,355      0,853
    pgexpft2 |   380,060  94825,504      1,187 |   505,061   1,35e+05      0,733 |     0,369      0,701
  pgpsbil2_1 |     0,304      0,212      0,850 |     0,318      0,217      0,780 |     0,030      0,978
  pgpsbil2_2 |     0,386      0,238      0,470 |     0,387      0,237      0,462 |     0,004      1,001
  pgpsbil2_3 |     0,093      0,084      2,808 |     0,069      0,064      3,409 |     0,088      1,318
  pgpsbil2_4 |     0,217      0,171      1,370 |     0,225      0,175      1,314 |     0,019      0,977
         uni |     0,255      0,191      1,124 |     0,263      0,194      1,078 |     0,017      0,984
    voctrain |     0,774      0,175     -1,310 |     0,743      0,191     -1,113 |     0,072      0,919
      kids_0 |     0,270      0,197      1,039 |     0,333      0,222      0,708 |     0,139      0,889
      kids_1 |     0,342      0,226      0,666 |     0,270      0,197      1,036 |     0,157      1,145
      kids_2 |     0,299      0,210      0,880 |     0,291      0,206      0,922 |     0,017      1,019
      kids_3 |     0,090      0,082      2,868 |     0,106      0,095      2,554 |     0,055      0,863
       k_0_4 |     0,200      0,160      1,500 |     0,213      0,168      1,399 |     0,033      0,956
      k_5_12 |     0,357      0,230      0,599 |     0,296      0,208      0,895 |     0,130      1,105
     k_13_18 |     0,191      0,155      1,570 |     0,182      0,149      1,644 |     0,023      1,040
      mardur |    12,186     56,093      0,715 |    15,774     78,551      0,255 |     0,437      0,714
     mardur2 |   204,417  54042,907      1,887 |   327,368  95990,035      1,026 |     0,449      0,563
marriedyoung |     0,223      0,174      1,330 |     0,257      0,191      1,112 |     0,079      0,910
household_~e |     3,545      1,161      0,742 |     3,594      1,277      1,007 |     0,045      0,910
household_~2 |    13,725     76,468      2,184 |    14,195     97,899      4,601 |     0,050      0,781
         own |     0,461      0,249      0,157 |     0,610      0,238     -0,451 |     0,302      1,047
        hhw4 |     3,188      4,450      0,711 |     2,991      5,639      2,444 |     0,088      0,789
 marriages_1 |     0,899      0,091     -2,640 |     0,944      0,053     -3,863 |     0,169      1,730
 marriages_2 |     0,101      0,091      2,640 |     0,056      0,053      3,863 |     0,169      1,730
     lifesat |     6,435      3,310     -0,688 |     7,164      2,421     -1,002 |     0,431      1,368
    lifesat2 |    44,707    465,231      0,023 |    53,739    409,483     -0,187 |     0,432      1,136
       p_age |    37,878     62,479      0,467 |    41,060     68,205     -0,007 |     0,394      0,916
      p_age2 |  1497,061   4,05e+05      1,503 |  1754,161   4,68e+05      0,521 |     0,389      0,867
p_pgpsbil2_1 |     0,235      0,180      1,251 |     0,252      0,188      1,144 |     0,039      0,957
p_pgpsbil2_2 |     0,539      0,249     -0,157 |     0,509      0,250     -0,038 |     0,060      0,997
p_pgpsbil2_3 |     0,041      0,039      4,657 |     0,043      0,041      4,512 |     0,012      0,951
p_pgpsbil2_4 |     0,186      0,152      1,618 |     0,196      0,157      1,535 |     0,026      0,963
       p_uni |     0,177      0,146      1,694 |     0,207      0,164      1,447 |     0,076      0,889
  p_voctrain |     0,762      0,182     -1,233 |     0,715      0,204     -0,955 |     0,107      0,893
     welle_1 |     0,041      0,039      4,657 |     0,045      0,043      4,416 |     0,020      0,918
     welle_2 |     0,038      0,036      4,856 |     0,043      0,041      4,500 |     0,028      0,882
     welle_3 |     0,061      0,057      3,673 |     0,045      0,043      4,376 |     0,070      1,327
     welle_4 |     0,049      0,047      4,165 |     0,042      0,041      4,545 |     0,033      1,158
     welle_5 |     0,035      0,034      5,078 |     0,038      0,036      4,841 |     0,017      0,924
     welle_6 |     0,026      0,025      5,946 |     0,042      0,040      4,571 |     0,087      0,634
     welle_7 |     0,049      0,047      4,165 |     0,043      0,041      4,527 |     0,032      1,151
     welle_8 |     0,081      0,075      3,068 |     0,067      0,063      3,459 |     0,053      1,194
     welle_9 |     0,067      0,062      3,474 |     0,061      0,058      3,658 |     0,022      1,084
    welle_10 |     0,070      0,065      3,384 |     0,064      0,060      3,564 |     0,022      1,084
    welle_11 |     0,061      0,057      3,673 |     0,059      0,056      3,729 |     0,006      1,027
    welle_12 |     0,064      0,060      3,571 |     0,055      0,052      3,908 |     0,037      1,154
    welle_13 |     0,035      0,034      5,078 |     0,048      0,046      4,204 |     0,069      0,730
    welle_14 |     0,052      0,050      4,028 |     0,050      0,048      4,126 |     0,010      1,043
    welle_15 |     0,067      0,062      3,474 |     0,048      0,046      4,227 |     0,080      1,364
    welle_16 |     0,038      0,036      4,856 |     0,039      0,037      4,766 |     0,007      0,972
    welle_17 |     0,032      0,031      5,329 |     0,035      0,034      5,037 |     0,019      0,909
    welle_18 |     0,035      0,034      5,078 |     0,054      0,051      3,955 |     0,092      0,661
    welle_19 |     0,061      0,057      3,673 |     0,062      0,058      3,650 |     0,003      0,993
    welle_20 |     0,041      0,039      4,657 |     0,059      0,056      3,726 |     0,087      0,698
    welle_21 |     0,000      0,000          . |     0,000      0,000          . |         .          .
    welle_22 |     0,000      0,000          . |     0,000      0,000          . |         .          .
    welle_23 |     0,000      0,000          . |     0,000      0,000          . |         .          .
--------------------------------------------------------------------------------------------------------


(44,966 observations deleted)



             |             Treated             |             Control             |        Balance      
             |      Mean   Variance   Skewness |      Mean   Variance   Skewness |  Std-diff  Var-ratio
-------------+---------------------------------+---------------------------------+----------------------
         age |    39,789     54,342      0,059 |    43,201     61,775     -0,197 |     0,448      0,880
        age2 |  1637,332   3,52e+05      0,475 |  1928,139   4,49e+05      0,129 |     0,459      0,784
         mig |     0,169      0,141      1,766 |     0,154      0,130      1,918 |     0,041      1,081
   foreigner |     0,110      0,098      2,495 |     0,116      0,102      2,404 |     0,018      0,959
   religious |     0,473      0,250      0,107 |     0,528      0,249     -0,114 |     0,111      1,003
     badhlth |     0,113      0,100      2,450 |     0,105      0,094      2,583 |     0,026      1,070
     medhlth |     0,327      0,221      0,739 |     0,336      0,223      0,695 |     0,019      0,989
    goodhlth |     0,561      0,247     -0,244 |     0,560      0,246     -0,240 |     0,002      1,002
     nounemp |     0,614      0,238     -0,469 |     0,671      0,221     -0,728 |     0,119      1,076
   pgerwzeit |     7,950     53,992      1,285 |    10,370     76,370      1,003 |     0,300      0,707
  pgerwzeit2 |   117,036  39213,245      2,676 |   183,913  74130,701      2,222 |     0,281      0,529
     pgexpft |    10,311     64,578      0,968 |    12,453     85,415      0,733 |     0,247      0,756
    pgexpft2 |   170,707  58292,701      2,274 |   240,490  93950,995      1,708 |     0,253      0,620
  pgpsbil2_1 |     0,180      0,148      1,663 |     0,224      0,174      1,325 |     0,109      0,853
  pgpsbil2_2 |     0,513      0,251     -0,051 |     0,508      0,250     -0,034 |     0,008      1,002
  pgpsbil2_3 |     0,039      0,038      4,733 |     0,044      0,042      4,460 |     0,022      0,908
  pgpsbil2_4 |     0,268      0,197      1,050 |     0,224      0,174      1,326 |     0,102      1,132
         uni |     0,231      0,178      1,277 |     0,253      0,189      1,134 |     0,052      0,942
    voctrain |     0,766      0,180     -1,258 |     0,739      0,193     -1,087 |     0,063      0,931
      kids_0 |     0,338      0,224      0,685 |     0,456      0,248      0,177 |     0,242      0,905
      kids_1 |     0,355      0,230      0,606 |     0,267      0,196      1,052 |     0,190      1,173
      kids_2 |     0,234      0,180      1,258 |     0,217      0,170      1,371 |     0,039      1,056
      kids_3 |     0,073      0,068      3,276 |     0,060      0,056      3,719 |     0,055      1,214
       k_0_4 |     0,096      0,087      2,747 |     0,085      0,077      2,985 |     0,039      1,121
      k_5_12 |     0,344      0,226      0,658 |     0,274      0,199      1,012 |     0,151      1,136
     k_13_18 |     0,242      0,184      1,203 |     0,214      0,168      1,397 |     0,068      1,096
      mardur |    13,614     61,170      0,414 |    17,995     86,362      0,089 |     0,510      0,708
     mardur2 |   246,341  62238,446      1,640 |   410,171   1,25e+05      0,851 |     0,536      0,498
marriedyoung |     0,375      0,235      0,518 |     0,474      0,249      0,104 |     0,202      0,942
household_~e |     3,377      1,004      0,288 |     3,315      1,068      0,429 |     0,061      0,940
household_~2 |    12,408     51,361      1,026 |    12,057     55,665      1,595 |     0,048      0,923
         own |     0,493      0,251      0,028 |     0,611      0,238     -0,454 |     0,238      1,054
        hhw4 |     6,400     20,642      1,847 |     6,133     19,512      2,102 |     0,059      1,058
 marriages_1 |     0,913      0,080     -2,924 |     0,931      0,064     -3,394 |     0,067      1,240
 marriages_2 |     0,087      0,080      2,924 |     0,069      0,064      3,394 |     0,067      1,240
     lifesat |     6,394      3,375     -0,479 |     7,016      2,788     -0,907 |     0,354      1,211
    lifesat2 |    44,254    498,099      0,233 |    52,012    453,503     -0,153 |     0,356      1,098
       p_age |    42,665     67,173      0,176 |    46,230     78,259      0,102 |     0,418      0,858
      p_age2 |  1887,268   5,13e+05      0,607 |  2215,515   6,93e+05      0,642 |     0,423      0,739
p_pgpsbil2_1 |     0,265      0,195      1,066 |     0,311      0,214      0,814 |     0,103      0,910
p_pgpsbil2_2 |     0,439      0,247      0,244 |     0,403      0,241      0,395 |     0,074      1,027
p_pgpsbil2_3 |     0,065      0,061      3,536 |     0,060      0,056      3,705 |     0,020      1,077
p_pgpsbil2_4 |     0,231      0,178      1,277 |     0,225      0,174      1,316 |     0,014      1,021
       p_uni |     0,254      0,190      1,133 |     0,268      0,196      1,046 |     0,034      0,967
  p_voctrain |     0,761      0,183     -1,221 |     0,749      0,188     -1,146 |     0,028      0,970
     welle_1 |     0,023      0,022      6,434 |     0,035      0,034      5,084 |     0,073      0,659
     welle_2 |     0,039      0,038      4,733 |     0,038      0,037      4,818 |     0,006      1,034
     welle_3 |     0,048      0,046      4,235 |     0,039      0,038      4,750 |     0,043      1,214
     welle_4 |     0,025      0,025      6,039 |     0,035      0,034      5,052 |     0,057      0,732
     welle_5 |     0,025      0,025      6,039 |     0,032      0,031      5,354 |     0,038      0,809
     welle_6 |     0,037      0,035      4,934 |     0,038      0,037      4,828 |     0,008      0,966
     welle_7 |     0,034      0,033      5,159 |     0,037      0,035      4,935 |     0,015      0,929
     welle_8 |     0,065      0,061      3,536 |     0,062      0,058      3,626 |     0,011      1,042
     welle_9 |     0,068      0,063      3,444 |     0,061      0,057      3,661 |     0,026      1,100
    welle_10 |     0,059      0,056      3,737 |     0,065      0,061      3,523 |     0,025      0,916
    welle_11 |     0,082      0,075      3,055 |     0,067      0,062      3,477 |     0,058      1,210
    welle_12 |     0,048      0,046      4,235 |     0,057      0,054      3,829 |     0,040      0,853
    welle_13 |     0,068      0,063      3,444 |     0,057      0,054      3,815 |     0,043      1,173
    welle_14 |     0,062      0,058      3,634 |     0,056      0,053      3,860 |     0,025      1,102
    welle_15 |     0,042      0,041      4,551 |     0,050      0,048      4,117 |     0,038      0,850
    welle_16 |     0,051      0,048      4,096 |     0,048      0,045      4,247 |     0,014      1,064
    welle_17 |     0,025      0,025      6,039 |     0,039      0,037      4,770 |     0,077      0,663
    welle_18 |     0,070      0,066      3,358 |     0,057      0,054      3,807 |     0,053      1,214
    welle_19 |     0,085      0,078      2,988 |     0,066      0,062      3,487 |     0,069      1,253
    welle_20 |     0,045      0,043      4,386 |     0,061      0,057      3,673 |     0,071      0,755
    welle_21 |     0,000      0,000          . |     0,000      0,000          . |         .          .
    welle_22 |     0,000      0,000          . |     0,000      0,000          . |         .          .
    welle_23 |     0,000      0,000          . |     0,000      0,000          . |         .          .
--------------------------------------------------------------------------------------------------------

The covariates do not appear to be balanced.

Is there a problem with the code? (The same approach worked for inverse propensity weights.)

Thank you

Export a Mata matrix from Stata to txt

Hi everyone!
Do you know how to export a matrix that I created in Stata with Mata into a txt file? Thank you!
Best,
Clara
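A minimal sketch (the matrix and filename are illustrative) using Mata's file I/O functions:

Code:
mata:
    A = (1, 2 \ 3, 4)                  // illustrative matrix
    fh = fopen("mymatrix.txt", "w")
    for (i = 1; i <= rows(A); i++) {
        fput(fh, invtokens(strofreal(A[i, .])))  // one space-separated row per line
    }
    fclose(fh)
end

Alternatively, st_matrix("A", A) copies the Mata matrix to a Stata matrix, after which svmat and export delimited produce a text file.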

Time Fixed Effect and Period Dummy

Hi there,


I want to capture the changes in the mean of the dependent variable (y) across different periods. Assume I have 10 years of panel data on 10 companies. I use a dummy variable Y6Y10 to indicate the last 5 years (i.e., years 6 to 10), so its coefficient will capture the change in the mean of y in years 6 to 10 relative to years 1 to 5. I also want to include company fixed effects to control for unobservable time-invariant company characteristics.

My question is whether I can also include year fixed effects, to control for unobservable changes in the economic trend, when the period dummy Y6Y10 is already in the model. Will this cause any statistical/econometric issues? (I have run the model and obtained statistically significant loadings on both the year dummies and the period dummy.)

If I consider three control variables x1, x2, and x3, my model will look like:

Code:
regress y x1 x2 x3 Y6Y10 i.year i.company, vce(cluster year)

Your help will be very much appreciated.


Regards,
Georgina




How do you create a Mata function that accepts vectors and scalars as arguments

Hello, I have had trouble writing a Mata function that accepts two arguments: (1) a real vector A and (2) a real scalar p. If the scalar p has decimal places, the function should round it to the nearest integer. The function should then take the vector A and calculate, element-wise, A^2, A^3, ..., A^(p-1), A^p (all in separate columns). The result should be displayed as a matrix, which should also be exported to Stata's return list. The code must declare the types of the inputs and of all variables used in the function.
I am a beginner in Stata, and this task may be beyond my expertise, so any help would be appreciated.


Best regards,

Juan
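A minimal Mata sketch along these lines (the function and matrix names are illustrative; Mata functions have no return list of their own, so the result is copied into a Stata matrix here, and wrapping the call in an rclass ado-program with -return matrix- would place it in r()):

Code:
mata:
real matrix powers(real vector A, real scalar p)
{
    real matrix    R
    real colvector a
    real scalar    n, k

    n = round(p)                  // round p to the nearest integer
    a = colshape(A, 1)            // treat A as a column vector
    R = J(rows(a), n - 1, .)
    for (k = 2; k <= n; k++) {
        R[., k - 1] = a:^k        // element-wise k-th power
    }
    st_matrix("Apowers", R)       // copy the result to Stata
    return(R)
}
powers((1, 2, 3), 3.4)            // p rounds to 3: columns A:^2 and A:^3
end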

How do you create a program that displays only the variables with the largest maximum

Hello, how can you write a Stata program that accepts only a list of numeric variables as input? The program should then check each variable's maximum value and display the variable(s) that have the largest maximum. Any help would be appreciated.

Best regards,

Juan
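A minimal sketch of such a program (the program name is illustrative):

Code:
capture program drop maxvars
program define maxvars
    syntax varlist(numeric)                  // accepts only numeric variables
    tempname best
    scalar `best' = .
    local winners
    foreach v of local varlist {
        quietly summarize `v', meanonly
        if r(N) == 0 continue                // skip variables with no data
        if missing(`best') | r(max) > `best' {
            scalar `best' = r(max)           // new largest maximum
            local winners `v'
        }
        else if r(max) == `best' {
            local winners `winners' `v'      // tie: add to the list
        }
    }
    display as text "largest maximum = " as result `best'
    display as text "variable(s): " as result "`winners'"
end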

Rangerun/Rangestat with complicated conditions

Dear all,

Suppose I have the following dataset.

Code:
clear
input str8 groupid str8 memberid year donations
"A000" "B000" 1980 12
"A000" "B001" 1980 23
"A000" "B002" 1980 45
"A000" "B001" 1981 56
"A000" "B002" 1981 31
"A000" "B003" 1981 589
"A000" "B004" 1981 23
"A000" "B002" 1982 23
"A000" "B003" 1982 25
"A000" "B005" 1982 34
"A000" "B002" 1983 65
"A000" "B005" 1983 65
"A000" "B002" 1984 12
"A000" "B002" 1985 87
"A000" "B006" 1985 14
"A001" "B015" 1984 69
"A001" "B018" 1985 34
"A001" "B019" 1985 32
"A001" "B017" 1986 23
"A001" "B019" 1986 65
"A002" "B000" 1980 54
"A002" "B000" 1981 98
"A002" "B005" 1981 54
end
I'm struggling to compute the total value of donations over the last 5 years (interval(year -4 0)) made by the member whose id appears most frequently in each group over that window, as well as the total value of donations made by all other members together over the same window.

I'm also struggling to compute the total value of donations over the last 5 years (interval(year -4 0)) made by the member who donates the most (largest sum of donation values) in each group over that window, and the total value of donations made by all other members together over the same window.

Could anyone please help me with this?

Thank you very much for your help,

Vinh
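A possible sketch using rangerun (from SSC: ssc install rangerun), hedged in that ties are broken arbitrarily by memberid; for each observation's group and 5-year window it computes the totals for the most frequent member and for the largest donor, plus everyone else's totals:

Code:
capture program drop windowstats
program define windowstats
    tempvar cnt tot
    bysort memberid: gen `cnt' = _N                       // appearances per member
    bysort memberid: egen double `tot' = total(donations) // total per member
    egen double all_total = total(donations)              // all members together
    gsort -`cnt' memberid
    gen double freq_total = `tot'[1]                      // most frequent member
    gsort -`tot' memberid
    gen double top_total = `tot'[1]                       // largest donor
    gen double other_freq = all_total - freq_total
    gen double other_top  = all_total - top_total
end

rangerun windowstats, interval(year -4 0) by(groupid)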

How to calculate the percentage of death-censored event-free survival?

Hello, I have a problem with calculating the 10-year event-free survival percentage.

When I plotted the KM curve, the 10-year event-free survival rate appeared to be around 0.60.

But when I used stdescribe, the mean number of failure events was 0.26, which would make the event-free survival rate around 0.74, much higher than observed from the KM curve.

I suspect this is because some patients were censored due to death, but I don't know how to calculate the exact value of the death-censored 10-year event-free survival.

Thank you for your help
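One note, hedged since the data are not shown: the 0.26 from stdescribe is the fraction of subjects with a failure event, which treats censored subjects (including those censored at death) as event-free, so 1 - 0.26 = 0.74 overstates event-free survival; the Kaplan-Meier estimate accounts for censoring. Its value at exactly 10 years can be listed directly:

Code:
* assuming the data are already -stset- with analysis time in years
sts list, at(10)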

Help writing a Stata program

Hello, I need help writing a Stata program that takes a number list as input and then checks which of the included numbers lie in a certain (closed) interval specified by the user (the user simply specifies the lower and upper ends of the interval using separate options). All numbers found to be in the interval should then be displayed in ascending order and also stored in the return list. It's quite a challenge, and any help would be appreciated.

Best regards,

Juan
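A minimal sketch of such a program (the program name is illustrative, and the bound defaults in -syntax- are placeholders only):

Code:
capture program drop inlist_range
program define inlist_range, rclass
    syntax anything(name=numbers), LOWer(real 0) UPper(real 1)
    numlist "`numbers'", sort                 // expand and sort ascending
    local inside
    foreach n of numlist `r(numlist)' {
        if inrange(`n', `lower', `upper') {   // closed interval [lower, upper]
            local inside `inside' `n'
        }
    }
    display as text "in [`lower', `upper']: " as result "`inside'"
    return local inside "`inside'"
end

* example: inlist_range 9 1 5 3.5, lower(2) upper(8) displays "3.5 5"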

Doing a t test on regression coefficients

Hi,

I am trying to test the null hypothesis B2 = -B1, where B2 is the coefficient on a variable called lexpendB and B1 is the coefficient on lexpendA.

My Stata output is as follows:

Code:
. regress voteA lexpendA lexpendB prtystrA

      Source |       SS           df       MS      Number of obs   =       173
-------------+----------------------------------   F(3, 169)       =    215.23
       Model |  38405.1089         3   12801.703   Prob > F        =    0.0000
    Residual |  10052.1396       169  59.4801161   R-squared       =    0.7926
-------------+----------------------------------   Adj R-squared   =    0.7889
       Total |  48457.2486       172  281.728189   Root MSE        =    7.7123

------------------------------------------------------------------------------
       voteA |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
    lexpendA |   6.083316     .38215    15.92   0.000     5.328914    6.837719
    lexpendB |  -6.615417   .3788203   -17.46   0.000    -7.363247   -5.867588
    prtystrA |   .1519574   .0620181     2.45   0.015     .0295274    .2743873
       _cons |   45.07893   3.926305    11.48   0.000     37.32801    52.82985
------------------------------------------------------------------------------


But when I do ttest lexpendB = -lexpendA, it gives me an error. What would be the code for testing this hypothesis?
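For reference, -ttest- compares variables rather than coefficients, which is why it errors; after -regress-, linear restrictions on coefficients are tested with -test- or -lincom-:

Code:
* Wald test of H0: B2 = -B1, i.e., B1 + B2 = 0
test lexpendA + lexpendB = 0

* or estimate the sum B1 + B2 directly, with its SE and t statistic
lincom lexpendA + lexpendB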

How to convert the DECOMPOSE function output to TEX?

I've been told to use eststo/estout but it doesn't give me what I need.

Here is the test data

Code:
clear
input wage educ gender
100 0 0
130 1 0
150 1 0
90 0 1
65 0 1
end
label var educ "Education"
eststo: decompose wage educ, by(gender) detail
estout
just yields

Code:
Summary of decomposition results:
High: gender==     0.0000
Low:  gender==     1.0000
-------------------------------------------------------------------
  Mean prediction high (H): 126.667
   Mean prediction low (L):  77.500
Raw differential (R) {H-L}:  49.167
   - due to endowments (E):   0.000
 - due to coefficients (C):  22.500
 - due to interaction (CE):  26.667
-------------------------------------------------------------------
                         D:   0       1       0.5     0.600   *
                              -------------------------------------
Unexplained (U){C+(1-D)CE}:  49.167  22.500  35.833  33.167  12.500
    Explained (V) {E+D*CE}:   0.000  26.667  13.333  16.000  36.667
       % unexplained {U/R}:   100.0    45.8    72.9    67.5    25.4
         % explained (V/R):     0.0    54.2    27.1    32.5    74.6
-------------------------------------------------------------------
Note: D in 4th column = relative frequency of high group
      * reference: pooled model over both categories

Decomposition results for variables:
-------------------------------------------------------------------
                                      explained: D = 
                                     ------------------------------
 Variables    E(D=0)  C       CE      1       0.5     0.600   *
-------------------------------------------------------------------
      educ    0.000   0.000  26.667  26.667  13.333  16.000  36.667
     _cons    0.000  22.500   0.000   0.000   0.000   0.000   0.000
-------------------------------------------------------------------
     Total    0.000  22.500  26.667  26.667  13.333  16.000  36.667
-------------------------------------------------------------------
and

Code:
. estout

-------------------------
                     est1
                        b
-------------------------
educ                   55
_cons                  85
-------------------------
What I want is .tex output that looks like this:

Code:
Mean prediction high (H): & 126.667 \\
Mean prediction low (L): & 77.500 \\
Raw differential (R) H-L: & 49.167 \\
Total unexplained & 33.167 \\
Total explained & 16.000 \\
Fraction unexplained & 67.5 \\
Fraction explained & 32.5 \\
Decomposition results for variables & \\
Education & 16.000 \\
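Since estout is built around e(b) and e(V), one hedged alternative is to write the fragment yourself with -file write-, pulling the numbers from whatever decompose stores (run -return list- and -ereturn list- after decompose to see the actual names; r(mpH) and r(mpL) below are placeholders, not decompose's real names):

Code:
decompose wage educ, by(gender) detail
return list                              // inspect what is actually stored
file open fh using "decomp.tex", write replace
* r(mpH)/r(mpL) are hypothetical names; substitute the stored ones
file write fh "Mean prediction high (H): & " %9.3f (r(mpH)) " \\" _n
file write fh "Mean prediction low (L): & " %9.3f (r(mpL)) " \\" _n
file close fh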

split a string variable

Hello, I've got a multiple-response variable from a survey:
1 "Miradas lascivas (degeneradas)"
2 "Silbidos y otros sonidos (besos, jadeos, bocinazos)"
3 "Acoso verbal (aluciones al cuerpo y de tipo sexual)"
4 "Arcamiento intimidante (tocar cintura, hablar al oido,etc)"
5 "Agarrones (de senos, vulva, trasero, pene, besos a la fuerza)"
6 "Sentimiento de presion"
7 "Persecución (a pie o en medio de transporte)"
8 "Exhibicionismo"
9 "Violación"
10 "Nunca he sido acosada/o"
11 "Otro"
This variable allows multiple responses, so a respondent can choose, for example:
1, 2, and 3; or just 1; or 1, 2, 5, 6, 7; or 1 and 11; and so on.
My data are in Excel, and what I want is to split each response in order to create a frequency chart showing how many respondents answered 1, how many 2, and so on.
I hope you can guide me with this.
Kind regards
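A sketch, assuming the multiple responses arrive in a single string variable, say resp, holding comma-separated codes such as "1, 2, 5" (the file and variable names are illustrative):

Code:
import excel using "survey.xlsx", firstrow clear
split resp, parse(",") gen(choice) destring   // one numeric variable per answer
gen id = _n
reshape long choice, i(id) j(order)           // one row per chosen code
drop if missing(choice)
tab choice                                    // frequency of each response code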

Vary a variable while setting others to their mean

Hello everyone, I'm new to Stata and running into a problem; I'm currently using Stata 14 on Windows 10.

So, I have a dataset from "A comparison of parametric and semiparametric estimates of the effect of spousal health insurance coverage on weekly hours worked by wives" by Craig A. Olson (1998). As the title states, it's about health insurance coverage and has 11 variables for about 22,000 observations.

I was tasked with creating a dummy variable for the dependent variable and more dummies for the other variables, and then running a logit regression, which wasn't an issue.

Now, though, I'm tasked with setting all variables (23 when excluding the dependent variable and its dummy version) except one to their respective means, multiplying them by their coefficients (from the previous logit regression), and looking at the effect on the variable that wasn't set to its mean, i.e., the variable I have to vary; it's a binary variable taking values 0 or 1.

I'm aware that I could use -egen- and a foreach loop to get this result; what is not clear to me, though, is:

1) How do I create/modify values in one go so that every variable is set to its mean and multiplied by its coefficient?

2) How do I then run a logit regression on the variable I vary while using the information in 1)?

3) Am I going about this the wrong way?

Thanks for your help.

Nathan
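For what it's worth, evaluating the fitted logit at the means of the other covariates while varying one binary regressor is what -margins- computes directly; a minimal sketch with illustrative variable names:

Code:
* y is the outcome dummy, d the binary variable to vary,
* x1-x3 stand in for the remaining covariates
logit y d x1 x2 x3
margins, at(d=(0 1)) atmeans    // predicted probability at covariate means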

Synth_Runner Package

Hello everyone,
I am trying to run the placebo check for the synthetic control method. I am able to run synth fine, but when I try to run synth_runner, it will not work because I have an unbalanced panel. Does anyone know how I could still get it to run with the unbalanced panel? This is the error I get:

Code:
Panel must be strongly balanced. See -tsset-.
r(9);

Here is my code:

Code:
synth_runner Foreign Foreign(1997) Foreign(1998) Foreign(1999) Foreign(2000) Foreign(2001) Foreign(2002) Corruption(2000) Corruption(2002) OilrentsofGDPNYGDPPET GDPpercapitacurrentUSNY PopulationtotalSPPOPTOTL, trunit(32) trperiod(2003) keep(Morocco, replace) nested fig

If I use synth instead of synth_runner, it runs fine, but I only get one possible outcome.
Thanks in advance!!!!

(nonstandard) cumulative product?

Dear All, I found this question here (http://bbs.pinggu.org/forum.php?mod=...=1#pid54858044). The data set is:
Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input byte(brand t) long sales
1  1  511000
1  2 1100000
1  3  294000
1  4  228000
1  5  199000
1  6  174000
1  7  155000
1  8  198400
1  9  170000
1 10  276700
2  1  476000
2  2  604200
2  3  436000
2  4  335000
2  5  259000
2  6  360000
2  7  338000
2  8  381800
2  9  425000
2 10  403600
3  1  197000
3  2  490700
3  3  244000
3  4  127000
3  5  130000
3  6   82000
3  7   82000
3  8   98500
3  9  159000
3 10  130600
end
The purpose is to calculate `sum_sales' for each `brand' as shown in the attached screenshot (assume r=0.5; the example is for brand==1). [Attachment not reproduced here.]
Any suggestion is appreciated.
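Since the attachment is not reproduced, the target is a guess; if the goal is the discounted running sum s_t = sales_t + r*s_(t-1), a sketch:

Code:
* guess at the recursion: sum_sales = sales + r * (previous sum_sales)
local r = 0.5
bysort brand (t): gen double sum_sales = sales
bysort brand (t): replace sum_sales = sales + `r'*sum_sales[_n-1] if _n > 1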

Bootstrap: how to obtain corrected coefficients?

My aim is to correct the coefficients that I obtained through quantile regression by using the following command:

Code:
bsqreg BIOTHTOES_MIN DLEEFT LSLEN1, quantile(95) reps(200)

BIOTHTOES_MIN is the dependent variable; DLEEFT and LSLEN1 are the independent variables. My question is whether the coefficients in the result are already corrected by a certain 'shrinkage factor'.

Problem with sampling exercise and coding

Hi, I need some help with the following exercise. I have two variables, quarter and type, and have to create a sample of 200 for each quarter (using the variable quarter), so:

Code:
set seed 40
bsample 200, strata(quarter)

tab quarter

  Submuestra |      Freq.     Percent        Cum.
-------------+-----------------------------------
         178 |        200       25.00       25.00
         179 |        200       25.00       50.00
         182 |        200       25.00       75.00
         183 |        200       25.00      100.00
-------------+-----------------------------------
       Total |        800      100.00


But then they ask me to do the same with the requirement that each quarter's sample of 200 contain at least 70 observations of type 1, with the rest type 2. I am not able to come up with the code (one possible approach is sketched below).
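One possible approach (a sketch): draw exactly 70 of type 1 and 130 of type 2 within each quarter, which guarantees at least 70 of type 1 in each quarter's 200:

Code:
set seed 40
preserve
keep if type == 1
bsample 70, strata(quarter)       // 70 of type 1 per quarter
tempfile t1
save `t1'
restore
keep if type == 2
bsample 130, strata(quarter)      // the remaining 130 per quarter
append using `t1'
tab quarter type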



Here is the data:
Quarter type
178 1
178 1
178 2
178 1
178 2
178 2
178 2
178 2
178 1
178 2
178 2
178 2
178 1
178 1
178 2
178 2
178 1
178 2
178 2
178 1
178 2
178 2
178 1
178 2
178 2
178 2
178 1
178 2
178 1
178 2
178 2
178 2
178 1
178 1
178 2
178 1
178 1
178 2
178 2
178 1
178 1
178 2
178 1
178 2
178 1
178 1
178 2
178 1
178 2
178 1
178 2
178 1
178 2
178 1
178 2
178 2
178 2
178 2
178 2
178 2
178 1
178 2
178 2
178 1
178 2
178 1
178 1
178 2
178 2
178 1
178 2
178 2
178 2
178 2
178 2
178 1
178 1
178 1
178 1
178 2
178 1
178 2
178 1
178 1
178 1
178 1
178 1
178 2
178 2
178 2
178 1
178 1
178 1
178 1
178 1
178 1
178 2
178 2
178 2
178 1
178 1
178 2
178 1
178 2
178 1
178 2
178 1
178 2
178 1
178 2
178 1
178 2
178 2
178 2
178 2
178 1
178 1
178 1
178 1
178 1
178 1
178 1
178 2
178 2
178 1
178 1
178 2
178 2
178 2
178 1
178 2
178 1
178 1
178 1
178 1
178 2
178 1
178 1
178 2
178 1
178 1
178 1
178 2
178 1
178 2
178 1
178 1
178 2
178 2
178 2
178 1
178 2
178 2
178 2
178 1
178 1
178 2
178 1
178 1
178 2
178 2
178 2
178 2
178 1
178 2
178 2
178 2
178 1
178 2
178 1
178 1
178 1
178 1
178 1
178 1
178 2
178 2
178 2
178 2
178 2
178 1
178 2
178 1
178 1
178 2
178 2
178 1
178 2
178 2
178 2
178 2
178 2
178 2
178 1
178 2
178 2
178 1
178 1
178 2
178 2
179 1
179 1
179 1
179 2
179 1
179 1
179 1
179 1
179 1
179 2
179 2
179 2
179 1
179 2
179 2
179 2
179 2
179 2
179 1
179 2
179 2
179 1
179 2
179 2
179 2
179 2
179 2
179 2
179 1
179 1
179 2
179 2
179 2
179 2
179 1
179 2
179 1
179 2
179 1
179 1
179 1
179 1
179 2
179 1
179 2
179 1
179 2
179 2
179 1
179 1
179 2
179 2
179 2
179 1
179 1
179 1
179 1
179 2
179 2
179 1
179 1
179 1
179 1
179 2
179 1
179 1
179 1
179 1
179 1
179 1
179 1
179 2
179 1
179 1
179 2
179 2
179 2
179 1
179 2
179 2
179 2
179 2
179 1
179 2
179 1
179 1
179 1
179 2
179 1
179 2
179 2
179 1
179 1
179 1
179 2
179 1
179 1
179 1
179 2
179 1
179 1
179 2
179 1
179 2
179 2
179 1
179 1
179 2
179 2
179 2
179 1
179 1
179 1
179 2
179 1
179 1
179 1
179 1
179 2
179 2
179 1
179 1
179 1
179 2
179 1
179 1
179 2
179 2
179 2
179 2
179 1
179 2
179 2
179 2
179 2
179 2
179 1
179 2
179 1
179 2
179 2
179 2
179 2
179 1
179 1
179 2
179 2
179 2
179 2
179 2
179 2
179 1
179 1
179 2
179 2
179 1
179 2
179 2
179 1
179 2
179 1
179 2
179 2
179 2
179 2
179 2
179 2
179 1
179 2
179 1
179 1
179 2
179 1
179 1
179 1
179 2
179 1
179 1
179 1
179 1
179 2
179 1
179 1
179 1
179 1
179 1
179 2
179 1
179 1
179 2
179 1
179 2
179 1
179 2
179 2
179 2
179 2
179 1
179 2
179 2
182 2
182 1
182 1
182 2
182 1
182 1
182 1
182 2
182 1
182 2
182 2
182 1
182 1
182 1
182 2
182 2
182 2
182 1
182 1
182 2
182 2
182 2
182 2
182 2
182 2
182 2
182 2
182 2
182 1
182 1
182 2
182 1
182 1
182 2
182 1
182 1
182 2
182 1
182 1
182 2
182 1
182 1
182 1
182 1
182 2
182 2
182 2
182 2
182 2
182 2
182 1
182 1
182 2
182 2
182 1
182 1
182 1
182 2
182 1
182 2
182 1
182 2
182 2
182 1
182 2
182 1
182 2
182 1
182 1
182 2
182 1
182 1
182 2
182 1
182 1
182 2
182 2
182 2
182 1
182 2
182 2
182 1
182 2
182 1
182 1
182 2
182 1
182 1
182 1
182 1
182 1
182 2
182 2
182 1
182 2
182 1
182 2
182 1
182 2
182 2
182 2
182 2
182 1
182 1
182 1
182 1
182 2
182 1
182 1
182 2
182 1
182 1
182 2
182 1
182 1
182 2
182 1
182 1
182 1
182 1
182 1
182 1
182 2
182 2
182 1
182 2
182 1
182 2
182 2
182 1
182 2
182 1
182 2
182 1
182 2
182 1
182 1
182 2
182 2
182 1
182 1
182 1
182 2
182 2
182 1
182 1
182 1
182 2
182 1
182 1
182 2
182 1
182 1
182 2
182 1
182 1
182 2
182 2
182 2
182 1
182 2
182 1
182 1
182 2
182 2
182 2
182 2
182 2
182 2
182 1
182 1
182 1
182 2
182 1
182 2
182 1
182 2
182 1
182 2
182 2
182 1
182 2
182 1
182 1
182 1
182 2
182 2
182 1
182 2
182 2
182 1
182 1
182 1
182 1
182 1
182 1
182 2
182 2
182 1
182 1
183 1
183 1
183 2
183 1
183 1
183 2
183 2
183 2
183 1
183 1
183 2
183 1
183 1
183 2
183 2
183 1
183 2
183 2
183 2
183 2
183 2
183 2
183 2
183 2
183 1
183 2
183 1
183 2
183 2
183 2
183 2
183 1
183 2
183 2
183 1
183 2
183 1
183 2
183 1
183 2
183 2
183 2
183 2
183 1
183 2
183 1
183 1
183 2
183 2
183 1
183 2
183 2
183 1
183 1
183 1
183 2
183 1
183 2
183 1
183 1
183 2
183 2
183 1
183 2
183 2
183 2
183 1
183 2
183 2
183 1
183 1
183 1
183 2
183 1
183 2
183 1
183 2
183 2
183 1
183 1
183 2
183 2
183 2
183 2
183 2
183 1
183 2
183 1
183 2
183 1
183 1
183 1
183 2
183 2
183 2
183 2
183 2
183 1
183 1
183 1
183 2
183 1
183 1
183 2
183 1
183 2
183 2
183 2
183 2
183 1
183 2
183 2
183 2
183 2
183 1
183 1
183 2
183 1
183 1
183 1
183 1
183 2
183 1
183 1
183 1
183 2
183 1
183 1
183 1
183 1
183 1
183 1
183 1
183 1
183 2
183 2
183 1
183 1
183 2
183 2
183 2
183 2
183 1
183 1
183 2
183 2
183 2
183 2
183 2
183 2
183 1
183 2
183 2
183 2
183 2
183 1
183 2
183 1
183 2
183 1
183 1
183 1
183 2
183 2
183 2
183 1
183 1
183 2
183 1
183 1
183 1
183 1
183 1
183 2
183 2
183 1
183 1
183 2
183 2
183 2
183 2
183 1
183 1
183 2
183 2
183 2
183 2
183 2
183 1
183 2
183 2
183 1
183 2
183 2
183 2
183 2
183 2
183 2
183 1
183 2
Thanks

Starting level predicting growth rate

Hello,

I am running a linear growth model where I include random effects for both initial reading achievement level and a linear slope in achievement for each student.

Code:
mixed read wave || id: wave, cov(un)

I would now like to obtain empirical Bayes estimates of the intercept and slope and treat them as new student-level variables, to see whether students' starting reading achievement levels predict their growth rates. Is the following the correct code?

Code:
predict ebslope1 ebint1, reffects
gen bayeint1 = _b[_cons] + ebint1
gen bayeslope1 = _b[wave] + ebslope1
mixed bayeslope1 c.wave##c.bayeint1 || id: wave, cov(un)


Thanks in advance for your help.

ASCII data to Stata format without a do-file

Hey everyone!

Would anyone be able to help me convert the CPS Contingent Work Supplement datasets for 1995, 1997, and 1999 from ASCII to .dta format (or does anyone already have the Stata files)?

A link to the 1995 data can be found here: https://www.icpsr.umich.edu/icpsrweb/ICPSR/studies/6736.

The ICPSR says you will need the codebook to reconstruct the syntax, since they do not have the do-file needed.

Thank you!
Jacob
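As a general pointer (not specific to this ICPSR study), fixed-width ASCII extracts can be read with -infix- or -infile- using the column positions listed in the codebook; a minimal sketch with illustrative names and columns:

Code:
* variable names and column positions are illustrative; take the real
* ones from the ICPSR codebook for each supplement year
infix str state 1-2 age 3-5 hours 6-8 using "cws1995.dat", clear
save "cws1995.dta", replace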