Channel: Statalist

factor variables and time-series operators not allowed

Problem solved; please delete this post.

DID - right choice of regression?

Dear Stata community,

I want to estimate the effect of reforms in twelve EU countries on the net fund flows of investment funds. Each fund is matched to another fund with respect to its country and performance and is labeled as cheap, equally expensive, or expensive according to its costs relative to its matched counterpart.
As already widely discussed in other topics, I created dummies for time and treatment with this code:

Code:
generate d_time = 0 if date < date("20071101","YMD") & date > date("20051101","YMD")
replace d_time = 1 if date > date("20071101","YMD") & date < date("20091101","YMD")

generate d_treat_all = 0 if date < date("20091101","YMD")
replace d_treat_all = 1 if country <10 & date < date("20091101","YMD")
replace d_treat_all = 1 if country == 11 & date < date("20091101","YMD")
My aim is to show that more expensive funds had lower net fund flows than cheap funds after being treated. Unfortunately, I am unsure which model to use and how to get the desired results.

I used this code:
Code:
xtset id
xtset id date
xtreg netfundflow i.costs d_time##d_treat, cluster(id)
Question 1) Even though costs has three different values, my regression reports only two coefficients. Do you have any idea why this could be the case?

HTML Code:
Linear regression                               Number of obs   =      35916
                                                F(5, 1446)      =       7.42
                                                Prob > F        =     0.0000
                                                R-squared       =     0.0006
                                                Root MSE        =     548.99

                              (Std. Err. adjusted for 1447 clusters in id)
------------------------------------------------------------------------------
               |               Robust
   netfundflow |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
---------------+--------------------------------------------------------------
         costs |
         High  |  -16.27294   5.539093    -2.94   0.003    -27.13846   -5.407421
        Equal  |  -6.636716    6.54486    -1.01   0.311    -19.47515    6.201721
               |
      1.d_time |   33.29411    12.3204     2.70   0.007     9.126352    57.46187
     1.d_treat |   6.112733   5.944947     1.03   0.304    -5.548909    17.77438
               |
d_time#d_treat |
          1 1  |  -27.25072   12.22966    -2.23   0.026     -51.2405   -3.260937
               |
         _cons |   5.901751   5.675747     1.04   0.299    -5.231828    17.03533
------------------------------------------------------------------------------
Question 2) I thought about running the regression separately for each level of costs, so that the results can be compared and the problem above is circumvented. I used:
Code:
bysort costs: xtreg netfundflow i.costs d_time##d_treat, cluster(id)
Is there any other possibility to get this in one regression?
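One possible way to keep everything in a single regression (a minimal sketch, assuming the variables used above) is a full three-way interaction, so that the treatment effect is allowed to differ across cost levels:

Code:
xtreg netfundflow i.costs##i.d_time##i.d_treat, cluster(id)
* the costs#d_time#d_treat terms give a separate DiD estimate for each cost level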

However, I am not sure whether my regression is the right choice for my problem. The Breusch-Pagan test for random effects, as well as the Hausman test, points towards an FE model.
But using
Code:
 xtreg netfundflow i.costs d_time##d_treat, cluster(id) fe
seems inappropriate for my data, since I don't want to rely only on within-fund variation of netfundflow, as I assume that the costs variable has a major influence on netfundflow.

I therefore thought of the reghdfe command to absorb the fixed effects at the individual level while still controlling for costs. I tried the following code:
Code:
reghdfe netfundflow i.costs i.d_treat##i.d_time, absorb(id) vce(cluster id costs)
I thought that this should absorb the fixed effects at the individual level while adding fixed effects for the costs variable. Am I right?

Since I get the following results, I think I might have been wrong with this code.
HTML Code:
(dropped 66 singleton observations)
(converged in 1 iterations)
note: 1.d_treat omitted because of collinearity
Warning: VCV matrix was non-positive semi-definite; adjustment from Cameron, Gelbach & Miller applied.
WARNING: Missing F statistic (dropped variables due to collinearity or too few clusters).

HDFE Linear regression                            Number of obs   =     35,850
Absorbing 1 HDFE group                            F(   3,      2) =          .
Statistics robust to heteroskedasticity           Prob > F        =          .
                                                  R-squared       =     0.0137
                                                  Adj R-squared   =    -0.0260
Number of clusters (id)    =      1,381           Within R-sq.    =     0.0009
Number of clusters (costs) =          3           Root MSE        =   556.7085

                           (Std. Err. adjusted for 3 clusters in id costs)
------------------------------------------------------------------------------
               |               Robust
   netfundflow |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
---------------+--------------------------------------------------------------
         costs |
         High  |  -61.58262   49.06534    -1.26   0.336    -272.6937    149.5285
        Equal  |   -80.7145   58.56219    -1.38   0.302    -332.6873    171.2583
               |
     1.d_treat |          0  (empty)
      1.d_time |   45.10881   25.86124     1.74   0.223    -66.16312    156.3807
               |
d_treat#d_time |
          1 1  |  -38.66821   21.77473    -1.78   0.218    -132.3573    55.02087
------------------------------------------------------------------------------

Absorbed degrees of freedom:
-----------------------------------------------------------+
 Absorbed FE |  Num. Coefs.  =   Categories  -   Redundant  |
-------------+----------------------------------------------|
          id |            0           1381          1381  * |
-----------------------------------------------------------+
* = fixed effect nested within cluster; treated as redundant for DoF computation
Any comment would be highly appreciated

Best regards
Nils

Code for performing Chi Squared test or Fisher's Exact test for a 2X2 table appropriately

Hi everyone,

I am working on a project where I need to develop a program that decides whether a chi-squared test or Fisher's exact test is suitable for two tabulated variables (a 2x2 table).

For example, there are two categorical variables, A and B. Both are binary.

My idea is: first run -tab a b, mis col expected-; then, if the expected value for any cell is <5, run -tab a b, mis col exact-, otherwise run -tab a b, mis col chi2-. Although I have this idea in mind, I have a hard time implementing it in Stata code and could not find any relevant information on Google.
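For what it's worth, here is a minimal sketch of one way to implement that logic, assuming two binary variables a and b and ignoring missing categories when computing the expected counts:

Code:
* compute expected cell counts from the observed 2x2 table, then choose the test
quietly tabulate a b, matcell(obs)            // observed counts (non-missing only)
matrix rowtot = obs * J(colsof(obs), 1, 1)    // row totals
matrix coltot = J(1, rowsof(obs), 1) * obs    // column totals
scalar N = rowtot[1,1] + rowtot[2,1]          // grand total
local useexact = 0
forvalues i = 1/2 {
    forvalues j = 1/2 {
        if rowtot[`i',1]*coltot[1,`j']/N < 5 local useexact = 1
    }
}
if `useexact' {
    tabulate a b, col exact                   // Fisher's exact test
}
else {
    tabulate a b, col chi2                    // Pearson chi-squared test
}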

Please accept my sincere appreciation for any thoughts/insights you would like to share with me.

Kind regards,
Mengmeng

bar with different colors?

Dear All, I execute the following command
Code:
graph bar (mean) stv2, over(IT_d) ytitle(Output-Inflation Tradeoff)
and obtain the attached figure (bar chart not shown here).
My question is: can I have different colors for these two bars?
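One possibility (a minimal sketch, assuming IT_d has exactly two levels) is to treat the over() groups as separate y variables with asyvars, so that each bar takes its own bar() color option:

Code:
graph bar (mean) stv2, over(IT_d) asyvars ///
    bar(1, color(navy)) bar(2, color(maroon)) ///
    ytitle(Output-Inflation Tradeoff)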

reshape command does not work

Hi everyone,
I used the reshape command to get long-format data from wide-format data. Unfortunately, the command did not work. I would appreciate your advice.

* my current data
clear
input id byte(male_2010_ female_2010_ tot_2010)
1001 1 4 3
1002 2 5 4
1003 3 6 5
1004 4 6 2
1005 5 5 1
end

*what I am looking for
clear
input _id byte (pop gender) year
1001 1 1 2010
1002 2 1 2010
1003 3 1 2010
1004 4 1 2010
1005 5 1 2010
1001 4 2 2010
1002 5 2 2010
1003 6 2 2010
1004 6 2 2010
1005 5 2 2010
1001 3 2 2010
1002 4 3 2010
1003 5 3 2010
1004 2 3 2010
1005 1 3 2010
end
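A minimal sketch of one way to get from the wide layout to the long layout, assuming the gender codes 1 = male, 2 = female, 3 = total (the codes themselves are an illustrative assumption):

Code:
rename male_2010_   pop2010_1
rename female_2010_ pop2010_2
rename tot_2010     pop2010_3
reshape long pop2010_, i(id) j(gender)
rename pop2010_ pop
generate year = 2010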

What does it mean to have "tenure" on statalist.org

I recently noticed that I had become a "tenured" member of Statalist.org. I'm not aware of new rights or responsibilities that came with that.

How does someone achieve tenure on this list, and what does it mean?

error using svy with khb

Hello,

I am trying to use the khb command with complex survey data and I am getting an error message that I cannot understand. From my research, it seems that khb is able to deal with svy data, so I'm a little stumped as to the problem.

I am using Stata 13.

My data is svyset with the following code:

Code:
 svyset IDNUMR [pw=WEIGHT_CATI], strata(stratacross)
The model I'm trying to run is the following:

Code:
svy, subpop(DX_aut_11 if nomiss==1): khb logit causes_genetic ///
hispanic black otherrace parented_aboveHS FPL100to199 FPL200to399 FPL400andup ///
|| all_limit sdq_CATI_high DX_dev_11 DX_int_11 pervasivedevdis autisticdis ///
multipledx autyp_unknown, concomitant(male AGE) disentangle ape summary or
And the error message I get when I do that is "khb is not supported by svy with vce(linearized); see help svy estimation for a list
of Stata estimation commands that are supported by svy."

I've tried changing the vce type and get the same message, just with that vce type in place of "linearized" in the error above (for example, when I change to vce(cluster) I get exactly the same error message, except it says "khb is not supported by svy with vce(cluster); see help svy...").

When I run the exact same model except without the svy prefix, but still subsetting the data and including the pweight, it runs no problem. That code looks like this:

Code:
khb logit causes_genetic ///
hispanic black otherrace parented_aboveHS FPL100to199 FPL200to399 FPL400andup ///
|| all_limit sdq_CATI_high DX_dev_11 DX_int_11 pervasivedevdis autisticdis ///
multipledx autyp_unknown if DX_aut_11==1  & nomiss==1 [pweight=WEIGHT_CATI], concomitant(male AGE) disentangle ape summary or
I know that subsetting the data this way instead of using subpop, and using just the pweight instead of pweight + strata + cluster, affects only my standard errors and not the coefficients, but I'm having trouble understanding why the code for the first model above (svy + subpop + khb) won't work. Any help is much appreciated; I've been through most of the khb posts on here and didn't see anything similar, so if I missed something please point me in that direction. Thank you!

Lydia

Expression for lagged placeholder variable $ML_y1

I am using Stata's ML routine to estimate the parameters of a conditional density function that includes the values of a dependent variable and also its prior values. I would like to define a one-period-lagged placeholder variable based on the original dependent variable. Obviously, $ML_y1[_n-1] is unrecognised syntax, so I am wondering if there is an elegant way to do this. Defining another placeholder variable $ML_y2 does not work for my problem, since it effectively doubles the number of estimated parameters from 5 to an unmanageable 10. Any thoughts?
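For illustration, here is a minimal sketch of one workaround: build the lag inside the evaluator from the variable name held in $ML_y1, assuming the data are tsset and a method-lf evaluator is used. The AR(1)-style normal density and the names y, x1, x2, panelvar, and timevar are illustrative assumptions, not the actual model:

Code:
program define myar1_lf
    version 13
    args lnf xb rho lnsigma
    tempvar ylag
    quietly generate double `ylag' = L.$ML_y1    // one-period lag of the dependent variable
    quietly replace `lnf' = ln(normalden($ML_y1, `xb' + `rho'*`ylag', exp(`lnsigma')))
end

tsset panelvar timevar
ml model lf myar1_lf (xb: y = x1 x2) (rho:) (lnsigma:) if !missing(L.y)
ml maximize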

Replace missing annual doctor data w/ data from different years/patients for same doctor ID in patient dataset

Hello,

I have a longitudinal dataset that has data about patients (unique patient identifier = PID, patient data collection for time-varying variables begins in 1990) and data about doctors (unique doctor identifier = DID, data collection for time-fixed variables about doctors begins in 2001 and is therefore missing prior to 2000). The data includes the year of data collection and a series of patient and doctor variables. Doctors entered the data in different years, some as late as 2003, and some only participated for one year. Doctors are associated with multiple patients in each year and individual patients see multiple doctors over time, so there is no unique DID + year combination nor are there unique PID + DID combinations. A unique identifier based on the DID would need to specify DID first, then PID as well as year. Here is an example of what the data looks like, sorted by DID and then by Doc_Var1:
PID Year DID Doc_Var1
1 2001 100 2
5 2003 100 2
1 2002 110 .
5 2004 110 .
2 2002 120 1
3 2003 120 1
4 2002 120 1
4 2003 120 1
3 2002 120 2
4 2004 120 .
5 2000 120 .
5 2002 130 .
1 2003 150 1
2 2003 150 .
3 2000 150 .
2 2001 200 2
1 2000 200 .
2 2000 200 .
4 2000 200 .
3 2001 310 1
3 2004 310 .
2 2004 400 2
4 2001 400 .
1 2004 500 2
5 2001 500 2
I need to replace missing data in the doctor variables (e.g. Doc_Var1 in column 4) that are missing for observations prior to 2001, using non-missing data for the same doctor that was collected during any doctor-patient interaction during or after 2001. In addition to doctor data that is missing because it was not part of the survey prior to 2001, data is also missing for unknown reasons during or after 2001, as is the case in the 3rd and 4th rows of this sample data (patients #1 and #5, in 2002 and 2004, for DID 110). Because of this second kind of missing data, I can't just sort the data by DID and by the variable with missing data (as the data above is currently sorted, using DID and Doc_Var1) and then replace missing data with data from the cell directly above it, as doing so would reference data about a different doctor: in the case of DID 110 in lines 3 and 4 of this sample data, such an approach would replace the missing data for DID 110 with data about DID 100 from line 1. In these cases, the data cannot be replaced by this method and must be left missing.

The problem that I'm facing in my rudimentary code is that there is no way to identify a missing observation by a doctor in combination with a year -- it's only possible to identify (and therefore replace) data that can be identified by a patient ID and a year. But replacing data using the PID and the year would simply replace missing data that is supposed to be about a specific doctor with data collected about, potentially, a completely different doctor who saw that same patient in a different year. So this is my problem:

How do I tell Stata to:
1. Find observations with missing data for Doc_Var1 (doctor variable one, fixed over time).
2. For each Doctor ID (DID) with missing values for Doc_Var1, replace missing data for Doc_Var1 with non-missing data for Doc_Var1 for that same DID in years that data exists.

Ideally, I only want to replace missing data. Some of the data has "errors" in it that result in supposedly "time-fixed" data changing over time. I want to keep these "errors" in the data for now. I therefore expect that, for instance, a loop used to replace missing data for Doc_Var1 might return different values for the Doc_Var1 for the same doctor over time if a particular doctor had differing values in 2003 and 2004 that were used to replace the missing data.

Conceptually, I'm trying to do something like this:

foreach DID year, {
replace Doc_Var1_YearWithMissingData = 1 if (Doc_Var1_YearWithMissingData == . & Doc_Var1_AnyOtherYear == 1)

replace Doc_Var1_YearWithMissingData = 2 if (Doc_Var1_YearWithMissingData == . & Doc_Var1_AnyOtherYear == 2)
}


I've also tried a much more tedious approach using multiple conditions included as "if" qualifiers with specified years in wide format (one line for each respondent with "_year" following each variable name), but I am quite certain this approach does not work as it references the PID by default rather than the DID:

gen Doc_Var1_Impute_2000 = Doc_Var1_2000

#delimit ;

replace Doc_Var1_Impute_2000 = 1 if Doc_Var1_Impute_2000 == . &
(
Doc_Var1_Impute_2001 == 1 |
Doc_Var1_Impute_2002 == 1 |
Doc_Var1_Impute_2003 == 1 |
Doc_Var1_Impute_2004 == 1 ) ;


Both of these approaches fail. The second approach fails because it just replaces missing values with other values for the same patient in different years--NOT necessarily replacing missing data about a particular doctor in 2000. This problem is caused by the fact that the data can only be organized in Stata by PATIENT rather than by DOCTOR because there is no way to identify specific observations in this data by two variables--e.g. i(DID) j(year)--that uses Doctor ID (DID) because doctors saw multiple patients each year. It is possible, by contrast, to identify specific observations with a unique patient identifier (PID) and the year -- i(PID) j(year). The first approach produces results identical to the second approach, so it seems likely that this code also does the same thing.
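For reference, a minimal sketch of one way to fill within doctor rather than within patient, assuming the long layout and variable names shown above (where a doctor has several different non-missing values, the smallest one is used):

Code:
* within each DID, non-missing values sort first, so Doc_Var1[1] is non-missing whenever one exists
bysort DID (Doc_Var1): replace Doc_Var1 = Doc_Var1[1] if missing(Doc_Var1)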

Any guidance would be greatly appreciated.

As an aside: I am just beginning to learn how to use loops and foreach statements. Code that has more steps (or separates discrete actions on different lines) is easier for me to translate theoretically and is far more helpful for the moment, even if it feels stilted, repetitive, or just plain ugly to an advanced user. That said, I eagerly welcome and appreciate any and all guidance.

Thanks in advance,
Andrea

sequential logit

I am running a sequential logistic regression to study the effect of drug use on crime. There are 4 waves of interviews. I am trying to construct the tree, at which point I am totally confused.
First I define my two transitions as 0000 and 0111, and then run seqlogit with tree(1:2), which did not give the desired results (coef < 0). I am just wondering whether I am making mistakes at the stage of constructing the tree.


Code:
. generate drugtrans = 1 if ayanydrug==0 & byanydrug==0 & cyanydrug==0 & dyanydrug==0
(948 missing values generated)

. replace drugtrans = 2 if ayanydrug==0 & byanydrug==1 & cyanydrug==1 & dyanydrug==1
(60 real changes made)

. seqlogit drugtrans i.respsex i.arst, tree(1:2)

Transition tree:

    Transition 1: 1 : 2

Computing starting values for:
Transition 1

Iteration 0:   log likelihood = -249.33364
Iteration 1:   log likelihood = -249.33364

                                                Number of obs     =      1,649
                                                LR chi2(3)        =      16.75
Log likelihood = -249.33364                     Prob > chi2       =     0.0008

------------------------------------------------------------------------------
   drugtrans |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
     respsex |
     female  |   .1750619   .2702427     0.65   0.517    -.3546041    .7047279
             |
        arst |
          2  |  -.5273682   1.163988    -0.45   0.650    -2.808742    1.754006
          3  |  -2.129618   .4539332    -4.69   0.000    -3.019311   -1.239925
             |
       _cons |  -1.367098   .4349749    -3.14   0.002    -2.219633   -.5145628
------------------------------------------------------------------------------

Combining and overlapping graphs using values within variables

Hi everyone,

I have four variables: Country, AgeCategory, EducationLevel, and Wages. I am trying to create a line graph of Wages over AgeCategory by EducationLevel. I used the following command:
twoway line Wages AgeCategory if Country=="Nepal", by(Educationlevel)

With this command I get two graphs, one for each of the two values of education level. However, I want both education-level lines overlaid in the same graph, so that I can repeat the command for other countries and combine the graphs. Wages and AgeCategory are numeric variables, while the others are strings. I had to destring AgeCategory because the line graph was showing a type-mismatch error. Any help would be appreciated. Thanks.
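A minimal sketch of one way to overlay the two lines in a single graph, assuming EducationLevel is encoded into a numeric variable whose (illustrative) codes are 1 and 2:

Code:
encode EducationLevel, generate(educ)
twoway (line Wages AgeCategory if Country=="Nepal" & educ==1, sort) ///
       (line Wages AgeCategory if Country=="Nepal" & educ==2, sort), ///
       legend(order(1 "Education level 1" 2 "Education level 2"))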

tabulation with too many values

I have 60 categories for which I want to tabulate a discrete variable taking 8 values. tab and tab2 respond with "too many values". It is important for me to identify the categories in which a given value of my variable never occurs. With the tabulation commands apparently failing because of the number of categories, any ideas about how to proceed?
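A minimal sketch of one approach, using hypothetical variable names category and value: build the full cross-tabulation as a dataset with -contract- so that the empty cells become visible:

Code:
preserve
contract category value, freq(n) zero      // zero keeps combinations that never occur
list category value n if n == 0, sepby(category)
restore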

Weights for ordered logistic regression model

Hello folks, I am trying to run a multilevel ordered logistic regression because the outcome is an ordinal variable. However, I also have sampling weights that I need to plug into the model. According to the manual for meologit (https://www.stata.com/manuals/memeologit.pdf, page 5), I need two types of sampling weights: one for level 1 and one for level 2. However, the data I am working with only have sampling weights for level 1 (the lower level). How should I deal with this? Below is the syntax I am trying to use; I put a question mark in the sampling-weight option for level 2, which is not provided in the data. Since students were tested repeatedly, level 1 is the individual student assessment score and level 2 is the student.

Code:
meologit selfconcept c.age##c.age i.group [pw = wt_sa1] || StudentID:, pweight(??) mle cluster(Cluster)

univariate summary: problem of zero values in percentiles

Hello,
I'm working with data of 240,000 observations (U.S. banks). I used the -univar- command to get the minimum, 25th, 50th, and 75th percentiles, and the maximum for each variable, but for some variables I get zeros in the percentiles. Here is the code:

HTML Code:
univar QA_PTA PTR LNTA


I use Stata/SE 13.0; Thanks

new issues with esttab

Hi,

I wrote a simple program to export the results of my crosstabs into an Excel file using esttab, matselrc, and putexcel. I used this program numerous times without problems, but now I get weird results. I am using Stata 15.

Code:

global brfss obese

foreach var in $brfss {

capture noisily svy, subpop(if year==2010): tab mracealpha `var', row ci percent obs format(%7.1f)
esttab ., cells("obs b(fmt(1)) lb(fmt(1)) ub(fmt(1)) se(fmt(1))") unstack noobs

matlist r(coefs)
matrix results=r(coefs)
matlist results
*return list
matselrc results r1 , r(10/17) c(1/5)
matlist r1

}


new issues with esttab

Hi,

I wrote a program to export the results of my survey-weighted cross-tabulations into an Excel file using esttab, matselrc, and putexcel (Stata 15.1). I used this program numerous times without problems, but now I get no results after the esttab command. It appears the esttab command is not storing the estimates in memory. Has there been a change in the esttab command? How can I work around this issue?


Code:

global brfss obese

foreach var in $brfss {

capture noisily svy, subpop(if year==2010): tab mracealpha `var', row ci percent obs format(%7.1f)
esttab ., cells("obs b(fmt(1)) lb(fmt(1)) ub(fmt(1)) se(fmt(1))") unstack noobs

matlist r(coefs)
matrix results=r(coefs)
matlist results
*return list
matselrc results r1 , r(10/17) c(1/5)
matlist r1

putexcel set draftresults.xlsx, sheet(ModelSum`var'1) modify
putexcel set draftresults.xlsx, sheet(ModelSumsmoker1) modify
putexcel B2=matrix(r1)

}

I get this Stata output:

HTML Code:
(running tabulate on estimation sample)

Number of strata   =        18                  Number of obs     =     18,559
Number of PSUs     =    18,559                  Population size   =  8,238,102
                                                Subpop. no. obs   =     18,559
                                                Subpop. size      =  8,238,102
                                                Design df         =     18,541

-------------------------------------------------
mracealph |            obese, BMI>=30
a         |          No          Yes        Total
----------+--------------------------------------
     AIAN |        53.5         46.5        100.0
          | [42.3,64.3]  [35.7,57.7]
          |        85.0         61.0        146.0
          |
    Asian |        92.3          7.7        100.0
          | [90.3,93.9]    [6.1,9.7]
          |      1195.0         94.0       1289.0
          |
    Black |        66.1         33.9        100.0
          | [61.1,70.8]  [29.2,38.9]
          |       429.0        212.0        641.0
          |
 Multiple |        76.3         23.7        100.0
          | [70.9,80.9]  [19.1,29.1]
          |       294.0        111.0        405.0
          |
     NHPI |        70.9         29.1        100.0
          | [58.5,80.8]  [19.2,41.5]
          |        64.0         33.0         97.0
          |
    White |        76.7         23.3        100.0
          | [75.7,77.7]  [22.3,24.3]
          |     12401.0       3580.0      15981.0
          |
    Total |        78.2         21.8        100.0
          | [77.3,79.1]  [20.9,22.7]
          |     14468.0       4091.0      18559.0
-------------------------------------------------
  Key:  row percentage
        [95% confidence interval for row percentage]
        number of observations

  Pearson:
    Uncorrected   chi2(5)         =  528.8654
    Design-based  F(4.92, 91168.18)=   37.7259    P = 0.0000

Note: 23 strata omitted because they contain no subpopulation members.

------------



------------
------------

.
end of do-file


. matlist r(coefs)

             | active
             |       obs          b         lb         ub         se
-------------+-------------------------------------------------------
         p11 |        .y     .53501         .y         .y   .0571526
         p12 |        .y     .46499         .y         .y   .0571526
         p21 |        .y   .9228453         .y         .y    .009021
         p22 |        .y   .0771547         .y         .y    .009021
         p31 |        .y   .6612203         .y         .y     .02494
         p32 |        .y   .3387797         .y         .y     .02494
         p41 |        .y   .7629963         .y         .y   .0255429
         p42 |        .y   .2370037         .y         .y   .0255429
         p51 |        .y   .7087532         .y         .y   .0576917
         p52 |        .y   .2912468         .y         .y   .0576917
         p61 |        .y   .7671453         .y         .y   .0051494
         p62 |        .y   .2328547         .y         .y   .0051494 

Graph - correlation and frequency information

Dear all

I would like to produce a graph in Stata showing the frequencies of the variables on the circles and the correlations among them on the lines. The figure should be similar to this graph: https://goo.gl/Mg8S9q.

Is it possible?

Thank you.

My best, Bruno.

values change to date format when a Stata file is exported to a csv file

Hi everyone,
I have a big dataset that includes an age-group variable. The variable contains "0-4", "5-9", etc. After exporting the dataset to a csv file, "5-9" appears in a date format such as 9-May, 14-Oct, etc. Is there any way to avoid this conversion?
Best,
Nader

confidence interval for chi2 statistic following lrtest

Hi Statalist,

I have conducted multiple regression analyses in an SEM framework (using Stata 13.0). To compare the model fit of the covariate-only model versus the full model (with the predictors of interest), I conducted a likelihood-ratio test, which produced a chi-square statistic and an associated p-value. Is there a way to compute a 95% CI for the chi-square statistic from the LR test? Below is the exact syntax. Thanks!
sem (RC_CNQTOTALT2 <- Female XComorbT1 aget1_centered EducationT1 ///
MOS_StrucT1_w MOS_EPAT1 MOS_tangT1 FFMP_NT1), nocapslatent ///
method(mlmv)

estimate store m1

sem (RC_CNQTOTALT2 <- Female XComorbT1 aget1_centered EducationT1 ///
MOS_StrucT1_w@0 MOS_EPAT1@0 MOS_tangT1@0 FFMP_NT1@0), ///
nocapslatent method(mlmv)

estimate store m2

lrtest m1 m2

cmp command with one endogenous variable but more than one instrument

Dear all,

My dependent variable is ordered, and I decided to use IV-oprobit via the cmp Stata module by David Roodman. One question arises here:

How do I write the cmp command in Stata if there is only one endogenous variable but it is instrumented with more than one instrument, e.g. 5 instrumental variables? Does it look like this:

cmp (Y = X1 X2 X3) (X2 = Z1 Z2 Z3 Z4 Z5 X2 X3), ind($cmp_oprobit $cmp_cont)

Z1-Z5 are instruments for X2, whereas X2 and X3 are just controls.

Can anyone help me?

Rian