# Reporting Multiple Regression Analysis in SPSS

Looking for a **Multiple Regression Analysis Test in SPSS**? Doing it yourself is always cheaper, but it can also be much **more time-consuming**. If you’re not comfortable with SPSS, you can **pay someone to do your SPSS task** for you.

## How to Run Multiple Regression Analysis Test in SPSS: Explanation Step by Step

### From the SPSS menu, choose Analyze – Regression – Linear

### Since we want to examine whether the level of depression, level of stress, and age predict students’ level of happiness, our dependent variable is Happiness, and our independent variables are Depression, Stress, and Age. Therefore, from the left box, transfer the variable Happiness into the Dependent box and the variables Depression, Stress, and Age into the Independent(s) box.

### Click on the Statistics button to open a new window. In the Regression Coefficients box, check Estimates and Confidence intervals. In the Residuals box, check Durbin-Watson. Also check Model fit, Descriptives, and Collinearity diagnostics. Click Continue.

### Click on the Plots button to request a scatterplot of the residuals. From the left box, transfer *ZRESID into the Y box and *ZPRED into the X box. In the Standardized Residual Plots box, check Histogram and Normal probability plot. Click Continue, then OK.

### The results will appear in the output window.

## How to report a Multiple Regression Analysis results: Explanation Step by Step

### How to Report Descriptive Statistics Table in SPSS Output?

The first table in the output window shows descriptive statistics (mean, standard deviation, and number of observations) for our variables: Happiness, Depression, Stress, and Age.

### How to Report Correlation Table in SPSS Output?

The second table shows the correlations between the variables.

The Pearson correlation coefficient shows a statistically significant, negative relationship between the level of happiness and the level of depression, r(99) = -.313, p = .001. As the level of depression increases, the level of happiness decreases. On the other hand, the table shows a statistically non-significant, positive relationship between the level of happiness and the level of stress, r(99) = .076, p = .227. Finally, the test shows a statistically non-significant, positive relationship between the level of happiness and age, r(99) = .077, p = .225.
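To make the r values in the correlation table concrete, here is a small Python sketch of how a Pearson correlation coefficient is computed. The paired scores below are made up for illustration; they are not the article's data.

```python
from math import sqrt

# Hypothetical paired scores (NOT the article's data): higher
# depression scores paired with lower happiness scores.
depression = [2, 4, 5, 7, 8, 10, 12, 15]
happiness = [9, 8, 8, 6, 7, 5, 4, 3]

n = len(depression)
mean_d = sum(depression) / n
mean_h = sum(happiness) / n

# Pearson r = covariance / (sd_x * sd_y), written out with sums.
cov = sum((d - mean_d) * (h - mean_h) for d, h in zip(depression, happiness))
ss_d = sum((d - mean_d) ** 2 for d in depression)
ss_h = sum((h - mean_h) ** 2 for h in happiness)
r = cov / sqrt(ss_d * ss_h)

print(round(r, 3))  # negative: higher depression goes with lower happiness
```

A negative r, like the article's r = -.313 for depression, means the two variables move in opposite directions.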

### How to Report the Variables Entered Table in SPSS Output?

The following table from the output shows which variables we used (dependent and independent) and method (Enter).

### How to Report the Model Summary Table in SPSS Output?

The following table shows the Model Summary: the multiple correlation coefficient R, R-square, adjusted R-square, the standard error of the estimate, and the Durbin-Watson statistic.

R-square shows what percentage of the variance in the dependent variable is explained by the independent variables. Hence, R^{2} = .124 indicates that just 12.4% of the variance in the level of happiness is explained by the level of depression, level of stress, and age.
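As a quick illustration with made-up numbers (none of these are the article's data), R-square can be computed from the residual and total sums of squares:

```python
# Toy example (hypothetical values): R^2 = 1 - SS_residual / SS_total.
observed = [5.0, 7.0, 6.0, 9.0, 8.0]   # actual DV scores
predicted = [5.5, 6.5, 6.5, 8.0, 8.5]  # model's fitted values

mean_y = sum(observed) / len(observed)
ss_total = sum((y - mean_y) ** 2 for y in observed)
ss_resid = sum((y - yhat) ** 2 for y, yhat in zip(observed, predicted))

r_squared = 1 - ss_resid / ss_total
print(round(r_squared, 3))
```

Multiplying r_squared by 100 gives the "percentage of variance explained" reading used in the article (12.4% for R^{2} = .124).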

The Durbin-Watson statistic shows whether there is autocorrelation in the model. The rule of thumb is that a DW statistic of approximately 2.00 indicates no autocorrelation. According to Field (2009), values between 1 and 3 are acceptable. Since the DW statistic in our example is 1.193, we conclude that there is no autocorrelation.
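For illustration, the Durbin-Watson statistic is the ratio of the summed squared differences between successive residuals to the summed squared residuals. The residual series below is made up, not taken from the article's output:

```python
# Hypothetical residual series from a fitted regression model.
residuals = [0.5, -0.3, 0.2, -0.4, 0.1, 0.3, -0.2]

# DW = sum((e_t - e_{t-1})^2) / sum(e_t^2); values near 2 suggest
# no first-order autocorrelation, values near 0 or 4 suggest strong
# positive or negative autocorrelation.
num = sum((residuals[t] - residuals[t - 1]) ** 2
          for t in range(1, len(residuals)))
den = sum(e ** 2 for e in residuals)

dw = num / den
print(round(dw, 3))
```

The statistic always falls between 0 and 4, which is why rules of thumb like Field's 1-to-3 range can be applied directly to the SPSS output.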

### How to report the ANOVA table of Regression Analysis in SPSS Output?

The next table shows the ANOVA results. The p-value (the Sig. column) must be below .05 for the model to be statistically significant. In our example, the results of the ANOVA were significant, F(3, 95) = 4.50, p = .005.
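As a rough check, the ANOVA F statistic can be reconstructed from the reported R-square. Using R^{2} = .124, k = 3 predictors, and n = 99 cases from the article (the small gap to the reported 4.50 comes from R-square being rounded to three decimals):

```python
# F = (R^2 / k) / ((1 - R^2) / (n - k - 1))
r2 = 0.124  # R-square from the Model Summary table
k = 3       # number of predictors: depression, stress, age
n = 99      # sample size

f = (r2 / k) / ((1 - r2) / (n - k - 1))
print(round(f, 2))
```

The denominator degrees of freedom, n - k - 1 = 95, matches the F(3, 95) reported in the output.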

### How to Report the Coefficients Table of Regression Analysis in SPSS Output?

The p-values appear in the Sig. column. If p > .05, the independent variable does not significantly predict the dependent variable; if p < .05, the IV significantly predicts the DV.

The table shows that for the level of depression, p = .001 < .05, so depression significantly predicts happiness. For the level of stress, p = .314 > .05, so stress does not significantly predict happiness. And finally, for age, p = .195 > .05, so age does not significantly predict the DV.
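The decision rule above can be sketched in a few lines of Python, using the p-values reported in the article and assuming the conventional alpha of .05:

```python
# Significance threshold assumed to be the conventional .05.
ALPHA = 0.05

# p-values from the article's coefficients table.
p_values = {"Depression": 0.001, "Stress": 0.314, "Age": 0.195}

# A predictor is significant when its p-value falls below alpha.
significant = {name: p < ALPHA for name, p in p_values.items()}

for name, sig in significant.items():
    verdict = "significantly predicts" if sig else "does not significantly predict"
    print(f"{name} {verdict} happiness")
```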

**Unstandardized coefficients** are ‘raw’ coefficients produced by regression analysis when the analysis is performed on original, unstandardized variables. Unlike standardized coefficients, which are normalized unit-less coefficients, an unstandardized coefficient has units and a ‘real-life’ scale.

It represents the amount of change in the dependent variable Y for a one-unit change in the independent variable X.

In our example, the unstandardized coefficient B for depression is negative, so we can say that the level of depression negatively predicts the level of happiness. In other words, if the level of depression increases by one unit, the level of happiness will decrease by .145 units.
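This interpretation can be sketched in Python using the B = -.145 reported for depression; the helper function below is ours for illustration, not part of the SPSS output:

```python
# Unstandardized coefficient for depression, taken from the article.
b_depression = -0.145

def predicted_happiness_change(depression_change):
    """Predicted change in happiness for a given change in depression,
    holding stress and age constant (illustrative helper)."""
    return b_depression * depression_change

print(predicted_happiness_change(1))   # a one-unit rise in depression
print(predicted_happiness_change(10))  # a ten-unit rise in depression
```

Because B is on the raw scale of the variables, a one-unit rise in depression translates directly into a .145-unit drop in predicted happiness.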

## How to Interpret a Multiple Regression Analysis Results in APA Style?

A multiple regression analysis was computed to determine whether the level of depression, level of stress, and age predict the level of happiness in a sample of 99 students (N = 99).

The equation for the regression line is: level of happiness = b_{0} + b_{1}*level of depression + b_{2}*level of stress + b_{3}*age. R^{2} = .124 indicates that just 12.4% of the variance in the level of happiness is explained by the level of depression, level of stress, and age.

To clarify, the results of the ANOVA were significant, F(3, 95) = 4.50, p = .005. Therefore, we reject the null hypothesis that the slopes of our regression line are zero. Together, the levels of depression, stress, and age significantly predict the level of happiness. In conclusion, if the level of depression increases by one unit, the level of happiness will decrease by .145 units.

Visit our “How to Run Multiple Regression Analysis in SPSS” page for more details. Moreover, go to the general page to check other reporting of statistical tests in SPSS. Finally, if you want to watch SPSS videos, please visit our YouTube channel.