11. Correlation and regression

The word correlation is used in everyday life to denote some form of association. We might say that we have noticed a correlation between foggy days and attacks of wheeziness. However, in statistical terms we use correlation to denote association between two quantitative variables. We also assume that the association is linear, that is, that one variable increases or decreases a fixed amount for a unit increase or decrease in the other. The other technique that is often used in these circumstances is regression, which involves estimating the best straight line to summarise the association.

Correlation coefficient

The degree of association is measured by a correlation coefficient, denoted by r. It is sometimes called Pearson’s correlation coefficient after its originator and is a measure of linear association. If a curved line is needed to express the relationship, other and more complicated measures of the correlation must be used.

The correlation coefficient is measured on a scale that varies from +1 through 0 to -1. Complete correlation between two variables is expressed by either +1 or -1. When one variable increases as the other increases the correlation is positive; when one decreases as the other increases it is negative. Complete absence of correlation is represented by 0. Figure 11.1 gives some graphical representations of correlation.


Figure 11.1 Correlation illustrated.

Looking at data: scatter diagrams

When an investigator has collected two series of observations and wishes to see whether there is a relationship between them, he or she should first construct a scatter diagram. The vertical scale represents one set of measurements and the horizontal scale the other. If one set of observations consists of experimental results and the other consists of a time scale or observed classification of some kind, it is usual to put the experimental results on the vertical axis. These represent what is called the “dependent variable”. The “independent variable”, such as time or height or some other observed classification, is measured along the horizontal axis, or baseline.

The words “independent” and “dependent” could puzzle the beginner because it is sometimes not clear what is dependent on what. This confusion is a triumph of common sense over misleading terminology, because often each variable is dependent on some third variable, which may or may not be mentioned. It is reasonable, for instance, to think of the height of children as dependent on age rather than the converse, but consider a positive correlation between mean tar yield and nicotine yield of certain brands of cigarette.(1) The nicotine liberated is unlikely to have its origin in the tar: both vary in parallel with some other factor or factors in the composition of the cigarettes. The yield of the one does not seem to be “dependent” on the other in the sense that, on average, the height of a child depends on his age. In such cases it often does not matter which scale is put on which axis of the scatter diagram. However, if the intention is to make inferences about one variable from the other, the observations from which the inferences are to be made are usually put on the baseline. As a further example, a plot of monthly deaths from heart disease against monthly sales of ice cream would show a negative association. However, it is hardly likely that eating ice cream protects from heart disease! It is simply that the mortality rate from heart disease is inversely related – and ice cream consumption positively related – to a third factor, namely environmental temperature.

Calculation of the correlation coefficient

A paediatric registrar has measured the pulmonary anatomical dead space (in ml) and height (in cm) of 15 children. The data are given in table 11.1 and the scatter diagram is shown in figure 11.2. Each dot represents one child, and it is placed at the point corresponding to the measurement of the height (horizontal axis) and the dead space (vertical axis). The registrar now inspects the pattern to see whether it seems likely that the area covered by the dots centres on a straight line or whether a curved line is needed. In this case the paediatrician decides that a straight line can adequately describe the general trend of the dots. His next step will therefore be to calculate the correlation coefficient.

Table 11.1 Height (cm) and pulmonary anatomical dead space (ml) in 15 children (data not reproduced here).

When making the scatter diagram (figure 11.2 ) to show the heights and pulmonary anatomical dead spaces in the 15 children, the paediatrician set out figures as in columns (1), (2), and (3) of table 11.1 . It is helpful to arrange the observations in serial order of the independent variable when one of the two variables is clearly identifiable as independent. The corresponding figures for the dependent variable can then be examined in relation to the increasing series for the independent variable. In this way we get the same picture, but in numerical form, as appears in the scatter diagram.
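A minimal sketch of this tabulation step, assuming made-up values rather than the figures in table 11.1: pair the observations and sort them by the independent variable.

```python
# Arrange paired observations in serial order of the independent variable.
# Values are illustrative placeholders, not the data from table 11.1.
heights = [150, 110, 138, 124, 131]     # cm, made-up
dead_space = [56, 44, 79, 43, 52]       # ml, made-up

for h, d in sorted(zip(heights, dead_space)):
    print(h, d)
```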


Figure 11.2 Scatter diagram of relation in 15 children between height and pulmonary anatomical dead space.

The calculation of the correlation coefficient is as follows, with x representing the values of the independent variable (in this case height) and y representing the values of the dependent variable (in this case anatomical dead space). The formula to be used is:

r = \frac{\sum (x - \bar{x})(y - \bar{y})}{\sqrt{\sum (x - \bar{x})^2 \sum (y - \bar{y})^2}}

which can be shown to be equal to:

r = \frac{\sum (x - \bar{x})(y - \bar{y})}{(n - 1)\,\mathrm{SD}(x)\,\mathrm{SD}(y)}

Calculator procedure

Find the mean and standard deviation of x, as described earlier:

x̄ = 144.6, SD(x) = 19.3679

Find the mean and standard deviation of y:

ȳ = 66.93, SD(y) = 23.6476

Subtract 1 from n and multiply by SD(x) and SD(y):

(n − 1)SD(x)SD(y) = 14 × 19.3679 × 23.6476 = 6412.0609

This gives us the denominator of the formula. (Remember to exit from “Stat” mode.)

For the numerator multiply each value of x by the corresponding value of y, add these values together and store them.

110 x 44 = Min

116 x 31 = M+

etc.

This stores Σxy in memory. Subtract n × x̄ × ȳ:

MR − 15 × 144.6 × 66.93 = 5426.6

Finally divide the numerator by the denominator.

r = 5426.6/6412.0609 = 0.846.
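A minimal sketch of this calculation in Python, using the (n − 1)SD(x)SD(y) form of the formula. The x and y lists are made-up placeholders; substitute the 15 height and dead space pairs from table 11.1.

```python
# Pearson's r via the (n - 1) SD(x) SD(y) form of the formula.
import statistics as st

x = [110, 120, 130, 140, 150, 160, 170]   # heights (cm), illustrative only
y = [40, 35, 50, 55, 60, 75, 90]          # dead space (ml), illustrative only

n = len(x)
mean_x, mean_y = st.mean(x), st.mean(y)
sd_x, sd_y = st.stdev(x), st.stdev(y)     # sample SDs (divisor n - 1)

numerator = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
r = numerator / ((n - 1) * sd_x * sd_y)
print(f"r = {r:.3f}, r squared = {r * r:.3f}")
```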

The correlation coefficient of 0.846 indicates a strong positive correlation between size of pulmonary anatomical dead space and height of child. But in interpreting correlation it is important to remember that correlation is not causation. There may or may not be a causative connection between the two correlated variables. Moreover, if there is a connection it may be indirect.

A part of the variation in one of the variables (as measured by its variance) can be thought of as being due to its relationship with the other variable and another part as due to undetermined (often “random”) causes. The part due to the dependence of one variable on the other is measured by r². For these data r² = 0.716, so we can say that 72% of the variation between children in size of the anatomical dead space is accounted for by the height of the child. If we wish to label the strength of the association, for absolute values of r, 0-0.19 is regarded as very weak, 0.2-0.39 as weak, 0.40-0.59 as moderate, 0.6-0.79 as strong and 0.8-1 as very strong correlation, but these are rather arbitrary limits, and the context of the results should be considered.

Significance test

To test whether the association is merely apparent, and might have arisen by chance, use the t test in the following calculation:

t = r \sqrt{\frac{n - 2}{1 - r^2}} \quad (11.1)

The value of t is then entered into the t distribution table (Appendix Table B) at n − 2 degrees of freedom.

For example, the correlation coefficient for these data was 0.846.

The number of pairs of observations was 15. Applying equation 11.1, we have:

t = 0.846 × √(13/(1 − 0.716)) = 5.72

Entering table B at 15 – 2 = 13 degrees of freedom we find that at t = 5.72, P < 0.001 so the correlation coefficient may be regarded as highly significant. Thus (as could be seen immediately from the scatter plot) we have a very strong correlation between dead space and height which is most unlikely to have arisen by chance.
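A sketch of this significance test in Python, using the values quoted in the text (r = 0.846, n = 15); scipy is assumed to be available and is used only to look up the P value instead of Appendix Table B.

```python
# t test for a correlation coefficient (equation 11.1).
from math import sqrt
from scipy import stats

r, n = 0.846, 15
t = r * sqrt((n - 2) / (1 - r ** 2))
p = 2 * stats.t.sf(abs(t), df=n - 2)      # two-sided P value
print(f"t = {t:.2f} on {n - 2} df, P = {p:.5f}")
```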

The assumptions governing this test are:

  1. That both variables are plausibly Normally distributed.
  2. That there is a linear relationship between them.
  3. The null hypothesis is that there is no association between them.

The test should not be used for comparing two methods of measuring the same quantity, such as two methods of measuring peak expiratory flow rate. Its use in this way appears to be a common mistake, with a significant result being interpreted as meaning that one method is equivalent to the other. The reasons have been extensively discussed,(2) but it is worth recalling that a significant result tells us little about the strength of a relationship. From the formula it should be clear that even with a very weak relationship (say r = 0.1) we would get a significant result with a large enough sample (say n over 1000).
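A quick numerical check of that last point, as a sketch assuming scipy is available; the r and n values are the hypothetical ones mentioned in the text.

```python
# Even a very weak correlation becomes "significant" with a large sample.
from math import sqrt
from scipy import stats

r, n = 0.1, 1000
t = r * sqrt((n - 2) / (1 - r ** 2))
p = 2 * stats.t.sf(t, df=n - 2)
print(f"r = {r}, n = {n}: t = {t:.2f}, P = {p:.4f}")   # roughly t = 3.2, P = 0.002
```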

Spearman rank correlation

A plot of the data may reveal outlying points well away from the main body of the data, which could unduly influence the calculation of the correlation coefficient. Alternatively the variables may be quantitative discrete such as a mole count, or ordered categorical such as a pain score. A non-parametric procedure, due to Spearman, is to replace the observations by their ranks in the calculation of the correlation coefficient.

This results in a simple formula for Spearman’s rank correlation, Rho.

\rho = 1 - \frac{6 \sum d^2}{n(n^2 - 1)}

where d is the difference in the ranks of the two variables for a given individual. Thus we can derive table 11.2 from the data in table 11.1 .

Table 11.2 Ranks of the observations in table 11.1 and their differences (not reproduced here).

From this we get that

ρ = 1 − (6 × 60.5)/(15 × (15² − 1)) = 0.8920

In this case the value is very close to that of the Pearson correlation coefficient. For n > 10, the Spearman rank correlation coefficient can be tested for significance using the t test given earlier.
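A sketch of the Spearman calculation in Python: replace the observations by their ranks and apply the formula above. rankdata() gives tied values their average rank, in which case the simple formula is only approximate; scipy's spearmanr (assumed available) is shown as a cross-check. The data are made-up placeholders.

```python
# Spearman's rank correlation via the 1 - 6*sum(d^2)/(n*(n^2 - 1)) formula.
from scipy.stats import rankdata, spearmanr

x = [1.7, 2.1, 3.2, 4.1, 5.8, 8.8, 10.3]   # illustrative values only
y = [30, 42, 19, 34, 26, 12, 10]           # illustrative values only

rx, ry = rankdata(x), rankdata(y)
n = len(x)
d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
rho = 1 - 6 * d2 / (n * (n ** 2 - 1))
rho_scipy, p_value = spearmanr(x, y)
print(f"rho (formula) = {rho:.3f}, rho (scipy) = {rho_scipy:.3f}")
```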

The regression equation

Correlation describes the strength of an association between two variables, and is completely symmetrical: the correlation between A and B is the same as the correlation between B and A. However, if the two variables are related it means that when one changes by a certain amount the other changes on an average by a certain amount. For instance, in the children described earlier greater height is associated, on average, with greater anatomical dead space. If y represents the dependent variable and x the independent variable, this relationship is described as the regression of y on x.

The relationship can be represented by a simple equation called the regression equation. In this context “regression” (the term is a historical anomaly) simply means that the average value of y is a “function” of x, that is, it changes with x.

The regression equation representing how much y changes with any given change of x can be used to construct a regression line on a scatter diagram, and in the simplest case this is assumed to be a straight line. The direction in which the line slopes depends on whether the correlation is positive or negative. When the two sets of observations increase or decrease together (positive) the line slopes upwards from left to right; when one set decreases as the other increases the line slopes downwards from left to right. As the line must be straight, it will probably pass through few, if any, of the dots. Given that the association is well described by a straight line we have to define two features of the line if we are to place it correctly on the diagram. The first of these is its distance above the baseline; the second is its slope. They are expressed in the following regression equation :

y = \alpha + \beta x

With this equation we can find a series of values of ŷ, the predicted value of the dependent variable, that correspond to each of a series of values of x, the independent variable. The parameters α and β have to be estimated from the data. The parameter α signifies the distance above the baseline at which the regression line cuts the vertical (y) axis; that is, the value of y when x = 0. The parameter β (the regression coefficient) signifies the amount by which change in x must be multiplied to give the corresponding average change in y, or the amount y changes for a unit increase in x. In this way it represents the degree to which the line slopes upwards or downwards.

The regression equation is often more useful than the correlation coefficient. It enables us to predict y from x and gives us a better summary of the relationship between the two variables. If, for a particular value of x, xᵢ, the regression equation predicts a value ŷᵢ, the prediction error is yᵢ − ŷᵢ. It can easily be shown that any straight line passing through the mean values x̄ and ȳ will give a total prediction error Σ(yᵢ − ŷᵢ) of zero, because the positive and negative terms exactly cancel. To remove the negative signs we square the differences, and the regression equation is chosen to minimise the sum of squares of the prediction errors, Σ(yᵢ − ŷᵢ)². We denote the sample estimates of α and β by a and b. It can be shown that the one straight line that minimises Σ(yᵢ − ŷᵢ)², the least squares estimate, is given by

b = \frac{\sum (x_i - \bar{x})(y_i - \bar{y})}{\sum (x_i - \bar{x})^2}

and

a = \bar{y} - b\bar{x}

It can be shown that

b = r \frac{\mathrm{SD}(y)}{\mathrm{SD}(x)} \quad (11.2)

which is of use because we have calculated all the components of equation (11.2) in the calculation of the correlation coefficient.
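A minimal sketch of the least squares estimates, using equation (11.2) and a = ȳ − b x̄ with the summary figures quoted above for the height and dead space example (those figures are the assumption here).

```python
# Least squares slope and intercept from the correlation summary statistics.
r = 0.846
mean_x, sd_x = 144.6, 19.3679
mean_y, sd_y = 66.93, 23.6476

b = r * sd_y / sd_x        # slope: ml of dead space per cm of height
a = mean_y - b * mean_x    # intercept
print(f"b = {b:.3f}, a = {a:.1f}")
```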

The calculation of the correlation coefficient on the data in table 11.1 gave the following:

x̄ = 144.6, SD(x) = 19.3679; ȳ = 66.93, SD(y) = 23.6476; r = 0.846

Applying these figures to the formulae for the regression coefficients, we have:

b = (0.846 × 23.6476)/19.3679 = 1.033

a = 66.93 − (1.033 × 144.6) = −82.4

Therefore, in this case, the equation for the regression of y on x becomes

y = −82.4 + 1.033x

This means that, on average, for every increase in height of 1 cm the increase in anatomical dead space is 1.033 ml over the range of measurements made.

The line representing the equation is shown superimposed on the scatter diagram of the data in figure 11.3. The way to draw the line is to take three values of x, one on the left side of the scatter diagram, one in the middle and one on the right, and substitute these in the equation, as follows:

If x = 110, y = (1.033 x 110) – 82.4 = 31.2

If x = 140, y = (1.033 x 140) – 82.4 = 62.2

If x = 170, y = (1.033 x 170) – 82.4 = 93.2

Although two points are enough to define the line, three are better as a check. Having put them on a scatter diagram, we simply draw the line through them.
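A short sketch of the same prediction step in Python, using the fitted intercept and slope quoted above.

```python
# Predict y from the fitted line at a few x values spread across the range.
a, b = -82.4, 1.033

def predict(height_cm):
    """Predicted anatomical dead space (ml) from the fitted line."""
    return a + b * height_cm

for x in (110, 140, 170):
    print(f"height {x} cm -> predicted dead space {predict(x):.1f} ml")
```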


Figure 11.3 Regression line drawn on scatter diagram relating height and pulmonary anatomical dead space in 15 children.

The standard error of the slope SE(b) is given by:

\mathrm{SE}(b) = \frac{s_{res}}{\sqrt{\sum (x_i - \bar{x})^2}} \quad (11.3)

where s_res is the residual standard deviation, given by:

s_{res} = \sqrt{\frac{\sum (y_i - \hat{y}_i)^2}{n - 2}}

This can be shown to be algebraically equal to

s_{res} = \mathrm{SD}(y) \sqrt{\frac{(1 - r^2)(n - 1)}{n - 2}}

We already have to hand all of the terms in this expression. Thus s_res is the square root of (1 − 0.846²) × 23.6476² × 14/13 = 171.20, that is, s_res = 13.08445. The denominator of (11.3), the square root of Σ(xᵢ − x̄)², is 72.4680. Thus SE(b) = 13.08445/72.4680 = 0.18055.

We can test whether the slope is significantly different from zero by:

t = b/SE(b) = 1.033/0.18055 = 5.72.

Again, this has n – 2 = 15 – 2 = 13 degrees of freedom. The assumptions governing this test are:

  1. That the prediction errors are approximately Normally distributed. Note this does not mean that the x or y variables have to be Normally distributed.
  2. That the relationship between the two variables is linear.
  3. That the scatter of points about the line is approximately constant – we would not wish the variability of the dependent variable to be growing as the independent variable increases. If this is the case try taking logarithms of both the x and y variables.

Note that the test of significance for the slope gives exactly the same value of P as the test of significance for the correlation coefficient. Although the two tests are derived differently, they are algebraically equivalent, which makes intuitive sense.

We can obtain a 95% confidence interval for b from

b - t_{0.05} \times \mathrm{SE}(b) \text{ to } b + t_{0.05} \times \mathrm{SE}(b)

where the t statistic, taken from Appendix Table B, has 13 degrees of freedom and is equal to 2.160.

Thus the 95% confidence interval is

1.033 − 2.160 × 0.18055 to 1.033 + 2.160 × 0.18055 = 0.643 to 1.422.
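A sketch of the standard error, t test and confidence interval calculations in Python, using the figures quoted in the text; small rounding differences from the hand calculation are expected, and 2.160 is the 5% point of t with 13 degrees of freedom taken from Appendix Table B.

```python
# SE of the slope, t test and 95% confidence interval.
from math import sqrt

n, r, b = 15, 0.846, 1.033
sd_x, sd_y = 19.3679, 23.6476

s_res = sd_y * sqrt((1 - r ** 2) * (n - 1) / (n - 2))   # residual SD, about 13.08
se_b = s_res / sqrt((n - 1) * sd_x ** 2)                # sqrt(sum((x - xbar)^2)) is about 72.47
t = b / se_b
t_crit = 2.160                                          # t(0.05) with 13 df
print(f"SE(b) = {se_b:.5f}, t = {t:.2f}")
print(f"95% CI: {b - t_crit * se_b:.3f} to {b + t_crit * se_b:.3f}")
```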

Regression lines give us useful information about the data they are collected from. They show how one variable changes on average with another, and they can be used to find out what one variable is likely to be when we know the other – provided that we ask this question within the limits of the scatter diagram. To project the line at either end – to extrapolate – is always risky because the relationship between x and y may change or some kind of cut off point may exist. For instance, a regression line might be drawn relating the chronological age of some children to their bone age, and it might be a straight line between, say, the ages of 5 and 10 years, but to project it up to the age of 30 would clearly lead to error. Computer packages will often produce the intercept from a regression equation, with no warning that it may be totally meaningless. Consider a regression of blood pressure against age in middle aged men. The regression coefficient is often positive, indicating that blood pressure increases with age. The intercept is often close to zero, but it would be wrong to conclude that this is a reliable estimate of the blood pressure in newly born male infants!

More advanced methods

More than one independent variable is possible; in such a case the method is known as multiple regression.(3,4) This is the most versatile of statistical methods and can be used in many situations. Examples include: allowing for more than one predictor (age as well as height in the above example); allowing for covariates, for instance in a clinical trial where the dependent variable is outcome after treatment, the first independent variable is binary (0 for placebo and 1 for active treatment) and the second independent variable is a baseline measurement, made before treatment but likely to affect outcome.
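A minimal sketch of such a model in Python, assuming pandas and statsmodels are available; the data frame and its column names are made-up placeholders, not study data.

```python
# Multiple regression: outcome on treatment (binary) plus a baseline covariate.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "outcome":   [12.1, 9.8, 14.3, 11.0, 15.2, 13.5, 10.4, 16.0],  # made-up
    "treatment": [0, 0, 1, 0, 1, 1, 0, 1],        # 0 = placebo, 1 = active
    "baseline":  [11.5, 9.9, 12.8, 10.7, 13.9, 12.2, 10.1, 14.5],  # made-up
})

model = smf.ols("outcome ~ treatment + baseline", data=df).fit()
print(model.params)      # intercept, treatment effect adjusted for baseline, baseline coefficient
print(model.summary())
```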

Common questions

If two variables are correlated are they causally related?

It is a common error to confuse correlation and causation. All that correlation shows is that the two variables are associated. There may be a third variable, a confounding variable that is related to both of them. For example, monthly deaths by drowning and monthly sales of ice-cream are positively correlated, but no-one would say the relationship was causal!

How do I test the assumptions underlying linear regression?

Firstly, always look at the scatter plot and ask: is it linear? Having obtained the regression equation, calculate the residuals yᵢ − ŷᵢ. A histogram of the residuals will reveal departures from Normality, and a plot of the residuals against the fitted values ŷᵢ will reveal whether the residuals increase in size as ŷᵢ increases.
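A sketch of these residual checks in Python, assuming numpy and matplotlib are available; the data arrays are made-up placeholders for the real observations.

```python
# Fit the line, then examine a histogram of residuals and residuals vs fitted values.
import numpy as np
import matplotlib.pyplot as plt

x = np.array([110, 120, 130, 140, 150, 160, 170], dtype=float)  # illustrative
y = np.array([40, 35, 50, 55, 60, 75, 90], dtype=float)         # illustrative

b, a = np.polyfit(x, y, 1)          # least squares slope and intercept
fitted = a + b * x
residuals = y - fitted

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.hist(residuals)                 # departures from Normality?
ax1.set_xlabel("residual")
ax2.scatter(fitted, residuals)      # do residuals grow with the fitted value?
ax2.axhline(0, linestyle="--")
ax2.set_xlabel("fitted value")
ax2.set_ylabel("residual")
plt.tight_layout()
plt.show()
```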

References

  1. Russell MAH, Cole PY, Idle MS, Adams L. Carbon monoxide yields of cigarettes and their relation to nicotine yield and type of filter. BMJ 1975; 3:713.
  2. Bland JM, Altman DG. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet 1986; i:307-10.
  3. Brown RA, Swanson-Beck J. Medical Statistics on Personal Computers , 2nd edn. London: BMJ Publishing Group, 1993.
  4. Armitage P, Berry G. In: Statistical Methods in Medical Research , 3rd edn. Oxford: Blackwell Scientific Publications, 1994:312-41.

Exercises

11.1 A study was carried out into the attendance rate at a hospital of people in 16 different geographical areas, over a fixed period of time. The distance of the centre from the hospital of each area was measured in miles. The results were as follows:

(1) 21%, 6.8; (2) 12%, 10.3; (3) 30%, 1.7; (4) 8%, 14.2; (5) 10%, 8.8; (6) 26%, 5.8; (7) 42%, 2.1; (8) 31%, 3.3; (9) 21%, 4.3; (10) 15%, 9.0; (11) 19%, 3.2; (12) 6%, 12.7; (13) 18%, 8.2; (14) 12%, 7.0; (15) 23%, 5.1; (16) 34%, 4.1.

What is the correlation coefficient between the attendance rate and mean distance of the geographical area?

11.2 Find the Spearman rank correlation for the data given in 11.1.

11.3 If the values of x from the data in 11.1 represent mean distance of the area from the hospital and values of y represent attendance rates, what is the equation for the regression of y on x? What does it mean?

11.4 Find the standard error and 95% confidence interval for the slope.

Answers to exercises Ch 11.pdf

