Tuesday 25 December 2012

RM - Basic Statistics for Research in Management (uploaded for BMS students for the statistical analysis part of project work)

1. Population: The word 'population' or 'universe' denotes an aggregate or group of individual objects of any nature whose general characteristics are studied by a statistical investigation. The population may be finite or infinite.

2. Sample: A sample is a finite subset of the population, and the number of items in a sample is called the size of the sample. A sample may be large or small.

3. Standard error: The standard deviation of the sampling distribution of a statistic is known as the standard error.
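
As a minimal sketch of this idea (not from the source; the sample size, population mean, and S.D. below are assumed for illustration), the standard error of the sample mean can be checked by simulation against the theoretical value σ/√n:

    import random
    import statistics

    random.seed(1)
    n = 30        # sample size (assumed for illustration)
    sigma = 10    # population S.D. (assumed)

    # S.D. of many simulated sample means approximates the standard error.
    means = [statistics.mean(random.gauss(50, sigma) for _ in range(n))
             for _ in range(5000)]

    print("Simulated S.E. of the mean:", round(statistics.stdev(means), 3))
    print("Theoretical sigma/sqrt(n) :", round(sigma / n ** 0.5, 3))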

4. Parameters and statistics: Statistical constants of the population, namely the mean (μ), the variance (σ²), etc., are usually referred to as parameters. The corresponding measures computed from sample observations, namely the sample mean (x̄), standard deviation (s), and variance (s²), are known as statistics.
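
A small Python illustration (the data values are hypothetical) of computing the sample statistics x̄, s, and s² that estimate the parameters μ and σ²:

    import statistics

    data = [12, 15, 11, 18, 14, 16, 13]   # sample observations (assumed)

    x_bar = statistics.mean(data)         # sample mean, x̄
    s = statistics.stdev(data)            # sample S.D., s (n - 1 divisor)
    s2 = statistics.variance(data)        # sample variance, s²

    print(x_bar, s, s2)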

5.  "A hypothesis in statistics is simply a quantitative statement about a population". It is based an assumptions.

6. The null hypothesis is the hypothesis that is tested for possible rejection under the assumption that it is true; it is denoted by H₀.

7. The alternative hypothesis is the statement about the population that gives an alternative to the null hypothesis; it is denoted by H₁.

8. Type I and Type II errors: Rejecting the null hypothesis when it should be accepted is known as a Type I error. Accepting the null hypothesis when it should be rejected is known as a Type II error.
                     Accept H₀             Reject H₀
    H₀ is true       Correct decision      Type I error
    H₀ is false      Type II error         Correct decision

9. Level of significance: In testing a given hypothesis, the maximum probability of committing a Type I error that we are willing to risk is called the level of significance of the test.

10. Critical value: The value of the test statistic that separates the sample space into the rejection region and the acceptance region is called the critical value.

11. Procedure for testing a hypothesis: 1. Set up the null hypothesis H₀. 2. Set up the alternative hypothesis H₁. 3. Choose an appropriate level of significance. 4. Calculate the test statistic Z = (t − E(t)) / S.E.(t). 5. Compare the computed value with the table value: if |Z| > table value, reject H₀; otherwise accept H₀. (A worked sketch follows below.)
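
A hedged sketch of this procedure (all numeric values below are assumed for illustration): a one-sample test for a population mean with known σ, at the 5% level:

    # Steps 1-3: H0: mu = mu0, H1: mu != mu0, alpha = 0.05 (two-tailed).
    mu0 = 50       # hypothesised mean under H0 (assumed)
    x_bar = 52.4   # sample mean (assumed)
    sigma = 6.0    # known population S.D. (assumed)
    n = 64         # sample size (assumed)

    # Step 4: Z = (t - E(t)) / S.E.(t); here t = x̄, E(t) = mu0,
    # and S.E.(t) = sigma / sqrt(n).
    z = (x_bar - mu0) / (sigma / n ** 0.5)

    # Step 5: compare with the two-tailed 5% table value, 1.96.
    table_value = 1.96
    print("Z =", round(z, 3))
    print("Reject H0" if abs(z) > table_value else "Accept H0")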

12. One-tailed test: In any test, the critical region is represented by a portion of the area under the probability curve of the sampling distribution of the test statistic. A test of any statistical hypothesis where the alternative hypothesis is one-tailed (right-tailed or left-tailed) is called a one-tailed test.

13. Two-tailed test: A test of a statistical hypothesis where the alternative hypothesis is two-tailed, i.e. H₀: μ = μ₀ against H₁: μ ≠ μ₀, is known as a two-tailed test. In such a case the critical region is given by the portions of the area lying in both tails of the probability curve of the test statistic.
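
For reference, a short sketch (assuming a normally distributed test statistic) showing where the usual 5% table values come from: a one-tailed test puts all of α in one tail, while a two-tailed test splits it between both:

    from statistics import NormalDist

    alpha = 0.05
    z_one_tail = NormalDist().inv_cdf(1 - alpha)       # about 1.645
    z_two_tail = NormalDist().inv_cdf(1 - alpha / 2)   # about 1.960

    print(round(z_one_tail, 3), round(z_two_tail, 3))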

14. Non-parametric tests: Tests that do not depend upon population parameters such as the mean and the variance are called non-parametric tests. Non-parametric statistics is a collection of tools for data analysis that offers a different approach to many decision problems. Non-parametric tests are distribution-free; that is, they do not require any assumption to be made about the form of the population. They are simple to understand and easy to apply when sample sizes are small, they make fewer and less stringent assumptions than the classical procedures, and they are less time-consuming. The following methods are used in non-parametric tests: 1. The sign test (sketched below) 2. The rank sum test 3. The one-sample runs test 4. The Kruskal-Wallis or H test 5. Spearman's rank correlation procedure.
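
A minimal sketch of the first method, the sign test, for a hypothesised median; the observations below are assumed for illustration:

    from math import comb

    data = [48, 53, 51, 55, 49, 56, 52, 54, 50, 57]   # assumed observations
    median0 = 50                                      # hypothesised median

    diffs = [x - median0 for x in data if x != median0]   # drop ties
    n = len(diffs)
    plus = sum(1 for d in diffs if d > 0)

    # Under H0 the number of plus signs follows Binomial(n, 1/2);
    # double the smaller tail for a two-tailed p-value.
    k = min(plus, n - plus)
    p_value = min(1.0, 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n)

    print("plus signs:", plus, "of", n, " p-value:", round(p_value, 4))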

15. Correlation analysis deals with the association between two or more variables. Its significance is the following. Some kind of relationship exists between many pairs of variables, for example between price and supply, or between income and expenditure. When two variables are closely related, we can estimate the value of one variable given the value of the other; the effect of correlation is to reduce the range of uncertainty in such estimates. If two variables tend to move together in the same direction, so that an increase in the value of one variable is accompanied by an increase in the value of the other, the correlation is called positive or direct. If two variables tend to move in opposite directions, so that an increase or decrease in the value of one variable is accompanied by a decrease or increase in the value of the other, the correlation is called negative or inverse.
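
A small sketch (with assumed price and supply figures) computing Pearson's correlation coefficient from its definitional formula; a value near +1 signals positive correlation and a value near -1 signals negative correlation:

    def pearson_r(x, y):
        # r = covariance(x, y) / (S.D. of x * S.D. of y), via deviations.
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    price = [10, 12, 14, 16, 18]     # assumed values
    supply = [40, 46, 49, 55, 60]    # assumed values

    print(round(pearson_r(price, supply), 4))   # close to +1: positive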

16. Rank correlation coefficient: In 1904 Charles Edwin Spearman, a British psychologist, devised a method of determining the coefficient of correlation using ranks. This measure is useful in dealing with qualitative characteristics such as intelligence, beauty, morality, and character. Features of Spearman's correlation coefficient: 1. The sum of the differences of ranks between the two variables is zero, that is, Σd = 0. 2. Spearman's correlation coefficient is distribution-free.
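
Spearman's coefficient is usually computed as rₛ = 1 − 6Σd² / (n(n² − 1)), where d is the difference between the two ranks of each item. A minimal sketch with assumed rankings by two judges (no ties):

    ranks_x = [1, 2, 3, 4, 5, 6]   # ranks given by judge A (assumed)
    ranks_y = [2, 1, 4, 3, 6, 5]   # ranks given by judge B (assumed)

    n = len(ranks_x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks_x, ranks_y))   # Σd²

    r_s = 1 - 6 * d2 / (n * (n ** 2 - 1))
    print(round(r_s, 4))   # Σd² = 6, so r_s = 1 - 36/210, about 0.829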

17.  "Regression is the measure of the average relationship between two or more variables in terms of the original units of data". Uses of regression analysis: 1. Regression analysis provides estimates of value of the dependent variable from values of the independent variable. 2. With the help of regression coefficients, we can calculate the correlation coefficient (r) and the coefficient of determination (r2). 3. The regression analysis is highly useful and the regression line equation helps to estimate the value of dependent variable, when the values of independent variables are used in the equation

18. A time series may be defined as a collection of readings, belonging to different periods, of some economic variable or composite of variables. The changes in the value of a variable in different periods of time are due to many factors; these factors are called the components of a time series: 1. Trend 2. Seasonal changes 3. Cyclical changes 4. Irregular or random fluctuations.
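
As a small sketch of isolating the trend component (the quarterly figures are assumed for illustration), a simple moving average damps the seasonal and irregular movements:

    series = [20, 24, 19, 26, 23, 28, 22, 30]   # assumed quarterly data

    window = 4   # four quarters per year
    trend = [sum(series[i:i + window]) / window
             for i in range(len(series) - window + 1)]

    print(trend)   # smoothed values tracing the underlying trend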

(Source: textbook by S.P. Gupta and Indira Gupta; notes given by Girija Vallaban sir; IGNOU study material)

