How is it measured?
For any given data set, standard deviation measures the dispersion or spread of the data points from their average, or mean, value. It tells you how far each data point lies from the mean, and this comparison helps define the risk: the farther the data points are from the mean, i.e. the higher the deviation, the greater the risk. If the individual values are closer to the mean, the risk is low.

In investing, it is measured as the variation of an asset's return or price from its mean value at different points during a given period. It helps determine investment risk or market volatility. If the returns or prices are scattered far from the mean, that signifies higher risk or volatility, and vice versa.
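As a simple illustration, the sketch below uses Python's standard statistics module to compare two hypothetical return series (the figures are made up for illustration): both have the same mean return, but very different spreads, and so very different risk.

```python
import statistics

# Two hypothetical annual return series (in percent); figures are illustrative only.
steady = [7, 8, 7, 9, 8, 9]         # returns cluster near the mean -> lower risk
volatile = [-5, 22, 1, 18, -2, 14]  # returns scattered far from the mean -> higher risk

print(statistics.mean(steady), statistics.mean(volatile))  # both means: 8.0
print(statistics.stdev(steady))    # ~0.89 -- low dispersion, low risk
print(statistics.stdev(volatile))  # ~11.4 -- high dispersion, high risk
```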
How is it calculated?
Standard deviation is represented as SD or by the Greek letter sigma (σ). To simplify the calculation, the following steps can be used; a short worked sketch follows the list.
- First, calculate the mean of the data set. This is done by adding up all the price points and dividing the sum by the total number of observations.
- Next, subtract the mean from each data point's value.
- Square all these figures.
- Add up these squared values and divide the sum by the total number of data points minus one (this gives the sample variance).
- Take the square root of this value to get the standard deviation.
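Putting the steps together, the sample standard deviation is s = √( Σ(xᵢ − x̄)² / (n − 1) ), where x̄ is the mean and n is the number of observations. A minimal Python sketch of the same five steps, using hypothetical price points:

```python
import math

prices = [10.0, 12.0, 11.0, 13.0, 14.0]  # hypothetical price points

# Step 1: mean = sum of all points / number of observations
mean = sum(prices) / len(prices)  # 12.0

# Steps 2-3: subtract the mean from each point, then square the difference
squared_deviations = [(p - mean) ** 2 for p in prices]

# Step 4: add up the squares and divide by (n - 1) to get the sample variance
variance = sum(squared_deviations) / (len(prices) - 1)  # 2.5

# Step 5: take the square root to obtain the standard deviation
std_dev = math.sqrt(variance)
print(std_dev)  # ~1.58
```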