Rajan's growth report on states is flawed: Bibek Debroy
Bibek Debroy, Professor at the Centre for Policy Research, says the report doesn't give raw data. In sum, this entire report is rather cavalier in approach.


Bibek Debroy
12: Gujarat's rank among Indian states in terms of development, according to the Rajan panel.
As a final example, let's take the sex ratio. The all-India child sex ratio (0-6 years) was 945 in 1991, 927 in 2001 and 919 in 2011. The Gujarat child sex ratio was 928 in 1991, 883 in 2001 and 890 in 2011. Between 2001 and 2011, the all-India child sex ratio worsened, while that in Gujarat improved marginally. However, 890 is still bad.
If one is interested in the impact of government policies, sex ratio at birth is an alternative indicator to child sex ratio. For all-India, this was 894 in 2001 and 906 in 2011. For Gujarat, it was 837 in 2001 and 909 in 2011. I am inclined to interpret this improvement as the positive impact of policies. I can multiply examples across indicators, but it boils down to an absolute versus increment issue, plus a time-line problem. Most human development rankings use old data. The last health survey, for instance, was in 2005/06. It is worth remembering this.
Let's turn now to the Raghuram Rajan committee. In any inter-state ranking, you have to do the following:
(a) Identify the variables - often paucity of data prevents you from including variables you might otherwise have wished to include. Environment is a good example of that, as is morbidity.
(b) Figure out a method of normalisation. To take an example, there are large states and small ones; to make a comparison between them possible, you can divide the variable by population or by geographical area. That is known as normalisation.
(c) Assign weights to the identified variables.
(d) Decide on a method of aggregation.
These are serious issues. Any tinkering with these changes the value of the index and the consequent ranking. Therefore, you need to give justifications for the choices you make from (a) to (d) and play around with some kind of simulation exercise, to check how robust your index values are.
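The four steps above can be sketched in code. This is a minimal illustration with invented numbers, not the committee's actual data or method: the states, indicator values, and populations below are all hypothetical, and min-max rescaling is just one common way to put variables on a comparable scale.

```python
import numpy as np

# Hypothetical data: three states, two indicators (all values invented).
population = np.array([60.4, 112.4, 1.06])     # millions, illustrative
raw = np.array([[120.0, 30.0],                 # state A
                [200.0, 45.0],                 # state B
                [2.0,   0.4]])                 # state C

# (b) Normalise: divide each variable by population so large and
#     small states become comparable.
per_capita = raw / population[:, None]

# Rescale each column to [0, 1] so differing units don't dominate.
scaled = (per_capita - per_capita.min(axis=0)) / (
    per_capita.max(axis=0) - per_capita.min(axis=0))

# (c) Assign weights (equal weights here, as the committee did).
weights = np.array([0.5, 0.5])

# (d) Aggregate: the weighted sum gives the composite index.
index = scaled @ weights

# The ranking follows from the index values (1 = highest index).
rank = (-index).argsort().argsort() + 1
print(index, rank)
```

Tinkering with any of these choices — the rescaling rule, the weights, the aggregation — changes the index and hence the ranking, which is exactly why each choice needs justification and a robustness check.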
Perhaps these are the variables to include, perhaps not. Why was the exercise carried out? To ensure that citizens throughout the country have access to the same quality of public goods and services. In that case, if there are states where a substantial segment of the population lives in villages with populations of less than 500, and if there is hilly and difficult terrain too, the costs of delivery will be higher. So, should that not have been a criterion too? In contrast, is SC/ST categorisation per se an indicator of backwardness? There is some research on SC populations which shows that, if you control for other factors, being SC per se is not an inherent disadvantage. Should the female literacy rate have been made a criterion, or the female work participation rate? The monthly per capita consumption expenditure was taken as a surrogate indicator of income. Should that have been taken, or a specific component of it, such as private consumption expenditure on non-food items, or on education and health?

What about the use of LPG? On connectivity, one would have thought pucca roads would be a very good indicator. In financial inclusion, why only banks? Why not post offices? What about the number of policemen? I am not suggesting that one or the other of these variables should have been taken, or that the cluster of variables accepted by the committee should have been rejected. I am making the point that there should be some discussion of acceptance and rejection. Has the committee given any justification for its choices? Not really.
Consider another point. Growth is an increment. For the other indicators, one can either take increments, or one can take the absolute value. To take an example, I can consider the infant mortality rate, or I can take the improvement in the infant mortality rate. There will be a difference in the rankings depending on what I do. Yes, incremental improvements will eventually affect the base. But should I compare increment (growth) with increment or growth with the base? I am making the point that there are value judgments involved in such exercises, so one needs to be careful. Of those 10 clusters of variables, notice that nine can improve (or deteriorate) over time. But the percentage of SC/STs is not a variable of that nature. A state is stuck with the number of SC/STs it has.
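The point about levels versus increments is easy to see with a toy example. The two states and their infant mortality rates below are invented purely for illustration: the state with the worse absolute level can still show the larger improvement, so the two ranking rules reverse each other.

```python
# Invented infant mortality rates (per 1,000 live births) for two
# hypothetical states at two census-style dates.
imr_2001 = {"State X": 60, "State Y": 40}
imr_2011 = {"State X": 44, "State Y": 36}

# Ranking by absolute level in 2011 (lower IMR ranks better):
by_level = sorted(imr_2011, key=imr_2011.get)

# Ranking by improvement (the increment) over the decade:
improvement = {s: imr_2001[s] - imr_2011[s] for s in imr_2001}
by_change = sorted(improvement, key=improvement.get, reverse=True)

print(by_level)   # ['State Y', 'State X'] - Y has the lower level
print(by_change)  # ['State X', 'State Y'] - X improved by more
```

Neither ranking is wrong; choosing between them is precisely the value judgment the exercise has to defend.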
Consider now the data sources. Take education. The data come from the National Sample Survey Office (NSSO). The point is that large-size NSSO sample surveys typically occur at intervals of five years. Should one base identification on data that becomes available only every five years? For school education, the committee could have got a decent set of data, every year, from the District Information System for Education. In addition, data on several variables come from the census, which means they will only be available every 10 years.
It may be fine to use estimates. However, academic rigour requires that it be clearly stated these are estimates, not actuals based on surveys. I don't find that mentioned anywhere in the report.
What about normalisation? It's by no means obvious that normalisation by population is necessarily superior to normalisation by geographical area. On weights, PCA (principal components analysis) is superior because the weighting system is generated from the exercise. We have been told equal weights have been used, because those are easier to understand. I don't think facility of understanding should have been a criterion. Assuming it was a criterion, the present exercise isn't particularly easy to understand. It smacks of arbitrariness and subjectivity.
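The contrast between equal weights and PCA-derived weights can be sketched as follows. The data here are randomly generated and purely illustrative, and taking the first principal component of the correlation matrix is only one common way of letting the data generate the weights; it is not a reconstruction of what the committee did or should have done.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented standardised indicator matrix: 20 "states" x 3 indicators,
# with the first two indicators deliberately made highly correlated.
base = rng.normal(size=(20, 3))
base[:, 1] = 0.9 * base[:, 0] + 0.1 * base[:, 1]
z = (base - base.mean(axis=0)) / base.std(axis=0)

# Equal weights, as in the committee's approach:
equal_index = z.mean(axis=1)

# PCA weights: loadings of the first principal component of the
# correlation matrix, rescaled to sum to one.
corr = np.corrcoef(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)   # eigh sorts eigenvalues ascending
pc1 = np.abs(eigvecs[:, -1])              # loadings of the largest component
weights = pc1 / pc1.sum()
pca_index = z @ weights

# The two indices can order the same states differently:
print((-equal_index).argsort()[:3])
print((-pca_index).argsort()[:3])
```

Equal weights effectively double-count the correlated pair, while PCA lets the data decide; either way, the choice shapes the ranking and needs to be argued for, not merely asserted.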
Having said this, is there any neat way to figure out why Gujarat does relatively badly in the Raghuram Rajan report? Not really, because the report doesn't give raw data. In sum, this entire report is rather cavalier in approach.
(The author is Professor, Centre for Policy Research)