this piece appeared on abpnews on june 11th.

the original content i sent them was the following:

The news about the appalling murder of a baby in Gurugram was startling in the extreme. News reports say that the 8-month-old baby and her 19-year-old mother were abducted by three men in an auto-rickshaw. The mother was gang-raped by all of them, and when the baby bawled, one of the men casually picked her up and threw her on the concrete divider on the road, presumably wounding her mortally. Then they abandoned mother and baby, and drove off.

The mother walked miles with the dead baby in her arms, refusing to believe she was dead. She went to a hospital where the doctor pronounced the baby dead. In her grief, she refused to accept this judgment, and went to a second hospital. After grieving all night over her baby, she finally had to accept that the child had been cruelly murdered.

It is tempting to attribute it to the general callousness to human — especially female — life that seems to afflict the environs of Delhi. Of course all of us remember the sad case of Jyoti Singh Pandey and how she was gang raped and then disembowelled with iron rods forcibly stuck into her vagina, and how the most vicious of the criminals then walked away with the ‘minor’ defense.

But the fact is there are parallels elsewhere. Just a month or two ago, several communists attacked a car belonging to a BJP activist in Kerala, and casually threw his 10-month-old baby out of the window. So it’s not as though this type of monstrous behavior is somehow confined to a geographical area: it is widespread.

We are also inured to violence against women, and the draconian laws to protect women seem to have neither deterrent nor legal effect. Remember, for instance, the gang rape of a mother and daughter traveling on a highway just a few weeks ago. A schoolteacher aged 55 was raped and murdered just days ago. In May, a man was sentenced to hang for luring a 4-year-old girl with chocolate, raping her and then crushing her head with a stone.

There was the horrifying murder in Kerala of Sowmya who was chased around and pushed off a train, raped on the tracks and then had her head crushed with a stone by a one-armed beggar named Charly Thomas. In a legal miracle, this itinerant beggar was able to hire an expensive, hot-shot lawyer who got his death sentence turned into life imprisonment (and surely bail and release in some general amnesty down the road).

Then there was Jisha, a Scheduled Caste law student who was brutally tortured, with her intestines spilling out from a blow from a blunt instrument, and her genitals slashed with a knife.

It is rumored that it was a politically motivated crime, because her biological father, a prominent politician, did not want her to demand a share of his property. The case has been ‘solved’ by finding, without much evidence, a migrant laborer to take the blame.

With crimes like these rampant, we have to wonder: is it that modern Indians have become abysmally cruel towards women and towards children in particular? Or is it just that these crimes are now better reported? Of small comfort is the fact that per capita violence against women is much lower in India than in, say, the US: it is just that the scale of population in India means that the absolute numbers are high.

But I have to believe that violence against children is a new — and appalling — trend. Indians on average have tended to be indulgent of children, although that only applies to one’s own, and most of us have tolerated child-laborers in restaurants, homes and building sites. But to casually murder or injure a baby just because it belongs to a woman you’re about to rape, or to a political rival, is a new low in the coarsening of the Indian mind.

I wonder if this has to do with drugs or alcohol. Drunk people obviously lose their inhibitions, and those on hard mind-altering drugs live in an altered state of reality. Ideological extremism works exactly like a mind-altering drug, and it probably is easy to demonize opponents as ‘the enemy’. We see that in the systematic and regular attacks by communists in Kerala, in their absolute disdain for Hindu sentiments (as in the recent cow-killing episodes), and in their willingness to use violence as though the Other were just vermin to be exterminated.

There is also the other side: would-be criminals know that the justice system is so cumbersome that it is no deterrent. Political operatives know that even if they are convicted of horrific offenses — for instance, a middle-aged sweeper woman was raped to death with a broom-handle inside a Congress party office in Kerala — their godfathers will be able to bail them out eventually.



my piece in abpnews on this topic. net result, like all the other NAC-#altleft nonsense, is stupid.

this is what i wrote years ago about the then-existing EVMs, which i thought were a crock. things have improved with VVPATs, but i still have grave doubts about them. i still don’t trust computers.

the election commissioner’s letter is hardly reassuring, and i don’t think this is the last word on the matter. what indiresan et al said earlier was total bullshit, and i hope things have improved since then. but i have my doubts about the chips involved, the firmware, and the processes. the whole thing needs a thorough security audit.

see my two part essay from 2015, exactly two years ago:

this was (afaik) the first instance of anyone in india talking explicitly about america’s omnipresent #deepstate and its baleful influence on india. i take due pride in that 🙂

however, there *was* an earlier use of the term (in 2013) for the congress and its ecosystem, and so here’s due credit to centerrightindia: good guys, the folks behind it. the indian #deepstate is partly a creation of the US #deepstate, as seen in the obvious funding of various anti-nationals and think tanks. much like how a godman named caldwell invented ‘dravidianism’ as part of divide-and-rule.

this was published on abp news live yesterday (sun mar 12):

my swarajya magazine piece from the mar 2017 issue. also online at

here is the original copy i sent them. they dropped the diagram and links, added a typo and dropped a crucial phrase in the web version 🙂

Innovation nation: How do we deal with biased artificial intelligence?


Rajeev Srinivasan


At this point, it is conceivable that machine intelligences, at once omniscient and tireless, will in a short while take on human capabilities. The stories of the chess-playing computer Deep Blue, the quiz-show champ Watson, and the Go champion AlphaGo from DeepMind have enthralled us and shown us a vision of a future run by impartial and untiring machines. In fact, I wonder whether we should replace some judges, doctors and lawyers with machines, as they will be up to date, will not have bad hair days, and will not get burned out by work. But it turns out there is a potentially fatal flaw: these golems may be as biased as their masters.


This has to do with the nature of the new technique that has turned the latest machine intelligences into marvels of modern ingenuity: deep learning. Artificial intelligence as a discipline had been languishing for a couple of decades. In the late 1980s, there was much excitement about it, and I remember writing an optimistic paper about it in a journal, but then it failed to meet expectations, especially in relation to natural language processing. So AI became sort of a bad joke: always twenty years away in the future. So much so that people began to say ‘expert systems’ and ‘neural networks’ instead of ‘artificial intelligence’. See a report on AI in the June 2016 issue of The Economist.


In the last few years, there has been a renaissance in AI, and it started with novel applications of neural networks. These are modeled, obviously, on biological mechanisms, and in particular the human brain and its collection of interconnected neural pathways. Neurons that connect to each other are given ‘weights’ and an ‘activation function’ that control how they link up with the next neuron by firing an output signal. This is analogous to the ‘rules’ set up in other types of expert systems.


Computer-simulated neural networks are built using multiple layers, each of which has a specific function. These layers are needed so that an input triggers multiple firings of the neurons, and finally results in the expected outcome based on the input. It turns out that simple problems may need a few layers, but as problems grow more complex, the problem of adjusting the weights by hand becomes intractable. These weights need to be just so to make sure that when an event happens, it triggers the activation function appropriately. These adjustments are termed ‘learning’.



Recently, computer scientists figured out that the ‘learning’ could be automated: that is, the network can calculate by itself the weights needed to be imposed on individual neurons in the many layers of the network. This is done by presenting the network with a large data set, and by observing the data, it is able to figure out the right weights to impose. It is again analogous to a rule-based system subtly correcting the rules through experience, much as a human baby learns that “fire is hot”, but “if I sit a safe distance from a fire, it is pleasant”.
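The automated ‘learning’ of weights can be illustrated with a toy example: a single sigmoid neuron taught the logical OR function by simple gradient descent. This is a much-simplified stand-in for the backpropagation used in real deep networks, and every name in it is illustrative, but it shows the core idea of the network adjusting its own weights from a data set.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Tiny 'data set': inputs and targets for the logical OR function.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

w = [0.0, 0.0]  # weights, adjusted automatically below
b = 0.0         # bias
lr = 1.0        # learning rate

for _ in range(2000):
    for x, target in data:
        out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = out - target
        grad = err * out * (1 - out)  # gradient of squared error through sigmoid
        w[0] -= lr * grad * x[0]
        w[1] -= lr * grad * x[1]
        b -= lr * grad

for x, target in data:
    print(x, round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)))
```

After training, the neuron’s rounded outputs match the OR targets: the weights were never set by hand, only nudged repeatedly by the data, which is the essence of what the article calls ‘learning’.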


The giant data sets are available on the Internet through the massive troves maintained by companies such as Google and Facebook. Once the networks were programmed to ‘watch’ these data sets, they began to quickly ‘learn’ about the limited domains they were exposed to. Given their ability to crunch vast amounts of data, these ‘deep learning’ algorithms, which control multi-layered neural networks, in a sense became ‘self-aware’, and could quickly gain mastery in their domains: for example, such a system, when shown a lot of cat videos, eventually figures out what a cat is and what it will do.


If this sounds more than a little creepy, and reminds you of the self-aware ‘Skynet’ in the Terminator series, you are not alone. However, we are still far away from a Skynet takeover, because the deep learning systems have deep knowledge only in narrow specialized areas, and not a generalized intelligence that can take over civilization as we know it. At least not yet, although the New Scientist reported in October that a Google Brain network invented an encryption method all by itself. One of these days it may refuse to tell us the key. A scary thought, indeed.


Fortunately, at the moment, the worst example of the culture clash between narrow and general intelligence is the Tay chatbot that Microsoft unleashed on Twitter. It learned rapidly from what it saw, and began to spew racist and sexist tweets, at which point Microsoft hastily pulled it offline.


That is funny, but therein lies a problem. The very fact that the data sets have to be chosen and fed to the network seems like a mundane and uninteresting point. But it is not. Who chooses the data sets for the neural networks? Typically some young, white, male, geeky, Silicon Valley youngster. His (yes, his) world-view is likely to be appropriate to his social group: to venture a guess, Star Trek, hipster culture, Starbucks, and so on. Thus the values of the neural network will simply reflect his unconscious biases.


The New Scientist ran a story recently about how playing the computer game Grand Theft Auto could teach autonomous cars to drive. Yes, in the US, but it certainly will not teach a neural network how to drive a car in India. That may seem like a trivial example, but it is symptomatic of what happens in other domains as well.


Another article in the World Economic Forum site talked about how algorithms can be sexist too. Thus there is a concern that subtle biases can creep into the decision making apparatus that we increasingly rely on: and it is not even overt, but purely inadvertent.


Another article from the WEF talks specifically about the ethics of self-driving cars and how an MIT project actually gives normal people a chance to wrestle with the ethical dilemmas (in terms of doing the least damage to the least number of people, a Utilitarian approach, or the ‘do no harm’ approach of Isaac Asimov’s ‘Laws of Robotics’).

In a sense, this reminds me of how the Western world-view assumes that it is the one and only correct way of doing things or even imagining what to do. Thus, Dreamtime of the Australian aborigines or maya of Indic thought are seen as deviant, infantile, or even plain downright wicked when compared to the obvious ‘truth’ of Western Cartesian science, although that itself is full of unconscious dogma.


How do these Western notions of how things should be affect our lives? Go no further than the Indian Constitution and judicial system. In 1947, it was assumed that the Western paradigms were the best, and therefore we grafted them, even if not wholly suitable, onto our society. This has led to numerous fault lines, but that is beyond our scope here.


Western assumptions about almost everything have been deeply internalized by Indians, and it is a truism that nothing of indigenous origin is given any respect unless it is ‘digested’ (as Rajiv Malhotra would say), repackaged and sold to us: even yoga is facing this fate these days. So how will this work with biased artificial intelligence?


Will the deep learning algorithm in your smartphone or data analytics system or your car assume that the life of an Indian is worth less than the life of a white person, based on Gunga-Din stereotypes or downright racist films it has accidentally seen as part of its data set? If there is a quick decision that has to be made in a life-and-death situation, will your self-driving car choose to kill two Indians rather than one white person? Such philosophical questions take on an added edge when combined with racism and colonialism.


There is no simple answer to this, other than to develop indigenous neural networks that, for instance, have been fed relevant data sets that are appropriate to our environment, a revival of that old idea of ‘appropriate technology’. There’s no malign human intelligence out there skewing the world in unfortunate ways, but the information asymmetry in the real world may well end up affecting the virtual world as well.


And given our tendency to take as the gospel truth (do you see what I mean about bias in that very phrase?) anything that a computer, especially an obviously clever deep learning system, spews out, it would be wise for us to pause a moment before jumping in feet first. This may also go back to the wretched question of education in India – whether it would be better for us to study in Indian languages or in English which brings its own baggage – but it will not be long before biased AI will be a legal, and not only moral, issue.


1420 words, 15 November 2016

1480 words, updated 3 February 2017