What happens when AI guides the handcuffs?
Part 1
Special | As AI Took Over Policing in Delhi, Who Bore the Brunt?
An investigation by The Wire and Pulitzer Center uncovered troubling instances where individuals were arrested solely on the basis of facial recognition – without solid corroborating evidence or credible public witness testimonies.
New Delhi: In the early hours of a March morning in 2020, Ali’s* life changed forever – and AI-powered facial recognition technology was at the centre of why.
Ali was arrested from the narrow alleys of Chand Bagh – a poor locality in Northeast Delhi. What followed was more than four and a half years of pre-trial incarceration. Trapped in a muddle of legal delays and procedural limbo, he waited as his case crept through the Indian judicial system until he was finally granted bail.
“I was beaten mercilessly; on some occasions, the assault was so severe that there was profuse bleeding and my flesh was torn. They used batons against me, and there were times I was kicked so brutally that I struggled to breathe,” he said about those days. The Wire has asked the commissioner of police, Delhi and the secretary, Union Ministry of Home Affairs, to respond to these allegations of custodial torture, but no response had been received till the time of publication.
Ali is one of the 29 accused in the Ratan Lal Murder Case (FIR 60/2020). The case pertains to the communal violence that erupted in Delhi on February 24, 2020, when, according to the police, protestors against the controversial Citizenship (Amendment) Act used sticks, baseball bats, iron rods and stones to attack policemen on Wazirabad Road, Chand Bagh. According to the chargesheet, Head Constable Ratan Lal was struck by a bullet and later succumbed to this injury.
Despite chargesheets asserting the presence of “sufficient material” against several of them, 27 of the 29 accused have been granted bail, and one has been discharged from the case. In the years spent behind bars, with no trial in sight, many, like Ali, have lost loved ones and seen their livelihoods vanish, and now face crushing debt.
How were they ‘identified’?
The investigating officer of the Ratan Lal Murder Case, Inspector Gurmeet Singh of the Crime Branch, told The Wire that the case was ‘solved’ using advanced technologies, including video and image enhancement tools (Amped FIVE by Amped Software) and facial recognition software (AI Vision by Innefu Labs). He confirmed that all the accused named in FIR 60/2020 were identified using these tools.
Special Public Prosecutor Amit Prasad, who has been representing the Delhi Police, told the Delhi high court during a hearing, “The usage of AMPED software based on digital recognition is sufficient to establish [the identity of] the correct individual.” Though Prasad refused to respond to The Wire’s queries, he confirmed that Amped tools were used by the police.
The police obtained CCTV footage from multiple alleys near the crime scene in Chand Bagh. Additionally, three private videos were submitted by onlookers: Harsh’s* video (1.48 minutes) recorded from Gym Body Fit Garage, Skyride Video (1.37 minutes), and Yamuna Vihar Video (40 seconds).
Defence counsel Raman*, who represented Ali in the Delhi high court, told The Wire that the police obtained CCTV footage from the surrounding alleys and selected frames capturing a full-frontal or even a side-profile view of each accused. He added, “These images were then fed into the facial recognition software and matched against the individuals appearing in the three private videos acquired by the police.”
Raman stated that Ali was ‘identified’ through facial recognition software: “Even the methodology of facial recognition was not disclosed, which is quite surprising. Based on what the prosecution said in court, I gather that the police had CCTV footage of the alleys/gullies near the scene of the crime. They seemed to have extracted images from that footage and matched them against the private videos (like Harsh’s*) using the FRS. Even if I assume that the CCTV footage was correct, I can surely say that the persons in the Harsh* video and the CCTV footage do not match.”
Advocate Raman explains that the person in Harsh’s video, who is seen pelting stones at the police, is wearing a black shirt with a white jacket, while Ali, in the CCTV footage of the main road, is wearing a different coloured shirt and no jacket. Describing the video, he says, “Furthermore, the Harsh video was a side profile of a person standing far back, rather against the wall. He could not be part of the melee of people conducting a physical attack. But the police allege that Ali threw a stone from roughly 50 to 100 meters away onto the police crowd. Whether the stone landed or caused any injury is neither mentioned nor established.”
Since his arrest, Ali has been battling with severe depression. The toll on his family has been equally devastating. His mother, who weighed around 68 kg in 2020, has withered to just 30–35 kg. Her blood sugar often spikes to critical levels, sometimes reaching as high as 500. Years of poverty and malnutrition in Ali’s absence have caused her to lose all her teeth, and she has now begun experiencing bleeding during urination.
Ali said, “Our financial situation has deteriorated to a point where we cannot even afford basic medical treatment for our mother. We are trapped in a severe financial crisis, burdened by a debt of nearly Rs 20–25 lakh, and it feels as though our lives have been pushed back by at least two decades.”
Speaking to The Wire, he also described systemic discrimination against Muslim prisoners in jail. He alleged that they were routinely humiliated, asked their names, and if identified as Muslim, were assigned degrading tasks. “We were forced to scrub toilets and mop floors with our bare hands, denied even basic cleaning tools like wipers,” he recalled. The abuse extended beyond physical violence. “I was constantly humiliated, called a terrorist, and subjected to unbearable psychological torment. I spent countless days crying and praying – as did my mother,” he alleged.
Speaking about the desperation during the COVID-19 lockdown, Ali explained, “Prisoners would fight for the slightest chance to help unload heavy supply trucks, just to earn 5–7 extra biscuits. The food rations were grossly insufficient – often just 50–100 ml of watery khichdi and two teaspoons of vegetables, barely enough to survive.”
Mohammed* is another accused in the Ratan Lal Murder Case, who, according to his lawyer Advocate Uday*, was also ‘identified’ using facial recognition software. He had to spend around two years behind bars without any trial before he finally walked out on bail. Despite being an undertrial and not yet convicted of any crime, he, like Ali, was also compelled to clean toilets and perform menial labour, even though prison labour for undertrials is supposed to be only voluntary. His bail pleas were rejected not once but four times, even as his wife struggled through a difficult pregnancy and his parents’ health rapidly declined in his absence.
He told The Wire that in one of the videos, the police had ‘identified’ a man as him, even though the individual was at least five inches taller than him, with noticeably different hair length, footwear and even a different number of shirt pockets. The ‘identification,’ he pointed out, hinged primarily on clothing – specifically a white shirt and black pants, an outfit worn by several individuals in the footage. This allegation raises serious concerns about not just the accuracy of the identification but also the efficacy of the tools used.
Advocate Uday took the high court through one of the private videos collected by the police. He argued that his client could not be clearly identified as he was not distinctly visible in the footage. Also, the clothes worn by him (white shirt and black pants) were similar to those of many others present in the video, thereby failing to establish his identity at the scene of the crime. Uday also pointed out that no available camera footage captures Mohammed damaging CCTV cameras, contrary to the prosecution’s allegations. He added, “The logo present on [his] white shirt appears in one video and is absent in another. Despite this, the police claimed that the two persons in the videos are the same.”
Both the defence counsels told The Wire that facial recognition was used to ‘identify’ Mohammed and Ali: “No Test Identification Parade (TIP) was conducted for either of them, which means whoever identified them already knew them and their physical description.”
In several cases, while granting bail to the accused, the court observed that both the authenticity of the video footage and the validity of its analysis are issues to be examined during the trial.
Uday contends that since all the accused are residents of the area where the incident occurred, their presence in the vicinity is not surprising. He further asserts about Mohammed, “His mere presence in the alley near his house does not conclusively establish that he was involved in rioting which took place near the main road. There are numerous cases of gang fights, free fights and communal rioting, where many people are just curious bystanders. We call it the ‘curious bystander exception’ to the principles of unlawful assembly.”
Mohammed, who once ran a modest store selling second-hand bags to support his family of five, has been left penniless after his incarceration. His shop is gone, and his livelihood is shattered. Today, he is forced to sell those bags on the pavement outside Jama Masjid – an existence that is not only precarious but also irregular, as he is frequently summoned for court appearances that interrupt any chance of stability.
Inside the prison, he was allegedly also subjected to discrimination and degrading treatment by police authorities solely because he was Muslim. He keeps repeating in a low, broken voice: “Jail bahut buri jagah hai (Jail is a very bad place).”
To survive and continue fighting his legal battle, Mohammed has been forced to take on loans amounting to several lakhs of rupees – a crushing burden that only grows heavier by the day. With no steady income, no end to his legal ordeal in sight, and a family still depending on him, he sees no hope.
After reviewing police reports, filing multiple RTI applications, speaking with investigating officers, defence lawyers and AI experts, and analysing court documents related to cases involving more than 50 accused individuals, The Wire found at least one case in which several accused were “identified” using facial recognition technology – sometimes based solely on side or even rear profiles captured in video footage. Despite the absence of any public witnesses confirming the presence of these accused at the crime scene, the police allegedly proceeded with the arrests.
Also, in response to an RTI filed in 2022, the Delhi Police acknowledged that facial recognition technology was used to investigate “over 750 cases related to the North East Delhi riots” and that the results were presented as evidence against those arrested. That accounts for at least 98.9% of all riot-related cases (758 in total) being “solved” with the help of facial recognition technology.
Similarly, in March 2020, Union home minister Amit Shah told the Rajya Sabha that 1,922 perpetrators had been identified through facial recognition software, comprising at least 73.3% of the 2,619 people arrested in connection with the riots till last year.
However, media reports indicate that more than 80% of the cases heard so far have resulted in acquittals or discharges, raising serious questions about the reliability of a technology the police appear to have relied on so heavily.
The cases of Ali and Mohammed underscore this troubling trend: both were arrested solely on the basis of facial recognition matches, without any corroborating evidence or credible public witness accounts. What drove the police to place such unwavering faith in a technology that is criticised globally for its inaccuracies, inherent biases and potential to cause wrongful incarceration and infringe upon human rights?
Why are the police rushing to use unaccountable AI technologies?
Months after Prime Minister Narendra Modi came to power, he rolled out the five-point concept of SMART policing (Strict and Sensitive, Modern and Mobile, Alert and Accountable, Reliable and Responsive, as well as Tech-savvy and Trained) at the 49th All India Conference of Director Generals/Inspector Generals of Police and head of all central police organisations on November 30, 2014.
Since then, police forces across Indian states have hastily plunged into the race to adopt Artificial Intelligence (AI)-based tools for law enforcement, with numerous awards and government initiatives introduced to encourage the trend. Many state police departments have begun relying on AI-based Automated Decision-Making Systems (ADMS). These include the Facial Recognition System (FRS) and Crime Mapping, Analytics & Predictive System (CMAPS) in Delhi, Trinetra and CrimeGPT in Uttar Pradesh, Punjab AI System (PAIS) in Punjab, Automatic Number Plate Recognition System (ANPR) in Madhya Pradesh, Artificial-intelligence based Human Efface Detection (ABHED) system in Rajasthan, and Telangana State Police – COP (TSCOP) in Telangana.
The widespread deployment of these AI-driven tools in law enforcement is unfolding in the absence of any regulatory framework or accountability mechanism to oversee their usage. Despite the critical implications of AI in policing, there are no comprehensive legislations or statutory guidelines to ensure ethical implementation, data security and safeguards against potential misuse.
In this spectacle of ‘revolutionising’ law enforcement through AI, several private companies – such as Innefu Labs, AMPED Software, Pelorus Technologies and Staqu Technologies – have emerged as prominent players. These firms have secured contracts with various state police forces, the Indian Army, intelligence agencies, the Election Commission of India, public sector banks, and other key government departments, positioning them as significant stakeholders in national security and governance. By supplying AI-based technologies to government agencies, these companies not only make a fortune but also allegedly gain access to vast amounts of data with significant commercial value.
The unchecked reliance on opaque, ‘black-box’ algorithms supplied by these private entities has fueled concerns over the rise of a surveillance state. Experts say that these AI systems, lacking transparency, perpetuate pre-existing biases against marginalised communities – including Muslims, Dalits and Adivasis – leading to discriminatory policing practices. Despite these problems, the police are allegedly acting solely based on facial recognition technologies in several instances and even making arrests in the absence of public witnesses or other corroborating evidence.
Facial recognition software used by the Delhi Police
An official document titled ‘Best Practices In Delhi Police’ announces digital initiatives taken by the police in alignment with the PM’s directives on SMART policing. It states, “Delhi Police has acquired the Facial Recognition System and integrated it with the Missing Children/Persons and Found Children/Persons module of the Zonal Integrated Network system (ZIPNET) to track the missing children reported missing from Delhi.”
The police confirmed this in a Right to Information (RTI) Act response to the Internet Freedom Foundation (IFF) dated February 20, 2020, and added that facial recognition software was procured under the direction of the Delhi high court in the Sadhan Haldar v NCT of Delhi case in March 2018.
However, it is pertinent to note that while the court order specifically directed the Delhi Police to obtain facial recognition technology for tracking missing children, the police admitted to using it for investigations as well.
The ‘Best Practices’ document also confirms that facial recognition is used for “surveillance and detection of suspects at crowded places like Railway stations, Bus terminals, and large gatherings like sports events, Public Rallies, etc.” Although it does not explicitly clarify whether ‘public rallies’ include protests, the police themselves admitted in RTI replies to using facial recognition software in investigations related to the farmers’ protest-Red Fort violence (2021) and the Jahangirpuri riots (2022) cases.
Srinivas Kodali, a hacktivist and tech transparency expert, told The Wire, “Facial recognition technology is being used for far more than just locating missing children. It is primarily used to track a broad list of people under the radar of law enforcement agencies. This includes criminals, former offenders, and persons of interest, including those suspected of having ties to terrorism, Naxalite movements, etc. Initially, various state police forces implemented their own facial recognition systems independently. But the State wanted to integrate all of them, which led to the proposal of the National Automated Facial Recognition System to interlink all of these cameras, allowing the Ministry of Home Affairs (MHA) to monitor every activity nationwide.”
How do the police use facial recognition?
Law enforcement agencies use facial recognition for ‘identification’ purposes – what is called a 1:many technique. It involves feeding the image of a person’s face, extracted from a photograph/video, into the software. The software then analyses the input image and attempts to find a match within the entire database maintained by the authority to confirm the identity of the individual.
The output is a list of potential matches, each accompanied by a ‘confidence score’ or ‘probability match’ which represents the likelihood that the suspect matches an individual in the database. A higher confidence score indicates a stronger likelihood that the system has correctly identified the suspect.
The police personnel operating the system then select a match from the list generated by the software. They also determine the minimum ‘confidence score’ required for a match to be considered a suspect. This decision, being subjective, has the chance of being significantly influenced by personal biases or prejudices against certain religions, races, classes or communities, potentially affecting the accuracy and fairness of the identification process.
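The workflow described above can be sketched in a few lines of code. This is an illustrative toy, not the Delhi Police’s actual system (whose methodology has not been disclosed): the embeddings, names and cosine-similarity matcher below are all assumptions standing in for whatever the proprietary software computes.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors, used here as a stand-in
    for a face matcher's 'confidence score'."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def identify_one_to_many(probe, database, threshold=0.8):
    """1:many identification: score the probe face against every
    enrolled face and return the candidates whose score clears the
    operator-chosen threshold, strongest match first."""
    scores = [(pid, cosine(probe, ref)) for pid, ref in database.items()]
    candidates = [(pid, s) for pid, s in scores if s >= threshold]
    return sorted(candidates, key=lambda item: item[1], reverse=True)

# Toy 'database' of enrolled face embeddings (hypothetical numbers)
db = {
    "person_A": [0.90, 0.10, 0.20],
    "person_B": [0.10, 0.90, 0.30],
    "person_C": [0.88, 0.15, 0.25],
}
probe = [0.89, 0.12, 0.22]  # face extracted from CCTV footage
print(identify_one_to_many(probe, db))
```

Note that the sketch returns a ranked *list* of candidates, not a single answer: two different enrolled people can both clear the threshold, and it is the human operator who picks one – exactly the subjective step the paragraph above describes.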
The absence of legal provisions
In 2017, the Supreme Court of India recognised privacy as a fundamental right guaranteed by the Constitution, requiring any State intrusion into it to meet four essential tests – legality, necessity, proportionality and procedural safeguards.
However, the Delhi Police have admitted in an RTI response that no legal opinion was sought prior to procuring facial recognition technology, and no specific rule governs its use. Privacy advocates argue that this could be viewed as a violation of the Supreme Court’s judgment, potentially making the Delhi Police’s use of facial recognition technology unlawful.
Nonetheless, the Delhi Police continue the rampant and unregulated use of facial recognition technology in its investigations.
1. In March 2020, Union home minister Amit Shah told the Rajya Sabha that more than 25 computers were analysing CCTV footage to identify perpetrators of the Northeast Delhi violence, and, till then, over 1,900 perpetrators had been identified through facial recognition software. In 2022, the Delhi Police admitted to using facial recognition technology to investigate “over 750 cases related to the North East Delhi riots” and presenting the results as evidence against the arrested individuals. However, they failed to specify the relevant sections of the Indian Penal Code and Code of Criminal Procedure under which such evidence would be made admissible in court.
2. Media reports revealed that at least since Prime Minister Modi’s Ramlila Maidan rally in December 2019, the Delhi Police have been using facial recognition software to screen crowds. A report in the Indian Express mentioned that it was the first time the police used a set of facial images collected from footage filmed at various protests in Delhi to identify ‘law and order suspects,’ ‘habitual protesters’ and ‘rowdy elements’ from the crowd at the rally. Since the protests against the Citizenship (Amendment) Act in Delhi, the police have routinely videotaped almost every major protest in the city, sometimes through drones. This footage helped build a dataset of ‘select protesters’, reportedly used to keep ‘miscreants who could raise slogans or banners’ out of the rally. The report stated, “Each attendee at the rally was caught on camera at the metal detector gate and live feed from there was matched with the facial dataset within five seconds at the control room set up at the venue.”
The report further stated that the Delhi Police has so far created a photo dataset of 1,50,000 ‘history sheeters’ for routine crime investigations, 2,000 images of terror suspects and a third category of ‘rabble-rousers and miscreants’ (no formal definition has been provided for this category).
After the issue came to light, Project Panoptic, run by the IFF, sent a legal notice to the home secretary, Ministry of Home Affairs and the Commissioner of Police, Delhi on December 28, 2019, asking them to halt the use of facial recognition in Delhi. IFF called this “an illegal act of mass surveillance”. The notice argued that facial recognition technology is also in “breach of the principle of proportionality”, emphasising that data collection must be necessary and evidence-based, and not blanket or indiscriminate, especially when lacking probable cause of suspicion.
Despite this notice and repeated concerns raised by digital rights organisations, the use of facial recognition by the Delhi Police has continued.
Responding to an RTI query filed by The Wire, the Provisioning and Logistics Department of the Delhi Police acknowledged on February 24, 2025, “A total of 6630 CCTV Cameras were installed in the area of 50 Police Stations of 12 Districts through M/s Bharat Electronics Limited. As per the provision of the contract, the Facial Recognition Technology feature is available in 10% of the CCTV Cameras as per user district requirements. The utilisation of the FRT feature in the CCTV Cameras is being done by the user districts as per their requirement.”
What is the accuracy of the Delhi Police’s facial recognition technology?
The ‘accuracy’ of facial recognition software refers to how well it can correctly identify a person from a database of known individuals while minimising false positives (incorrectly identifying someone as another person) and false negatives (failing to identify a known person) in various conditions. The accuracy can vary significantly under different conditions, such as lighting, camera angles, image quality, and facial expressions.
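These two error types can be made concrete with a toy evaluation. Assuming a matcher that outputs a same-person/different-person verdict for each pair of images (the six comparisons below are hypothetical, chosen only to illustrate the arithmetic):

```python
def error_rates(predictions, ground_truth):
    """False-positive rate (different people wrongly matched) and
    false-negative rate (same person missed) over a set of image-pair
    comparisons. Both lists hold booleans meaning 'same person'."""
    fp = sum(p and not g for p, g in zip(predictions, ground_truth))
    fn = sum(g and not p for p, g in zip(predictions, ground_truth))
    negatives = ground_truth.count(False)  # pairs of truly different people
    positives = ground_truth.count(True)   # pairs of the same person
    return fp / negatives, fn / positives

# Six hypothetical pair comparisons: what the matcher said vs the truth
truth = [True, False, False, True, True, False]
preds = [True, True,  False, True, False, False]
fpr, fnr = error_rates(preds, truth)
print(f"FPR={fpr:.2f}, FNR={fnr:.2f}")
```

For policing, the two rates carry very different costs: a false negative means a suspect goes unidentified, while a false positive can mean an innocent person is arrested.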
Technology expert Jake Laperruque wrote for Project On Government Oversight in 2018, “Facial recognition accuracy can vary significantly based on a wide range of factors, such as camera quality, light, distance, database size, algorithm, and the subject’s race and gender.” He warned that even a 90% accuracy rate, promised by some of the most advanced systems, is “an unacceptable risk when the end result is the possible arrest of or even the use of force (including deadly force) against an innocent person”.
For reference, before deploying its facial recognition system in early 2011, the US Federal Bureau of Investigation (FBI) conducted a test that found “roughly one in seven searches of the FBI system returned a list of entirely innocent candidates, even though the actual target was in the database”. This occurred despite the software achieving an 86% accuracy rate in those tests.
In contrast, nearly eight years later, in 2018, the Delhi Police informed the high court that their facial recognition software had an accuracy rate of just 2%. By 2019, this figure had dropped below 1%, prompting the Ministry of Women and Child Development to acknowledge that the system could not reliably distinguish between boys and girls.
In the case of Sadhan Haldar v. NCT of Delhi, heard on January 22, 2019, the Delhi high court expressed concern over the technology’s ineffectiveness. Referring to over 5,000 missing children cases in Delhi over the previous three years, the bench observed: “We are told that the use of ‘Facial Recognition Software’ has not helped in cracking any case of missing children so far, which comes as a surprise. It is most unacceptable that the software adopted by the Delhi Police after due diligence has not borne any results.”
While Innefu Labs, a private vendor supplying facial recognition technology to the Delhi Police, claims on its website that its facial recognition system achieved an accuracy of 98.3% (as of June 11, 2025), the Delhi Police have not yet disclosed the accuracy rate of their facial recognition technology in response to multiple RTIs filed by The Wire. When contacted, Innefu Labs’ co-founder and CEO, Tarun Wig, refused to comment.
Adding to the lack of transparency around its accuracy rate, the Delhi Police have also set a notably low threshold of ‘confidence score’ for classifying a match as ‘positive’. In response to an RTI in 2022, they revealed that “All matches above 80% similarity are treated as positive results while matches below 80% similarity are treated as false positive results which require additional ‘corroborative evidence.’”
In 2018, the American Civil Liberties Union (ACLU) conducted a test on Amazon’s facial recognition tool, ‘Rekognition’. The test found that the software incorrectly matched 28 members of the US Congress with individuals in a criminal database, falsely identifying them as people who had been arrested for a crime. The Congress members of colour were incorrectly matched at disproportionately higher rates. The ACLU conducted the test using the software’s default ‘confidence threshold’ of 80% – the same threshold used by the Delhi Police.
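The effect of that threshold choice is easy to demonstrate with a sketch. Given a set of similarity scores for impostor pairs (images of genuinely different people) – the scores below are invented for illustration – lowering the cut-off from 95% to the 80% used by the Delhi Police multiplies the false matches:

```python
def false_matches_at(threshold, impostor_scores):
    """Count impostor pairs (genuinely different people) whose similarity
    score crosses the threshold -- each is a potential wrongful match."""
    return sum(1 for s in impostor_scores if s >= threshold)

# Hypothetical similarity scores for seven impostor pairs
impostor_scores = [0.62, 0.71, 0.79, 0.81, 0.84, 0.88, 0.93]

for threshold in (0.95, 0.90, 0.80):
    print(threshold, false_matches_at(threshold, impostor_scores))
```

In this toy data, a 0.95 cut-off yields no false matches, 0.90 yields one, and 0.80 yields four – which is why Amazon itself has recommended a far higher threshold than the Rekognition default for law-enforcement use.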
In such a scenario, “The Police could continue to investigate anyone who may have gotten a very low score. Thus, any person who looks even slightly similar could end up being targeted, which could result in targeting of communities who have been historically targeted,” argues researcher Anushka Jain, former policy counsel at IFF.
Several lawyers told The Wire about another legal blind spot: the police aren’t legally bound to disclose to the accused if they were identified using facial recognition. This denies the accused an opportunity to challenge the identification process or the reliability of the match. A Project Panoptic article argues, “The defendant should also be provided with access to the software’s source code to meaningfully challenge the evidence presented against them.”
Even the supplier of facial recognition technology to the Delhi Police – Pelorus Technologies – acknowledged the limitations of the tool. When asked by The Wire whether the tool’s accuracy is affected in poorly lit environments such as alleys or gullies, CEO Rahul Dwivedi admitted, “Yeah, of course! It depends on the camera, it depends on the light, it depends on the angle of the image it takes.”
How well-trained are the Delhi Police to use AI tools?
The Bureau of Police Research and Development (BPR&D) serves as the central nodal agency for police training across India. It is responsible for designing training modules and implementing capacity-building programmes for law enforcement personnel. At the state level, individual police departments also conduct their own training initiatives.
The Wire thoroughly examined various training manuals used by the Delhi Police. We found that the ‘National Syllabus for Directly Recruited Sub-Inspectors’ includes only a single mention of AI. Out of the 2,620 instructional periods, just one each is dedicated to AI and facial recognition technology, showing that emerging technologies in modern policing aren’t given much importance when it comes to training.
In 2024, the Specialised Training Centre in Rajendra Nagar conducted some specialised training sessions for Delhi Police personnel. All personnel ranking from head constable to inspector were given only a half-day course on videography and photography at crime scenes, held on four occasions across the year. Sub-Inspectors (SIs) to Assistant Commissioners of Police (ACPs) were provided with merely a one-day training on “Social Media Investigation and Open Source Intelligence (OSINT)”, also conducted four times. One-day sessions on CCTV footage handling, DVR forensics and “emerging trends in forensic science and contemporary forensic techniques” were offered to SIs and inspectors on four separate dates each.
The Chanakyapuri-based Academy for SMART Policing offered specialised workshops on “handling CCTV footage,” “drone technology,” “social media investigations”, and “open-source intelligence”, each limited to just two half-day sessions across 2024. Also, training was provided exclusively to officers at the rank of ACP and above. This limited scope raises concerns about effectiveness, as frontline investigations are primarily conducted by inspector-level officers who were excluded from these sessions.
The Dwarka-based Cyber Training Division offered no training on “facial recognition technology” in 2024.
The Status of Policing in India Report 2019, by the NGO Common Cause, assessed police capacity and adequacy across states from 2012 to 2016. The report found, “In Delhi, in-service training is imparted to almost all the higher rank officers every year”, but training for Constables and Sub-Inspectors/Assistant Sub-Inspectors remained “very low”. Only 11.7% of police personnel received in-service training during the period, with just 2.49% of the total Delhi Police budget spent on training.
The Wire spoke to Aakansha Saxena, assistant professor at the Rashtriya Raksha University (RRU), a Gujarat-based institution offering specialised training to police forces in states including Gujarat, Punjab, Karnataka, Delhi and Odisha. Saxena, who heads RRU’s Centre for Artificial Intelligence, expressed uncertainty about the extent of facial recognition technology training within the Delhi Police, stating, “I don’t know whether all the police officers are trained or not, but yes, some of them have definitely been trained.” She also confirmed that Delhi Police’s facial recognition systems are linked to the “Aadhaar (UIDAI)” and “driving license” databases.
Surprisingly, despite her role in training Delhi Police personnel, Saxena was unaware of key details like the facial recognition system’s developer company, accuracy or other technical specifications.
Saxena explained that training curricula are jointly developed by the university’s dean and vice-chancellor, in collaboration with senior police officers. She noted, “Since police officers are not adaptive to this new technology [AI], they want to learn how to integrate it into their investigations…despite having other substitute methods.” When asked whether AI tools are used in nearly every case, she confirmed that they are, highlighting growing reliance on such technologies in policing.
Saxena stated that the training programme educates police officers on threat intelligence, AI-powered targeting systems, the development of facial recognition systems, identification of their vulnerabilities, methods that attackers may use to exploit them and defensive strategies. Officers are also trained on deepfakes – what they are, how they’re created and how to differentiate between deepfakes and authentic content.
When asked about allegations of wrong identification by Delhi Police’s facial recognition technology, Saxena acknowledged it was possible, noting officers “are well aware of how the accuracy of the FRT can depend on several things and there is no guarantee of correct identification in every case”.
When asked to share the training modules provided to the Delhi Police, the RRU administration did not respond.
Are the Delhi Police upholding their own guidelines?
The Standard Operating Procedure (SOP) outlined in the Bureau of Police Research and Development’s Compendium of Scenarios for Investigating Officers (2024) mandates that, in case of a riot, the investigating officer (IO) should run the accused’s photo, if obtained from CCTV, through facial recognition software to gather leads on their identity.
This directive raises a critical question: If facial recognition is an integral part of the Delhi Police’s SOP, why is it given minimal emphasis in training? The disparity between its prescribed use in investigations and its limited emphasis in training suggests a gap in preparedness and effective implementation.
The SOP clearly states that “there should be no delay in the Test Identification of the accused,” as it is “an important aspect in the investigation”. However, The Wire’s analysis of court documents from cases involving at least 50 accused individuals found that Test Identification Parades (TIPs) were not conducted for any of them. Despite the clear directive, this vital procedure was largely absent, underlining the lack of adherence to investigative protocols. The Wire has asked the Delhi Police and the Union Ministry of Home Affairs about this lapse, but no response has been received so far.
The SOP also outlines additional investigative steps, stating that the IO may also weigh the possibility of advanced scientific tests like Gait Analysis. Despite this directive, none of the training programmes or workshops focused specifically on Gait Analysis or other advanced AI-based investigative tools.
CCTV + drones + facial recognition = mass surveillance?
Delhi is among the most surveilled cities in India. In August 2022, during a meeting at the Delhi Police headquarters, home minister Amit Shah said, “Surveillance is a major component of policing in preventing and investigating crime,” and recommended integrating all CCTV systems, including those in public spaces and by civil bodies, with the police control room.
However, studies suggest that CCTV surveillance isn’t distributed evenly across all parts of Delhi. The deployment of CCTV cameras, instances of over-policing, and patterns of discriminatory targeting are significantly more concentrated in certain neighbourhoods.
An empirical study, conducted by Vidhi Centre for Legal Policy in 2021, revealed that the use of facial recognition technology by the Delhi Police “will almost inevitably disproportionately affect Muslims, particularly those living in over-policed areas like Old Delhi or Nizamuddin”, potentially increasing their likelihood of being targeted by law enforcement.
In her research, Jai Vipra, an AI policy scholar at Cornell University, highlights the structural inequalities embedded in surveillance practices in Delhi. Through her paper, she demonstrates that “two factors in particular – the uneven distribution of police stations across space, and the uneven distribution of CCTV cameras across space – are likely to result in a surveillance bias against certain sections of society more than others in Delhi.”
The study observes, “In Delhi, our data reveals that two kinds of areas are much more policed than others: (1) areas housing government and diplomatic offices, i.e., Central Delhi; and (2) areas with a proportionally higher Muslim population. These areas have a higher proportion of police stations compared to their relatively lower population. Technology, especially that of predictive policing, constitutes an intensification of policing in this form and can disproportionately target Muslims in Delhi.”
The paper further elaborates, “FRT in policing in Delhi is likely to employ data used from CCTV cameras across the city. This would mean that areas with relatively more CCTV cameras would be over-surveilled, over-policed and thus subject to more errors than other areas… The evident over-policing of Muslim areas can result in the use of FRT in policing in Delhi disproportionately targeting Muslims.”
Statements by police officials also suggest heightened policing in Muslim-majority areas such as North East Delhi. Referring to the 2020 Delhi riots, Joy Tirkey, Deputy Commissioner of Police (North East Delhi), stated in February 2024, “If one area in Delhi needs actual ground-level policing, then it is the northeast district… Not only is the northeast district prone to crime, but it is also a communally sensitive area.” He added, “Since the 2020 riots, we have been closely associated with the northeast district.”
Kodali noted, “Surveillance is not limited to CCTVs. Facial recognition is also deployed through mobile phones [police stopping people at random and taking photos], particularly in cities like Hyderabad and Delhi.”
Do the police really care about privacy?
The Delhi Police have admitted that no privacy impact assessment was conducted before the deployment of facial recognition technology. They further stated, “While investigating any case, the investigating officer is empowered as per law to explore all possible information to identify and legally prosecute the offender.”
The police also refused to reveal the stage of the investigation at which the use of facial recognition technology is generally brought in. Regarding the databases used in conjunction with facial recognition technology, the Delhi Police said, “Convict photographs and dossier photographs maintained with police under section 3 & 4 Identification of Prisoners Act 1920.” They refused to provide an exhaustive list of the databases linked to the system.
The Identification of Prisoners Act, 1920 has now been repealed and replaced by the Criminal Procedure (Identification) Act, 2022 (CPIA), which allows for wider categories of data (fingerprints, palmprints, footprints, iris and retina scans, and other physical and biological samples) to be collected, analysed and stored from any person who has been arrested, whether undertrials or convicts – an incredibly large-scale collection of personal data.
ISO/IEC 27001 is the international standard for information security management, offering a structured framework for risk assessment and the implementation of security controls. When asked if their facial recognition technology complies with this standard, the Delhi Police responded to The Wire, “No such ISO/IEC 27001 was asked [for] in the tender.”
Anushka Jain told The Wire, “Since we are not aware of the level of protection and security provided to Delhi Police’s facial recognition technology data, we cannot know if the company(ies) supplying the technology have access to the Delhi Police’s data.”
What do experts recommend?
Concerns around the misuse of facial recognition technology are not specific to India. Globally, data privacy and civil liberties advocates have voiced serious objections about its unchecked deployment. The American Civil Liberties Union has cautioned that facial recognition can be used “in a passive way that doesn’t require the knowledge, consent, or participation of the subject.” Organisations such as the Electronic Frontier Foundation, Algorithmic Justice League and Amnesty International have called for a moratorium and even outright bans on the use of this technology.
A report by The Brookings Institution’s Artificial Intelligence and Emerging Technology Initiative listed some guardrails that people should be demanding when it comes to facial recognition software:
1. There should be limits on how long data should be stored.
2. Data sharing should be restricted.
3. There should be a clear notification when facial capture is being done.
4. Minimum accuracy standards need to be met.
5. Third-party audits are also required.
6. Collateral information collection (metadata) must be minimised.
These recommendations highlight the urgent need for comprehensive regulation and oversight – something that remains notably absent in many countries, including India, even as they race to use facial recognition in policing.
The Wire has sent the commissioner of police, Delhi and secretary, Union Ministry of Home Affairs, a list of detailed questions about the use of AI tools and facial recognition technology, the alleged inaccuracies and procedural lapses, and the opacity involved in the use of these technologies. The police commissioner’s office said that these queries have been forwarded to the Special Branch of the Delhi Police, which is handling the 2020 Delhi riots cases, but no further response has been received despite reminders.
*Names of accused persons and their lawyers have been changed to protect the identities of the accused.
Part 2
Looking Back at the Camera: How Controversial AI Firms Shaped Delhi’s Predictive Policing
New Delhi: The Bureau of Police Research and Development, in its Compendium of Equipment (From April 2014 to March 2019), Volume II, noted that the Delhi Police bought two facial recognition systems on March 28, 2018. While the software was originally manufactured by Innefu Labs Pvt Ltd, it was supplied to the Delhi Police by a vendor named Pelorus Technologies Pvt. Ltd. Both of these companies – and others who have sold AI technology to the Delhi Police – are worth looking into. An investigation by Amnesty International has linked Innefu Labs to the illegitimate use of spyware against human rights activists elsewhere in the world.
Tender documents obtained through a Right to Information (RTI) response revealed that the two facial recognition systems – (01 Static and 01 Portable) – were supplied to the Delhi Police’s Crime Unit for two purposes. First, to match missing persons with unidentified persons found. Second, to compare criminal photographs/portraits seized/prepared during investigations with online criminal dossier systems.
According to the supply order sent from the Office of the Deputy Commissioner of Police, Prov. & Logistics, Delhi, these systems cost a total of Rs 1,02,66,000 (roughly $120,000), which was spent from the central funds of the Indian government.
Apart from this, Pelorus Technologies was also awarded a two-year comprehensive annual maintenance contract at a cost of Rs 16,99,200 for two years – Rs 8,49,600 per annum – including GST after the expiry of the three-year warranty.
Innefu Labs Pvt. Ltd has also published a detailed report on their collaboration with the Delhi Police – ‘Case Study Delhi Police’ – which confirms these details. Another post about AI Vision – the facial recognition system designed by Innefu Labs – states, “Delhi police also used the same product [AI Vision] to identify potential rioters in the 2020 North East Delhi riots. The product achievement found its mention in the parliamentary speech of Hon. Home Minister Mr. Amit Shah while discussing the Delhi riots.” The post further claims, “During the first four days of its implementation, Innefu’s product [AI Vision] helped in identifying 3000 missing children. National Commission for Protection of Child Rights also advocated the use of this software to help trace missing children.”
On March 12, 2020, Union home minister Amit Shah informed parliament that 1,922 perpetrators of the Northeast Delhi riots had been identified using facial recognition on the video footage received from the public.
Innefu Labs: The manufacturer
Innefu Labs describes itself as an “AI-driven company developing cutting-edge technology to carry out Predictive Intelligence and Cyber Security solutions.” According to its website, the company’s “intelligent biometric and facial recognition software is being used in multiple organisations across India & the Middle East” and its “AI-based data analytics & Machine Learning solutions are in use with multiple Law Enforcement Agencies across Asia”.
An investigation by The Wire into the background of Innefu Labs has revealed multiple allegations implicating the company in serious human rights violations.
Case 1: Attack on a human rights activist in Togo
In 2021, Amnesty International uncovered a targeted digital attack campaign against a prominent human rights activist in Togo, West Africa. According to the findings, the activist was targeted in late 2019 and early 2020 using spyware deployed on both Android and Windows devices.
An investigation by Amnesty International Security Lab found that the spyware used in these attack attempts is tied to an attacker group known in the cybersecurity industry as the Donot Team, which has previously been connected to attacks in India, Pakistan and neighbouring countries in South Asia. Digital records identified during this investigation revealed that hundreds of individuals across South Asia and the Middle East were targeted using the Donot Team’s Android spyware.
The investigation also exposed a link between the Donot Team spyware and a company supplying AI-based predictive policing tools to Indian law enforcement agencies. The company was Innefu Labs Pvt. Ltd.
Key evidence linking Innefu Labs to the spyware
Amnesty International found two key pieces of evidence connecting Innefu Labs to the Donot Team Android spyware and to the specific infrastructure used to deliver the Android spyware to the activist in Togo.
1. Investigators found a screenshot from an infected test Android phone exposed on a Donot Team server. The screenshot contained two keyboard suggestions – URLs – one of which was an IP address tied to Innefu Labs. The Innefu Labs IP address would only be suggested by the keyboard if the attacker using the test phone had previously interacted with it.
2. The same Innefu Labs IP address was recorded in log files left publicly exposed on the bulk[.]fun website which was used to distribute Donot Team spyware.
This is significant as it shows that the Innefu Labs IP address was not only linked to the testing of the Donot Team Android spyware, but also to the specific internet infrastructure involved in the distribution/deployment of some of the spyware tools which have been previously linked to the Donot Team. The Amnesty International report states, “The technical evidence suggests that Innefu Labs is involved in the development or deployment of some Donot Team spyware tools. These tools may then be used by a range of hacker-for-hire actors, which are grouped under the ‘Donot Team’ cluster.”
The report, however, clarifies that there is no sufficient evidence to indicate whether Innefu Labs had any direct involvement or knowledge of the targeting of the Togolese activist. It notes, “Although the Innefu Labs IP address is connected to both the spyware distribution website and the Donot Team spyware, Innefu Labs may not necessarily know how any third parties are using these spyware tools.”
Nonetheless, the report underscores the broader concern: “This case highlights the threat ‘hacker-for-hire’-type attacks pose to human rights defenders and civil society globally. ‘Hacker-for-hire’ attacks are offensive cyber operations performed by a threat actor (“a hacker”) normally on behalf of paying customers. These customers may include domestic government agencies, foreign governments or commercial entities.”
It further observes that multiple “hacker-for-hire” companies have advertised themselves as “legitimate cybersecurity services while covertly carrying out offensive digital attacks for their clients.”
Innefu Labs’ response
In response to the queries sent by Amnesty International, Innefu Labs denied any knowledge of the Donot Team and stated that it had not exported digital surveillance tools or services to any country, including authorities in Togo.
However, the company also acknowledged that it does not have a human rights policy. When asked whether it follows any process for carrying out human rights due diligence, Innefu Labs did not provide any information.
Amnesty International notes, “That companies like Innefu Labs and BellTroX [another Indian cybersecurity company linked to several attacks] are operating in India without adequate regulation is a serious concern for human rights.” It also points out that India, as a member of the Wassenaar Arrangement – a voluntary export control regime of 42 nations committed to regulating the transfer of conventional weapons and dual-use technologies – has pledged to implement export controls on targeted surveillance technologies.
Case 2: A major data breach
In January 2024, a hacker operating under the alias ‘PreciousMadness’ claimed on the Russian Anonymous Marketplace (RAMP) – a forum on the dark web – to have gained unauthorised access to Innefu Labs’ internal systems. The hacker’s offering included (a) unauthorised access to crucial components of Innefu’s infrastructure, such as Fortinet VPN and Microsoft 365 Services, priced at $1,300; and (b) 54 GB of exfiltrated data, available at additional cost. ‘PreciousMadness’ also advertised the same unauthorised access on other platforms like XSS and Exploit Forums.
An investigation done by The Cyber Express (a cybersecurity news publication) with the help of multiple independent researchers revealed, “The data breach at Innefu Labs has led to the exposure of sensitive information belonging to various Indian and overseas entities. This includes individuals, major conglomerates, politicians, and even agencies of the Indian government.”
A pressing question emerges: Why do the Delhi Police continue to collaborate with Innefu Labs – a company with a troubling history, including a data breach allegedly carried out by the hacker ‘PreciousMadness’ and reported links to targeted surveillance technologies such as the Donot Team spyware?
Despite multiple attempts to secure an interview and the submission of detailed queries, Innefu Labs CEO Tarun Wig declined to respond.
Other tools provided by Innefu Labs
In addition to facial recognition software, Innefu Labs provides a suite of tools designed to support law enforcement operations. In a blog post titled ‘Radicalisation on social media’, the company notes that its social media monitoring tools “analyse online behaviour patterns” and deploy Open Source Intelligence (OSINT) to track the preferences, interests, opinions, etc. of “individuals who may be at the risk of being radicalised”. The post further states that this monitoring can, later, “help in developing selected interventions”.
One such tool, Innsight – described as a “big data OSINT tool built on AI-ML capabilities” – is deployed by law enforcement and intelligence agencies to “track and analyse open channels of mass communication for surveillance and intelligence gathering”. As stated on the company’s blog, “Innsight is actively helping our clients to identify key actors behind protests or riots, daily monitoring of social media activities, bot analysis, link analysis for network detection, and social media reports.”
Innefu Labs has listed Delhi Police, Tamil Nadu Police, the Central Reserve Police Force (CRPF) and the Defence Research and Development Organisation (DRDO) as the featured customers of its Innsight tool on LinkedIn. The company has advertised Innsight as “one tool for 360-degree profiling of people, post-event analysis by ingesting data from social media, leaked databases, WhatsApp groups and Telegram channels to identify key influencers”.
In a separate blog post, Innefu Labs asserts that the OSINT analysis conducted through Innsight uncovered “unforeseen propaganda in action” during the Citizenship (Amendment) Act/National Register of Citizens protests. The post further claims, “It was discovered that the issue was appropriated by foreign forces to destabilise the harmony of the country and the government, thereby creating internal conflict. Pakistan is recorded to be the hotspot and point of origin of anti-India narrative, calling India a ‘Terror State’ and ‘Nazi.’ Bots and trolls are being used as pawns to peddle misinformation, and rumours are also being circulated about Police brutality and the concentration camps in India.”
An intelligence report by the company, titled ‘CAB 2019: Mass Polarisation over Social Media’, identifies key international influencers and personalities “taking a negative stance on the bill while influencing the general public sentiment”. The report details how Innefu’s tools assisted the police in uncovering the ‘polarisation’ surrounding the CAA, stating that the protests aimed “to not only spread a negative image of India globally but to also create internal proxy war, propaganda videos and information of Indian Police Brutality”.
Pelorus Technologies: The supplier
Pelorus Technologies, a Mumbai-based company, is a distributor of specialised digital forensics, intelligence and surveillance technology to law enforcement agencies, governments and private enterprises. It supplies Delhi Police with the facial recognition system developed by Innefu Labs. In addition to this, Pelorus Technologies markets several other Innefu products like Call Data Record (CDR) Analysis Software, and describes Innefu Labs as its in-house software development and research wing.
In March 2018, Pelorus Technologies also helped the Delhi Police in setting up a Mini Forensic Science Institute lab at the Police Training School in Dwarka, Delhi. As mentioned in the Compendium of Equipment (2014-19), Volume I by the Bureau of Police Research and Development, Pelorus Technologies won the contract of Rs 50,69,962 from the central funds of the Government of India.
The Delhi Police also deploy a variety of other forensic and surveillance tools, many of which have been repeatedly linked with allegations of human rights violations. The Union government’s decision to collaborate with the companies manufacturing these tools – detailed below – could have serious implications on the future of policing in India and the nature of companies that will be entrusted with supplying AI and forensic technologies to law enforcement agencies.
According to the Compendium of Equipment (April 2019 to March 2020) — Volume II, published by the BPR&D, the mobile phone forensics and analysis tools manufactured by Swedish firm MSAB, Israeli company Cellebrite and Russia’s Oxygen Forensics were sold to the National Cyber Forensic Lab by a third-party distributor named 3rd Eye Techno Solutions. Like Pelorus and Innefu Labs, these companies too have international track records that could raise eyebrows.
In connection with the Ratan Lal murder case, the Delhi Police received a report from the Cyber Forensics Lab at CERT-In on the analysis of digital evidence such as mobile phones, hard drives and other electronic devices. The report – titled ‘Digital Forensic Data Retrieval & Analysis Report’ and accessed by The Wire – listed MSAB’s XRY and Cellebrite’s Universal Forensic Extraction Device (UFED) and UFED Physical Analyser among the cyber forensic tools employed by CERT-In during the investigation.
In March 2025, the Ministry of Home Affairs approved a request by the Delhi Police to exempt several such technologies from the Make in India clause. This exemption permits the procurement of these products from foreign vendors, bypassing the usual preference for domestic suppliers. Such waivers are typically granted when equivalent goods or services are not readily available from Indian manufacturers or when certain eligibility criteria are not met.
The tools approved for exemption included a range of high-powered forensic and surveillance technologies sourced from global vendors:
1. MSAB’s extraction software XRY
2. Two tools from Israeli giant Cellebrite: (1) UFED 4PC and (2) Digital Collector and Inspector.
3. Detective by the Russian company Oxygen Forensics
4. MOBILEdit by the Czech firm Compelson Labs
5. Two tools from the Canadian company Magnet Forensics: Axiom and Outrider
6. Tools from Netherlands-based Foclar: Impress and Mandate
7. Evidence Center by US-based Belkasoft
8. Recon ITR and Recon Lab from the American firm Sumuri
9. OSForensics from the Australian company PassMark Software
Cellebrite and its troubling human rights record
The Delhi Police make extensive use of Cellebrite’s mobile and cyber forensic tools. The Israeli company develops technologies that enable law enforcement agencies, enterprise firms and service providers to collect, review, analyse and manage digital data. Its flagship product line, the Universal Forensic Extraction Device (UFED), is widely used for mobile phone data extraction and analysis.
Founded in 1999, Cellebrite claims to serve “6,900 customers across federal, state, local, and enterprise sectors, including 90% of relevant public safety agencies.” According to the company, its tools have been used in over 5 million investigations across the globe.
However, Cellebrite’s products have repeatedly been at the centre of international controversy. They have allegedly been deployed by several authoritarian regimes to facilitate unauthorised surveillance, suppression of dissent, and human rights abuses.
In December 2024, Amnesty International reported that Serbian police and intelligence services used Cellebrite’s UFED tools to unlock the phones of journalists and activists. Thereafter, a spyware – identified as NoviSpy – was allegedly installed during detention to enable remote surveillance and full access to devices, including camera and microphone control.
In 2021, Botswana police reportedly used Cellebrite technology on multiple occasions to extract data from the phones of journalists, student activists and opposition leaders.
According to reports by the Committee to Protect Journalists and the Israeli newspaper Haaretz, the head of Russia’s Investigative Committee stated in 2020 that law enforcement agencies had probed cellphones 26,000 times the previous year using Cellebrite’s data extraction tools. In 2021, Cellebrite announced that it had stopped selling to Russia and Belarus, but Russian investigative agencies continued to reference the company’s products in official reports and training materials as late as 2022.
In July 2020, during a court hearing, filings by the Hong Kong police revealed that Cellebrite’s phone hacking technology had been used to break into 4,000 phones of Hong Kong citizens, including prominent pro-democracy politicians and activists.
In Bahrain, authorities allegedly used UFED to extract private WhatsApp messages and photos from the phone of a political activist shortly after his arrest in 2013. The extracted data – marked “evidence” in his trial – was presented by Bahraini prosecutors and played a key role in his conviction, despite his claims of having been tortured in custody.
The list goes on. Cellebrite’s phone extraction tools have also drawn sharp criticism from privacy advocacy groups worldwide, many of whom have called for moratoriums on their use due to the ease with which they can extract personal data from smartphones.
MSAB: Another controversial vendor
Headquartered in Sweden, MSAB is a renowned world leader in mobile forensic technology, supported by a global network of distributors. Its flagship software, XRY, enables the extraction and analysis of data such as contacts, call logs, pictures, SMS, MMS and application data from mobile devices.
XRY has faced widespread criticism for its role in digital privacy breaches and its potential use in surveillance attacks on human rights activists and journalists. The technology can retrieve passwords and authentication tokens, enabling authorities to remotely access users’ online accounts such as Google, Facebook and cloud storage services. These capabilities have made the software popular with law enforcement and intelligence agencies, military organisations and forensic laboratories in over 100 countries worldwide, including India.
MSAB’s role in crackdown on protesters in Myanmar
Media reports indicate that MSAB’s technology has allegedly been used by Myanmar’s security forces to extract data from the mobile phones of protesters, activists, journalists and civilians.
Thousands of people were arrested, charged or sentenced as part of the crackdown on nationwide protests in Myanmar following the 2021 military coup. According to the Assistance Association for Political Prisoners, a Myanmar civil society group, “Some of them may have had information extracted from their phones by security officers using digital forensic technology bought from Western companies before the coup.”
MSAB confirmed selling its forensic tools to the Myanmar police in 2019. Also, leaked 2019 budget documents from Myanmar’s Ministry of Home Affairs revealed allocations for the procurement of MSAB units for the 2020–2021 fiscal year. According to the notations in the budget, as reported by The New York Times, those MSAB field units could download the contents of mobile devices and recover deleted items.
In response to these allegations, MSAB has said that limited technology was sold to police working for a civilian government and that the licenses for these forensic devices were cancelled after the 2021 coup. However, its earlier products (sold in 2019) still remain in the hands of Myanmar’s security forces, and MSAB’s own website states that its products can be used even with an expired license.
Massive data leak involving Cellebrite and MSAB
In January 2023, a total of 1.83 terabytes of data was leaked from two leading digital forensics companies – Cellebrite and MSAB. By that time, the Delhi Police had already partnered with both companies and had begun using their tools.
The data was reportedly shared with the hacktivist collective Enlace Hacktivista by an “anonymous whistleblower”. Approximately 1.7 terabytes of the total leaked data belonged to Cellebrite, and another 103 GB to MSAB. It was believed to include system information, technical documentation and some customer-related documents. However, the leaked data did not appear to contain specific client identities.
Enlace Hacktivista issued a statement saying, “An anonymous whistleblower sent us phone forensics software and documentation from Cellebrite and MSAB. These companies sell to police and governments around the world who use them to collect information from the phones of journalists, activists and dissidents. Both companies’ software is well documented as being used in human rights abuses.”
In response, an MSAB spokesperson denied any breach, stating: “MSAB has not been hacked. All customer data is safe, and so are all systems, code, or information internal to MSAB.”
Despite the scale of the alleged breach, the Delhi Police did not sever ties with either firm. In fact, as recently as January 2025, the Ministry of Home Affairs approved a request from the Delhi Police to exempt Cellebrite and MSAB tools, among others, from the Make in India procurement clause, paving the way for continued purchases.
Pelorus Technologies’ partnership with Cellebrite and MSAB
Pelorus Technologies is a key partner of Cellebrite’s and was recently honoured with the “Outstanding Partner Excellence Award – APAC” by the company. Pelorus Technologies also lists a range of Cellebrite tools among its mobile forensics offerings. Their partnership is confirmed by Cellebrite’s official website, which refers to Pelorus Technologies as their “channel partner” for Milipol India 2025 – an international event on homeland security supported by the Ministry of Home Affairs.
In early 2021, Pelorus Technologies also announced a strategic collaboration with MSAB. As its national distributor for India, Pelorus is responsible for delivering MSAB’s advanced digital intelligence solutions to the Indian investigative and law enforcement agencies.
Highlighting his company’s association with MSAB, Pelorus Technologies CEO Dwivedi told the media in February 2021, “MSAB’s focus is on helping investigative agencies with improved data collection and analysis. At Pelorus, we are also keen to improve intelligence technology available in the country to create safer communities. Thus, our goals are the same – making the world a safer place to live in.”
When asked about the collaboration despite MSAB’s controversial background, Dwivedi told The Wire, “The companies we collaborate with have to be credible, have to be doing the right thing, should not have to have a history of doing any wrong. We tie up with them only if they do not have a history of supporting any illegal thing in the world.” He added, “What we understand is that MSAB banned Myanmar four years ago, and they are not supplying any equipment for Myanmar anymore.”
About the concerns around data privacy, Dwivedi said, “We never get into the privacy part. Once we apply to law enforcement agencies, it is their responsibility to handle it with proper care and not do illegal things.”
Using the commercial intelligence platform Sayari, The Wire tracked shipments imported by Pelorus Technologies Ltd that were categorised as “computers and data processing machines.”
As of June 9, 2025, our investigation identified 118 shipments from Cellebrite Asia Pacific (Cellebrite’s Singaporean subsidiary) and a further 23 shipments from MSAB — both companies widely tied to supplying surveillance and digital forensics tools.
Image/video enhancement tools: The precursor to facial recognition analysis
The images/videos derived from CCTV footage and other private sources during criminal investigations are often of poor quality and require enhancement before any facial recognition algorithms can be applied to them. To bridge this gap, forensic image and video enhancement tools are deployed. One such tool used by the Delhi Police is AMPED FIVE, as confirmed to The Wire by the investigating officer of the Ratan Lal Murder Case, Inspector Gurmeet Singh, Crime Branch, Delhi Police.
AMPED FIVE is designed by AMPED Software, an Italian company founded in 2008 that offers image and video analysis and enhancement solutions for forensic, security, and investigative applications. Its products are distributed through a global network of partners like Pelorus Technologies, which supply them to law enforcement agencies worldwide.
Martino Jerian, founder and CEO of AMPED Software, refused to name the company’s clients, citing “confidentiality agreements and data protection policies”. However, he confirmed to The Wire, “Our products, Amped FIVE and Amped Authenticate, are in use by several Indian law enforcement and forensic units.”
AMPED FIVE is widely known for its advanced image and video enhancement capabilities, including features like noise reduction, sharpening, colour correction and distortion correction, which are valuable for forensic investigations where visual evidence might be crucial.
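AMPED FIVE’s implementation is proprietary, but the kind of low-level pre-processing described above – sharpening footage before any recognition step – can be illustrated in miniature. The sketch below is hypothetical code, not AMPED’s: a basic 3x3 sharpening convolution of the sort forensic enhancement tools apply to grainy CCTV frames.

```python
import numpy as np

def sharpen(gray):
    """Apply a basic 3x3 sharpening convolution to a grayscale image
    (a simplified stand-in for the enhancement step forensic tools run
    before facial recognition is attempted)."""
    kernel = np.array([[ 0, -1,  0],
                       [-1,  5, -1],
                       [ 0, -1,  0]], dtype=float)
    h, w = gray.shape
    # Edge-replicate padding so border pixels can be filtered too.
    padded = np.pad(gray.astype(float), 1, mode="edge")
    out = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    # Clamp back to valid 8-bit pixel range.
    return np.clip(out, 0, 255).astype(np.uint8)

# The kernel sums to 1, so flat regions are unchanged while edges
# (sudden intensity changes) are amplified.
flat = np.full((4, 4), 100, dtype=np.uint8)
print(sharpen(flat))
```

Real forensic suites layer many such operations – noise reduction, deblurring, distortion correction – but each is, at heart, this sort of pixel-level arithmetic, with no understanding of the image’s content.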
During court proceedings related to the Ratan Lal Murder Case, special public prosecutor Amit Prasad admitted using “AMPED software based on digital recognition” to establish the identity of one of the accused. Advocate Uday*, who represents several Delhi riots accused, also confirmed the police’s use of AMPED Software and its reference by the prosecution in court.
In response to concerns over the potential misuse of the company’s tools, Jerian told The Wire over email, “We have licensing terms that prohibit unlawful use, and we reserve the right to revoke access immediately. While we’ve never encountered such a case, we remain alert and ready to act. As per the nature of the products, the possibility of unethical use is remote…we mostly do low-level image processing, without understanding the image’s semantic content or doing any kind of face or object recognition.”
However, it remains unclear whether using AMPED’s video enhancement tool as a pre-processing step for facial recognition, especially in cases where the latter may have contributed to wrongful incarceration, would fall under the company’s definition of ‘unethical use’. The Wire sought clarification from Jerian on AMPED’s position on this, and he said, “In your example, we would consider unethical the case where a user intentionally produces wrongful outputs to impair a correct facial recognition.”
On June 5, 2020, the Directorate of Forensic Science Services under the Ministry of Home Affairs constituted a committee of experts from central and state forensic science laboratories to draft a manual for the establishment and upgradation of forensic science labs across the country. In its recommendations, the committee endorsed AMPED FIVE as a preferred software for the acquisition and analysis of CCTV footage.
Pelorus Technologies lists several AMPED products, including AMPED FIVE, among the tools it supplies to investigative and law enforcement agencies worldwide. Jerian, however, refused to disclose which distributor supplies AMPED’s tools to the Delhi Police.
When asked about the terms of AMPED’s contract with Pelorus Technologies, particularly regarding ethical safeguards, Jerian refused to share specific details. He stated, “Our contracts require full legal compliance, including anti-corruption laws, export regulations, and GDPR-based data protection standards. We also require our distributors to meet the same obligations through our onboarding and audit process.”
It remains unclear how these ethical safeguards accommodate Pelorus Technologies’ partnerships with companies like Cellebrite and MSAB that have faced allegations of human rights violations.
Which databases are connected to Delhi Police’s facial recognition technology?
There remains a lack of transparency around the databases used for matching input images in the Delhi Police’s facial recognition system. A 2020 report released by the Delhi Police highlighting their achievements included a chapter titled ‘Investigations into North-East Delhi Riots’, co-authored by Pramod Singh Kushwah, Deputy Commissioner of Police, Special Cell, and Joy Tirkey, Deputy Commissioner of Police.
The chapter outlined the use of video analytics and facial recognition in the investigation, stating: “945 CCTV footage and video recordings were obtained from multiple sources, including CCTV cameras installed on the roads, video recordings from smartphones, video footage obtained from media houses, and other sources were analysed with the help of video analytic tools and facial recognition systems. The photographs therein were matched for multiple databases, which included Delhi Police criminal dossier photographs and other databases maintained by the government. This helped in identifying persons involved in riots, which proved helpful in taking legal action after corroboration with other supporting evidence.”
The report also confirmed the use of the e-Vahan database (a national database of registration details of all vehicles) and driving license databases for further identification. However, the reference to “other databases maintained by the government” in the report remains vague, leaving uncertainty about the full scope of databases accessed by the Delhi Police’s facial recognition systems. According to police officials who spoke to The Wire, the Election Commission of India’s electoral roll — the database linked to the Electoral Photo Identity Card (EPIC) — was also used to identify suspects in Delhi Riots cases.
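The matching step the report describes – comparing a photograph pulled from footage against government photo databases – typically works by converting each face into a numeric “embedding” and scoring similarity against every stored record. The sketch below is a hypothetical illustration of that logic (the database names and threshold are invented, and real systems use learned embeddings far larger than three numbers); its significance is that the output is always a *score*, not a certainty, which is why corroborating evidence matters.

```python
import numpy as np

def best_match(probe, gallery, threshold=0.6):
    """Compare a probe face embedding against a gallery of stored
    embeddings; return the most similar record's identifier, or None
    if no score clears the threshold. Hypothetical illustration only."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    scores = {name: cosine(probe, emb) for name, emb in gallery.items()}
    name, score = max(scores.items(), key=lambda kv: kv[1])
    return name if score >= threshold else None

# Toy "database" of face embeddings (invented records).
gallery = {
    "dossier_record_A": np.array([1.0, 0.0, 0.0]),
    "dossier_record_B": np.array([0.0, 1.0, 0.0]),
}

# A probe close to record A clears the threshold and "matches".
print(best_match(np.array([0.9, 0.1, 0.0]), gallery))
```

Note that the threshold is a tunable policy choice: set it low and poor-quality CCTV stills will "match" someone in almost any sufficiently large database, which is precisely the failure mode critics of evidence built solely on facial recognition point to.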
The report also does not offer any information on data retention practices, such as how long the processed images are stored for, raising further concerns about privacy safeguards and potential misuse.
In February 2021, then Delhi Police Commissioner S.N. Shrivastava addressed the media at an annual press conference at Delhi Police headquarters. Detailing the role of technology in the investigation of the 2020 Northeast Delhi riots, he said, “231 of the 1,818 arrests [total] so far had been made possible using the latest technological tools. Of 231 such arrests, 137 persons were identified through our facial recognition system. The FRS was matched with police criminal records, and many accused were caught. Over 94 accused were identified and caught with the help of their driving license photos and other information.”
Srinivas Kodali, a data and privacy researcher, spoke to The Wire about the vast and often opaque mechanisms through which the Delhi Police and other law enforcement agencies access digitised personal data. He said, “Once data is digitised, law enforcement agencies can access it in one way or another, either directly or by requesting it under various legal provisions. Several laws explicitly grant such access. For example, the Registration of Births and Deaths Act maintains a structured list of records, while the Telecom Act provides authorities with access to a wide range of telecommunications data, including call data records, internet protocol data records, and tower data records. Additionally, under the Sarais Act of 1867, police are authorised to collect hotel check-in data. This means that when a guest submits a photograph for identification at a hotel in Delhi, that information can be shared with the Delhi Police. In essence, any form of digitised data – whether personal records, communication logs or travel details – either remains directly accessible to law enforcement or can be lawfully demanded by them.”
He highlights the extensive reach of digital surveillance mechanisms in India by citing an example of the Disha Ravi case, where “authorities leveraged the Information Technology (IT) Act to demand access to a Google Document shared between climate activists Disha Ravi and Greta Thunberg during the farmers’ protests. This allowed them to trace and identify every individual who had contributed to the document.”
When it comes to facial recognition and surveillance, he adds, “authorities may seek access to extensive photographic databases, with electoral data being one of the most significant. The EPIC (Electoral Photo Identity Card) database contains photographs of nearly every Indian voter, making it one of the largest repositories. Other key databases include the passport database and the UIDAI (Aadhaar) database, which store photographs and biometric data.”
In a parliamentary address on March 12, 2020, home minister Amit Shah responded to concerns over alleged violations of the right to privacy during the investigation into the 2020 Delhi communal violence. He stated that the “Delhi Police has not used Aadhar data for facial recognition of perpetrators, and while the Government respects the right to privacy, it cannot supersede the quest for bringing perpetrators of riots to justice.” He further said, “The Delhi Police has abided by the Supreme Court’s guidelines on right to privacy during the course of investigation.”
However, Kodali explains how law enforcement agencies often circumvent court rulings on Aadhaar data: “Legally, biometric data from Aadhaar is not supposed to be shared with law enforcement, as per Supreme Court and high court rulings. However, these restrictions do not seem to apply to photographs. While UIDAI may not provide direct access to law enforcement, police can still obtain photographs through other channels. For example, KYC (Know Your Customer) processes – whether for land records, banking, or other government services – collect photographs that may be shared with government agencies, which in turn can provide them to law enforcement.”
“Additionally, the Crime and Criminal Tracking Networks and Systems serve as a centralised platform linking various law enforcement databases. It integrates data from the Fingerprint Bureau, which holds extensive fingerprint records, and the prison system, which tracks inmate visits. Beyond police databases, government office access systems in Delhi, controlled by the Ministry of Home Affairs, track individuals entering government buildings, including journalists and RTI applicants. Similarly, DigiYatra, a digital air travel verification system, claims not to share personal data but often shares metadata, which can still be used for tracking. Other sources of data include visa applications, which store photographs, and airport travel records,” says Kodali.
The Wire has reached out to the commissioner, Delhi Police and secretary, Union Ministry of Home Affairs asking a number of questions about the companies whose products are being used, the processes involved, the tools’ accuracy and the companies’ questionable global track record. No response has been received till the time of publication.
*Names of accused persons and their advocates have been changed to protect the identities of the accused.