A new report by the Dallas Morning News has revealed that the use of artificial intelligence (AI) to monitor students is widespread in schools nationwide. Many campuses say computer monitoring is necessary to protect the lives of students at risk of harming themselves or others. But some observers are crying foul.

“When students know they are being monitored with AI, they are less likely to share true thoughts online and are more careful about what they search,” Nir Kshetri, a professor at the University of North Carolina at Greensboro who studies digital communities, told Lifewire via email. “This can discourage vulnerable groups, such as students with mental health issues, from getting needed services. When students know that their every move and everything read and written is watched, they are also less likely to develop into adults with a high level of self-confidence.”
Watching Students
The news report claims that the company Social Sentinel offered schools technology to scan social media posts from students, and that at least 37 colleges have used Social Sentinel since 2015. Social Sentinel has claimed its service can’t be used for monitoring protests, but the report found that it was used for exactly that purpose.

“During an event, threat alerts can provide important insight about the leaders or agitators who may want a confrontation with law enforcement, the general climate of the crowd, and the potential for crowd growth,” Social Sentinel wrote in an email, according to the report. Navigate360, the company that acquired Social Sentinel in 2020, called the investigation inaccurate and outdated.

Some industry insiders say that the increasing use of AI for surveillance is inevitable. “The paradox of social media is that as our digital footprints grow, so too does our use of social media,” Chris Piche, the founder of Smarter AI, a company that connects CCTV with software-defined AI cameras, told Lifewire in an email interview. “This is where government regulations play a key role. For example, European and American governments have introduced the Right to be Forgotten under GDPR, CCPA, and other US state regulations.”
Minorities Under an AI Microscope
Kshetri said the use of AI to monitor students could have a disproportionate impact on minorities. Black students’ chances of being suspended are more than three times higher than those of their white peers, he said. “After evaluating flagged content, vendors report any concerns to school officials, who take disciplinary actions on a case-by-case basis,” he added. “The lack of oversight in schools’ use of these tools could lead to further harm for minority students.”

Leading AI models are 50 percent more likely to flag tweets written by African Americans as “offensive” than those written by others, Kshetri said, and 2.2 times more likely to flag tweets written in African American slang. “These tools also affect sexual and gender minorities more adversely,” he added. The company Gaggle has “reportedly flagged ‘gay,’ ‘lesbian,’ and other LGBTQ-related terms because they are associated with pornography, even though the terms are often used to describe one’s identity.”

Surveillance system vendors also use insecure systems that hackers can exploit, Kshetri said. In March 2021, the computer security software company McAfee found several vulnerabilities in Vision Pro Education, software from the student monitoring vendor Netop. For instance, Netop did not encrypt communications between teachers and students, leaving them open to unauthorized access. “The software was used by over 9,000 schools worldwide to monitor millions of students,” Kshetri added. “The vulnerability allowed hackers to gain control over webcams and microphones in students’ computers.”

Many companies offer schools ways to monitor students via the internet. Social media platforms have proprietary monitoring software, for example, to detect and block bots and spam, Piche noted. Colleges, companies, and governments use social media monitoring software (SMMS) from vendors including Hootsuite, Sprout Social, and Zoho Social. “Just like a town square, social media is a public forum, not a private one,” he added.
“Whenever we use social media, we leave digital footprints, which are shared and sold by social media platforms to third parties, for example by Twitter to Elon Musk and Facebook to Cambridge Analytica.”