By: Prof. Ts. Dr. Manjit Singh Sidhu
Undoubtedly one of humanity's greatest achievements is the World Wide Web, which has transformed the world by allowing those who are connected to communicate and share information in ways that, only a few decades ago, would have seemed like science fiction. Despite these benefits, the technology comes with a hard and unpleasant reality. Online trolling, bullying and stalking are prevalent, making victims' real lives intolerable and, in some cases, unliveable. The growth of the digital age has brought unprecedented connection, but it has also ushered in a new era of online ills that pervade our digital environments. The Internet has become a shelter for many types of abuse that seriously harm people's lives, such as transphobia, stalking and cyberbullying. This article examines the various aspects of online threats, breaking down the problems at their core, looking at the conditions that make them possible, and considering possible remedies.
First, let us try to comprehend Internet hazards. Online threats are a broad category of actions carried out with the malicious intent to damage persons or groups. One of the most common is cyberbullying, in which someone uses Internet platforms to harass, threaten or degrade others. Another pernicious type is stalking, which exploits the enormous stores of private information readily accessible online to follow, monitor and harass people relentlessly. Furthermore, the spread of hate speech and discriminatory content, including transphobia, makes online spaces even more poisonous, creating an atmosphere in which disadvantaged people are more likely to be excluded and vulnerable. This raises the question of why some individuals believe they are free to say offensive things on the Internet. Do people feel free to act online in ways they would not choose to in their offline lives?
What, then, makes Internet abuse possible? The digital environment gives offenders a false sense of anonymity and remoteness, emboldening them to behave in ways they might not in person. It is difficult to hold people responsible for their actions when it is simple to create several online personas and to hide one's identity. Furthermore, because online communication is instantaneous and widely dispersed, damaging remarks can spread quickly to a large audience, magnifying the impact of the abuse and leaving a lasting mark on victims' mental health and general wellbeing.
Developing successful intervention measures requires an understanding of the psychology behind online abuse. Abusers on the Internet frequently display aggressive or narcissistic tendencies, or a hunger for power and control. Because of the anonymity the Internet provides, they can indulge these urges without consequence, deriving gratification from the pain they inflict on others. In addition, the absence of prompt repercussions serves to further legitimise their conduct, sustaining an abusive cycle that can be hard to interrupt.
It is critical to understand that society, with all its complexity and flaws, is reflected on the Internet. The frequency of online abuse mirrors larger social problems, including prejudice, inequality and structural unfairness. Addressing the ills associated with the Internet therefore requires cultural shifts as well as technological fixes. Teaching people about digital citizenship and encouraging empathy and respect in online interactions are the first steps towards a more secure and welcoming online community. As Prof. Andy Phippen asserts, "the Internet is just a collection of cables, wires, and routers; it does not have a dark side." Rather, the Internet reflects the worst aspects of society.
In the fight against Internet abuse, technical solutions are just as important as societal measures. Strong content moderation methods and reporting systems can help identify and mitigate harmful conduct. Moreover, strengthening data security and privacy controls can enable people to protect their online identities and lessen their vulnerability to misuse. The effective creation and deployment of these solutions depends on cooperation between technology firms, legislators and civil society.
Artificial intelligence (AI) holds promise as a tool for combating online abuse, capable of analysing vast amounts of data and detecting patterns indicative of harmful behaviour. AI-driven content moderation systems could automatically detect and remove harmful content, lightening the workload for human moderators and boosting the effectiveness of response systems. AI does, however, have limitations and raises ethical concerns. To avoid biases and unforeseen repercussions, it is essential that AI systems undergo extensive testing and review, and that they are trained on diverse datasets.
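To make the idea of such a moderation pipeline concrete, the sketch below shows its general shape in Python. It is purely illustrative: real AI moderation relies on classifiers trained on large labelled datasets, whereas this toy version uses a small hypothetical lexicon of flagged terms (invented for this example) to score a message and then route it to one of three outcomes: automatic removal, human review, or approval.

```python
# Illustrative sketch only: a toy content-moderation pipeline.
# The lexicon and thresholds below are hypothetical; a production
# system would use a trained classifier, not keyword matching.

FLAGGED_TERMS = {"idiot": 0.6, "loser": 0.5, "hate you": 0.8}

def toxicity_score(message: str) -> float:
    """Return a crude 0..1 score: the highest weight of any flagged term found."""
    text = message.lower()
    score = 0.0
    for term, weight in FLAGGED_TERMS.items():
        if term in text:
            score = max(score, weight)
    return score

def moderate(message: str, remove_at: float = 0.75, review_at: float = 0.4) -> str:
    """Route a message based on its score: remove, send to a human, or allow."""
    score = toxicity_score(message)
    if score >= remove_at:
        return "remove"       # confident enough to act automatically
    if score >= review_at:
        return "human_review" # uncertain cases go to a human moderator
    return "allow"
```

The two-threshold design reflects the point made above: automation handles the clear-cut cases and lightens the human workload, while borderline content is escalated to people, reducing the harm a biased or mistaken automated decision can do.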
Effectively addressing the widespread problem of online harms requires a multidisciplinary strategy incorporating sociological, psychological and technological aspects. By understanding the root causes of online harassment, enabling people to defend themselves, and using technology sensibly, we can work towards a more secure and welcoming digital environment for all. It is essential that we keep this a top priority and collaborate across sectors to develop comprehensive solutions that promote the values of justice, decency and respect in our online interactions.
The author is a Professor at the College of Computing and Informatics, Universiti Tenaga Nasional (UNITEN), Fellow of the British Computer Society, Chartered IT Professional, Fellow of the Malaysian Scientific Association, Senior IEEE member and Professional Technologist MBOT Malaysia. He may be reached at firstname.lastname@example.org