By Y.Z. Ya’u
The long-standing debate around the control of social media in Nigeria took a new turn last week with the release by NITDA of the draft Code of Practice for Interactive Computer Service Platforms/Internet Intermediaries. Predictably, the code has sparked a new controversy around the government's motives for coming up with it at this point in our history.
One ground of suspicion about the intention of the code is that it is coming just as we enter the electioneering campaign period. Mischief-makers say that this government, which benefited greatly from the use of social media in the run-up to the 2015 general elections when it was in opposition, does not want to be hurt in the same way it once used social media to hurt the campaign aspirations of the former ruling party.
But there have also been many initiatives in the last seven or so years by this government to control the use of social media. They include the anti-social media bill, the hate speech bills, and many other efforts, including the suspension of the operations of Twitter in the country for six months. These have fuelled suspicions on the part of the public that this government is only too happy to make it difficult for citizens to use social media.
The expressed aim of the code is to make social media safe for citizens, which is a noble objective, but it is important to ensure that in achieving a safer social media space, we do not make it impossible to use.
There is no doubt that social media, like any other technology, is being misused in the country. This misuse manifests in various forms, such as the spread of misinformation and disinformation, the proliferation of hate and dangerous speech, the commodification of nudity, child pornography, sexual exploitation and human trafficking, as well as the recruitment of young people into violent gangs such as terrorists and bandits. There are also other crimes such as scamming, impersonation, identity theft, etc. All these make cyberspace a place into which many fear to venture.
These problems are, however, not peculiar or unique to social media, or even to Nigeria.
Every technology is capable of being used and misused, and people are socialised into the socially useful uses of these technologies at their earliest contact with them, so that they grow to know how to use them for the benefit of society.
These ills are not the products or consequences of social media. They predate it. They are in fact the projection of the offline versions of these crimes. That for centuries we have not been able to stamp them out means it would be naive to think they can simply be eradicated by certain codes. Codes do help, but not everything can be cured by codes, and many of the ills of social media are of that nature. They require an entirely different approach.
Admittedly, the ills of social media have been counterproductive to the essence of social media. But it will not work to throw the baby out with the bathwater.
Moreover, not all users of social media indulge in these anti-social uses. In reality, very few people engage in them.
However, this is not to say that what the small minority does is not worrisome. In the place where I work, we have spent a considerable length of time fighting many of these problems. For instance, since 2014, we have been running an observatory for monitoring and countering hate speech in the country.
We have also engaged in sensitization programmes to enlighten and alert Nigerians to the dangers of hate speech and what we could collectively do to sanitize cyberspace of it. We are also identifying and countering fake news, misinformation and disinformation, as well as combating gender violence online. In all these, we have sought the partnership of all stakeholders, including government, to develop national strategies to deal with these issues, drawing on global best practices.
However, government discourse on the problem tends to focus more on control than on education and on empowering citizens to know the limits of their freedom, which would be more helpful. It is from this perspective that I see the weaknesses of the code as a solution. The code is sweeping in many of its assumptions and prescriptions.
Take, for example, its attempt to criminalize platforms for providing space for the crimes of their users. Had that road been taken, the internet as we know it today would not have existed.
Following this logic of pushing the burden of users' misuse onto the platform providers, one of the provisions of the code says that “A platform must acknowledge the receipt of the complaint and take down the content within 24 hours”.
This gives the government the unchallengeable power to make unilateral determinations, to classify items, and to be both judge and prosecutor.
Platform providers operate on a multi-layered architecture that requires an escalation process for a decision to be reached.
Many of the issues that are escalated to the top are matters of interpretation, and most cannot be resolved within 24 hours, unless the intention is to say that whatever the government says it does not want becomes law that can neither be contested nor be subject to independent and neutral interpretation. This can easily lead to abuse. Even on the seemingly settled matter of deleting nudity, the code makes no exceptions. Certain levels of nudity are needed for educational purposes, for instance, and certain nudity could be used to mobilize against particular crimes and to raise awareness. By making a no-exception rule, the government simply makes it difficult to use relevant images for these purposes.
There are also provisions that seek to outsource the functions of government to the platform providers.
One of these says that they should “exercise due diligence to ensure that no unlawful content is uploaded to their platform”. Such a task is the responsibility of the police and other law enforcement agencies. Platform providers are not content providers or owners, and they cannot have the capacity to carry out such due diligence to ensure that billions of users do not upload “unlawful” content.
Another says that platform providers should “make provision for verifying official government accounts and authorised government agencies”. This is the responsibility of government through its relevant agencies. If government is unable to come up with an enforceable guideline for the use of social media by its agents and officers, it should not push that burden onto third parties.
The elements of control-thinking can also be seen where vague terms are used. For instance, we all know that certain content can cause psychological harm to people. But there is no scale for psychological pain, and people react differently, with different thresholds of being affected. Without clear rules to establish levels of harm, this can lead to arbitrariness. If I write that a minister has been involved in corrupt deals, he or she can plead “psychological harm”, and both I and the provider are in trouble.
The code also deploys a stacking technique, loading offenses of different natures into a single clause. Take, for instance, article 2(c) of Part II, which requires platforms to inform users not to create, publish, promote, modify, transmit, store or share any content or information that “is defamatory, libellous, pornographic, revenge porn, bullying, harassing, obscene, encouraging money laundering, exploiting a child, fraud, violence, or inconsistent with Nigeria’s laws and public order”. Clearly, libel and defamation are offenses already covered by clear laws, whether committed offline or online. So why add them here? They may be a red herring to frighten users of social media.
Even seemingly innocuous terms such as “false or misleading” are difficult to define. Is an item misleading because of its intent or because of its effect? If I post content and someone feels misled, is it misleading simply because that person thinks it is, perhaps due to his or her (mis)interpretation? Or because he or she draws the wrong conclusion? Or because of the materiality of the item? Being misled does not follow a straight cause-and-effect logic.
As for falsity, it is also limited by counter-facts that come to light only after publication. In other words, a material could be true at the point of publication and become false afterwards. In that case there was no intent to publish a false item, and indeed no false information was published, even if by the time it is read it is no longer true.
Finally, the code seeks to order platforms to preserve any information concerning a person who is no longer a user of a platform due to withdrawal or termination of registration, or for any other reason. This has the effect of pre-empting efforts to ensure the right to be forgotten. When information about people who are no longer users of a platform is forcibly retained, it could be used in some way, breaching not only their right to be forgotten but also their privacy.
One way to think about the code is to recognize that the digital space is an extension of our civic space. The civic space is what materialises our humanity and citizenship; it is the embodiment of our human rights. This being so, the digital space is also a concretization of these rights, their projection online.
The notion of a digital civic space presupposes a regime of digital rights that are the projection of our offline human rights. They include the right to freedom of expression, the right to organize, and, more importantly, the right to privacy. Many of the crimes we see are derogations of these rights. Many commit the infractions that the code lists because these rights have not been codified to protect citizens from digital abuse.
Government itself has been guilty of abusing the digital rights of citizens through intrusive digital surveillance and its failure to ensure that all citizens have access to the digital space.
In this sense, the government is best advised to get the Digital Rights Bill passed and signed. It contains a wholesome provision of rights and responsibilities, along with measures for enforcement, rather than limiting its gaze to criminalization, which seems to be the tone of the code. This will help government and citizens as well as the platform providers. In the end, it will cost less and achieve more for government to focus on educating users than on prosecuting them. There are useful parts to the code, but its underlying assumptions and prescriptions are suspect and open to abuse. By all means, let the providers be corporate citizens of this country with clear responsibilities, but we as citizens also want our freedoms respected.
* Y.Z. Ya’u is the CEO of CITAD, Kano