In a world where everyone feels free to share everything and #unfiltered is a popular hashtag, do social networks have a responsibility to help keep our discourse civil? Or should they serve as platforms where anyone can share their voice, no matter what?
This is a tough and often politically charged question. The internet has given us unprecedented access to audiences and a place that allows us, unfettered by social norms or even personal identification, to voice our hopes, fears, outrages, love of kittens, and hatred of almost anything without fear of reprisal from anyone except our sometimes-anonymous online peers. Is it the great democratizer? Or is it a safe place to promote fear and hate? And, to the point of this writing, if it becomes the latter, who is responsible for changing the discourse?
Nextdoor, a social platform dedicated to connecting neighbors, just announced that it is testing an algorithm that helps prevent racial profiling. After coming under scrutiny for, in the eyes of its opponents, "promoting fear-mongering and racial profiling," Nextdoor decided to take a crack at doing something about it. So it is doing more than encouraging users to think before they post; it is forcing them to.
Nextdoor is the type of place where you post your daughter's babysitting business, ask for advice on contractors, give away unused furniture and, this is the critical part, talk about and alert neighbors to crime. It is the way, and the words, members use in talking about crime that cause issues. When people use phrases like "black man lurking" or "suspicious-looking Hispanic woman," are they creating undue fear of entire groups of people among their neighbors? Or are they simply trying to keep their streets safe?
Interestingly, unlike many other platforms, Nextdoor requires that you use your real name and register your address before you are approved to participate. It is not anonymous, so people can call their neighbors out by name, or even in person, for this behavior. Or, as is now being done, Nextdoor can simply prevent such posts from being published in the first place. Here is an excerpt from a recent piece on NPR:
“When a user goes to post about a crime or suspicious activity, in the Crime & Safety section, a new form requires two physical descriptors — e.g. Nike sneakers, blue jeans, crew cut, brunette — if the user chooses to include the race of the person.”
An algorithm under development spot-checks the summary of the suspicious activity for racially charged terms, as well as for length. If the description is too short, it is presumed to lack meaningful detail and is rejected.
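To make the idea concrete, here is a minimal sketch of how such a screening check might work. Everything in it is an assumption for illustration: the term list, the minimum-length threshold, and the function name are invented here, and Nextdoor's actual implementation is not public.

```python
# Hypothetical sketch of a post-screening check. The flagged-term list,
# the length threshold, and all names are illustrative assumptions,
# not Nextdoor's actual algorithm.

# Placeholder list of terms that would trigger review (an assumption).
FLAGGED_TERMS = {"lurking", "suspicious-looking"}

# Assumed threshold: fewer words than this is presumed to lack detail.
MIN_DESCRIPTION_WORDS = 5


def screen_post(description: str) -> list[str]:
    """Return the reasons a description would be rejected (empty if it passes)."""
    problems = []
    lowered = description.lower()
    if len(lowered.split()) < MIN_DESCRIPTION_WORDS:
        problems.append("description too short to be meaningful")
    for term in FLAGGED_TERMS:
        if term in lowered:
            problems.append(f"contains flagged term: {term!r}")
    return problems
```

Under these assumptions, a vague post like "man lurking" would be rejected both for length and for a flagged term, while a specific description of clothing and appearance would pass, which is exactly the trade-off the article goes on to question.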
So that's good, right? Or is it? If the person posting stopped and thoughtfully changed the way they think and talk about race and crime, that would be a good outcome. But if they simply find better ways to fill out a form, and the result is less public discourse, albeit less neighborly discomfort, is that OK? Are they doing the right thing to stop racial profiling, or are they limiting people's personal expression? Is this the role technology should play in shaping public discourse? I don't purport to have the answers, and, for me at least, it is difficult when the subject of the discourse is something I feel strongly about, but it is certainly something we will be wrestling with more and more.