Welcome to SOAS’s first Tech Society, where you can access global tech news and be introduced to skills and resources for the future. So whether you’re a computer whiz or a curious beginner, join us!

What’s Fair in the Age of Big Data?

An UnBias workshop has us asking: Is the internet fair for all?

Imagine logging onto a web browser or a social media platform and being presented with an unorganised mishmash of all the information online. Narrowing it down to a relevant result would be so cumbersome that you probably wouldn’t want to log back on. In come algorithms: machine-learning systems that help you find information more effectively by tailoring results and creating an increasingly individualised experience.
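To make that concrete, here is a minimal, hypothetical sketch of how such tailoring might work: candidate posts are scored by how many topic tags they share with a user’s history, so the most familiar content rises to the top. The names and data are invented for illustration; real ranking systems are far more elaborate.

```python
# Hypothetical personalised ranking: score each candidate item by how
# many topic tags it shares with the user's past interactions.

user_history = {"afro hair care", "dance", "reality tv"}  # invented example data

candidates = [
    {"title": "Hair tutorial",    "tags": {"afro hair care", "beauty"}},
    {"title": "Policy explainer", "tags": {"politics", "eu"}},
    {"title": "Dance meme",       "tags": {"dance", "hiphop"}},
]

def relevance(item, history):
    """Count how many of the item's tags appear in the user's history."""
    return len(item["tags"] & history)

# Sort so the most familiar content comes first -- the seed of a filter bubble.
ranked = sorted(candidates, key=lambda item: relevance(item, user_history), reverse=True)
for item in ranked:
    print(item["title"], "-", relevance(item, user_history), "shared tags")
```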

As helpful as algorithms are, some dangerous outcomes have emerged from how they are used. Filter bubbles and echo chambers ensure that we only see information we already know or agree with, leaving us ignorant of opposing views and building bias into our knowledge. So it comes as a surprise when the Brexit result demands that the UK leave the EU, or a reality TV mogul assumes one of the most powerful roles in the world. What the outcomes might have been had all sides been armed with unbiased information, we will never know.
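The feedback loop behind a filter bubble can be sketched in a few lines. In this toy simulation (all numbers invented), the feed over-serves whichever topic the user has clicked most, the user clicks it again, and exposure to the other side shrinks round by round:

```python
from collections import Counter

# Toy filter-bubble loop: the feed favours the most-clicked topic,
# which earns it the next click, which makes the feed favour it more.
clicks = Counter({"leave": 1, "remain": 1})  # start with balanced interest

for round_no in range(1, 6):
    favourite = clicks.most_common(1)[0][0]  # topic the feed now over-serves
    clicks[favourite] += 1                   # the user engages with what they see
    share = clicks[favourite] / sum(clicks.values())
    print(f"round {round_no}: feed dominated by '{favourite}' ({share:.0%} of clicks)")
```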

I have noticed the filter bubble on a personal level. Perhaps because I don’t use the platform often enough, Instagram assumes that a young Black African woman in the diaspora wants to tune into the feeds of the Real Housewives of Atlanta, afro hair care tutorials and hip-hop dance memes. As entertaining as I find all of those things, this stereotyping misses the nuances of my character and interests that might be surprising. For frequent users, though, the knowledge of tastes and behaviours that the algorithms store becomes exaggerated and quite eerie. Ads appear for products before you have even searched for them, leading some to wonder whether our devices are now listening to us, and whether we can ever really know.

But whether an algorithm completely misses the mark or eerily responds to a passing thought by showing where you can purchase a new laptop, should it also have the right to choose for you, and exercise an opinion on your behalf? Giving up the freedom to self-regulate potentially leaves it to companies to decide what is deemed appropriate. So who actually makes those calls, especially when the employees of large digital firms lack diversity in race, gender and cultural origin? Such biased representation could result in a limited understanding of what counts as an appropriate or valuable point of view. Would an individual who makes a video about his Islamic beliefs be immediately flagged as a terrorist? Will non-Western views be seen as problematic, dangerous or simply irrelevant? A power paradigm comes into play.

But it is not only the tailoring of information that comes into question; it is knowing and understanding how that information is mined and what say an individual has in that process. Did you know that Facebook creates shadow profiles? Even if you never create an account, information about you may already be hidden in its systems. You might also wonder how long your data exists after you delete it. At Snapchat, the photo-sharing app whose modus operandi is images that disappear after 24 hours, the image remains hidden on your profile for at least a year. If users have not explicitly opted in and agreed to these terms, their privacy is essentially being violated. Our privacy is put at risk when we do not know who can collect our data, how it is kept, to what ends it is used and who profits from it. As such, there is not enough transparency around what is essentially a commodification of users.

A final thought on the grave implications of information placed in the wrong or biased hands. In the USA, for example, research into programmes that use algorithms to flag potential re-offenders has become a point of racial discrimination. It is to be expected that an algorithm which relies on the data it receives to learn and organise its statistical assumptions will return biased results, because the data it is presented with is already skewed by human bias. Furthermore, although we cannot yet comprehend the magnitude of all this data, it is humans, not machines, who are determining what we do with it. So we have to ask ourselves: are the algorithms really to blame, or is it our society?
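How skewed data turns into skewed predictions can be shown with a deliberately tiny example. Suppose two groups reoffend at the same true rate, but one group is policed more heavily, so more of its reoffending is recorded. A naive model that simply learns the recorded frequencies will inherit that skew. The numbers below are invented purely for illustration:

```python
# Invented records: group B is over-policed, so its reoffending is
# recorded three times as often, even if true behaviour is the same.
records = (
    [("A", 1)] * 10 + [("A", 0)] * 90 +   # group A: 10% recorded reoffending
    [("B", 1)] * 30 + [("B", 0)] * 70     # group B: 30% recorded reoffending
)

def learned_risk(group):
    """A naive 'model': the recorded reoffending frequency for the group."""
    labels = [label for g, label in records if g == group]
    return sum(labels) / len(labels)

for group in ("A", "B"):
    print(f"group {group}: predicted risk {learned_risk(group):.0%}")
# The model rates group B three times riskier -- a bias learned from how
# the data was collected, not from any real difference in behaviour.
```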

Learn more about the UnBias project here: https://unbias.wp.horizon.ac.uk/about-us/

How do you view algorithmic bias? Is the AI to blame, or is it a reflection of society? Share your opinion in the comments below.

Tech News In Brief