The phrase “big data and analytics” refers to the use of large data sets to identify patterns and make predictions about things such as user behavior, disease outbreaks, and business trends. As technology has evolved and the volume of information it generates has grown, people have developed methods to put that information to use in a variety of applications. Some of these are philanthropic in nature, such as in the health industry or in criminal investigations. Big data is also widely used in business, marketing, and government.
One of the major ethical concerns with big data and analytics has to do with the use of the personal information captured within it. Users often sign away the rights to their information whenever they access a new service or app. How that information is used is almost completely opaque to them, and it is often shared without their knowledge. Many companies use this information to analyze a customer’s behavior and then tailor their ads or content to fit the customer’s preferences. This is why, after you Google a product, you will suddenly see ads from Amazon or other stores trying to sell you that product. This predictive effect can benefit consumers, providing them with a personalized experience that is usually more convenient. However, it can also enable uncomfortable surveillance systems that are dangerous to people living in countries where certain behaviors carry punitive consequences.
One example of this is China’s social credit system, which is fueled in part by big data collected on its citizens. The stated idea behind the system is to encourage citizens to engage in more trustworthy behavior. Bad behavior, such as jaywalking, stealing, or playing video games for too long, can result in losing points; good behavior, such as donating to charity, can add points. If a person falls below a certain score, they can be blacklisted from certain activities, such as booking a stay at a nice hotel or buying plane tickets. China has long struggled with corruption, and the system is meant to combat that. However, this kind of surveillance is a slippery slope: who decides the difference between good and bad behavior?
Another abuse of big data and analytics can be seen in the case of Facebook users and Cambridge Analytica during the 2016 US election campaign. Cambridge Analytica misused data obtained from Facebook through questionable means to profile users and target them for political purposes.
In interaction design, it is often preferable to provide users with an experience that feels personalized and predictive: users want their needs met before they realize they have them. The trouble comes in deciding when it is appropriate to collect and use personal information. As a designer, it is important to consider the needs of the user, both with regard to their experience and their inherent right to privacy, and these needs must be balanced carefully when developing an application. If tasked with designing a system that presents a potential invasion of privacy, as in the examples above, it is necessary to step back and ask whether that system is truly in the best interest of the user.
Above all, design as a practice should serve the users and improve their interaction with the world; this must be kept at the forefront of every design.