Article By
Michelle Carter

Professor of Management Science at Alliance Manchester Business School.

Privacy Paradox

We may need to sacrifice our individual privacy for the collective good.

To what extent do people ‘identify’ with the technologies they use? In other words, do they readily adopt new technologies and happily identify with them (as many do with a smartphone), or are they more likely to resist them?

Those of us in the first camp, and that probably includes many people, identify so closely with a particular piece of technology that we actually start taking on its capabilities as part of ourselves. We try to use as many of its features as we can, as often as we can, and we are happy to be innovative with it too. It also means we feel very negative emotions if we are parted from a technology we identify with, such as when we lose our phone.

These issues have strong relevance in the workplace too. If an employee identifies strongly with a new technology that their company or organisation is keen to use, that has potentially beneficial implications for productivity. Conversely, if the employee doesn't identify with it, the consequences can be more negative.

Sharing information and AI

These questions about technology identity are intertwined with wider debates around the use of AI and concerns that we are sharing too much personal information with technology applications.

Many newer technologies that use AI and machine learning algorithms rely on users’ willingness to give up personal information. Ultimately the more personal information we all share, the more advanced and targeted AI algorithms can become, and the more they can be adapted by companies and organisations to suit their needs. In fact, for AI technologies to deliver on their promise, they need copious amounts of data to identify patterns relevant to making predictions.

However, if people are unwilling to share personal information, or information comes from only a subset of people, then AI performs poorly and makes inaccurate predictions or risky or biased recommendations.
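As a rough illustration of that point (not something drawn from the article or the underlying research), the short Python sketch below trains the same simple model twice on synthetic data: once on a representative sample of two user groups, and once on data from only one group. The group structure, the thresholds and the use of scikit-learn are all illustrative assumptions; the model trained on the biased subset ends up noticeably less accurate for the group it never saw.

```python
# Illustrative sketch only: a model trained on data from a single subgroup
# of users can make systematically worse predictions for everyone else.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_population(n):
    """Two user groups whose outcome depends on the same feature,
    but with a different threshold in each group (an assumption for
    the demo, not a claim about any real dataset)."""
    x = rng.uniform(-2, 3, size=n)
    g = rng.integers(0, 2, size=n)          # group membership (0 or 1)
    y = np.where(g == 0, x > 0, x > 1)      # group 1 needs a higher x
    return np.column_stack([x, g]), y.astype(int)

X_train, y_train = make_population(5000)
X_test, y_test = make_population(5000)

# Model A: trained on a representative sample of both groups.
rep_model = LogisticRegression().fit(X_train, y_train)

# Model B: trained only on users from group 0 (a biased subset).
mask = X_train[:, 1] == 0
sub_model = LogisticRegression().fit(X_train[mask], y_train[mask])

for name, model in [("representative", rep_model), ("subset-only", sub_model)]:
    overall = accuracy_score(y_test, model.predict(X_test))
    grp1 = X_test[:, 1] == 1
    minority = accuracy_score(y_test[grp1], model.predict(X_test[grp1]))
    print(f"{name:15s}  overall={overall:.2f}  group-1 only={minority:.2f}")
```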


Threat to privacy

If users are concerned about how their information will be used, or whether it will be safe, conventional wisdom suggests they should be less willing to share it. Yet the rapid growth of technologies that use AI shows that even when individuals are aware that these technologies threaten their privacy, they continue to use them. This is the so-called privacy paradox: although people are often concerned about their privacy, they are still prepared to give up a lot of personal information online.

The question is, are users who ‘identify’ with technology (or view themselves as more tech-minded) more willing to give up personal information, even if they think doing so may put them at risk? Does their need to use the technologies they identify with override their concerns around privacy?

A particular piece of research I am carrying out explores this question in the context of dating apps, which shed light on the privacy paradox because users have to give up a lot of personal information for an app to work as intended. Dating apps use AI to help people find potential matches, so the success of a romantic match in real life depends heavily on users’ willingness to share as much personal information as possible.


Into the future

AI will soon be incorporated into everyday work and personal tasks, revolutionising business, education and daily life. So in the future we may need to sacrifice our individual privacy for the collective good.

But for that to happen we need to know that AI is trustworthy, yet we’re a long way from having that assurance. The question is what will develop faster, AI’s capabilities or the safeguards needed to harness the full potential of AI for society?
