Living with zero digital footprint isn’t easy – it may not even be possible in today’s digital world. Nowadays, everything from banking and filing your tax returns to shopping and applying for an ID or passport can be done much faster and more conveniently through digital channels than through face-to-face interactions (provided you have the right smart device and software).
But even if you’re an enthusiastic fan of the convenience that digital advances have brought us, you may still be one of the many people concerned about how artificial intelligence (AI) is evolving, and how it uses your personal data to do so. AI models learn from our interactions online to become more intuitive. Whether it’s training chatbots or predicting what you need before you ask, your data plays a big role.
AI and your data: What’s really at stake?
AI works by finding patterns in large data sets, and as the tech keeps evolving, the demand for data is only going to increase. This means more competition for your personal info, which is why it’s important to take control and protect your privacy.
Privacy laws based on fair information practices (FIPs) help, but they don’t cover everything. FIPs themselves are not legally binding, but they have shaped many data protection laws and frameworks worldwide. We still need better regulations that look at the bigger picture, not just individual privacy rights. In South Africa, the Protection of Personal Information Act (POPIA) aligns with several FIP principles, such as limits on data collection, purpose specification, and security safeguards.
Although laws like this are beginning to address AI use, there’s still a long way to go before we have solid, global data protection.
In the meantime, here’s how you can make less of your data available to AI:
1. Think before you post
We’ve all scrolled through social media sharing random thoughts and updates, and you probably think it’s harmless – and most of the time it is. In the context of an online thread, people can generally spot when you’re being sarcastic, ironic, or humorous, or just blowing off steam because you’re angry. But taken out of context, the same post could be incredibly offensive. And once you post it, it’s out there. Even if you delete it, someone may have already taken a screenshot. That data can be incorporated into AI models, or used for identity theft or deepfakes. A good rule of thumb is that if you wouldn’t want something quoted under your name on a billboard over the highway, it’s best not to post it.
2. Read the fine print
Privacy policies can be time-consuming and boring, so you might think you don’t need to read them. But you do – and it pays to make the effort. Since companies are legally required to tell you how your data is collected, used, and shared, that information will usually be tucked away in the fine print. Knowing the details of the privacy policy – including what you can opt out of – helps you make better decisions about what to share and what to keep private. It may also motivate you to push for better data protection laws.
3. Check your privacy settings
Platforms change how they use your data all the time, and you may not even notice when the change happens. Check your privacy settings regularly – not just the general settings on your desktop, but also the settings for each app and program on your device. This will enable you to maintain control and adjust your settings according to what you want to share. If you see something that you’re not comfortable with but can’t change, it might be time to stop using that app or program.
4. Be cautious with chatbots
Chatbots are convenient, quick and relatively easy to use. However, sharing your personal information with them can be risky, since you don’t know who is handling that information behind the scenes. The data you share might end up being used to train the bot or stored in a vulnerable database. Treat a chatbot like a stranger – don’t share anything you wouldn’t tell someone you don’t trust.
How Nedbank keeps your data safe
Nedbank’s self-help chatbot, Enbi, is a perfect example of how AI can work for you while keeping your privacy intact. You can use Enbi with confidence, knowing your data is protected by bank-grade security and used responsibly to improve your experience. We will never sell your personal information or use your data without your consent.
Red flags to watch out for
- Opt-out defaults
Some platforms set data sharing to ‘On’ by default – make sure you opt out if you want to.
- Hidden settings
Privacy settings can be hard to find. Check your account settings regularly to stay in control of them.
- Confusing language
T&Cs can be complicated. If you don’t understand something, ask the service provider, or do some online research.
Embrace change – but stay in control
Keeping up with technology is tough, but avoiding it isn’t the answer either. While the law continues to catch up with how our data is handled, it’s up to you to stay aware of the risks and be mindful of what you share.