Times aren’t just changing—they’ve changed.
Social work has shifted from paper files and office visits to encrypted portals, video calls, text messages, and AI-assisted documentation. A typical day might now include reviewing a midnight crisis text, running back-to-back telehealth sessions, and updating notes with predictive software.
This isn’t just about convenience—it’s about fundamentally different ways of connecting, assessing, and caring for clients. Every digital tool we choose, every platform we adopt, and every online interaction carries ethical weight.
Risks are often invisible:
- A seemingly secure app may store data in multiple countries.
- A quick text response may become part of a legal record.
- An AI-generated assessment may reinforce historical biases.
Most social workers are navigating this new world with little formal guidance. Licensing boards and graduate programs often lag behind technology. That leaves practitioners making ethical decisions about tools they don’t fully understand—sometimes without realizing it.
The urgency comes from this gap between adoption and understanding. We can’t wait for perfect guidance. We need frameworks for making ethical decisions about technology now.
Digital Tools Don’t Replace Ethics—They Amplify Them
It’s easy to think technology creates entirely new ethical problems. In reality, it amplifies the old ones.
Consider confidentiality. In the paper era, protecting it meant a locked file cabinet. Now it means encrypting devices, securing cloud storage, managing passwords, and protecting even the metadata that reveals when and how often we communicate.
The principle hasn’t changed—protect client information—but the application has become more complex.
- A text message that feels casual may still contain protected health information.
- A telehealth session still requires technical safeguards.
- An AI-assisted note still needs human oversight to prevent bias.
Choosing a free platform over a secure one isn’t just a budget issue—it’s an ethical choice. Responding instantly to a client’s emotional text may feel kind but can blur professional boundaries.
Technology magnifies our obligations. It requires more care, not less.
NASW Code of Ethics: Digital Applications
The NASW Code of Ethics wasn’t written for a digital world, but its core principles apply directly. Three standards stand out for today’s practice:
1.03 Informed Consent
Social workers must “use clear and understandable language to inform clients of the purpose of the services, risks related to the services, limits to services…”
In digital settings, this includes:
- Explaining which platforms you use and why.
- Clarifying what happens to client data.
- Describing how AI or third-party tools might influence care.
Informed consent now means helping clients understand not just what you do, but how technology changes what you do.
1.04 Competence
Social workers “should provide services and represent themselves as competent only within the boundaries of their education, training, license, [and] certification.”
Competence now includes digital literacy—knowing how the tools you use actually work, how they protect data, and what their limitations are.
You don’t need to be a tech expert. But you do need to understand your tools well enough to use them safely and explain them clearly to clients.
1.07 Privacy and Confidentiality
The obligation to protect client information now extends to every digital medium: texts, emails, video sessions, cloud files, and even device backups.
Digital confidentiality often depends on factors outside your control—vendor security practices, client devices, internet stability. Transparency and client education are your best safeguards.
Making Ethical Tech Choices
How do we apply these principles when choosing tools? These four questions can guide ethical decision-making whenever technology enters your practice:
1. Purpose and Necessity
Does this tool address a real clinical need, or does it just make things faster? Ethical use means the tool's benefits must flow to clients, not just to the practitioner's convenience or efficiency.
2. Transparency and Control
Can you explain to clients how the tool affects their care? Do they have meaningful choice about whether it’s used?
3. Privacy and Security
What data does it collect, where is it stored, and who has access? What happens if the vendor changes policies or you switch platforms?
4. Bias and Equity
Could the tool disadvantage some clients or reinforce existing inequities? Ethical technology use requires considering not just how tools work for typical users, but how they affect those most at risk of exclusion.
Used with care, technology can reinforce—not replace—the values at the heart of social work.
Excerpted from:
Digital Ethics in Social Work Practice, a continuing education course available through SWTP CEUs (ASWB ACE Provider #2486).