
Guidance on the Ethical Use of AI in Social Media

As part of the Professional Board for Physiotherapy, Podiatry and Biokinetics’s ongoing commitment to guiding and assisting practitioners, this article provides guidance on the ethical use of AI on social media platforms.

Artificial Intelligence (AI) is no longer a distant future concept; it’s now embedded in our daily lives, especially in the digital realm. With applications such as ChatGPT and sophisticated image generators becoming commonplace, health professionals need to consider how this technology intersects with their responsibilities, particularly when it comes to social media.

In the healthcare environment, social media offers opportunities for patient engagement, professional branding, and information sharing. However, with the rise of AI content generation, including text, images, and videos, health professionals must be cautious. Tools can now produce content autonomously from a simple set of instructions or “prompts,” potentially generating more information than initially anticipated. While this offers great convenience, it poses serious ethical and legal challenges.

What is AI?
AI refers to systems designed to simulate human intelligence, capable of learning, reasoning, and making decisions. These systems can generate new content, predict trends, and analyse large data sets. While AI has revolutionised many industries, its use in healthcare requires special attention, as AI systems do not fully grasp the nuances of medical ethics or the human experience.

Risks in Using AI for Social Media
Although AI offers incredible convenience, such as generating content in text, images, or videos, it can lead to serious pitfalls. Health professionals must remember that the content generated by AI tools, while helpful, might contain unforeseen or unintended information. This emergent information could violate existing ethical rules and create unintended consequences in the professional practice of physiotherapists, podiatrists, and biokineticists.

For example:

  • AI-generated content might inadvertently make promises or guarantees that are not legally or ethically acceptable.
  • AI may generate text that contradicts established health knowledge or ethical standards.
  • Content created by AI could mislead patients, causing undue anxiety or encouraging unnecessary health-related interventions.

Legal Responsibility
Regardless of how the content is generated, health professionals remain legally accountable for any material published under their name or associated with their practice. Using AI does not absolve practitioners from adhering to the HPCSA ethical guidelines and rules. Specifically, the practitioner will be held responsible if AI generates unprofessional, misleading, or deceptive content.

Practitioners should ensure that the content shared on their social media profiles aligns with the Ethical Rules of Conduct for Practitioners as outlined in the HPCSA’s Ethical Booklet 2, which includes prohibitions against canvassing and touting, and emphasises the importance of professionalism, truthfulness, and the welfare of patients.

Recommendations

Thorough Review:

  • Always review AI-generated content thoroughly before publishing it on social media to ensure it complies with ethical standards and reflects professional integrity.

Avoid Deceptive Claims:

  • Do not make guarantees, promises, or untruthful claims on your social media, regardless of whether the content was generated by AI.

Maintain Professionalism:

  • Social media should reflect your professional conduct. AI tools should enhance, not replace, the personal touch and human judgment required in the healthcare sector.

Stay Informed:

  • Be proactive in understanding the ethical and legal implications of AI in healthcare. The rapid evolution of this technology means the rules and standards could change quickly.


Last Updated on 13 February 2025 by HPCSA Corporate Affairs