Units 9-11 - Collaborative Discussion 3

Ethics of Deep Learning and Generative AI

This three-week collaborative discussion explores the ethical implications of deep learning and generative AI technologies. The discussion examines critical issues including bias amplification, copyright concerns, misinformation risks, and transparency challenges, while considering practical governance solutions through legal frameworks and technical standards.

Initial Post

The arrival of technologies that generate content using deep learning models raises serious ethical questions that demand careful consideration. Deep learning, a subfield of machine learning, uses multi-layered neural networks to learn from vast amounts of data (Sarker, 2021), while generative AI applies these models to produce new content, such as text or images, that reproduces the patterns learned during training. The ethical problems this creates are complex and consequential.
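
To make the underlying mechanism concrete, the toy sketch below shows pattern learning and generation in miniature, using a simple word-level bigram model rather than a deep network; the corpus and names are purely illustrative. A real generative model differs enormously in scale and architecture, but the principle, learning statistical patterns from data and then sampling new content that reproduces them, is the same.

```python
import random
from collections import defaultdict

# Toy illustration (a bigram model, NOT deep learning): learn which word
# tends to follow which in a tiny corpus, then sample new text that
# mimics those learned patterns. The corpus here is illustrative only.
corpus = ("the model learns patterns from data and "
          "the model generates new text from learned patterns").split()

# Count every observed word-to-next-word transition in the training text.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Sample a short sequence by repeatedly following learned transitions."""
    words = [start]
    for _ in range(length):
        candidates = transitions.get(words[-1])
        if not candidates:  # dead end: no continuation was ever observed
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the model generates new text from learned patterns"
```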

First, models trained on vast quantities of unfiltered internet data often absorb and amplify existing societal biases, reproducing harmful stereotypes in their output (Bender et al., 2021). Second, training these models raises serious questions about copyright and data ownership, as they frequently use copyrighted material without permission or attribution. A third major issue is misinformation: highly realistic 'deepfakes' and AI-generated text can be weaponised to erode public trust and manipulate public discourse. Finally, many of these models are opaque, which makes it difficult to assign accountability or to determine why a system produced a harmful or incorrect output.

While some argue these are merely tools that augment human creativity, their capacity to generate content autonomously and at scale presents novel risks. The European Union's Artificial Intelligence Act, formally adopted in 2024, addresses these dangers by imposing transparency obligations on general-purpose AI models (European Parliament, 2024). In practice, managing these risks requires both technical measures and stronger governance. Practical steps should include clear documentation of training data and model behaviour, as recommended in frameworks such as the NIST AI Risk Management Framework (NIST, 2023), and robust human-in-the-loop review for high-stakes applications, ensuring meaningful human control and accountability.
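
As a rough illustration of what human-in-the-loop review might look like in practice, the minimal sketch below routes model outputs to a human reviewer when they fall in a high-stakes domain or below a confidence threshold. The domains, threshold, and function names are hypothetical assumptions, not part of any cited framework.

```python
from dataclasses import dataclass

# Hypothetical human-in-the-loop gate: outputs below a confidence
# threshold, or destined for a designated high-stakes domain, are queued
# for human approval rather than acted on automatically. The threshold
# and domain list are illustrative assumptions.
HIGH_STAKES_DOMAINS = {"medical", "legal", "hiring"}
CONFIDENCE_THRESHOLD = 0.90

@dataclass
class ModelOutput:
    content: str
    confidence: float  # model's self-reported confidence, 0.0-1.0
    domain: str        # application area the output will be used in

def requires_human_review(output: ModelOutput) -> bool:
    """Return True when a human must approve the output before use."""
    return (output.domain in HIGH_STAKES_DOMAINS
            or output.confidence < CONFIDENCE_THRESHOLD)

def dispatch(output: ModelOutput) -> str:
    if requires_human_review(output):
        return "queued for human review"  # accountability preserved
    return "released automatically"       # low-risk and high-confidence

print(dispatch(ModelOutput("Diagnosis draft ...", 0.97, "medical")))    # queued
print(dispatch(ModelOutput("Marketing copy ...", 0.95, "marketing")))   # released
```

The key design choice is that the gate is conservative by default: a high-stakes domain triggers review regardless of confidence, so accountability does not depend on the model assessing itself correctly.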

References

  • Bender, E. M., Gebru, T., McMillan-Major, A. and Shmitchell, S. (2021) 'On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?', FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 610–623. DOI: https://doi.org/10.1145/3442188.3445922
  • European Parliament (2024) EU AI Act: first regulation on artificial intelligence. Available at: https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence (Accessed: 7 October 2025).
  • NIST (2023) Artificial Intelligence Risk Management Framework (AI RMF 1.0). Available at: https://www.nist.gov/itl/ai-risk-management-framework (Accessed: 7 October 2025).
  • Sarker, I. H. (2021) 'Deep Learning: A Comprehensive Overview on Techniques, Taxonomy, Applications and Research Directions', SN Computer Science, 2(6), 420. DOI: https://doi.org/10.1007/s42979-021-00815-1

Summary Post

Our discussion on the ethics of deep learning and generative AI has been both timely and insightful, confirming that these powerful technologies bring a host of complex challenges alongside their benefits. The conversation has effectively moved from identifying key ethical problems to discussing practical, multi-layered solutions.

My initial post outlined the core ethical issues: systemic bias inherited from training data, copyright infringement, the potential for widespread misinformation, and a lack of transparency in "black box" models. The peer response from Ali Yousef Ebrahim Mohammed Alshehhi reinforced these points, rightly emphasising the real-world harm that amplified biases can cause and highlighting the value of specific governance tools like bias audits and model cards for improving transparency (Mehrabi et al., 2021; Mitchell et al., 2019).
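
To illustrate the kind of check a bias audit performs, the sketch below computes one widely used fairness metric, the demographic parity difference, over a toy set of model decisions. The data, group labels, and the 0.1 flagging threshold are illustrative assumptions only; real audits use multiple metrics and far larger samples.

```python
# Hypothetical bias-audit sketch: measure the demographic parity
# difference, i.e. the gap in favourable-outcome rates between groups.
# All data and the 0.1 threshold below are illustrative assumptions.

# Each record: (group label, model decision: 1 = favourable outcome)
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

def positive_rate(group: str) -> float:
    """Fraction of decisions for this group that were favourable."""
    outcomes = [d for g, d in decisions if g == group]
    return sum(outcomes) / len(outcomes)

parity_gap = abs(positive_rate("group_a") - positive_rate("group_b"))
print(f"Demographic parity difference: {parity_gap:.2f}")  # 0.50 here

# Illustrative audit rule: flag the model when the gap exceeds 0.1,
# prompting a review of the training data and input features.
if parity_gap > 0.1:
    print("Audit flag: disparity exceeds threshold; review required.")
```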

Our course readings provide a broader context for this debate. The materials from Unit 9 demonstrate the sheer power and rapid advancement of deep learning, with systems now outperforming humans in complex tasks like image recognition (Forbes, 2015). This capability is precisely what makes the ethical stakes so high. Furthermore, as highlighted in Unit 10, there is immense commercial pressure to deploy these technologies to boost productivity (World Economic Forum, 2022), creating a tension between the drive for innovation and the need for responsible implementation.

The applications we explored in Unit 11, from creative content generation to the integration of AI in complex industrial systems (Malmi et al., 2016; Wang et al., 2016), show how deeply embedded this technology is becoming in society. This integration makes robust governance essential. Our discussion correctly identified that the solution is not purely technical but requires a combination of legal frameworks like the EU AI Act, industry standards such as the NIST AI Risk Management Framework, and a commitment to meaningful human oversight. By combining these approaches, we can work towards harnessing the benefits of deep learning while mitigating its significant ethical risks.

References

  • Bender, E. M., Gebru, T., McMillan-Major, A. and Shmitchell, S. (2021) 'On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?', FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 610–623.
  • Forbes (2015) Microsoft's Deep Learning Project Outperforms Humans In Image Recognition. Available at: https://www.forbes.com/sites/michaelthomsen/2015/02/19/microsofts-deep-learning-project-outperforms-humans-in-image-recognition/ (Accessed: 18 October 2025).
  • Malmi, E., Takala, P., Toivonen, H., Raiko, T. and Gionis, A. (2016) 'DopeLearning: A Computational Approach to Rap Lyrics Generation', in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 245–254.
  • Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K. and Galstyan, A. (2021) 'A Survey on Bias and Fairness in Machine Learning', ACM Computing Surveys, 54(6), pp. 1–35.
  • Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I.D. and Gebru, T. (2019) 'Model Cards for Model Reporting', in Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 220–229.
  • Wang, S., Wan, J., Li, D. and Zhang, C. (2016) 'Towards smart factory for industry 4.0: a self-organized multi-agent system with big data based feedback and coordination', Computer Networks, 101, pp. 158–168.
  • World Economic Forum (2022) How Deep Learning can improve productivity and boost business. Available at: https://www.weforum.org/agenda/2022/01/deep-learning-business-productivity-revenue/ (Accessed: 18 October 2025).

Reflection

This discussion crystallized the fundamental tension in contemporary AI development: the race between capability and accountability. The "stochastic parrots" critique (Bender et al., 2021) particularly resonated. These systems achieve remarkable performance while potentially lacking genuine understanding, creating ethical challenges that traditional software does not pose.

What struck me most was how the ethical issues compound rather than exist in isolation. Bias amplification doesn't just harm individuals; when combined with deepfakes and lack of transparency, it threatens entire information ecosystems. Peer feedback on governance tools like model cards (Mitchell et al., 2019) offered concrete pathways forward, but the discussion revealed these technical solutions require accompanying regulatory frameworks like the EU AI Act to be effective.
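
As a concrete illustration of the model card idea, the minimal sketch below captures the kind of structured documentation Mitchell et al. (2019) propose should travel with a trained model. The field names follow the paper's section headings only loosely, and every value shown is hypothetical.

```python
from dataclasses import dataclass, field

# Minimal model-card sketch in the spirit of Mitchell et al. (2019):
# structured, machine-readable documentation shipped alongside a model.
# Field names loosely mirror the paper's headings; values are illustrative.
@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    out_of_scope_uses: list = field(default_factory=list)
    training_data: str = ""
    evaluation_results: dict = field(default_factory=dict)  # metric -> score
    known_limitations: list = field(default_factory=list)
    ethical_considerations: str = ""

card = ModelCard(
    model_name="demo-text-classifier-v1",           # hypothetical model
    intended_use="Routing customer-support emails by topic.",
    out_of_scope_uses=["Medical or legal triage"],
    training_data="Public support tickets, 2018-2023 (illustrative).",
    evaluation_results={"accuracy": 0.91, "f1_minority_dialects": 0.78},
    known_limitations=["Lower accuracy on non-standard dialects"],
    ethical_considerations="Performance gaps across dialects; see audit.",
)
print(card.model_name, "->", card.evaluation_results)
```

Even this skeletal version makes the governance value visible: disaggregated evaluation results and known limitations are recorded up front, so downstream users can judge whether a deployment falls outside the model's documented scope.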

The integration of deep learning into Industry 4.0 applications (Wang et al., 2016) highlighted a critical insight: as these systems move from experimental to embedded, the window for establishing ethical guardrails narrows. This underscores the urgency of proactive governance rather than reactive regulation after harm occurs.

References

  • Bender, E. M., Gebru, T., McMillan-Major, A. and Shmitchell, S. (2021) 'On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?', FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 610–623.
  • Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I.D. and Gebru, T. (2019) 'Model Cards for Model Reporting', in Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 220–229.
  • Wang, S., Wan, J., Li, D. and Zhang, C. (2016) 'Towards smart factory for industry 4.0: a self-organized multi-agent system with big data based feedback and coordination', Computer Networks, 101, pp. 158–168.