Big Data and Deep Learning Models

Authors

  • Daniel Sander Hoffmann, Universidade Estadual do Rio Grande do Sul (UERGS)

DOI:

https://doi.org/10.5007/1808-1711.2022.e84419

Keywords:

Artificial Intelligence, Artificial Neural Networks, Big Data, Black Boxes, Deepfakes, Deep Learning

Abstract

Although deep learning has deep historical roots in the broad field of artificial intelligence and, more specifically, in the study of machine learning and artificial neural networks, it is only recently that this line of investigation has borne fruit of great commercial value, thereby beginning to have a significant impact on society. It is precisely because of the wide applicability of this technology today that we must remain alert, so as to be able to foresee the negative implications of its indiscriminate use. Of fundamental importance in this context are the risks associated with collecting large amounts of data for training neural networks (and for other purposes as well), the dilemma posed by the strong opacity of these systems, and issues related to the misuse of already trained neural networks, as exemplified by the recent proliferation of deepfakes. This text introduces and discusses these issues with a pedagogical aim, seeking to make the topic accessible to new researchers interested in this area of application of scientific models.

References

Arbesman, S. 2013. Five Myths About Big Data. The Washington Post. http://www.washingtonpost.com/opinions/five-myths-about-big-data/2013/08/15/64a0dd0a-e044-11e2-963a-72d740e88c12_story/. Access: 29.03.2021.

Berggren, K. et al. 2020. Roadmap on Emerging Hardware and Technology for Machine Learning. Nanotechnology 32(1): 012002. https://doi.org/10.1088/1361-6528/aba70f. Access: 03.05.2021.

Bernard, E. 2021. Introduction to Machine Learning. Champaign: Wolfram Media, Inc.

Boyd, D. & Crawford, K. 2012. Critical Questions for Big Data: Provocations for a Cultural, Technological, and Scholarly Phenomenon. Information, Communication & Society 15(5): 662-679.

Buolamwini, J. & Gebru, T. 2018. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research 81: 1-15.

Cichy, R. & Kaiser, D. 2019. Deep Neural Networks as Scientific Models. Trends in Cognitive Sciences 23(4): 305-317.

de Ruiter, A. 2021. The Distinct Wrong of Deepfakes. Philosophy & Technology 34: 1311-1332.

Elgendy, N. & Elragal, A. 2014. Big Data Analytics: A Literature Review Paper. In: P. Perner (ed.), Advances in Data Mining: Applications and Theoretical Aspects, p. 214-227. Cham: Springer.

Erasmus, A.; Brunet, T. D. P.; Fisher, E. 2021. What is Interpretability? Philosophy & Technology 34: 833-862.

Fallis, D. 2021. The Epistemic Threat of Deepfakes. Philosophy & Technology 34: 623-643.

Frankish, K. & Ramsey, W. M. 2014. Introduction. In: K. Frankish & W. M. Ramsey (ed.), The Cambridge Handbook of Artificial Intelligence, p. 1-11. Cambridge: Cambridge University Press.

Goodfellow, I.; Bengio, Y.; Courville, A. 2016. Deep Learning. Cambridge, MA: MIT Press.

Hancock, J. T. & Bailenson, J. N. 2021. The Social Impact of Deepfakes. Cyberpsychology, Behavior, and Social Networking 24(3): 149-152.

Hao, K. 2021. Deepfake Porn is Ruining Women’s Lives. Now the Law May Finally Ban It. MIT Technology Review. https://www.technologyreview.com/2021/02/12/1018222/deepfake-revenge-porn-coming-ban/. Access: 22.07.2021.

Haykin, S. 2009. Neural Networks and Learning Machines. 3rd Edition. New Jersey: Pearson.

Kandel, E.R.; Barres, B. A.; Hudspeth, A. J. 2013. Nerve Cells, Neural Circuitry, and Behavior. In: E. R. Kandel; J. H. Schwartz; T. M. Jessell; S. A. Siegelbaum; A. J. Hudspeth (ed.), Principles of Neural Science, p. 21-38. 5th Edition. New York: McGraw Hill.

Karras, T.; Laine, S.; Aila, T. 2019. A Style-Based Generator Architecture for Generative Adversarial Networks. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), p. 4396-4405.

Karras, T.; Laine, S.; Aittala, M.; Hellsten, J.; Lehtinen, J.; Aila, T. 2020. Analyzing and Improving the Image Quality of StyleGAN. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), p. 8107-8116.

Leonelli, S. 2018. The Time of Data: Timescales of Data Use in the Life Sciences. Philosophy of Science 85(5): 741-754.

Leonelli, S. 2020. Scientific Research and Big Data. In: E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy. Summer 2021 Edition. https://plato.stanford.edu/archives/sum2021/entries/science-big-data/. Access: 31.07.2021.

Lipton, Z. C. 2018. The Mythos of Model Interpretability: In Machine Learning, the Concept of Interpretability is both Important and Slippery. ACM Queue 16(3): 1-27.

McCulloch, W. S. & Pitts, W. H. 1943. A Logical Calculus of the Ideas Immanent in Nervous Activity. Bulletin of Mathematical Biophysics 5: 115-133.

Mirsky, Y. & Lee, W. 2021. The Creation and Detection of Deepfakes: A Survey. ACM Computing Surveys 54(1): 1-41.

Montavon, G.; Samek, W.; Müller, K. R. 2018. Methods for Interpreting and Understanding Deep Neural Networks. Digital Signal Processing: A Review Journal 73: 1-15.

Obermeyer, Z.; Powers, B.; Vogeli, C.; Mullainathan, S. 2019. Dissecting Racial Bias in an Algorithm used to Manage the Health of Populations. Science 366: 447-453.

Park, D. H.; Hendricks, L. A.; Akata, Z.; Rohrbach, A.; Schiele, B.; Darrell, T.; Rohrbach, M. 2018. Multimodal Explanations: Justifying Decisions and Pointing to the Evidence. IEEE/CVF Conference on Computer Vision and Pattern Recognition, p. 8779-8788.

Rini, R. 2020. Deepfakes and the Epistemic Backstop. Philosopher's Imprint 20(24): 1-16.

Russell, S. & Norvig, P. 2020. Artificial Intelligence: A Modern Approach. 4th Edition. Boston: Pearson.

Seung, S. & Yuste, R. 2013. Neural Networks. In: E. R. Kandel; J. H. Schwartz; T. M. Jessell; S. A. Siegelbaum; A. J. Hudspeth (ed.), Principles of Neural Science, p. 1581-1600. 5th Edition. New York: McGraw Hill.

Somers, M. 2020. Deepfakes, Explained. MIT Sloan School of Management. https://mitsloan.mit.edu/ideas-made-to-matter/deepfakes-explained/. Access: 28.07.2021.

Sterling, P. & Laughlin, S. 2015. Principles of Neural Design. Massachusetts: MIT Press.

Symons, J. & Alvarado, R. 2016. Can we Trust Big Data? Applying Philosophy of Science to Software. Big Data & Society 3(2): 1-17.

Thagard, P. 1990. Philosophy and Machine Learning. Canadian Journal of Philosophy 20(2): 261-276.

van Veen, F. & Leijnen, S. 2016. A Mostly Complete Chart of Neural Networks. https://www.asimovinstitute.org/neural-network-zoo/. Access: 30.09.2021.

Zednik, C. 2019. Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence. Philosophy & Technology 34: 265-288.

Published

2022-12-13

Issue

Section

Articles