Inquiry-Based Research Essay

The Truth of Technology in Our Society

Mavel Hidalgo

English Composition 11000

Ian Murphy

The City College of New York

There is no doubt that technology plays a critical role in developing societies, because we depend on it in nearly every aspect of our lives. It has been of great aid to humanity for years, everywhere from planning the logistics of feeding thousands to providing education. We walk around with pocket-sized computers capable of helping us navigate and interact with the world in countless ways. This is a great example of how technology and its tools have been used to meet social needs over the years. However, this unparalleled technology has also become a tool to undermine truths and shift perceptions through various media forms and unfiltered streams of fake news. What is classified as a truth in one situation may be different in another, because different perspectives assimilate different truths depending on the specific media being consumed. 

For many people, social media has become the primary way to receive and keep track of the latest news. Whether users stay informed by browsing Facebook, Twitter, Instagram, or any of the many other apps available, these sites share content that can alter a person's perception. Social media has also been blamed for spreading false information and disinformation campaigns, and for allowing users to share that misguided information even further (Gray, 2017). In today's world, technological tools such as Photoshop touch the majority of the photos we see. Photoshop is software used extensively to edit pixel-based images in a wide variety of ways, whether that means retouching images of a product to enhance its appeal or polishing the models who appear on billboards and in mainstream fashion magazines. We are constantly surrounded by unrealistic, photoshopped images in advertisements, magazines, and billboards. Celebrities often define what culture and society believe to be beautiful, and almost all the images we see of celebrities are edited to some extent. 

Because of this, young people feel pressured to conform to unrealistic expectations promoted through digitally altered photos. This is a prime example of how our truths and perceptions can be altered through a technology like Photoshop. Instead of simply enhancing the quality of photos, Photoshop is now used to distort people's bodies and change them into something they are not. Models who are already thin are made even smaller by magazines in order to sell a fabricated image. Imperfections are erased to satisfy societal beauty standards. The negative impact these photos have on society is very damaging, especially for people who are already struggling with their self-image. 

These pictures lead us to believe that beauty can only exist in an unrealistic body type and that flaws must be hidden instead of embraced. It is a constant pattern in the media we consume: instead of being shown healthy role models, we are presented with the same stock, Barbie-like figures (Viner, 2016). Usually we view these images without even realizing they have been edited to showcase an unrealistic standard of what society considers attractive. This leads to all kinds of unhealthy issues and to deteriorating confidence, seen most frequently in teens. These are not safe or healthy ideals to send to impressionable audiences. We as a society must collectively learn that different is beautiful and see those highly edited photos for exactly what they are: fake.

Widespread misinformation shared online through the many apps and sources available to users is one of the main reasons technology has been able to shift our reality. Working out what to trust and what not to believe has always been part of human interaction, but modern technologies amplify the spread of misinformation without the public realizing it. For example, while a user researches a specific topic, search engines may direct them to sites that inflame their belief in false information. Social media connects users with like-minded people, which further feeds their false beliefs. Many platforms also host automated accounts that impersonate humans in order to push misguided information into users' recommendations. Viewing and producing videos, blogs, posts, and tweets as units of information has become so easy that literally anyone can do it. The credibility of what is shared is therefore often unknown, and users are susceptible to passing misinformation along. Since we cannot process all the material posted online, our brains decide what we should pay attention to based on likes, opinions, and biases. These shortcuts influence the type of information we search for, comprehend, remember, and repeat, to a harmful extent. Because of this, anything posted online with the right advertising can receive attention and even be treated as a legitimate source (Ciampaglia & Menczer, 2018). 

This is why we need to understand how algorithms use and manipulate our vulnerabilities, so we can protect ourselves. Teams at the University of Warwick in England and at Indiana University Bloomington's Observatory on Social Media are using cognitive experiments, simulations, data mining, and artificial intelligence to understand the cognitive vulnerabilities of social media users. Machine learning and analytics tools are being developed to counter technological manipulation. Journalists, civil-society organizations, and ordinary users rely on these tools to detect the spread of false narratives and inauthentic posts and to build news literacy (Ciampaglia & Menczer, 2018). 

Our social media feeds are often so full of a never-ending stream of news and posts that many of us can view only the top few items. The first few posts we see on our feed are the ones we reshare with those around us, and this is how we propagate senseless information. This content carries the risk of harboring fake or illegitimate news sources, which further confuse and shift our truths and perceptions. 

Researchers have simulated how information overload in social media networks limits the human brain's capacity to pay attention to high-quality content. Graphs and figures were generated to simulate the stages of information overload and show how fake news can leak through. These figures were built as network models, in which online users are represented by nodes, and users' connections with others, such as friends and family, are represented by lines linking them to the people who share and reshare their content. The researchers found that as the amount of content circulating in the network around a person rises, the quality of what propagates most rapidly falls (Ciampaglia & Menczer, 2018). In the study, circle size indicated the informational load and credibility of the last piece of content shared in a network: small circles marked networks where the shared information was relatively high in quality, while larger circles marked networks carrying a heavy information load, in which the quality of the shared information was comparatively low. 

Memes are a great example of how popularity patterns can leave room for biases and false information to spread further: most memes are barely noticed, yet a few spread widely. In the Observatory on Social Media study at Indiana University Bloomington, memes were shared in a simulated world where they had no particular quality, and virality resulted from the statistical consequences of information spreading through social networks of people with limited attention. When higher-quality memes were used in the simulation, researchers noticed little improvement in the overall quality of the content that ended up being shared. These studies reveal that our inability to view everything posted on our feeds inevitably leads us to share media that is partly true or completely fabricated, even when we would prefer important, high-quality information. Information overload alone can therefore explain why fake news goes viral and how it has the power to skew the way we view reality.
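The kind of limited-attention model described above can be sketched in a few lines of code. This is only an illustrative toy, not the researchers' actual simulation; the network shape, feed size, and all parameters here are invented for the example:

```python
import random

def simulate_memes(num_users=50, num_friends=5, feed_size=10,
                   steps=2000, p_new=0.3, seed=42):
    """Toy limited-attention model: users are nodes, friendships are links,
    and each user sees only a finite feed. All parameters are hypothetical."""
    rng = random.Random(seed)
    # Each user follows a few random others (the lines linking nodes).
    friends = {u: rng.sample([v for v in range(num_users) if v != u], num_friends)
               for u in range(num_users)}
    feeds = {u: [] for u in range(num_users)}   # finite attention per user
    quality, shares = {}, {}                    # meme id -> score / share count
    next_id = 0
    for _ in range(steps):
        u = rng.randrange(num_users)
        if rng.random() < p_new or not feeds[u]:
            meme, next_id = next_id, next_id + 1
            quality[meme] = rng.random()        # memes get a random quality
            shares[meme] = 0
        else:
            meme = rng.choice(feeds[u])         # reshare something from the feed
        shares[meme] += 1
        for v in friends[u]:                    # push the meme to followers
            feeds[v].insert(0, meme)
            del feeds[v][feed_size:]            # overload: old items fall off
    return quality, shares
```

Running a model like this repeatedly tends to show that the most-shared meme is often not the highest-quality one: in this sketch, virality is a statistical consequence of finite feeds and resharing, not of content quality.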

Using the resources of the Observatory on Social Media at Indiana University, researchers identified three types of biases that make social media vulnerable to both intentional and accidental misinformation, which in turn changes societal perception (Ciampaglia & Menczer, 2018).

Cognitive biases originate in the way the brain processes the information each person encounters every day. Experience, age, lifestyle, education, and many other factors shape how our brains react to things, especially technology. The single biggest piece of technology everyone around the globe uses for one thing or another is the internet. With this useful and entertaining advance, practically anyone can post pictures, videos, and any type of information online. Videos and images can be, and are, altered every single day by people all over the world. This includes the mainstream media, which can shift its viewers' perspectives; the government, with political paraphernalia; and even ordinary people sharing biased views with those closest to them. If we really think about how technology can bend our truths and perceptions, we can clearly see that it has been happening for a long time now. The brain can deal with only a finite amount of information, and too many incoming stimuli cause information overload. This problem has serious implications for the quality of information on social media. 

When the brain feels overwhelmed by information, it uses multiple tricks to cope. These shortcuts are usually effective, yet they can turn into biases when applied in the wrong context. When a person is deciding whether to share a post that appears on their social media feed, a cognitive shortcut may kick in: people are strongly affected by the emotions a headline evokes, even though emotion is not a good indicator of a credible or accurate article. 

Another source of bias comes from society. One might ask how society could possibly influence what we decide to share and pay attention to online. In reality, when people connect directly with their peers, the social biases that guide their choice of friends come to influence the information they see. It has been found that the political leanings of a user on platforms like Twitter and Instagram can be determined simply by looking at that person's friend preferences. The tendency to evaluate information more favorably when it comes from someone we know, or someone in our own social circle, creates a sense of security that leaves our truth and reality ripe for manipulation. This can happen either consciously or unintentionally. 

The third and final group of biases arises directly from the algorithms used to determine what a specific user sees while browsing the web; both social media platforms and search engines rely on them. This personalization technology is designed to select only the most relevant and engaging content for each individual (Ciampaglia & Menczer, 2018; Reed, 2022). In doing so, it may end up reinforcing the cognitive and social biases of people who browse the internet daily, making them more susceptible to the media's manipulation and half-truths.

Social media platforms also have detailed advertising tools built in, which allow disinformation campaigners to exploit biases by tailoring messages to people who are already inclined to believe the false advertising because of their unconscious biases. A related problem is how some platforms, like Facebook, handle links: by clicking a seemingly innocent link, people can be funneled away from diverse perspectives, further strengthening their confirmation biases.

Social media platforms expose users to a less diverse set of sources than non-social sites such as Wikipedia. This is because curation happens at the level of the whole platform rather than through a single user's operation or search, like looking up a specific answer in a search engine. Another important aspect of social media is that the information trending on a platform is whatever is receiving the most attention; this is called popularity bias. An algorithm designed to promote popular content may negatively impact the overall quality of information on the platform (Shao et al., 2018; Meserole, 2018). Time and again, what rises quickly in popularity on social media does not hold much substantial accuracy. This feeds into existing cognitive biases, and content that appears popular on social media, regardless of its quality, stays reinforced in society. 
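The difference between a popularity-biased ranking and a quality-based one can be made concrete with a tiny sketch. The posts below and their "likes" and "quality" fields are invented purely for illustration; no real platform's ranking works this simply:

```python
def rank_feed(posts, by="popularity"):
    """Sort a feed either by engagement counts (popularity bias)
    or by a quality score. Fields and data are hypothetical."""
    key = "likes" if by == "popularity" else "quality"
    return sorted(posts, key=lambda p: p[key], reverse=True)

posts = [
    {"id": "fact-check",   "likes": 12,  "quality": 0.9},
    {"id": "outrage-meme", "likes": 980, "quality": 0.2},
    {"id": "news-report",  "likes": 140, "quality": 0.8},
]

# A popularity-biased ranking puts the low-quality viral post on top,
# while a quality-based ranking would surface the fact-check first.
print(rank_feed(posts)[0]["id"])                # -> outrage-meme
print(rank_feed(posts, by="quality")[0]["id"])  # -> fact-check
```

The point of the sketch is that the same feed yields opposite front pages depending on which signal the algorithm optimizes, which is exactly the popularity bias described above.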

These algorithmic biases can be manipulated by social bots: specialized programs that interact with humans through social media accounts. Many accounts are operated by bots disguised as real people in order to propagate whatever content is selling on the platform. Most social bots are harmless and actually help apps thrive by connecting users to legitimate, source-based information. Some, however, conceal their true nature and are used for malicious ends such as spreading malware and boosting disinformation. Falsely creating the appearance of grassroots movements and organizations is another form of manipulation performed by bots on the platforms we use every day. To study these manipulation strategies, researchers use tools that detect social bots, in an effort to curb the growth of fake advertising and social manipulation on social media. 

The truth is that individuals, institutions, and even entire societies can be, and are, manipulated through social media and technology constantly. It is important to discover how our different biases and societal pressures interact with each other and potentially create even more complex vulnerabilities, especially on platforms that can amplify damaging claims and create tendencies in people that are extremely difficult to correct in the long run. Understanding our biases and not allowing technology and societal pressures to flood our senses with false advertising lets us better protect ourselves against technological manipulation. Programmatic tools and learning to notice social media companies' manipulation are the best defenses against being manipulated online (Meserole, 2018). 

The dominance of the technological world in the past few years has made a great impact on advances for humanity. As much aid as technology has brought, and still brings, to our society, its rise has also given us tactics through which media can be manipulated to fit societal norms and models that will sell. We have been fed norms about what is acceptable, what counts as a beauty standard, and even what is true in society, and this has been made easier with the help of technology. The reality is that more and more people are using technology in these ways to push their agendas and win the attention they crave from society. Nevertheless, it is important to use digital technology sensibly and creatively so that we can improve our personal and professional relationships. If we continue to get senselessly sucked into screens, consuming media that, intentionally or not, may shift our views, it can slowly deteriorate our mental health.

References

Ciampaglia, G. L., & Menczer, F. (2018, June 21). Biases make people vulnerable to misinformation spread by social media. Scientific American. Retrieved May 5, 2022, from https://www.scientificamerican.com/article/biases-make-people-vulnerable-to-misinformation-spread-by-social-media/ 

Gray, R. (2017). Lies, propaganda and fake news: A challenge for our age. BBC Future. Retrieved from https://www.bbc.com/future/article/20170301-lies-propaganda-and-fake-news-a-grand-challenge-of-our-age 

Meserole, C. (2018, May 9). How misinformation spreads on social media-and what to do about it. Brookings. Retrieved from https://www.brookings.edu/blog/order-from-chaos/2018/05/09/how-misinformation-spreads-on-social-media-and-what-to-do-about-it/ 

Reed, E. (2022, January 13). The psychology of how and why people believe half-truths and lies. Sponsored. Retrieved from https://sponsored.bostonglobe.com/pmi/why-misinformation-spreads-psychology/ 

Shao, C., Hui, P.-M., Wang, L., Jiang, X., Flammini, A., Menczer, F., & Ciampaglia, G. L. (2018). Anatomy of an online misinformation network. PLOS ONE, 13(4). https://doi.org/10.1371/journal.pone.0196087 

Viner, K. (2016, July 12). How technology disrupted the truth. The Guardian. Retrieved from https://www.theguardian.com/media/2016/jul/12/how-technology-disrupted-the-truth