The experiment that I outline in the following paper is designed to shed light on any relationship between information exposure and attentional cognition.
The past few decades have been so information-filled and information-dependent that they have been appropriately deemed the “Information Age.” This trend can be attributed to increases in the accessibility, interconnectedness, and potency of technology, specifically big data. There are now more than 2.5 quintillion bytes of data created each day (Lu et al., 2014), enough to fill twenty billion human brains (Marois et al., 2005). Since we cannot physically take in all this information, we must navigate the sea of data to identify what is pertinent to us. The problem of prioritizing and processing select information is nothing new; it is likely that our brains evolved attentional mechanisms that internalize only the most important bits of information necessary to form a complete conception. I hypothesize that our attentional mechanisms are now having to work overtime because of this influx of information, and that this is causing our attention to become less attuned.
Empirically testing my hypothesis will contribute to discovering the true nature of attention. Theories of attention are variable, and there is no one widely accepted account. Many theorists agree that attention can be broken down into passive and active states. Exogenous (passive) attention is at work when our attention is unconsciously grabbed by something, while endogenous (active) attention is at work when we consciously shift our attention to something. There is also speculation as to whether attention has a limited capacity and/or is a modular process. If attention does have a limited capacity, then either low-level or high-level perceptual processing -- or both -- is bottlenecked. If attention is a modular process, in the Fodorian sense of the term, then it must operate only on a certain kind of input (domain specificity), run automatically in the presence of that input (mandatory processing), and be impervious to background knowledge or motivation (informational encapsulation).
Past studies considering the impact of information exposure on attention have focused on attention span, the length of time an observer can attend to an object. While some studies affirm an effect, others discredit these claims and deny any impact of information exposure on attention span at all. I, however, will be looking at the impact of information exposure (the independent variable) on change blindness (the dependent variable), a perceptual phenomenon that occurs when a visual stimulus changes and the observer does not notice. Vision is one of the attentional mechanism’s tools for gathering information, and for this reason change blindness is an appropriate dependent variable. To accentuate change blindness, the observer’s attentional mechanism can be exploited by flashing blank screens between image or animation changes; this disrupts any exogenous cues that may draw attention and instead forces the observer to use only their endogenous attention, which picks up on changes much more slowly.
The independent variable, information exposure, can be quantified using market integration: a measure of how easily two or more markets can trade with each other. The greater a group’s market integration, the more advanced their society is, the more technology their constituents possess, and the more information their constituents are exposed to. The measure of market integration I will use is the percentage of a group’s caloric diet purchased in the market. These values vary from the 0 percent market integration of the Hadza, a modern hunter-gatherer people living in northern Tanzania, to the 100 percent market integration of the urban center Accra, Ghana. In other words, the Hadza encounter little, if any, additional information beyond themselves and their natural surroundings, while Accranians are faced with significantly more information on a day-to-day basis (Ensminger et al., 2014).
The experiment that I propose will draw a random sample of 30 people (15 men and 15 women) from each of the Hadza (0 percent market integration), the Maragoli (a western Kenyan community with 46 percent market integration), and Accra (100 percent market integration). In the first trial, all 90 participants will be individually exposed to a series of 30 pairs of random scenes on a computer, ranging from industrial images to portrait photographs, each with focal objects (foreground) and contextual objects (background). The image presentation will alternate between 500 milliseconds of the original image, 80 milliseconds of a blank screen, 500 milliseconds of the original image with a slight change, and 80 more milliseconds of blank screen. This will continue until a minute elapses or the participant presses a key to indicate that they noticed the change, which they will then have to identify orally after the trial to check for accuracy. Any misidentified changes will be subtracted from that participant’s score. The second trial will follow in the same manner as the first, except that instead of still images, the presentations will be animated to control for any confounding stimuli (Masuda et al., 2006). Once the two trials are over, scores will be tallied for the Hadza, Maragoli, and Accra participants and compared against their market integration values.
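To make the presentation schedule and scoring rule concrete, here is a minimal sketch of the flicker-paradigm timing in plain Python. It only simulates the loop structure; the display and keypress functions are placeholder assumptions for whatever stimulus-presentation software the experiment would actually use, and the scoring helper simply implements the subtraction rule described above.

```python
import time

ORIGINAL_MS = 500   # original image duration
BLANK_MS = 80       # blank-screen duration between images
CHANGED_MS = 500    # modified image duration
TRIAL_LIMIT_S = 60  # trial ends after one minute if no keypress

def run_flicker_trial(show, key_pressed):
    """Cycle original -> blank -> changed -> blank until a keypress or timeout.

    `show(frame)` and `key_pressed()` are placeholders for the display and
    input backend. Returns elapsed seconds, or None if no change was reported.
    """
    start = time.monotonic()
    schedule = [("original", ORIGINAL_MS), ("blank", BLANK_MS),
                ("changed", CHANGED_MS), ("blank", BLANK_MS)]
    while time.monotonic() - start < TRIAL_LIMIT_S:
        for frame, duration_ms in schedule:
            show(frame)
            time.sleep(duration_ms / 1000)
            if key_pressed():
                return time.monotonic() - start
    return None

def score_participant(detections, misidentified):
    """One point per detected change, minus one per misidentified change."""
    return sum(1 for d in detections if d is not None) - misidentified
```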
If my hypothesis is not supported by the data, and greater market integration does not correlate with lower scores on change blindness tests but instead with higher scores, then doubts would be raised about current attentional theories. This finding would suggest that information intake does not strain our attentional filters, raising the question: are our attentional mechanisms bottlenecked at all? Further research would need to be done on limited-capacity theories to reach an empirically backed answer. Other interesting research to pursue given a falsified hypothesis may include replicating the experiment with images drawn from a social media platform rather than random images. A Hadza participant from rural Tanzania, whose society has 0 percent market integration and little to no technology, would be much less accustomed to Instagram screenshots or TikTok videos than an Accranian participant whose urban center has 100 percent market integration and technology at the fingertips. If it can be shown that the Accranian participants perform better than the Hadza on change blindness testing with social media imagery because of their society’s greater market integration, this may count against the modular nature of the attentional mechanism. Specifically, it would jeopardize the attentional mechanism’s informationally encapsulated nature, because the mechanism would be drawing on information about technological interfaces from elsewhere in the mind.
If my hypothesis is supported by the data, and greater market integration correlates with lower scores on change blindness tests, then limited-capacity theories of attention would be upheld. Replication would still be needed, but confidence in the hypothesis that information exposure strains our attentional filters would increase, as would confidence in attention’s bottleneck-like architecture. Ethical concerns about big data and mass information would be raised too. If information exposure is wearing down the precision of our attentional mechanisms, then what other cognitive havoc is it wreaking? As an information-dependent society, we may need to look into proactive measures to artificially filter and sort information before it is presented to the consumer. We would also need to pursue research that dives deeper into which populations are most vulnerable to attentional decay.
I would like to see this experiment carried out because of its cognitive and societal implications. It may revitalize our conception of attention. Or, it may warrant big data and information reform. Both are equally important.
Nine years ago, on the day before Father's Day, I was scrambling to find a last-minute card. I went to every pharmacy in town but didn't find anything Dad-worthy. With time winding down and few options left, I took things into my own hands and made a card myself. I called it: the iDad, an iPad-shaped card with handwritten notes hidden beneath app icons. My Dad loved the card (in a way that all parents love their kid's work) and I loved it too. Since then, I have been handmaking holiday cards for family members every year, and my love for being creative with technology continues to grow.
With McKinsey, I want to deliver solutions that people love. Instead of paper and scissors, I'm now using machine learning, virtual reality, and other advanced tools to assess and create. My goal of having a positive impact on the end-user of my projects, however, has remained the same. As a Solution Delivery Analyst, I will be able to use my technical knowledge and creative thinking to affect organizations across the social, healthcare, and public sectors – the people who can benefit most from constructive problem-solving.
I have the technical skills. Last year, public sector consulting firm Marts & Lundy asked me to help them answer a simple question: “How do we know who to direct fundraising resources towards in higher education?” In response, I built a model to predict how much money candidates are likely to donate to colleges and universities. With little guidance, I networked my way to a massive philanthropy dataset and used it to fuel a five-layer neural network that I programmed using Python and TensorFlow. In the end, the model predicted candidates' donations with ~80% accuracy and helped Marts & Lundy's consultants conserve significant outreach resources.
I have the creative mind. This summer, as a Technology Consultant Intern with EY, I created one of the first auto dealerships in the metaverse. My team's client – an online auto retailer – wanted to explore the viability of expanding into the metaverse. To answer their questions, I assessed the business case using industry comparisons and built a proof of concept. I taught myself to use Unity and program in C# in less than a week and created a 3D world that my internal team and our client loved. I devised novel UI/UX schemas such as custom search tools and first-person experiences, all within a five-week timeline. The model is now being turned into a full-scale platform.
I feel like the Solution Delivery Analyst role was made for me. You are looking for a candidate who can take a theoretical or practical problem, identify the steps that need to be taken and the people that need to be involved to solve that problem, and then help visualize, prototype, and execute that solution. My technical skills and creative nature have prepared me to do just that: assess, solve, envision, and build. I'm really excited to discuss my fit with McKinsey and appreciate you reviewing my application.
I was excited to see that Digitas advertised its need for an Experience Design Intern on Indeed. You are looking for a candidate who is filled with ideas, good at backing up those ideas, and motivated to do so. My experience with design thinking, communications, and marketing would be an asset to your firm’s mission of improving the connection between brands and people. Digitas particularly attracts me because of its idea-driven approach, which separates Digitas from the rest of the solution-first marketing industry and appeals to my love for big-thinking.
I am an experienced designer and communicator who co-founded and co-produced TEDxScarsdale, a lecture series in my hometown. I learned how to give recruiting pitches, work with clients, and advertise and market an event. In turn, our speakers taught me about public speaking and the fundamentals of verbally or visually conveying a message. With my design and communication knowledge, I emceed and sold over 500 tickets for the inaugural talk. I will bring this same energy and skillset to the Experience Design Internship with Digitas.
I further developed marketing and communication skills during my internship with Patti Conte, Ltd. Two summers ago, I had the privilege of working with a public relations firm that promoted up-and-coming musicians in the New York area. Through pitching artists to media outlets and writing press releases, I grew my proficiency in creating compelling messaging, maintaining professional contacts with key partners, and persuasively selling products and ideas -- all skills that could be put to work at Digitas. Effective marketing and communication helped me sign two new artists to the firm and secure more than fifteen positive artist review publications.
My goal is to build upon my experience in a position that leverages and further enhances my skills in design thinking, communications and marketing. At the same time, I feel that I can contribute to the Digitas team’s aim of revolutionizing connection in the marketplace. Given the shaky climate surrounding COVID-19, I would be willing to organize a remote internship plan, and am already proficient in Zoom, Skype, and other offsite communication platforms. I look forward to speaking with you about my fit within the Experience Design Internship through one of those platforms or over the phone. Thank you for considering my candidacy.
America’s two-party system is plagued by antipathy and inaction. As Americans filter into one party or the other, political attitudes and ideals are becoming as binary as the two-party system itself. At the polls, voters habitually adopt the policy stances of their preferred party. On the Senate floor, representatives vehemently refuse to reach across the aisle. At the end of the day, Americans turn on cable news, returning to the comfort of their political echo chambers. Perhaps this is why John Adams warned in 1780 that “a Division of the Republick into two great Parties … is to be dreaded as the greatest political Evil under our Constitution.”
Although the American Constitution lends itself to a two-party makeup, the rise of a third, fourth, or fifth party is not impossible. And, from a psychological standpoint, the diversification of competing ideologies ought to be welcomed with open arms. Currently, America’s two-party system is incompatible with the psychological dispositions of American voters because it demotivates bipartisan information-seeking and imparts polarizing unconscious attitudes. Instead, psychological dispositions warrant a party system that observes attitudinal differences but does not promote staunch polarization. Enter: the multi-party system.
In the following analysis, I discuss the psychological implications of two political factors – partisanship and media – on American voters in two-party and multi-party systems. First, I argue that a multi-party system is more conducive to effective partisanship because it motivates information-seeking behavior that fits party identifications to people’s actual beliefs. Then, I argue that a multi-party system creates a more amicable media environment and tempers hostile unconscious attitudes. Finally, I end with a discussion of whether a multi-party system in America is truly realizable.
Partisanship
Americans are inclined to side with a party because it allows them to be political actors without needing political literacy. Forming political attitudes is as easy as figuring out what the stance of one’s preferred party is. Electing a preferred party is as easy as mapping one or two matters of personal importance to the party’s policy ideals. In doing so, voters minimize the cognitive resources spent on making political judgments and maintain confidence that those judgments reflect their values and affirm their prior beliefs (Lavine et al., 2012).
But endorsing an entire belief system based on an issue-specific consideration leaves quite a large margin of error in assuming the rest of the party’s political judgments. Who’s to say that just because the Democrats and I both support gun control, I will also support increasing income tax? Still, voters forgo important information in favor of adopting party cues and making such assumptions. In a study of tax policy opinions in the 2012 California general election, participants in the aggregate ignored inequality information and instead chose to position themselves along party lines (Boudreau et al., 2014). The consequences here cannot be overstated. Using party cues as substitutes for information risks making irrational political judgments that don’t align with one’s actual beliefs. It also leads to a less informed and less knowledgeable electorate. Most importantly, it undermines compromise. Rather than considering issues objectively or on a case-by-case basis, people endorse the opinions of their preferred party and, without consuming any perspective-shifting information, do not cross party lines.
So, the ideal party system will look to correlate partisan beliefs with people’s actual beliefs and motivate thoughtful information processing as much as possible. How do the two-party and multi-party systems match up?
In a two-party system, it’s easy for people to form partisan identities on the basis of a few issue-specific considerations. Because there are only two parties, and the parties’ ideologies are relatively polar, discerning where the parties stand on a given issue is clear-cut. If my determinant issue consideration is restrictive immigration, it’s easy to see that Republicans support closed-door policies, Democrats support open borders, and that I should identify with the Republicans. Forming a partisan identity based on personally important matters is straightforward, and there’s no need to further research policy stances. Information-seeking behavior is motivated by uncertainty – a gap between one’s experienced level of confidence and one’s desired level of confidence (Lavine et al., 2012). But there is little to no uncertainty in determining partisanship in a two-party system; if I care most about restrictive immigration, I simply endorse the Republicans. In the absence of uncertainty, not only are people unmotivated to learn more, but they process any political information they do come across peripherally, picking up on party cues and choosing to accept or reject the information along party lines. Without engaging with the issue under consideration, people assume that their party’s stance is congruent with their own (Petty et al., 1996). Yet, it is equally – if not more – likely that the majority of the party’s policy stances and political values do not reflect what one would choose to uphold had one considered political information thoughtfully.
Multi-party systems, on the other hand, demand more issue-specific considerations than two-party systems do. With more parties, the political spectrum grows wider and more convoluted. No longer is one party the opposite of another; instead, parties may overlap, supporting similar ideals but different policies or vice versa. In a multi-party system with coinciding ideologies, uncertainty emerges in deciding where to pledge partisan allegiance. Electing a partisan identity requires some distinguishable X factor that will set one party more in line with one’s beliefs than the rest. Uncertainty, then, motivates further research into parties’ stances on other personally important matters or consideration of ideals beyond just those that are issue-specific. Uncertainty also motivates systematic processing of that information, resulting in genuine political knowledge and conscientious attitude formation (Lavine et al., 2012). This catalyzes a positive feedback loop: the more politically aware individuals become, the more likely they are to continue relying on issue-relevant values rather than party cues when forming political judgments (Kam, 2005). Unlike the demotivational responses evoked by a two-party system, a multi-party system motivates systematic consideration of information during the partisan identification process. In turn, a multi-party system produces more knowledgeable political actors and tailors party identifications more closely to people’s actual beliefs.
Media
What Americans know is not only driven by personal importance and issue-relevancy, but also by what is most readily available. For the average American, the quickest and easiest point of entry into the political world is media – cable news, political journals, digital social networks, etc. (Delli Carpini et al., 1996). But most of these media outlets are not in the business of broadcasting objective facts; in America, objective facts don’t sell. Rather, most media outlets garner viewership using polarizing political narratives that appeal to one side of the aisle. Such narratives are riddled with hostile rhetoric that paints the opposing party as the enemy (every good story has a villain).
Naturally, the information that we consume constitutes what we know. Yet the information that viewers glean from media is relatively homogenous. As Taber and Lodge (2006) illustrate in a series of experiments exploring how citizens evaluate arguments about affirmative action and gun control, when people are free to self-select the source of the arguments they read, they choose confirming arguments over disconfirming ones. Americans are poised to avoid contradictory information and return to the same narratives time and time again. Eventually, this can form unconscious attitudes. The more exposed an individual is to a particular political message, the stronger an evaluation they form, the more accessible in memory it becomes, and the stronger an influence it will have on cognitive processes and behaviors (Arcuri, 2008). When political messaging is polarizing and hostile, the resulting unconscious attitudes inhibit any possibility of attitude change or compromise.
The ideal party system will subdue the hostility of the media environment and stifle the formation of polarizing unconscious attitudes. Can two-party or multi-party systems satisfy these conditions?
A two-party system breeds hostility because there is a salient tension between ingroup and outgroup. The enemy in a two-party system – the lone outgroup – is easy to identify and vulnerable to hostile narratives. As of late, particularly in the Trump-era election cycles, political attitudes in America across the aisle and in the headlines have become almost primal in nature. Anthems as pervasive as “Lock her up!” and “Stop the count!” have the power to instill unconscious attitudes that further entrench party lines. A 2015 study comparing implicit partisan affect with other social divides in America found that hostile, unconscious feelings toward opposing parties cause political polarization that is even stronger than polarization elicited by race (Iyengar et al., 2015). America is far from healing its racial divide, and perhaps even further from reducing political polarization. As long as outgroup hostility prevails in media, the two-party chasm will continue to grow.
In contrast, multi-party systems foster a more amicable media environment because there is not as much opportunity for narrative-stitching. Different parties clash on some policy stands and agree on others, thus there is no consistent, clear enemy (no one likes a reality TV show with new characters each episode). Unconscious attitudes require message repetition and emotional provocation, but with an erratic and less hostile narrative, unconscious attitudes are less likely to form. Dissociating political tensions across a number of parties helps alleviate the targeted messaging and polarizing attitudes we saw in two-party systems. It’s easy to villainize outgroups when there’s only one, but much harder to spin a compelling story when villains are diverse and some of them occasionally act like the good guys.
Discussion
In its current two-party state, America is a nation divided against itself. The ease with which party identifications are formed demotivates bipartisan information-seeking, enables peripheral processing of political information, and misaligns partisan ideals with people’s actual beliefs. The sharp division between parties fuels hostile narratives in media, which impart polarizing unconscious attitudes and entrench conflict along party lines.
A nation divided against itself cannot stand, and multi-party systems offer a solution to growing polarization. More parties mean that more issues have to be taken into consideration to assign partisanship, motivating systematic processing of novel information and aligning party identifications more closely with people’s actual beliefs. Without such entrenched polarization, the media environment in a multi-party system is calmer and does not instill potent unconscious attitudes.
The question still remains as to whether a multi-party system could ever come into being in American politics. Right now, given the great schism between Democrats and Republicans, there does not seem to be room for a third contestant. However, if the Left-Right stalemate persists, it seems plausible that some other ideology might challenge those that have long guided the nation. And surely, for the sake of partisanship, media effects, and the nation as a whole, we would hope so.
I was excited to see that Future Media advertised its need for a Partnerships & Marketing Intern on Indeed. You are looking for a candidate who is a relentless and self-sufficient worker, skilled organizer, and talented researcher and marketer. My experience building a popular lecture series and marketing at a public relations firm would be an asset to Future Media’s mission of fostering new initiatives by bolstering partner and customer relationships. Future Media particularly attracts me because of its careful attention to art and design -- personal and professional interests of mine.
I am an experienced marketer who co-founded and co-produced TEDxScarsdale, a lecture series in my hometown. I learned how to search for and approach potential clients remotely, give recruiting pitches, and market an event using social media. In turn, our speakers taught me about public speaking and the fundamentals of verbally or visually conveying a message. With my research, communication, and business strategy knowledge, I emceed and sold over 500 tickets for the inaugural talk. I will bring this same energy and skillset to the Partnerships & Marketing Internship with Future Media.
I further developed marketing skills during my internship with Patti Conte, Ltd. Two summers ago, I had the privilege of working with a public relations firm that promoted up-and-coming musicians in the New York area. Through sourcing and pitching to media outlets, leading email marketing campaigns, and writing press releases, I grew my proficiency in creating compelling messaging, solidifying and maintaining professional contacts with key partners, and persuasively selling products and ideas -- all skills that could be put to work at Future Media. Effective marketing helped me sign two new artists to the firm, secure more than fifteen positive artist review publications, and cement the firm’s brand identity.
My goal is to build upon my experience in a position that leverages and further enhances my skills in marketing. At the same time, I feel that I can contribute to the Future Media team’s aim of spearheading the media industry using partnerships and customer connection. Given the shaky climate surrounding COVID-19, I would be willing to organize a remote internship plan, and am already proficient in Zoom, Slack, and other offsite communication platforms. I look forward to speaking with you about my fit within the Partnerships & Marketing Internship through one of those platforms or over the phone. Thank you for considering my candidacy.
Alumni often bear the misconception of being old-timers trying to relive their glory days (we get it, Terry, you were in AD), but they are the thriving backbone who promote Dartmouth so that students like me can make the most of our time on campus. I want the opportunity to connect with alumni so I can learn how to be an effective alumnus myself. Over the past term and a quarter, I have been welcomed into this amazing institution that was shaped by those who came before me. Coming from a monotonous town, I found a vibrant culture at Dartmouth that I had always been missing. However, this place still has its imperfections (one that comes to mind is the budget wasted on funding and maintaining an unnecessarily broad course offering). I want the ability to effect change at Dartmouth now and to develop the skills to keep doing so once my four years are up. Future generations of Dartmouth students deserve an even better experience than the life-altering one that I have just begun. Life runs in circles, and to be able to join and strengthen an impactful, wide-reaching circle -- made up of the club members in my class, the upperclassmen and underclassmen I would grow to know, the faculty, and numerous alumni -- would be so meaningful.
Hill Winds Society is a four-year commitment with weekly meetings on Mondays and various events throughout the year. Please talk about any current commitments you have at Dartmouth, and tell us about a time when you committed to a group (a club, a team…) (max 1250 characters)
On campus I am a full-time member of the Dartmouth Brovertones a cappella group, and I also frequent the Dartmouth Philosophy Society, Hillel, and Overcooked. Through these clubs -- especially the Brovertones, to which I commit upwards of 6 hours a week -- I have learned about self-management and met some of my best friends. When it comes to commitments, there is often some vague conception of what exactly one is committing to -- is it the time? Is it the club as an entity? In co-founding and co-presidenting TEDxScarsdale during high school, I learned that the true commitment of being in a group is to the people that the group involves. Our club had a rigid structure of meetings and timetables, which were all very clear, but my commitment to the co-presidents, club members, TED speakers, audience members, and school administration could be called on at any time. I assumed the responsibility of being a part of this unique circle and, as a result, had to be available as a club member and as a friend whenever I was needed.
Hill Winds Society is comprised of members who each contribute a unique perspective to our group. What unique perspective would you contribute to Hill Winds Society? (max 1250 characters)
My unique perspective exists in my personality and my Dartmouth experience. I am a perfectionist. This can be a bad trait when I am trying to grind out an essay the night before it is due, but it fuels me with a desire to always seek improvement. With the power of Hill Winds Society, I can use this perfectionism in its strongest form to improve on the flaws I see in the College and foster meaningful relationships with alumni. As for my Dartmouth experience, I had no prior attachment to or expectation of the College coming into freshman year. Yet every day I pick my head up to see something even more beautiful about this school than the day before. An instance that epitomized the Dartmouth experience for me was a game of pickup basketball at the Choates during one of my first weeks here. A group of Hanover high schoolers walked by and challenged my buddies and me to a game. We smoked the high schoolers 15-3, but it was more than that. The experience revealed to me (1) the connection that Dartmouth has to the surrounding community and (2) how small Dartmouth moments like this can bring a bunch of strangers together over a love for the College and its offerings.
As a member of Hill Winds Society, you would be participating in networking events with Dartmouth alumni who may have different backgrounds and perspectives from your own. Tell us about a time when you connected with someone different from yourself. (max 1250 characters)
The most polarizing experience with someone that I’ve had was with my childhood best friend, Matt. Matt knew me before I was even born. As we began to develop personalities, we realized that we were complete opposites. While I enjoyed sports and being active, Matt preferred film and art. Instead of abandoning our friendship, we decided to investigate each other’s worlds. We would spend the first half of the day at Matt’s house, pretending to be Alfred Hitchcock and making YouTube videos that 10 years down the road we would look back on and cringe into another dimension. In the afternoon we would go over to my house, where I tried to make Matt overcome his scooter-phobia (yes, he had a severe fear of scooters). Over time, we saw that our commonalities only helped us learn so much. Instead, delving into each other’s unique spheres made us exponentially more well-rounded people. Otherwise, we would have remained polarized and lived childhood to only half of its potential. While Matt and I have grown apart in recent years due to the different roads life has taken us down, we will always be grateful to one another for showing each other unknown worlds.
Language and music do not strike the casual observer as similar; it would be hard to confuse a Mozart symphony with an Obama speech. But structural parallels between linguistic and music theory and recent findings in neuroimaging show that the way we learn, understand, and produce music and language are not all that different. Capitalizing on their likeness may have far-reaching implications for text-generating AI systems.
As they relate to the human mind, language and music both (1) rely on syntactic structure and (2) involve the same brain regions for processing. Musical and linguistic syntax are hierarchical structures that organize inputs into predictable, meaningful patterns. In language, speech sounds are combined to form words, words are arranged to form sentences, meaning is derived from sentences, and contexts are interpreted from meaning. It is the linguistic syntax -- the set of rules that arranges words and phrases -- that connects a language's structure with its meaning (Jackendoff, 2009). Similarly, in music, multiple levels of organization give rise to meaningful sequences. Tones are combined to form scales, specific notes of scales are harmonized to create chords, chords that share scalar properties comprise a key, a key’s tones and chords are organized into song-form, and from song-form the listener derives a sensuous experience. The musical syntax -- song-form, the ordering of tones and chords -- links music’s structure with its meaning in the same way that linguistic syntax does (Patel, 2010).
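To make the structural parallel concrete, here is a toy sketch of my own (not drawn from Jackendoff or Patel) representing a sentence and a chord progression as analogous nested hierarchies in Python; the labels and the particular chords are illustrative assumptions.

```python
# A sentence parsed into a hierarchy: words -> phrases -> sentence.
sentence = {
    "sentence": [
        {"noun_phrase": [{"word": "the"}, {"word": "listener"}]},
        {"verb_phrase": [{"word": "hears"}, {"word": "music"}]},
    ]
}

# A progression parsed into an analogous hierarchy: tones -> chords -> key -> song-form.
song_form = {
    "key_of_C": [
        {"chord_I":  {"tones": ["C", "E", "G"]}},
        {"chord_IV": {"tones": ["F", "A", "C"]}},
        {"chord_V":  {"tones": ["G", "B", "D"]}},
        {"chord_I":  {"tones": ["C", "E", "G"]}},
    ]
}

# In both cases, meaning is read off the arrangement of the levels,
# not off any single element in isolation.
```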
Mapping language and music processing in the human brain has revealed surprising overlap and breadth. Research during the 1960s found that patients with damage to certain areas of the left hemisphere developed aphasia, while those with damage to certain areas of the right hemisphere suffered from amusia (Kleist, 1962). These results led to the conclusion that the left hemisphere was strictly responsible for language and the right for music. Specifically, it was thought that language was restricted to Broca’s area, associated with syntactic structure and speech production, and Wernicke’s area, associated with the meaning of words. But with the advent of fMRI, scientists discovered that Broca’s and Wernicke’s areas were also activated when processing music both aurally and visually (Steinbeis et al., 2008). Further fMRI studies have shown that regions of the right hemisphere operate during language and music processing as well. It has since been suggested that the left and right hemispheres have broad functions that language and music both recruit. The left hemisphere specializes in hierarchical sequencing like syntax and meaning, while the right processes contour-based patterns and emotional affect. Put simply, the left hemisphere identifies denotation and the right interprets connotation (Harvard-Smithsonian, 2010).
The similarities between language and music reveal symbolic and physical structures that the human brain uses to create and communicate: rigid syntax organizes structural components, the left hemisphere processes its literal meaning, and the right hemisphere extrapolates this meaning.
Understanding and modeling these building blocks has allowed music-producing AIs to get off the ground. IBM Watson Beat, a neural network that composes original music, is a foremost example. The system learns by consuming audio files broken down into their core syntactic elements, which are linked with information on emotions and musical genres. These structural reference points allow IBM Watson Beat to interpret and classify music; they also resemble the building blocks of language and music used in the human brain. The project started in 2016, and by 2018 IBM Watson Beat was reliably producing coherent, original music (IBM, 2018). Today, some music artists like Taryn Southern use music-producing AI as inspiration (Deahl, 2018) while others are using it to create entire songs or albums, such as the first AI-produced album “Hello World” (Hello, n.d.). There is still room to grow and perfect music-producing AI, but recent progress is reflective of an accurate understanding of how the human brain processes music.
IBM Watson Beat’s demonstrated comprehension of music theory is remarkable, but its ability to form connections between musical elements and emotions is what allowed it to pioneer artificial music production. Previous attempts to create a music-producing AI overlooked the right hemisphere’s role in music contextualization and as a result could not create cohesive pieces. The connection between musical data points and emotion is so integral because the human brain does not create music out of thin air. Rather, the musician draws on sensuous experience as the muse for a piece. When music is created, human experience is translated into musical syntax using emotion. For instance, the experience of being broken up with might be represented by minor keys, and pleasure by major keys. Conversely, when music is listened to, musical syntax is translated into an affective human experience by interpreting emotion in the right hemisphere of the brain. The right hemisphere uses emotion as the mediator between human experience and musical syntax, which gives a musical piece coherence (Madell, 2002).
Text-generating AIs, on the other hand, have not seen as much success. GPT-2, produced by OpenAI, is the most advanced text-generating AI to date. Given a prompt, the neural network’s objective is to predict the most likely next word. GPT-2 produces impressive text, often many paragraphs long, that can convince the naive reader. This step forward in narrative capacity has put GPT-2 ahead of other text-generating AI like Word2vec and Google Translate, which struggle to create any narrative at all. But there is still an element of creativity and coherence missing. GPT-2 needs a thorough prompt to generate text from, and even with that crutch, its logic eventually falls off the rails (Radford, 2019). GPT-2 moves as quickly as possible from word to word, deciding which is most likely to follow in sequence, but strict mathematical probability omits any consideration of the meaning that mediates language (Hofstadter, 2020).
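As a concrete illustration of that next-word objective, the following is a minimal sketch using the publicly released GPT-2 weights through the Hugging Face transformers library; this is a convenient stand-in for seeing the mechanism, not how OpenAI packages its own system, and the prompt is invented.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The history of attention research began"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The model's "decision" is simply the highest-probability next token,
# chosen with no representation of the meaning of the sentence so far.
next_token_id = int(logits[0, -1].argmax())
print(prompt + tokenizer.decode(next_token_id))
```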
I propose that the shortcoming of GPT-2 and other text-generating AIs may be that they overlook the right hemisphere’s role in understanding and creating language. Since the right hemisphere is responsible for extrapolating connotations, a lack of coherence among outputs of text-generating AIs could be attributed to a lack of the context that the right hemisphere usually provides. Like music, words and sentences are not spontaneously created. They are guided by emotions, experience, and context, which the right hemisphere uses in translation with linguistic syntax. When music-producing AIs adopted mechanisms to imitate the right hemisphere, the coherence of their outputs dramatically improved. While music is more mathematical than language, and thus music-producing AIs are easier to program than text-generating AIs, language’s lack of mathematical structure only means that understanding it requires relying even more heavily on the right hemisphere’s contextual processing. Given the similarities between language and music processing in the human brain, and the need for a meaning-mediator in text-generating AI, perhaps the same solution that worked for music-producing AI can be applied to text-generating AI. Further work will need to be done to identify the specific roles of the right hemisphere in language processing and how it mediates language.
During our journey in this course, we have seen numerous examples of how architecture can elicit feelings of uncanniness. From short stories to film noir to musical masquerades, the context surrounding an uncanny experience has been anything but consistent. As a result, I grew curious about what architectural features are most likely to activate the uncanny. Specifically, I wanted to know: what happens when we ask an objective source, such as an artificial text-to-image generator, to interpret the uncanny? How will an artificial intelligence (AI) – an engine trained on billions of image samples – choose to illustrate uncanny experiences?
I first conducted a close analysis of the texts and films we have studied over the past ten weeks (The Jolly Corner, The Bridge on the Drina, The Third Man, Citizen Kane, and Blade Runner). I read the texts and screenplays for each work, picking out terms, phrases, and quotes with uncanny associations. I then fed various combinations of keywords to Midjourney, a text-to-image generator, which returned artificial illustrations. Next, I compared the artificially generated images to imagery from the original texts and films I had analyzed. I looked for similarities and differences in the “uncanniness” of each image, trying to identify what the author, director, or Midjourney might have done to effectively activate an uncanny experience. Lastly, I grouped these examples into broader categories of architectural features that I deemed to be most indicative of “uncanniness.”
The following paper is a discussion of my findings. I first define “uncanny” and offer some background information about artificial text-to-image generation. Then, by comparing the images I have extracted and generated, I argue that the features most closely tied to uncanny experiences are light, context, and scale. Finally, I broaden the paper’s scope to advocate for the potential of generative AI to deepen our understanding of architecture and the uncanny.
Background
A deep dive into the uncanny necessitates a concrete definition of what “uncanny” really means. We have seen many different uses of the term across countless contexts, but the definition that sticks with me is “taking something familiar and making it unfamiliar.” This definition makes the uncanny tangible and actionable while also maintaining the emotional affect we often associate uncanniness with. As I go on to analyze what architectural features create uncanny experiences, I will specifically be looking to see how those features take something familiar and make it unfamiliar.
This discussion also requires some background on artificial text-to-image generation. A text-to-image model is a machine learning model that takes a natural language description as input and produces an image matching that description. These models were initially quite naïve, but due to recent advances in data processing power, new models like DALL-E 2, Midjourney, and Imagen have approached the quality of real photographs and human-made art. They begin as a long series of weights between 0 and 1; these weights are then trained using billions of samples of images and their associated alt-text captions (the text you see describing an image when it fails to load on a webpage). Over time, the models adjust their internal weights to build associations between text inputs and image outputs. What makes these models so interesting is that they create an entirely unique image. This is encouraging for the present experiment because the image outputs we get from “uncanny” text inputs are the result of what an objective machine, trained billions of times over, believes “uncanniness” to look like. Through a series of extremely informed calculations, we can visualize uncanny phenomena like never before.
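Midjourney itself is operated through Discord prompts rather than a public programming interface, so as an illustrative stand-in for the same text-to-image idea, here is a minimal sketch using the open-source diffusers library and a Stable Diffusion checkpoint; the model name, prompt, and settings are my own assumptions and are not what Midjourney runs.

```python
import torch
from diffusers import StableDiffusionPipeline

# Any open text-to-image checkpoint works here; this one is a common default.
# Requires a CUDA-capable GPU as written.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = ("a joyful Victorian townhouse at dusk, lit only by a single "
          "white electric lustre in a grand window, uncanny atmosphere")

# More inference steps and a moderate guidance scale trade speed for fidelity.
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("uncanny_townhouse.png")
```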
Methods
I analyzed two texts (The Jolly Corner and The Bridge on the Drina) and three screenplays (The Third Man, Citizen Kane, and Blade Runner), pulling terms and quotes that we had collectively deemed “uncanny” in class discussions. I then experimented using different combinations of terms and phrases as text inputs for Midjourney; in general, I found that either a series of phrases or one descriptive quote returned the best outputs. I varied the style of text-input I used in the examples presented in the Figures section, which includes 9 of the 52 input-output pairs I tested. Each artificially generated image is presented side-by-side with a comparison image from the original source (be it an original movie, image, or film-adaptation). In the Results section, I group my results into three categories of architectural features that I argue are most likely to elicit an uncanny experience: light, context, and scale.
Results
Feature #1: Light
Light, and particularly the absence of it, has an incredible ability to evoke the uncanny. Light does not necessarily have to be an intrinsic property of architecture; oftentimes, it is added on top of an existing structure to turn it from something familiar into something wildly unfamiliar. In this way, you can make architecture feel alien without changing anything about the architecture itself. This is what makes Figure 1b and Figure 1c feel uncanny to me. The Jolly Corner is riddled with mentions of varieties of light such as “autumn,” “crepuscular,” “uncertain,” and “electric.” As seen in Figure 1c, a building that was once a joyful Victorian townhouse, familiar to both Spencer Brydon and the viewer, is rendered unfamiliar by the orange glow of nightfall. Midjourney picks up on the notion of a “white electric lustre” in Figure 1b by making a grand Victorian window the only illuminative source in the scene. Both The Jolly Corner and generative AI do a good job of making traditionally comforting architecture feel haunted through their use of unfamiliar light.
Conversely, The Jolly Corner and Midjourney also exploit an absence of light to create a sense of uncanniness. Darkness turns familiar architecture into foreign realms by inhibiting vision, the sense that we rely on most to navigate the world. Without the power of sight, we lose most of our ability to corroborate familiarity. Think about a trip into a dark basement – even if it is in your own house, you may still feel an urge to get in and get out as quickly as possible. Darkness causes the human mind to run rampant with theories and fears about what could be on the other side of the shadow, regardless of how unlikely they may be. This is what gives Figure 2b and Figure 2c an uncanny aura. In Figure 2c, Spencer Brydon has only a flickering candle (which itself is uncanny) to illuminate his familiar childhood home. But with a limited field of view, Brydon begins to duck and dodge at every dark corner. He grew up in this home, yet the darkness it casts at dusk forces him to wonder what sorts of specters may lurk in its shadows. Figure 2b follows a similar logic. The silhouetted shadow of a human is eerie in its own right, but the vignetted corners of the image compound our fear response. Darkness can leave spine-chilling questions unanswered, such as where the person’s shadow is coming from and what else may be found in the dark.
Silhouettes are a particular kind of darkness with a proclivity for uncanniness, since they leave an especially disconcerting question unanswered: who is it? Most of the media we analyzed in this course featured some sort of human silhouette, and The Third Man may be the exemplar of them all. Figure 3c illustrates how silhouettes manifest in The Third Man. While the police and Holly Martins are waiting for Harry Lime – a fugitive of homicidal proportions – a large silhouette appears on the wall of a war-torn square. They have some knowledge – that the shadow is that of a human – but their vision is still occluded enough to push their minds into unfamiliarity and fear (irrationally so, as the silhouette turns out to be a man with balloons). Figure 3b is an artificial image generated using phrases from The Third Man screenplay. Clearly, Midjourney has picked up on the uncanniness of a dark silhouette, filling in the edges of the image with more dark architecture to accentuate the phenomenon of visual occlusion.
Feature #2: Context
The context that architecture is situated within has the power to influence whether it is interpreted as uncanny. The human mind builds understanding through associations, and if we come to associate architecture with an uncanny history or circumstance, we will likely consider the architecture uncanny on its own as well. Consider a haunted house: for 364 days of the year, a haunted house is simply just a house, but on Halloween (when you are told that hundreds of clowns died there last year) a familiar house can turn profoundly uncanny.
The Bridge on the Drina is another example of how context alters perception. As we discussed in class, the Bridge on the Drina is not an intrinsically uncanny bridge. If you were walking through Višegrad on a sunny day, you might even say that the bridge is especially beautiful or comforting. However, once we have read the tales that Ivo Andrić recounts – that a mother walled her children inside the bridge, that a rebel was publicly hanged from its arches, that the surrounding townspeople were enslaved to ensure its construction – the bridge becomes wildly unfamiliar and eerie. Figure 4b captures the idea of “associative uncanniness.” When I input descriptors about the bridge alone, such as “skillfully built stone bridge with a mysterious history,” Midjourney returned traditional, familiar bridges. However, once I included mentions of how the bridge “turned the town into a hell,” the color scheme turned dark and fiery, as seen in Figure 4b. A similar association was made when I added “exploited workers” to the text input. The subsequently generated image is Figure 5b, featuring a bridge rendered uncanny when you learn that the depicted workers are enslaved.
Both Figure 4b and Figure 5b look eerily similar to Figure 5c, which shows the Bridge on the Drina surrounded by a wartime haze after it was partially destroyed during World War I bombings. It is a testament to the power of context how artificially generated images, just by nature of their historical contexts, can look like a real-world image of familiar architecture turned into unrecognizable rubble.