Blog Feed

The Hashtag that Became a Movement: #MeToo Fiction 2017-2021 and Beyond

In 2017, a viral hashtag became a worldwide movement. Though the phrase “me, too” was first used in reference to sexual violence in 2006 by activist Tarana Burke, the hashtag #MeToo gained widespread attention when, on 15 October 2017, actress Alyssa Milano used it on Twitter and encouraged fellow survivors to follow suit.

A woman with blonde hair, wearing a gray turtleneck holds a white poster in front of her face. The poster says #MeToo. Her nails are painted red.

Milano’s tweet was a call to arms to expose the ubiquity of sexual violence, and that call was answered more than 12 million times across various social media platforms in the first 24 hours alone. Scholars Jessica Ringrose, Kaitlynn Mendes, and Jessalynn Keller note that #MeToo is the most high-profile example of a growing movement towards “digital feminist activism, [following] a growing trend of the public’s willingness to engage with resistance and challenges to sexism.”

Four years later, #MeToo is synonymous with the global fight against sexual violence. It was no surprise, therefore, that the movement would be explored in feminist fiction. But how is #MeToo fiction influencing the cultural conversation, and vice versa?

A black cat sits on a black table with a black background. The cat's bright yellow eyes are visible.

The Cat Person Effect

Feminist scholar Catharine R. Stimpson asks, “Does a single book change a life? Not by itself, for nothing exists in isolation. A lightning bolt needs a sky charged with electricity and a vulnerable ground.” Stories can both document social change and serve as a means of bringing about that change. Good fiction can reinforce or disrupt narratives, provide a warning, or offer new and enticing possibilities for a better society. 

Gayle Greene, author of Changing the Story: Feminist Fiction and the Tradition (1991), described the feminist fiction of the late 1960s and early 1970s as “so close to the pulse of the times that it is possible to use it as documentary of and commentary on the social and political scene.” Much the same could be said about more recent feminist writing and about #MeToo fiction in particular.

Academic and critic Rita Felski also explores the idea of fiction as a form of social construction and historical record. She writes in Beyond Feminist Aesthetics: Feminist Literature and Social Change that fiction “does not reveal an already given identity, but is itself involved in the construction of this self as a cultural reality.” Arguably the most influential piece of fiction associated with the #MeToo movement is Kristen Roupenian’s Cat Person, published in the New Yorker in December 2017. The story chronicles a bad date, bad sex, and the ugly, misogynistic aftermath between college student Margot and thirty-something Robert, and has been described as the first short story to go viral.

At its heart, Cat Person is about the grey areas of consent. The story was read, shared, and discussed worldwide, with many young women expressing how relatable they found Margot and her experience. Cat Person is uncomfortable to read, forcing the reader to experience an awkward, unwanted sexual encounter up close and to acknowledge that people (particularly women) sometimes say yes because it seems less risky than saying no. 

Unlike many portrayals of sexual violence in fiction, Cat Person contains no crime or clear act of violation. Instead, it embraces the ambiguities of sex, power, and the limits of virtual communication. Since the sex portrayed in the story is at least nominally consensual, in that Margot outwardly gives consent despite her discomfort and even revulsion, it raised challenging questions and sparked fierce debates about personal responsibility, consent, and whether Robert is the villain of the piece or not. 

Cat Person’s viral success can be attributed to the fact that it captures, at precisely the right moment, a conversation that was at the forefront of cultural consciousness. It seems likely that, in another fifty years, stories like Cat Person will be viewed as historical social commentary in the same way as feminist fiction of the ‘60s and ‘70s.

To return for a moment to Stimpson’s memorable metaphorical question, when real-world events charge the sky and prime the ground, stories can be those lightning bolts of clarity.

A bolt of white lightning flashes against the background of a dark sky.

Reading in the #MeToo Era

Regardless of authorial intent, any post-2017 piece of fiction that deals with sexual violence will be understood in light of the #MeToo movement. Kate Elizabeth Russell’s 2020 novel My Dark Vanessa—which tells the story of an abusive relationship between fifteen-year-old Vanessa Wye and her English teacher, Jacob Strane—took eighteen years to write, according to the author. In an interview with Fiona Sturges for The Guardian, Russell admitted that she was nervous about the timing of the novel’s publication and did not want to be viewed as opportunistic.

Though My Dark Vanessa does make reference to a collective social-media-based reckoning similar to #MeToo, it is clear (given the lengthy writing, revision, and development time) that Russell did not originally conceptualise the novel as a #MeToo story. But does authorial intent matter, and to what extent? I argue that it matters far less than the context into which the work is released. 

A novel about sexual abuse published in 2020 can never be separated from the #MeToo movement, and any audience will receive the work in that context. Regardless of the precise timing of its writing and acceptance for publication, a novel like My Dark Vanessa will be viewed as a #MeToo story and consumed in light of that reality.

Similarly, Sofka Zinovieff’s Putney was released in 2018 and, according to the author, written and accepted for publication before the #MeToo movement gained traction. Even so, Zinovieff acknowledged in a 2019 interview with Eleni Papargyriou that Putney will inevitably be perceived as “part of the zeitgeist” in relation to the movement.

Book covers for My Dark Vanessa by Kate Elizabeth Russell (left), and Putney by Sofka Zinovieff (right).

Where Will #MeToo Fiction Go Next? 

Something of a second wave of #MeToo fiction is currently in progress. In the wake of several significant real-world events, from the sentencing of Harvey Weinstein to the murder of Sarah Everard in London, the #MeToo movement has seen a resurgence in activism and engagement across the world. It seems likely that a new generation of #MeToo fiction will follow. 

Reviewing the canon of fiction about sexual violence reveals a notable trend away from futuristic, dystopian themes in the last two to three years. Instead of Margaret Atwood’s Gilead in The Handmaid’s Tale or the nightmarish isolated island of Jennie Melamed’s Gather the Daughters (2017), many of the post-#MeToo additions to the canon bring sexual violence into everyday locations: schools, homes, campuses, bedrooms. Susan Choi’s Trust Exercise explores harmful power dynamics between teenagers and adults at a performing arts school. Kate Walbert’s His Favorites tackles the abuse of a student by a teacher. Rosie Price’s What Red Was explores sexual assault alongside other contemporary concerns such as addiction, class, and family dysfunction. Even ostensibly dystopian fictions often tackle immediate and pressing real-world concerns. For example, Leni Zumas’ Red Clocks imagines an America without legal abortion, a reality that seems to come closer each year.

#MeToo forced societies around the world to acknowledge the enormity of the sexual violence problem. It challenged the idea that violations are rare and committed only by monsters, and showed us that the “monsters” are ordinary people who walk among us. Fictions related to the #MeToo movement will likely continue to follow this same path. I believe we will continue to see more stories anchored in the real world and real (or at least realistic) experiences, as well as those that embrace ambiguity and nuance, asking difficult questions about the nature of consent and power. It is also my hope that we will hear from more diverse voices. As of now, the #MeToo movement and its associated literature are disproportionately dominated by white, cisgender, heterosexual, and able-bodied women with educational and financial privilege. Though this is beginning to change, there is a long way to go before the canon is truly representative of the vast array of stories and experiences that exist. What will be the next Cat Person—the next story that captures the heart of an issue at precisely the right moment? I don’t know, but I look forward to finding out.

The Prospects For Limiting Nuclear War And The Strategy Of “Escalate To De-escalate” – A Research Note

The most recent version of the United States Nuclear Posture Review (NPR), written in 2018 during the Trump administration, claims that Russian strategy “mistakenly assesses that the threat of nuclear escalation or actual first use of nuclear weapons would serve to ‘de-escalate’ a conflict on terms favorable to Russia.” This strategy is encapsulated in the phrase “escalate to de-escalate” (E2DE), which may be defined as a strategy in which a state attempts to escalate a conflict with the express purpose of deterring further military action by the adversary and/or terminating the conflict on terms favorable to itself.

At first glance, the E2DE strategy might appear to be paradoxical and counter-intuitive. How might a country go about escalating a conflict and de-escalating it at the same time? Nevertheless, many decision makers in the United States, including national security officials, assume E2DE to be part of the current Russian nuclear weapons strategy. The logic of this strategy is as follows: if one side of a conflict employs a sudden or sharp escalation, i.e., the crossing of an important threshold or a dramatic movement beyond previous limitations, the other side may capitulate. Capitulation would occur, the logic continues, because the receiving state understands (after the dramatic escalatory move) that its adversary is more committed, resolved, and willing to escalate to higher levels of violence than the receiving state.

A bald white man in a black jacket and khaki pants stands in the middle of a black and white checkered floor. He is speaking to a room of white men and women, who are all dressed in business clothing. In the background is a row of flags of the world.
U.S. Naval War College (NWC) staff members listen to a brief during a wargame reenactment of the Battle of Jutland at NWC in Newport, Rhode Island. The historical World War I naval battle was fought May 31, 1916, between the British Royal Navy’s Grand Fleet, under British Adm. Sir John Jellicoe, and the Imperial German Navy’s High Seas Fleet, under German Vice Adm. Reinhard Scheer. The battle was later studied in great depth at NWC by Fleet Admirals Chester W. Nimitz, Ernest J. King and William F. Halsey, and helped shape U.S. Navy warships, tactics and doctrine in the years leading up to World War II. During the wargame reenactment, Rear Adm. P. Gardner Howe III, NWC president, commanded the German High Seas Fleet and retired Rear Adm. Samuel J. Cox, director, Naval History and Heritage Command, commanded the British Grand Fleet. (U.S. Navy photo by Chief Mass Communication Specialist James E. Foehl/Released)

The latest U.S. Nuclear Posture Review argues that the Russian assessment is mistaken, and yet the same E2DE strategy was a bedrock of U.S. and NATO policy throughout the Cold War. J. Michael Legge, a former analyst for the RAND Corporation, explains the development and implementation of NATO Cold War nuclear strategy thoroughly in his 1983 piece. He writes: “The strategy formally recognized that if deterrence failed… NATO might have to resort to using TNW [Theater Nuclear Weapons] in a further attempt to end the conflict by convincing the Soviet leadership that they had miscalculated.” If the U.S. and NATO assumed that E2DE might work then, why is faith in this strategy now a dangerously mistaken belief? Indeed, it is possible to argue that the strategy did work, as a deterrent strategy at least, since the U.S. and NATO never had to defend themselves from a Russian invasion of eastern Europe.

Other questions remain regarding the potential effectiveness of an escalate to de-escalate strategy in terms of deterrence as well as, more importantly in my view, in terms of what happens when the strategy is employed not as a deterrent threat but an escalatory attack. First, how prevalent is belief in the strategy’s efficacy among decision makers in the U.S.? Secondly, why (or under what conditions) do experts believe such a strategy might work? Finally, does evidence exist to support belief in the efficacy of E2DE strategy? My dissertation research seeks to answer these questions through a multi-method approach utilizing expert interviews, a survey experiment and a historical review of wargames and military exercises specifically related to the concept of limited nuclear war.

I argue that a majority of the U.S. strategic community believes that “limiting” nuclear war is difficult and unlikely but nevertheless believes the U.S. should develop specific strategies and capabilities for limited nuclear war, rather than simply relying on other deterrence strategies, such as assured retaliation or asymmetric escalation. I also suggest that a significant portion of the U.S. strategic community believes that nuclear adversaries embrace a strategy of “escalate to de-escalate” with nuclear weapons. Furthermore, I hypothesize that a significant portion of experts believe that the U.S. needs to have a similar strategy in response, both to deter adversaries as well as to respond in kind. Adopting this strategy is potentially catastrophic. If both parties to a nuclear conflict believe that escalation is a path to coercive success and war termination, a cyclical reciprocation of destructive proportions is a likely result.  

In order to interrogate this intuition, my first research question asks: What do U.S. leaders, experts and members of the United States strategic community, including decision makers in the nuclear command and control enterprise, think about the feasibility of conducting limited nuclear war? In other words, what are their beliefs about the ability to control and limit escalation in a nuclear war? I also ask how these experts think about the strategy of E2DE among nuclear powers. I plan to conduct a series of semi-structured interviews among members of the U.S. strategic community, which includes a variety of high-ranking military officers, civilian Department of Defense officials, think tank analysts, and other members of U.S. nuclear command and control organizations. Thankfully, due to my ongoing military service as an officer in the U.S. Navy, I have unique access to many of these individuals and my previous military experiences and contacts will be of great help in this research. 

A grey, cylindrical rocket launches into the air with a jet of fire behind it and two pillars of smoke. The rocket is at an angle and appears to be moving upwards away from the ocean.
An unarmed Trident II D5 missile launches from the Ohio-class ballistic missile submarine USS Nebraska (SSBN 739) off the coast of California. U.S. Navy photo by Mass Communication Specialist 1st Class Ronald Gutridge

With a deeper understanding of the beliefs of this strategic community, the next step will be to compare those beliefs to empirical evidence. Are these leaders and potential decision makers correct in their assessment of the viability of such a strategy? 

I will investigate two different but complementary sets of data. First, I will conduct a survey experiment utilizing a hypothetical future scenario between the U.S. and a smaller nuclear power. In this experiment, respondents will represent the U.S. and will be asked about their preferred response when placed in a situation where the adversary attempts to achieve war termination through escalation, i.e. an attempt at E2DE. This will help me answer the question of whether or not the employment of nuclear weapons (detonation of a nuclear weapon to achieve some physical and psychological effect on the adversary) in a conflict makes escalation more or less likely than an equivalent conventional (non-nuclear) attack.
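The logic of such a survey experiment can be sketched in code. The design, sample size, response scale, and response model below are illustrative assumptions for exposition, not details of the actual study: respondents are randomly assigned to read either a nuclear or a conventional escalation vignette, and their preferred-response scores are then compared across the two arms.

```python
import random
import statistics

def simulate_survey(n_respondents=1000, seed=42):
    """Sketch of a between-subjects vignette experiment.

    Each respondent is randomly assigned to one condition (the
    adversary's escalatory attack is nuclear or conventional) and
    reports a preferred U.S. response on a 1-7 escalation scale
    (7 = maximal escalation). The response model here is a
    placeholder, not real data.
    """
    rng = random.Random(seed)
    responses = {"nuclear": [], "conventional": []}
    for _ in range(n_respondents):
        condition = rng.choice(["nuclear", "conventional"])
        # Placeholder response distribution, centered mid-scale.
        score = max(1, min(7, round(rng.gauss(4, 1.5))))
        responses[condition].append(score)
    # Compare mean preferred escalation across conditions.
    return {cond: statistics.mean(vals) for cond, vals in responses.items()}

means = simulate_survey()
```

With real responses in place of the placeholder model, the difference between the two arms’ means would speak to whether a nuclear attack makes escalation more or less likely than an equivalent conventional one.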

The next component of my study will address the question: What historical evidence exists from past wargames and military training exercises to support or refute a belief that a strategy of E2DE might work among nuclear powers? To investigate this question, I will conduct a historical review of wargames and military exercises conducted by the U.S. and NATO, and other countries where available, in the nuclear era (post-1945) to assess the relationship between conflict escalation and war termination, or the strategy of E2DE among nuclear states. A wargame, as defined by wargaming expert Peter Perla, is “a warfare model or simulation that does not involve the operation of actual forces, and in which the flow of events is shaped by decisions made by a human player or players.”

My goal in this portion of my research is to examine available records of wargames and exercises, like Operations Sagebrush, Carte Blanche, and Able Archer, akin to what Reid Pauly, professor of nuclear security and political science at Brown University, did with U.S. wargames in “Would U.S. Leaders Push the Button?” In his piece, Pauly systematically reviewed past wargames with elite-level participants as a research method to assess when and why leaders might choose to initiate the use of nuclear weapons. In my project, the universe of cases would include games and exercises in which deliberate escalations were perpetrated by at least one side, whether use of nuclear weapons or other forms of escalatory attacks. I will be looking at instances where one side attempts to escalate to de-escalate, whether or not through nuclear attacks, and at what adversary response and escalation dynamics occurred in the wake of this decision.

A missile flies through the air. The missile is grey and cylindrical with two airplane wings coming out of each side. The side of the missile says "U.S. Air Force." The tail of the missile has a white star inside of a blue circle. The tip of the missile is white.
Boeing’s AGM-86B Air Launched Cruise Missile © Reuters

As an example, in 1967 two very high-level politico-military wargame exercises known as BETA I and II – 67 were conducted by researchers and senior Department of Defense officials at the Pentagon. During these games, both sides, representing the U.S. and Soviet teams, experimented with attempts at E2DE using both conventional and nuclear weapons. Only one of the four attempts made across both sides succeeded. The one successful attempt was accomplished with conventional weapons, while the nuclear attempts resulted in cyclical reciprocation ending in massive nuclear exchange. Numerous other wargaming records are available for similar analysis and may be able to tell us important things about the dangers or merits of escalating to de-escalate.

One advantage to this method is that in games where the debates and arguments around decision making have been recorded it is possible to gather information about how decision makers were thinking and what their reasoning was. As Pauly recently explained for the Watson Institute, crisis simulations are useful as research tools in order to “see problems in different ways, anticipate unintended consequences, generate unanticipated outcomes, pose new questions to ask, and reveal unknown assumptions.”
My research agenda asks important questions, the answers to which are likely to inform decision makers’ strategies for deterrence as well as their likelihood of engaging in conflicts that risk nuclear escalation. As the United States, Russia, Pakistan and other states increasingly explore the idea of lower-yield, shorter-range, high-accuracy weapons for “tactical” or “limited” use, and update their existing nuclear arsenals (in some cases bringing back weapons systems previously retired), understanding escalation dynamics in a nuclear war is of the utmost urgency. My project aims to help the U.S. strategic community and potential policy and decision makers to be cognizant of their own beliefs, to be aware of available evidence to support or challenge those beliefs, and to acknowledge the implications if beliefs and evidence are misaligned. At a minimum, these misalignments may result in inefficient use of limited resources. Of more concern might be deterrence strategies and policies that are ineffective and may reduce stability between nuclear powers. Most importantly, if leaders are wrong about the ability to employ nuclear weapons as a de-escalatory measure, the potential consequence could be a devastating nuclear war, something which is clearly in no one’s best interest.

A new personalized cancer treatment – will ‘GliaTrap’ be able to lure and treat cancer cells to prevent tumor recurrence?

What if you get diagnosed with cancer? What if your beloved family member, partner, or friend gets diagnosed with cancer? The news may fill you with fear and despair. In particular, what if you get diagnosed with glioblastoma (GBM)? GBM is the most aggressive type of brain cancer, with an average overall survival of 15–21 months after the first diagnosis. Moreover, GBM patients’ 5-year survival rate is less than 7%, one of the lowest among all cancers. Although treatment for other types of cancer is becoming more and more successful, current treatment options for GBM are largely ineffective and inevitably result in relapse and death. However, at the Laboratory of Cancer Epigenetics and Plasticity at Brown University and Rhode Island Hospital, we are working on innovative new treatments for GBM. One of these projects is called GliaTrap.

What Is the Drug Discovery Process?

How does a new treatment get discovered? The drug discovery process is divided into three steps: 

1. Drug Discovery and Development. 

2. Preclinical Research 

3. Clinical Research. 

A cartoon image of three grey mountains. One mountain is labeled with "Challenge 1: Drug discovery and treatment." One mountain is labeled with "Challenge 2: Pre-clinical research." One mountain is labeled with "Challenge 3: Clinical research." This last mountain is topped with a green flag that says "Goal."
GliaTrap development plan.

During Step 1, researchers elucidate the mechanisms of disease progression, which leads to the discovery and development of a treatment that inhibits the disease process. Once a potential therapeutic candidate is selected, it goes to Step 2, where researchers test its safety and side effects, how the drug affects the body, how the body responds to the drug, and so forth. Preclinical research requires a different laboratory setting than an academic research lab, and it must be monitored by a third party (e.g., the FDA in the US). Once the therapeutic candidate is determined to be safe enough, the treatment goes to Step 3, Clinical Research, where its efficacy in human patients is tested. This entire process takes about 10–15 years for a single treatment candidate to become available to patients.

Current therapies for GBM include surgical removal, chemotherapy, radiation therapy, or a combination of those. Each treatment modality has its own advantages and disadvantages. Surgery removes most of the bulk tumor, but it cannot remove individual cells, which remain in the brain. Chemotherapy is normally administered to treat these remaining GBM cells; however, it is challenging to specifically target the distributed GBM cells without killing the surrounding healthy cells. Radiation therapy has similar disadvantages, since targeting only cancer cells without damaging the surrounding healthy cells is impossible. All of the current approaches thus face huge clinical challenges, which is why GBM remains effectively untreatable today.

Innovative cancer treatment “GliaTrap”: GliaTrap lures the cancer cells and attacks them.

To address this challenge, we are developing a new technique for GBM therapy: GliaTrap. GliaTrap functions much like a Japanese cockroach trap called “Gokiburi hoihoi”: a container that houses food to attract cockroaches and poison to kill the cockroaches it attracts. In the GliaTrap concept, the cancer cells in the brain play the role of the cockroaches. GliaTrap uses a biocompatible material called hydrogel, like the container of the Gokiburi hoihoi, to house the “food” and drugs that lure and kill cancer cells. The food for cancer cells is called a chemoattractant, and GliaTrap uses this molecule to lure the residual GBM cells post-surgery to the vicinity of the empty resection cavity, just as a cockroach trap uses food to attract cockroaches. Once these cancer cells are attracted to GliaTrap, it uses an anti-tumor agent to kill them at the vicinity of the cavity without causing significant damage to healthy cells, just as cockroach traps use poison to kill the cockroaches. We hope that GliaTrap will be able to eliminate the cancer cells remaining after surgery and so prevent tumor recurrence.

GliaTrap can utilize not only anti-tumor agents, but also lure/use the body’s natural immune cells. Anti-tumor agents in GliaTrap can be replaced with immune cell activators, molecules that boost the ability of immune cells to attack cancer cells. GliaTrap can serve as a new treatment delivery method in concert with surgical removal and chemotherapy. GliaTrap combines targeted capture and drug release to increase therapeutic efficacy and safety by selectively killing the cancer cells that surgical removal and chemotherapy might miss. As a result, GliaTrap could increase the survival rate of GBM patients.

On the left is a cartoon cockroach outside of a cartoon trap that has poison disguised as food inside it. The next panel shows the cockroach entering the trap enticed by the food. The last panel shows the cockroach dead inside the trap due to poison.
How the GliaTrap system mimics a cockroach trap.

Looking forward, GliaTrap can potentially be applied to other types of invasive cancers that currently lack effective treatments, such as pancreatic cancer. Pancreatic cancer has a similar treatment protocol: surgical removal followed by chemotherapy, radiotherapy, or a combination of those. GliaTrap could be implanted into the empty space created by removal of pancreatic cancer cells and perform in a similar way as described for GBM, using an optimal chemoattractant for pancreatic cancer cells. To ensure the trap captures the cancer cells, their genetic profiles can be investigated and optimal chemoattractants selected accordingly. Because chemoattractants and therapies can be chosen based on the genetic profile of each patient’s cancer, GliaTrap can be tailor-made for each patient. With continued effort, GliaTrap could become a platform for combination therapies for various types of cancer and contribute to personalized treatment options.

The current challenge for GliaTrap research.


The GliaTrap project has great potential but, like every paradigm-shifting discovery, it comes with many challenges. Many more studies are needed to prove its effectiveness and safety before it can be applied to patients. Ultimately, with our work at the Laboratory of Cancer Epigenetics and Plasticity, we hope to help patients and their loved ones to no longer view a diagnosis of cancer as a death sentence, but rather as a challenge that can be overcome with the right treatment.


Indigenizing Colonization: How Indigenous Knowledge Can Help Us Do Better When Looking to Colonize Other Planets

When you think of colonizing a planet, your mind may turn to a science fiction-like existence: new and cutting-edge technologies you could never have dreamed of; humans living in enclosed habitats; and harsh, unforgiving environments that must be tamed in order to survive. What you may not think of is that humans have done it before—here, on Earth.

I am a member of the Shinnecock Nation and a planetary scientist. Originally, I saw my native identity as extraneous to my scientific career. How could my indigenous knowledge ever help me when researching a completely different world? But the more I delved into my work, the more I saw there were problems that could be solved using “Two Eyed Seeing.”

Two Eyed Seeing is a term originally coined by Mi’kmaw elder Albert Marshall and introduced to me by Dr. Roger Dube, a Mohawk Native from the Rochester Institute of Technology. The term refers to using western and indigenous scientific approaches simultaneously. The indigenous approach to science places an emphasis on observation and working in a way that is synergistic with what the natural world already offers, while western science follows the typical scientific method of posing a question and conducting an experiment. Importantly, because of the focus on synergy with the natural world, indigenous science generally has a lower impact on environmental surroundings when used responsibly.

Multi-colored red and yellow corn on a black tabletop
The multi-colored kernels of the Bear Island flint corn planted during the experiment.

The inaugural manned mission to Mars is expected in 2024 for SpaceX and in the 2030s for NASA, and with humans reaching the Red Planet we may be headed towards colonization. The first step to approaching Mars’ colonization through a more indigenous lens is to remember that we must view the planet as a living thing and as a provider. In many North American indigenous cultures, we refer to the land that indigenous people inhabit as “Turtle Island”, a term that harkens back to a creation story which describes how we live on the back of a giant turtle moving through the oceans. In that sense, while you have been permitted to live on this being, you must also respect it, for it too is alive. Mars may not be as prolific a provider as Earth, but there are resources there that can be worked in tandem with rather than simply exploited. We don’t have to be a resource-hungry culture going from planet to planet using up everything that we can and moving on.

Every kilogram of resources imported from Earth costs large amounts of money, fuel, and time to reach Mars. If we brought fertilizer and soil there, both highly dense items, these would be literally worth more than their weight in gold. Thus, respect for the resources on Mars becomes important not only from a moral standpoint, but also from economic and logistical standpoints. On Mars, water-ice is abundant beneath the surface, especially in polar regions. It can be melted for drinking, daily necessities and other purposes. It can also be transformed into rocket fuel by splitting the water molecules into their constituent hydrogen and oxygen atoms. Building materials found on Mars, such as easily accessible iron from meteorites on the surface and regolith, could be used to build habitats with 3D printing. Through an indigenous approach we can learn to utilize these resources while sustaining them for long-term growth and future exploration.

Traditionally, many indigenous communities in the Americas grew their own food, amended soil naturally and organically, and were able to create a self-sufficient, near-vegetarian community. Corn, beans, and squash, known to many tribes as “the three sisters”, were grown together in a beneficial, symbiotic arrangement quite different from the monocrop, non-rotational farming that is currently popular in the food growth industry. The beans added nitrogen back to the soil to be used by the corn and squash, the corn provided a pole for the beans to climb, and the squash served as a living mulch that fought off pests with its prickly texture. These three foods together rounded out the complete nutritional needs of a human; however, they were not the varieties you are used to buying in a grocery store.
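The water-to-propellant idea above comes down to simple stoichiometry: electrolysis splits water as 2 H2O → 2 H2 + O2, so the mass split is fixed by molar masses. The short sketch below (an illustration I am adding, assuming idealized, complete electrolysis; real yields would be lower) computes how much hydrogen and oxygen a kilogram of melted ice could provide.

```python
# Mass split of water into hydrogen and oxygen via electrolysis:
# 2 H2O -> 2 H2 + O2
M_H, M_O = 1.008, 15.999           # standard molar masses, g/mol
M_H2O = 2 * M_H + M_O              # ~18.015 g/mol

def propellant_from_water(kg_water):
    """Return (kg_H2, kg_O2) obtainable from kg_water of water,
    assuming complete, idealized electrolysis."""
    mol_water = kg_water * 1000 / M_H2O      # moles of H2O
    kg_h2 = mol_water * (2 * M_H) / 1000     # one H2 per H2O
    kg_o2 = (mol_water / 2) * (2 * M_O) / 1000  # one O2 per two H2O
    return kg_h2, kg_o2

h2, o2 = propellant_from_water(1.0)
# Roughly 0.112 kg hydrogen and 0.888 kg oxygen per kg of water.
```

Conservation of mass is a quick sanity check: the two outputs sum back to the input mass, which is why every kilogram of ice mined in place is a kilogram that never has to be launched from Earth.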

Twenty-four small green pots with white labels sticking out of their tops, all are placed in black crates
Each pot had two seeds planted in it. The pots in the foreground have MiracleGro soil, the next set has MGS-1, and the last set has MGS-1C (the global Mars soil simulant with clay added).

Due to colonization and the forced removal of native peoples, as well as the assimilation tactics used, most tribes no longer grow their own food and many heritage species have been lost. The switch to grocery store varieties has seriously impacted native communities, especially those in “food deserts” where reservation residents do not have a true supermarket nearby. The increased sugars in today’s varieties, along with low food budgets that force people to choose less healthy options, have caused an epidemic of Type 2 diabetes, with rates as high as 60% among the adults of some tribes. Traditional or “heritage” indigenous foods are higher in nutritional value, and many were cultivated to be resistant to specific environmental conditions. These resistances were developed over thousands of years of seed selection for desirable traits, and this work can be continued in an off-planet habitat, where a unique and unfamiliar environment will allow certain seeds to thrive and become the newly selected seeds.

According to a talk given at the American Indian Science and Engineering Society Conference in 2020 by Dr. Gioia Massa of NASA’s Kennedy Space Center, the current focus for food growth in a Mars habitat is on crops that can be eaten fresh or, with the future addition of a heating apparatus, staple crops that can be consumed with minimal preparation and cooking. While using the three sisters as the main crops may not be viable for the early missions, since the post-preparation needs of a crop are fundamentally important to optimizing astronaut time, the specific variety of each crop considered, as well as the production methods, can still be scrutinized.

One method that would save significant transportation cost and would put us a step closer to future terraforming would be to use a direct sow method of plant production; in other words, to use the soil available on Mars to grow the plants. The general martian soil is not hospitable to plants; it is sandy, low in nutrients, and in some areas has high levels of salts and perchlorates which are poisonous to the emerging plant life. However, that doesn’t mean that there aren’t areas which may be hospitable.

My main research focus is on the geochemistry of alteration minerals on Mars, specifically on clays. Clays were critical for the development of early life on Earth. Clay particles provide a high surface area and protective layers for microbes as well as a high level of preservation potential. For this reason, they may be the best chance of finding possible traces of former life. Clays may also be the key to the proliferation of life on the planet.

Eight small green pots with white labels sticking out of their tops. Two of the pots have small green sprouts
This photo was taken just as the last seedlings emerged from the clay-amended martian soil (MGS-1C). The two in pot 4 and the one in pot 5 emerged earlier on, but the single seedlings in pots 1 and 2 can just be seen poking out of the soil by this time. All germinated seedlings survived healthily to the end of the experiment.

With the support of my PhD advisors Jack Mustard and Jim Head, I decided to test the viability of growing heritage crops in martian soils, and to determine if the soils with a large clay component would allow for viable plants to grow. The plant variety I chose was Bear Island flint corn, which was traditionally grown on islands with isolated ecosystems by the Chippewa/Ojibwa tribe and was ground into meal and flour. This variety was recently popularized within indigenous communities in the Midwest by the tribal food sovereignty activist Winona LaDuke because it is resistant to drought and high winds and contains nearly 12% protein, more than twice the amount found in other varieties.

I planted the corn in three soil types: MiracleGro Seed Starter Formula (a control for comparison), Exolith Lab’s MGS-1 (a martian soil simulant representative of the general martian soil composition), and MGS-1C (an amended version of MGS-1 that contains 40% smectite clays and is representative of the soil at the Mars Perseverance planned landing site). The corn was kept in a grow chamber at ideal conditions for corn growth (65% humidity, 16 hours of light, and 22ºC), cared for daily by the wonderful folks at the Brown Plant Environmental Center, and never fed fertilizer or other additives. Other studies that have successfully grown plants in martian soils have mainly added nitrogen-based fertilizer, which would be extremely expensive to bring due to its weight.

The seeds planted in the MiracleGro had an 81.25% germination rate (13/16); they germinated only 4 days after planting. The seeds in the MGS-1 soil had a 0% germination rate (0/16); nothing was able to grow at all. Interestingly, the seeds in the MGS-1C had a 31.25% germination rate (5/16) and took between 17 and 21 days to germinate. The published germination time for this variety of corn is 9-14 days under normal conditions, and admittedly our grow-chamber conditions were far better than normal. That published window is significantly longer than the 4 days seen in the MiracleGro soil, but shorter than the 17-21 days seen in the MGS-1C.
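To make the reported percentages easy to check, here is a minimal sketch of the germination-rate arithmetic (the function and variable names are my own, not from the study itself):

```python
# Germination counts reported above: (germinated, planted) per soil type.
results = {
    "MiracleGro": (13, 16),
    "MGS-1": (0, 16),
    "MGS-1C": (5, 16),
}

def germination_rate(germinated: int, planted: int) -> float:
    """Percentage of planted seeds that germinated."""
    return 100 * germinated / planted

for soil, (germinated, planted) in results.items():
    print(f"{soil}: {germination_rate(germinated, planted):.2f}%")
```

Running this reproduces the 81.25%, 0%, and 31.25% figures quoted above.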

Three clear plastic cases in a grow chamber each with eight green pots inside
The potted seeds were placed in a grow chamber in the Brown Plant Environmental Center, which was kept at 65% humidity and 22ºC with 16 hours of light. The trays originally had plastic lids to encourage seedling germination, but after seedlings began to emerge in each tray, the lid was removed so as not to inhibit growth.

In martian-type soil with a clay component, the corn was able to germinate. This suggests that, if a landing site with sufficient clay content is chosen, we can use the soils present on the planet rather than importing resources. The benefit of using certain heritage plants is their viability in difficult environmental conditions. Corn may not be a crop grown by the first missions, but looking past the common plant varieties seen today and considering traditional heritage crops will still allow knowledge of indigenous food practices to be applied. By using a direct sow method, the plants grown in these soils will begin to produce seeds more adapted to the planet, continuing the centuries-old practice of selecting plants for hardiness.

Other native principles, such as using all parts of a resource, similar to the zero-waste movement today, point towards a sustainable cycle in which we could compost the inedible parts of plants to rejuvenate the soils, or perhaps even use pre-composted human waste as fertilizer to increase rates of germination and growth. Native people speak about building for the seventh generation. Mars will eventually be colonized, so we should take steps now to ensure that it is done in a way that we can be proud of seven generations later. I believe that by considering the people who were most affected by the colonization that occurred on this planet, we can learn the lessons we need to effectively and honorably colonize another.

You’re Not Alone.

We’re all on unsure footing here. We weren’t sure what this week and the return to classes — albeit in an entirely different format — would look like, and we weren’t sure what The Ratty would look like in the wake of the changes to the Brown community. Rather than pushing forward, pretending everything was functioning as normal, we wanted to address what this situation feels like to grad students. And because we are primarily a blog, we wrote about it. The rest of this article features our editors discussing how they’re dealing with digital learning, sheltering-in-place, and the world in the wake of the pandemic. I wasn’t sure how I was going to introduce such a peculiar, composite article, so to prepare you I thought I would provide a list of various titles this piece has been known by:

  • Ratty Editors Vent About Being A Grad Student During COVID-19
  • Ratty Editors in Isolation
  • Grad Students in Isolation
  • I Have the Drive to Create but Am Paralyzed by Anxiety, What Should I Do?
  • What If We All Just Vented Our Feelings into a Google Doc?

Professionally, I thought social distancing would be a cinch. I’m a computational chemist – no wet lab, no on-site instrumentation, no live specimens, and thus, no physical location required! Yet the strain to perform my work has… well… soared in intensity, weighing heavier each day, as the mental and emotional burdens grow.

I’m an avid climber and aikido practitioner – two physical, social activities that I thrived on. My drive to research was fueled by these outlets, and I called on them regularly to reset for each new day. Then, I was told to stop. To refrain from my restorative lifelines, in order to prevent the worst. Even though I understood, I felt wounded and afraid as my lifelines suddenly vanished.

I’m afraid to feel loneliness and despair. I’m anxious, uncertain of each step forward. I’m angry — regrettably, at myself — when I struggle to accept these emotional pains as “reasonable” explanations for delays. I yearn to return to our earlier status, to break free of this physical confinement and emotional turbulence. I continue to hope that this situation will evaporate. Yet, I accept that this may be the norm for quite some time.

So, I’ve begun improving how I carry this new burden. I’ve found time to self-reflect. I might be climbing my door frames. My friends and I, near and far, have embraced remote connectivity. For as long as this may last, I aim to be kind to myself, to create new outlets, and to brace for the rest of the ride.

-Len Sprague


Honestly, the week off before spring break came as a relief to me. I’m studying for my comprehensive exams, and I was being handed extra time to focus on my reading lists instead of class preparations. So I holed up in my apartment, surrounded by antiquated computer hardware and piles of what material I was able to grab from the library before it closed.

And I’ve been able to accomplish so little.

Comps are an inherently stressful time, no matter how often your advisors repeat that they shouldn’t be. And I was already scared — afraid that I wouldn’t be a good enough student, that I would be deemed unworthy to continue my education here. But now, in addition to the fear that I won’t pass, that I don’t belong, there’s the fear of the academic world I will enter even if I succeed. Job positions have been put on hold, hiring frozen, and some schools have even closed permanently. The world on the other side of these exams is unimaginable; right now, it’s hard to conceive that I can make it there, and that I’ll recognize the landscape if I do.

An excerpt from an email telling me an internship I applied for was no longer running.

And then there’s the guilt. I’ve watched my friends lose jobs and close their businesses in an effort to flatten the curve with no assurance that they’ll ever reopen. Others post about taking their family members to the hospital, sick with the virus, and being unable to visit them, to be with them as they convalesce (or don’t). I’ve been so fixated on my uncertain future that I’ve lost sight of what others have sacrificed, and while I know I have the right to my anxiety, I still feel guilty about being upset over *so* much less. So I’ve tried to donate what I can, especially to circus studios that I have counted as a second home, but now it’s near the end of the month and the declined payments and overdraft notices are coming in.

A screenshot of my email inbox, circa Sunday morning March 29.

And then I’m angry — at the people online who tell me it’s okay for this semester to be bad, that our energy should be spent not on ensuring “A”s in classes but on supporting our fellow humans. But it’s not okay for me to phone in my comps. And how dare all these talented artists and community establishments make their work available online, when I can’t spend my time accessing it because I have to study? And the nerve of my friends to want to check in on me and reach out over Zoom and Discord, when I’m staring blankly into space trying to muster the energy to do the work that I have to do?

I will take breaks in the middle of reading chapters to sob, and then, drained, try to find where I left off on the page. But it’s never what I remember reading.

-E.L. Meszaros


Uncertainty makes me uncomfortable and always has. I am an obsessive planner; keeping my life scheduled and in order does a lot to keep any anxieties at bay. This time of crisis is the clear opposite of planned and scheduled, which has left me feeling anxious in a way that I can’t quite put my finger on. In perhaps a strange twist, I was able to get a lot of work done in the week off we were given before Spring Break. I dove back into projects with gusto, projects that had long been left on the back-burner of my to-do list. After all, I am in the humanities – if I am able to get my hands on reading material, I can do my job. Then communications from professors started to come in. 

I am very lucky to have some truly compassionate professors this semester. It is no coincidence that their classes were the ones in which I always felt time moved too quickly, where I wanted nothing more than to talk through these ideas for another hour. Emails from them have been kind, clear, and gentle. Reading them eased more anxiety than I could have guessed. However, these professors are contingent faculty, on the job market when most institutions have hiring freezes. I wish their compassion and understanding in this time when their tenured counterparts are not always doing the same could be rewarded with some kind of support. Of course, it won’t be. 

I tell myself that I am angry about how unfair all of this is. Unfair to those students who look to their schools as a safe haven from their difficult backgrounds. Unfair to those contingent faculty doing the most they can for their students while struggling with their own precarity. Unfair to those grad students who have been desperately seeking feedback from advisers and knowing there is no way they will get it now. But I think I’m mostly angry about the loss of the things that kept me sane throughout grad school that I no longer have access to, the things that my professors probably didn’t realize I needed to keep going with my work.

I miss my weekly climbing gym dates where E.L. and I would challenge our bodies and let off steam about the latest week as a grad student. I miss my early morning long runs where I got my head on straight before sitting down in my office. I miss my LGBTQ running group and the wisdom of people who had dealt with the same problems and always had ample advice. I miss my bookshelf. I miss riding my bike to campus. I miss a lot. For now, I try to schedule Zoom meetings with friends to get some or any of these back in any form possible. As classes start back up virtually this week, I guess I am waiting to see how successful these replacements will come to be.

-Sara Mohr


I find myself in the fortunate position where I am able to continue my research unabated in Providence, while my family in Canada and India are also largely unaffected by the ongoing crises. Admittedly, there are minor inconveniences and a few challenges: using a slow VPN connection to transfer files back and forth from storage servers at Brown, finding new ways to exercise from a cramped apartment, and assisting bewildered technophobic professors with the transition to online classes.

However, I cannot complain too much considering the nightmare many of my international student colleagues are grappling with: the sheer frustration from their research coming to a grinding halt, made worse by the feeling of helplessness as the number of cases continues to dramatically increase back home for their family and friends. I can only empathize and offer words of encouragement. Know that we are all in this together, that our community is strong, and “this, too, shall pass”.

-Jay Bhaskar


We don’t have any answers. Everyone wears isolation and pandemic differently. We suggest that starting from a place of kindness and compassion is probably good, but we’re not sure what the next steps are. Brown Counseling and Psychological Services remain open — a good resource if you aren’t sure where to start. And in the meantime:

You Are What You Do Not Eat: The Problematic Relationship between Fashionable Bodies and the Consumption of Food from Nineteenth-century France to Now

Content Warning: The content of this piece engages with the topic of eating disorders. 

As I was scrolling through my Instagram feed one morning, I stumbled across an “inspiration” page. Among snapshots of long-limbed models posing in Parisian couture ateliers and close-up shots of clavicles protruding from power pink, feather-stitched garments appeared images of decadent food—chocolate-covered croissants, overflowing cheese boards, and creamy pasta dishes. The page staged a clear aesthetic cross-fertilization between economic wealth, physical slenderness, and rich, “pretty-looking” food. The trickery and dishonesty of this association lie in thinking of this fattening food as being consumed by the emaciated beauty who appears in the picture beside it. Although the women looked positively starving, the ostentatious display of food hinted at their supposed—probably contrived—bon vivant nature. Perhaps unwittingly, this entire page tapped into stereotypical representations of femininity in French culture, where changing fashion trends, cultural roles, and dietary regimes require that, while she must remain slender, the French woman never holds back.

Nuremberg and Venetian Women, Albrecht Dürer

The gazelle-like figure of the “ideal” model dates back to mid-nineteenth-century France, during which time both dresses and bodies were getting slimmer and longer. Women were becoming more active, leaving their stovetops for more enthralling pursuits. The corset’s tyranny was fading, and women’s bodies were starting to be liberated from centuries of restraint and decades of containment. Paul Poiret’s designs were much more draped than they were structured, thus liberating women’s upper bodies and elongating their silhouettes. Coco Chanel made hemlines go up and waistlines go down, and clothing—rather than supporting and shaping the body—was slowly but surely reclaiming its own space.

Meanwhile, although access to good quality food improved during the nineteenth century, the typical French diet remained meagre. In his book France, Fin de Siècle, Eugen Weber describes the eating habits of the French as “a continuous fast” (Weber, 65). Fashion magazines and beauty manuals of the time encouraged women not to overeat: overeating was described as gastrolary — harmful to gut health — and perceived as greedy, almost immoral. In her Cabinet de Toilette, the Baroness Staffe recommends the following daily diet: a glass of milk for breakfast, an egg and a vegetable for lunch, and a light dinner that must exclude meat, liquors or wines, condiments, and spices. She even encourages eating to be done secretly, safe from the prying eyes of husbands or domestic servants. Yet around the dinner table, it was recommended that women continue to adopt the air and attitude of someone who both enjoys and engages in the arts of the table.

In nineteenth-century France, economic wealth and access to food went hand in hand. The type of performative eating on display at the dinner table was limited to the women of the bourgeoisie, those who could afford a great deal more than what they were encouraged to consume. In the nineteenth century, a slender figure could be obtained through voluntary self-inflicted hardships rather than through a painful remodelling of the body by items of clothing. As dangerous and unsafe as it was, a corset could make a plump body look slimmer. As the corset fell out of vogue, it became harder for women to look thinner than they actually were, since food restriction required time, commitment, and consistency.

Nowadays, fitness and Instagram models have attempted—sometimes with success—to restore the reputation of the corset’s cheap sister: the waist trainer. However, thinness achieved through food control remains a popular method. While the deformation of the body by fashion(able) objects sounds bad enough, a self-inflicted method of starvation seems even worse to me. Food restriction may cause irreversible damage to the organs and the flesh, including thyroid malfunction, severe dehydration, heart failure, and other complications. But in order to reach the highest peak of glamour, I argue that one must never make this sacrifice visible. A woman appearing to indulge in decadent eating is perceived as glamorous as long as she physically looks like she never does. 

We can observe the unfolding of this specific stratagem in modern fashion videos. The world-renowned fashion and lifestyle magazine Vogue recently started publishing short videos of models getting (runway) ready, giving viewers a glimpse into what their daily lives look like. In a video showcasing the Victoria’s Secret model Taylor Hill, simply entitled “Bergdorf! Bodegas! Hot Cheetos!”, we see Hill lying on the floor of a luxurious fitting room at Bergdorf Goodman, one of New York City’s most famous and costly stores. She is wearing a sumptuous baby blue gown covered in silver sequins and taffeta flowers, with a bowl of chips nestled between her breasts. “I can eat a whole bag [of Cheetos] in, like, one go,” she says after having already taken a bite out of a lobster sandwich. Suki Waterhouse, in Vogue’s “Diary of a Model” video, is seen ordering a grilled cheese and fries at a restaurant before going to a Jeremy Scott fashion shoot. In “How Model Birgit Kos Gets Runway Ready”, the twenty-four-year-old Dutch model enthusiastically asks for a plate of crepes.

In none of these videos, however, do we ever see the models take more than one small bite of the junk food in front of them. Indeed, Vogue seems to force-feed the spectator with the distorted idea that stick-figure models eat vast quantities of food every day. The magazine also intends to trick us into thinking that these models’ staged behaviors are absolutely authentic. Could this be an attempt to make the women seem more relatable? Could it also serve the false depiction of the model-like figure as a surreal or unreal creature? A goddess whose body would not be subjected—like us—to the laws of nature? In any case, we are given an idea contrary to the familiar notion that a woman must suffer for beauty.

As a fashion scholar and a freelance model myself, I find it to be the most extraordinary insult to the legitimacy of the fashion industry to make fashion enthusiasts believe these icons are no different than the girl next door, to make it look like the woman who embodies timeless, mysterious, modern beauty standards also has fingers covered in Cheeto dust. This is not to say I wish for Vogue to showcase proudly starving models, nor do I assume that models who claim to eat nothing other than kale and lettuce are lying. I think that fashion should avoid going out of its way to convince us that traditional beauty standards can be achieved through unhealthiness and excess. I believe this process actually takes away from the enunciative role of fashion as an elaborate creative system, both capable of producing beauty and rendering us sensible to it. Instead, it convinces us all that fashion beauty standards are attainable, even and especially when one engages in excess, and reminds us that a true mark of effortless elegance—in good old French tradition—is to seemingly engage in excess without ever truly doing so. 

The Hebrew Bible from Below and Beyond

The Hebrew Bible serves as the foundation of several modern religions, from Judaism to Lutheranism. The study of this ancient text is a complex and multi-layered discipline, embracing methodologies from a variety of fields and drawing influence from as many places as it reaches. Bias in biblical scholarship is widespread, affecting both scholarly training and commonly used sources, meaning that certain viewpoints are often privileged over others. In particular, scholars of the Hebrew Bible often overlook the role of Egyptian historical actors and non-elites of the ancient world. One way to ensure the inclusion of such traditionally marginalized voices is to employ socio-anthropological and historical-critical methods in biblical scholarship.

A green and blue map of the regions of ancient Israel with each location labeled in French.
The regions of ancient Israel (labels in French). Wikimedia Commons.

Scholarship of the Hebrew Bible focuses primarily on analysis of the Bible as a composite text, a collection of originally independent stories combined into one document long after the historical period each tale claims to describe. One theory used to describe the text’s composition is known as the Documentary Hypothesis. This hypothesis posits the existence of four independent, original sources known as the Jahwist (Yahwist), Elohist, Deuteronomist, and Priestly texts, which were later combined within the Pentateuch to form the Hebrew Bible as it is known today. Scholars argue that each of these original source texts contains a specific agenda and a particular perspective. In order to determine the cultural context which informs each individual text, scholars must choose what kinds of comparative evidence to foreground in their research, introducing another layer of bias into the study of the Hebrew Bible.

Many biblical scholars approach their research from the standpoint of either archaeological or textual evidence. The refusal to integrate the two approaches often means that scholars lack a complete picture of a particular text’s history, which might be achieved by using all the available evidence. Due to the standard path laid out for a biblical scholar-in-training, the most common sources for comparative evidence, both textual and archaeological, include Mesopotamia (modern Iraq and eastern Syria), and the Levant (modern Israel, western Syria, Jordan, Lebanon, and southeastern Turkey). This choice of geography, made by generations of scholars, is predictable. Textual comparisons between the Hebrew Bible and ancient Mesopotamian literature, for example, are numerous. Yet the refusal to integrate archaeology and textual criticism into biblical scholarship, as well as the continued focus on comparisons with the Ancient Near East, has meant that the Bible’s connection to other ancient cultures remains under-scrutinized.

The author with a scaraboid he excavated at the Iron Age site of Tell Halif, Israel

While textual comparisons with Mesopotamian materials are useful, it is important to recognize the potential biases of Mesopotamian authors. These writers likely represent elite scribal and political classes, with the requisite wealth and status to be exposed to language learning in an advanced professional position. But what about the non-elites? Do their lifestyles reflect the influence of the conquerors of their land coming from far-off Mesopotamia? To untangle this complexity, we must incorporate comparative materials from other cultures bordering the Levant and Mesopotamia to elucidate the lives and beliefs of the non-elites within ancient Israelite society. If the texts reflect upper-class biases, how can we discern elements of the lifestyles of non-elites, particularly those that are influenced by a foreign entity?

Foreign powers in the ancient world tended to display tactics of political imperialism, economic imperialism, and cultural imperialism. Cultural imperialism can be used as a lens by the historian to examine the impact of a foreign culture upon all levels of society. In modern terms, cultural imperialism is most commonly used to describe the influential media of world powers, such as the United States, infiltrating daily lives and influencing cultures across the globe. For instance, the term was used recently by the president of the Canadian Broadcasting Corporation in regard to Netflix. The term can, however, be used to discuss the ancient world, and provides an important framework for examining how foreign powers outside of Mesopotamia exerted great influence over the Levant during the biblical period.

My work on multiple archaeological excavations of Iron Age Israelite sites (c. 1000-586 BCE), primarily domestic areas far from ancient cities, suggests the value of new perspectives. Early on, I was struck by the absence at these sites of material culture related to Mesopotamia, in comparison with fairly regular finds of Egyptian, or Egyptianized, objects. While Mesopotamia is cast as the enemy in the literature of the Israelite period, the Levant was under Egyptian control during the Late Bronze Age (c. 1500-1200 BCE) and is simply closer to Egypt than to Mesopotamia. Why, then, do we continue to rely almost solely on Mesopotamian materials in comparative work when the archaeological evidence frankly demands a focus on Egypt? The reality is that, by the time the Hebrew Bible was being composed, Egypt had lost much of its influence in the region and was not a political threat in the minds of the biblical authors, except for a brief period in the late seventh century BCE. Remnants of Egypt’s powerful distant past remain in the minds of the authors, represented in stories such as the Joseph novella. Unfortunately, arguments about Egyptian influence on the Hebrew Bible tend to devolve into unproductive debates, resulting in few new perspectives on the impact of cultural contact with Egypt and other neighboring societies on the people of the Levant and on the content of the Hebrew Bible.

The author at the Late Bronze Age Egyptian Governor’s House at Beit She’an (Stela is a replica)

I argue that Israelite cultural identity is more closely related to that of Egypt, especially at the lower echelons of society. In fact, Egyptian-style scarabs, scaraboids, and Bes figurines are central to local Israelite domestic religion and culture. This is in stark contrast to the portrait of Israelite culture painted within the Hebrew Bible, which displays a gradual shift to centralized worship of YHWH in Jerusalem, particularly under the reigns of Hezekiah and Josiah during the eighth and seventh centuries BCE. This shift is, in my opinion, solely textual, based on the specific religious and political agendas of the scribes who authored these biblical texts. As members of the Jerusalem elite, the scribal school saw as its enemies the Neo-Assyrians and, later, the Neo-Babylonians of Mesopotamia, who threatened to overtake their position in Israelite culture. At the same time, however, Israelite domestic life amongst the populace continued to function as it had for several centuries. This continuation represented not the Mesopotamian culture that threatened the elites but rather a local identity that reflected many aspects of neighboring Egyptian culture, lingering after years of Egyptian rule.

The archaeological record displays Egyptian cultural imperialism reaching down even to the lower rungs of society. The prevalence of Egyptian, or Egyptianized, material culture, like the examples mentioned above, points to an influence from the Israelites’ Egyptian neighbors that is not echoed by the political powers of Mesopotamia. While biblical scholars will likely continue to use Mesopotamian material as a key point of comparison, we must be aware that influences from other powers such as the Egyptians and the Hittites may not always be reflected in the textual record.

I identify as a historian and scholar of the Hebrew Bible and the Ancient Near East, though many in my field would avoid such a title. Employing both literary and historical methodologies provides a framework for incorporating additional evidence into the study of this ancient text. I study the complex creation of the Hebrew Bible in conjunction with a variety of textual and archaeological evidence in order to reconstruct the historical, social, and political realities of the period. This extra-biblical evidence is extensive, including texts written in Sumerian, Akkadian, Hittite, multiple stages of the Egyptian language, Ugaritic, Aramaic, and other languages, ranging in date from about 3000 BCE to the first millennium CE. By incorporating this additional material, I seek to understand groups that are often overlooked in traditional analyses but have important perspectives to offer on the historical context of the Hebrew Bible’s creation. Rather than continuing to search for comparative evidence in the literature of Mesopotamian elites, we must recognize the global character of the Ancient Near East as well as its deep local social networks of actors. Drawing on historical frameworks like cultural imperialism and focusing on traditionally overlooked cultures encourages scholars to think about the Hebrew Bible from below and beyond.

Fake News and the Agency of Women in Viking Age Iceland

[A note on pronunciation of Old Norse: ‘ð’ and ‘Þ’ are both pronounced ‘th’; ’æ’ is pronounced like the ‘e’ in ‘bed’; ‘j’ is pronounced like ‘y’.]

We live in an era of ‘fake news.’ Fraudulent Facebook accounts and alternative facts have shone a new spotlight on the importance of equal and uncompromised access to the truth. Are biased information sources purely a modern symptom of today’s politics and the unregulated wilderness of the internet? The women of Viking Age Iceland might beg to differ. At times, disinformation and false reporting were utilized to devastating effect in the sagas recorded by medieval Icelandic authors. Even within this temporally distant and culturally distinct context, we can examine how fake news was wielded against medieval women in explicit efforts to undermine their agency.

In 1000 CE, on a small, glaciated island almost a thousand miles from mainland Europe, news meant oral testimony carried on horseback from homestead to homestead, or ferried across storm-tossed oceans on the tongues of travelers. In a world of slow, oral news, far removed from the infrastructure of modern media, we can revisit basic questions about the dissemination of information we moderns might take for granted. What was newsworthy? Where did news come from? Who was responsible for its circulation? How was information verified, and who was able to access it? All of these questions are difficult for scholars of the Viking Age to answer; written sources of the period are few, and those that do exist don’t privilege oral news. In other words, no letters, newspapers, or notice-boards tell us how information was presented in 11th-century Iceland. 

With limited contemporaneous textual records of Viking Age Iceland, we have to turn to alternative sources to piece together answers to these questions. What we know about the lives of Viking migrants and Icelandic settlers around the turn of the first millennium comes primarily from archaeological sources, genealogical records, and the later Icelandic sagas. The sagas were written in Old Norse during the 12th and 13th centuries CE, two or three hundred years after the settlement of Iceland, by Christian clerics, or other church-taught men, in large vellum manuscripts. The sagas relay entertaining legends of Icelandic settlement and details of fiery family feuds, but they are a problematic source for a historian of the Viking Age, given the centuries-wide gap between their creation and the time being described. Whether or not the sagas can be treated as settlement-era sources, they can tell us what 12th-century Icelanders believed or hoped life was like for their ancestors, and they can reveal the attitudes and morals of their later (elite, male) authors.

As is the case for many medieval written sources in Western Europe and beyond, the sagas and other Icelandic texts of the period privilege the actions and perspectives of men. Icelandic laws, first written down in the 13th century but likely codified in an oral tradition much earlier, suggest that women had little de jure authority, though they did have the right to divorce their husbands (for, among other reasons, wearing low-cut shirts). 

Despite the fact that women had fewer rights and limited access to wealth or education, the Icelandic sagas are notable among medieval sources for their rich depictions of outspoken and intimidating women characters wielding de facto power within the family and sometimes in society at large. The 13th-century Laxdælasaga, or Saga of the Laxdalers, is so sensitive to the experiences of women that some scholars even suggest it may have been written by a woman.

Whether or not it comes from a woman’s hand, Laxdælasaga revolves around a host of complex women characters. Many episodes detail the frustrations of navigating social, legal, and physical structures created by and for men. One of these obstacles is the process of obtaining information, a relatively tedious project for everyone in the medieval world, but particularly so for women living on isolated farms, where news traveled only as fast as the fastest Icelandic pony could tölt.

Generally confined to the home and discouraged from travelling on their own, women probably relied on male visitors to relay news from the outside world. This dependence on middlemen to access information creates notable and familiar problems for which we now have modern buzzwords, such as ‘gaslighting’, ‘alternative facts’, and, of course, ‘fake news.’

Guðrún Ósvífrsdóttir. Illustration by Andreas Bloch, “Vore fædres liv” (PD-US)

Guðrún Ósvífursdóttir is one of the protagonists of Laxdælasaga, a beautiful and intelligent farmer’s daughter who nonetheless has difficulty finding and keeping a good man. Her first marriage to Þorvaldr is brief, unhappy, and ends in divorce. Her second husband, Þord, drowns at sea. Finally, Guðrún meets the dashing saga hero Kjartan Óláfsson. They flirt in secret, defying her father’s wishes, and fall passionately in love. 

Before they marry, Kjartan tells Guðrún he wants to seek his fortune in Norway. Angry, Guðrún demands that Kjartan take her with him on the voyage.

“Guðrún said: ‘I want to go with you this summer. Then I could forgive you for arranging this trip so suddenly. After all, it isn’t Iceland I’m in love with.’ ‘It can’t happen,’ said Kjartan. ‘Your brothers are young and your father is old, and there won’t be anyone to take care of them if you leave home. So, wait for me for three winters.’” [Translated from Old Norse by the author]

Kjartan’s decision to sail to Norway alone, despite Guðrún’s request, is a catalyst for the tragic conflict that occurs later in the saga. Like all good romantic dramas, Laxdælasaga involves a love triangle. Guðrún loves Kjartan, Kjartan loves Guðrún…and so does Kjartan’s closest childhood friend, Bolli. Because of their friendship, Bolli accompanies Kjartan on the journey to Norway, but he doesn’t forget about the woman left behind.

Though Kjartan doesn’t explicitly point to Guðrún’s gender as the reason for refusing to bring her along, his dismissal of her desire to travel highlights a clear division between gendered spaces in medieval Iceland. Women tend to the home, while men are free to farm, to fish, to study, to vote, and to travel abroad. Kjartan reminds Guðrún of her responsibility towards her younger brothers and elderly father, who would be left unprotected if she were to pursue her desire to travel.

Emphasis on a woman’s domestic role as grounds for impeding her movement appears in many modern studies of the migration of women. For example, women who emigrated from the country of Georgia in the 1990s were vilified for leaving their families behind. Referring to the “feminization of migration” in Georgia, social scientists Hofmann and Buckley observe, “most respondents described it as unnatural, challenging the male role as breadwinner and female responsibilities for childcare and eldercare.” The clear delineation of gendered occupations is deployed as a barrier to women’s movement outside the home as much today as it was a thousand years ago. Confinement to the home means prohibition from male spheres of political, social, and economic exchange—more often than not, the places where news happens. 

The knowledge and experience gained from travel abroad are traditionally available only to men. In Laxdælasaga, the first thing Kjartan and his followers do when they arrive in Norway is ask other men for tíðindi, or tidings. They catch up on the gossip, such as it was in early medieval northern Norway, undoubtedly including plenty of rumors about who won what battles, the best English beaches for landing a raiding party, and who the king’s sister currently favors. Disinformation and fake news, as we’ll see later on, can be a powerful tool of political and psychological maneuvering in a world without third-party fact-checking services. As the saga continues, Kjartan cozies up to the Norwegian king and starts to make a name for himself as a competent warrior and all-around Icelandic heartthrob. 

Bolli returns early to Iceland, leaving Kjartan at the Norwegian court. He heads straight for Guðrún, armed with all the instruments of modern psychological warfare. Bolli deliberately turns Guðrún against her former lover, describing how Kjartan is enjoying his newfound fame in Norway. He insinuates that Kjartan’s heroic qualities have caught the eye of the king’s marriageable sister, and implies that Kjartan has forgotten Guðrún and their old attachment. 

Guðrún at first refuses to believe him, but Bolli enlists the help of her father and brothers, who together spin stories about Kjartan’s reprehensible behavior and undermine Guðrún’s convictions, until she begins to believe that Kjartan is not the man she thought he was: a classic example of what would today be termed gaslighting. Without any way of communicating with Kjartan, and unable to travel to Norway to ascertain the truth for herself, Guðrún is coerced into marrying Bolli instead.

When Kjartan returns to Iceland a few months later, he is distraught to discover that Guðrún is married to his best friend. News of his arrival and the truth about his stay in Norway reach Guðrún, revealing Bolli’s deceit. She confronts her husband about his campaign of misinformation, but he demurs: “Bolli declared that he had said what he knew to be the truth.” You can almost imagine the deafening shrug. Here, news is weaponized against a woman by a man armed with the facts and determined to twist ‘the truth’ to his own ends.

Kjartan, dead on the lap of Bolli. Illustration by Andreas Bloch, “Vore fædres liv” (PD-US)

Resentment rages between the three characters, even as Kjartan moves on and marries another woman. After a series of escalating offenses occurs over several years, Bolli, egged on by his brothers, finally takes up a sword against his friend. Kjartan, refusing to fight, casts away his shield and allows himself to be fatally stabbed. Bolli takes the dying Kjartan in his arms and pours out his remorse at being driven to such a terrible act. Soon after, Kjartan’s sons avenge their father by killing Bolli. 

The tragic conclusion hints at an unexpected but relatively lucid Viking Age moral. A great deal of grief originates from Bolli’s decision to modify facts, and from Guðrún’s isolation from the masculine realms of movement and information exchange. If Guðrún had accompanied Kjartan on his journey as she requested, if she had been supplied with all available information or been able to verify the news she received some other way, the saga’s tragic conclusion might have been avoided. Based on the arc of this episode, it would seem the author of Laxdælasaga regards the obstruction of a woman’s movement and access to information as inappropriate and potentially perilous. Manipulation of facts and deliberate misinformation lead to two deaths and an unhappy ending for everyone involved.

Other brief but telling episodes in medieval Icelandic literature hint at a tacit approval of the movement of women. We see Viking Age heroines throughout the western diaspora (Iceland and the British Isles) commissioning their own ships, setting out on long journeys, and striving to form their own networks of information exchange through kin and marital ties. It may be that these women are simply literary figures playing out imagined fantasies that would never have been possible for real women of the time; or, perhaps these examples reveal some awareness of the importance of the agency of women. 

In this modern era of fake news and alternative facts, we might do well to remember some of the simpler lessons of Icelandic history. Honesty, as a medieval Icelander would probably tell you, is the best policy. Obscuring the truth leads only to blood feud and bitter regret.

Karia, Then and Now

The Cows of Alabanda

For historians, it is easy to view the past as a hermetically sealed world, like a petri dish that we can subject to tests and analyses without fear of contamination. This failure to admit that we are implicated in the very thing we are trying to study can allow ideas and practices to fester, unnoticed until some jolt forces us to confront them. Too often, it also perpetuates problematic ideologies and ignores the fact that many of these historic sites have a modern presence — with modern people living modern lives — too. It wasn’t until I had the chance to travel to the places I had been studying that I received such a jolt, one that led me to question my role within my field, and my field’s role in the world.

Well, how did I get here?

On a hot afternoon in June of 2017, I found myself wandering over the remains of Alabanda, and around the small cluster of houses of the modern village of Doğanyurt that perch atop them. Alabanda was an ancient city in the southwest corner of what is now Turkey; through the centuries it was inhabited by native peoples, Greeks, and Romans before being abandoned.

Returning from my research in the field, I found some buildings, a tomb, a theatre, the course of a wall running up over the hillside; the blare of the call to prayer. For someone who studies the ancient Aegean world, it was an idyllic end to the day.

Image of the ruins of a Roman Theater in Alabanda. Photo taken from above with rocks and dried shrubs in the foreground. Semicircular stone structure with trees in the background in the middle of the image.
Alabanda. View looking down at modern village over the ruins of a Roman Theatre (Photo by author)

When I headed back to my rental car, I found a local farmer watering a small herd of cows nearby. Summoning up all the Turkish I had learned over the past year, I greeted him with a simple “Merhaba!” (Hello).

He seemed nonplussed that I should know even that much Turkish, but we managed to strike up a very simple conversation. I asked him what he thought of the ruins, and the fact that he lived on top of the ruins of a 2000 year old city. 

“Not much,” was his philosophical reply. He explained that he and his father had been employed to help excavate the city whenever the archaeologists came by, but beyond that, he did not profess any particular attachment to the heaps of stone and brick.

“And you,” he rejoined, “what brings you here?”

I struggled to formulate an answer. To be sure, in Turkish I only had the vocabulary of an 8-year-old, but as I stood there face to face with this man and his cows, it wasn’t my vocabulary that made it hard to respond. What indeed was I, an American student from suburban Philadelphia, doing wandering around this out-of-the-way village in southwest Turkey?

Reflecting on this experience has opened a whole host of other questions about my position in my field and in the world, as well as the responsibilities someone who studies people long dead has to the living. 

Getting into Classics

Here at Brown, I am in the Ancient History Program, which is co-sponsored by the Classics and History departments. I identify more with the Classics department because that is the world I have lived in, well, half my life I suppose. I had the fortune of being able to take Latin classes starting in 7th grade, and even Ancient Greek in 9th. I stayed in Classics because I had good teachers and liked learning the languages. It wasn’t till the end of college that I really became interested in studying history, rather than literature. 

More and more, I became interested in studying the native inhabitants of what is now modern Turkey. Now, these peoples have long been known to Classicists, but only indirectly: there is no surviving literary tradition in their own languages, so much of what we think we know about them comes from Greek and Roman sources. Unfortunately, the one-sided and often prejudiced views of the Greeks and Romans seeped into later views of the natives (as Edward Said documents, Orientalism has a long pedigree).

Map of Ancient Anatolia, depicted with land in shades of ivory with ocean in brown.
Map of Ancient Anatolia © Finley, M. I. (1977). Atlas of classical archaeology. New York: McGraw-Hill.

The upshot is that we don’t actually know much about these peoples. With my research in the Ancient History Program here at Brown, however, I am trying to rectify that situation by looking at other kinds of evidence, such as material culture and the small number of inscriptions on durable materials that have survived. But in order to do this, I have had to step beyond the bounds of what most consider the traditional turf of Classics.

Classics is usually defined as the study of the Ancient Greeks and Romans, their history, culture, literature, etc. For centuries, it has been a cornerstone of elite, liberal education in the West. As such, it has remained a generally conservative field, slow to adopt innovations in theory and practice. Moreover, it has a lot of colonial, racist, and sexist baggage: the Spanish conquistadors saw themselves as new Romans, bringing civilization to the New World; the Nazis idealized the ancient Spartans as models for the Übermensch; and the alt-right is using Stoic philosophy to “prove” that women are irrational and emotionally unstable.

Scary stuff, and not something that makes one proud of one’s field. But as one who loves my field nevertheless, and wants to help it change for the better, I see setting a new research agenda as one small way to tackle this baggage. At least, this is what I thought as I headed to the coast of Turkey in 2017.

Colonization of the past 

On the one hand, I felt I had to attempt to slough off my field’s colonialist baggage by focusing on other ancient Mediterranean cultures besides the Greeks or Romans. Post-colonial theory has made its way into Classics, and with it the realization that — surprise, surprise! — the Greeks and Romans may not be the best sources of information about all the peoples they traded with, fought, and conquered. So I hope that in my research, I am helping to de-center the Greeks and Romans.

But on the other hand, while trying to escape the colonialist perspective of our sources, am I just perpetuating the colonialist practices of western academia? One of my favorite quotes is L. P. Hartley’s “The past is a foreign country; they do things differently there.” But just like any country, the past can be, and has been, colonized. In this case, I am talking about the process by which American and European scholars claimed Greco-Roman history as their own, thus denying it to the modern inhabitants of places like Greece and Turkey. So, when I went to Turkey just to look at its ancient monuments, and asked people if they cared much about them, was I not just perpetuating this trend?

Fringes of Classics 

Even apart from these questions, my choice to study the ancient inhabitants of Turkey has consequences for my possible career in Classics. Although the field is trying to evolve, I still feel very much like this research lies on the fringe. Even to my own colleagues I often have to explain a lot (like, why DID I take a class in Hittite, a language even older and deader than Latin?). And yeah, it makes me nervous about the job market; what school needs someone to teach their students about the Lydians, Karians, or Lykians — names no one has heard of? It may be hip to say you’re studying “ancient subalterns,” but can you make a career out of that in a field that in many ways is still focused on a canonical set of texts?

Is this a pigeon? meme, with a man in glasses gesturing toward a stone structure with Greek letters and the caption reading "Is this classics?"
Well, is it? (Meme modified by author)

Now I admit, researching on the fringe has pushed me to make connections with other departments; I’ve taken classes in Archaeology and Assyriology, and the connections I’ve made with people in those departments have meant the world to me and my research. In fact, one of the biggest reasons I came to Brown was the promise of low disciplinary walls — and in this I have not been disappointed. I do believe there is much to be gained in questioning disciplinary boundaries.

So, to end this the way I end all my papers: I don’t have any clear-cut answers. 

As with any career, being an academic requires a balancing act between the practical and aspirational. What I can say is that it is easy to get caught up in the day-to-day work of being a grad student, focusing only on what you are doing and not thinking about why you are doing it and who it might affect. 

Karia, Then and Now
Karia, then and now. Left, a carved stone from the temple to Zeus at Alabanda; right, a shop in the resort town of Bodrum. The axe was a symbol associated with Zeus in ancient Karia. (Photos by author).

At Alabanda, and everywhere else I visited in Turkey, I could not escape this question of who; for everywhere around me, people were making their living on its ruins, through tourism connected to it, and under biases baked into it. The question of who owns that past and who gets to shape it is far from academic. 

Welcome to The Ratty

Public scholarship is a critical component of research work here in 2020. Reaching out to wider public audiences allows scholars to generate interest in their subject matter, cultivate relationships with other scholars, institutions, and funding sources, and combat dangerous ideas that pervade often insular fields. Yet despite the value of public outreach (and the high quality of our education here at Brown), we are not provided with any training in how to engage in such scholarship.

That’s where The Ratty comes in.

The Ratty is a blog for graduate students at Brown University that is designed both as a platform for showcasing public scholarship and as a means by which students can get the training they need to become public scholars.

Graduate students will write and publish an article that presents their research in a way that the public can grapple with but that doesn’t speak down to readers or obfuscate complexity. They will work with our team of trained editors to create an article geared toward a public audience, building from standard academic research models. Through this process, grad students will learn more about the differences between academic and public writing, will gain experience in pitching and editing, and in the end will be able to point to a digital publication of their writing and a digital author page of their contributions.

With The Ratty, we’re trying to fill a couple of gaps we’ve noticed from our own experience: the gaps in public scholarship training, but also the gap in just experiencing the editing process. Much of the time we submit papers to our professors and we get their feedback, but that’s it — you can choose to never open that feedback document and never submit a further edited version of your work. But it’s in that second back-and-forth that you really start to make big changes and real productivity happens in your writing. 

This interaction, however, can be pretty emotionally trying, especially if it’s something you’ve never experienced before. Sure, we’ve all probably been gutted by critique on papers from teachers, but we haven’t necessarily had to push back against their critique and we haven’t had to respond to any of the more emotional changes that have been asked of us. That part of the editing process can be vicious, and this is true in grad school as well as in academic publishing. It’s another one of our goals with The Ratty to help students get used to the editing process in a way that is a little kinder. The world is often unkind, and we don’t have to be that way. 

But we’re not limited to articles! The Ratty also wants to work with students who are interested in showcasing their research in other media — videos, comics, mixed media, etc. While our editors are specifically trained to work with public writing, we also understand that writing isn’t always the best way for students to showcase research, and it’s not always the way that the public is most interested in engaging with you. If you have ideas for other formats and styles of presentation, The Ratty is interested in hearing about it.

We’re also interested in using The Ratty to help train graduate students in Public Scholarship in other ways. Every day we interact with interesting people on Twitter, Instagram, etc. who are really committed to public scholarship, and they all do it in very different ways. So, we are planning to host a yearly speaker series, “The Ratty Presents,” where we bring in people who engage in public scholarship from lots of different fields. This will hopefully allow us to continue to evolve our understanding of public scholarship and push the ways in which The Ratty can help graduate students at Brown engage with the public.

There is real life inspiration for our logo: Daryl (2018-2019), who has carved a place in our hearts and in our branding.

So that’s what The Ratty is and what we hope to do with it, but why is it called The Ratty? Our inspiration is, perhaps obviously, our community’s nickname for the Sharpe Refectory. “The Ratty” is what students call this dining hall — it’s not University-sanctioned, but the nickname is still known and used outside the student body. The Ratty, too, is a student-led initiative. A lot of people are frustrated that the job market has changed and the rules for PhDs have changed, but our training hasn’t been updated to reflect that at all. The Ratty is about taking back at least one aspect of our education. We’re going to take the reins and train ourselves in how to do those things that we know are important.

Right now, the entire team of The Ratty — from managing editors down — is made up of women, people of color, or both, which speaks volumes about who feels the need to push the boundaries of academia. These are the people who feel they’re kept out of the traditional model that we’re still trying to use to train our PhD students. This further solidifies our commitment to bolstering our training through The Ratty in order to help especially those people that academia often leaves behind.

So join us! You can pitch us to work with our editors in bringing your research to a public audience, and we’re always interested in training more editors. The Ratty is reclaiming student space in scholarship, making it public and loud. Sometimes it takes a small rat to make a big change.