Blog Feed

A new personalized cancer treatment – will ‘GliaTrap’ be able to lure and treat cancer cells to prevent tumor recurrence?

What if you were diagnosed with cancer? What if your beloved family member, partner, or friend were? The news may fill you with fear and despair. And what if the diagnosis were glioblastoma (GBM)? GBM is the most aggressive type of brain cancer, with an average overall survival time of 15 to 21 months after first diagnosis. Moreover, the 5-year survival rate for GBM patients is less than 7%, one of the lowest among all cancers. While effective treatments exist for many other cancers, current treatment options for GBM are ineffective and inevitably result in relapse and death. Such devastating facts would fill you and your loved ones with despair. But what if you learned that a new treatment could stop the progression of GBM and significantly increase your chances of survival? What if you heard that you could manage your cancer with this new treatment? Dr. Nikos Tapinos and I at Brown University are developing such a treatment, one that would bring hope to you and your loved ones.

What is the drug discovery process?

How does a new treatment get discovered? The drug discovery process is divided into three steps: 

1. Drug Discovery and Development. 

2. Preclinical Research.

3. Clinical Research. 

A cartoon image of three grey mountains. One mountain is labeled with "Challenge 1: Drug discovery and treatment." One mountain is labeled with "Challenge 2: Pre-clinical research." One mountain is labeled with "Challenge 3: Clinical research." This last mountain is topped with a green flag that says "Goal."
GliaTrap development plan.

During Step 1, researchers elucidate the mechanisms of disease progression, which leads to the discovery and development of a treatment that inhibits the disease process. Once a potential treatment candidate is selected, it moves to Step 2, where researchers test its safety and side effects, how the drug affects the body, how the body responds to the drug, and so forth. Preclinical research also requires larger-scale testing monitored by a third party (e.g., the FDA in the US) to verify functionality. Once the candidate is determined to be safe enough, it advances to Step 3, Clinical Research, where its efficacy in human patients is tested. The entire process takes about 10 to 15 years for a single treatment candidate to become publicly available.

In order to develop a new treatment, however, we first need to identify why current treatments are ineffective for GBM; in other words, which parts of the tumor current treatments can and cannot reach. Current treatment for GBM consists mainly of surgical removal, chemotherapy, radiation therapy, or a combination of these. Each has its own disadvantage that makes it ineffective against GBM. Surgery cannot remove every GBM cell and leaves residual cells behind in the brain. Chemotherapy is normally administered to treat these remaining cells, but it is challenging to target the dispersed GBM cells specifically without killing the surrounding healthy cells. Radiation therapy is powerful, but it faces a similar difficulty: aiming radiation precisely at every cancer cell without damaging the surrounding healthy tissue is impossible. All of the current approaches thus face major clinical challenges, which is why GBM remains effectively untreatable today. In our research, Dr. Nikos Tapinos and I are at Step 1 (Discovery and Development): we have been investigating the mechanism of GBM metastasis, the development of malignant growth beyond the initial cancer site, and testing the efficacy of our treatment candidate in test tubes and animals. Our challenge is to develop an approach that overcomes the shortcomings of surgery.

Innovative cancer treatment “GliaTrap”: GliaTrap lures cancer cells and attacks them.

To address this challenge, Dr. Nikos Tapinos and I are developing a new technique for GBM therapy: GliaTrap. GliaTrap functions much like a Japanese cockroach trap, the “Gokiburi hoihoi”: a container housing food to attract cockroaches and poison to kill them once they enter. With GliaTrap, the cancer cells are the cockroaches. GliaTrap uses a biocompatible material called a hydrogel, analogous to the trap’s container, to house the molecules that lure and kill cancer cells. The “food” for cancer cells is called a chemoattractant, and GliaTrap uses this molecule to lure the GBM cells left behind after surgery toward the empty space created by the operation (the resection cavity), just as a cockroach trap uses food to attract cockroaches. Once these cancer cells are drawn in, GliaTrap uses an anti-tumor agent to kill them near the cavity without causing significant damage to healthy cells, just as a cockroach trap uses poison to kill the cockroaches it has attracted. In this way, GliaTrap will be able to eliminate the cancer cells remaining after surgery and prevent tumor recurrence.

GliaTrap can utilize not only anti-tumor agents but also the body’s natural immune cells. The anti-tumor agents in the hydrogel can be replaced with immune cell activators, molecules that boost the ability of immune cells to attack cancer cells. GliaTrap’s chemoattractant can potentially attract immune cells as well as cancer cells, and these immune cells, boosted by the activators, attack the cancer cells; just as a cockroach trap’s food might attract spiders as well as cockroaches, and those spiders, fortified by an energy drink, would attack the cockroaches.

But what if GliaTrap’s chemoattractants don’t attract immune cells, just as cockroach food might not attract spiders? In that case, the anti-tumor agents can be replaced with artificial immune cells: immune cells are pre-placed in the hydrogel, where they lie in wait for cancer cells and attack any that invade it, just as spiders could be placed in the container to wait for cockroaches and attack any that enter.

On the left is a cartoon cockroach outside of a cartoon trap that has poison disguised as food inside it. The next panel shows the cockroach entering the trap enticed by the food. The last panel shows the cockroach dead inside the trap due to poison.
How the cockroach trap mimics the GliaTrap system.

As these examples show, GliaTrap can serve as a new treatment delivery method working in concert with surgical removal and chemotherapy. GliaTrap combines targeted capture and drug release to increase therapeutic efficacy and safety, selectively killing the cancer cells that surgery and chemotherapy might miss. As a result, GliaTrap will improve the survival rate of GBM patients.

Looking forward, GliaTrap could potentially be applied to other invasive cancers that currently lack effective treatments and that follow a similar treatment protocol, such as pancreatic cancer, which is likewise treated with surgical removal followed by chemotherapy, radiotherapy, or a combination of these. Pancreatic tumors can exhibit genetic and physiological profiles different from GBM’s; indeed, each individual cell has its own profile. Because of this difference, the response to chemoattractants varies as well: some cancer cells respond to chemoattractant A while others do not, just as people have different tastes and no single food appeals to everyone. GliaTrap could be implanted into the empty space created by the removal of a pancreatic tumor and perform much as it does in GBM treatment, given an optimal chemoattractant for pancreatic cancers. To ensure broad capture of cancer cells, the genetic profiles of the cancer cells can be investigated and the best-matched chemoattractants used, much as restaurants perform market research to figure out what customers prefer and decide what foods to offer based on the results. Chemoattractants and therapies can be selected based on the genetic profiles of individual patients, and GliaTrap can be tailor-made for each of them. With continued effort, GliaTrap will become a platform for combination therapies for various types of cancer and help make personalized medicine a reality.

The current challenge for GliaTrap research.


GliaTrap has great potential, but it comes with many challenges and needs further study to prove its effectiveness and safety before it can be widely used by cancer patients. Despite all these drug discovery challenges, we commit ourselves every day to research and strive to perform experiments that lead to the development and application of GliaTrap. We aim to develop GliaTrap to boost the efficacy and safety of existing cancer therapies. We hope that GliaTrap will increase the survival rate of cancer patients while maintaining their quality of life. GliaTrap will change the paradigm of treatment selection in oncology and catapult the field of cancer medicine forward. Ultimately, we hope to create a society where patients and their loved ones no longer view a cancer diagnosis as a death sentence, but rather as a challenge that can be overcome with the right treatment. We believe that GliaTrap will be that treatment, helping remove the fear of a cancer diagnosis and bringing hope to patients and their loved ones.

References:

1. Louis, D. N. et al. The 2016 World Health Organization Classification of Tumors of the Central Nervous System: a summary. Acta Neuropathologica 131, 803–820 (2016).

2. Toms, S. A., Kim, C. Y., Nicholas, G. & Ram, Z. Increased compliance with tumor treating fields therapy is prognostic for improved survival in the treatment of glioblastoma: a subgroup analysis of the EF-14 phase III trial. J Neurooncol 141, 467–473 (2019).

3. Wang, T., Suita, Y., Miriyala, S., Dean, J., Tapinos, N. & Shen, J. Advances in Lipid-Based Nanoparticles for Cancer Chemoimmunotherapy. Pharmaceutics 13, 520 (2021). https://doi.org/10.3390/pharmaceutics13040520

4. Tapinos, N., Sarkar, A. & Martinez-Moreno, M. Systems and Methods for Attracting and Trapping Brain Cancer Cells. (2017).

Indigenizing Colonization: How Indigenous Knowledge Can Help Us Do Better When Looking to Colonize Other Planets

When you think of colonizing a planet, your mind may turn to a science fiction-like existence: new and cutting-edge technologies you could never have dreamed of; humans living in enclosed habitats; and harsh, unforgiving environments that must be tamed in order to survive. What you may not think of is that humans have done it before—here, on Earth.

I am a member of the Shinnecock Nation and a planetary scientist. Originally, I saw my native identity as extraneous to my scientific career. How could my indigenous knowledge ever help me when researching a completely different world? But the more I delved into my work, the more I saw there were problems that could be solved using “Two Eyed Seeing.”

Two Eyed Seeing is a term originally coined by Mi’kmaw elder Albert Marshall and introduced to me by Dr. Roger Dube, a Mohawk Native from the Rochester Institute of Technology. The term refers to using western and indigenous scientific approaches simultaneously. The indigenous approach to science places an emphasis on observation and working in a way that is synergistic with what the natural world already offers, while western science follows the typical scientific method of posing a question and conducting an experiment. Importantly, because of the focus on synergy with the natural world, indigenous science generally has a lower impact on environmental surroundings when used responsibly.

Multi-colored red and yellow corn on a black tabletop
The multi-colored kernels of the Bear Island flint corn planted during the experiment.

The inaugural manned mission to Mars is expected in 2024 for SpaceX and in the 2030s for NASA, and with humans reaching the Red Planet we may be headed towards colonization. The first step to approaching the colonization of Mars through a more indigenous lens is to remember that we must view the planet as a living thing and as a provider. In many North American indigenous cultures, we refer to the land that indigenous people inhabit as “Turtle Island”, a term that harkens back to a creation story1 which describes how we live on the back of a giant turtle moving through the oceans. In that sense, while you have been permitted to live on this being, you must also respect it, for it too is alive. Mars may not be as prolific a provider as Earth, but there are resources there that can be worked with in tandem rather than simply exploited. We don’t have to be a resource-hungry culture going from planet to planet, using up everything that we can and moving on.

Every kilogram of resources imported from Earth costs large amounts of money, fuel, and time to reach Mars. If we brought fertilizer and soil there, both highly dense items, they would be worth more than their weight in gold. Respect for the resources on Mars thus becomes important not only from a moral standpoint, but also from economic and logistical standpoints. On Mars, water-ice is abundant beneath the surface, especially in polar regions. It can be melted for drinking, daily necessities, and other purposes. It can also be transformed into rocket fuel by splitting the water molecules into their constituent hydrogen and oxygen atoms (ordinary electrolysis; the balanced reaction is sketched below). Building materials found on Mars, such as regolith and the easily accessible iron from meteorites on the surface, could be used to build habitats with 3D printing. Through an indigenous approach we can learn to utilize these resources while sustaining them for long-term growth and future exploration.

Traditionally, many indigenous communities in the Americas grew their own food, amended soil naturally and organically, and were able to create self-sufficient, near-vegetarian communities. Corn, beans, and squash, known to many tribes as “the three sisters”, were grown together in a beneficial, symbiotic arrangement quite different from the monocrop, non-rotational farming that dominates the food growth industry today. The beans added nitrogen back to the soil to be used by the corn and squash, the corn provided a pole for the beans to climb, and the squash served as a living mulch that fought off pests with its prickly texture. Together, these three foods rounded out the complete nutritional needs of a human; however, they were not the varieties you are used to buying in a grocery store.
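The electrolysis reaction mentioned above is standard chemistry: splitting water yields two molecules of hydrogen (the fuel) for every molecule of oxygen (the oxidizer):

2 H₂O → 2 H₂ + O₂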

Twenty-four small green pots with white labels sticking out of their tops, all are placed in black crates
Each pot had two seeds planted in it. The pots in the foreground have MiracleGro soil, the next set has MGS-1, and the last set has MGS-1C (the global martian soil simulant with clay added).

Due to colonization and the forced removal of native peoples, as well as the assimilation tactics used, most tribes no longer grow their own food, and many heritage species have been lost. The switch to grocery store varieties has seriously impacted native communities, especially those in “food deserts”, where reservation residents do not have a true supermarket nearby. The increased sugars in today’s varieties, along with low food budgets that force people to choose less healthy options, have caused an epidemic of Type 2 diabetes, with rates as high as 60% among the adults of some tribes. Traditional or “heritage” indigenous foods are higher in nutritional value, and many were cultivated to be resistant to specific environmental conditions. These resistances were developed over thousands of years of seed selection for desirable traits, and this work can be utilized and continued in an off-planet habitat, where a unique and unfamiliar environment will allow certain seeds to thrive and become the newly selected seeds.

According to a talk given at the American Indian Science and Engineering Society Conference in 2020 by Dr. Gioia Massa of NASA’s Kennedy Space Center, the current focus for food growth in a Mars habitat is on crops that can be eaten fresh or, with the future addition of a heating apparatus, staple crops that can be consumed with minimal preparation and cooking. While using the three sisters as the main crops may not be viable for the early missions, as the post-preparation needs of a crop are fundamentally important to optimizing astronaut time, the variety of each of the crops considered, as well as the production methods, can be scrutinized as well.

One method that would save significant transportation cost and would put us a step closer to future terraforming would be to use a direct sow method of plant production; in other words, to use the soil available on Mars to grow the plants. The general martian soil is not hospitable to plants; it is sandy, low in nutrients, and in some areas has high levels of salts and perchlorates which are poisonous to the emerging plant life. However, that doesn’t mean that there aren’t areas which may be hospitable.

My main research focus is on the geochemistry of alteration minerals on Mars, specifically on clays. Clays were critical for the development of early life on Earth. Clay particles provide a high surface area and protective layers for microbes as well as a high level of preservation potential. For this reason, they may be the best chance of finding possible traces of former life. Clays may also be the key to the proliferation of life on the planet.

Eight small green pots with white labels sticking out of their tops. Two of the pots have small green sprouts
This photo was taken just as the last seedlings emerged from the clay-amended martian soil (MGS-1C). The two in pot 4 and the one in pot 5 emerged earlier on, but the single seedlings in pots 1 and 2 can just be seen poking out of the soil by this time. All germinated seedlings survived healthily to the end of the experiment.

With the support of my PhD advisors Jack Mustard and Jim Head, I decided to test the viability of growing heritage crops in martian soils, and to determine whether soils with a large clay component would allow viable plants to grow. The plant variety I chose was Bear Island flint corn, which was traditionally grown on islands with isolated ecosystems by the Chippewa/Ojibwa tribe and was ground into meal and flour. This variety was recently popularized within indigenous communities in the Midwest by the tribal food sovereignty activist Winona LaDuke because it is resistant to drought and high winds and contains nearly 12% protein, more than twice as much as other varieties.

I planted the corn in three soil types: MiracleGro Seed Starter Formula (a control for comparison), Exolith Lab’s MGS-1 (a martian soil simulant representative of the general martian soil composition), and MGS-1C (an amended version of MGS-1 that contains 40% smectite clays and is representative of the soil at the planned landing site of the Mars Perseverance rover). The corn was kept in a grow chamber at ideal conditions for corn growth (65% humidity, 16 hours of light, and 22ºC), cared for daily by the wonderful folks at the Brown Plant Environmental Center, and never fed fertilizer or other additives. Other studies that have successfully grown plants in martian soils have mainly added nitrogen-based fertilizer, which would be extremely expensive to bring due to its weight.

The seeds planted in the MiracleGro had an 81.25% germination rate (13/16) and germinated only 4 days after planting. The seeds in the MGS-1 soil had a 0% germination rate (0/16); nothing grew at all. Interestingly, the seeds in the MGS-1C had a 31.25% germination rate (5/16) and took between 17 and 21 days to germinate. The published germination time for this variety of corn is 9-14 days under normal conditions, and admittedly our conditions were far better than normal. That published window is significantly longer than the time seen with the MiracleGro soil, but shorter than that seen with the MGS-1C seeds.
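For readers who want to check the arithmetic, here is a minimal sketch in Python that reproduces the germination rates from the counts reported above (the counts and time windows come straight from the text; the structure and names are my own):

```python
# Germination results reported above: (soil, seeds germinated, seeds planted, time to germination)
results = [
    ("MiracleGro", 13, 16, "4 days"),
    ("MGS-1",       0, 16, "did not germinate"),
    ("MGS-1C",      5, 16, "17-21 days"),
]

for soil, germinated, planted, window in results:
    rate = 100 * germinated / planted  # percent of planted seeds that germinated
    print(f"{soil}: {germinated}/{planted} germinated ({rate:.2f}%), time to germination: {window}")
```

Running this prints 81.25% for MiracleGro, 0.00% for MGS-1, and 31.25% for MGS-1C, matching the figures above.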

Three clear plastic cases in a grow chamber each with eight green pots inside
The potted seeds were placed in a grow chamber in the Brown Plant Environmental Center which was kept at 65% humidity and 22ºC with 16 hours of light. The trays originally had plastic lids to encourage seedling germination, but after the seedlings began to emerge in each tray, the lid was removed so as not to inhibit growth.

In martian-type soil with a clay component, the corn was able to germinate. This means that, if a landing site with sufficient clay content is chosen, we can use the soils already present on the planet rather than bringing in outside resources. The benefit of using certain heritage plants is their viability in difficult environmental conditions. Corn may not be a crop grown by the first missions, but looking past the common plant varieties seen today and considering traditional heritage crops will still allow knowledge of indigenous food practices to be utilized. By using a direct sow method, the plants grown in these soils will begin to produce seeds more adapted to the planet, continuing the centuries-old practice of selecting plants for hardiness.

Other native principles, such as using all parts of a resource, similar to the zero waste movement today, point towards a sustainable cycle where we could use the inedible parts of plants to compost and rejuvenate the soils, or perhaps even use pre-composted human waste to add fertilizer and increase rates of germination and growth. Native people speak about building for the seventh generation. Mars will eventually be colonized, so we should take steps now to ensure that it will be done in a way that we can be proud of seven generations later. I believe that by considering the people who were most affected by the colonization that occurred on this planet, we can learn the lessons we need to effectively and honorably colonize another.

You’re Not Alone.

We’re all on unsure footing here. We weren’t sure what this week and the return to classes — albeit in an entirely different format — would look like, and we weren’t sure what The Ratty would look like in the wake of the changes to the Brown community. Rather than pushing forward, pretending everything was functioning as normal, we wanted to address what this situation feels like to grad students. And because we are primarily a blog, we wrote about it. The rest of this article features our editors discussing how they’re dealing with digital learning, sheltering-in-place, and the world in the wake of the pandemic. I wasn’t sure how I was going to introduce such a peculiar, composite article, so to prepare you I thought I would provide a list of various titles this piece has been known by:

  • Ratty Editors Vent About Being A Grad Student During COVID-19
  • Ratty Editors in Isolation
  • Grad Students in Isolation
  • I Have the Drive to Create but Am Paralyzed by Anxiety, What Should I Do?
  • What If We All Just Vented Our Feelings into a Google Doc?

Professionally, I thought social distancing would be a cinch. I’m a computational chemist – no wet lab, no on-site instrumentation, no live specimens, and thus, no physical location required! Yet the strain to perform my work has… well… soared in intensity, weighing heavier each day, as the mental and emotional burdens grow.

I’m an avid climber and aikido practitioner – two physical, social activities that I thrived on. My drive to research was fueled by these outlets, and I called on them regularly to reset for each new day. Then, I was told to stop. To refrain from my restorative lifelines, in order to prevent the worst. Even though I understood, I felt wounded and afraid as my lifelines suddenly vanished.

I’m afraid to feel loneliness and despair. I’m anxious, uncertain of each step forward. I’m angry — regrettably, at myself — when I struggle to accept these emotional pains as “reasonable” explanations for delays. I yearn to return to our earlier status, to break free of this physical confinement and emotional turbulence. I continue to hope that this situation will evaporate. Yet, I accept that this may be the norm for quite some time.

So, I’ve begun improving how I carry this new burden. I’ve found time to self-reflect. I might be climbing my door frames. My friends and I, near and far, have embraced remote connectivity. For as long as this may last, I aim to be kind to myself, to create new outlets, and to brace for the rest of the ride.

-Len Sprague


Honestly, the week off before spring break came as a relief to me. I’m studying for my comprehensive exams, and I was being handed extra time to focus on my reading lists instead of class preparations. So I holed up in my apartment, surrounded by antiquated computer hardware and piles of what material I was able to grab from the library before it closed.

And I’ve been able to accomplish so little.

Comps are an inherently stressful time, no matter how often your advisors repeat the fact that they shouldn’t be. And I was already scared — afraid that I wouldn’t be a good enough student, that I would be deemed unworthy to continue my education here. But now, in addition to the fear that I won’t pass, that I don’t belong, there’s the fear of the Academia I will enter into even if I succeed. Job positions have been put on hold, hiring frozen, and some schools have even closed permanently. The world on the other side of these exams is unimaginable; right now, it’s hard to conceive that I can make it there, and that I’ll recognize the landscape if I do.

An excerpt from an email telling me an internship I applied for was no longer running.

And then there’s the guilt. I’ve watched my friends lose jobs and close their businesses in an effort to flatten the curve with no assurance that they’ll ever reopen. Others post about taking their family members to the hospital, sick with the virus, and being unable to visit them, to be with them as they convalesce (or don’t). I’ve been so fixated on my uncertain future that I’ve lost sight of what others have sacrificed, and while I know I have the right to my anxiety, I still feel guilty about being upset over *so* much less. So I’ve tried to donate what I can, especially to circus studios that I have counted as a second home, but now it’s near the end of the month and the declined payments and overdraft notices are coming in.

A screenshot of my email inbox, circa Sunday morning March 29.

And then I’m angry — at the people online who tell me it’s okay for this semester to be bad, that our energy should be spent not on ensuring “A”s in classes but on supporting our fellow humans. But it’s not okay for me to phone in my comps. And how dare all these talented artists and community establishments make their work available online, when I can’t spend my time accessing it because I have to study? And the nerve of my friends to want to check in on me and reach out over Zoom and Discord, when I’m staring blankly into space trying to muster the energy to do the work that I have to do?

I will take breaks in the middle of reading chapters to sob, and then, drained, try to find where I left off on the page. But it’s never what I remember reading.

-E.L. Meszaros


Uncertainty makes me uncomfortable and always has. I am an obsessive planner; keeping my life scheduled and in order does a lot to keep any anxieties at bay. This time of crisis is the clear opposite of planned and scheduled, which has left me feeling anxious in a way that I can’t quite put my finger on. In perhaps a strange twist, I was able to get a lot of work done in the week off we were given before Spring Break. I dove back into projects with gusto, projects that had long been left on the back-burner of my to-do list. After all, I am in the humanities – if I am able to get my hands on reading material, I can do my job. Then communications from professors started to come in. 

I am very lucky to have some truly compassionate professors this semester. It is no coincidence that their classes were the ones in which I always felt time moved too quickly, where I wanted nothing more than to talk through these ideas for another hour. Emails from them have been kind, clear, and gentle. Reading them eased more anxiety than I could have guessed. However, these professors are contingent faculty, on the job market at a time when most institutions have hiring freezes. I wish their compassion and understanding, at a time when their tenured counterparts are not always showing the same, could be rewarded with some kind of support. Of course, it won’t be.

I tell myself that I am angry about how unfair all of this is. Unfair to those students who look to their schools as a safe haven from their difficult backgrounds. Unfair to those contingent faculty doing the most they can for their students while struggling with their own precarity. Unfair to those grad students who have been desperately seeking feedback from advisers, knowing there is no way they will get it now. But I think I’m mostly angry about losing the things that kept me sane throughout grad school, the things that my professors probably didn’t realize I needed to keep going with my work.

I miss my weekly climbing gym dates where E.L. and I would challenge our bodies and let off steam about the latest week as a grad student. I miss my early morning long runs where I got my head on straight before sitting down in my office. I miss my LGBTQ running group and the wisdom of people who had dealt with the same problems and always had ample advice. I miss my bookshelf. I miss riding my bike to campus. I miss a lot. For now, I try to schedule Zoom meetings with friends to get some or any of these back in any form possible. As classes start back up virtually this week, I guess I am waiting to see how successful these replacements will come to be.

-Sara Mohr


I find myself in the fortunate position where I am able to continue my research unabated in Providence, while my family in Canada and India are also largely unaffected by the ongoing crises. Admittedly, there are minor inconveniences and a few challenges: using a slow VPN connection to transfer files back and forth from storage servers at Brown, finding new ways to exercise from a cramped apartment, and assisting bewildered technophobic professors with the transition to online classes.

However, I cannot complain too much considering the nightmare many of my international student colleagues are grappling with: the sheer frustration from their research coming to a grinding halt, made worse by the feeling of helplessness as the number of cases continues to dramatically increase back home for their family and friends. I can only empathize and offer words of encouragement. Know that we are all in this together, that our community is strong, and “this, too, shall pass”.

-Jay Bhaskar


We don’t have any answers. Everyone wears isolation and pandemic differently. We suggest that starting from a place of kindness and compassion is probably good, but we’re not sure what the next steps are. Brown Counseling and Psychological Services remain open — a good resource if you aren’t sure where to start. And in the meantime: be kind to yourselves.

You Are What You Do Not Eat: The Problematic Relationship between Fashionable Bodies and the Consumption of Food from Nineteenth-century France to Now

Content Warning: The content of this piece engages with the topic of eating disorders. 

As I was scrolling through my Instagram feed one morning, I stumbled across an “inspiration” page. Among snapshots of long-limbed models posing in Parisian couture ateliers and close-up shots of clavicles protruding from power pink, feather-stitched garments, appeared images of decadent food—chocolate-covered croissants, overflowing cheese boards, and creamy pasta dishes. The page staged a clear aesthetic cross-fertilization between economic wealth, physical slenderness, and rich, “pretty-looking” food. The trickery and the dishonesty of this association lie in thinking of this fattening food as being consumed by the emaciated beauty who appears in the picture beside it. Although the women looked positively starving, the ostentatious display of food hinted at their supposed—probably contrived—bon vivant nature. Perhaps unwittingly, this entire page tapped into stereotypical representations of femininity in French culture, where changing fashion trends, cultural roles, and dietary regimes require that, while she must remain slender, the French woman never holds back.

Nuremberg and Venetian Women, Albrecht Dürer

The gazelle-like creature that is the “ideal” model goes back to mid-nineteenth-century France, when both dresses and bodies were getting slimmer and longer. Women were becoming more active, leaving their stovetops for more enthralling pursuits. The corset’s tyranny was fading, and women’s bodies were starting to be liberated from centuries of restraint and decades of containment. By the early twentieth century, Paul Poiret’s designs were far more draped than structured, liberating women’s upper bodies and elongating their silhouettes. Coco Chanel made hemlines go up and waistlines go down, and clothing—rather than supporting and shaping the body—was slowly but surely reclaiming its own space.

Meanwhile, although access to good quality food improved during the nineteenth century, the typical French diet remained meagre. In his book France, Fin de Siècle, Eugen Weber describes the eating habits of the French as “a continuous fast” (Weber, 65). Fashion magazines and beauty manuals of the time encouraged women not to overeat: overeating was decried as gastrolatry — a gluttonous worship of the stomach — and perceived as greedy, almost immoral. In her Cabinet de Toilette, the Baroness Staffe recommends the following daily diet: a glass of milk for breakfast, an egg and a vegetable for lunch, and a light dinner that must exclude meat, liquors or wines, condiments, and spices. She even encourages eating to be done secretly, safe from the prying eyes of husbands or domestic servants. Yet around the dinner table, it was recommended that women continue to adopt the air and attitude of someone who both enjoys and engages in the arts of the table.

In nineteenth-century France, economic wealth and access to food went hand-in-hand. The type of performative eating on display at the dinner table was limited to the women of the bourgeoisie, those who could afford a great deal more than what they were encouraged to consume. In the nineteenth century, a slender figure could be obtained through voluntary, self-inflicted hardship rather than through a painful remodelling of the body by items of clothing. As dangerous and unsafe as it was, a corset could make a plump body look slimmer. As the corset fell out of vogue, it became harder for women to look thinner than they actually were, since food restriction required time, commitment, and consistency.

Nowadays, fitness and Instagram models have attempted—sometimes with success—to restore the reputation of the corset’s cheap sister: the waist trainer. However, thinness achieved through food control remains a popular method. While the deformation of the body by fashion(able) objects sounds bad enough, a self-inflicted method of starvation seems even worse to me. Food restriction may cause irreversible damage to the organs and the flesh, including thyroid malfunction, severe dehydration, heart failure, and other complications. But in order to reach the highest peak of glamour, I argue that one must never make this sacrifice visible. A woman appearing to indulge in decadent eating is perceived as glamorous as long as she physically looks like she never does. 

We can observe the unfolding of this specific stratagem in modern fashion videos. The world-renowned fashion and lifestyle magazine Vogue recently started publishing short videos of models getting (runway) ready, giving viewers a glimpse into what their daily lives look like. In a video showcasing the Victoria’s Secret model Taylor Hill, simply entitled “Bergdorf! Bodegas! Hot Cheetos!”, we see Hill lying on the floor of a luxurious fitting room at Bergdorf Goodman, one of New York City’s most famous and costly stores. She is wearing a sumptuous baby blue gown covered in silver sequins and taffeta flowers, with a bowl of chips nestled between her breasts. “I can eat a whole bag [of Cheetos] in, like, one go,” she says after having already taken a bite out of a lobster sandwich. Suki Waterhouse, in Vogue’s “Diary of a Model” video, is seen ordering a grilled cheese and fries at a restaurant before going to a Jeremy Scott fashion shoot. In “How Model Birgit Kos Gets Runway Ready”, the twenty-four-year-old Dutch model enthusiastically asks for a plate of crepes.

In none of these videos, however, do we ever see the models take more than one small bite of the junk food in front of them. Indeed, Vogue seems to force-feed the spectator with the distorted idea that stick-figure models eat vast quantities of food every day. The magazine also intends to trick us into thinking that these models’ staged behaviors are absolutely authentic. Could this be an attempt to make the women seem more relatable? Could it also serve the false depiction of the model-like figure as a surreal or unreal creature? A goddess whose body would not be subjected—like us—to the laws of nature? In any case, we are given an idea contrary to the familiar notion that a woman must suffer for beauty.

As a fashion scholar and a freelance model myself, I find it to be the most extraordinary insult to the legitimacy of the fashion industry to make fashion enthusiasts believe these icons are no different from the girl next door, to make it look like the woman who embodies timeless, mysterious, modern beauty standards also has fingers covered in Cheeto dust. This is not to say I wish for Vogue to showcase proudly starving models, nor do I assume that models who claim to eat nothing other than kale and lettuce are lying. I think that fashion should avoid going out of its way to convince us that traditional beauty standards can be achieved through unhealthiness and excess. I believe this process actually takes away from the enunciative role of fashion as an elaborate creative system, capable both of producing beauty and of rendering us sensitive to it. Instead, it convinces us all that fashion beauty standards are attainable, even and especially when one engages in excess, and reminds us that a true mark of effortless elegance—in good old French tradition—is to seemingly engage in excess without ever truly doing so.

The Hebrew Bible from Below and Beyond

The Hebrew Bible serves as the foundation of several modern religions, from Judaism to Lutheranism. The study of this ancient text is a complex and multi-layered discipline, embracing methodologies from a variety of fields and drawing influence from as many places as it reaches. Bias in biblical scholarship is widespread, affecting both scholarly training and commonly used sources, meaning that certain viewpoints are often privileged over others. In particular, scholars of the Hebrew Bible often overlook the role of Egyptian historical actors and non-elites of the ancient world. One way to ensure the inclusion of such traditionally marginalized voices is to employ socio-anthropological and historical-critical methods in biblical scholarship.

A green and blue map of the regions of ancient Israel with each location labeled in French.
The regions of ancient Israel (labels in French). Wikimedia Commons.

Scholarship of the Hebrew Bible focuses primarily on analysis of the Bible as a composite text, a collection of originally independent stories combined into one document long after the historical period each tale claims to describe. One theory used to describe the text’s composition is known as the Documentary Hypothesis. This hypothesis posits the existence of four independent, original sources known as the Jahwist (Yahwist), Elohist, Deuteronomist, and Priestly texts, which were later combined to form the Pentateuch, the first five books of the Hebrew Bible as it is known today. Scholars argue that each of these original source texts contains a specific agenda and a particular perspective. In order to determine the cultural context which informs each individual text, scholars must choose what kinds of comparative evidence to foreground in their research, introducing another layer of bias into the study of the Hebrew Bible.

Many biblical scholars approach their research from the standpoint of either archaeological or textual evidence. The refusal to integrate the two approaches often means that scholars lack a complete picture of a particular text’s history, which might be achieved by using all the available evidence. Due to the standard path laid out for a biblical scholar-in-training, the most common sources for comparative evidence, both textual and archaeological, include Mesopotamia (modern Iraq and eastern Syria), and the Levant (modern Israel, western Syria, Jordan, Lebanon, and southeastern Turkey). This choice of geography, made by generations of scholars, is predictable. Textual comparisons between the Hebrew Bible and ancient Mesopotamian literature, for example, are numerous. Yet the refusal to integrate archaeology and textual criticism into biblical scholarship, as well as the continued focus on comparisons with the Ancient Near East, has meant that the Bible’s connection to other ancient cultures remains under-scrutinized.

The author with a scaraboid he excavated at the Iron Age site of Tell Halif, Israel

While textual comparisons with Mesopotamian materials are useful, it is important to recognize the potential biases of Mesopotamian authors. These writers likely represent elite scribal and political classes, with the requisite wealth and status to be exposed to language learning in an advanced professional position. But what about the non-elites? Do their lifestyles reflect the influence of the conquerors of their land coming from far-off Mesopotamia? To untangle this complexity, we must incorporate comparative materials from other cultures bordering the Levant and Mesopotamia to elucidate the lives and beliefs of the non-elites within ancient Israelite society. If the texts reflect upper-class biases, how can we discern elements of the lifestyles of non-elites, particularly those that are influenced by a foreign entity?

Foreign powers in the ancient world tended to display tactics of political imperialism, economic imperialism, and cultural imperialism. Cultural imperialism can be used as a lens by the historian to examine the impact of a foreign culture upon all levels of society. In modern terms, cultural imperialism is most commonly used to describe the influential media of world powers, such as the United States, infiltrating daily lives and influencing cultures across the globe. For instance, the term was used recently by the president of the Canadian Broadcasting Corporation in regard to Netflix. The term can, however, be used to discuss the ancient world, and provides an important framework for examining how foreign powers outside of Mesopotamia exerted great influence over the Levant during the biblical period.

My work on multiple archaeological excavations of Iron Age Israelite sites (c. 1000-586 BCE), primarily domestic areas far from ancient cities, suggests the value of new perspectives. Early on, I was struck by the absence of material culture related to Mesopotamia at these sites, compared with fairly regular finds of Egyptian, or Egyptianized, objects. While Mesopotamia is cast as the enemy in the literature of the Israelite period (c. 1000-586 BCE), the Levant was under Egyptian control during the Late Bronze Age (c. 1500-1200 BCE) and is simply closer to Egypt than to Mesopotamia. Why, then, do we continue to rely almost solely on Mesopotamian materials in comparative work when the archaeological evidence frankly demands a focus on Egypt? The reality is that, by the time the Hebrew Bible was being composed, Egypt had lost much of its influence in the region and was not a political threat in the minds of the biblical authors, except for a brief period in the late seventh century BCE. Remnants of Egypt’s powerful distant past remained in the minds of the authors, represented in stories such as the Joseph novella. Unfortunately, arguments about Egyptian influence on the Hebrew Bible tend to devolve into unproductive debates, yielding few new perspectives on the impact of cultural contact with Egypt and other neighboring societies on the people of the Levant and on the content of the Hebrew Bible.

The author at the Late Bronze Age Egyptian Governor’s House at Beit She’an (Stela is a replica)

I argue that Israelite cultural identity is more closely related to that of Egypt, especially at the lower echelons of society. In fact, Egyptian-style scarabs, scaraboids, and Bes figurines are central to local Israelite domestic religion and culture. This is in stark contrast to the portrait of Israelite culture painted within the Hebrew Bible, which displays a gradual shift to centralized worship of YHWH in Jerusalem, particularly under the reigns of Hezekiah and Josiah during the eighth and seventh centuries BCE. This shift is, in my opinion, solely textual, based on the specific religious and political agendas of the scribes who authored these biblical texts. As members of the Jerusalem elite, the scribal school saw as its enemies the Neo-Assyrians and, later, the Neo-Babylonians of Mesopotamia, who threatened to overtake their position in Israelite culture. At the same time, however, Israelite domestic life amongst the populace continued to function as it had for several centuries. This continuation represented not the Mesopotamian culture that threatened the elites but rather a local identity that reflected many aspects of neighboring Egyptian culture, lingering after years of Egyptian rule.

The archaeological record displays Egyptian cultural imperialism reaching down even to the lower rungs of society. The prevalence of Egyptian, or Egyptianized, material culture, like the examples mentioned above, points to an influence from the Israelites’ Egyptian neighbors that is not echoed by the political powers of Mesopotamia. While biblical scholars will likely continue to use Mesopotamian material as a key point of comparison, we must be aware that influences from other powers such as the Egyptians and the Hittites may not always be reflected in the textual record.

I identify as a historian and scholar of the Hebrew Bible and the Ancient Near East, though many in my field would avoid such a title. Employing both literary and historical methodologies provides a framework for incorporating additional evidence into the study of this ancient text. I study the complex creation of the Hebrew Bible in conjunction with a variety of textual and archaeological evidence in order to reconstruct the historical, social, and political realities of the period. This extra-biblical evidence is extensive, including texts written in Sumerian, Akkadian, Hittite, multiple stages of the Egyptian language, Ugaritic, Aramaic, and other languages that range in time period from about 3,000 BCE to the 1st millennium CE. By incorporating this additional material, I seek to understand groups that are often overlooked in traditional analyses but have important perspectives to offer on the historical context of the Hebrew Bible’s creation. Rather than continuing to search for comparative evidence in the literature of Mesopotamian elites, we must recognize the global character of the Ancient Near East as well as its deep local social networks of actors. Drawing on historical methods like cultural imperialism and focusing on traditionally overlooked cultures encourages scholars to think about the Hebrew Bible from below and beyond.

Fake News and the Agency of Women in Viking Age Iceland

[A note on pronunciation of Old Norse: ‘ð’ and ‘Þ’ are both pronounced ‘th’; ’æ’ is pronounced like the ‘e’ in ‘bed’; ‘j’ is pronounced like ‘y’.]

We live in an era of ‘fake news.’ Fraudulent Facebook accounts and alternative facts have shined a new spotlight on the importance of equal and uncompromised access to the truth. Are biased information sources purely a modern symptom of today’s politics and the unregulated wilderness of the internet? The women of Viking Age Iceland might beg to differ. At times, disinformation and false reporting were utilized to devastating effect in the sagas recorded by medieval Icelandic authors. Even within this temporally distant and culturally distinct context, we can examine how fake news was wielded against medieval women in explicit efforts to undermine their agency.

In 1000 CE, on a small, glaciated island almost a thousand miles from mainland Europe, news meant oral testimony carried on horseback from homestead to homestead, or ferried across storm-tossed oceans on the tongues of travelers. In a world of slow, oral news, far removed from the infrastructure of modern media, we can revisit basic questions about the dissemination of information we moderns might take for granted. What was newsworthy? Where did news come from? Who was responsible for its circulation? How was information verified, and who was able to access it? All of these questions are difficult for scholars of the Viking Age to answer; written sources of the period are few, and those that do exist don’t privilege oral news. In other words, no letters, newspapers, or notice-boards tell us how information was presented in 11th-century Iceland. 

With limited contemporaneous textual records of Viking Age Iceland, we have to turn to alternative sources to piece together answers to these questions. What we know about the lives of Viking migrants and Icelandic settlers around the turn of the first millennium comes primarily from archaeological sources, genealogical records, and the later Icelandic sagas. The sagas were written in Old Norse during the 12th and 13th centuries CE, two or three hundred years after the settlement of Iceland, by Christian clerics, or other church-taught men, in large vellum manuscripts. The sagas relay entertaining legends of Icelandic settlement and details of fiery family feuds, but they are a problematic source for a historian of the Viking Age, given the centuries-wide gap between their creation and the time being described. Whether or not the sagas can be treated as settlement-era sources, they can tell us what 12th-century Icelanders believed or hoped life was like for their ancestors, and they can reveal the attitudes and morals of their later (elite, male) authors.

As is the case for many medieval written sources in Western Europe and beyond, the sagas and other Icelandic texts of the period privilege the actions and perspectives of men. Icelandic laws, first written down in the 13th century but likely codified in an oral tradition much earlier, suggest that women had little de jure authority, though they did have the right to divorce their husbands (for, among other reasons, wearing low-cut shirts). 

Despite the fact that women had fewer rights and limited access to wealth or education, the Icelandic sagas are notable among other medieval sources for their rich depictions of outspoken and intimidating woman characters wielding de facto power within the family and sometimes in society at large. The 13th-century Laxdælasaga, or Saga of the Laxdalers, is so sensitive to the experiences of women that some scholars even suggest it may have been written by a woman. 

Whether or not it comes from a woman’s hand, Laxdælasaga revolves around a host of complex women characters. Many episodes detail the frustrations of navigating social, legal, and physical structures created by and for men. One of these obstacles is the process of obtaining information, a relatively tedious project for everyone in the medieval world, but particularly so for women living on isolated farms, where news traveled only as fast as the fastest Icelandic pony could tölt.

Generally confined to the home and discouraged from travelling on their own, women probably relied on male visitors to relay news from the outside world. When middlemen control women’s access to information, the result is a set of notable and familiar problems for which we now have modern buzzwords: ‘gaslighting’, ‘alternative facts’, and, of course, ‘fake news.’

Guðrún Ósvífrsdóttir. Illustration by Andreas Bloch, “Vore fædres liv” (PD-US)

Guðrún Ósvífursdóttir is one of the protagonists of Laxdælasaga, a beautiful and intelligent farmer’s daughter who nonetheless has difficulty finding and keeping a good man. Her first marriage to Þorvaldr is brief, unhappy, and ends in divorce. Her second husband, Þórðr, drowns at sea. Finally, Guðrún meets the dashing saga hero Kjartan Óláfsson. They flirt in secret, defying her father’s wishes, and fall passionately in love.

Before they marry, Kjartan tells Guðrún he wants to seek his fortune in Norway. Angry, Guðrún demands that Kjartan take her with him on the voyage.

“Guðrún said: ‘I want to go with you this summer. Then I could forgive you for arranging this trip so suddenly. After all, it isn’t Iceland I’m in love with.’ ‘It can’t happen,’ said Kjartan. ‘Your brothers are young and your father is old, and there won’t be anyone to take care of them if you leave home. So, wait for me for three winters.’” [Translated from Old Norse by the author]

Kjartan’s decision to sail to Norway alone, despite Guðrún’s request, is a catalyst for the tragic conflict that occurs later in the saga. Like all good romantic dramas, Laxdælasaga involves a love triangle. Guðrún loves Kjartan, Kjartan loves Guðrún…and so does Kjartan’s closest childhood friend, Bolli. Because of their friendship, Bolli accompanies Kjartan on the journey to Norway, but he doesn’t forget about the woman left behind.

Though Kjartan doesn’t explicitly point to Guðrún’s gender as the reason for refusing to bring her along, his dismissal of her desire to travel highlights a clear division between gendered spaces in medieval Iceland. Women tend to the home while men are free to farm, to fish, to study, to vote, and to travel abroad. Kjartan reminds Guðrún of her responsibility towards her younger brothers and elderly father, who would be left unprotected if she were to pursue her desire to travel.

Emphasis on a woman’s domestic role as grounds for impeding her movement appears in many modern studies of the migration of women. For example, women who emigrated from the country of Georgia in the 1990s were vilified for leaving their families behind. Referring to the “feminization of migration” in Georgia, social scientists Hofmann and Buckley observe, “most respondents described it as unnatural, challenging the male role as breadwinner and female responsibilities for childcare and eldercare.” The clear delineation of gendered occupations is deployed as a barrier to women’s movement outside the home as much today as it was a thousand years ago. Confinement to the home means prohibition from male spheres of political, social, and economic exchange—more often than not, the places where news happens. 

The knowledge and experience gained from travel abroad are traditionally available only to men. In Laxdælasaga, the first thing Kjartan and his followers do when they arrive in Norway is ask other men for tíðindi, or tidings. They catch up on the gossip, such as it was in early medieval northern Norway, undoubtedly including plenty of rumors about who won what battles, the best English beaches for landing a raiding party, and who the king’s sister currently favors. Disinformation and fake news, as we’ll see later on, can be a powerful tool of political and psychological maneuvering in a world without third-party fact-checking services. As the saga continues, Kjartan cozies up to the Norwegian king and starts to make a name for himself as a competent warrior and all-around Icelandic heartthrob. 

Bolli returns early to Iceland, leaving Kjartan at the Norwegian court. He heads straight for Guðrún, armed with all the instruments of modern psychological warfare. Bolli deliberately turns Guðrún against her former lover, describing how Kjartan is enjoying his newfound fame in Norway. He insinuates that Kjartan’s heroic qualities have caught the eye of the king’s marriageable sister, and implies that Kjartan has forgotten Guðrún and their old attachment. 

Guðrún at first refuses to believe him, but Bolli enlists the help of her father and brothers, who together spin stories about Kjartan’s reprehensible behavior and undermine Guðrún’s convictions, until she begins to believe that Kjartan is not the man she thought he was: a classic example of what would today be termed gaslighting. Without any way of communicating with Kjartan, and unable to travel to Norway to ascertain the truth for herself, Guðrún is coerced into marrying Bolli instead.

When Kjartan returns to Iceland a few months later, he is distraught to discover that Guðrún is married to his best friend. News of his arrival and the truth about his stay in Norway reach Guðrún, revealing Bolli’s deceit. She confronts her husband about his campaign of misinformation, but he demurs: “Bolli declared that he had said what he knew to be the truth.” You can almost imagine the deafening shrug. Here, news is weaponized against a woman by a man armed with the facts and determined to twist ‘the truth’ to his own ends. 

Kjartan, dead in the lap of Bolli. Illustration by Andreas Bloch, from “Vore fædres liv” (PD-US)

Resentment rages between the three characters, even as Kjartan moves on and marries another woman. After a series of escalating offenses over several years, Bolli, egged on by his brothers-in-law, finally takes up a sword against his friend. Kjartan, refusing to fight, casts away his shield and allows himself to be fatally stabbed. Bolli takes the dying Kjartan in his arms and pours out his remorse at being driven to such a terrible act. Soon after, Kjartan’s brothers avenge him by killing Bolli. 

The tragic conclusion hints at an unexpected but relatively lucid Viking Age moral. A great deal of grief originates from Bolli’s decision to distort the facts, and from Guðrún’s isolation from the masculine realms of movement and information exchange. If Guðrún had accompanied Kjartan on his journey as she requested, or if she had been supplied with all available information or been able to verify the news she received some other way, the saga’s tragic conclusion might have been avoided. Based on the arc of this episode, it would seem the author of Laxdælasaga regards the obstruction of a woman’s movement and access to information as inappropriate and potentially perilous. Manipulation of facts and deliberate misinformation lead to two deaths and an unhappy ending for everyone involved. 

Other brief but telling episodes in medieval Icelandic literature hint at a tacit approval of the movement of women. We see Viking Age heroines throughout the western diaspora (Iceland and the British Isles) commissioning their own ships, setting out on long journeys, and striving to form their own networks of information exchange through kin and marital ties. It may be that these women are simply literary figures playing out imagined fantasies that would never have been possible for real women of the time; or, perhaps these examples reveal some awareness of the importance of the agency of women. 

In this modern era of fake news and alternative facts, we might do well to remember some of the simpler lessons of Icelandic history. Honesty, as a medieval Icelander would probably tell you, is the best policy. Obscuring the truth leads only to blood feud and bitter regret.

Karia, Then and Now

The Cows of Alabanda

For historians, it is easy to view the past as a hermetically sealed world, like a petri dish that we can subject to tests and analyses without fear of contamination. But this failure to admit that we are implicated in the very thing we study allows problematic ideas and practices to fester, unnoticed until some jolt forces us to confront them. It also ignores the fact that many historic sites have a modern presence — with modern people living modern lives — too. It wasn’t until I had the chance to travel to the places I had been studying that I received such a jolt, one that led me to question my role within my field, and my field’s role in the world.

Well, how did I get here?

On a hot afternoon in June of 2017, I found myself wandering over the remains of Alabanda, and around the small cluster of houses of the modern village of Doğanyurt that perch atop them. Alabanda was an ancient city in the southwest corner of what is now Turkey, and through the centuries it was inhabited by native peoples, Greeks, and Romans before being abandoned. 

Returning from my research in the field, I found some buildings, a tomb, a theatre, the course of a wall running up over the hillside; over it all, the blare of the call to prayer. For someone who studies the ancient Aegean world, it was an idyllic end to the day.  

Image of the ruins of a Roman Theater in Alabanda. Photo taken from above with rocks and dried shrubs in the foreground. Semicircular stone structure with trees in the background in the middle of the image.
Alabanda. View looking down at modern village over the ruins of a Roman Theatre (Photo by author)

When I headed back to my rental car, I found a local farmer watering a small herd of cows nearby. Summoning up all the Turkish I had learned over the past year, I greeted him with a simple “Merhaba!” (Hello).

He seemed nonplussed that I should know even that much Turkish, but we managed to strike up a very simple conversation. I asked him what he thought of the ruins, and of the fact that he lived on top of the ruins of a 2,000-year-old city. 

“Not much,” was his philosophical reply. He explained that he and his father had been employed to help excavate the city whenever the archaeologists came by, but beyond that, he did not profess any particular attachment to the heaps of stone and brick.

“And you,” he rejoined, “what brings you here?”

I struggled to formulate an answer. To be sure, in Turkish I had only the vocabulary of an eight-year-old, but as I stood there face to face with this man and his cows, it wasn’t my vocabulary that made it hard to respond. What indeed was I, an American student from suburban Philadelphia, doing wandering around this out-of-the-way village in southwest Turkey? 

Reflecting on this experience has opened up a whole host of other questions about my position in my field and in the world, as well as the responsibilities that someone who studies people long dead has to the living. 

Getting into Classics

Here at Brown, I am in the Ancient History Program, which is co-sponsored by the Classics and History departments. I identify more with the Classics department because that is the world I have lived in for, well, half my life, I suppose. I had the fortune of being able to take Latin classes starting in 7th grade, and even Ancient Greek in 9th. I stayed in Classics because I had good teachers and liked learning the languages. It wasn’t until the end of college that I really became interested in studying history, rather than literature. 

More and more, I became interested in studying the native inhabitants of what is now modern Turkey. These peoples have long been known to Classicists, but only indirectly: there is no surviving literary tradition in their own languages, so much of what we think we know about them comes from Greek and Roman sources. Unfortunately, the one-sided and often prejudiced views of the Greeks and Romans seeped into later views of the natives (as Edward Said documents, Orientalism has a long pedigree). 

Map of Ancient Anatolia, depicted with land in shades of ivory with ocean in brown.
Map of Ancient Anatolia © Finley, M. I. (1977). Atlas of classical archaeology. New York: McGraw-Hill.

The upshot is that we don’t actually know much about these peoples. With my research in the Ancient History Program here at Brown, however, I am trying to rectify that situation by looking at other kinds of evidence, such as material culture and the small number of inscriptions on durable materials that have survived. But in order to do this, I have had to step beyond the bounds of what most consider the traditional turf of Classics.  

Classics is usually defined as the study of the ancient Greeks and Romans: their history, culture, literature, etc. For centuries, it has been a cornerstone of elite, liberal education in the West. As such, it has remained a generally conservative field, slow to adopt innovations in theory and practice. Moreover, it has a lot of colonial, racist, and sexist baggage: the Spanish conquistadors saw themselves as new Romans, bringing civilization to the New World; the Nazis idealized the ancient Spartans as models for the Übermensch; and the alt-right uses Stoic philosophy to “prove” that women are irrational and emotionally unstable.

Scary stuff, and not something that makes one proud of one’s field. But as someone who loves my field nevertheless, and who wants to help it change for the better, I see setting a new research agenda as one small way to tackle this baggage. At least, this is what I thought as I headed to the coast of Turkey in 2017.

Colonization of the past 

On the one hand, I felt I had to attempt to slough off my field’s colonialist baggage by focusing on other ancient Mediterranean cultures besides the Greeks or Romans. Post-colonial theory has made its way into Classics, and with it the realization that — surprise, surprise! — the Greeks and Romans may not be the best sources of information about all the peoples they traded with, fought, and conquered. So I hope that in my research, I am helping to de-center the Greeks and Romans.

But on the other hand, while trying to escape the colonialist perspective of our sources, am I just perpetuating the colonialist practices of western academia? One of my favorite quotes is L. P. Hartley’s “The past is a foreign country; they do things differently there.” But just like any country, the past can be, and has been, colonized. In this case, I am talking about the process by which American and European scholars claimed Greco-Roman history as their own, thus denying it to the modern inhabitants of places like Greece and Turkey. So, when I went to Turkey just to look at its ancient monuments, and asked people whether they cared much about them, was I not just perpetuating this trend? 

Fringes of Classics 

Even apart from these questions, my choice to study the ancient inhabitants of Turkey has consequences for my possible career in Classics. Although the field is trying to evolve, I still feel very much like this research lies on the fringe. Even to my own colleagues I often have to explain a lot (like, why DID I take a class in Hittite, a language even older and deader than Latin?). And yeah, it makes me nervous about the job market; what school needs someone to teach their students about the Lydians, Karians, or Lykians — names no one has heard of? It may be hip to say you’re studying “ancient subalterns,” but can you make a career out of that in a field that in many ways is still focused on a canonical set of texts?

Is this a pigeon? meme, with a man in glasses gesturing toward a stone structure with Greek letters and the caption reading "Is this classics?"
Well, is it? (Meme modified by author)

Now I admit, researching on the fringe has pushed me to make connections with other departments; I’ve taken classes in Archaeology and Assyriology, and the connections I’ve made with people in those departments have meant the world to me and my research. In fact, one of the biggest reasons I came to Brown was the promise of low disciplinary walls — and in this I have not been disappointed. I do believe there is much to be gained in questioning disciplinary boundaries. 

So, to end this the way I end all my papers: I don’t have any clear-cut answers. 

As with any career, being an academic requires a balancing act between the practical and aspirational. What I can say is that it is easy to get caught up in the day-to-day work of being a grad student, focusing only on what you are doing and not thinking about why you are doing it and who it might affect. 

Karia, then and now. Left, a carved stone from the temple to Zeus at Alabanda; right, a shop in the resort town of Bodrum. The axe was a symbol associated with Zeus in ancient Karia. (Photos by author)

At Alabanda, and everywhere else I visited in Turkey, I could not escape this question of who, for everywhere around me people were making their living on the ruins of the past, through the tourism connected to it, and under the biases baked into it. The question of who owns that past and who gets to shape it is far from academic. 

Welcome to The Ratty

Public scholarship is a critical component of research work here in 2020. Reaching out to wider public audiences allows scholars to generate interest in their subject matter, cultivate relationships with other scholars, institutions, and funding sources, and combat dangerous ideas that pervade often insular fields. Yet despite the value of public outreach (and the high quality of our education here at Brown), we are not provided with any training in how to engage in such scholarship.

That’s where The Ratty comes in.

The Ratty is a blog for graduate students at Brown University, designed both as a platform for showcasing public scholarship and as a means by which students can get the training they need to become public scholars. 

Graduate students will write and publish an article that presents their research in a way the public can grapple with, without speaking down to readers or obscuring complexity. They will work with our team of trained editors to create an article geared toward a public audience, building from standard academic research models. Through this process, grad students will learn more about the differences between academic and public writing, will gain experience in pitching and editing, and in the end will be able to point to a digital publication of their writing and a digital author page of their contributions.

With The Ratty, we’re trying to fill a couple of gaps we’ve noticed from our own experience: the gap in public scholarship training, but also the gap in simply experiencing the editing process. Much of the time we submit papers to our professors and we get their feedback, but that’s it — you can choose to never open that feedback document and never submit a further edited version of your work. But it’s in that second back-and-forth that you really start to make big changes and real progress in your writing. 

This interaction, however, can be pretty emotionally trying, especially if it’s something you’ve never experienced before. Sure, we’ve all probably been gutted by critique on papers from teachers, but we haven’t necessarily had to push back against their critique and we haven’t had to respond to any of the more emotional changes that have been asked of us. That part of the editing process can be vicious, and this is true in grad school as well as in academic publishing. It’s another one of our goals with The Ratty to help students get used to the editing process in a way that is a little kinder. The world is often unkind, and we don’t have to be that way. 

But we’re not limited to articles! The Ratty also wants to work with students who are interested in showcasing their research in other media — videos, comics, mixed media, etc. While our editors are specifically trained to work with public writing, we also understand that writing isn’t always the best way for students to showcase research, and it’s not always the way that the public is most interested in engaging with you. If you have ideas for other formats and styles of presentation, The Ratty is interested in hearing about it.

We’re also interested in using The Ratty to help train graduate students in Public Scholarship in other ways. Every day we interact with interesting people on Twitter, Instagram, etc. who are really committed to public scholarship, and they all do it in very different ways. So, we are planning to host a yearly speaker series, “The Ratty Presents,” where we bring in people who engage in public scholarship from lots of different fields. This will hopefully allow us to continue to evolve our understanding of public scholarship and push the ways in which The Ratty can help graduate students at Brown engage with the public.

There is a real-life inspiration for our logo: Daryl (2018-2019), who has carved a place in our hearts and in our branding.

So that’s what The Ratty is and what we hope to do with it, but why is it called The Ratty? Our inspiration is, perhaps obviously, our community’s nickname for the Sharpe Refectory. “The Ratty” is what students call this dining hall — it’s not University-sanctioned, but the nickname is known and used well beyond the student body. The Ratty, too, is a student-led initiative. A lot of people are frustrated that the job market has changed and the rules for PhDs have changed, but our training hasn’t been updated to reflect any of it. The Ratty is about taking back at least one aspect of our education. We’re going to take the reins and train ourselves in how to do those things that we know are important.

Right now, the entire team of The Ratty — from managing editors down — is made up of women, people of color, or both, which speaks volumes about who feels the need to push the boundaries of academia. These are the people who feel they’re kept out of the traditional model that we’re still trying to use to train our PhD students. This further solidifies our commitment to bolstering our training through The Ratty, especially in order to help those people whom academia often leaves behind.

So join us! You can pitch us to work with our editors in bringing your research to a public audience, and we’re always interested in training more editors. The Ratty is reclaiming student space in scholarship, making it public and loud. Sometimes it takes a small rat to make a big change.